code
stringlengths
114
1.05M
path
stringlengths
3
312
quality_prob
float64
0.5
0.99
learning_prob
float64
0.2
1
filename
stringlengths
3
168
kind
stringclasses
1 value
.. _enzymes: ================== Enzyme definitions ================== All default available enzymes (`enzymes_definition.py`) are listed bellow. For each of them, there is the equivalent in `RPG` grammar. In the following, nomenclature of `Schechter and Berger <https://www.ncbi.nlm.nih.gov/pubmed/6035483>`_ is used. Amino acids before the cleavage site are designated as `P1`, `P2`, `P3`, etc in the N-terminal direction, and as `P1'`, `P2'`, `P3'`, etc in the C-terminal direction. For example, with cleavage site represented as '|': .. code-block:: none ...P3-P2-P1-|-P1'-P2'-P3'... In **RPG**, this nomenclature is represented as: .. code-block:: none ...(P3)(P2)(P1)(,)(P1')(P2')(P3')... ----------------- Available enzymes ----------------- ================== ================== ================== 1: :ref:`arg-c` 2: :ref:`asp-n` 3: :ref:`bnps` 4: :ref:`brom` 5: :ref:`casp1` 6: :ref:`casp2` 7: :ref:`casp3` 8: :ref:`casp4` 9: :ref:`casp5` 10: :ref:`casp6` 11: :ref:`casp7` 12: :ref:`casp8` 13: :ref:`casp9` 14: :ref:`casp10` 15: :ref:`chymh` 16: :ref:`chyml` 17: :ref:`clost` 18: :ref:`cnbr` 19: :ref:`enter` 20: :ref:`fxa` 21: :ref:`ficin` 22: :ref:`form` 23: :ref:`gluc` 24: :ref:`glue` 25: :ref:`gran` 26: :ref:`hydro` 27: :ref:`iodo` 28: :ref:`lysc` 29: :ref:`lysn` 30: :ref:`neut` 31: :ref:`ntcb` 32: :ref:`pap` 33: :ref:`peps13` 34: :ref:`peps2` 35: :ref:`prol` 36: :ref:`protk` 37: :ref:`staphI` 38: :ref:`therm` 39: :ref:`throm` 40: :ref:`thromsg` 41: :ref:`tev` 42: :ref:`tryps` 43: :ref:`asp-n2` 44: :ref:`proa` ================== ================== ================== .. _arg-c: Arg-C ..... Arg-C proteinase preferentially cleaves after R (`P1`) **RPG definition:** cleaving rule: * ``(R,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#ArgC .. _asp-n: Asp-N ..... Asp-N Sequencing Grade preferentially cleaves before C or D (`P1'` ) **RPG definition:** cleaving rule: * ``(,C or D)`` More information: https://france.promega.com/resources/pubhub/using-endoproteinases-asp-n-and-glu-c-to-improve-protein-characterization/ .. _bnps: BNPS-Skatole ............ BNPS-Skatole preferentially cleaves after W (`P1`) **RPG definition:** cleaving rule: * ``(W,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#BNPS .. _brom: Bromelain ......... Bromelain preferentially cleaves after K, A or Y (`P1`) **RPG definition:** cleaving rule: * ``(K or A or Y,)`` More information: https://www.sigmaaldrich.com/life-science/biochemicals/biochemical-products.html?TablePage=16410479 .. _casp1: Caspase 1 ......... Caspase 1 preferentially cleaves after D (`P1`) preceded by H, A or T in `P2` and preceded by F, W, Y or L in `P4`. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rule: * ``(F or W or Y or L)()(H or A or T)(D,)`` exception rule: * ``(F or W or Y or L)()(H or A or T)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp1 .. _casp2: Caspase 2 ......... Caspase 2 preferentially cleaves after D (`P1`) preceded by DVA or DEH. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rules: * ``(D)(V)(A)(D,)`` * ``(D)(E)(H)(D,)`` exception rules: * ``(D)(V)(A)(D,)(P or E or D or Q or K or R)`` * ``(D)(E)(H)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp2 .. _casp3: Caspase 3 ......... Caspase 3 preferentially cleaves after D (`P1`) preceded by DMQ or DEV. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rules: * ``(D)(M)(Q)(D,)`` * ``(D)(E)(V)(D,)`` exception rules: * ``(D)(M)(Q)(D,)(P or E or D or Q or K or R)`` * ``(D)(E)(V)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp3 .. _casp4: Caspase 4 ......... Caspase 4 preferentially cleaves after D (`P1`) preceded by LEV or (W/L)EH. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rules: * ``(L)(E)(V)(D,)`` * ``(W or L)(E)(H)(D,)`` exception rules: * ``(L)(E)(V)(D,)(P or E or D or Q or K or R)`` * ``(W or L)(E)(H)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp4 .. _casp5: Caspase 5 ......... Caspase 5 preferentially cleaves after D (`P1`) preceded by (W/L)EH. **RPG definition:** cleaving rule: * ``(W or L)(E)(H)(D,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp5 .. _casp6: Caspase 6 ......... Caspase 6 preferentially cleaves after D (`P1`) preceded by VEI or VEH. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rule: * ``(V)(E)(I or H)(D,)`` exception rule: * ``(V)(E)(I or H)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp6 .. _casp7: Caspase 7 ......... Caspase 7 preferentially cleaves after D (`P1`) preceded by DEV. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rule: * ``(D)(E)(V)(D,)`` exception rule: * ``(D)(E)(V)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp7 .. _casp8: Caspase 8 ......... Caspase 8 preferentially cleaves after D (`P1`) preceded by (I/L)ET. It will not cleave if D is followed by P, E, D, Q ,K or R in `P1'`. **RPG definition:** cleaving rule: * ``(I or L)(E)(T)(D,)`` exception rule: * ``(I or L)(E)(T)(D,)(P or E or D or Q or K or R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp8 .. _casp9: Caspase 9 ......... Caspase 9 preferentially cleaves after D (`P1`) preceded by LEH. **RPG definition:** cleaving rule: * ``(L)(E)(H)(D,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp9 .. _casp10: Caspase 10 .......... Caspase 10 preferentially cleaves after D (`P1`) preceded by IEA. **RPG definition:** cleaving rule: * ``(I)(E)(A)(D,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Casp10 .. _chymh: Chymotrypsin high specificity ............................. This chymotrypsin preferentially cleaves after F, Y or W (`P1`) if not followed by P in `P1'`. It will not cleave after W followed by M in `P1'`. **RPG definition:** cleaving rule: * ``(F or Y or W,)`` exception rules: * ``(F or Y or W,)(P)`` * ``(W,)(M)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Chym .. _chyml: Chymotrypsin low specificity ............................. This chymotrypsin preferentially cleaves after F, L, Y, W, M or H (`P1`) if not followed by P in `P1'`. It will not cleave after W followed by M in `P1'`. It will not cleave after M followed by Y in `P1'`. It will not cleave after H followed by D/M/W in `P1'`. **RPG definition:** cleaving rule: * ``(F or L or Y or W or M or H,)`` exception rules: * ``(F or L or Y or W or M or H,)(P)`` * ``(W,)(M)`` * ``(M,)(Y)`` * ``(H,)(D or M or W)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Chym .. _clost: Clostripain ........... Clostripain (Clostridiopeptidase B) preferentially cleaves after R (`P1`). **RPG definition:** cleaving rule: * ``(R,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Clost .. _cnbr: CNBr .... CNBr preferentially cleaves after M (`P1`). **RPG definition:** cleaving rule: * ``(M,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#CNBr .. _enter: Enterokinase ............ Enterokinase preferentially cleaves after K (`P1`) preceded by D/E in `P2`, `P3`, `P4` and `P5`. **RPG definition:** cleaving rule: * ``(D or E)(D or E)(D or E)(D or E)(K,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Enter .. _fxa: Factor Xa ......... Factor Xa preferentially cleaves after R (`P1`) preceded by G in `P2`, D/E in `P3` and A/F/I/L/V/W/G/T in `P4`. **RPG definition:** cleaving rule: * ``(A or F or I or L or V or W or G or T)(D or E)(G)(R,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Xa .. _ficin: Ficin ..... Ficin preferentially cleaves after G, S, E or Y (`P1`) preceded by A, V, I, L ,F, Y or W in `P2`. **RPG definition:** cleaving rule: * ``(A or V or I or L or F or Y or W)(G or S or E or Y,)`` More information: https://www.sigmaaldrich.com/life-science/biochemicals/biochemical-products.html?TablePage=16410578 .. _form: Formic acid ........... Formic acid preferentially cleaves after D (`P1`). **RPG definition:** cleaving rule: * ``(D,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#HCOOH .. _gluc: Glu-C ..... Glu-C Sequencing Grade preferentially cleaves after D or E (`P1`). **RPG definition:** cleaving rule: * ``(D or E,)`` More information: https://france.promega.com/resources/pubhub/using-endoproteinases-asp-n-and-glu-c-to-improve-protein-characterization/ .. _glue: Glutamyl endopeptidase ...................... Glutamyl endopeptidase preferentially cleaves after E (`P1`). **RPG definition:** cleaving rule: * ``(E,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Glu .. _gran: Granzyme B .......... Granzyme B preferentially cleaves after D (`P1`) preceded by IEP. **RPG definition:** cleaving rule: * ``(I)(E)(P)(D,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#GranB .. _hydro: Hydroxylamine ............. Hydroxylamine (NH2OH) preferentially cleaves after N (`P1`) followed by G in `P1'`. **RPG definition:** cleaving rule: * ``(N,)(G)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Hydro .. _iodo: Iodosobenzoic acid .................. Iodosobenzoic acid preferentially cleaves after W (`P1`). **RPG definition:** cleaving rule: * ``(W,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Iodo .. _lysc: Lys-C ..... LysC Lysyl endopeptidase (Achromobacter proteinase I) preferentially cleaves after K (`P1`). **RPG definition:** cleaving rule: * ``(K,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#LysC .. _lysn: Lys-N ..... LysN Peptidyl-Lys metalloendopeptidase preferentially cleaves before K (`P1'` ). **RPG definition:** cleaving rule: * ``(,K)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#LysN .. _neut: Neutrophil elastase ................... Neutrophil elastase preferentially cleaves after A or V (`P1`). **RPG definition:** cleaving rule: * ``(A or V,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Elast .. _ntcb: NTCB .... NTCB +Ni (2-nitro-5-thiocyanobenzoic acid) preferentially cleaves before C (`P1'` ). **RPG definition:** cleaving rule: * ``(,C)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#NTCB .. _pap: Papain ...... Papain preferentially cleaves after R or K (`P1`) preceded by A, V, I, L ,F, Y or W in `P2`. It will not cleave if followed by V in `P1'`. **RPG definition:** cleaving rule: * ``(A or V or I or L or F or Y or W)(R or K,)`` exception rule: * ``(A or V or I or L or F or Y or W)(R or K,)(V)`` More information: https://www.sigmaaldrich.com/life-science/biochemicals/biochemical-products.html?TablePage=16410606 .. _peps13: Pepsin pH 1.3 ............. This pepsin preferentially cleaves around F or L (`P1` or `P1'` ). It will not cleave before F or L in `P1'` followed by P in `P2'`. It will not cleave before F or L in `P1'` preceded by R in `P1` or P in `P2` or H/K/R in `P3`. It will not cleave after F or L in `P1` followed by P in `P2'`. It will not cleave after F or L in `P1` preceded by P in `P2` or H/K/R in `P3`. **RPG definition:** cleaving rule: * ``(,F or L,)`` exception rules: * ``(,F or L)(P)`` * ``(R)(,F or L)`` * ``(P)()(,F or L)`` * ``(H or K or R)()()(,F or L)`` * ``(F or L,)()(P)`` * ``(P)(F or L,)`` * ``(H or K or R)()(F or L,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Peps .. _peps2: Pepsin pH >=2 ............. This pepsin preferentially cleaves around F, L, W or Y (`P1` or `P1'` ). It will not cleave before F, L, W or Y in `P1'` followed by P in `P2'`. It will not cleave before F, L, W or Y in `P1'` preceded by R in `P1` or P in `P2` or H/K/R in `P3`. It will not cleave after F, L, W or Y IN `P1` followed by P in `P2'`. It will not cleave after F, L, W or Y in `P1` preceded by P in `P2` or H/K/R in `P3`. **RPG definition:** cleaving rule: * ``(,F or L or W or Y,)`` exception rules: * ``(,F or L or W or Y)(P)`` * ``(R)(,F or L or W or Y)`` * ``(P)()(,F or L or W or Y)`` * ``(H or K or R)()()(,F or L or W or Y)`` * ``(F or L or W or Y,)()(P)`` * ``(P)(F or L or W or Y,)`` * ``(H or K or R)()(F or L or W or Y,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Peps .. _prol: Proline-endopeptidase ..................... Proline-endopeptidase preferentially cleaves after P (`P1`) preceded by H, K or R in `P2` but will not cleaves if followed by P in `P1'`. **RPG definition:** cleaving rule: * ``(H or K or R)(P,)`` exception rule: * ``(H or K or R)(P,)(P)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Pro .. _protk: Proteinase K ..................... Proteinase K preferentially cleaves after F, W, Y, T, E, A, V, L or I (`P1`). The predominant site of cleavage is the peptide bond adjacent to the carboxyl group of aliphatic and aromatic amino acids. **RPG definition:** cleaving rule: * ``(F or W or Y or T or E or A or V or L or I,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#ProtK .. _staphI: Staphylococcal peptidase I .......................... Staphylococcal peptidase I preferentially cleaves after E (`P1`). It will not cleave after E in `P1` preceded by E in `P2`, but cleaves after E in `P1` followed by E in `P1'`. **RPG definition:** cleaving rule: * ``(E,)`` exception rule: * ``(E)(E,)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Staph .. _therm: Thermolysin ........... Thermolysin preferentially cleaves before A,F,I,L,M or V (`P1'` ) when not followed by P in `P2'` nor preceded by D or E in `P1`. **RPG definition:** cleaving rule: * ``(,A or F or I or L or M or V)`` exception rules: * ``(,A or F or I or L or M or V)(P)`` * ``(D or E)(,A or F or I or L or M or V)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Therm .. _throm: Thrombin (PeptideCutter) ........................ This thrombin preferentially cleaves after R (`P1`). Optimum cleavage is when R is preceded and followed by G (`P2` and `P1'` ). Cleavage also occurs when R is preceded by P in `P2` and A, F, I, L, V, W, G or T in `P3` and `P4`. It will not cleave after R followed by D/E in `P1'` or `P2'`. It is not strictly coherent with the definition in PeptideCutter, as in this software there are differences between definition, summary and behavior of this enzyme. **RPG definition:** cleaving rules: * ``(G)(R,)(G)`` * ``(A or F or I or L or V or W or G or T)(A or F or I or L or V or W or G or T)(P)(R,)`` exception rules: * ``(A or F or I or L or V or W or G or T)(A or F or I or L or V or W or G or T)(P)(R,)(D or E)`` * ``(A or F or I or L or V or W or G or T)(A or F or I or L or V or W or G or T)(P)(R,)()(D or E)`` .. warning:: the following combined exception ``(A or F or I or L or V or W or G or T)(A or F or I or L or V or W or G or T)(P)(R,)(D or E)(D or E)`` cannot be used instead, as it will cleave on [...](R,)(D or E). More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Throm https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3288055/ .. _thromsg: Thrombin SG ......................... This thrombin (Sequencing Grade) preferentially cleaves after R (`P1`) preceded by P in `P2`, V in `P3` and L in `P4` and followed by G in `P1'` and S in `P2'`. This thrombin is defined in several kits (see below). **RPG definition:** cleaving rule: * ``(L)(V)(P)(R,)(G)(S)`` More information: see thrombin cleavage kits of `Abcam <http://www.abcam.com/thrombin-cleavage-kit-ab207000.html>`_, `BioVision <https://www.biovision.com/documentation/datasheets/K377.pdf>`_, `Merck <http://www.merckmillipore.com/FR/fr/life-science-research/protein-sample-preparation/protein-purification/cleavage-enzymes/0Uqb.qB.V5gAAAFBOFJlvyyv,nav#thrombin>`_ or `Novagen <http://wolfson.huji.ac.il/purification/PDF/Protease_fusion_cleavage/NOVAGEN_Thrombin_kit.pdf>`_. .. _tev: Tobacco etch virus protease ........................... Tobacco etch virus protease (TEV) preferentially cleaves after Q (`P1`) when followed by G or S in `P1'` and preceded by Y in `P3` and E in `P6`. **RPG definition:** cleaving rule: * ``(E)()()(Y)()(Q,)(G or S)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#TEV .. _tryps: Trypsin ....... Trypsin preferentially cleaves after K or R (`P1`). It will not cleave after K followed by P in `P1'` except if W in `P2`. It will not cleave after R followed by P in `P1'` except if M in `P2`. It will not cleave CKD, DKD, CKH, CKY, CRK, RRH nor RRR. **RPG definition:** cleaving rules: * ``(K or R,)`` * ``(W)(K,)(P)`` * ``(M)(R,)(P)`` exception rules: * ``(K or R,)(P)`` * ``(C)(K,)(D)`` * ``(D)(K,)(D)`` * ``(C)(K,)(H)`` * ``(C)(K,)(Y)`` * ``(C)(R,)(K)`` * ``(R)(R,)(H)`` * ``(R)(R,)(R)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#Tryps .. _asp-n2: Asp-N Endopeptidase ..... Asp-N Sequencing Grade preferentially cleaves before D (`P1'` ) **RPG definition:** cleaving rule: * ``(,D)`` More information: https://web.expasy.org/peptide_cutter/peptidecutter_enzymes.html#AspN .. _proa: ProAlanase .......... ProAlanase Sequencing Grade preferentially cleaves after A or P (`P1`). **RPG definition:** cleaving rule: * ``(A or P,)`` More information: https://france.promega.com/products/mass-spectrometry/proteases-and-surfactants/proalanase-mass-spec-grade?catNum=VA2161
/rpg-2.0.1.tar.gz/rpg-2.0.1/docs/enzymes.rst
0.794425
0.655377
enzymes.rst
pypi
# rpgPy ![](https://github.com/actris-cloudnet/rpgpy/workflows/RpgPy%20CI/badge.svg) [![PyPI version](https://badge.fury.io/py/rpgPy.svg)](https://badge.fury.io/py/rpgPy) RpgPy is a Python / Cython software for - Reading [RPG cloud radar](https://www.radiometer-physics.de/products/microwave-remote-sensing-instruments/94-ghz-fmcw-doppler-cloud-radar/) Level 0 and Level 1 binary files - Calculating spectral moments from RPG Level 0 data - Converting RPG binary data to [netCDF4](https://www.unidata.ucar.edu/software/netcdf/) format # Installation ## From PyPI python3 -m pip install rpgpy NOTE: A C-compiler is required because the Cython code is compiled locally during installation. If you get an error about missing `Python.h`, you need to install the missing header files with `$ apt install python3-dev` (or similar). ## From source git clone https://github.com/actris-cloudnet/rpgpy/ cd rpgpy/ python3 -m venv venv source venv/bin/activate python3 -m pip install --upgrade pip python3 -m pip install . python3 setup.py build_ext --inplace # Quickstart ### Converting RPG binary files into netCDF4 ```python >>> from rpgpy import rpg2nc >>> rpg2nc('rpg-data.LV1', 'rpg-file.nc') ``` This writes a compressed netCDF4 file and works with both Level 0 and Level 1 data. Several RPG files can be concatenated into singe netCDF file using wildcard. With Level 0 data, this can lead to a very large file. ```python >>> rpg2nc('/path/to/files/*.LV0', 'huge-file.nc') ``` [API reference of `rpg2nc`](#rpg2nc) ### Converting multiple files individually Multiple RPG files can be converted into corresponding individual netCDF4 files using `rpg2nc_multi`. ```python >>> from rpgpy import rpg2nc_multi >>> filenames = rpg2nc_multi(file_directory='/path/to/files') ``` Default functionality is that every file with an extension `.LV0`, `.lv0`, `.LV1` or `.lv1` in every subdirectory of the specified path will be converted. [API reference of `rpg2nc_multi`](#rpg2nc_multi) ### Creating custom Level 1 netCDF4 file `rpgpy` can estimate spectral moments from Level 0 data. The estimation is based on the most prominent peak of each time / range point. ```python >>> from rpgpy import spectra2nc >>> spectra2nc('rpg-data.LV0', 'level1.nc') ``` This calculates spectral moments from Level 0 data and writes the results in a netCDF4 file. [API reference of `spectra2nc`](#spectra2nc) ### Reading RPG binary file If you don't need the netCDF4 file: ```python >>> from rpgpy import read_rpg >>> header, data = read_rpg('rpg-data.LV1') ``` [API reference of `read_rpg`](#read_rpg) ### Calculating spectral moments ```python >>> from rpgpy import read_rpg, spectra2moments >>> header, data = read_rpg('rpg-data.LV0') >>> moments = spectra2moments(data, header) ``` This works only with Level 0 data. [API reference of `spectra2moments`](#spectra2moments) ## API reference ### Index - [rpg2nc](#rpg2nc) - [rpg2nc_multi](#rpg2nc_multi) - [spectra2nc](#spectra2nc) - [read_rpg](#read_rpg) - [spectra2moments](#spectra2moments) ## ### `rpg2nc` Convert RPG cloud radar file(s) into single netCDF file. ```python rpg2nc(path_to_files, output_file, **kwargs) ``` Positional arguments: | Name | Type | Description | | :-------------- | :-------------------------- | :---------------------------------------------------------------------------------------------- | | `path_to_files` | `str` &#124; `pathlib.Path` | Filename of single file, or multiple files identified using a wildcard, e.g., `/foo/bar/*.LV0`. | | `output_file` | `str` &#124; `pathlib.Path` | Output file name. | Keyword arguments: | Name | Type | Default value | Description | | :------------ | :----- | :------------ | :---------------------------- | | `global_attr` | `dict` | `None` | Additional global attributes. | ## ### `rpg2nc_multi` Convert RPG cloud radar files into several corresponding netCDF files. ```python filenames = rpg2nc_multi(**kwargs) ``` Default functionality: - Input files are searched recursively starting from the current working directory - Files with the suffix `.LV0`, `.lv0`, `.LV1` or `.lv1` suffix are converted - netCDF4 files are written to the current working directory Keyword arguments: | Name | Type | Default value | Description | | :----------------- | :-------------------------- | :------------------------ | :--------------------------------------------------- | | `file_directory` | `str` &#124; `pathlib.Path` | current working directory | Root path of the search. | | `output_directory` | `str` &#124; `pathlib.Path` | current working directory | Path name where the netCDF4 files are written. | | `include_lv0` | `bool` | `True` | If `False`, excludes Level 0 files. | | `recursive` | `bool` | `True` | If `False`, does not search input files recursively. | | `base_name` | `str` | `None` | Optional filename prefix for the converted files. | | `global_attr` | `dict` | `None` | Additional global attributes. | Returns: | Type | Description | | :----- | :--------------------------------------------------- | | `list` | Full paths of the successfully created netCDF files. | ## ### spectra2nc Calculate moments from RPG Level 0 spectra and write a netCDF4 file. ```python spectra2nc(input_file, output_file, **kwargs) ``` Positional arguments: | Name | Type | Description | | :------------ | :-------------------------- | :---------------------------- | | `input_file` | `str` &#124; `pathlib.Path` | Filename of RGP Level 0 file. | | `output_file` | `str` &#124; `pathlib.Path` | Output file name. | Keyword arguments: | Name | Type | Default value | Description | | :------------- | :----- | :------------ | :-------------------------------------------------- | | `global_attr` | `dict` | `None` | Additional global attributes. | | `n_points_min` | `int` | 4 | Minimum number of points in a proper spectral line. | ## ### `read_rpg` Read RPG cloud radar binary file. ```python header, data = read_rpg(filename, **kwargs) ``` Positional arguments: | Name | Type | Description | | :--------- | :-------------------------- | :---------------------------------------------------------- | | `filename` | `str` &#124; `pathlib.Path` | Filename of RPG cloud radar Level 1 or Level 0 binary file. | Keyword arguments: | Name | Type | Default value | Description | | :---------- | :----- | :------------ | :------------------------------------------------------------------------------------------------ | | `rpg_names` | `bool` | `True` | If `True`, uses RPG manual names in the returned dictionary, else uses more human-readable names. | Returns: | Type | Description | | :------ | :--------------------------------------------------------- | | `tuple` | 2-element tuple containing `header` and `data` dictionary. | ## ### `spectra2moments` Calculate spectral moments from Level 0 spectral data. A call to [`read_rpg`](#read_rpg) is required before using this function. ```python moments = spectra2moments(data, header, **kwargs) ``` Positional arguments: | Name | Type | Description | | :------- | :----- | :------------------------------------------------------ | | `data` | `dict` | Level 0 data dictionary from [`read_rpg`](#read_rpg). | | `header` | `dict` | Level 0 header dictionary from [`read_rpg`](#read_rpg). | Keyword arguments: | Name | Type | Default value | Description | | :------------- | :------ | :------------ | :---------------------------------------------------------- | | `spec_var` | `str` | `"TotSpec"` | Spectral variable to be analyzed: `"TotSpec"` or `"HSpec"`. | | `fill_value` | `float` | -999.0 | Value for the clear sky data points. | | `n_points_min` | `int` | 4 | Minimum number of points in a proper spectral line. | Returns: | Type | Description | | :----- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `dict` | Dictionary containing `Ze` (reflectivity), `MeanVel` (mean velocity), `SpecWidth` (spectral width), `Skewn` (skewness) and `Kurt` (kurtosis), which are 2D numpy arrays (time x range). | ## Development Install test-dependencies and [pre-commit](https://pre-commit.com/) hooks: python3 -m pip install -e .[test,dev] pre-commit install Compile Cython (repeat if you change `.pyx` files): python3 setup.py build_ext --inplace ### Tests Run unit tests: pytest Run end-to-end tests: for f in tests/e2e/*/*runner.py; do $f; done Force `pre-commit` checks of all files: pre-commit run --all ## Performance For reading RPG binary files, depending on the radar settings, RpgPy is roughly 20-30 times faster than equivalent native Python or Matlab implementations. ## License MIT
/rpgPy-0.14.2.tar.gz/rpgPy-0.14.2/README.md
0.409339
0.925466
README.md
pypi
from typing import Optional, Tuple import numpy as np from numba import jit def spectra2moments( data: dict, header: dict, spec_var: Optional[str] = "TotSpec", fill_value: Optional[float] = -999.0, n_points_min: Optional[int] = 4, ) -> dict: """Calculates radar moments from the main peak. This routine calculates the radar moments: reflectivity, mean Doppler velocity, spectrum width, skewness and kurtosis from compressed level 0 spectrum files (NoiseFactor > 0) of the 94 GHz RPG cloud radar. Considering only the largest peak. Args: data: Level 0 nD variables. header: Level 0 metadata. spec_var: Name of the spectral variable. Possible names are 'TotSpec', 'VSpec', and 'HSpec'. fill_value: Clear sky fill value. n_points_min: Minimum number of points in a valid spectral line. Returns: A dict with keys: 'Ze', 'MeanVel', 'SpecWidth', 'Skewn', 'Kurt'. Examples: >>> from rpgpy import read_rpg, spectra2moments >>> header, data = read_rpg('rpg-fmcw-94-file.LV0') >>> moments = spectra2moments(data, header) """ spectra = data[spec_var] n_time, n_range, _ = spectra.shape moments = np.full((n_time, n_range, 5), np.nan) no_signal = np.all(spectra == 0, axis=2) ranges = np.append(header["RngOffs"], header["RAltN"]) for ind_chirp in range(header["SequN"]): dopp_res = np.mean(np.diff(header["velocity_vectors"][ind_chirp])) for ind_range in range(ranges[ind_chirp], ranges[ind_chirp + 1]): for ind_time in range(n_time): if no_signal[ind_time, ind_range]: continue edge_left, edge_right = find_peak_edges(spectra[ind_time, ind_range, :]) if (edge_right - edge_left) < n_points_min: no_signal[ind_time, ind_range] = True continue velocity_vector = header["velocity_vectors"][ind_chirp][edge_left:edge_right] assert np.all(velocity_vector != 0) moments[ind_time, ind_range, :] = radar_moment_calculation( spectra[ind_time, ind_range, edge_left:edge_right], velocity_vector, ) # shift mean Doppler velocity by half a bin moments[:, ranges[ind_chirp] : ranges[ind_chirp + 1], 1] -= dopp_res / 2.0 output = { key: moments[:, :, i] for i, key in enumerate(["Ze", "MeanVel", "SpecWidth", "Skewn", "Kurt"]) } for key in output.keys(): output[key][no_signal] = fill_value return output @jit(nopython=True, fastmath=True) def radar_moment_calculation(signal: np.ndarray, vel_bins: np.ndarray) -> np.ndarray: """Calculates radar moments from one a single spectral line. Calculation reflectivity, mean Doppler velocity, spectral width, skewness, and kurtosis of one Doppler spectrum. Optimized for the use of Numba. Args: signal: Detected signal from a Doppler spectrum. vel_bins: Extracted velocity bins of the signal (same length as signal). Returns: array containing: - Reflectivity (0th moment) over range of velocity bins [mm6/m3] - Mean velocity (1st moment) over range of velocity bins [m/s] - Spectrum width (2nd moment) over range of velocity bins [m/s] - Skewness (3rd moment) over range of velocity bins - Kurtosis (4th moment) over range of velocity bins """ signal_sum = np.sum(signal) # linear full spectrum Ze [mm^6/m^3], scalar ze_lin = signal_sum / 2.0 # divide by 2 because vertical and horizontal channel are added. pwr_nrm = signal / signal_sum # determine normalized power (NOT normalized by Vdop bins) vel = np.sum(vel_bins * pwr_nrm) vel_diff = vel_bins - vel vel_diff2 = vel_diff * vel_diff sw = np.sqrt(np.abs(np.sum(pwr_nrm * vel_diff2))) sw2 = sw * sw skew = np.sum(pwr_nrm * vel_diff * vel_diff2 / (sw * sw2)) kurt = np.sum(pwr_nrm * vel_diff2 * vel_diff2 / (sw2 * sw2)) return np.array((ze_lin, vel, sw, skew, kurt), dtype=np.float32) @jit(nopython=True, fastmath=True) def find_peak_edges(signal: np.ndarray) -> Tuple[int, int]: """Returns the indices of left and right edge of the main signal peak in a Doppler spectra. Args: signal: 1D array Doppler spectra. Returns: 2-element tuple containing the left / right indices of the main peak edges. """ len_sig = len(signal) edge_left, edge_right = 0, len_sig threshold = np.min(signal) imax = np.argmax(signal) for ind in range(imax, len_sig): if signal[ind] > threshold: continue edge_right = ind break for ind in range(imax, -1, -1): if signal[ind] > threshold: continue edge_left = ( ind + 1 ) # the +1 is important, otherwise a fill_value will corrupt the numba code break return edge_left, edge_right def calc_spectral_LDR(header: dict, data: dict) -> np.ndarray: """Computes spectral (S)LDR for vertically pointing STSR radar. Method by Galetti et al. (2012); Based on code by Alexander Myagkov (RPG). Args: header: Level 0 nD variables. data: Level 0 nD metadata. Returns: Computed SLDR [dB]. """ spec_tot = scale_spectra(data["TotSpec"], header["SWVersion"]) spec_V = spec_tot - data["HSpec"] - 2 * data["ReVHSpec"] noise_V = data["TotNoisePow"] / 2.0 # TBD: how to obtain noise power in vertical channel? bins_per_chirp = np.diff(np.hstack((header["RngOffs"], header["RAltN"]))) noise_h_per_bin = (data["HNoisePow"] / np.repeat(header["SpecN"], bins_per_chirp))[ :, :, np.newaxis ] noise_v_per_bin = (noise_V / np.repeat(header["SpecN"], bins_per_chirp))[:, :, np.newaxis] # Avoid division by zero noise_v_per_bin[noise_v_per_bin == 0] = 1e-10 noise_h_per_bin[noise_h_per_bin == 0] = 1e-10 SNRv = spec_V / noise_v_per_bin SNRh = data["HSpec"] / noise_h_per_bin snr_mask = (SNRv < 1000) | (SNRh < 1000) rhv = np.abs(data["ReVHSpec"] + complex(imag=1) * data["ImVHSpec"]) / np.sqrt( (spec_V + noise_v_per_bin) * (data["HSpec"] + noise_h_per_bin) ) sldr = 10 * np.log10((1 - rhv) / (1 + rhv)) snr_mask = snr_mask | (data["TotSpec"] == 0.0) sldr[snr_mask] = -999 return sldr def scale_spectra(signal: np.ndarray, software_version: float) -> np.ndarray: """Scales combined spectrum. Starting from software version 5.40, the combined spectrum is normalized by 4. For previous versions, the combined spectrum was normalized by 2. Only for STSR mode radar (TBD). Args: signal: Combined spectrum (TotSpec). software_version: 10 * radar software version number. Returns: Scaled spectra. """ scale = 2 if software_version < 540 else 4 return scale * signal
/rpgPy-0.14.2.tar.gz/rpgPy-0.14.2/rpgpy/spcutil.py
0.958304
0.760806
spcutil.py
pypi
from pathlib import Path from typing import Iterator, Tuple import numpy as np from rpgpy import utils def read_rpg_header(file_name: Path) -> Tuple[dict, int]: """Reads header from RPG binary file. Supports Level 0 (version 2.0, 3.5, 4.0) and Level 1 (version 1.0, 2.0, 3.5, 4.0) Args: file_name: name of the file. Returns: 2-element tuple containing the header (as dict) and file position. """ def read(*fields): block = np.fromfile(file, np.dtype(list(fields)), 1) assert block.dtype.names is not None for name in block.dtype.names: array = block[name][0] if utils.isscalar(array): header[name] = array else: header[name] = np.array(array, dtype=_get_dtype(array)) header: dict = {} file = open(file_name, "rb") # pylint: disable=R1732 read(("FileCode", "i4"), ("HeaderLen", "i4")) level, version = utils.get_rpg_file_type(header) if version > 2.0: read(("StartTime", "uint32"), ("StopTime", "uint32")) if version > 1.0: read(("CGProg", "i4")) read(("ModelNo", "i4")) header["ProgName"] = _read_string(file) header["CustName"] = _read_string(file) if version > 1.0: read(("Freq", "f"), ("AntSep", "f"), ("AntDia", "f"), ("AntG", "f"), ("HPBW", "f")) if level == 0: read(("Cr", "f")) read(("DualPol", "i1")) if level == 0: read(("CompEna", "i1"), ("AntiAlias", "i1")) read( ("SampDur", "f"), ("GPSLat", "f"), ("GPSLong", "f"), ("CalInt", "i4"), ("RAltN", "i4"), ("TAltN", "i4"), ("HAltN", "i4"), ("SequN", "i4"), ) n_levels, n_temp, n_humidity, n_chirp = _get_number_of_levels(header) read(("RAlts", _dim(n_levels)), ("TAlts", _dim(n_temp)), ("HAlts", _dim(n_humidity))) if level == 0: read(("Fr", _dim(n_levels))) read( ("SpecN", _dim(n_chirp, "i4")), ("RngOffs", _dim(n_chirp, "i4")), ("ChirpReps", _dim(n_chirp, "i4")), ("SeqIntTime", _dim(n_chirp)), ("dR", _dim(n_chirp)), ("MaxVel", _dim(n_chirp)), ) if version > 2.0: if level == 0: read( ("ChanBW", _dim(n_chirp)), ("ChirpLowIF", _dim(n_chirp, "i4")), ("ChirpHighIF", _dim(n_chirp, "i4")), ("RangeMin", _dim(n_chirp, "i4")), ("RangeMax", _dim(n_chirp, "i4")), ("ChirpFFTSize", _dim(n_chirp, "i4")), ("ChirpInvSamples", _dim(n_chirp, "i4")), ("ChirpCenterFr", _dim(n_chirp)), ("ChirpBWFr", _dim(n_chirp)), ("FFTStartInd", _dim(n_chirp, "i4")), ("FFTStopInd", _dim(n_chirp, "i4")), ("ChirpFFTNo", _dim(n_chirp, "i4")), ("SampRate", "i4"), ("MaxRange", "i4"), ) read( ("SupPowLev", "i1"), ("SpkFilEna", "i1"), ("PhaseCorr", "i1"), ("RelPowCorr", "i1"), ("FFTWindow", "i1"), ("FFTInputRng", "uint16"), ("SWVersion", "uint16"), ("NoiseFilt", "f4"), ) if level == 1 and version > 3.5: read(("InstCalPar", "i4")) elif level == 0: _ = np.fromfile(file, "i4") if level == 0 or (level == 1 and version > 3.5): _ = np.fromfile(file, "i4", 24) _ = np.fromfile(file, "uint32", 10000) if level == 0: header["velocity_vectors"] = utils.create_velocity_vectors(header) # version 1.0 else: read(("RAltN", "i4")) n_levels = int(header["RAltN"]) read(("RAlts", _dim(n_levels))) read(("SequN", "i4")) n_chirp = int(header["SequN"]) read( ("RngOffs", _dim(n_chirp, "i4")), ("dR", _dim(n_chirp)), ("SpecN", _dim(n_chirp, "i4")), ("DoppRes", _dim(n_chirp)), ("MaxVel", _dim(n_chirp)), ) read(("CalInt", "i4"), ("AntSep", "f"), ("HPBW", "f"), ("SampDur", "f")) header["TAltN"] = np.array([0]) header["HAltN"] = np.array([0]) if header["ModelNo"] == 1: header["DualPol"] = np.array([1]) else: header["DualPol"] = np.array([0]) file_position = file.tell() file.close() return header, file_position def _read_string(file_id) -> str: """Read characters from binary data until whitespace.""" str_out = "" while True: value = np.fromfile(file_id, np.int8, 1) if value: try: str_out += chr(value[0]) except ValueError: str_out += "%" else: break return str_out def _get_number_of_levels(header: dict) -> Iterator[int]: for name in ("RAltN", "TAltN", "HAltN", "SequN"): yield int(header[name]) def _dim(length: int, dtype: str = "f") -> str: return f"({length},){dtype}" def _get_dtype(array: np.ndarray) -> type: if array.dtype in (np.int8, np.int32, np.uint32): return int return float
/rpgPy-0.14.2.tar.gz/rpgPy-0.14.2/rpgpy/header.py
0.656328
0.396331
header.py
pypi
from typing import NamedTuple, Optional class Meta(NamedTuple): name: str long_name: str standard_name: Optional[str] = None units: Optional[str] = None comment: Optional[str] = None METADATA = { "DoppRes": Meta(name="doppler_resolution", long_name="Doppler resolution", units="m/s"), "FileCode": Meta(name="file_code", long_name="File Code"), "HeaderLen": Meta(name="header_length", long_name="Header Length", units="bytes"), "StartTime": Meta( name="start_time", long_name="Start Time", comment="time of first sample in file" ), "StopTime": Meta( name="stop_time", long_name="Stop Time", comment="time of last sample in file" ), "CGProg": Meta( name="program_number", long_name="Program Number", comment="chirp generator program number" ), "ModelNo": Meta( name="model_number", long_name="Model Number", comment="0=94GHz single polarisation radar, 1=94GHz dual polarisation radar", ), "ProgName": Meta(name="program_name", long_name="Program Name"), "CustName": Meta(name="customer_name", long_name="Customer Name"), "Freq": Meta(name="radar_frequency", long_name="Radar Frequency", units="GHz"), "AntSep": Meta( name="antenna_separation", long_name="Antenna Separation", units="m", comment="separation of both antenna axis (bistatic configuration)", ), "AntDia": Meta(name="antenna_diameter", long_name="Antenna Diameter", units="m"), "AntG": Meta(name="antenna_gain", long_name="Antenna Gain", comment="linear antenna gain"), "HPBW": Meta(name="half_power_beam_width", long_name="Half Power Beam Width", units="degrees"), "Cr": Meta(name="radar_constant", long_name="Radar Constant"), "DualPol": Meta( name="dual_polarisation", long_name="Dual Polarisation", comment="0=single polarisation radar, 1=dual polarisation radar in LDR mode, " "2=dual polarisation radar in STSR mode", ), "CompEna": Meta( name="compression", long_name="Compression", comment="0=not compressed, 1=compressed, 2=compressed and polarimetric variables saved", ), "AntiAlias": Meta( name="anti_alias", long_name="Anti Alias", comment="0=spectra not anti-aliased, 1=spectra have been anti-aliased", ), "SampDur": Meta(name="sample_duration", long_name="Sample Duration", units="s"), "GPSLat": Meta(name="gps_latitude", long_name="GPS Latitude", units="degrees_north"), "GPSLong": Meta(name="gps_longitude", long_name="GPS Longitude", units="degrees_east"), "CalInt": Meta( name="calibration_interval", long_name="Calibration Interval", comment="period for automatic zero calibrations in number of samples", ), "RAltN": Meta( name="n_range_layers", long_name="Number of Range Layers", comment="number of radar ranging layers", ), "TAltN": Meta( name="n_temperature_layers", long_name="Number of Temperature Layers", ), "HAltN": Meta( name="n_humidity_layers", long_name="Number of Humidity Layers", ), "SequN": Meta( name="n_chirp_sequences", long_name="Number of Chirp Sequences", ), "RAlts": Meta(name="range_layers", long_name="Range Layers"), "TAlts": Meta(name="temperature_layers", long_name="Temperature Layers"), "HAlts": Meta(name="humidity_layers", long_name="Humidity Layers"), "Fr": Meta(name="range_factors", long_name="Range Factors"), "SpecN": Meta( name="n_samples_in_chirp", long_name="Number of Spectral Samples in Each Chirp Sequence" ), "RngOffs": Meta(name="chirp_start_indices", long_name="Chirp Sequence Start Indices"), "ChirpReps": Meta( name="n_chirps_in_sequence", long_name="Number of Averaged Chirps in Each Sequence" ), "SeqIntTime": Meta(name="integration_time", long_name="Effective Sequence Integration Time"), "dR": Meta(name="range_resolution", long_name="Chirp Sequence Range Resolution", units="m"), "MaxVel": Meta(name="nyquist_velocity", long_name="Nyquist velocity", units="m/s"), "ChanBW": Meta(name="bandwidth", long_name="Bandwidth of Individual Radar Channel", units="Hz"), "ChirpLowIF": Meta(name="lowest_IF_frequency", long_name="Lowest IF Frequency", units="Hz"), "ChirpHighIF": Meta(name="highest_IF_frequency", long_name="Highest IF Frequency", units="Hz"), "RangeMin": Meta( name="minimum_altitude", long_name="Minimum Altitude", units="m", comment="minimum altitude (range) of the sequence", ), "RangeMax": Meta( name="maximum_altitude", long_name="Maximum Altitude", units="m", comment="maximum altitude (range) of the sequence)", ), "ChirpFFTSize": Meta(name="fft_size", long_name="FFT Size", comment="Must be power of 2"), "ChirpInvSamples": Meta( name="n_invalid_samples", long_name="Number of Invalid Samples", ), "ChirpCenterFr": Meta( name="chirp_center_frequency", long_name="Chirp Center Frequency", units="MHz" ), "ChirpBWFr": Meta(name="chirp_bandwidth", long_name="Chirp Bandwidth", units="MHz"), "FFTStartInd": Meta(name="fft_start_index", long_name="FFT Start Index"), "FFTStopInd": Meta(name="fft_stop_index", long_name="FFT Stop Index"), "ChirpFFTNo": Meta( name="n_chirp_fft", long_name="Number of FFT Range Layers in Chirp", comment="Usually = 1" ), "SampRate": Meta(name="adc_sampling_rate", long_name="ADC Sampling Rate", units="Hz"), "MaxRange": Meta( name="maximum_range", long_name="Maximum Range", units="m", comment="maximum unambiguous range", ), "SupPowLev": Meta( name="power_leveling_flag", long_name="Power Leveling Flag", comment="flag indicating the use of power levelling (0=yes, 1=no)", ), "SpkFilEna": Meta( name="spike_filter_flag", long_name="Spike Filter Flag", comment="flag indicating the use of spike/plankton filter (1=yes, 0=no)", ), "PhaseCorr": Meta( name="phase_correction_flag", long_name="Phase Correction Flag", comment="flag indicating the use of phase correction (1=yes, 0=no)", ), "RelPowCorr": Meta( name="relative_power_correction_flag", long_name="Relative Power Correction Flag", comment="flag indicating the use of relative power correction (1=yes, 0=no)", ), "FFTWindow": Meta( name="fft_window", long_name="FFT Window", comment="FFT window in use: 0=square, 1=parzen, 2=blackman, 3=welch, 4=slepian2, " "5=slepian3", ), "FFTInputRng": Meta( name="adc_voltage_range", long_name="ADC Voltage Range", comment="ADC input voltage range (+/-)", units="mV", ), "NoiseFilt": Meta( name="noise_filter_threshold", long_name="Noise Filter Threshold", comment="noise filter threshold factor (multiple of STD in Doppler spectra)", ), "Time": Meta(name="time", long_name="Time of Sample", comment="since 1.1.2001", units="s"), "MSec": Meta(name="time_ms", long_name="Milliseconds of Sample", units="ms"), "QF": Meta( name="quality_flag", long_name="Quality Flag", comment="Bit 1=ADC saturation, Bit 2=spectral width too high, Bit 3=no transm. power " "leveling", ), "RR": Meta(name="rain_rate", long_name="Rain Rate", units="mm/h"), "RelHum": Meta(name="relative_humidity", long_name="Relative Humidity", units="%"), "EnvTemp": Meta(name="temperature", long_name="Environment Temperature", units="K"), "BaroP": Meta(name="pressure", long_name="Barometric Pressure", units="hPa"), "WS": Meta(name="wind_speed", long_name="Wind Speed", units="km/h"), "WD": Meta(name="wind_direction", long_name="Wind Direction", units="degrees"), "DDVolt": Meta(name="voltage", long_name="Direct Detection Channel Voltage", units="V"), "DDTb": Meta(name="brightness_temperature", long_name="Brightness Temperature", units="K"), "TransPow": Meta(name="transmitter_power", long_name="Transmitter Power", units="W"), "TransT": Meta(name="transmitter_temperature", long_name="Transmitter Temperature", units="K"), "RecT": Meta(name="receiver_temperature", long_name="Receiver Temperature", units="K"), "PCT": Meta(name="pc_temperature", long_name="PC Temperature", units="K"), "LWP": Meta(name="lwp", long_name="Liquid Water Path", units="g/m2"), "Elev": Meta(name="elevation", long_name="Elevation Angle", units="degrees"), "Azi": Meta(name="azimuth", long_name="Azimuth Angle", units="degrees"), "Status": Meta( name="status_flag", long_name="Status Flag", comment="mitigation status flags: 0/1=heater switch (ON/OFF) 0/10=blower switch (ON/OFF)", ), "TotSpec": Meta(name="doppler_spectrum", long_name="Doppler Spectrum", comment="linear Ze"), "HSpec": Meta( name="doppler_spectrum_h", long_name="Doppler Spectrum H", comment="horizontal polarisation, linear Ze", ), "ReVHSpec": Meta( name="covariance_spectrum_re", long_name="Covariance Spectrum Re", comment="real part, linear Ze", ), "LDRSpec": Meta( name="LDR spectrum", long_name="linear depolarisation ratio Doppler spectra", units="dB" ), "ImVHSpec": Meta( name="covariance_spectrum_im", long_name="Covariance Spectrum Im", comment="imaginary part, linear Ze", ), "RefRat": Meta(name="ldr", long_name="Linear Depolarisation Ratio", units="dB"), "DiffPh": Meta(name="differential_phase", long_name="Differential Phase", units="rad"), "SLDR": Meta(name="ldr_slanted", long_name="LDR Slanted", units="dB"), "CorrCoeff": Meta(name="correlation_coefficient", long_name="Correlation Coefficient"), "SCorrCoeff": Meta( name="correlation_coefficient_slanted", long_name="Correlation Coefficient Slanted", ), "KDP": Meta( name="differential_phase_shift", long_name="Differential Phase Shift", units="rad/km" ), "SLv": Meta( name="sensitivity_limit_v", long_name="Sensitivity limit for vertical polarization", units="linear units", ), "SLh": Meta( name="sensitivity_limit_h", long_name="Sensitivity limit for horizontal polarization", comment="linear units", ), "DiffAtt": Meta( name="differential_attenuation", long_name="Differential Attenuation", units="db/km" ), "TotNoisePow": Meta( name="integrated_noise", long_name="Integrated Noise", comment="integrated Doppler spectrum noise power", ), "HNoisePow": Meta( name="integrated_noise_h", long_name="Integrated Noise H", comment="integrated Doppler spectrum noise power in horizontal polarisation", ), "AliasMsk": Meta( name="anti_alias_correction", long_name="Anti Alias Correction", comment="mask indicating if anti-aliasing has been applied (=1) or not (=0)", ), "MinVel": Meta(name="minimum_velocity", long_name="Minimum Velocity", units="m/s"), "PowIF": Meta(name="IF_power", long_name="Intermediate Frequency Power", units="uW"), "Ze": Meta(name="Ze", long_name="Reflectivity", comment="vertical polarisation, linear units"), "MeanVel": Meta( name="v", long_name="Doppler Velocity", units="m/s", comment="vertical polarisation" ), "SpecWidth": Meta( name="width", long_name="Spectral Width", units="m/s", comment="vertical polarisation" ), "Skewn": Meta(name="skewness", long_name="Spectral Skewness", comment="vertical polarisation"), "Kurt": Meta(name="kurtosis", long_name="Spectral Kurtosis", comment="vertical polarisation"), "velocity_vectors": Meta( name="velocity_vectors", long_name="Doppler velocity bins", comment="for each chirp" ), "InstCalPar": Meta(name="Cal_period", units="s", long_name="Calibration period"), "SWVersion": Meta( name="software_version", long_name="Software version", comment="Multiplied by 100" ), }
/rpgPy-0.14.2.tar.gz/rpgPy-0.14.2/rpgpy/metadata.py
0.923975
0.489564
metadata.py
pypi
import datetime import os from pathlib import Path from typing import Dict, NamedTuple, Tuple, Union import numpy as np from numpy import ma def get_current_time() -> str: """Returns current UTC time.""" return datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S") def rpg_seconds2date(time_stamp: float, date_only: bool = False) -> list: """Convert RPG timestamp to UTC date + time. Args: time_stamp (int): RPG timestamp. date_only (bool): If true, return only date (no time). Returns: list: UTC date + optionally time in format ['YYYY', 'MM', 'DD', 'hh', 'min', 'sec'] """ epoch = (2001, 1, 1) epoch_in_seconds = datetime.datetime.timestamp( datetime.datetime(*epoch, tzinfo=datetime.timezone.utc) ) time_stamp += epoch_in_seconds date_time = datetime.datetime.utcfromtimestamp(time_stamp).strftime("%Y %m %d %H %M %S").split() if date_only: return date_time[:3] return date_time def rpg_seconds2datetime64(timestamps: np.ndarray) -> np.ndarray: """Convert NumPy array of RPG timestamps to datetime64 in UTC.""" return np.datetime64("2001-01-01") + timestamps.astype("timedelta64[s]") class RpgStatusFlags(NamedTuple): """ Status flag is a float which (as of 2022-11-17) can have up to 4 digits (WXYZ), where: - Z (least significant digit) is 1 when heater is on, otherwise 0 (please note, that no RPG radar has a physical heater) - Y is 1 when blower is on, otherwise 0 - X is 1 when temperature profile is from a coupled HATPRO, otherwise 0 - W is 1 when humidity profiles are from a coupled HATPRO, otherwise 0 """ heater: ma.MaskedArray blower: ma.MaskedArray hatpro_temperature: ma.MaskedArray hatpro_humidity: ma.MaskedArray def decode_rpg_status_flags(flags: np.ndarray) -> RpgStatusFlags: tmp = flags.astype(np.uint32) mask = tmp != flags output = {} for key in ["heater", "blower", "hatpro_temperature", "hatpro_humidity"]: tmp, values = np.divmod(tmp, 10) mask |= values > 1 output[key] = values masked_output: Dict[str, ma.MaskedArray] = { key: ma.masked_array(values, mask) for key, values in output.items() } return RpgStatusFlags(**masked_output) def get_rpg_file_type(header: dict) -> Tuple[int, float]: """Find level and version of RPG cloud radar binary file. Args: header (dict): Header of the radar file containing *file_code* key. Returns: tuple: 2-element tuple containing Level (0 or 1) and Version (1.0, 2.0, 3.5 or 4.0). Raises: RuntimeError: Unknown file type. """ file_code = header["FileCode"] if file_code == 789346: return 0, 2.0 if file_code == 889346: return 0, 3.5 if file_code == 789345: return 1, 1.0 if file_code == 789347: return 1, 2.0 if file_code == 889347: return 1, 3.5 if file_code == 889348: return 1, 4.0 raise RuntimeError(f"Unsupported RPG binary file. File code: {file_code}") def isscalar(array) -> bool: """Tests if input is scalar. By "scalar" we mean that array has a single value. Examples: >>> isscalar(1) True >>> isscalar([1]) True >>> isscalar(np.array(1)) True >>> isscalar(np.array([1])) True """ arr = ma.array(array) if not hasattr(arr, "__len__") or arr.shape == () or len(arr) == 1: return True return False def create_velocity_vectors(header: dict) -> np.ndarray: """Create Doppler velocity vector for each chirp. Args: header (dict): Header of the radar file. Returns: np.ndarray: Doppler velocity vector for each chirp. These are equally long vectors (max number of bins) where the padded values are masked. """ n_chirps = header["SequN"] n_bins_max = np.max(header["SpecN"]) # zeros will be automatically masked in the netCDF file: velocity_vectors = np.zeros((n_chirps, n_bins_max)) for ind, (n_bins, chirp_max_vel) in enumerate(zip(header["SpecN"], header["MaxVel"])): bins_to_shift = (n_bins_max - n_bins) // 2 dopp_res = chirp_max_vel / n_bins velocity = np.linspace(-chirp_max_vel + dopp_res, +chirp_max_vel - dopp_res, n_bins) velocity_vectors[ind, bins_to_shift : bins_to_shift + len(velocity)] = velocity return velocity_vectors def str2path(path: Union[Path, str, None]) -> Path: """Converts path as str to pathlib.Path.""" if path is None: return Path(os.getcwd()) return Path(path) if isinstance(path, str) else path
/rpgPy-0.14.2.tar.gz/rpgPy-0.14.2/rpgpy/utils.py
0.913492
0.444263
utils.py
pypi
import glob import logging import os import uuid from pathlib import Path from typing import Optional, Tuple, Union import netCDF4 import numpy as np from numpy import ma from numpy.testing import assert_array_almost_equal, assert_array_equal from tqdm import tqdm import rpgpy.metadata from rpgpy import read_rpg, utils, version from rpgpy.spcutil import spectra2moments SKIP_ME = ("ProgName", "CustName", "HAlts", "TAlts", "StartTime", "StopTime") def spectra2nc( input_file: Union[Path, str], output_file: Union[Path, str], n_points_min: int = 4, global_attr: Optional[dict] = None, ) -> None: """Calculates moments from RPG Level 0 file and writes netCDF4 file. Args: input_file: Level 0 filename. output_file: Name of the output file. n_points_min: Number of points in a valid spectral line. Default is 4. global_attr: Additional global attributes. """ input_file = utils.str2path(input_file) output_file = utils.str2path(output_file) f = netCDF4.Dataset(output_file, "w", format="NETCDF4_CLASSIC") header, data = read_rpg(input_file) moments = spectra2moments(data, header, fill_value=0, n_points_min=n_points_min) data = {key: data[key] for key, array in data.items() if array.ndim == 1} data = {**data, **moments} metadata = rpgpy.metadata.METADATA logging.info("Writing compressed netCDF4 file") _create_dimensions(f, header, level=0) _write_initial_data(f, header, metadata) _write_initial_data(f, data, metadata) _create_global_attributes(f, global_attr, header) f.close() def rpg2nc( path_to_files: Union[Path, str], output_file: Union[Path, str], global_attr: Optional[dict] = None, ) -> None: """Converts RPG binary files into a netCDF4 file. Args: path_to_files: Directory containing RPG binary file(s) and optionally a wildcard to distinguish between different types of files. E.g. '/path/to/data/*.LV0' output_file: Name of the output file. global_attr: Additional global attributes. """ path_to_files = utils.str2path(path_to_files) output_file = utils.str2path(output_file) files, level = _get_rpg_files(path_to_files) f = netCDF4.Dataset(output_file, "w", format="NETCDF4_CLASSIC") header, data = read_rpg(files[0]) metadata = rpgpy.metadata.METADATA metadata = _fix_metadata(metadata, header) logging.info("Writing compressed netCDF4 file") _create_dimensions(f, header, level) _write_initial_data(f, header, metadata) _write_initial_data(f, data, metadata) if len(files) > 1: for file in tqdm(files[1:]): header, data = read_rpg(file) _check_header_consistency(f, header) _append_data(f, data, metadata) _create_global_attributes(f, global_attr, header) f.close() logging.info(f"Created new file: {output_file}") def rpg2nc_multi( # pylint: disable=R0913 file_directory: Optional[Union[Path, str]] = None, output_directory: Optional[Union[Path, str]] = None, include_lv0: bool = True, recursive: bool = True, base_name: Optional[str] = None, global_attr: Optional[dict] = None, ) -> list: """Converts several RPG binary files individually. Converts all files with extension ['.LV0', '.LV1', '.lv0', 'lv1'] if include_lv0 is set to True (default); otherwise, it does it just for ['.LV1','.lv1'] contained in all the subdirectories of the specified folder. By default, it will write the new files with the same name of the original ones, just adding the extension '.nc' within directory where the program is executed. Args: file_directory: Root directory from which the function will start looking for files to convert. Default is the current working directory. output_directory: Directory name where files are written. Default is the current working directory. include_lv0: option to include Level 0 files or not. Default is True. recursive: If False, does not search recursively. Default is True. base_name: Base name for new filenames. global_attr: Additional global attributes. Returns: A list containing the full paths of the created netCDF files. """ new_files = [] file_directory = utils.str2path(file_directory) output_directory = utils.str2path(output_directory) for filepath in _generator_files(file_directory, include_lv0, recursive): logging.info(f"Converting {filepath}") try: prefix = f"{base_name}_" if base_name is not None else "" new_filename = f"{output_directory}/{prefix}{_new_filename(filepath)}" rpg2nc(filepath, new_filename, global_attr) new_files.append(new_filename) except IndexError as err: logging.warning(f"############### File {filepath} has not been converted: {err}") logging.info(f"Converted {len(new_files)} files") return new_files def _check_header_consistency(f: netCDF4.Dataset, header: dict) -> None: """Checks if header data is identical in all converted files.""" for key, array in header.items(): if key in f.variables: try: assert_array_almost_equal(array, f.variables[key]) except AssertionError: print("Warning: inconsistent header data in " + key, array, f.variables[key][:]) def _create_dimensions(f: netCDF4.Dataset, header: dict, level: int) -> None: f.createDimension("time", None) f.createDimension("range", header["RAltN"]) if level == 0: f.createDimension("spectrum", max(header["SpecN"])) f.createDimension("chirp", header["SequN"]) def _write_initial_data(f: netCDF4.Dataset, data: dict, metadata: dict) -> None: for key, array in data.items(): if key in SKIP_ME: continue fill_value = 0 if array.ndim > 1 and not ma.isMaskedArray(array) else None var = f.createVariable( metadata[key].name, _get_dtype(array), _get_dim(f, array), zlib=True, fill_value=fill_value, ) var[:] = array _set_attributes(var, key, metadata) def _set_attributes(obj, key: str, metadata: dict) -> None: for attr_name in ("long_name", "units", "comment"): value = getattr(metadata[key], attr_name) if value: setattr(obj, attr_name, value) obj.rpg_manual_name = key def _append_data(f: netCDF4.Dataset, data: dict, metadata: dict) -> None: ind0 = len(f.variables["time"]) ind1 = ind0 + data["Time"].shape[0] for key, array in data.items(): if key in SKIP_ME: continue key = metadata[key].name if array.ndim == 1: f.variables[key][ind0:ind1] = array elif array.ndim == 2: f.variables[key][ind0:ind1, :] = array else: f.variables[key][ind0:ind1, :, :] = array def _get_dtype(array: np.ndarray) -> str: if "int" in str(array.dtype): return "i4" return "f4" def _get_rpg_files(path_to_files: Path) -> Tuple[list, int]: """Returns list of RPG files for one day sorted by filename and level (0 or 1).""" files = glob.glob(str(path_to_files)) files.sort() if not files: raise RuntimeError("No proper RPG binary files found") extension = [file[-4:] for file in files] if all(ext.lower() == ".lv1" for ext in extension): level = 1 elif all(ext.lower() == ".lv0" for ext in extension): level = 0 else: raise RuntimeError("No consistent RPG level (0 or 1) files found.") return files, level def _get_dim(f: netCDF4.Dataset, array: np.ndarray) -> tuple: """Finds correct dimensions for a variable.""" if utils.isscalar(array): return () variable_size = [] file_dims = f.dimensions for length in array.shape: try: dim = [key for key in file_dims.keys() if file_dims[key].size == length][0] except IndexError: dim = "time" variable_size.append(dim) return tuple(variable_size) def _create_global_attributes(f: netCDF4.Dataset, global_attr: Optional[dict], header: dict): level, rpg_file_version = utils.get_rpg_file_type(header) f.Conventions = "CF-1.7" f.year, f.month, f.day = _get_measurement_date(f) f.uuid = uuid.uuid4().hex f.rpgpy_version = version.__version__ f.rpg_file_version = f"{rpg_file_version:.1f}" f.history = f"Radar file created: {utils.get_current_time()}" f.level = level if global_attr is not None and isinstance(global_attr, dict): for key, value in global_attr.items(): setattr(f, key, value) def _get_measurement_date(file: netCDF4.Dataset) -> list: time = file.variables["time"][:] date = utils.rpg_seconds2date(ma.min(time), date_only=True) assert_array_equal(date, utils.rpg_seconds2date(ma.max(time), date_only=True)) return date def _generator_files(dir_name: Path, include_lv0: bool, recursive: bool): includes = (".lv1",) if include_lv0 is False else (".lv0", "lv1") if recursive is False: for file in os.listdir(str(dir_name)): if file.lower().endswith(includes): yield os.path.join(dir_name, file) else: for subdir, _, files in sorted(os.walk(dir_name)): for file in files: if file.lower().endswith(includes): yield os.path.join(subdir, file) def _new_filename(filepath: str): return f"{os.path.split(filepath)[-1]}.nc" def _fix_metadata(metadata: dict, header: dict) -> dict: fixed_metadata = metadata.copy() if header["DualPol"] == 2: fixed_metadata["RefRat"] = rpgpy.metadata.Meta( name="zdr", long_name="Differential Reflectivity Ratio" ) return fixed_metadata
/rpgPy-0.14.2.tar.gz/rpgPy-0.14.2/rpgpy/nc.py
0.814459
0.308242
nc.py
pypi
from pathlib import Path, PurePath from typing import List, TypeVar import click import magic from rpgmaker_mv_decoder.callbacks import Callbacks from rpgmaker_mv_decoder.clickdisplay import ClickDisplay from rpgmaker_mv_decoder.constants import RPG_MAKER_MV_MAGIC from rpgmaker_mv_decoder.project import Project from rpgmaker_mv_decoder.utils import int_xor _T = TypeVar("_T", bound="ProjectEncoder") class ProjectEncoder(Project): """Class for encoding a project""" def __init__( self: _T, encoding_source: PurePath, destination: PurePath, key: str, encoding_callbacks: Callbacks = Callbacks(), ) -> _T: """`ProjectEncoder` constructor Args: - `source` (`PurePath`): Where to find the files to encode - `destination` (`PurePath`): Where to save the files to encode - `key` (`str`): Key to use when encoding - `callbacks` (`Callback`, optional): Callbacks to run on events.\ Defaults to `Callback()`. Returns: - `ProjectEncoder`: Object to run actions on """ Project.__init__(self, encoding_source, destination, key, encoding_callbacks) def encode_header(self: _T, file_header: bytes) -> bytes: """`encode_header` Encode a file with a key Takes first 16 bytes and encodes per RPGMaker MV standard Args: - `file_header` (`bytes`): 16 bytes - `key` (`str`): Key to encode with Returns: - `bytes`: First 32 bytes of the encoded file """ return RPG_MAKER_MV_MAGIC + int_xor(bytes.fromhex(self.key), file_header) def encode_file(self: _T, input_file: PurePath) -> bool: """`encode_file` Takes a path and encodes a file Args: - `input_file` (`PurePath`): File to read and modify Returns: - `bool`: True if the operation should continue """ output_file: PurePath = self.project_paths.output_directory.joinpath( PurePath(input_file).relative_to(self.project_paths.source) ) filetype: str with click.open_file(input_file, "rb") as file: file_header: bytes = file.read(16) data: bytes = file.read() filetype = magic.from_buffer(file_header + data, mime=True) data = self.encode_header(file_header) + data if filetype.startswith("image"): output_file = output_file.with_suffix(".rpgmvp") elif filetype.startswith("audio"): output_file = output_file.with_suffix(".rpgmvp") return self._save_file(output_file, data) def encode(self: _T): """`encode` Encodes the project""" files: List[Path] = self.project_paths.all_files self._callbacks.info(f"Reading from: '{self.project_paths.source}'") self._callbacks.info(f"Writing to: '{self.project_paths.output_directory}'") click_display = ClickDisplay(files) with click.progressbar( files, label="Encoding files", width=0, item_show_func=click_display.show_item, ) as files_to_encode: filename: Path for filename in files_to_encode: if self._callbacks.progressbar(files_to_encode): break if not self.encode_file(filename): break self._callbacks.progressbar(None)
/rpgmaker_mv_decoder-1.4.0.tar.gz/rpgmaker_mv_decoder-1.4.0/rpgmaker_mv_decoder/projectencoder.py
0.89682
0.263605
projectencoder.py
pypi
from typing import Callable, List, TypeVar import click from click._termui_impl import ProgressBar from rpgmaker_mv_decoder import __version__ as VERSION from rpgmaker_mv_decoder.messagetypes import MessageType from rpgmaker_mv_decoder.promptresponse import PromptResponse _T = TypeVar("_T", bound="Callbacks") def show_version(ctx: click.Context, _, value: bool): """`show_version` Click callback that displays the version number to the user Args: - `ctx` (`click.Context`): context for options parsing - `_` (`_type_`): ignored - `value` (`bool`): if true, show the version number and exit """ if not value or ctx.resilient_parsing: return click.echo(f"Version: {VERSION}") ctx.exit() def _default_progressbar_callback(_: ProgressBar) -> bool: return False def default_message_callback( level: MessageType, text: str, ) -> None: """`default_message_callback` default handling of messages Args: - `level` (`MessageType`): What kind of message this is - `text` (`str`): What to display """ text = f"{level.get_message_header()}{text}" if level == MessageType.DEBUG: click.secho(text, bold=True, fg="blue") elif level == MessageType.INFO: click.echo(text) elif level == MessageType.WARNING: click.secho(text, bold=True, fg="yellow") elif level == MessageType.ERROR: click.secho(text, bold=True, fg="red") def default_prompt_callback( message_type: MessageType = MessageType.DEBUG, message: str = "", responses: PromptResponse = PromptResponse.OK, ) -> bool: """`default_prompt_callback` default prompt Args: - `message_type` (`MessageType`, optional): What type of message is this. Defaults to\ `MessageType.DEBUG`. - `message` (`str`, optional): What to display to the user. Defaults to `""`. - `responses` (`PromptResponse`, optional): What kind of repsones can the user give.\ Defaults to `PromptResponse.OK`. Returns: - `bool`: `True` if the operation should run the action, `False` if the action should\ be skipped and `None` if the operation should be canceled. """ default_message_callback(message_type, message) if responses != PromptResponse.NONE: choice_list: List[str] = responses.get_responses() choice: str = click.prompt( "Do you want to do this?", default=choice_list[-1], type=click.Choice(choice_list, False), ) if choice == "Cancel": return None if choice == "Skip": return False if choice == "No": return False return True class Callbacks: """`Callbacks` encapsulates all the callbacks that might be used during execution""" # pylint: disable=too-many-arguments def __init__( self, progressbar_callback: Callable[[ProgressBar], bool] = _default_progressbar_callback, prompt_callback: Callable[ [MessageType, str, PromptResponse, bool], bool ] = default_prompt_callback, message_callback: Callable[[MessageType, str, bool], None] = default_message_callback, ) -> _T: """`Callbacks` constructor Args: - `progressbar_callback` (`Callable[ [ProgressBar], bool ]`, optional): How to display \ progress. Defaults to `_default_progressbar_callback`. - `prompt_callback` (`Callable[ [MessageType, str, PromptResponse], bool ]`, optional):\ How to ask the user for the appropriate action. Defaults to `default_prompt_callback`. - `message_callback` (`Callable[[MessageType, str], None]`, optional): What to do when\ displaying messages. Defaults to `default_message_callback`. """ self._progressbar_callback = progressbar_callback self._message_callback = message_callback self._prompt_callback = prompt_callback self._in_progress: bool = False # pylint: enable=too-many-arguments @property def progressbar(self: _T): """`progressbar` callback for updating the progress of the operation Returns: - `Callable[[ProgressBar], bool]`: Function to call. Progress data should \ be specified via the parameter. If the user cancels the operation, this \ should return `True` """ return self._progressbar_callback @property def prompt(self: _T) -> Callable[[MessageType, str, PromptResponse], bool]: """`prompt` callback for asking the user a question Returns: - `Callable[[MessageType, str, PromptResponse], bool]`: Function to call. First\ argument is the type of message, second is the message, third is the responses\ a user can give. """ return self._prompt_callback @property def message(self: _T) -> Callable[[MessageType, str], None]: """`message` callback for displaying a message to the user Returns: - `Callable[[MessageType, str], None]`: Function to call. First argument is the type of\ message, second is the message """ return self._message_callback def debug(self: _T, text: str) -> None: """`debug` helper function for printing a debug message Args: - `self` (`Callbacks`): callbacks object - `text` (`str`): text to display """ self.message(MessageType.DEBUG, text) def info(self: _T, text: str) -> None: """`info` helper function for printing a info message Args: - `self` (`Callbacks`): callbacks object - `text` (`str`): text to display """ self.message(MessageType.INFO, text) def warning(self: _T, text: str) -> None: """`warning` helper function for printing a warning message Args: - `self` (`Callbacks`): callbacks object - `text` (`str`): text to display """ self.message(MessageType.WARNING, text) def error(self: _T, text: str) -> None: """`error` helper function for printing a error message Args: - `self` (`Callbacks`): callbacks object - `text` (`str`): text to display """ self.message(MessageType.ERROR, text)
/rpgmaker_mv_decoder-1.4.0.tar.gz/rpgmaker_mv_decoder-1.4.0/rpgmaker_mv_decoder/callbacks.py
0.892498
0.299336
callbacks.py
pypi
import struct from binascii import crc32 from pathlib import Path, PurePath from typing import Dict, List, TypeVar import click from click._termui_impl import ProgressBar from rpgmaker_mv_decoder.callbacks import Callbacks from rpgmaker_mv_decoder.clickdisplay import ClickDisplay from rpgmaker_mv_decoder.constants import IHDR_SECTION, PNG_HEADER, RPG_MAKER_MV_MAGIC from rpgmaker_mv_decoder.exceptions import NoValidFilesFound from rpgmaker_mv_decoder.project import Project from rpgmaker_mv_decoder.utils import int_xor _T = TypeVar("_T", bound="ProjectKeyFinder") def _is_png_image(png_ihdr_data: bytes) -> bool: ihdr_data: bytes crc: bytes (ihdr_data, crc) = struct.unpack("!13s4s", png_ihdr_data) checksum = crc32(IHDR_SECTION + ihdr_data).to_bytes(4, "big") if checksum != crc: return False return True class ProjectKeyFinder(Project): """Handles finding a project key""" def __init__( self: _T, source: PurePath, callbacks: Callbacks = Callbacks(), ) -> _T: """`ProjectKeyFinder` Constructor Args: - `source` (`PurePath`): Files to use to find a key - `callbacks` (`Callback`, optional): Callbacks for specific events. Defaults to \ `Callback()`. Returns: - `ProjectKeyFinder`: Object to find key for files """ Project.__init__(self, source, None, None, callbacks) self._keys: Dict[str, int] = {} self._count: int = 0 self._keys_modified: bool = False self._skipped: int = 0 self._total: int = 0 @property def keys(self: _T) -> Dict[str, int]: """`keys` sorted dictionary of possible keys for this project""" if self._keys_modified: self._keys = dict(sorted(self._keys.items(), key=lambda item: item[1], reverse=True)) self._keys_modified = False return self._keys @keys.setter def keys(self: _T, value: str): self._keys_modified = True try: self._keys[value] += 1 except KeyError: self._keys[value] = 1 def __print_possible_keys(self: _T) -> None: """`__print_possible_keys` Prints a list (maximum 10) of keys for decoding Prints a list of possible keys for this project to the user, shows the confidence as a percentage for each key found Args: - `sorted_keys` (`Dict[str, int]`): Keys sorted by frequency - `count` (`int`): Total number of files checked """ item: str = list(self.keys.keys())[0] ratio: float = self.keys[item] / (self._count - (len(self.keys) - 1)) self._callbacks.info(f"{ratio*100:.2f}% confidence for images") self._callbacks.info( f"Possible keys: {item} used in {self.keys[item]} of {self._count} images" ) for item in list(self.keys.keys())[1:10]: self._callbacks.info( f" {item} used in {self.keys[item]} of {self._count} images" ) def __get_likely_key(self: _T) -> None: """`__get_likely_key` Takes a list of keys and returns the most likely key Sorts the keys by frequency then returns the key that's used the most Args: - `keys` (`Dict[str, int]`): Keys found and how many times they showed up - `count` (`_type_`): Total number of files checked Returns: - `str`: Key to use for decoding """ self.key = list(self.keys.keys())[0] if len(self.keys) != 1: self.__print_possible_keys() def _report_results(self: _T, item: str): percentage: float = (self._count * 100.0) / self._total self._callbacks.info("") if self._skipped > 0: self._callbacks.info( f"Found {self._skipped} files ending with .rpgmvp that were not PNG images" ) self._callbacks.info( f"Found the same key for {self._count}/{self._total} ({percentage:0.02f}%) files" ) self._callbacks.info(f"Using '{item}' as the key") def _handle_files(self: _T, all_files: ProgressBar) -> int: self._total = all_files.length min_found: int = max(9, all_files.length // 20) + 1 filename: Path self._count = 0 self._skipped = 0 for filename in all_files: item: str = None if self._callbacks.progressbar(all_files): break rpgmaker_header: bytes file_header: bytes png_ihdr: bytes with click.open_file(filename, "rb") as file: rpgmaker_header = file.read(16) file_header = file.read(16) png_ihdr = file.read(17) if rpgmaker_header == RPG_MAKER_MV_MAGIC and _is_png_image(png_ihdr): item = int_xor(file_header, PNG_HEADER).hex() self._count += 1 self.keys = item if len(self.keys) == 1 and self._count >= min_found: all_files.update(self._total - self._count) self._report_results(item) break else: self._skipped += 1 self._total -= 1 min_found = max(10, ((self._total // 20) + 1)) self._callbacks.progressbar(None) def find_key(self: _T) -> str: """`find_key` Check the path for PNG images and return the decoding key Finds image files under the specified path and looks for a key to decode all the files. This can fail if only a small number (less than 3) of the .rpgmvp files are .png images. Raises: - `NoValidFilesFound`: If no valid PNG images are found Returns: - `str`: Decoding key """ if not self.project_paths.source: raise NoValidFilesFound("Invalid source path") files: List[Path] = sorted(Path(self.project_paths.source).glob("**/*.rpgmvp")) click_display = ClickDisplay(files) with click.progressbar( files, label="Finding key", item_show_func=click_display.show_item ) as all_files: self._handle_files(all_files) if self._count == 0: raise NoValidFilesFound(f"No png files found under: '{Path}'") self.__get_likely_key() return self.key
/rpgmaker_mv_decoder-1.4.0.tar.gz/rpgmaker_mv_decoder-1.4.0/rpgmaker_mv_decoder/projectkeyfinder.py
0.880258
0.286749
projectkeyfinder.py
pypi
import os import re from abc import ABC from pathlib import Path, PurePath from typing import TypeVar import click from rpgmaker_mv_decoder.callbacks import Callbacks from rpgmaker_mv_decoder.messagetypes import MessageType from rpgmaker_mv_decoder.projectpaths import ProjectPaths from rpgmaker_mv_decoder.promptresponse import PromptResponse _T = TypeVar("_T", bound="Project") class Project(ABC): """Handles a project and runs operations""" def __init__( self: _T, source_path: PurePath = None, destination_path: PurePath = None, key: str = None, callbacks: Callbacks = Callbacks(), ) -> _T: """`Project` constructor Args: - `source` (`PurePath`): Where to find the files - `destination` (`PurePath`): Where to save the files - `key` (`str`): Key to use - `callbacks` (`Callback`, optional): Callbacks to run on events.\ Defaults to `Callback()`. - `overwrite` (`bool`, optional): if files should be overwritten. `None` will cause the\ system to prompt the user. Defaults to `None` Returns: - `Project`: Object Notes: - This is an Abstract Base Class, do not use this directly """ self.project_paths: ProjectPaths = ProjectPaths(source_path, destination_path) self.key: str = key self._callbacks: Callbacks = callbacks self._overwrite: bool = None def _save_file(self: _T, filename: PurePath, data: bytes) -> bool: """`_save_file` Saves the file to disk, calling the overwrite callback if the file exists already. Args: - `filename` (`PurePath`): File to save - `data` (`bytes`): What to write into the file Returns: - `bool`: True if the current operation should continue """ overwrite: bool = True if Path(filename).exists(): if self.overwrite is None: response = self._callbacks.prompt( MessageType.WARNING, f"""The file: {filename} Is about to be overwritten.""", PromptResponse.YES_NO_CANCEL, ) if response is None: return False else: overwrite = self.overwrite if overwrite: try: os.makedirs(filename.parent) except FileExistsError: pass with click.open_file(filename, mode="wb") as file: file.write(data) return True @property def overwrite(self: _T) -> bool: """if files should be overwritten. `None` will cause the system to prompt the user.""" return self._overwrite @overwrite.setter def overwrite(self: _T, value: bool) -> bool: """if files should be overwritten. `None` will cause the system to prompt the user.""" self._overwrite = value @property def key(self: _T) -> str: """Gets the `key` or returns `None` if the key is not valid""" return self._key if self._key else None @key.setter def key(self: _T, value: str): """Sets the `key`. Must be a 32 charcater hex string or the key will be set to `None`""" if value: if re.compile(r"^[0-9a-fA-F]{32}$").match(value): self._key = value return self._key = None
/rpgmaker_mv_decoder-1.4.0.tar.gz/rpgmaker_mv_decoder-1.4.0/rpgmaker_mv_decoder/project.py
0.838117
0.21036
project.py
pypi
import struct from pathlib import Path, PurePath from typing import List, TypeVar import click import magic from rpgmaker_mv_decoder.callbacks import Callbacks from rpgmaker_mv_decoder.clickdisplay import ClickDisplay from rpgmaker_mv_decoder.constants import OCT_STREAM, RPG_MAKER_MV_MAGIC from rpgmaker_mv_decoder.exceptions import FileFormatError, RPGMakerHeaderError from rpgmaker_mv_decoder.project import Project from rpgmaker_mv_decoder.utils import int_xor _T = TypeVar("_T", bound="ProjectDecoder") class ProjectDecoder(Project): """Handles a project and runs operations""" def __init__( self: _T, source: PurePath, destination: PurePath, key: str, callbacks: Callbacks = Callbacks(), ) -> _T: """`ProjectDecoder` constructor Args: - `source` (`PurePath`): Where to find the files to decode - `destination` (`PurePath`): Where to save the files to decode - `key` (`str`): Key to use when decoding - `callbacks` (`Callback`, optional): Callbacks to run on events.\ Defaults to `Callback()`. Returns: - `ProjectDecoder`: object to run actions on """ Project.__init__(self, source, destination, key, callbacks) def _get_output_filename(self: _T, filename: Path, data: bytes = None) -> str: """`_get_output_filename` Returns a file name for the specified file If data is not `None`, uses libmagic to figure out the actual file type and place a proper extension on the file. Otherwise it uses the original name to generate the extension. Args: - `filename` (`Path`): Original file path. - `data` (`bytes`, optional): File data (decoded) for libmagic. \ Defaults to `None`. Raises: - `FileFormatError`: If libmagic can't determine the file type\ or the existing file extension is unknown. Returns: - `str`: The decoded file extension """ output_file: PurePath = self.project_paths.output_directory.joinpath( PurePath(filename).relative_to(self.project_paths.source) ) if data: filetype: str = magic.from_buffer(data, mime=True) if filetype == OCT_STREAM: raise FileFormatError( f'"{filetype}" == "{OCT_STREAM}"', "Found octlet stream, key is probably incorrect.", ) return output_file.with_suffix("." + filetype.split("/")[-1]) if not filename: raise ValueError("data and filename are both None") if filename.suffix == ".rpgmvp": return output_file.with_suffix(".png") if filename.suffix == ".rpgmvo": return output_file.with_suffix(".ogg") raise FileFormatError( f'"{filename.suffix}"', f'Unknown extension "{filename.suffix}"', ) def decode_header(self: _T, file_header: bytes) -> bytes: """`decode_header` take a RPGMaker header and return the key or the actual file header Check's the first 16 bytes for the standard RPGMaker header, then drops them. Takes the next 16 bytes and either calculates the key based on a PNG image, or uses the specify key to decode. If png_ihdr_data is provided, checks that the IHDR section checksums correctly. Args: - `file_header` (`bytes`): First 32 bytes from the file, 16 bytes are the RPGMaker header,\ followed by 16 bytes of the file header Raises: - `RPGMakerHeaderError`: The header doesn't match RPGMaker's header Returns: - `bytes`: If key was None, the key needed for a PNG image header, otherwise the decoded\ file header. """ file_id: bytes header: bytes (file_id, header) = struct.unpack("!16s16s", file_header) if file_id != RPG_MAKER_MV_MAGIC: raise RPGMakerHeaderError( f'"{file_id.hex()}" != "{RPG_MAKER_MV_MAGIC.hex()}"', "First 16 bytes of this file do not match the RPGMaker header, " "is this a RPGMaker file?", ) return int_xor(bytes.fromhex(self.key), header) def decode_file(self: _T, input_file: PurePath, detect_type: bool) -> bool: """`decode_file` Takes a path and decodes a file Args: - `input_file` (`PurePath`): File to read and modify - `detect_type` (`bool`): True means generate file extensions based on\ file contents Returns: - `bool`: True if the operation should continue """ output_file = self._get_output_filename(input_file) data: bytes with click.open_file(input_file, "rb") as file: data: bytes = self.decode_header(file.read(32)) data += file.read() if detect_type: output_file = self._get_output_filename(input_file, data) return self._save_file(output_file, data) def decode( self: _T, detect_type: bool, ) -> None: """`decode` Decodes a project Args: - `detect_type` (`bool`): True means generate file extensions based on\ file contents """ self._callbacks.info(f"Reading from: '{self.project_paths.source}'") self._callbacks.info(f"Writing to: '{self.project_paths.output_directory}'") files: List[Path] = self.project_paths.encoded_files click_display = ClickDisplay(files) with click.progressbar( files, label="Decoding files", width=0, item_show_func=click_display.show_item, ) as files_to_decode: filename: Path for filename in files_to_decode: if self._callbacks.progressbar(files_to_decode): break try: if not self.decode_file(filename, detect_type): break except RPGMakerHeaderError: warning_text: str = f'Invalid header found on "{filename}", skipping.' self._callbacks.warning(warning_text) except FileFormatError: self._callbacks.warning( "Found octlet stream, key is probably incorrect, " f"skipping {click.format_filename(str(filename))}" ) self._callbacks.progressbar(None)
/rpgmaker_mv_decoder-1.4.0.tar.gz/rpgmaker_mv_decoder-1.4.0/rpgmaker_mv_decoder/projectdecoder.py
0.853272
0.302127
projectdecoder.py
pypi
from pathlib import Path, PurePath from typing import List, TypeVar from uuid import UUID, uuid4 import click _T = TypeVar("_T", bound="ProjectPaths") class ProjectPaths: """Object that holds/validates project paths""" def __init__( self: _T, source: PurePath = None, destination: PurePath = None, ) -> _T: """`ProjectPaths` Constructor Args: - `source` (`PurePath`, optional): Files to operate on. Defaults to `None`. - `destination` (`PurePath`, optional): Where to save the files. Defaults to `None`. """ self.source: PurePath = source self.destination: PurePath = destination self._cached_output_directory: PurePath = None @property def destination(self: _T) -> PurePath: """Gets the `destination` path to use or returns `None` if the `destination` path is not valid""" return self._destination if self._destination else None @destination.setter def destination(self: _T, value: PurePath): """Sets the `destination` path. Value must exist on disk and be a directory. Passing an invalid path sets `destination` to `None`""" if value: destination_directory: Path = Path(value).resolve(strict=False) if not destination_directory.exists() or destination_directory.is_dir(): self._destination = PurePath(destination_directory) return self._destination = None return @property def source(self: _T) -> PurePath: """Gets the `source` path to use or returns `None` if the `source` path is not valid""" return self._source if self._source else None @source.setter def source(self: _T, value: PurePath): """Sets the `source` path. Value must exist on disk and be a directory. Passing an invalid path sets `source` to `None`""" if value: try: source_directory: Path = Path(value).resolve(strict=True) if source_directory.is_dir(): if source_directory.name == "audio": source_directory = source_directory.parent if source_directory.name == "img": source_directory = source_directory.parent if source_directory.name == "www": source_directory = source_directory.parent self._source = PurePath(source_directory) return except FileNotFoundError: pass self._source = None return @property def output_directory(self: _T) -> PurePath: """`output_directory` returns the name of the output directory including the project name""" if self._cached_output_directory: return self._cached_output_directory if Path(self.source.joinpath("www")).exists(): self._cached_output_directory = self.destination.joinpath(self.source.name) elif Path(self.source.joinpath("img")).exists(): self._cached_output_directory = self.destination.joinpath(self.source.name) else: tmp_dir: UUID = uuid4() self._cached_output_directory = self.destination.joinpath(str(tmp_dir)) click.echo( f"Unable to find 'www' or 'img' directly under '{self.source}'," " generating random project directory name" ) return self._cached_output_directory @property def encoded_images(self: _T) -> List[Path]: """`encoded_images` list of encoded images under the source path Creates a sorted list of `Path` objects ending with ".rpgmvp" under the source path, or `None` if the source path is unset""" return sorted(Path(self.source).glob("**/*.rpgmvp")) if self.source else None @property def encoded_files(self: _T) -> List[Path]: """`encoded_files` list of encoded files under the source path Creates a sorted list of `Path` objects ending with ".rpgmvp" or ".rpgmvo under the source path, or `None` if the source path is unset""" return sorted(Path(self.source).glob("**/*.rpgmv[op]")) if self.source else None @property def all_files(self: _T) -> List[Path]: """`all_files` list of all files under the source path Creates a sorted list of `Path` objects that are files under the source path, or `None` if the source path is unset """ if self.source is None: return None paths: List[Path] = sorted(Path(self.source).glob("**/*")) return [e for e in paths if e.is_file()]
/rpgmaker_mv_decoder-1.4.0.tar.gz/rpgmaker_mv_decoder-1.4.0/rpgmaker_mv_decoder/projectpaths.py
0.885594
0.371878
projectpaths.py
pypi
import time try: import RPi.GPIO as GPIO except ModuleNotFoundError: pass class Segments: def __init__(self, bcm_gpio_clock=11, bcm_gpio_latch=13, bcm_gpio_data=14, num_displays=7, debug=False, offline=False): self._is_offline = offline self._is_debug = debug self._segment_clock = bcm_gpio_clock self._segment_latch = bcm_gpio_latch self._segment_data = bcm_gpio_data self._num_displays = num_displays self._num_segments = 8 # 7 + dot self._current_position = 0 self._current_state = [' '] * self._num_displays self._all_states = [] if not self._is_offline: # GPIO.setwarnings(False) GPIO.setmode(GPIO.BCM) GPIO.setup(self._segment_clock, GPIO.OUT) GPIO.setup(self._segment_data, GPIO.OUT) GPIO.setup(self._segment_latch, GPIO.OUT) GPIO.output(self._segment_clock, GPIO.LOW) GPIO.output(self._segment_data, GPIO.LOW) GPIO.output(self._segment_latch, GPIO.LOW) def shutdown(self): if not self._is_offline: GPIO.cleanup() def _debug(self, message): if self._is_debug: print(message) def clear(self): self._debug("Clearing displays") for i in range(self._num_segments - 1): self.post_character(" ") self.move_to_next_segment() while not self._current_position == 0: self.move_to_next_segment() return self._all_states def display_error(self, message, placeholder='-'): self._debug(message) self._show_string(placeholder * self._num_displays) def show(self, value, scroll_speed=0.3, pad_char=' '): self.clear() if type(value) is str: self._show_string(value, scroll_speed=scroll_speed, pad_char=pad_char) elif type(value) is int: self._show_string(str(value), scroll_speed=scroll_speed, pad_char=pad_char) elif type(value) is float: self._show_string(str(value), scroll_speed=scroll_speed, pad_char=pad_char) else: self._debug("Unhandled type for value {}, type is {}. Falling back to string".format(value, type(value))) self._show_string(str(value), scroll_speed=scroll_speed, pad_char=pad_char) return self._all_states def _show_string(self, value, scroll_speed=0.3, pad_char=' '): if len(value.replace('.', '').replace(',', '')) > self._num_displays: # Text is too wide, we'll go into scroll-mode: # Left-and-right-pad text so that the text scrolls from left to right, from 'outside' of the displays: display_value = '{padding}{text}{padding}'.format(padding=pad_char * self._num_displays, text=value) self._debug("String: In: '{}' Out: '{}'".format(value, display_value)) for i in range(len(display_value) - self._num_displays + 1): # Adjust for punctuations, by adding more letters: num_punctuations = sum(display_value[i:i + self._num_displays].count(punctuation) for punctuation in ('.', ',')) # Show one of the substrings of length that fits the displays: self._show_string(display_value[i:i + self._num_displays + num_punctuations]) time.sleep(scroll_speed) else: display_value = "{text:>}".format(text=value) self._debug("String: In: '{}' Out: '{}'".format(value, display_value)) display_value = display_value[::-1] # Reversed as we're visualizing from left to right on the displays if not (display_value[-1] == '.' or display_value[-1] == ','): for i in range(len(display_value)): if display_value[i] == '.' or display_value[i] == ',': continue if i > 0 and (display_value[i - 1] == '.' or display_value[i - 1] == ','): self.post_character(display_value[i], show_decimal=True) self.move_to_next_segment() i += 1 else: self.post_character(display_value[i]) self.move_to_next_segment() @staticmethod def _get_next_digit(value): if value <= 9: return value, None else: digit = value % 10 value = int(value / 10) return digit, value def move_to_next_segment(self): if not self._is_offline: GPIO.output(self._segment_latch, GPIO.LOW) GPIO.output(self._segment_latch, GPIO.HIGH) # Register moves storage register on the rising edge of RCK self._debug("Moving to next segment, was {}".format(self._current_position)) if self._current_position == self._num_displays - 1: self._current_position = 0 else: self._current_position += 1 # Given a number, or - shifts it out to the display def post_character(self, symbol, show_decimal=False): segments = bytes() a = 1 << 0 b = 1 << 6 c = 1 << 5 d = 1 << 4 e = 1 << 3 f = 1 << 1 g = 1 << 2 dp = 1 << 7 if symbol == 1 or symbol == "1": segments = b | c elif symbol == 2 or symbol == "2": segments = a | b | d | e | g elif symbol == 3 or symbol == "3": segments = a | b | c | d | g elif symbol == 4 or symbol == "4": segments = b | c | f | g elif symbol == 5 or symbol == "5": segments = a | c | d | f | g elif symbol == 6 or symbol == "6": segments = a | c | d | e | f | g elif symbol == 7 or symbol == "7": segments = a | b | c elif symbol == 8 or symbol == "8": segments = a | b | c | d | e | f | g elif symbol == 9 or symbol == "9": segments = a | b | c | d | f | g elif symbol == 0 or symbol == "0": segments = a | b | c | d | e | f elif symbol == "A" or symbol == "a": segments = a | b | c | e | f | g elif symbol == "B" or symbol == "b": segments = a | b | c | d | e | f | g elif symbol == 'C': segments = a | d | e | f elif symbol == 'c': segments = g | e | d elif symbol == 'D': segments = a | b | c | d | e | f elif symbol == 'd': segments = b | c | d | e | g elif symbol == "E" or symbol == "e": segments = a | d | e | f | g elif symbol == "F" or symbol == "f": segments = a | e | f | g elif symbol == "G" or symbol == "g": segments = a | c | d | e | f | g elif symbol == "H": segments = b | c | e | f | g elif symbol == "h": segments = c | e | f | g elif symbol == "I" or symbol == "i": segments = b | c elif symbol == "J" or symbol == "j": segments = b | c | d elif symbol == "K" or symbol == "k": segments = b | c | e | f | g elif symbol == "L": segments = d | e | f elif symbol == "l": segments = e | f elif symbol == "M" or symbol == "m": segments = b | c | e | f | g elif symbol == "N" or symbol == "n": segments = b | c | e | f | g elif symbol == "O": segments = a | b | c | d | e | f elif symbol == "o": segments = c | d | e | g elif symbol == "P" or symbol == "p": segments = a | b | e | f | g elif symbol == "Q": segments = a | b | f | g | c elif symbol == "q": segments = c | d | e | g | dp elif symbol == "R" or symbol == "r": segments = a | b | c | e | f | g elif symbol == "S" or symbol == "s": segments = a | c | d | f | g elif symbol == "T" or symbol == "t": segments = a | f | e elif symbol == "U" or symbol == "u": segments = b | c | d | e | f elif symbol == "V" or symbol == "v": segments = b | c | d | e | f elif symbol == "W" or symbol == "w": segments = b | c | d | e | f elif symbol == "X" or symbol == "x": segments = b | c | e | f | g elif symbol == "Y" or symbol == "y": segments = b | c | f | g elif symbol == "Z" or symbol == "z": segments = a | b | d | e | g elif symbol == "!": segments = b | c | dp elif symbol == ' ': segments = 0 elif symbol == '_': segments = d elif symbol == '.': segments = dp elif symbol == ',': segments = dp elif symbol == '-': segments = g else: segments = 0 if show_decimal: segments |= dp y = 0 while y < self._num_segments: if not self._is_offline: GPIO.output(self._segment_clock, GPIO.LOW) GPIO.output(self._segment_data, segments & 1 << (7 - y)) GPIO.output(self._segment_clock, GPIO.HIGH) y += 1 if show_decimal: stored_state = symbol + '.' else: stored_state = symbol self._current_state[self._current_position] = stored_state self._all_states.append(self._current_state[::-1]) self._debug("Displaying symbol: {symbol}. Current state: {state}" .format(symbol=stored_state, state=self._current_state[::-1])) def run_demo_pattern(segments): while True: segments.show("Hello!") time.sleep(2) segments.show(1234567) time.sleep(2) segments.show(-123.456) time.sleep(2) segments.clear() time.sleep(1) segments.show("Hello world!") if __name__ == '__main__': import signal import sys # Setup segments: # Offline == Run on actual devices / enable GPIO # Debug == Print debug state _segments = Segments(debug=True, offline=False) # Signal handler, for e.g. clean shutdown when ctrl+c is hit in the demo pattern: def signal_handler(sig, frame): print('Shutting down displays') _segments.shutdown() sys.exit(0) signal.signal(signal.SIGINT, signal_handler) # Run the demo pattern: run_demo_pattern(_segments) # Cleanup segments _segments.shutdown()
/rpi_7segment-0.0.1.tar.gz/rpi_7segment-0.0.1/rpi_7segment/segments.py
0.495117
0.252783
segments.py
pypi
from pydispatch import dispatcher import logging from apscheduler.triggers.cron import CronTrigger logger = logging.getLogger() class BaseModule(object): """ Base class for all modules expressed via configuration and loaded in the subscription chain. Classes must implement 'run' and accept and return a ModuleResult. """ scheduler = None datastore = None def __init__(self, type=None, name=None, cron=None, enabled=True, subscribed_to=[]): self.enabled = enabled self.type = type if type is None: self.type = self.__class__.__name__ self.name = name if name else self.type.lower() + str(self.__hash__()) if cron: self.set_cron(cron) self.subscribe_to(tuple(subscribed_to)) def dispatch(self, module_result=None): """ dispatch is called via the subscription chain when producers run and return data. Also called as the entry point for scheduled cron jobs and run_once calls. """ logger.debug("Running %s(%s)...", self.type, self.name) module_result = self.run(module_result) # calls derived class's 'run' method logger.debug("Done running %s(%s)...", self.type, self.name) BaseModule._broadcast(module_result) signal = self.name + '-signal' dispatcher.send(signal=signal, module_result=module_result) def run(self, module_result): return None def subscribe_to(self, modules): """ Subscribe this module to the result of each module name in *args Args: modules(list): list of modules or string names of modules """ for m in modules: if isinstance(m, BaseModule): m = m.name signal = m + '-signal' logger.debug("Subscribing %s to signal %s", self.name, signal) dispatcher.connect(self.dispatch, signal=signal, sender=dispatcher.Any) def remove_subscription(self, modules): """ Remove this module's subscription to each module name in *args Args: modules(list): list of modules or string names of modules """ for m in modules: if isinstance(m, BaseModule): m = m.name signal = m + '-signal' logger.debug("Unsubscribing %s to signal %s", self.name, signal) dispatcher.disconnect(self.dispatch, signal=signal, sender=dispatcher.Any) def set_cron(self, cron): assert BaseModule.scheduler BaseModule.scheduler.add_job(self.dispatch, CronTrigger.from_crontab(cron), None, id=self.name, jitter=4) def schedule_run(self, run_at_dt, name=None, module_result=None): assert BaseModule.scheduler if not name: name = self.name BaseModule.scheduler.add_job(self.dispatch, kwargs={'module_result': module_result}, next_run_time=run_at_dt, id=name) def then(self, *modules): """ Allows for chaining the result of one module to the next """ for m in modules: return m.subscribe_to((self,)) return self @staticmethod def _broadcast(module_result): if not BaseModule.datastore: return try: BaseModule.datastore.update(module_result) except Exception as e: logger.error(e, exc_info=True) @staticmethod def start(): assert BaseModule.scheduler BaseModule.scheduler.start() def __str__(self): return "{}({})".format(self.type, self.name)
/rpi_automator-0.0.4.tar.gz/rpi_automator-0.0.4/rpi_automator/modules/BaseModule.py
0.728265
0.282943
BaseModule.py
pypi
# title :PID.py # description :python pid controller # author :Caner Durmusoglu # date :20151218 # version :0.1 # notes : # python_version :2.7 # ============================================================================== """Ivmech PID Controller is simple implementation of a Proportional-Integral-Derivative (PID) Controller in the Python Programming Language. More information about PID Controller: http://en.wikipedia.org/wiki/PID_controller """ import time class PID: """PID Controller """ def __init__(self, P=0.2, I=0.0, D=0.0): self.Kp = P self.Ki = I self.Kd = D self.sample_time = 0.00 self.current_time = time.time() self.last_time = self.current_time self.clear() def clear(self): """Clears PID computations and coefficients""" self.SetPoint = 0.0 self.PTerm = 0.0 self.ITerm = 0.0 self.DTerm = 0.0 self.last_error = 0.0 # Windup Guard self.int_error = 0.0 self.windup_guard = 20.0 self.output = 0.0 def update(self, feedback_value): """Calculates PID value for given reference feedback .. math:: u(t) = K_p e(t) + K_i \int_{0}^{t} e(t)dt + K_d {de}/{dt} .. figure:: images/pid_1.png :align: center Test PID with Kp=1.2, Ki=1, Kd=0.001 (test_pid.py) """ error = self.SetPoint - feedback_value self.current_time = time.time() delta_time = self.current_time - self.last_time delta_error = error - self.last_error if (delta_time >= self.sample_time): self.PTerm = self.Kp * error self.ITerm += error * delta_time if (self.ITerm < -self.windup_guard): self.ITerm = -self.windup_guard elif (self.ITerm > self.windup_guard): self.ITerm = self.windup_guard self.DTerm = 0.0 if delta_time > 0: self.DTerm = delta_error / delta_time # Remember last time and last error for next calculation self.last_time = self.current_time self.last_error = error self.output = self.PTerm + (self.Ki * self.ITerm) + (self.Kd * self.DTerm) def setKp(self, proportional_gain): """Determines how aggressively the PID reacts to the current error with setting Proportional Gain""" self.Kp = proportional_gain def setKi(self, integral_gain): """Determines how aggressively the PID reacts to the current error with setting Integral Gain""" self.Ki = integral_gain def setKd(self, derivative_gain): """Determines how aggressively the PID reacts to the current error with setting Derivative Gain""" self.Kd = derivative_gain def setWindup(self, windup): """Integral windup, also known as integrator windup or reset windup, refers to the situation in a PID feedback controller where a large change in setpoint occurs (say a positive change) and the integral terms accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction). The specific problem is the excess overshooting. """ self.windup_guard = windup def setSampleTime(self, sample_time): """PID that should be updated at a regular interval. Based on a pre-determined sampe time, the PID decides if it should compute or return immediately. """ self.sample_time = sample_time
/rpi_automator-0.0.4.tar.gz/rpi_automator-0.0.4/rpi_automator/util/PID.py
0.828037
0.508361
PID.py
pypi
from pathlib import Path from tempfile import TemporaryDirectory from typing import Optional, TYPE_CHECKING if TYPE_CHECKING: from __init__ import BoardType __all__ = ["detect_board_type", "FakeBacklightSysfs"] def detect_board_type() -> Optional["BoardType"]: """Try to detect the board type based on the model string in ``/proc/device-tree/model``. """ from . import BoardType model_file = Path("/proc/device-tree/model") try: model = model_file.read_text() except OSError: return None # Tinker Board 2/2S starts with ASUS Tinker Board 2 or ASUS Tinker Board 2S if "Tinker Board 2" in model: return BoardType.TINKER_BOARD_2 # Tinker Board 1/1S starts with Rockchip RK3288 Asus Tinker Board or Rockchip RK3288 Asus Tinker Board S elif "Tinker Board" in model: return BoardType.TINKER_BOARD # Raspberry Pi starts with Raspberry Pi elif "Raspberry Pi" in model: return BoardType.RASPBERRY_PI # Microsoft Surface RT starts with Microsoft Surface RT elif "Microsoft Surface RT" in model: return BoardType.MICROSOFT_SURFACE_RT else: return None class FakeBacklightSysfs: """Context manager to create a temporary "fake sysfs" containing all relevant files. Used for tests and emulation. >>> with FakeBacklightSysfs() as backlight_sysfs: ... backlight = Backlight(backlight_sysfs_path=backlight_sysfs.path) ... # use `backlight` as usual """ def __init__(self) -> None: self._temp_dir = TemporaryDirectory() self.path = Path(self._temp_dir.name) def __enter__(self) -> "FakeBacklightSysfs": files = {"bl_power": 0, "brightness": 255, "max_brightness": 255} for filename, value in files.items(): (self.path / filename).write_text(str(value)) Path(self.path / "actual_brightness").symlink_to(self.path / "brightness") return self def __exit__(self, *_) -> None: self._temp_dir.cleanup()
/rpi-backlight-2.6.0.tar.gz/rpi-backlight-2.6.0/rpi_backlight/utils.py
0.741019
0.178472
utils.py
pypi
from argparse import ArgumentParser from . import Backlight, BoardType, __version__, utils STRING_TO_BOARD_TYPE = { "raspberry-pi": BoardType.RASPBERRY_PI, "tinker-board": BoardType.TINKER_BOARD, "tinker-board-2": BoardType.TINKER_BOARD_2, "microsoft-surface-rt": BoardType.MICROSOFT_SURFACE_RT, } BOARD_TYPE_TO_STRING = { BoardType.RASPBERRY_PI: "raspberry-pi", BoardType.TINKER_BOARD: "tinker-board", BoardType.TINKER_BOARD_2: "tinker-board-2", BoardType.MICROSOFT_SURFACE_RT: "microsoft-surface-rt", } def _create_argument_parser(): parser = ArgumentParser( description='Get/set power and brightness of the official Raspberry Pi 7" touch display.' ) parser.add_argument( "sysfs_path", metavar="SYSFS_PATH", type=str, nargs="?", default=None, help="Optional path to the backlight sysfs, set to :emulator: to use with rpi-backlight-emulator", ) parser.add_argument( "--get-brightness", action="store_true", help="get the display brightness (0-100)", ) parser.add_argument( "-b", "--set-brightness", metavar="VALUE", type=int, choices=range(0, 101), help="set the display brightness (0-100)", ) parser.add_argument( "--get-power", action="store_true", help="get the display power (on/off)" ) parser.add_argument( "-p", "--set-power", metavar="VALUE", type=str, choices=("on", "off", "toggle"), help="set the display power (on/off/toggle)", ) parser.add_argument( "-d", "--duration", type=float, default=0, help="fading duration in seconds" ) parser.add_argument( "-B", "--board-type", default=BOARD_TYPE_TO_STRING.get(utils.detect_board_type(), "raspberry-pi"), choices=STRING_TO_BOARD_TYPE.keys(), help="board type", ) parser.add_argument( "-V", "--version", action="version", version=f"%(prog)s {__version__}", ) return parser def main(): """Start the command line interface.""" parser = _create_argument_parser() args = parser.parse_args() backlight = Backlight( board_type=STRING_TO_BOARD_TYPE[args.board_type], backlight_sysfs_path=args.sysfs_path, ) if args.get_brightness: if any((args.set_brightness, args.get_power, args.set_power, args.duration)): parser.error("--get-brightness must be used without other options") print(backlight.brightness) return if args.get_power: if any( (args.get_brightness, args.set_brightness, args.set_power, args.duration) ): parser.error("--get-power must be used without other options") print("on" if backlight.power else "off") return if args.set_brightness is not None: if any((args.get_brightness, args.get_power, args.set_power)): parser.error( "-b/--set-brightness must be used without other options except for -d/--duration" ) # backlight.fade context manager can be used always as args.fade defaults to zero with backlight.fade(duration=args.duration): backlight.brightness = args.set_brightness return if args.set_power: if any((args.get_brightness, args.set_brightness, args.get_power)): parser.error("-p/--set-power may only be used with -d/--duration") if args.set_power == "toggle": if backlight.power: with backlight.fade(duration=args.duration): backlight.brightness = 0 if args.board_type == "raspberry-pi": backlight.power = False else: # Ensure brightness is 0 when we turn the display on backlight.brightness = 0 if args.board_type == "raspberry-pi": backlight.power = True with backlight.fade(duration=args.duration): backlight.brightness = 100 else: backlight.power = True if args.set_power == "on" else False return if args.duration: parser.error( "-d/--duration must be used with -b/--set-brightness or -p/--set-power toggle" )
/rpi-backlight-2.6.0.tar.gz/rpi-backlight-2.6.0/rpi_backlight/cli.py
0.655446
0.15704
cli.py
pypi
<!-- -*- coding: utf-8 -*- Author: Lars B. Rollik <L.B.Rollik@protonmail.com> License: BSD 3-Clause --> <!-- LINKS --> [isc-dhcp-server]: https://ubuntu.com/server/docs/network-dhcp [miniconda]: https://docs.conda.io/en/latest/miniconda.html [pinout.xyz]: https://pinout.xyz [Raspbian]: https://www.raspberrypi.org/documentation/installation/installing-images [arne-plugin]: https://github.com/arnefmeyer/RPiCameraPlugin [deshmukh]: https://github.com/DeshmukhLab/PicameraPaper [Vidgear]: https://github.com/abhiTronix/vidgear <!-- Banners --> [![DOI](https://zenodo.org/badge/370656006.svg)](https://zenodo.org/badge/latestdoi/370656006) [![Website](https://img.shields.io/website?up_message=online&url=https%3A%2F%2Fgithub.com/larsrollik/rpi_camera_colony)](https://github.com/larsrollik/rpi_camera_colony) [![PyPI](https://img.shields.io/pypi/v/rpi_camera_colony.svg)](https://pypi.org/project/rpi_camera_colony) [![Wheel](https://img.shields.io/pypi/wheel/rpi_camera_colony.svg)](https://pypi.org/project/rpi_camera_colony) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/python/black) <!-- [![Development Status](https://img.shields.io/pypi/status/rpi_camera_colony.svg)](https://github.com/larsrollik/rpi_camera_colony) [![Tests](https://img.shields.io/github/workflow/status/larsrollik/rpi_camera_colony/tests)]( https://github.com/larsrollik/rpi_camera_colony/actions) [![Python Version](https://img.shields.io/pypi/pyversions/rpi_camera_colony.svg)](https://pypi.org/project/rpi_camera_colony) [![Downloads](https://pepy.tech/badge/rpi_camera_colony)](https://pepy.tech/project/rpi_camera_colony) --> # RPi Camera Colony (RCC) Central control for video acquisition with (many) Raspberry Pi cameras --- Record videos in parallel with one or more remote-controlled Raspberry Pi (RPi) cameras. :movie_camera: A single configuration file and a few lines of code allow specific and reproducible acquisition settings for groups of cameras. **Example use with Python:** ```python import time from rpi_camera_colony.control.conductor import Conductor conductor = Conductor(settings_file="configuration_file") # Manages remote RPi conductor.start_acquisition() # Starts recording on all remotes time.sleep(20) # do nothing or something else in between conductor.stop_acquisition() # Stops recording on all remotes ``` or on the commandline: ```shell rcc_conductor --config-data-file CONFIG_DATA_FILE --acquisition-name ACQUISITION_NAME ``` ## Features #### A centralised control object One central object handles all communication with the remote cameras and transmits the configuration settings to each. #### A single configuration file to define reproducible multi-camera acquisition Configuration parameters are centrally defined in an easy-to-read file format and then handed down to the cameras. #### Flexible entrypoints Multiple entrypoints for use in python scripts as well as in a single line on the commandline Additionally, all levels are directly accessible: central Conductor, remote control handlers, and on the RPi the acquisition control (see below for details). #### *NEW:* Network video stream Add an additional output via network video stream directly from the main `config` or via commandline arguments when calling `rcc_acquisition`. - `config` example to use with `rcc_conductor` as usual: ```shell # ... [controllers] [[camera_red_60]] description = "back view" address = "192.168.0.22" # Network stream setup: stream_video = True stream_address = "192.168.0.22" stream_port = 8001 # ... ``` - Command-line entrypoint example: - Options: ````shell -s, --stream-video -sip STREAM_IP, --stream-ip STREAM_IP IP address for video stream. (default: 192.168.100.31) -sport STREAM_PORT, --stream-port STREAM_PORT Stream port (default: 8001) ```` - Example call: ```shell rcc_acquisition --auto-start --stream-video --stream-ip 192.168.100.31 --stream-port 9898 ``` ## Installation ### Python dependencies - python `>= 3.6` - pyzmq - configobj - tqdm - numpy - pandas Note: Use conda to install numpy/pandas to get pre-compiled packages (See below for instructions) **On RPi only:** - picamera - RPi.GPIO ### Other useful packages **For video conversion:** - gpac # contains MP4Box tool for video conversion ### Example hardware architecture ```text [outside world / internet] | | [central machine] | | [network switch] / | | \ | | | | <- network connection [rpi #1] | | | e.g. ethernet cables | | | [rpi #2] | | | | [...] | | [rpi #n] ``` #### Minimal hardware requirements - Central machine, can be RPi itself (as it only holds the control object, but does no computation) - Raspberry Pi 1. Main RPi board + fast SD card (+ card reader if not available on another machine) 2. RPi Camera (+lens?) (depends on your specific acquisition requirements) 3. RPi power supply (RPi4 requires USB-C connector) 4. Display cable (RPi4 requires mico-HDMI connector) - Ethernet cables - Network switch (if more than one RPi), e.g. any 1GB or faster ### Mapping between this package & hardware **One** Conductor to instruct **all** RPi cameras via network communication between the RemoteAcquisitionControl and PiAcquisitionControl. ```text Hardware <--> Software [central machine] <--> Conductor | | | RemoteAcquisitionControl | | ... ... | | [rpi #n] <--> PiAcquisitionControl | Camera ``` ### Raspberry Pi setup 0. Set up RPi hardware 1. Install [Raspbian] -> `NOTE: Use Raspbian Buster for now. There is no PiCamera equivalent readily available for the Raspbian Bullseye libcamera apps.` 2. Enable camera, GPIO interfaces, and ssh in `sudo raspi-config` options 3. Connect hardware: 1. Camera 2. Network cable 3. GPIO pin connection for TTL in/out (See [pinout.xyz] for **board mode** pins to use) Note: adjust pin numbers used in configuration file. Default are pin #8 for frame TTL outputs and #16 for inputs. Choose any free ground pins! 1. Install this package 1. Set up python, e.g. with [miniconda] 2. Clone this repository or use `distribute_code.sh` script (Replace hostnames for your RPi) 3. Install a. From Pypi ```shell pip install rpi_camera_colony[rpi] # <- Note: `[rpi]` argument adds specific requirements for acquisition on RPi, but is not required for controller ``` b. From Github ```shell pip install https://github.com/larsrollik/rpi_camera_colony[rpi] # <- Note: `[rpi]` argument adds specific requirements for acquisition on RPi, but is not required for controller ``` ### Central control machine setup 0. DHCP server on central computer. (Description only for Ubuntu) 1. Set up static IP address on network interface that serves RPi colony via network switch, with e.g. `/etc/network/interfaces` or `netplan` 2. Set up DHCP server with [isc-dhcp-server] 3. Set up SSH keys to allow interaction with RPi without password (__otherwise cannot drop remote process!__) ssh-keygen # into standard file if not exists, no passphrase ssh-copy-id -i ~/.ssh/id_rsa HOST # where HOST = RPi host name 1. Set up python environment, e.g. with [miniconda] 2. Install this package 1. Clone this repository 2. Install with ```shell pip install rpi_camera_colony ``` ## Entrypoints & levels ### Easy access to central Conductor ```shell rcc_conductor --help ``` ### Use acquisition directly on RPi ```python from rpi_camera_colony.acquisition.acquisition_control import PiAcquisitionControl ``` or ```shell python rpi_camera_colony/acquisition --help # or python -m rpi_camera_colony.acquisition --help # or rcc_acquisition --help ``` ### One-to-one mapping of local control to remote acquisition ```python from rpi_camera_colony.acquisition.remote_control import RemoteAcquisitionControl ``` or ```bash python rpi_camera_colony/acquisition --help # or python -m rpi_camera_colony.acquisition.remote_control --help ``` ### Read acquisition metadata & check for video files ```python from rpi_camera_colony import read_session_data ``` ### Sandbox Conductor object in separate process (python multiprocessing) See `rpi_camera_colony.control.process_sandbox` for example use of: ```python from rpi_camera_colony.control.process_sandbox import ConductorAsProcess ``` ## Citation > Rollik, Lars B. (2021). RPi Camera Colony: Central control for video acquisition with (many) Raspberry Pi cameras. doi: [10.5281/zenodo.6414747](https://doi.org/10.5281/zenodo.6414747). **BibTeX** ```BibTeX @misc{rollik2021rpi, author = {Lars B. Rollik}, title = {{RPi Camera Colony: Central control for video acquisition with (many) Raspberry Pi cameras}}, year = {2021}, month = jun, publisher = {Zenodo}, url = {https://doi.org/10.5281/zenodo.6414747}, doi = {10.5281/zenodo.6414747}, } ``` ## License This software is released under the **[BSD 3-Clause License](https://github.com/larsrollik/rpi_camera_colony/blob/master/LICENSE)** ## Related projects with similar architectures - [Arne Meyer's RPiCameraPlugin for the OpenEphys GUI][arne-plugin] Specific API for one-to-one control mappings between OpenEphys GUI plugin instances and remote RPi cameras. Inspiration for use of ØMQ communication and camera TTL integration in encoder class. - [Deshmukh lab's PicameraPaper][deshmukh] Video acquisition with multiple RPi synchronised by a central TTL that is recorded with the camera timestamps. - [Vidgear] General package for different types of video acquisition and streaming. ## Configuration file specification **-> Note:** additional picamera attributes can be used, but not all types implemented. Check below. **-> Note:** `acquisition_group` is not specified by default, but if `acquisition_name` contains `__` double underscores, then the `acquisition_group` will get auto-populated from the first segment, when split on the `__`. This is to create an acquisition folder organisation like: `/path_to_data/acquisition_group/acquisition_name/[files]` ```shell [general] acquisition_name = string(default="_test_rcc_name_config") # base name for recording acquisition_time = string(default="dummy_time") acquisition_group = string(default="") remote_data_path = string(default="/home/pi/data/") # where to store all recordings on RPi rpi_username = string(default="pi") remote_python_interpreter = string(default="/home/pi/miniconda3/envs/py36/bin/python") # path to python remote_python_entrypoint = string(default="rpi_camera_colony.acquisition") # path to __main__ entrypoint max_acquisition_time = integer(0, 7200, default=7200) # seconds, shut down acquisition after expiration save_data = boolean(default=True) # if False, then doesn't write files on RPi general_setting_has_priority = boolean(default=True) # If False, does not patch in general settings general_settings_to_patch_into_controller = string_list(default=list("save_data", "acquisition_time", "acquisition_group")) # Add variables here for patching into controllers [log] address = string(max=15, default="192.168.100.10") port = integer(default=55555) level = string(default="DEBUG") log_to_console = boolean(default=True) log_to_file = boolean(default=True) log_file = string(default="/tmp/rpi_camera_colony__logging") [control] address = string(default="192.168.100.10") port = integer(default=54545) [controllers] [[__many__]] description = string(default="") address = string(max=15, default="") save_data = boolean(default=True) ttl_channel_external = integer(default=-1) # metadata info if recording output TTL on specific channel of other acquisition system ttl_in_pin = integer(default=16) ttl_out_pin = integer(default=8) ttl_out_duration = float(default=.001) # See for list of ALL parameters https://picamera.readthedocs.io/en/latest/api_camera.html framerate = integer(min=1, max=90, default=90) resolution = int_list(default=list(640, 480)) vflip = boolean(default=False) hflip = boolean(default=False) brightness = integer(min=0, max=100, default=50) # color_effects: (128, 128) == black and white acquisition. Default is None. color_effects = int_list(default=list(128, 128)) contrast = integer(min=-100, max=100, default=0) image_denoise = boolean(default=True) iso = integer(min=0, max=1600, default=0) led = boolean(default=False) preview_alpha = integer(min=0, max=255, default=255) # DEPRECATED saturation = integer(min=-100, max=100, default=0) sharpness = integer(min=-100, max=100, default=0) still_stats = boolean(default=False) video_denoise = boolean(default=True) video_stabilization = boolean(default=False) # zoom: (x, y, w, h) zoom = float_list(default=list(0.0, 0.0, 1.0, 1.0)) # Other picamera attributes / not implemented / not tested, but might work # awb_gains # awb_mode = option(default="auto") # drc_strength # exposure_compensation # exposure_mode # exposure_speed = 0 # flash_mode = option("off", "auto", "on", "redeye", "fillin", "torch", default="off") # framerate_delta # new in 1.11 # framerate_range # new in 1.13 # image_effect = "none" # image_effect_params # https://picamera.readthedocs.io/en/release-1.13/api_camera.html#picamera.PiCamera.image_effect_params # meter_mode = option("average", "spot", "backlit", "matrix") # rotation = option(0, 90, 180, 270) # sensor_mode = integer(default=0) # shutter_speed # microseconds ``` ## Specific install hints ### HQ camera for RPi cannot acquire at resolutions or framerates outlined in the technical description `sudo rpi-update` fixes this. - Be careful, this updates the RPi firmware and might have unexpected side effects! ### Ip forwarding and routing on central machine ```bash # IP forward sysctl -w net.ipv4.ip_forward=1 # check with cat /proc/sys/net/ipv4/ip_forward # Package routing # - outside interface (dhcp): enp7s0 # - inside interface (static): enp8s0 iptables -A FORWARD -i enp8s0 -o enp7s0 -j ACCEPT iptables -A FORWARD -i enp7s0 -o enp8s0 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -t nat -A POSTROUTING -o enp7s0 -j MASQUERADE ``` ### Update time for ssl certificates ```bash # Check with timedatectl status # force update with NTP sudo service ntp stop sudo ntpd -gq sudo service ntp start # enable permanent updates sudo systemctl restart systemd-timesyncd ``` ### Install miniconda on RPi ```bash # Installing miniconda on RPi wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-armv7l.sh sudo md5sum Miniconda3-latest-Linux-armv7l.sh # (optional) check md5 bash Miniconda3-latest-Linux-armv7l.sh # -> default directory should be: /home/pi/miniconda3 # Add conda to path echo 'export PATH="/home/pi/miniconda3/bin:$PATH"' >> .bashrc source .bashrc # or re-connect # Create conda environment and install basic packages (e.g. dependencies for this package) conda config --add channels rpi conda create -y -n py36 python=3.6 numpy pandas pyzmq echo 'source activate py36' >> .bashrc source .bashrc # or re-connect # Re/install RCC pip uninstall rpi_camera_colony -y pip install --upgrade rpi_camera_colony ``` --- Version: "0.5.0"
/rpi_camera_colony-0.5.0.tar.gz/rpi_camera_colony-0.5.0/README.md
0.618089
0.843251
README.md
pypi
import json import logging from glob import glob from pathlib import Path import pandas as pd import pandas.errors def __read_json(file=None): with open(file, "r") as f: data = f.read() try: return json.loads(data) except json.JSONDecodeError: logging.debug(f"Failed to read JSON file: {file}") return {} def __exclude_files_by_pattern(file_list=None): exclusions_contains = ["DLC_resnet"] for excl in exclusions_contains: file_list = [f for f in file_list if excl not in f] return file_list def __get_file_type_identifier(file=None, namespace_divider=None): """Return identifier part. Move to new namespace (underscores replaced with dots) for dict keys.""" return str(file.split(namespace_divider)[-1].replace("_", ".")) def read_session_data(session_dir=None, namespace_signature=".rcc."): """Read RCC session metadata & video paths (not video data itself). File name pattern: [session_name].camera_51_blue.20210928_100502.rcc.metadata.json [session_name].[camera_id].[dt].rcc.[namespace_id] Expected files per camera in acquisition: - .rcc.metadata.json - .rcc.timestamps_ttl_in.csv : frame timestamps + input timestamps - .rcc.timestamps_ttl_out.csv : frame timestamps == output timestamps - .rcc.video.h264 - .rcc.video.h264.mp4 [only there is run MSW post acquisition tasks] """ session_dir = Path(session_dir) assert session_dir.exists() rcc_files_in_dir = glob(str(session_dir / f"*{namespace_signature}*")) rcc_files_in_dir = __exclude_files_by_pattern(rcc_files_in_dir) session_data = {} for filepath in rcc_files_in_dir: filename = Path(filepath).name cam = filename.split(".")[1] ftype = __get_file_type_identifier( file=filename, namespace_divider=namespace_signature ) if cam not in session_data: logging.debug(f"New camera found in session '{filename}': {cam}") session_data[cam] = {} session_data[cam]["has_h264"] = False session_data[cam]["has_mp4"] = False # Add data if "metadata.json" in ftype: metadata = __read_json(file=filepath) if not metadata: logging.debug("No metadata") return {} session_data[cam][ftype.replace(".json", "")] = metadata elif ftype.endswith(".csv"): # Expect TTL to be empty if not connected. try: csv_data = pd.read_csv(filepath) # Assert that matches TTL-in (shape[1]=1) or TTL-out (shape[1]=2) column layout assert csv_data.shape[1] in (1, 2) # Remove leading hash and whitespace from column names (legacy naming) for c in csv_data.columns: csv_data = csv_data.rename( columns={c: c.strip("#").strip(" ")} ) except (pandas.errors.EmptyDataError, AssertionError): csv_data = pd.DataFrame() session_data[cam][ftype.replace(".csv", "")] = csv_data elif ftype == "video.h264": session_data[cam]["has_h264"] = True session_data[cam]["video_file_h264"] = str( Path(filepath).relative_to(session_dir) ) elif ftype == "video.h264.mp4": session_data[cam]["has_mp4"] = True session_data[cam]["video_file_mp4"] = str( Path(filepath).relative_to(session_dir) ) return session_data
/rpi_camera_colony-0.5.0.tar.gz/rpi_camera_colony-0.5.0/rpi_camera_colony/readers.py
0.484868
0.202976
readers.py
pypi
from __future__ import annotations # PEP 563 import logging import time import threading import signal import asyncio import concurrent.futures import inspect import enum import typing from . import gpio_driver def get_logger(): return logging.getLogger(__name__) class Controller: """Represents the object managing all buttons. It monitors the state of the GPIO and calls event callbacks on buttons when appropriate. """ class Status(enum.Enum): """Defines the various steps in a controller lifecycle.""" READY = "ready" """Controller is waiting for being started, either with :meth:`Controller.run` or :meth:`Controller.start_in_thread`""" RUNNING = "running" """Controller has been started and is monitoring GPIO. This is the active state of the controller, during which button events can be raised.""" STOPPING = "stopping" """Controller is being shut down. No new events will be raised at this point because GPIO is no longer monitored, but ongoing callbacks may still need to finish.""" STOPPED = "stopped" """Controller is at full stop and all event callbacks have returned. Controller cannot be started again.""" def __init__(self, driver: gpio_driver.GpioDriver): """Initializes a new instance of the engine controlling the GPIO. :param driver: object abstracting access to the GPIO. """ self.driver: gpio_driver.GpioDriver = driver self._buttons: typing.List[Button] = [] self.iteration_sleep: float = 1.0 self._status_lock: threading.Lock = threading.Lock() self._status: Controller.Status = Controller.Status.READY # Async loop running event callback. self._event_loop: asyncio.AbstractEventLoop = asyncio.new_event_loop() # Thread used to run control updates that are time bound instead of IO bound # For example, after a button is released, the click event must be # raised once the double click delay is over (because if a double click # occurs, we don't want to raise the click event). self._scheduled_updates_thread: threading.Thread = threading.Thread(target=self._scheduled_updates_thread_main, daemon=True) # Condition that notifies the thread when something is scheduled. self._scheduled_updates_condition: threading.Condition = threading.Condition() self._running_event_handlers: typing.List[concurrent.futures.Future] = [] @property def status(self) -> Controller.Status: """Gets the current status of this controller.""" return self._status @property def buttons(self) -> typing.Iterable[Button]: """Gets the collection of buttons that have been registered using :meth:`make_button`.""" return self._buttons def _scheduled_updates_thread_main(self) -> None: while self._status in (Controller.Status.READY, Controller.Status.RUNNING): # Immediately update buttons that need it. with self._scheduled_updates_condition: current_time = time.time() buttons_to_update = [b for b in self._buttons if b.scheduled_update_time != 0 and b.scheduled_update_time < current_time] for button in buttons_to_update: self._update_button(button) # Sleep until next update or wait for the next scheduled update. remaining_buttons_to_update = [b for b in self._buttons if b.scheduled_update_time != 0] if remaining_buttons_to_update: next_update_time = min((b.scheduled_update_time for b in remaining_buttons_to_update)) sleep_time = next_update_time - time.time() if sleep_time > 0.0: time.sleep(sleep_time) else: with self._scheduled_updates_condition: get_logger().debug("Thread for scheduled updates going to sleep.") self._scheduled_updates_condition.wait() get_logger().debug("Thread for scheduled updates wakes up.") def make_button( self, input_pin_id: int, input: Button.InputType, pull: gpio_driver.PullType, name: typing.Optional[str] = None, bounce_time: int = 0 ) -> Button: """Creates a new button connected to pins of the GPIO. :param input_pin_id: id of the *input* pin the button is connected to. Its meaning depends on the selected GPIO driver. The default driver is :class:`rpicontrols.rpi_gpio_driver.RpiGpioDriver` which uses :data:`RPi.GPIO.BOARD` unless otherwise specified. :param input: value describing the button physical behavior with respect to the electrical wiring. It helps the controller tell when the button is considered pressed or released, depending on the state of the GPIO. :param pull: whether built-in pull-up or pull-down should be used for this button. Those are resistors integrated in the Raspberry Pi's circuits that can be used to make sure GPIO pins are always at a predictable potential. The appropriate value is dependent on how the physical button or switch has been wired to the GPIO. See `Wikipedia <https://en.wikipedia.org/wiki/Pull-up_resistor>`_ for more information. :param name: optional name, used for documentation and logging purposes. If unset, a default unique name will be assigned. :param bounce_time: timespan after a GPIO rising or falling edge during which new edges should be ignored. This is meant to avoid unwanted edge detections due to the transient instability of switches when they change state. The appropriate value depends on the actual physical switch or button in use. """ button = Button(input_pin_id, input, name) self.driver.configure_button(input_pin_id, pull, bounce_time) get_logger().debug(f"New button configured for pin {input_pin_id}") # Do an initial update to initialize the internal state of the button. # No event loop is passed, so that it does not attempt to raise any # event. self._update_button(button, raise_events=False) self._buttons.append(button) return button def delete_button(self, button: Button) -> None: """Removes the button from the controller. The controller will stop monitoring this button events and will not update its status anymore. Call this method to save resources if this button is not useful anymore. It is not required to delete all buttons before deleting this controller. """ if button not in self._buttons: raise ValueError(f"Button {button.name} is not registered in this controller.") self.driver.unconfigure_button(button.pin_id) self._buttons.remove(button) def stop(self, wait: bool = False) -> None: """Stops this controller. Attempting to stop a controller that is already stopped does nothing. Otherwise, calling this method on a controller that is in a status different from :data:`Controller.Status.RUNNING` raises an exception. :param wait: whether to block until the controller has actually stopped. If False, the method returns quicker but there is no guarantee that the controller has actually reached the :data:`Status.STOPPED` status. """ get_logger().info("Stopping controller...") with self._status_lock: # Already stopped? if self.status == Controller.Status.STOPPED: get_logger().info("Controller is already stopped.") return # Otherwise, stopping only makes sense while controller is running. if self.status != Controller.Status.RUNNING: message: str = f"Controller status is {self.status} and cannot be stopped." get_logger().error(message) raise Exception(message) self._status = Controller.Status.STOPPING # Wake up the thread for scheduled updates in case it is waiting # to allow it to stop gracefully. with self._scheduled_updates_condition: self._scheduled_updates_condition.notify() # Wait for event handler that are still running to complete. while [handler_future for handler_future in self._running_event_handlers if not handler_future.done()]: time.sleep(0.01) get_logger().debug("All event handlers are now complete.") # Request the event loop to stop (so will end its thread). # https://stackoverflow.com/a/51647591 self._event_loop.call_soon_threadsafe(self._event_loop.stop) while wait and self._status != Controller.Status.STOPPED: time.sleep(0.01) def _get_button(self, pin_id: int) -> typing.Optional[Button]: buttons = [b for b in self._buttons if b.pin_id == pin_id] if not buttons: return None if len(buttons) > 1: raise Exception(f"Several buttons correspond to pin {pin_id}.") return buttons[0] def _on_gpio_edge(self, pin_id: int, edge: gpio_driver.EdgeType) -> None: get_logger().debug(f"edge start: {edge} on pin {pin_id}") with self._status_lock: # Maybe raise new events. Make sure the controller cannot stop in # the meantime. button: typing.Optional[Button] = self._get_button(pin_id) if not button: get_logger().info(f"Ignoring edge for GPIO pin {pin_id} because no button is registered for it.") return with self._scheduled_updates_condition: self._update_button(button, edge == gpio_driver.EdgeType.RISING) if button.scheduled_update_time != 0: self._scheduled_updates_condition.notify() get_logger().debug(f"edge end: {edge} on pin {pin_id}") def _update_button(self, button: Button, pin_input: typing.Optional[bool] = None, raise_events: bool = True) -> None: if self._status != Controller.Status.RUNNING: return actual_pin_input: bool = pin_input if pin_input is not None else self.driver.input(button.pin_id) event_futures: typing.List[concurrent.futures.Future] = button.update(self._event_loop if raise_events else None, actual_pin_input) self._running_event_handlers += event_futures def start_in_thread(self) -> None: """Runs the engine controlling the GPIO in its own thread.""" thread = threading.Thread(target=self.run) thread.start() while self._status == Controller.Status.READY: time.sleep(0.01) def run(self) -> None: """Runs the engine controlling the GPIO. This method blocks until the controller is stopped. See also :meth:`start_in_thread` for a non-blocking version of this start method. """ get_logger().info("Starting the controller...") with self._status_lock: # Already running or stopping. if self._status != Controller.Status.READY: message: str = f'Controller is currently "{self.status}" and cannot be started.' get_logger().error(message) raise Exception(message) self._scheduled_updates_thread.start() self.driver.set_edge_callback(self._on_gpio_edge) self._status = Controller.Status.RUNNING get_logger().debug("Async event loop for event handlers started.") self._event_loop.run_forever() get_logger().debug("Async event loop for event handlers is now stopped.") with self._status_lock: self._status = Controller.Status.STOPPED get_logger().info("Controller is now stopped") def stop_on_signals(self, signals: typing.Iterable[signal.Signals] = [signal.SIGINT, signal.SIGTERM]): """Registers a handler to stop this controller when specific signals are caught. :param signals: list of signals that should stop this controller. """ for sig in signals: signal.signal(sig, self._signal_handler) def _signal_handler(self, signal, frame) -> None: get_logger().debug(f"Signal caught: {signal} on frame={frame}.") self.stop(wait=False) class Button: """Represents a button connected to the GPIO. This object holds the current state of the button and the event handlers to be called when events are raised. """ class InputType(enum.Enum): """Defines the various physical behaviors of a button with respect to the wiring of its corresponding GPIO pins.""" PRESSED_WHEN_ON = 1 """The button is detected as pressed when its GPIO input pin is on.""" PRESSED_WHEN_OFF = 2 """The button is detected as pressed when its GPIO input pin is off.""" SyncEventHandler = typing.Callable[["Button"], None] """Represents the type for synchronous event handlers.""" AsyncEventHandler = typing.Callable[["Button"], typing.Coroutine[typing.Any, typing.Any, typing.Any]] """Represents the type for asynchronous event handlers.""" EventHandler = typing.Union[SyncEventHandler, AsyncEventHandler] """Represents the type for all kinds of event handlers (synchronous or asynchronous).""" EventHandlerList = typing.List[EventHandler] """Represents the type for lists of event handlers (synchronous or asynchronous).""" def __init__(self, input_pin_id: int, input_type: Button.InputType, name: typing.Optional[str] = None): self._pin_id: int = input_pin_id self._name: str = name or f"button for pin {input_pin_id}" self._input_type: Button.InputType = input_type self._pressed: bool = False self._long_pressed: bool = False self._press_handlers: Button.EventHandlerList = [] self._release_handlers: Button.EventHandlerList = [] self._long_press_handlers: Button.EventHandlerList = [] self._click_handlers: Button.EventHandlerList = [] self._double_click_handlers: Button.EventHandlerList = [] #: Period of time in seconds that defines the double click speed. For a double click to be detected, #: two clicks must occur so that the number of elapsed seconds between the first press and the second release #: is at most equal to this timeout. #: This timeout has an indirect impact on the detection of the click events: since no click event is raised #: when a double click occurs, the controller must wait for this double click timeout to expire once #: a first click has been detected before the actual click event can be raised. self.double_click_timeout: float = 0.5 #: Number of consecutive seconds the button must be pressed for the *long pressed* event #: to be raised. self.long_press_timeout: float = 0.5 # Timestamps of previous presses and releases. self._press_times: typing.List[float] = [] self._release_times: typing.List[float] = [] self.scheduled_update_time: float = 0.0 @property def pin_id(self) -> int: """Id of the input pin the button is connected to. See :meth:`Controller.make_button` for more info on its meaning.""" return self._pin_id @property def name(self) -> str: """Informational name of this button. This name is used mainly for logging purposes.""" return self._name @property def input_type(self) -> InputType: """Returns a value indicating the physical status of the button with respect to GPIO status.""" return self._input_type @property def pressed(self) -> bool: """Returns a value indicating whether the button is currently pressed.""" return self._pressed @property def long_pressed(self) -> bool: """Returns a value indicating whether the button is currently pressed and has been so for least a period of time at least equal to :attr:`long_press_timeout`.""" return self._long_pressed def update(self, event_loop: typing.Optional[asyncio.AbstractEventLoop], pin_input: bool) -> typing.List[concurrent.futures.Future]: was_pressed: bool = self._pressed new_pressed: bool = pin_input if self._input_type == Button.InputType.PRESSED_WHEN_ON else not pin_input self._pressed = new_pressed if not self._pressed: self._long_pressed = False current_time: float = time.time() # Mark button as updated. if self.scheduled_update_time < current_time: self._schedule_update(0.0) def log_state(new_state: str): get_logger().debug(f"Button {self.name} [{self.pin_id}] is {new_state}.") event_futures: typing.List[concurrent.futures.Future] = [] if self._pressed and not was_pressed: # PRESS # Record time of this new press. log_state("pressed") self._press_times.append(current_time) self._raise_event("press", event_loop, self._press_handlers, event_futures) elif not self._pressed and was_pressed: # RELEASE log_state("released") self._release_times.append(current_time) self._raise_event("release", event_loop, self._release_handlers, event_futures) # Maybe raise 'long press' event? if self._pressed: # LONG_PRESS last_press_time: float = self._press_times[-1] if not self._long_pressed: # Raise event only once per press! if current_time - last_press_time > self.long_press_timeout: # Press lasted long enough? log_state("long-pressed") self._long_pressed = True self._raise_event("long press", event_loop, self._long_press_handlers, event_futures) else: # Button needs to reconsider the long pressed event later. self._schedule_update(last_press_time + self.long_press_timeout) # Maybe raise 'double click' event? just_released: bool = not self._pressed and was_pressed clicked_twice: bool = len(self._press_times) >= 2 # Was pressed at least twice recently. # First of two presses was not too long ago? first_press_recent: bool = clicked_twice and current_time - self._press_times[-2] < self.double_click_timeout if just_released and clicked_twice and first_press_recent: # DOUBLE_CLICK log_state("double-clicked") self._raise_event("double click", event_loop, self._double_click_handlers, event_futures) # Consume press times not to reuse them in further events. self._press_times.clear() self._release_times.clear() # Maybe raise 'click' event? if self._press_times and self._release_times: # CLICK last_press: float = self._press_times[-1] last_release: float = self._release_times[-1] if last_release > last_press: # Was pressed then released. May now be pressed again so checking self.pressed is not enough! if current_time - last_press >= self.double_click_timeout: # Last press cannot qualify as a double click anymore. log_state("clicked") self._raise_event("click", event_loop, self._click_handlers, event_futures) # Consume press times not to reuse them in further events. self._press_times.clear() self._release_times.clear() else: # Button needs to consider the click event once # last press cannot participate in a double click anymore. self._schedule_update(last_press + self.double_click_timeout) return event_futures def _schedule_update(self, update_time: float) -> None: if update_time == 0.0 or self.scheduled_update_time == 0.0 or self.scheduled_update_time > update_time: self.scheduled_update_time = update_time def _raise_event( self, event_name: str, event_loop: typing.Optional[asyncio.AbstractEventLoop], handlers: EventHandlerList, event_futures: typing.List[concurrent.futures.Future], ) -> None: if event_loop is None: return event_futures += [asyncio.run_coroutine_threadsafe(self._call_event_handler(event_name, handler), event_loop) for handler in handlers] async def _call_event_handler(self, event_name: str, handler: EventHandler): try: handler_result = handler(self) if inspect.isawaitable(handler_result): get_logger().debug(f'Called event handler asynchronously for "{event_name}" on button {self.name}.') awaitable_result = typing.cast(typing.Awaitable, handler_result) await awaitable_result else: get_logger().debug(f'Called event handler synchronously for "{event_name}" on button {self.name}.') except BaseException as e: get_logger().exception(e) def add_on_press(self, func: EventHandler) -> None: """Adds a handler of the *press* event. This handler will be called whenever the button is pressed.""" self._press_handlers.append(func) def remove_on_press(self, func: EventHandler) -> None: """Removes a handler of the *press* event.""" self._press_handlers.remove(func) def add_on_long_press(self, func: EventHandler) -> None: """Adds a handler of the *long press* event. This handler will be called whenever the button has been kept in its pressed state for a period of time equal to :attr:`long_press_timeout` seconds.""" self._long_press_handlers.append(func) def remove_on_long_press(self, func: EventHandler) -> None: """Removes a handler of the *long press* event.""" self._long_press_handlers.remove(func) def add_on_release(self, func: EventHandler) -> None: """Adds a handler of the *release* event. This handler will be called whenever the button is released after having been pressed.""" self._release_handlers.append(func) def remove_on_release(self, func: EventHandler) -> None: """Removes a handler of the *release* event.""" self._release_handlers.remove(func) def add_on_click(self, func: EventHandler) -> None: """Adds a handler of the *click* event. This handler will be called whenever the button is pressed and released once. If a second click happens before :attr:`double_click_timeout` expires, this event is not raised. The *double click* event is raised instead.""" self._click_handlers.append(func) def remove_on_click(self, func: EventHandler) -> None: """Removes a handler of the *click* event.""" self._click_handlers.remove(func) def add_on_double_click(self, func: EventHandler) -> None: """Adds a handler of the *double click* event. This handler will be called whenever the button is pressed and released twice within a period of time at most equal to :attr:`double_click_timeout`. """ self._double_click_handlers.append(func) def remove_on_double_click(self, func: EventHandler) -> None: """Removes a handler of the *double click* event.""" self._double_click_handlers.remove(func)
/rpi_controls-1.0.3-py3-none-any.whl/rpicontrols/controller.py
0.882301
0.240418
controller.py
pypi
import RPi.GPIO as GPIO from . import gpio_driver import logging from typing import Callable, Optional import time class RpiGpioDriver(gpio_driver.GpioDriver): """Implementation of the GPIO driver interface based on `RPi.GPIO <https://pypi.org/project/RPi.GPIO/>`. This is the default driver for button controllers. """ def __init__(self, mode: int = GPIO.BOARD): gpio_driver.GpioDriver.__init__(self) GPIO.setmode(mode) self._edge_callback: Optional[Callable[[int, gpio_driver.EdgeType], None]] = None self._bounce_times: dict[int, int] = {} # Bounce time in ms indexed by pin id. def input(self, pin_id: int) -> bool: input_value: bool = GPIO.input(pin_id) logging.debug(f"Pin {pin_id} input state is {input_value}.") return input_value def configure_button(self, pin_id: int, pull: gpio_driver.PullType, bounce_time: int) -> None: # Parameters sanitizing. # - pull type: pull_up_down: int = 0 if pull == gpio_driver.PullType.NONE: pull_up_down = GPIO.PUD_OFF elif pull == gpio_driver.PullType.UP: pull_up_down = GPIO.PUD_UP elif pull == gpio_driver.PullType.DOWN: pull_up_down = GPIO.PUD_DOWN else: raise Exception(f"Unsupported pull type {pull}") # - bounce time: if bounce_time < 0: raise ValueError(f"Bounce time {bounce_time} is not supported: must be positive.") # Make sure no button has been configured for this pin before. if pin_id in self._bounce_times: raise Exception(f"A button has already been configured for pin {pin_id}.") GPIO.setup(pin_id, GPIO.IN, pull_up_down=pull_up_down) self._bounce_times[pin_id] = bounce_time GPIO.add_event_detect(pin_id, GPIO.BOTH, callback=self._on_edge) logging.debug(f"Configured pin {pin_id} on GPIO.") def unconfigure_button(self, pin_id: int) -> None: if pin_id not in self._bounce_times: raise Exception(f"No button configured for pin {pin_id}.") del self._bounce_times[pin_id] GPIO.remove_event_detect(pin_id) def _on_edge(self, pin_id: int) -> None: # In case edge is called while button is being unconfigured, abort. bounce_time: Optional[int] = self._bounce_times.get(pin_id, None) if bounce_time is None: return time.sleep(bounce_time / 1000.0) edge: gpio_driver.EdgeType = gpio_driver.EdgeType.RISING if GPIO.input(pin_id) else gpio_driver.EdgeType.FALLING if self._edge_callback: self._edge_callback(pin_id, edge) def set_edge_callback(self, callback: Callable[[int, gpio_driver.EdgeType], None]): self._edge_callback = callback
/rpi_controls-1.0.3-py3-none-any.whl/rpicontrols/rpi_gpio_driver.py
0.853119
0.204183
rpi_gpio_driver.py
pypi
import numpy as np from scipy.special import comb from .util import bdeu def LocalPrior(node, ps, d): '''a function computing local prior: p(Gi) = 1 / comb(d-1, |pa(x_i)|)''' ''' parameters: ----------- node: i, index of Xi ps: indices of parent variable of variable Xi d: dimension, total number of variables p: local graph prior p(Gi) ''' n_ps = len(ps) p = 1 / comb(d-1, n_ps) return p def LocalLikelihood(node, ps, data, node_sizes, alpha): '''a function to compute the local log likelihood: p(D|Gi)''' ''' parameters: ----------- node: i, index of Xi ps: indices of parent variable of variable Xi d: dimension, total number of variables data: input data M sample N variables ndarray of M*N node_sizes: the number of states for each variable alpha: pseudo_count ll: log likelihood l: likelihood ''' if len(ps) == 0: # no parent case count = np.zeros((node_sizes[node],1)) for s in range(node_sizes[node]): count[s,0] = len(np.argwhere(data[:,node] == s).reshape(-1,)) else: n_ps = len(ps) # number of parent variables a = list() # list a to store the possible states for all parent variables for i in range(n_ps): a.append(list(np.arange(node_sizes[ps[i]]))) ps_comb = [list(x) for x in np.array(np.meshgrid(*a)).T.reshape(-1, len(a))] # ps_comb store all the possible parent configurations ps_comb = np.array(ps_comb) n_ps_comb = len(ps_comb) # number of parent configuration count = np.zeros((node_sizes[node], n_ps_comb)) for j in range(n_ps_comb): p_config = ps_comb[j,:] # parent configuration ind = np.argwhere(np.sum(data[:, ps] == p_config,axis = 1) == n_ps).reshape(-1,) data_c = data[ind,:] # find all the samples with the specified parent configuration for s in range(node_sizes[node]): ind_s = np.argwhere(data_c[:, node] == s).reshape(-1,) count[s,j] = len(ind_s) # compute likelihood ll = bdeu(count,alpha) l = np.exp(ll) return l,ll def LocalPosterior(node, ps, data, node_sizes, alpha): ''' a function of computing the posterior probability''' ''' ----------- node: i, index of Xi ps: indices of parent variable of variable Xi data: input data M sample N variables ndarray of M*N d: dimension, total number of variables node_sizes: the number of states for each variable alpha: pseudo_count p: posterior probability p(Gi|D) propto p(D|Gi)p(Gi) ''' d = np.shape(data)[1] prior = LocalPrior(node, ps, d) l,_ = LocalLikelihood(node, ps, data, node_sizes,alpha) p = prior * l return p, l, prior
/rpi_d3m_primitives_part2-0.0.5.tar.gz/rpi_d3m_primitives_part2-0.0.5/rpi_d3m_primitives_part2/Sampling/LocalPosterior.py
0.447219
0.727564
LocalPosterior.py
pypi
from __future__ import division from copy import copy import itertools from rpi_d3m_primitives.pyBN.utils.independence_tests import are_independent, mi_test __author__ = """Nicholas Cullen <ncullen.th@dartmouth.edu>""" def markov_blanket(bn): """ Return the Markov Blanket dictionary from a fully (structurally) instantiated BayesNet object. The markov blanket for a given node is just the node's parents, children, and its children's parents (i.e. spouses) Arguments --------- *bn* : BayesNet object Returns ------- *mb* : a dictionary where each key is a node and the value is a list of the key-node's markov blanket """ mb = dict([(rv,bn.parents(rv)+bn.children(rv)) for rv in bn.nodes()]) for rv in bn.V: for child in bn.children(rv): for c_parent in bn.parents(child): if c_parent != rv: mb[rv].append(c_parent) # add spouse return mb def resolve_markov_blanket(Mb, data,alpha=0.05): """ Resolving the Markov blanket is the process by which a PDAG is constructed from the collection of Markov Blankets for each node. Since an undirected graph is returned, the edges still need to be oriented by calling some version of the "orient_edges" function in "pyBN.structure_learn.orient_edges" module. This algorithm is adapted from Margaritis, but also see [3] for good pseudocode. Arguments --------- *Mb* : a dictionary, where key = rv and value = list of vars in rv's markov blanket *data* : a nested numpy array The dataset used to learn the Mb Returns ------- *edge_dict* : a dictionary, where key = rv and value = list of rv's children Effects ------- None Notes ----- """ n_rv = data.shape[1] edge_dict = dict([(rv,[]) for rv in range(n_rv)]) for X in range(n_rv): for Y in Mb[X]: # X and Y are direct neighbors if X and Y are dependent # given S for all S in T, where T is the smaller of # B(X)-{Y} and B(Y)-{X} if len(Mb[X]) < len(Mb[Y]): T = copy(Mb[X]) # shallow copy is sufficient if Y in T: T.remove(Y) else: T = copy(Mb[Y]) # shallow copy is sufficient if X in T: T.remove(X) # X and Y must be dependent conditioned upon # EVERY POSSIBLE COMBINATION of T direct_neighbors=True for i in range(len(T)): for S in itertools.combinations(T,i): cols = (X,Y) + tuple(S) pval = mi_test(data[:,cols]) if pval > alpha: direct_neighbors=False if direct_neighbors: if Y not in edge_dict[X] and X not in edge_dict[Y]: edge_dict[X].append(Y) if X not in edge_dict[Y]: edge_dict[Y].append(X) return edge_dict def mb_fitness(data, Mb, target=None): """ Evaluate the fitness of a Markov Blanket dictionary learned from a given data set based on the distance metric provided in [1] and [2]. From [2]: A distance measure that indicates the "fitness" of the discovered blanket... to be the average, over all attributes X outside the blanket, of the expected KL-divergence between Pr(T | B(T)) and Pr(T | B(T) u {X}). We can expect this measure to be close to zero when B(T) is an approximate blanket. -- My Note: T is the target variable, and if the KL-divergence between the two distributions above is zero, then it means that {X} provides no new information about T and can thus be excluded from Mb(T) -- this is the exact definition of conditional independence. Notes ----- - Find Pr(T|B(T)) .. - For each variable X outside of the B(T), calculate D( Pr(T|B(T)), Pr(T|B(T)u{X}) ) - Take the average (closer to Zero is better) ^^^ This is basically calculating where T is independent of X given B(T).. i.e. Sum over all X not in B(T) of mi_test(data[:,(T,X,B(T))]) / |X| """ if target is None: nodes = set(Mb.keys()) else: try: nodes = set(target) except TypeError: nodes = {target} fitness_dict = dict([(rv, 0) for rv in nodes]) for T in nodes: non_blanket = nodes - set(Mb[T]) - {T} for X in non_blanket: pval = mi_test(data[:,(T,X)+tuple(Mb[T])]) fitness_dict[T] += 1/pval return fitness_dict
/rpi_d3m_primitives_part2-0.0.5.tar.gz/rpi_d3m_primitives_part2-0.0.5/rpi_d3m_primitives_part2/pyBN/utils/markov_blanket.py
0.705988
0.575409
markov_blanket.py
pypi
from __future__ import division import numpy as np from scipy.special import gamma, gammaln from rpi_d3m_primitives.pyBN.learning.parameter.mle import mle_estimator, mle_fast from rpi_d3m_primitives.pyBN.classes.empiricaldistribution import EmpiricalDistribution def BDe(bn, data, ess=1, ed=None): """ Unique Bayesian score with the property that I-equivalent networks have the same score. As Data Rows -> infinity, BDe score converges to the BIC score. Arguments --------- *bn* : a BayesNet object Needed to get the parent relationships, etc. *data* : a numpy ndarray Needed to learn the empirical distribuion *ess* : an integer Equivalent sample size *ed* : an EmpiricalDistribution object Used to cache multiple lookups in structure learning. Notes ----- *a_ijk* : a vector The number of times where x_i=k | parents(x_i)=j -> i.e. the mle counts *a_ij* : a vector summed over k's in a_ijk *n_ijk* : a vector prior (sample size or calculation) "ess" for BDe metric *n_ij* : a vector prior summed over k's in n_ijk """ counts_dict = mle_fast(bn, data, counts=True, np=True) a_ijk = [] bdeu = 1 for rv, value in counts_dict.items(): nijk = value['cpt'] nijk_prime = ess k2 *= gamma(nijk+nijk_prime)/gamma(nijk_prime) nij_prime = nijk_prime*(len(cpt)/bn.card(rv)) nij = np.mean(nijk.reshape(-1, bn.card(rv)), axis=1) # sum along parents k2 *= gamma(nij_prime) / gamma(nij+nij_prime) return bdeu def BDeu(bn, data, ess=1, ed=None): """ Unique Bayesian score with the property that I-equivalent networks have the same score. As Data Rows -> infinity, BDe score converges to the BIC score. Nijk_prime = ess/len(bn.cpt(rv)) Arguments --------- *bn* : a BayesNet object Needed to get the parent relationships, etc. *data* : a numpy ndarray Needed to learn the empirical distribuion *ess* : an integer Equivalent sample size *ed* : an EmpiricalDistribution object Used to cache multiple lookups in structure learning. Notes ----- *a_ijk* : a vector The number of times where x_i=k | parents(x_i)=j -> i.e. the mle counts *a_ij* : a vector summed over k's in a_ijk *n_ijk* : a vector prior (sample size or calculation) ess/(card(x_i)*len(cpt(x_i)/card(x_i))) for x_i for BDe metric *n_ij* : a vector prior summed over k's in n_ijk """ counts_dict = mle_fast(bn, data, counts=True, np=True) a_ijk = [] bdeu = 1 for rv, value in counts_dict.items(): nijk = value['cpt'] nijk_prime = ess / len(nijk) k2 *= gamma(nijk+nijk_prime)/gamma(nijk_prime) nij_prime = nijk_prime*(len(cpt)/bn.card(rv)) nij = np.mean(nijk.reshape(-1, bn.card(rv)), axis=1) # sum along parents k2 *= gamma(nij_prime) / gamma(nij+nij_prime) return bdeu def K2(bn, data, ed=None): """ K2 is bayesian posterior probability of structure given the data, where N'ijk = 1. """ counts_dict = mle_fast(bn, data, counts=True, np=True) a_ijk = [] k2 = 1 for rv, value in counts_dict.items(): nijk = value['cpt'] nijk_prime = 1 k2 *= gamma(nijk+nijk_prime)/gamma(nijk_prime) nij_prime = nijk_prime*(len(cpt)/bn.card(rv)) nij = np.mean(nijk.reshape(-1, bn.card(rv)), axis=1) # sum along parents k2 *= gamma(nij_prime) / gamma(nij+nij_prime) return k2
/rpi_d3m_primitives_part2-0.0.5.tar.gz/rpi_d3m_primitives_part2-0.0.5/rpi_d3m_primitives_part2/pyBN/learning/structure/score/bayes_scores.py
0.883123
0.626153
bayes_scores.py
pypi
from __future__ import division __author__ = """Nicholas Cullen <ncullen.th@dartmouth.edu>""" import numpy as np def bayes_estimator(bn, data, equiv_sample=None, prior_dict=None, nodes=None): """ Bayesian Estimation method of parameter learning. This method proceeds by either 1) assuming a uniform prior over the parameters based on the Dirichlet distribution with an equivalent sample size = *sample_size*, or 2) assuming a prior as specified by the user with the *prior_dict* argument. The prior distribution is then updated from observations in the data based on the Multinomial distribution - for which the Dirichlet is a "conjugate prior." Note that the Bayesian and MLE estimators essentially converge to the same set of values as the size of the dataset increases. Also note that, unlike the structure learning algorithms, the parameter learning functions REQUIRE a passed-in BayesNet object because there MUST be some pre-determined structure for which we can actually learn the parameters. You can't learn parameters without structure - so structure must always be there first! Finally, note that this function can be used to calculate only ONE conditional probability table in a BayesNet object by passing in a subset of random variables with the "nodes" argument - this is mostly used for score-based structure learning, where a single cpt needs to be quickly recalculate after the addition/deletion/reversal of an arc. Arguments --------- *bn* : a BayesNet object *data* : a nested numpy array Data from which to learn parameters *equiv_sample* : an integer The "equivalent sample size" (see function summary) *prior_dict* : a dictionary, where key = random variable and for each key the value is another dictionary where key = an instantiation for the random variable and the value is its FREQUENCY (an integer value, NOT its relative proportion/probability). *nodes* : a list of strings Which nodes to learn the parameters for - if None, all nodes will be used as expected. Returns ------- None Effects ------- - modifies/sets bn.data to the learned parameters Notes ----- """ if equiv_sample is None: equiv_sample = len(data) if nodes is None: nodes = list(bn.nodes()) for i, n in enumerate(nodes): bn.F[n]['values'] = list(np.unique(data[:,i])) obs_dict = dict([(rv,[]) for rv in nodes]) # set empty conditional probability table for each RV for rv in nodes: # get number of values in the CPT = product of scope vars' cardinalities p_idx = int(np.prod([bn.card(p) for p in bn.parents(rv)])*bn.card(rv)) bn.F[rv]['cpt'] = [equiv_sample/p_idx]*p_idx # loop through each row of data for row in data: # store the observation of each variable in the row obs_dict = dict([(rv,row[rv]) for rv in nodes]) # loop through each RV and increment its observed parent-self value for rv in nodes: rv_dict= { n: obs_dict[n] for n in obs_dict if n in bn.scope(rv) } offset = bn.cpt_indices(target=rv,val_dict=rv_dict)[0] bn.F[rv]['cpt'][offset]+=1 for rv in nodes: cpt = bn.cpt(rv) for i in range(0,len(bn.cpt(rv)),bn.card(rv)): temp_sum = float(np.sum(cpt[i:(i+bn.card(rv))])) for j in range(bn.card(rv)): cpt[i+j] /= (temp_sum) cpt[i+j] = round(cpt[i+j],5)
/rpi_d3m_primitives_part2-0.0.5.tar.gz/rpi_d3m_primitives_part2-0.0.5/rpi_d3m_primitives_part2/pyBN/learning/parameter/bayes.py
0.802942
0.641914
bayes.py
pypi
from __future__ import division __author__ = """Nicholas Cullen <ncullen.th@dartmouth.edu>""" import numpy as np def mle_fast(bn, data, nodes=None, counts=False, np=False): """ Maximum Likelihood estimation that is about 100 times as fast as the original mle_estimator function - but returns the same result """ def merge_cols(data, cols): if len(cols) == 1: return data[cols[0]] elif len(cols) > 1: data = data[cols].astype('str') ncols = len(cols) for i in range(len(data)): data.ix[i,0] = ''.join(data.ix[i,0:ncols]) data = data.astype('int') return data.ix[:,0] if nodes is None: nodes = list(bn.nodes()) else: if not isinstance(nodes, list): nodes = list(nodes) F = dict([(rv, {}) for rv in nodes]) for i, n in enumerate(nodes): F[n]['values'] = list(np.unique(data.ix[:,i])) bn.F[n]['values'] = list(np.unique(data.ix[:,i])) for rv in nodes: parents = bn.parents(rv) if len(parents)==0: if np: F[rv]['cpt'] = np.histogram(data.ix[:,rv], bins=bn.card(rv))[0] else: F[rv]['cpt'] = list(np.histogram(data.ix[:,rv], bins=bn.card(rv))[0]) else: if np: F[rv]['cpt'] = np.histogram2d(merge_cols(data,parents),data.ix[:,rv], bins=[np.prod([bn.card(p) for p in parents]),bn.card(rv)])[0].flatten() else: F[rv]['cpt'] = list(np.histogram2d(merge_cols(data,parents),data.ix[:,rv], bins=[np.prod([bn.card(p) for p in parents]),bn.card(rv)])[0].flatten()) if counts: return F else: for rv in nodes: F[rv]['parents'] = [var for var in nodes if rv in bn.E[var]] for i in range(0,len(F[rv]['cpt']),bn.card(rv)): temp_sum = float(np.sum(F[rv]['cpt'][i:(i+bn.card(rv))])) for j in range(bn.card(rv)): F[rv]['cpt'][i+j] /= (temp_sum+1e-7) F[rv]['cpt'][i+j] = round(F[rv]['cpt'][i+j],5) bn.F = F def mle_estimator(bn, data, nodes=None, counts=False): """ Maximum Likelihood Estimation is a frequentist method for parameter learning, where there is NO prior distribution. Instead, the frequencies/counts for each parameter start at 0 and are simply incremented as the relevant parent-child values are observed in the data. This can be a risky method for small datasets, because if a certain parent-child instantiation is never observed in the data, then its probability parameter will be ZERO (even if you know it should at least have a very small probability). Note that the Bayesian and MLE estimators essentially converge to the same set of values as the size of the dataset increases. Also note that, unlike the structure learning algorithms, the parameter learning functions REQUIRE a passed-in BayesNet object because there MUST be some pre-determined structure for which we can actually learn the parameters. You can't learn parameters without structure - so structure must always be there first! Finally, note that this function can be used to calculate only ONE conditional probability table in a BayesNet object by passing in a subset of random variables with the "nodes" argument - this is mostly used for score-based structure learning, where a single cpt needs to be quickly recalculate after the addition/deletion/reversal of an arc. Arguments --------- *bn* : a BayesNet object The associated network structure for which the parameters will be learned *data* : a nested numpy array *nodes* : a list of strings Which nodes to learn the parameters for - if None, all nodes will be used as expected. Returns ------- None Effects ------- - modifies/sets bn.data to the learned parameters Notes ----- - Currently doesn't return correct solution data attributes: "numoutcomes" : an integer "vals" : a list "parents" : a list or None "children": a list or None "cprob" : a nested python list - Do not want to alter bn.data directly! """ if nodes is None: nodes = list(bn.nodes()) else: if not isinstance(nodes, list): nodes = list(nodes) F = dict([(rv, {}) for rv in nodes]) for i, n in enumerate(nodes): F[n]['values'] = list(np.unique(data[:,i])) bn.F[n]['values'] = list(np.unique(data[:,i])) obs_dict = dict([(rv,[]) for rv in nodes]) # set empty conditional probability table for each RV for rv in nodes: # get number of values in the CPT = product of scope vars' cardinalities p_idx = int(np.prod([bn.card(p) for p in bn.parents(rv)])*bn.card(rv)) F[rv]['cpt'] = [0]*p_idx bn.F[rv]['cpt'] = [0]*p_idx # loop through each row of data for row in data: # store the observation of each variable in the row for rv in nodes: obs_dict[rv] = row[rv] #obs_dict = dict([(rv,row[rv]) for rv in nodes]) # loop through each RV and increment its observed parent-self value for rv in nodes: rv_dict= { n: obs_dict[n] for n in obs_dict if n in bn.scope(rv) } offset = bn.cpt_indices(target=rv,val_dict=rv_dict)[0] F[rv]['cpt'][offset]+=1 if counts: return F else: for rv in nodes: F[rv]['parents'] = [var for var in nodes if rv in bn.E[var]] for i in range(0,len(F[rv]['cpt']),bn.card(rv)): temp_sum = float(np.sum(F[rv]['cpt'][i:(i+bn.card(rv))])) for j in range(bn.card(rv)): F[rv]['cpt'][i+j] /= (temp_sum+1e-7) F[rv]['cpt'][i+j] = round(F[rv]['cpt'][i+j],5) bn.F = F
/rpi_d3m_primitives_part2-0.0.5.tar.gz/rpi_d3m_primitives_part2-0.0.5/rpi_d3m_primitives_part2/pyBN/learning/parameter/mle.py
0.437343
0.473109
mle.py
pypi
import numpy as np from rpi_d3m_primitives_part2.featSelect.helperFunctions import find_probs from scipy.stats import entropy from sklearn import preprocessing def findOptimalSplitPoint(min_val, max_val, ori_feat, label, incre_rate = 0.1): hm_bins = round(1/incre_rate) splits = np.linspace(min_val, max_val, hm_bins+1) hm_class = len(np.unique(label)) if hm_class <= 1: min_entropy = 0 optimal_split = max_val return optimal_split, min_entropy #edges = np.histogram(label, hm_class-1)[1] #label = np.digitize(label, edges) hm_sample = len(label) entropies = np.zeros(hm_bins,) for i in range(1, hm_bins+1): split = splits[i] left_indices = np.argwhere(ori_feat<split).flatten() left_labels = label[left_indices].flatten() left_probabilities = find_probs(left_labels) left_entropy = entropy(left_probabilities,base=2) right_indices = np.argwhere(ori_feat>=split).flatten() right_labels = label[right_indices].flatten() right_probabilities = find_probs(right_labels) right_entropy = entropy(right_probabilities,base=2) entropies[i-1] = left_labels.size/hm_sample * left_entropy + right_labels.size / hm_sample * right_entropy min_entropy = np.amin(entropies) idx = np.where(entropies == min_entropy) if len(idx[0]) != 1: idx = idx[0][0] else: idx = idx[0] #only return idx for the first minimul element optimal_split = splits[idx+1] return optimal_split, min_entropy def HillClimbing_entropy_discretization(feature, label, num_bins, relative_entropy_reduce_rate = 0.01): feature = feature.astype(np.float32) hm_class = np.unique(label).shape[0] min_val = np.min(feature) max_val = np.max(feature) min_label_val = int(np.min(label)) curr_entropy = 0 pre_entropy = 0 "Calculate entropy of original distribution" probabilities = find_probs(label) pre_entropy = entropy(probabilities,base=2) if len(np.unique(feature)) > 15: init_splitset = np.linspace(min_val, max_val, num_bins+1) stop_flag = 0 while stop_flag == 0: minentropy_inbin = np.zeros(num_bins,) #dim = (10,) curr_entropy = 0 for s in range(1, num_bins): sub_min = init_splitset[s-1] sub_max = init_splitset[s+1] index = np.argwhere((sub_min<=feature) & (feature<sub_max)).flatten() if (len(index) != 0): index = np.array(index) feat = feature[index] lab = label[index] else: feat = [] lab = [] init_splitset[s], minentropy_inbin[s-1] = findOptimalSplitPoint(sub_min, sub_max, feat, lab, 0.1) count_inbin = np.histogram(feature, init_splitset)[0] bins = np.digitize(feature, init_splitset) num_data = np.zeros(hm_class,) for n in range(num_bins): en = 0 left_limit = init_splitset[n] right_limit = init_splitset[n+1] index = np.argwhere((left_limit<=feature) & (feature<right_limit)).flatten() select_labels = label[index].flatten() probabilities = find_probs(select_labels) ent = entropy(probabilities,base=2) curr_entropy = curr_entropy + ent*count_inbin[n]/feature.shape[0] if curr_entropy < 0.0000001: stop_flag = 1 continue relative_reduction = (pre_entropy - curr_entropy) /pre_entropy if relative_reduction < relative_entropy_reduce_rate: stop_flag = 1 pre_entropy = curr_entropy discretized_feature = bins else: hm_unique_state = len(np.unique(feature)) init_splitset = hm_unique_state if hm_unique_state != 1: edges = np.histogram(feature, hm_unique_state-1)[1] discretized_feature = np.digitize(feature, edges) else: discretized_feature = feature curr_entropy = pre_entropy optimal_split_pointset = init_splitset final_entropy = curr_entropy return discretized_feature, optimal_split_pointset, final_entropy def HC_discretization(trainD, trainL, hm_bins): samples,hm_features = trainD.shape #check the labels #hm_unique_class = len(np.unique(trainL)) hm_unique_class = np.ceil(np.max(trainL)) - np.floor(np.min(trainL)) + 1 edges = np.histogram(trainL, int(hm_unique_class) - 1)[1] #disc_trainL = np.digitize(trainL, edges)[:,0] #dim = (samples,) disc_trainL = np.digitize(trainL, edges) disc_trainL = np.reshape(disc_trainL, (samples,1)) #Discretize the features disc_trainD = np.zeros([samples, hm_features]) optimal_split = [] for i in range(hm_features): feature = trainD[:,i] #dim = (samples,) disc_feat,split,_ = HillClimbing_entropy_discretization(feature, disc_trainL, hm_bins, 0.01) optimal_split.append(split) le = preprocessing.LabelEncoder() le.fit(disc_feat) disc_trainD[:,i] = le.transform(disc_feat)+1 #dim = (samples,) #disc_trainD[:,i] = np.reshape(disc_feat, [samples,1]) return disc_trainD, disc_trainL,optimal_split
/rpi_d3m_primitives_part2-0.0.5.tar.gz/rpi_d3m_primitives_part2-0.0.5/rpi_d3m_primitives_part2/featSelect/discretization.py
0.595728
0.482124
discretization.py
pypi
from __future__ import division from copy import copy import itertools from rpi_d3m_primitives.pyBN.utils.independence_tests import are_independent, mi_test __author__ = """Nicholas Cullen <ncullen.th@dartmouth.edu>""" def markov_blanket(bn): """ Return the Markov Blanket dictionary from a fully (structurally) instantiated BayesNet object. The markov blanket for a given node is just the node's parents, children, and its children's parents (i.e. spouses) Arguments --------- *bn* : BayesNet object Returns ------- *mb* : a dictionary where each key is a node and the value is a list of the key-node's markov blanket """ mb = dict([(rv,bn.parents(rv)+bn.children(rv)) for rv in bn.nodes()]) for rv in bn.V: for child in bn.children(rv): for c_parent in bn.parents(child): if c_parent != rv: mb[rv].append(c_parent) # add spouse return mb def resolve_markov_blanket(Mb, data,alpha=0.05): """ Resolving the Markov blanket is the process by which a PDAG is constructed from the collection of Markov Blankets for each node. Since an undirected graph is returned, the edges still need to be oriented by calling some version of the "orient_edges" function in "pyBN.structure_learn.orient_edges" module. This algorithm is adapted from Margaritis, but also see [3] for good pseudocode. Arguments --------- *Mb* : a dictionary, where key = rv and value = list of vars in rv's markov blanket *data* : a nested numpy array The dataset used to learn the Mb Returns ------- *edge_dict* : a dictionary, where key = rv and value = list of rv's children Effects ------- None Notes ----- """ n_rv = data.shape[1] edge_dict = dict([(rv,[]) for rv in range(n_rv)]) for X in range(n_rv): for Y in Mb[X]: # X and Y are direct neighbors if X and Y are dependent # given S for all S in T, where T is the smaller of # B(X)-{Y} and B(Y)-{X} if len(Mb[X]) < len(Mb[Y]): T = copy(Mb[X]) # shallow copy is sufficient if Y in T: T.remove(Y) else: T = copy(Mb[Y]) # shallow copy is sufficient if X in T: T.remove(X) # X and Y must be dependent conditioned upon # EVERY POSSIBLE COMBINATION of T direct_neighbors=True for i in range(len(T)): for S in itertools.combinations(T,i): cols = (X,Y) + tuple(S) pval = mi_test(data[:,cols]) if pval > alpha: direct_neighbors=False if direct_neighbors: if Y not in edge_dict[X] and X not in edge_dict[Y]: edge_dict[X].append(Y) if X not in edge_dict[Y]: edge_dict[Y].append(X) return edge_dict def mb_fitness(data, Mb, target=None): """ Evaluate the fitness of a Markov Blanket dictionary learned from a given data set based on the distance metric provided in [1] and [2]. From [2]: A distance measure that indicates the "fitness" of the discovered blanket... to be the average, over all attributes X outside the blanket, of the expected KL-divergence between Pr(T | B(T)) and Pr(T | B(T) u {X}). We can expect this measure to be close to zero when B(T) is an approximate blanket. -- My Note: T is the target variable, and if the KL-divergence between the two distributions above is zero, then it means that {X} provides no new information about T and can thus be excluded from Mb(T) -- this is the exact definition of conditional independence. Notes ----- - Find Pr(T|B(T)) .. - For each variable X outside of the B(T), calculate D( Pr(T|B(T)), Pr(T|B(T)u{X}) ) - Take the average (closer to Zero is better) ^^^ This is basically calculating where T is independent of X given B(T).. i.e. Sum over all X not in B(T) of mi_test(data[:,(T,X,B(T))]) / |X| """ if target is None: nodes = set(Mb.keys()) else: try: nodes = set(target) except TypeError: nodes = {target} fitness_dict = dict([(rv, 0) for rv in nodes]) for T in nodes: non_blanket = nodes - set(Mb[T]) - {T} for X in non_blanket: pval = mi_test(data[:,(T,X)+tuple(Mb[T])]) fitness_dict[T] += 1/pval return fitness_dict
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/pyBN/utils/markov_blanket.py
0.705988
0.575409
markov_blanket.py
pypi
from __future__ import division import numpy as np from scipy.special import gamma, gammaln from rpi_d3m_primitives.pyBN.learning.parameter.mle import mle_estimator, mle_fast from rpi_d3m_primitives.pyBN.classes.empiricaldistribution import EmpiricalDistribution def BDe(bn, data, ess=1, ed=None): """ Unique Bayesian score with the property that I-equivalent networks have the same score. As Data Rows -> infinity, BDe score converges to the BIC score. Arguments --------- *bn* : a BayesNet object Needed to get the parent relationships, etc. *data* : a numpy ndarray Needed to learn the empirical distribuion *ess* : an integer Equivalent sample size *ed* : an EmpiricalDistribution object Used to cache multiple lookups in structure learning. Notes ----- *a_ijk* : a vector The number of times where x_i=k | parents(x_i)=j -> i.e. the mle counts *a_ij* : a vector summed over k's in a_ijk *n_ijk* : a vector prior (sample size or calculation) "ess" for BDe metric *n_ij* : a vector prior summed over k's in n_ijk """ counts_dict = mle_fast(bn, data, counts=True, np=True) a_ijk = [] bdeu = 1 for rv, value in counts_dict.items(): nijk = value['cpt'] nijk_prime = ess k2 *= gamma(nijk+nijk_prime)/gamma(nijk_prime) nij_prime = nijk_prime*(len(cpt)/bn.card(rv)) nij = np.mean(nijk.reshape(-1, bn.card(rv)), axis=1) # sum along parents k2 *= gamma(nij_prime) / gamma(nij+nij_prime) return bdeu def BDeu(bn, data, ess=1, ed=None): """ Unique Bayesian score with the property that I-equivalent networks have the same score. As Data Rows -> infinity, BDe score converges to the BIC score. Nijk_prime = ess/len(bn.cpt(rv)) Arguments --------- *bn* : a BayesNet object Needed to get the parent relationships, etc. *data* : a numpy ndarray Needed to learn the empirical distribuion *ess* : an integer Equivalent sample size *ed* : an EmpiricalDistribution object Used to cache multiple lookups in structure learning. Notes ----- *a_ijk* : a vector The number of times where x_i=k | parents(x_i)=j -> i.e. the mle counts *a_ij* : a vector summed over k's in a_ijk *n_ijk* : a vector prior (sample size or calculation) ess/(card(x_i)*len(cpt(x_i)/card(x_i))) for x_i for BDe metric *n_ij* : a vector prior summed over k's in n_ijk """ counts_dict = mle_fast(bn, data, counts=True, np=True) a_ijk = [] bdeu = 1 for rv, value in counts_dict.items(): nijk = value['cpt'] nijk_prime = ess / len(nijk) k2 *= gamma(nijk+nijk_prime)/gamma(nijk_prime) nij_prime = nijk_prime*(len(cpt)/bn.card(rv)) nij = np.mean(nijk.reshape(-1, bn.card(rv)), axis=1) # sum along parents k2 *= gamma(nij_prime) / gamma(nij+nij_prime) return bdeu def K2(bn, data, ed=None): """ K2 is bayesian posterior probability of structure given the data, where N'ijk = 1. """ counts_dict = mle_fast(bn, data, counts=True, np=True) a_ijk = [] k2 = 1 for rv, value in counts_dict.items(): nijk = value['cpt'] nijk_prime = 1 k2 *= gamma(nijk+nijk_prime)/gamma(nijk_prime) nij_prime = nijk_prime*(len(cpt)/bn.card(rv)) nij = np.mean(nijk.reshape(-1, bn.card(rv)), axis=1) # sum along parents k2 *= gamma(nij_prime) / gamma(nij+nij_prime) return k2
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/pyBN/learning/structure/score/bayes_scores.py
0.883123
0.626153
bayes_scores.py
pypi
from __future__ import division __author__ = """Nicholas Cullen <ncullen.th@dartmouth.edu>""" import numpy as np def bayes_estimator(bn, data, equiv_sample=None, prior_dict=None, nodes=None): """ Bayesian Estimation method of parameter learning. This method proceeds by either 1) assuming a uniform prior over the parameters based on the Dirichlet distribution with an equivalent sample size = *sample_size*, or 2) assuming a prior as specified by the user with the *prior_dict* argument. The prior distribution is then updated from observations in the data based on the Multinomial distribution - for which the Dirichlet is a "conjugate prior." Note that the Bayesian and MLE estimators essentially converge to the same set of values as the size of the dataset increases. Also note that, unlike the structure learning algorithms, the parameter learning functions REQUIRE a passed-in BayesNet object because there MUST be some pre-determined structure for which we can actually learn the parameters. You can't learn parameters without structure - so structure must always be there first! Finally, note that this function can be used to calculate only ONE conditional probability table in a BayesNet object by passing in a subset of random variables with the "nodes" argument - this is mostly used for score-based structure learning, where a single cpt needs to be quickly recalculate after the addition/deletion/reversal of an arc. Arguments --------- *bn* : a BayesNet object *data* : a nested numpy array Data from which to learn parameters *equiv_sample* : an integer The "equivalent sample size" (see function summary) *prior_dict* : a dictionary, where key = random variable and for each key the value is another dictionary where key = an instantiation for the random variable and the value is its FREQUENCY (an integer value, NOT its relative proportion/probability). *nodes* : a list of strings Which nodes to learn the parameters for - if None, all nodes will be used as expected. Returns ------- None Effects ------- - modifies/sets bn.data to the learned parameters Notes ----- """ if equiv_sample is None: equiv_sample = len(data) if nodes is None: nodes = list(bn.nodes()) for i, n in enumerate(nodes): bn.F[n]['values'] = list(np.unique(data[:,i])) obs_dict = dict([(rv,[]) for rv in nodes]) # set empty conditional probability table for each RV for rv in nodes: # get number of values in the CPT = product of scope vars' cardinalities p_idx = int(np.prod([bn.card(p) for p in bn.parents(rv)])*bn.card(rv)) bn.F[rv]['cpt'] = [equiv_sample/p_idx]*p_idx # loop through each row of data for row in data: # store the observation of each variable in the row obs_dict = dict([(rv,row[rv]) for rv in nodes]) # loop through each RV and increment its observed parent-self value for rv in nodes: rv_dict= { n: obs_dict[n] for n in obs_dict if n in bn.scope(rv) } offset = bn.cpt_indices(target=rv,val_dict=rv_dict)[0] bn.F[rv]['cpt'][offset]+=1 for rv in nodes: cpt = bn.cpt(rv) for i in range(0,len(bn.cpt(rv)),bn.card(rv)): temp_sum = float(np.sum(cpt[i:(i+bn.card(rv))])) for j in range(bn.card(rv)): cpt[i+j] /= (temp_sum) cpt[i+j] = round(cpt[i+j],5)
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/pyBN/learning/parameter/bayes.py
0.802942
0.641914
bayes.py
pypi
from __future__ import division __author__ = """Nicholas Cullen <ncullen.th@dartmouth.edu>""" import numpy as np def mle_fast(bn, data, nodes=None, counts=False, np=False): """ Maximum Likelihood estimation that is about 100 times as fast as the original mle_estimator function - but returns the same result """ def merge_cols(data, cols): if len(cols) == 1: return data[cols[0]] elif len(cols) > 1: data = data[cols].astype('str') ncols = len(cols) for i in range(len(data)): data.ix[i,0] = ''.join(data.ix[i,0:ncols]) data = data.astype('int') return data.ix[:,0] if nodes is None: nodes = list(bn.nodes()) else: if not isinstance(nodes, list): nodes = list(nodes) F = dict([(rv, {}) for rv in nodes]) for i, n in enumerate(nodes): F[n]['values'] = list(np.unique(data.ix[:,i])) bn.F[n]['values'] = list(np.unique(data.ix[:,i])) for rv in nodes: parents = bn.parents(rv) if len(parents)==0: if np: F[rv]['cpt'] = np.histogram(data.ix[:,rv], bins=bn.card(rv))[0] else: F[rv]['cpt'] = list(np.histogram(data.ix[:,rv], bins=bn.card(rv))[0]) else: if np: F[rv]['cpt'] = np.histogram2d(merge_cols(data,parents),data.ix[:,rv], bins=[np.prod([bn.card(p) for p in parents]),bn.card(rv)])[0].flatten() else: F[rv]['cpt'] = list(np.histogram2d(merge_cols(data,parents),data.ix[:,rv], bins=[np.prod([bn.card(p) for p in parents]),bn.card(rv)])[0].flatten()) if counts: return F else: for rv in nodes: F[rv]['parents'] = [var for var in nodes if rv in bn.E[var]] for i in range(0,len(F[rv]['cpt']),bn.card(rv)): temp_sum = float(np.sum(F[rv]['cpt'][i:(i+bn.card(rv))])) for j in range(bn.card(rv)): F[rv]['cpt'][i+j] /= (temp_sum+1e-7) F[rv]['cpt'][i+j] = round(F[rv]['cpt'][i+j],5) bn.F = F def mle_estimator(bn, data, nodes=None, counts=False): """ Maximum Likelihood Estimation is a frequentist method for parameter learning, where there is NO prior distribution. Instead, the frequencies/counts for each parameter start at 0 and are simply incremented as the relevant parent-child values are observed in the data. This can be a risky method for small datasets, because if a certain parent-child instantiation is never observed in the data, then its probability parameter will be ZERO (even if you know it should at least have a very small probability). Note that the Bayesian and MLE estimators essentially converge to the same set of values as the size of the dataset increases. Also note that, unlike the structure learning algorithms, the parameter learning functions REQUIRE a passed-in BayesNet object because there MUST be some pre-determined structure for which we can actually learn the parameters. You can't learn parameters without structure - so structure must always be there first! Finally, note that this function can be used to calculate only ONE conditional probability table in a BayesNet object by passing in a subset of random variables with the "nodes" argument - this is mostly used for score-based structure learning, where a single cpt needs to be quickly recalculate after the addition/deletion/reversal of an arc. Arguments --------- *bn* : a BayesNet object The associated network structure for which the parameters will be learned *data* : a nested numpy array *nodes* : a list of strings Which nodes to learn the parameters for - if None, all nodes will be used as expected. Returns ------- None Effects ------- - modifies/sets bn.data to the learned parameters Notes ----- - Currently doesn't return correct solution data attributes: "numoutcomes" : an integer "vals" : a list "parents" : a list or None "children": a list or None "cprob" : a nested python list - Do not want to alter bn.data directly! """ if nodes is None: nodes = list(bn.nodes()) else: if not isinstance(nodes, list): nodes = list(nodes) F = dict([(rv, {}) for rv in nodes]) for i, n in enumerate(nodes): F[n]['values'] = list(np.unique(data[:,i])) bn.F[n]['values'] = list(np.unique(data[:,i])) obs_dict = dict([(rv,[]) for rv in nodes]) # set empty conditional probability table for each RV for rv in nodes: # get number of values in the CPT = product of scope vars' cardinalities p_idx = int(np.prod([bn.card(p) for p in bn.parents(rv)])*bn.card(rv)) F[rv]['cpt'] = [0]*p_idx bn.F[rv]['cpt'] = [0]*p_idx # loop through each row of data for row in data: # store the observation of each variable in the row for rv in nodes: obs_dict[rv] = row[rv] #obs_dict = dict([(rv,row[rv]) for rv in nodes]) # loop through each RV and increment its observed parent-self value for rv in nodes: rv_dict= { n: obs_dict[n] for n in obs_dict if n in bn.scope(rv) } offset = bn.cpt_indices(target=rv,val_dict=rv_dict)[0] F[rv]['cpt'][offset]+=1 if counts: return F else: for rv in nodes: F[rv]['parents'] = [var for var in nodes if rv in bn.E[var]] for i in range(0,len(F[rv]['cpt']),bn.card(rv)): temp_sum = float(np.sum(F[rv]['cpt'][i:(i+bn.card(rv))])) for j in range(bn.card(rv)): F[rv]['cpt'][i+j] /= (temp_sum+1e-7) F[rv]['cpt'][i+j] = round(F[rv]['cpt'][i+j],5) bn.F = F
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/pyBN/learning/parameter/mle.py
0.437343
0.473109
mle.py
pypi
import os, sys import typing import scipy.io import numpy as np from sklearn import preprocessing from common_primitives import utils from d3m import container from d3m.metadata import base as metadata_base from d3m.metadata import hyperparams from d3m.metadata import params from d3m.primitive_interfaces.supervised_learning import SupervisedLearnerPrimitiveBase from d3m.primitive_interfaces import base from d3m.primitive_interfaces.base import CallResult import rpi_d3m_primitives from rpi_d3m_primitives.featSelect.Feature_Selector_model import JMI from rpi_d3m_primitives.featSelect.RelationSet import RelationSet Inputs = container.DataFrame Outputs = container.DataFrame __all__ = ('JMIplus',) class Params(params.Params): pass class Hyperparams(hyperparams.Hyperparams): percentage = hyperparams.Hyperparameter[float]( default=0.4, description="Percentage of features to be selected. If the value is one, all input features will be kept", semantic_types=['https://metadata.datadrivendiscovery.org/types/TuningParameter'] ) class JMIplus(SupervisedLearnerPrimitiveBase[Inputs, Outputs, Params, Hyperparams]): """ A primitive which selects the most relevant features based on the joint mutual inforamtion between features and the target """ metadata = metadata_base.PrimitiveMetadata({ 'id': '', 'version': '2.1.5', 'name': 'JMIplus feature selector', 'keywords': ['Joint Mutual Information','Feature Selection'], 'description': 'This algorithm is selecting the most relevant features based on the joint mutual inforamtion between features and the target.', 'source': { 'name': rpi_d3m_primitives.__author__, 'contact': 'mailto:cuiz3@rpi.edu', 'uris': [ 'https://github.com/zijun-rpi/d3m-primitives/blob/master/JMIplus.py', 'https://github.com/zijun-rpi/d3m-primitives.git' ] }, 'installation':[ { 'type': metadata_base.PrimitiveInstallationType.PIP, 'package': 'rpi_d3m_primitives', 'version': rpi_d3m_primitives.__version__ } ], 'python_path': 'd3m.primitives.feature_selection.joint_mutual_information.ManualRPI', 'algorithm_types': [ metadata_base.PrimitiveAlgorithmType.MINIMUM_REDUNDANCY_FEATURE_SELECTION ], 'primitive_family': metadata_base.PrimitiveFamily.FEATURE_SELECTION }) def __init__(self, *, hyperparams: Hyperparams, random_seed: int = 0, docker_containers: typing.Union[typing.Dict[str, base.DockerContainer]] = None) -> None: super().__init__(hyperparams=hyperparams, random_seed=random_seed, docker_containers=docker_containers) self._index = None self._problem_type = 'classification' self._training_inputs = None self._training_outputs = None self._fitted = False self._LEoutput = preprocessing.LabelEncoder() def set_training_data(self, *, inputs: Inputs, outputs: Outputs) -> None: # set problem type metadata = outputs.metadata column_metadata = metadata.query((metadata_base.ALL_ELEMENTS, 0)) semantic_types = column_metadata.get('semantic_types', []) if 'https://metadata.datadrivendiscovery.org/types/CategoricalData' in semantic_types: self._problem_type = 'classification' # set training labels self._LEoutput.fit(outputs) self._training_outputs = self._LEoutput.transform(outputs) else: self._problem_type = 'regression' # convert cateforical values to numerical values in training data metadata = inputs.metadata [m,n] = inputs.shape self._training_inputs = np.zeros((m,n)) for column_index in metadata.get_elements((metadata_base.ALL_ELEMENTS,)): if column_index is metadata_base.ALL_ELEMENTS: continue column_metadata = metadata.query((metadata_base.ALL_ELEMENTS, column_index)) semantic_types = column_metadata.get('semantic_types', []) if 'https://metadata.datadrivendiscovery.org/types/CategoricalData' in semantic_types: LE = preprocessing.LabelEncoder() LE = LE.fit(inputs.iloc[:,column_index]) self._training_inputs[:,column_index] = LE.transform(inputs.iloc[:,column_index]) elif 'http://schema.org/Text' in semantic_types: pass else: temp = list(inputs.iloc[:, column_index].values) for i in np.arange(len(temp)): if bool(temp[i]): self._training_inputs[i,column_index] = float(temp[i]) else: self._training_inputs[i,column_index] = 'nan' self._fitted = False def fit(self, *, timeout: float = None, iterations: int = None) -> None: if self._fitted: return CallResult(None) if self._training_inputs.any() == None or self._training_outputs.any() == None: raise ValueError('Missing training data, or missing values exist.') Trainset = RelationSet(self._training_inputs, self._training_outputs.reshape(-1, 1)) Trainset.impute() discTrainset = RelationSet(self._training_inputs, self._training_outputs.reshape(-1, 1)) discTrainset.impute() discTrainset.discretize() model = JMI(Trainset, discTrainset,self._problem_type) percent = self.hyperparams['percentage'] index = model.select_features(int(np.ceil((self._training_inputs.shape[1]) * percent))) self._index = [] [m, ] = index.shape for ii in np.arange(m): self._index.append(index[ii].item()) self._fitted = True return CallResult(None) def produce(self, *, inputs: Inputs, timeout: float = None, iterations: int = None) -> base.CallResult[Outputs]: # inputs: m x n numpy array if self._fitted: output = inputs.iloc[:, self._index] output.metadata = utils.select_columns_metadata(inputs.metadata, columns=self._index) return CallResult(output) else: raise ValueError('Model should be fitted first.') def get_params(self) -> None: pass def set_params(self) -> None: pass
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/featSelect/JMIplus.py
0.439988
0.362489
JMIplus.py
pypi
import numpy as np from rpi_d3m_primitives.featSelect.helperFunctions import find_probs from scipy.stats import entropy from sklearn import preprocessing def findOptimalSplitPoint(min_val, max_val, ori_feat, label, incre_rate = 0.1): hm_bins = round(1/incre_rate) splits = np.linspace(min_val, max_val, hm_bins+1) hm_class = len(np.unique(label)) if hm_class <= 1: min_entropy = 0 optimal_split = max_val return optimal_split, min_entropy #edges = np.histogram(label, hm_class-1)[1] #label = np.digitize(label, edges) hm_sample = len(label) entropies = np.zeros(hm_bins,) for i in range(1, hm_bins+1): split = splits[i] left_indices = np.argwhere(ori_feat<split).flatten() left_labels = label[left_indices].flatten() left_probabilities = find_probs(left_labels) left_entropy = entropy(left_probabilities,base=2) right_indices = np.argwhere(ori_feat>=split).flatten() right_labels = label[right_indices].flatten() right_probabilities = find_probs(right_labels) right_entropy = entropy(right_probabilities,base=2) entropies[i-1] = left_labels.size/hm_sample * left_entropy + right_labels.size / hm_sample * right_entropy min_entropy = np.amin(entropies) idx = np.where(entropies == min_entropy) if len(idx[0]) != 1: idx = idx[0][0] else: idx = idx[0] #only return idx for the first minimul element optimal_split = splits[idx+1] return optimal_split, min_entropy def HillClimbing_entropy_discretization(feature, label, num_bins, relative_entropy_reduce_rate = 0.01): feature = feature.astype(np.float32) hm_class = np.unique(label).shape[0] min_val = np.min(feature) max_val = np.max(feature) min_label_val = int(np.min(label)) curr_entropy = 0 pre_entropy = 0 "Calculate entropy of original distribution" probabilities = find_probs(label) pre_entropy = entropy(probabilities,base=2) if len(np.unique(feature)) > 15: init_splitset = np.linspace(min_val, max_val, num_bins+1) stop_flag = 0 while stop_flag == 0: minentropy_inbin = np.zeros(num_bins,) #dim = (10,) curr_entropy = 0 for s in range(1, num_bins): sub_min = init_splitset[s-1] sub_max = init_splitset[s+1] index = np.argwhere((sub_min<=feature) & (feature<sub_max)).flatten() if (len(index) != 0): index = np.array(index) feat = feature[index] lab = label[index] else: feat = [] lab = [] init_splitset[s], minentropy_inbin[s-1] = findOptimalSplitPoint(sub_min, sub_max, feat, lab, 0.1) count_inbin = np.histogram(feature, init_splitset)[0] bins = np.digitize(feature, init_splitset) num_data = np.zeros(hm_class,) for n in range(num_bins): en = 0 left_limit = init_splitset[n] right_limit = init_splitset[n+1] index = np.argwhere((left_limit<=feature) & (feature<right_limit)).flatten() select_labels = label[index].flatten() probabilities = find_probs(select_labels) ent = entropy(probabilities,base=2) curr_entropy = curr_entropy + ent*count_inbin[n]/feature.shape[0] if curr_entropy < 0.0000001: stop_flag = 1 continue relative_reduction = (pre_entropy - curr_entropy) /pre_entropy if relative_reduction < relative_entropy_reduce_rate: stop_flag = 1 pre_entropy = curr_entropy discretized_feature = bins else: hm_unique_state = len(np.unique(feature)) init_splitset = hm_unique_state if hm_unique_state != 1: edges = np.histogram(feature, hm_unique_state-1)[1] discretized_feature = np.digitize(feature, edges) else: discretized_feature = feature curr_entropy = pre_entropy optimal_split_pointset = init_splitset final_entropy = curr_entropy return discretized_feature, optimal_split_pointset, final_entropy def HC_discretization(trainD, trainL, hm_bins): samples,hm_features = trainD.shape #check the labels #hm_unique_class = len(np.unique(trainL)) hm_unique_class = np.ceil(np.max(trainL)) - np.floor(np.min(trainL)) + 1 edges = np.histogram(trainL, int(hm_unique_class) - 1)[1] #disc_trainL = np.digitize(trainL, edges)[:,0] #dim = (samples,) disc_trainL = np.digitize(trainL, edges) disc_trainL = np.reshape(disc_trainL, (samples,1)) #Discretize the features disc_trainD = np.zeros([samples, hm_features]) optimal_split = [] for i in range(hm_features): feature = trainD[:,i] #dim = (samples,) disc_feat,split,_ = HillClimbing_entropy_discretization(feature, disc_trainL, hm_bins, 0.01) optimal_split.append(split) le = preprocessing.LabelEncoder() le.fit(disc_feat) disc_trainD[:,i] = le.transform(disc_feat)+1 #dim = (samples,) #disc_trainD[:,i] = np.reshape(disc_feat, [samples,1]) return disc_trainD, disc_trainL,optimal_split
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/featSelect/discretization.py
0.59561
0.474509
discretization.py
pypi
from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier from sklearn.metrics import mean_squared_error, accuracy_score from sklearn.base import clone from sklearn.metrics import f1_score import numpy as np #----------------------- PREDICTOR ----------------------- class Predictor: """ Adds functionality to the sklearn predictor base """ def __init__(self,model,testing_set,training_set,tolerance=0.001): self.unfit_model = model self.testing_set = testing_set self.training_set = training_set #'tolerance' represents the smallest difference in accuracy considered meaningful. # If two independence thresholds yield predicitions within 'tolerance,' the # better one is the one with less features self.tolerance = tolerance def predict(self,selected_feats): train_data = self.training_set.data train_labels = self.training_set.labels test_data = self.testing_set.data test_labels = self.testing_set.labels predictor = clone(self.unfit_model) predictor.fit(train_data[:,selected_feats],train_labels[:,0]) return predictor.predict(test_data[:,selected_feats]) def score_from_labels(self,predictions): pass def score(self,selected_feats,cache): if (selected_feats.tostring() in cache): # score = cache[selected_feats.tostring()] predictions = self.predict(selected_feats) score = self.score_from_labels(predictions) else: predictions = self.predict(selected_feats) score = self.score_from_labels(predictions) cache[selected_feats.tostring()] = score return score def compare_scores(self,left_score,right_score): diff = self.score_difference(left_score,right_score) flag = abs(diff) > self.tolerance return diff,flag def left_performs_better(self,left_score,left_feats,right_score,right_feats): """ Returns true if the first argument is a better prediction, based on accuracy/mse, then the number of selected features """ "Uses MSE for regression problems and accuracy for classification" test_labels = self.testing_set.labels score_difference,significant_flag = self.compare_scores(left_score,right_score) if (significant_flag): return score_difference > 0 return left_feats.size < right_feats.size def choose(self,left_feats,right_feats,optimal_feats,cache): """ Returns the better prediction and feature count, as determined by the metric of 'left_performs_better' """ # Handling for empty feature sets if (left_feats.size==0 or right_feats.size==0): if (left_feats.size != 0): return left_feats,self.score(left_feats,cache) if (right_feats.size != 0): return right_feats,self.score(right_feats,cache) # already in cache return optimal_feats,self.score(optimal_feats,cache) left_score = self.score(left_feats,cache) right_score = self.score(right_feats,cache) #print(left_score, right_score) if (self.left_performs_better(left_score,left_feats,right_score,right_feats)): return left_feats,left_score return right_feats,right_score class Classifier(Predictor): def score_difference(self,left_score,right_score): return left_score - right_score def score_from_labels(self,predictions): score = f1_score(self.testing_set.labels, predictions, average='macro') return score # return accuracy_score(self.testing_set.labels,predictions) class Regressor(Predictor): def score_difference(self,left_score,right_score): return right_score - left_score def score_from_labels(self,predictions): return mean_squared_error(self.testing_set.labels,predictions)
/rpi_d3m_primitives-0.2.9.tar.gz/rpi_d3m_primitives-0.2.9/rpi_d3m_primitives/featSelect/Predictor.py
0.833562
0.576125
Predictor.py
pypi
# Raspberry Pi Deep PanTilt [![image](https://img.shields.io/pypi/v/rpi_deep_pantilt.svg)](https://pypi.python.org/pypi/rpi-deep-pantilt) <!-- [![image](https://img.shields.io/travis/leigh-johnson/rpi_deep_pantilt.svg)](https://travis-ci.org/leigh-johnson/rpi_deep_pantilt) --> [![Documentation Status](https://readthedocs.org/projects/rpi-deep-pantilt/badge/?version=latest)](https://rpi-deep-pantilt.readthedocs.io/en/latest/?badge=latest) # READ THIS FIRST! A detailed walk-through is available in [Real-time Object Tracking with TensorFlow, Raspberry Pi, and Pan-tilt HAT](https://medium.com/@grepLeigh/real-time-object-tracking-with-tensorflow-raspberry-pi-and-pan-tilt-hat-2aeaef47e134). # Build List - [Raspberry Pi 4 (4GB recommended)](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) - [Raspberry Pi Camera V2](https://www.raspberrypi.org/products/camera-module-v2/) - [Pimoroni Pan-tilt Kit](https://shop.pimoroni.com/products/pan-tilt-hat?variant=22408353287) - Micro SD card 16+ GB - Micro HDMI Cable - [12" CSI/DSI ribbon for Raspberry Pi Camera](https://www.adafruit.com/product/1648) (optional, but highly recommended) - [Coral Edge TPU USB Accelerator](https://coral.withgoogle.com/products/accelerator) (optional) - [RGB NeoPixel Stick](https://www.adafruit.com/product/1426) (optional, makes lighting conditions more consistent) An example of deep object detection and tracking with a Raspberry Pi - Free software: MIT license - Documentation: <https://rpi-deep-pantilt.readthedocs.io>. # Basic Setup Before you get started, you should have an up-to-date installation of Raspbian 10 (Buster) running on your Raspberry Pi. You'll also need to configure SSH access into your Pi. * [Install Raspbian](https://www.raspberrypi.org/documentation/installation/installing-images/README.md) * [Configure WiFi](https://www.raspberrypi.org/forums/viewtopic.php?t=191252) * [Configure SSH Access](https://www.raspberrypi.org/documentation/remote-access/ssh/) # Installation 1. Install system dependencies ```bash $ sudo apt-get update && sudo apt-get install -y \ cmake python3-dev libjpeg-dev libatlas-base-dev raspi-gpio libhdf5-dev python3-smbus ``` 1. Create new virtual environment ```bash $ python3 -m venv .venv ``` 3. Activate virtual environment ```bash $ source .venv/bin/activate ``` 4. Upgrade setuptools ```bash $ pip install --upgrade setuptools ``` 5. Install TensorFlow 2.2 (community-built wheel) ```bash $ pip install https://github.com/leigh-johnson/Tensorflow-bin/releases/download/v2.2.0/tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl ``` 6. Install the `rpi-deep-pantilt` package. ```bash pip install rpi-deep-pantilt ``` 7. Install Coral Edge TPU `tflite_runtime` (optional) NOTE: This step is only required if you are using [Coral's Edge TPU USB Accelerator](https://coral.withgoogle.com/products/accelerator). If you would like to run TFLite inferences using CPU only, skip this step. ```bash $ pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl ``` ======= # Configuration WARNING: Do not skip this section! You will not be able to use `rpi-deep-pantilt` without properly configuring your Pi. ### Enable Pi Camera 1. Run `sudo raspi-config` and select `Interfacing Options` from the Raspberry Pi Software Configuration Tool’s main menu. Press ENTER. ![raspi-config main menu](/images/camera1.png) 2. Select the Enable Camera menu option and press ENTER. ![raspi-config interfacing options menu](/images/camera2.png) 3. In the next menu, use the right arrow key to highlight ENABLE and press ENTER. ![raspi-config enable camera yes/no menu](/images/camera3.png) ### Enable i2c in Device Tree 1. Open `/boot/config.txt` and verify the following `dtparams` lines are uncommented: ```bash dtparam=i2c1=on dtparam=i2c_arm=on ``` # Example Usage ## Object Detection The `detect` command will start a PiCamera preview and render detected objects as an overlay. Verify you're able to detect an object before trying to track it. Supports Edge TPU acceleration by passing the `--edge-tpu` option. `rpi-deep-pantilt detect [OPTIONS] [LABELS]...` ``` rpi-deep-pantilt detect --help Usage: rpi-deep-pantilt detect [OPTIONS] [LABELS]... rpi-deep-pantilt detect [OPTIONS] [LABELS] LABELS (optional) One or more labels to detect, for example: $ rpi-deep-pantilt detect person book "wine glass" If no labels are specified, model will detect all labels in this list: $ rpi-deep-pantilt list-labels Detect command will automatically load the appropriate model For example, providing "face" as the only label will initalize FaceSSD_MobileNet_V2 model $ rpi-deep-pantilt detect face Other labels use SSDMobileNetV3 with COCO labels $ rpi-deep-pantilt detect person "wine class" orange Options: --loglevel TEXT Run object detection without pan-tilt controls. Pass --loglevel=DEBUG to inspect FPS. --edge-tpu Accelerate inferences using Coral USB Edge TPU --rotation INTEGER PiCamera rotation. If you followed this guide, a rotation value of 0 is correct. https://medium.com/@grepLeigh/real-time-object-tracking- with-tensorflow-raspberry-pi-and-pan-tilt- hat-2aeaef47e134 --help Show this message and exit. ``` ## Object Tracking The following will start a PiCamera preview, render detected objects as an overlay, and track an object's movement with Pimoroni pan-tilt HAT. By default, this will track any `person` in the frame. You can track other objects by passing `--label <label>`. For a list of valid labels, run `rpi-deep-pantilt list-labels`. `rpi-deep-pantilt track` Supports Edge TPU acceleration by passing the `--edge-tpu` option. ``` Usage: rpi-deep-pantilt track [OPTIONS] [LABEL] rpi-deep-pantilt track [OPTIONS] [LABEL] LABEL (required, default: person) Exactly one label to detect, for example: $ rpi-deep-pantilt track person Track command will automatically load the appropriate model For example, providing "face" will initalize FaceSSD_MobileNet_V2 model $ rpi-deep-pantilt track face Other labels use SSDMobileNetV3 model with COCO labels $ rpi-deep-pantilt detect orange Options: --loglevel TEXT Pass --loglevel=DEBUG to inspect FPS and tracking centroid X/Y coordinates --edge-tpu Accelerate inferences using Coral USB Edge TPU --rotation INTEGER PiCamera rotation. If you followed this guide, a rotation value of 0 is correct. https://medium.com/@grepLeigh/real-time-object-tracking- with-tensorflow-raspberry-pi-and-pan-tilt- hat-2aeaef47e134 --help Show this message and exit. ``` ## Valid labels for Object Detection/Tracking `rpi-deep-pantilt list-labels` The following labels are valid tracking targets. ``` ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'] ``` ## Face Detection (NEW in v1.1.x) The following command will detect human faces. NOTE: Face detection uses a specialized model (FaceSSD_MobileNet_V2), while other labels are detecting using SSDMobileNetV3_COCO. You cannot detect both face and COCO labels at this time. Watch this repo for updates that allow you to re-train these models to support a custom mix of object labels! ``` rpi-deep-pantilt detect face Usage: cli.py face-detect [OPTIONS] Options: --loglevel TEXT Run object detection without pan-tilt controls. Pass --loglevel=DEBUG to inspect FPS. --edge-tpu Accelerate inferences using Coral USB Edge TPU --help Show this message and exit. ``` ## Face Tracking (NEW in v1.1.x) The following command will track a human face. ``` rpi-deep-pantilt track face Usage: cli.py face-detect [OPTIONS] Options: --loglevel TEXT Run object detection without pan-tilt controls. Pass --loglevel=DEBUG to inspect FPS. --edge-tpu Accelerate inferences using Coral USB Edge TPU --help Show this message and exit. ``` # Model Summary The following section describes the models used in this project. ## Object Detection & Tracking ### `FLOAT32` model (`ssd_mobilenet_v3_small_coco_2019_08_14`) `rpi-deep-pantilt detect` and `rpi-deep-pantilt track` perform inferences using this model. Bounding box and class predictions render at roughly *6 FPS* on a *Raspberry Pi 4*. The model is derived from `ssd_mobilenet_v3_small_coco_2019_08_14` in [tensorflow/models](https://github.com/tensorflow/models). I extended the model with an NMS post-processing layer, then converted to a format compatible with TensorFlow 2.x (FlatBuffer). I scripted the conversion steps in `tools/tflite-postprocess-ops-float.sh`. ### Quantized `UINT8` model (`ssdlite_mobilenet_edgetpu_coco_quant`) If you specify `--edge-tpu` option, `rpi-deep-pantilt detect` and `rpi-deep-pantilt track` perform inferences using this model. Rounding box and class predictions render at roughly *24+ FPS (real-time) on Raspberry Pi 4*. This model *REQUIRES* a [Coral Edge TPU USB Accelerator](https://coral.withgoogle.com/products/accelerator) to run. This model is derived from `ssdlite_mobilenet_edgetpu_coco_quant` in [tensorflow/models](https://github.com/tensorflow/models). I reversed the frozen `.tflite` model into a protobuf graph to add an NMS post-processing layer, quantized the model in a `.tflite` FlatBuffer format, then converted using Coral's `edgetpu_compiler` tool. I scripted the conversion steps in `tools/tflite-postprocess-ops-128-uint8-quant.sh` and `tools/tflite-edgetpu.sh`. ## Face Detection & Tracking I was able to use the same model architechture for FLOAT32 and UINT8 input, `facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2`. This model is derived from `facessd_mobilenet_v2_quantized_320x320_open_image_v4` in [tensorflow/models](https://github.com/tensorflow/models). # Common Issues ### i2c is not enabled If you run `$ rpi-deep-pantilt test pantilt` and see a similar error, check your Device Tree configuration. ```python File "/home/pi/projects/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 72, in setup self._i2c = SMBus(1) FileNotFoundError: [Errno 2] No such file or directory ``` Open `/boot/config.txt` and ensure the following lines are uncommented: ```bash dtparam=i2c1=on dtparam=i2c_arm=on ``` # Credits The MobileNetV3-SSD model in this package was derived from [TensorFlow's model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md), with [post-processing ops added](https://gist.github.com/leigh-johnson/155264e343402c761c03bc0640074d8c). The PID control scheme in this package was inspired by [Adrian Rosebrock](https://github.com/jrosebr1) tutorial [Pan/tilt face tracking with a Raspberry Pi and OpenCV](https://www.pyimagesearch.com/2019/04/01/pan-tilt-face-tracking-with-a-raspberry-pi-and-opencv/) This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template.
/rpi_deep_pantilt-2.0.0rc0.tar.gz/rpi_deep_pantilt-2.0.0rc0/README.md
0.563858
0.966914
README.md
pypi
import importlib import logging from rpi_deep_pantilt.detect.util.exceptions import InvalidLabelException class ModelRegistry(object): FLOAT32_CLASSES = ( "FaceSSDMobileNetV2Float32", "SSDMobileNetV3Float32", ) UINT8_CLASSES = ( "FaceSSDMobileNetV2Int8", "SSDMobileNetV3Int8", "LeopardAutoMLInt8", ) EDGETPU_CLASSES = ( "SSDMobileNetV3EdgeTPU", "FaceSSDMobileNetV2EdgeTPU", ) def __init__(self, edge_tpu, api_version, dtype): self.edge_tpu = edge_tpu self.api_version = api_version self.version_str = f"api_{api_version}" self.import_path = f"rpi_deep_pantilt.detect.pretrained.{self.version_str}" self.module = importlib.import_module(self.import_path) if edge_tpu: self.model_list = self.EDGETPU_CLASSES self.default_model = self.module.SSDMobileNetV3EdgeTPU elif dtype == "uint8": self.model_list = self.UINT8_CLASSES self.default_model = self.module.SSDMobileNetV3Int8 else: self.model_list = self.FLOAT32_CLASSES self.default_model = self.module.SSDMobileNetV3Float32 def select_model(self, labels): """Select best model for provided labels Raises InvalidLabelException if any labels are unsupported Args: labels: List of labels or None. If labels are not provided, default to SSDMobileNet with COCO labels """ def _select(cls_list): for cls_str in cls_list: predictor_cls = getattr(self.module, cls_str) if predictor_cls.validate_labels(labels): return predictor_cls else: continue raise InvalidLabelException if len(labels) is 0: return self.default_model else: return _select(self.model_list) def label_map(self): return {x: getattr(self.module, x).LABELS for x in self.model_list}
/rpi_deep_pantilt-2.0.0rc0.tar.gz/rpi_deep_pantilt-2.0.0rc0/rpi_deep_pantilt/detect/registry.py
0.531696
0.185338
registry.py
pypi
import collections import logging # lib import numpy as np import PIL.Image as Image import PIL.ImageColor as ImageColor import PIL.ImageDraw as ImageDraw import PIL.ImageFont as ImageFont import six STANDARD_COLORS = [ "AliceBlue", "Chartreuse", "Aqua", "Aquamarine", "Azure", "Beige", "Bisque", "BlanchedAlmond", "BlueViolet", "BurlyWood", "CadetBlue", "AntiqueWhite", "Chocolate", "Coral", "CornflowerBlue", "Cornsilk", "Crimson", "Cyan", "DarkCyan", "DarkGoldenRod", "DarkGrey", "DarkKhaki", "DarkOrange", "DarkOrchid", "DarkSalmon", "DarkSeaGreen", "DarkTurquoise", "DarkViolet", "DeepPink", "DeepSkyBlue", "DodgerBlue", "FireBrick", "FloralWhite", "ForestGreen", "Fuchsia", "Gainsboro", "GhostWhite", "Gold", "GoldenRod", "Salmon", "Tan", "HoneyDew", "HotPink", "IndianRed", "Ivory", "Khaki", "Lavender", "LavenderBlush", "LawnGreen", "LemonChiffon", "LightBlue", "LightCoral", "LightCyan", "LightGoldenRodYellow", "LightGray", "LightGrey", "LightGreen", "LightPink", "LightSalmon", "LightSeaGreen", "LightSkyBlue", "LightSlateGray", "LightSlateGrey", "LightSteelBlue", "LightYellow", "Lime", "LimeGreen", "Linen", "Magenta", "MediumAquaMarine", "MediumOrchid", "MediumPurple", "MediumSeaGreen", "MediumSlateBlue", "MediumSpringGreen", "MediumTurquoise", "MediumVioletRed", "MintCream", "MistyRose", "Moccasin", "NavajoWhite", "OldLace", "Olive", "OliveDrab", "Orange", "OrangeRed", "Orchid", "PaleGoldenRod", "PaleGreen", "PaleTurquoise", "PaleVioletRed", "PapayaWhip", "PeachPuff", "Peru", "Pink", "Plum", "PowderBlue", "Purple", "Red", "RosyBrown", "RoyalBlue", "SaddleBrown", "Green", "SandyBrown", "SeaGreen", "SeaShell", "Sienna", "Silver", "SkyBlue", "SlateBlue", "SlateGray", "SlateGrey", "Snow", "SpringGreen", "SteelBlue", "GreenYellow", "Teal", "Thistle", "Tomato", "Turquoise", "Violet", "Wheat", "White", "WhiteSmoke", "Yellow", "YellowGreen", ] def _get_multiplier_for_color_randomness(): """Returns a multiplier to get semi-random colors from successive indices. This function computes a prime number, p, in the range [2, 17] that: - is closest to len(STANDARD_COLORS) / 10 - does not divide len(STANDARD_COLORS) If no prime numbers in that range satisfy the constraints, p is returned as 1. Once p is established, it can be used as a multiplier to select non-consecutive colors from STANDARD_COLORS: colors = [(p * i) % len(STANDARD_COLORS) for i in range(20)] """ num_colors = len(STANDARD_COLORS) prime_candidates = [5, 7, 11, 13, 17] # Remove all prime candidates that divide the number of colors. prime_candidates = [p for p in prime_candidates if num_colors % p] if not prime_candidates: return 1 # Return the closest prime number to num_colors / 10. abs_distance = [np.abs(num_colors / 10.0 - p) for p in prime_candidates] num_candidates = len(abs_distance) inds = [i for _, i in sorted(zip(abs_distance, range(num_candidates)))] return prime_candidates[inds[0]] def draw_mask_on_image_array(image, mask, color="red", alpha=0.4): """Draws mask on an image. Args: image: uint8 numpy array with shape (img_height, img_height, 3) mask: a uint8 numpy array of shape (img_height, img_height) with values between either 0 or 1. color: color to draw the keypoints with. Default is red. alpha: transparency value between 0 and 1. (default: 0.4) Raises: ValueError: On incorrect data type for image or masks. """ if image.dtype != np.uint8: raise ValueError("`image` not of type np.uint8") if mask.dtype != np.uint8: raise ValueError("`mask` not of type np.uint8") if np.any(np.logical_and(mask != 1, mask != 0)): raise ValueError("`mask` elements should be in [0, 1]") if image.shape[:2] != mask.shape: raise ValueError( "The image has spatial dimensions %s but the mask has " "dimensions %s" % (image.shape[:2], mask.shape) ) rgb = ImageColor.getrgb(color) pil_image = Image.fromarray(image) solid_color = np.expand_dims(np.ones_like(mask), axis=2) * np.reshape( list(rgb), [1, 1, 3] ) pil_solid_color = Image.fromarray(np.uint8(solid_color)).convert("RGBA") pil_mask = Image.fromarray(np.uint8(255.0 * alpha * mask)).convert("L") pil_image = Image.composite(pil_solid_color, pil_image, pil_mask) np.copyto(image, np.array(pil_image.convert("RGB"))) def draw_bounding_box_on_image( image, ymin, xmin, ymax, xmax, color="red", thickness=4, display_str_list=(), use_normalized_coordinates=True, ): """Adds a bounding box to an image. Bounding box coordinates can be specified in either absolute (pixel) or normalized coordinates by setting the use_normalized_coordinates argument. Each string in display_str_list is displayed on a separate line above the bounding box in black text on a rectangle filled with the input 'color'. If the top of the bounding box extends to the edge of the image, the strings are displayed below the bounding box. Args: image: a PIL.Image object. ymin: ymin of bounding box. xmin: xmin of bounding box. ymax: ymax of bounding box. xmax: xmax of bounding box. color: color to draw bounding box. Default is red. thickness: line thickness. Default value is 4. display_str_list: list of strings to display in box (each to be shown on its own line). use_normalized_coordinates: If True (default), treat coordinates ymin, xmin, ymax, xmax as relative to the image. Otherwise treat coordinates as absolute. """ draw = ImageDraw.Draw(image) im_width, im_height = image.size if use_normalized_coordinates: (left, right, top, bottom) = ( xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height, ) else: (left, right, top, bottom) = (xmin, xmax, ymin, ymax) draw.line( [(left, top), (left, bottom), (right, bottom), (right, top), (left, top)], width=thickness, fill=color, ) try: font = ImageFont.truetype("arial.ttf", 24) except IOError: font = ImageFont.load_default() # If the total height of the display strings added to the top of the bounding # box exceeds the top of the image, stack the strings below the bounding box # instead of above. display_str_heights = [font.getsize(ds)[1] for ds in display_str_list] # Each display_str has a top and bottom margin of 0.05x. total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights) if top > total_display_str_height: text_bottom = top else: text_bottom = bottom + total_display_str_height # Reverse list and print from bottom to top. for display_str in display_str_list[::-1]: text_width, text_height = font.getsize(display_str) margin = np.ceil(0.05 * text_height) draw.rectangle( [ (left, text_bottom - text_height - 2 * margin), (left + text_width, text_bottom), ], fill=color, ) draw.text( (left + margin, text_bottom - text_height - margin), display_str, fill="black", font=font, ) text_bottom -= text_height - 2 * margin def draw_bounding_box_on_image_array( image, ymin, xmin, ymax, xmax, color="red", thickness=4, display_str_list=(), use_normalized_coordinates=True, ): """Adds a bounding box to an image (numpy array). Bounding box coordinates can be specified in either absolute (pixel) or normalized coordinates by setting the use_normalized_coordinates argument. Args: image: a numpy array with shape [height, width, 3]. ymin: ymin of bounding box. xmin: xmin of bounding box. ymax: ymax of bounding box. xmax: xmax of bounding box. color: color to draw bounding box. Default is red. thickness: line thickness. Default value is 4. display_str_list: list of strings to display in box (each to be shown on its own line). use_normalized_coordinates: If True (default), treat coordinates ymin, xmin, ymax, xmax as relative to the image. Otherwise treat coordinates as absolute. """ image_pil = Image.fromarray(np.uint8(image)).convert("RGB") draw_bounding_box_on_image( image_pil, ymin, xmin, ymax, xmax, color, thickness, display_str_list, use_normalized_coordinates, ) np.copyto(image, np.array(image_pil)) def draw_keypoints_on_image( image, keypoints, color="red", radius=2, use_normalized_coordinates=True ): """Draws keypoints on an image. Args: image: a PIL.Image object. keypoints: a numpy array with shape [num_keypoints, 2]. color: color to draw the keypoints with. Default is red. radius: keypoint radius. Default value is 2. use_normalized_coordinates: if True (default), treat keypoint values as relative to the image. Otherwise treat them as absolute. """ draw = ImageDraw.Draw(image) im_width, im_height = image.size keypoints_x = [k[1] for k in keypoints] keypoints_y = [k[0] for k in keypoints] if use_normalized_coordinates: keypoints_x = tuple([im_width * x for x in keypoints_x]) keypoints_y = tuple([im_height * y for y in keypoints_y]) for keypoint_x, keypoint_y in zip(keypoints_x, keypoints_y): draw.ellipse( [ (keypoint_x - radius, keypoint_y - radius), (keypoint_x + radius, keypoint_y + radius), ], outline=color, fill=color, ) def draw_keypoints_on_image_array( image, keypoints, color="red", radius=2, use_normalized_coordinates=True ): """Draws keypoints on an image (numpy array). Args: image: a numpy array with shape [height, width, 3]. keypoints: a numpy array with shape [num_keypoints, 2]. color: color to draw the keypoints with. Default is red. radius: keypoint radius. Default value is 2. use_normalized_coordinates: if True (default), treat keypoint values as relative to the image. Otherwise treat them as absolute. """ image_pil = Image.fromarray(np.uint8(image)).convert("RGB") draw_keypoints_on_image( image_pil, keypoints, color, radius, use_normalized_coordinates ) np.copyto(image, np.array(image_pil)) def visualize_boxes_and_labels_on_image_array( image, boxes, classes, scores, category_index, instance_masks=None, instance_boundaries=None, keypoints=None, track_ids=None, use_normalized_coordinates=False, max_boxes_to_draw=20, min_score_thresh=0.5, agnostic_mode=False, line_thickness=4, groundtruth_box_visualization_color="black", skip_scores=False, skip_labels=False, skip_track_ids=False, ): """Overlay labeled boxes on an image with formatted scores and label names. This function groups boxes that correspond to the same location and creates a display string for each detection and overlays these on the image. Note that this function modifies the image in place, and returns that same image. Args: image: uint8 numpy array with shape (img_height, img_width, 3) boxes: a numpy array of shape [N, 4] classes: a numpy array of shape [N]. Note that class indices are 1-based, and match the keys in the label map. scores: a numpy array of shape [N] or None. If scores=None, then this function assumes that the boxes to be plotted are groundtruth boxes and plot all boxes as black with no classes or scores. category_index: a dict containing category dictionaries (each holding category index `id` and category name `name`) keyed by category indices. instance_masks: a numpy array of shape [N, image_height, image_width] with values ranging between 0 and 1, can be None. instance_boundaries: a numpy array of shape [N, image_height, image_width] with values ranging between 0 and 1, can be None. keypoints: a numpy array of shape [N, num_keypoints, 2], can be None track_ids: a numpy array of shape [N] with unique track ids. If provided, color-coding of boxes will be determined by these ids, and not the class indices. use_normalized_coordinates: whether boxes is to be interpreted as normalized coordinates or not. max_boxes_to_draw: maximum number of boxes to visualize. If None, draw all boxes. min_score_thresh: minimum score threshold for a box to be visualized agnostic_mode: boolean (default: False) controlling whether to evaluate in class-agnostic mode or not. This mode will display scores but ignore classes. line_thickness: integer (default: 4) controlling line width of the boxes. groundtruth_box_visualization_color: box color for visualizing groundtruth boxes skip_scores: whether to skip score when drawing a single detection skip_labels: whether to skip label when drawing a single detection skip_track_ids: whether to skip track id when drawing a single detection Returns: uint8 numpy array with shape (img_height, img_width, 3) with overlaid boxes. """ # Create a display string (and color) for every box location, group any boxes # that correspond to the same location. box_to_display_str_map = collections.defaultdict(list) box_to_color_map = collections.defaultdict(str) box_to_instance_masks_map = {} box_to_instance_boundaries_map = {} box_to_keypoints_map = collections.defaultdict(list) box_to_track_ids_map = {} if not max_boxes_to_draw: max_boxes_to_draw = boxes.shape[0] for i in range(min(max_boxes_to_draw, boxes.shape[0])): if scores is None or scores[i] > min_score_thresh: box = tuple(boxes[i].tolist()) if instance_masks is not None: box_to_instance_masks_map[box] = instance_masks[i] if instance_boundaries is not None: box_to_instance_boundaries_map[box] = instance_boundaries[i] if keypoints is not None: box_to_keypoints_map[box].extend(keypoints[i]) if track_ids is not None: box_to_track_ids_map[box] = track_ids[i] if scores is None: box_to_color_map[box] = groundtruth_box_visualization_color else: display_str = "" if not skip_labels: if not agnostic_mode: if classes[i] in six.viewkeys(category_index): class_name = category_index[classes[i]]["name"] else: class_name = "N/A" display_str = str(class_name) if not skip_scores: if not display_str: display_str = "{}%".format(int(100 * scores[i])) else: display_str = "{}: {}%".format( display_str, int(100 * scores[i]) ) if not skip_track_ids and track_ids is not None: if not display_str: display_str = "ID {}".format(track_ids[i]) else: display_str = "{}: ID {}".format(display_str, track_ids[i]) box_to_display_str_map[box].append(display_str) if agnostic_mode: box_to_color_map[box] = "DarkOrange" elif track_ids is not None: prime_multipler = _get_multiplier_for_color_randomness() box_to_color_map[box] = STANDARD_COLORS[ (prime_multipler * track_ids[i]) % len(STANDARD_COLORS) ] else: box_to_color_map[box] = STANDARD_COLORS[ classes[i] % len(STANDARD_COLORS) ] # Draw all boxes onto image. for box, color in box_to_color_map.items(): ymin, xmin, ymax, xmax = box if instance_masks is not None: draw_mask_on_image_array(image, box_to_instance_masks_map[box], color=color) if instance_boundaries is not None: draw_mask_on_image_array( image, box_to_instance_boundaries_map[box], color="red", alpha=1.0 ) draw_bounding_box_on_image_array( image, ymin, xmin, ymax, xmax, color=color, thickness=line_thickness, display_str_list=box_to_display_str_map[box], use_normalized_coordinates=use_normalized_coordinates, ) if keypoints is not None: draw_keypoints_on_image_array( image, box_to_keypoints_map[box], color=color, radius=line_thickness / 2, use_normalized_coordinates=use_normalized_coordinates, ) return image
/rpi_deep_pantilt-2.0.0rc0.tar.gz/rpi_deep_pantilt-2.0.0rc0/rpi_deep_pantilt/detect/util/visualization.py
0.802401
0.389895
visualization.py
pypi
import logging # Lib import tensorflow as tf from google.protobuf import text_format # app from rpi_deep_pantilt.detect.util import string_int_label_map_pb2 def convert_label_map_to_categories(label_map, max_num_classes, use_display_name=True): """Given label map proto returns categories list compatible with eval. This function converts label map proto and returns a list of dicts, each of which has the following keys: 'id': (required) an integer id uniquely identifying this category. 'name': (required) string representing category name e.g., 'cat', 'dog', 'pizza'. We only allow class into the list if its id-label_id_offset is between 0 (inclusive) and max_num_classes (exclusive). If there are several items mapping to the same id in the label map, we will only keep the first one in the categories list. Args: label_map: a StringIntLabelMapProto or None. If None, a default categories list is created with max_num_classes categories. max_num_classes: maximum number of (consecutive) label indices to include. use_display_name: (boolean) choose whether to load 'display_name' field as category name. If False or if the display_name field does not exist, uses 'name' field as category names instead. Returns: categories: a list of dictionaries representing all possible categories. """ categories = [] list_of_ids_already_added = [] if not label_map: label_id_offset = 1 for class_id in range(max_num_classes): categories.append( { "id": class_id + label_id_offset, "name": "category_{}".format(class_id + label_id_offset), } ) return categories for item in label_map.item: if not 0 < item.id <= max_num_classes: logging.info( "Ignore item %d since it falls outside of requested " "label range.", item.id, ) continue if use_display_name and item.HasField("display_name"): name = item.display_name else: name = item.name if item.id not in list_of_ids_already_added: list_of_ids_already_added.append(item.id) categories.append({"id": item.id, "name": name}) return categories def _validate_label_map(label_map): """Checks if a label map is valid. Args: label_map: StringIntLabelMap to validate. Raises: ValueError: if label map is invalid. """ for item in label_map.item: if item.id < 0: raise ValueError("Label map ids should be >= 0.") if ( item.id == 0 and item.name != "background" and item.display_name != "background" ): raise ValueError("Label map id 0 is reserved for the background label") def load_labelmap(path): """Loads label map proto. Args: path: path to StringIntLabelMap proto text file. Returns: a StringIntLabelMapProto """ with tf.compat.v1.gfile.GFile(path, "r") as fid: label_map_string = fid.read() label_map = string_int_label_map_pb2.StringIntLabelMap() try: text_format.Merge(label_map_string, label_map) except text_format.ParseError: label_map.ParseFromString(label_map_string) _validate_label_map(label_map) return label_map def create_categories_from_labelmap(label_map_path, use_display_name=True): """Reads a label map and returns categories list compatible with eval. This function converts label map proto and returns a list of dicts, each of which has the following keys: 'id': an integer id uniquely identifying this category. 'name': string representing category name e.g., 'cat', 'dog'. Args: label_map_path: Path to `StringIntLabelMap` proto text file. use_display_name: (boolean) choose whether to load 'display_name' field as category name. If False or if the display_name field does not exist, uses 'name' field as category names instead. Returns: categories: a list of dictionaries representing all possible categories. """ label_map = load_labelmap(label_map_path) max_num_classes = max(item.id for item in label_map.item) return convert_label_map_to_categories(label_map, max_num_classes, use_display_name) def create_category_index(categories): """Creates dictionary of COCO compatible categories keyed by category id. Args: categories: a list of dicts, each of which has the following keys: 'id': (required) an integer id uniquely identifying this category. 'name': (required) string representing category name e.g., 'cat', 'dog', 'pizza'. Returns: category_index: a dict containing the same entries as categories, but keyed by the 'id' field of each category. """ category_index = {} for cat in categories: category_index[cat["id"]] = cat return category_index def create_category_index_from_labelmap(label_map_path, use_display_name=True): """Reads a label map and returns a category index. Args: label_map_path: Path to `StringIntLabelMap` proto text file. use_display_name: (boolean) choose whether to load 'display_name' field as category name. If False or if the display_name field does not exist, uses 'name' field as category names instead. Returns: A category index, which is a dictionary that maps integer ids to dicts containing categories, e.g. {1: {'id': 1, 'name': 'dog'}, 2: {'id': 2, 'name': 'cat'}, ...} """ categories = create_categories_from_labelmap(label_map_path, use_display_name) return create_category_index(categories)
/rpi_deep_pantilt-2.0.0rc0.tar.gz/rpi_deep_pantilt-2.0.0rc0/rpi_deep_pantilt/detect/util/label.py
0.857723
0.33039
label.py
pypi
import tensorflow as tf from rpi_deep_pantilt import __path__ as rpi_deep_pantilt_path from rpi_deep_pantilt.detect.custom.base_predictors import ( TFLiteDetectionPostProcessPredictor, ) class FaceSSDMobileNetV2EdgeTPU(TFLiteDetectionPostProcessPredictor): """ Model source: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md#open-images-trained-models Non-max supression op (TFLite_Detection_Postprocess) added to graph via tools/tflite-postprocess-ops-128-uint8-quant.sh """ LABELS = ["face"] def __init__( self, model_uri="https://github.com/leigh-johnson/rpi-deep-pantilt/releases/download/v1.1.1/facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2.tar.gz", model_name="facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2", input_shape=(320, 320), min_score_thresh=0.50, input_type=tf.uint8, tflite_file="model_postprocessed_quantized_128_uint8_edgetpu.tflite", label_file=rpi_deep_pantilt_path[0] + "/data/facessd_label_map.pbtxt", ): super().__init__( model_name=model_name, tflite_file=tflite_file, label_file=label_file, model_uri=model_uri, input_shape=input_shape, min_score_thresh=min_score_thresh, input_type=input_type, edge_tpu=True, ) class FaceSSDMobileNetV2Int8(TFLiteDetectionPostProcessPredictor): """ Model source: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md#open-images-trained-models Non-max supression op (TFLite_Detection_Postprocess) added to graph via tools/tflite-postprocess-ops-128-uint8-quant.sh """ LABELS = ["face"] def __init__( self, model_uri="https://github.com/leigh-johnson/rpi-deep-pantilt/releases/download/v1.1.1/facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2.tar.gz", model_name="facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2", input_shape=(320, 320), min_score_thresh=0.50, input_type=tf.uint8, tflite_file="model_postprocessed_quantized_128_uint8.tflite", label_file=rpi_deep_pantilt_path[0] + "/data/facessd_label_map.pbtxt", ): super().__init__( model_name=model_name, tflite_file=tflite_file, label_file=label_file, model_uri=model_uri, input_shape=input_shape, min_score_thresh=min_score_thresh, input_type=input_type, ) class FaceSSDMobileNetV2Float32(TFLiteDetectionPostProcessPredictor): LABELS = ["face"] def __init__( self, model_uri="https://github.com/leigh-johnson/rpi-deep-pantilt/releases/download/v1.1.1/facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2.tar.gz", model_name="facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2", input_shape=(320, 320), min_score_thresh=0.50, input_type=tf.float32, tflite_file="model_postprocessed.tflite", label_file=rpi_deep_pantilt_path[0] + "/data/facessd_label_map.pbtxt", ): super().__init__( model_name=model_name, tflite_file=tflite_file, label_file=label_file, model_uri=model_uri, input_shape=input_shape, min_score_thresh=min_score_thresh, input_type=input_type, )
/rpi_deep_pantilt-2.0.0rc0.tar.gz/rpi_deep_pantilt-2.0.0rc0/rpi_deep_pantilt/detect/pretrained/api_v2/facessd_mobilenet_v2.py
0.785966
0.292614
facessd_mobilenet_v2.py
pypi
import tensorflow as tf from rpi_deep_pantilt import __path__ as rpi_deep_pantilt_path from rpi_deep_pantilt.detect.custom.base_predictors import ( TFLiteDetectionPostProcessPredictor, ) class SSDMobileNetV3EdgeTPU(TFLiteDetectionPostProcessPredictor): """ Model source: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md#open-images-trained-models Non-max supression op (TFLite_Detection_Postprocess) added to graph via tools/tflite-postprocess-ops-128-uint8-quant.sh """ LABELS = [ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", ] def __init__( self, model_uri="https://github.com/leigh-johnson/rpi-deep-pantilt/releases/download/v1.1.1/ssdlite_mobilenet_edgetpu_coco_quant.tar.gz", model_name="ssdlite_mobilenet_edgetpu_coco_quant", input_shape=(320, 320), min_score_thresh=0.50, input_type=tf.uint8, tflite_file="model_postprocessed_quantized_128_uint8_edgetpu.tflite", label_file=rpi_deep_pantilt_path[0] + "/data/mscoco_label_map.pbtxt", ): super().__init__( model_name=model_name, tflite_file=tflite_file, label_file=label_file, model_uri=model_uri, input_shape=input_shape, min_score_thresh=min_score_thresh, input_type=input_type, edge_tpu=True, ) class SSDMobileNetV3Int8(TFLiteDetectionPostProcessPredictor): """ Model source: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md#open-images-trained-models Non-max supression op (TFLite_Detection_Postprocess) added to graph via tools/tflite-postprocess-ops-128-uint8-quant.sh """ LABELS = [ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", ] def __init__( self, model_uri="https://github.com/leigh-johnson/rpi-deep-pantilt/releases/download/v1.1.1/ssdlite_mobilenet_edgetpu_coco_quant.tar.gz", model_name="ssd_mobilenet_v3_small_coco_2019_08_14", input_shape=(320, 320), min_score_thresh=0.50, input_type=tf.uint8, tflite_file="model_postprocessed_quantized_128_uint8.tflite", label_file=rpi_deep_pantilt_path[0] + "/data/mscoco_label_map.pbtxt", ): super().__init__( model_name=model_name, tflite_file=tflite_file, label_file=label_file, model_uri=model_uri, input_shape=input_shape, min_score_thresh=min_score_thresh, input_type=input_type, ) class SSDMobileNetV3Float32(TFLiteDetectionPostProcessPredictor): LABELS = [ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", ] def __init__( self, model_uri="https://github.com/leigh-johnson/rpi-deep-pantilt/releases/download/v1.1.1/ssd_mobilenet_v3_small_coco_2019_08_14.tar.gz", model_name="ssd_mobilenet_v3_small_coco_2019_08_14", input_shape=(320, 320), min_score_thresh=0.50, input_type=tf.float32, tflite_file="model_postprocessed_quantized.tflite", label_file=rpi_deep_pantilt_path[0] + "/data/mscoco_label_map.pbtxt", ): super().__init__( model_name=model_name, tflite_file=tflite_file, label_file=label_file, model_uri=model_uri, input_shape=input_shape, min_score_thresh=min_score_thresh, input_type=input_type, )
/rpi_deep_pantilt-2.0.0rc0.tar.gz/rpi_deep_pantilt-2.0.0rc0/rpi_deep_pantilt/detect/pretrained/api_v2/ssd_mobilenet_v3_coco.py
0.824073
0.386821
ssd_mobilenet_v3_coco.py
pypi
import os from typing import NamedTuple, Union, List, Sequence, Any from typing import TypeVar import scipy.io import numpy as np from d3m_types.sequence import ndarray from primitive_interfaces.supervised_learning import SupervisedLearnerPrimitiveBase import rpi_featureSelection_matlab_tools # These are just regular Python variables so that we can easily change all types # at once in the future, if needed. Otherwise, one could simply inline all these. Inputs = ndarray Outputs = ndarray Params = TypeVar('Params') class STMBPlusSelector(SupervisedLearnerPrimitiveBase[Inputs, Outputs, Params]): __author__ = "RPI DARPA D3M" """ This class implements a feature selection method: It automatically select features that best fit the labels Input: A: Collection of vectors in high dimensional space. Concretely, inputs are doubly-indexable numbers that can be called as A[i,j]. Rows i are samples and columns j are features. The entries can be integer, continuous values or categorical values, but should be expressed in a numerical form. B: The corresponding labels as a vector in a numerical form. This should be a row vector, and the length should be identical to the number of raw in A Output: W: Dimensionality reduced vectors, it is a numpy matrix with one row per vector. """ def __init__(self): super().__init__() self.is_feature_selection = True self.hyperparameters = {} self.training_inputs = None self.training_outputs = None self.fitted = False def set_training_data(self, inputs: Inputs, outputs: Outputs) -> None: self.training_inputs = inputs self.training_outputs = outputs self.fitted = False def fit(self) -> None: if self.fitted: return True if self.training_inputs.any() == None or self.training_outputs.any() == None: raise ValueError('Missing training data, or missing values exist.') scipy.io.savemat('datamat.mat', mdict={'traindata': self.training_inputs, 'traintargets': self.training_outputs}) stmbplus = rpi_featureSelection_matlab_tools.initialize() index = np.array(stmbplus.STMBPlus_binsearch()) if index.shape == (): raise ValueError('Feature selection failed.') print(index.shape[0]) index = np.reshape(index, [index.shape[0], ]) self.hyperparameters['index'] = (index - 1).astype(int) os.remove('datamat.mat') return True def produce(self, inputs: Inputs) -> Outputs: # inputs: m x n numpy array if 'index' in self.hyperparameters.keys(): return inputs[:, self.hyperparameters['index']] else: # print 'Please fit the model first using X = model.fit(training_data, training_labels)' raise ValueError('Model should be fitted first.') ''' def fit_transform(self, A, B): scipy.io.savemat('datamat.mat', mdict={'traindata': A, 'traintargets': B}) my_stmb = STMB_FeatureSelection_primitive.initialize() W = np.array(my_stmb.STMB_binsearch()) os.remove('datamat.mat') return W ''' fit.__annotations__ = {'A': 'NumpyArray(m x n)', 'B': 'NumpyArray(m x 1)', 'return': None} produce.__annotations__ = {'A': 'NumpyArray(m x n)', 'return': 'NumpyArray(m x k)'} def get_params(self): pass def set_params(self): pass
/rpi_featureSelection_matlab_tools-1.0.3.tar.gz/rpi_featureSelection_matlab_tools-1.0.3/rpi_featureSelection_matlab_tools/STMBPlusSelector.py
0.655997
0.635901
STMBPlusSelector.py
pypi
import os from typing import NamedTuple, Union, List, Sequence, Any from typing import TypeVar import scipy.io import numpy as np from d3m_types.sequence import ndarray from primitive_interfaces.supervised_learning import SupervisedLearnerPrimitiveBase import rpi_featureSelection_matlab_tools # These are just regular Python variables so that we can easily change all types # at once in the future, if needed. Otherwise, one could simply inline all these. Inputs = ndarray Outputs = ndarray Params = TypeVar('Params') class JMISelector(SupervisedLearnerPrimitiveBase[Inputs, Outputs, Params]): __author__ = "RPI DARPA D3M" """ This class implements a feature selection method: It automatically select features that best fit the labels Input: A: Collection of vectors in high dimensional space. Concretely, inputs are doubly-indexable numbers that can be called as A[i,j]. Rows i are samples and columns j are features. The entries can be integer, continuous values or categorical values, but should be expressed in a numerical form. B: The corresponding labels as a vector in a numerical form. This should be a row vector, and the length should be identical to the number of raw in A Output: W: Dimensionality reduced vectors, it is a numpy matrix with one row per vector. """ def __init__(self): super().__init__() self.is_feature_selection = True self.hyperparameters = {} self.training_inputs = None self.training_outputs = None self.fitted = False def set_training_data(self, inputs: Inputs, outputs: Outputs) -> None: self.training_inputs = inputs self.training_outputs = outputs self.fitted = False def fit(self) -> None: if self.fitted: return True if self.training_inputs.any() == None or self.training_outputs.any() == None: raise ValueError('Missing training data, or missing values exist.') scipy.io.savemat('datamat.mat', mdict={'traindata': self.training_inputs, 'traintargets': self.training_outputs}) jmi = rpi_featureSelection_matlab_tools.initialize() index = np.array(jmi.JMI_search()) if index.shape == (): raise ValueError('Feature selection failed.') print(index.shape[0]) index = np.reshape(index, [index.shape[0], ]) self.hyperparameters['index'] = (index - 1).astype(int) os.remove('datamat.mat') return True def produce(self, inputs: Inputs) -> Outputs: # inputs: m x n numpy array if 'index' in self.hyperparameters.keys(): return inputs[:, self.hyperparameters['index']] else: # print 'Please fit the model first using X = model.fit(training_data, training_labels)' raise ValueError('Model should be fitted first.') ''' def fit_transform(self, A, B): scipy.io.savemat('datamat.mat', mdict={'traindata': A, 'traintargets': B}) my_stmb = STMB_FeatureSelection_primitive.initialize() W = np.array(my_stmb.STMB_binsearch()) os.remove('datamat.mat') return W ''' fit.__annotations__ = {'A': 'NumpyArray(m x n)', 'B': 'NumpyArray(m x 1)', 'return': None} produce.__annotations__ = {'A': 'NumpyArray(m x n)', 'return': 'NumpyArray(m x k)'} def get_params(self): pass def set_params(self): pass
/rpi_featureSelection_matlab_tools-1.0.3.tar.gz/rpi_featureSelection_matlab_tools-1.0.3/rpi_featureSelection_matlab_tools/JMISelector.py
0.654674
0.657002
JMISelector.py
pypi
import os from typing import NamedTuple, Union, List, Sequence, Any from typing import TypeVar import scipy.io import numpy as np from d3m_types.sequence import ndarray from primitive_interfaces.supervised_learning import SupervisedLearnerPrimitiveBase import rpi_featureSelection_matlab_tools # These are just regular Python variables so that we can easily change all types # at once in the future, if needed. Otherwise, one could simply inline all these. Inputs = ndarray Outputs = ndarray Params = TypeVar('Params') class STMBSelector(SupervisedLearnerPrimitiveBase[Inputs, Outputs, Params]): __author__ = "RPI DARPA D3M" """ This class implements a feature selection method: It automatically select features that best fit the labels Input: A: Collection of vectors in high dimensional space. Concretely, inputs are doubly-indexable numbers that can be called as A[i,j]. Rows i are samples and columns j are features. The entries can be integer, continuous values or categorical values, but should be expressed in a numerical form. B: The corresponding labels as a vector in a numerical form. This should be a row vector, and the length should be identical to the number of raw in A Output: W: Dimensionality reduced vectors, it is a numpy matrix with one row per vector. """ def __init__(self): super().__init__() self.is_feature_selection = True self.hyperparameters = {} self.training_inputs = None self.training_outputs = None self.fitted = False def set_training_data(self, inputs: Inputs, outputs: Outputs) -> None: self.training_inputs = inputs self.training_outputs = outputs self.fitted = False def fit(self) -> None: if self.fitted: return True if self.training_inputs.any() == None or self.training_outputs.any() == None: raise ValueError('Missing training data, or missing values exist.') scipy.io.savemat('datamat.mat', mdict={'traindata': self.training_inputs, 'traintargets': self.training_outputs}) stmb = rpi_featureSelection_matlab_tools.initialize() index = np.array(stmb.STMB_binsearch()) if index.shape == (): raise ValueError('Feature selection failed.') print(index.shape[0]) index = np.reshape(index, [index.shape[0], ]) self.hyperparameters['index'] = (index - 1).astype(int) os.remove('datamat.mat') return True def produce(self, inputs: Inputs) -> Outputs: # inputs: m x n numpy array if 'index' in self.hyperparameters.keys(): return inputs[:, self.hyperparameters['index']] else: # print 'Please fit the model first using X = model.fit(training_data, training_labels)' raise ValueError('Model should be fitted first.') fit.__annotations__ = {'A': 'NumpyArray(m x n)', 'B': 'NumpyArray(m x 1)', 'return': None} produce.__annotations__ = {'A': 'NumpyArray(m x n)', 'return': 'NumpyArray(m x k)'} def get_params(self): pass def set_params(self): pass
/rpi_featureSelection_matlab_tools-1.0.3.tar.gz/rpi_featureSelection_matlab_tools-1.0.3/rpi_featureSelection_matlab_tools/STMBSelector.py
0.548674
0.589657
STMBSelector.py
pypi
import os from typing import NamedTuple, Union, List, Sequence, Any from typing import TypeVar import scipy.io import numpy as np from d3m_types.sequence import ndarray from primitive_interfaces.supervised_learning import SupervisedLearnerPrimitiveBase import rpi_featureSelection_matlab_tools # These are just regular Python variables so that we can easily change all types # at once in the future, if needed. Otherwise, one could simply inline all these. Inputs = ndarray Outputs = ndarray Params = TypeVar('Params') class IPCMBSelector(SupervisedLearnerPrimitiveBase[Inputs, Outputs, Params]): __author__ = "RPI DARPA D3M" """ This class implements a feature selection method: It automatically select features that best fit the labels Input: A: Collection of vectors in high dimensional space. Concretely, inputs are doubly-indexable numbers that can be called as A[i,j]. Rows i are samples and columns j are features. The entries can be integer, continuous values or categorical values, but should be expressed in a numerical form. B: The corresponding labels as a vector in a numerical form. This should be a row vector, and the length should be identical to the number of raw in A Output: W: Dimensionality reduced vectors, it is a numpy matrix with one row per vector. """ def __init__(self): super().__init__() self.is_feature_selection = True self.hyperparameters = {} self.training_inputs = None self.training_outputs = None self.fitted = False def set_training_data(self, inputs: Inputs, outputs: Outputs) -> None: self.training_inputs = inputs self.training_outputs = outputs self.fitted = False def fit(self) -> None: if self.fitted: return True if self.training_inputs.any() == None or self.training_outputs.any() == None: raise ValueError('Missing training data, or missing values exist.') scipy.io.savemat('datamat.mat', mdict={'traindata': self.training_inputs, 'traintargets': self.training_outputs}) ipcmb = rpi_featureSelection_matlab_tools.initialize() index = np.array(ipcmb.IPCMB_binsearch()) if index.shape == (): raise ValueError('Feature selection failed.') print(index.shape[0]) index = np.reshape(index, [index.shape[0], ]) self.hyperparameters['index'] = (index - 1).astype(int) os.remove('datamat.mat') return True def produce(self, inputs: Inputs) -> Outputs: # inputs: m x n numpy array if 'index' in self.hyperparameters.keys(): return inputs[:, self.hyperparameters['index']] else: # print 'Please fit the model first using X = model.fit(training_data, training_labels)' raise ValueError('Model should be fitted first.') ''' def fit_transform(self, A, B): scipy.io.savemat('datamat.mat', mdict={'traindata': A, 'traintargets': B}) my_stmb = STMB_FeatureSelection_primitive.initialize() W = np.array(my_stmb.STMB_binsearch()) os.remove('datamat.mat') return W ''' fit.__annotations__ = {'A': 'NumpyArray(m x n)', 'B': 'NumpyArray(m x 1)', 'return': None} produce.__annotations__ = {'A': 'NumpyArray(m x n)', 'return': 'NumpyArray(m x k)'} def get_params(self): pass def set_params(self): pass
/rpi_featureSelection_matlab_tools-1.0.3.tar.gz/rpi_featureSelection_matlab_tools-1.0.3/rpi_featureSelection_matlab_tools/IPCMBSelector.py
0.651466
0.634826
IPCMBSelector.py
pypi
from __future__ import print_function, division from RPi import GPIO from greenhouse_database import GreenhouseDatabase from datetime import datetime from time import sleep, time import math from sys import exit try: import Adafruit_DHT except ImportError: print("Adafruit DHT library missing.") exit(0) GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) class Greenhouse(object): DHT_SENSOR = Adafruit_DHT.DHT22 DHT = 19 SOIL = 26 LIGHT = 18 def __init__(self, db_path='/home/pi/.greenhouse/greenhouse.db'): """ db_path defaults to /home/pi/.greenhouse/greenhouse.db """ self.db = GreenhouseDatabase(db_path) self.darkness_level = 0.01 @property def temperature(self): humidity, temperature = self._get_humidity_and_temperature() return temperature @property def humidity(self): humidity, temperature = self._get_humidity_and_temperature() return humidity @property def soil(self): return self._get_average_soil_moisture(5) @property def light(self): return self._get_average_light_level(5) def _get_humidity_and_temperature(self): humidity, temperature = Adafruit_DHT.read_retry( sensor=self.DHT_SENSOR, pin=self.DHT, retries=5 ) return (humidity, temperature) def _get_soil_moisture(self): time_taken = self._time_charging_soil_capacitor() totally_wet_time = 8E-6 totally_dry_time = 0.01 moisture = ( math.log(time_taken / totally_dry_time) / math.log(totally_wet_time / totally_dry_time) ) return max(0, min(1, moisture)) * 100 def _time_charging_soil_capacitor(self): pin = self.SOIL GPIO.setup(pin, GPIO.OUT) GPIO.output(pin, GPIO.LOW) sleep(0.1) GPIO.setup(pin, GPIO.IN) start_time = time() end_time = time() max_time = 1 while GPIO.input(pin) == GPIO.LOW and time() - start_time < max_time: end_time = time() time_taken = end_time - start_time return time_taken def _get_light_level(self): time_taken = self._time_charging_light_capacitor() value = 100 * time_taken / self.darkness_level return 100 - value def _time_charging_light_capacitor(self): pin = self.LIGHT GPIO.setup(pin, GPIO.OUT) GPIO.output(pin, GPIO.LOW) sleep(0.1) GPIO.setup(pin, GPIO.IN) start_time = time() end_time = time() while ( GPIO.input(pin) == GPIO.LOW and time() - start_time < self.darkness_level ): end_time = time() time_taken = end_time - start_time return min(time_taken, self.darkness_level) def _get_average_soil_moisture(self, num): values = [self._get_soil_moisture() for n in range(num)] average_value = sum(values) / len(values) return average_value def _get_average_light_level(self, num): values = [self._get_light_level() for n in range(num)] average_value = sum(values) / len(values) return average_value def _get_timestamp(self): dt = datetime.now() dt_date = str(dt.date()) dt_time = str(dt.time()) timestamp = "%s %s" % (dt_date, dt_time[:8]) return timestamp def record_sensor_values(self): """ Save sensor readings to database """ timestamp = self._get_timestamp() temperature = self.temperature humidity = self.humidity soil = self.soil light = self.light values = (timestamp, temperature, humidity, soil, light) self.db.record_sensor_values(values) def export_to_csv(self, file_path='/home/pi/greenhouse.csv'): """ Export sensor data from database and save as CSV file in file_path Defaults to /home/pi/greenhouse.csv """ self.db.export_to_csv(file_path) def main(): greenhouse = Greenhouse() greenhouse.record_sensor_values() greenhouse.export_to_csv() print("Temperature:") print(greenhouse.temperature) print("Humidity:") print(greenhouse.humidity) print("Soil:") print(greenhouse.soil) print("Light:") print(greenhouse.light) if __name__ == '__main__': main()
/rpi-greenhouse-0.4.1.tar.gz/rpi-greenhouse-0.4.1/rpi_greenhouse/greenhouse.py
0.55447
0.216156
greenhouse.py
pypi
from __future__ import print_function, division from RPi import GPIO from greenhouse_database import GreenhouseDatabase from time import sleep GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) class GreenhouseIndicator(object): LED_COLOURS = [ 'white', 'red', 'blue', 'green', ] LEDS = { 'white': [13, 9, 27], 'red': [16, 11, 23], 'blue': [20, 6, 22], 'green': [21, 12, 25], } SENSOR_LOW = 'low' SENSOR_OK = 'ok' SENSOR_HIGH = 'high' def __init__(self, db_path='/home/pi/.greenhouse/greenhouse.db'): """ db_path defaults to /home/pi/.greenhouse/greenhouse.db """ self.db = GreenhouseDatabase(db_path) self.target_temperature_lower = 20 self.target_temperature_upper = 30 self.target_humidity_lower = 60 self.target_humidity_upper = 85 self.target_soil = 60 self.target_light = 60 self.status_colours = { self.SENSOR_LOW: 'blue', self.SENSOR_OK: 'green', self.SENSOR_HIGH: 'red', } self._setup_gpio() @property def temperature(self): return self.db.get_sensor_value('temperature') @property def humidity(self): return self.db.get_sensor_value('humidity') @property def soil(self): return self.db.get_sensor_value('soil') @property def light(self): return self.db.get_sensor_value('light') @property def temperature_status(self): lower = self.target_temperature_lower upper = self.target_temperature_upper if lower <= self.temperature <= upper: return self.SENSOR_OK elif self.temperature < lower: return self.SENSOR_LOW elif self.temperature > higher: return self.SENSOR_HIGH @property def humidity_status(self): lower = self.target_humidity_lower upper = self.target_humidity_upper if lower <= self.humidity <= upper: return self.SENSOR_OK elif self.humidity < lower: return self.SENSOR_LOW elif self.humidity > upper: return self.SENSOR_HIGH @property def soil_status(self): if self.soil > self.target_soil: return self.SENSOR_OK else: return self.SENSOR_LOW @property def light_status(self): if self.light >= self.target_light: return self.SENSOR_OK else: return self.SENSOR_LOW def _setup_gpio(self): for colour in self.LEDS: for led in self.LEDS[colour]: GPIO.setup(led, GPIO.OUT) GPIO.output(led, False) def _turn_led_on_or_off(self, colour, index, on_or_off): led = self.LEDS[colour][index] GPIO.output(led, on_or_off) def _turn_led_on(self, colour, index): self._turn_led_on_or_off(colour, index, on_or_off=True) def _turn_led_off(self, colour, index): self._turn_led_on_or_off(colour, index, on_or_off=False) def _turn_colour_leds_on_or_off(self, colour, on_or_off): leds = self.LEDS[colour] for led in range(len(leds)): if on_or_off: self._turn_led_on(colour, led) else: self._turn_led_off(colour, led) def _turn_colour_leds_on(self, colour): self._turn_colour_leds_on_or_off(colour=colour, on_or_off=True) def _turn_colour_leds_off(self, colour): self._turn_colour_leds_on_or_off(colour=colour, on_or_off=False) def _turn_index_leds_on_or_off(self, index, on_or_off): for colour in self.LEDS: if on_or_off: self._turn_led_on(colour, index) else: self._turn_led_off(colour, index) def _turn_index_leds_on(self, index): self._turn_index_leds_on_or_off(index=index, on_or_off=True) def _turn_index_leds_off(self, index): self._turn_index_leds_on_or_off(index=index, on_or_off=False) def _turn_all_leds_on_or_off(self, on_or_off): for colour in self.LEDS: if on_or_off: self._turn_colour_leds_on(colour) else: self._turn_colour_leds_off(colour) def _turn_all_leds_on(self): self._turn_all_leds_on_or_off(on_or_off=True) def _turn_all_leds_off(self): self._turn_all_leds_on_or_off(on_or_off=False) def turn_leds_on(self, colour=None, index=None): """ Turn LEDs on - if colour given, only that colour - if index given, only that index - if both given, only that LED - if neither given, all LEDs e.g. turn_leds_on() e.g. turn_leds_on(colour='red') e.g. turn_leds_on(index=0) e.g. turn_leds_on(colour='red', index=0) """ if colour and index is not None: self._turn_led_on(colour, index) elif colour: self._turn_colour_leds_on(colour) elif index is not None: self._turn_index_leds_on(index) else: self._turn_all_leds_on() def turn_leds_off(self, colour=None, index=None): """ Turn LEDs off - if colour given, only that colour - if index given, only that index - if both given, only that LED - if neither given, all LEDs e.g. turn_leds_off() e.g. turn_leds_off(colour='red') e.g. turn_leds_off(index=0) e.g. turn_leds_off(colour='red', index=0) """ if colour and index is not None: self._turn_led_off(colour, index) elif colour: self._turn_colour_leds_off(colour) elif index is not None: self._turn_index_leds_off(index) else: self._turn_all_leds_off() def show_status_on_leds(self): """ Use LEDs to indicate sensor statuses according to self.status_colours """ sensor_statuses = [ self.temperature_status, self.humidity_status, self.soil_status, self.light_status, ] for status in sensor_statuses: colour = self.status_colours[status] self.turn_leds_on(colour) sleep(2) self.turn_leds_off(colour) sleep(0.1) def main(): indicator = GreenhouseIndicator() print("Temperature:") print(indicator.temperature) print("Humidity:") print(indicator.humidity) print("Soil:") print(indicator.soil) print("Light:") print(indicator.light) if indicator.temperature_status == 'ok': print("Temperature ok") elif indicator.temperature_status == 'low': print("Temperature too low") elif indicator.temperature_status == 'high': print("Temperature too high") if indicator.humidity_status == 'ok': print("Humidity ok") elif indicator.humidity_status == 'low': print("Humidity too low") elif indicator.humidity_status == 'high': print("Humidity too high") if indicator.soil_status == 'ok': print("Soil ok") else: print("Soil too dry") if indicator.light_status == 'ok': print("Light ok") else: print("Light not ok") while True: indicator.show_status_on_leds() sleep(5) if __name__ == '__main__': main()
/rpi-greenhouse-0.4.1.tar.gz/rpi-greenhouse-0.4.1/rpi_greenhouse/greenhouse_indicator.py
0.812421
0.153137
greenhouse_indicator.py
pypi
import os import os.path class HardwarePWMException(Exception): pass class HardwarePWM: """ Control the hardware PWM on the Raspberry Pi. Need to first add `dtoverlay=pwm-2chan` to `/boot/config.txt`. pwm0 is GPIO pin 18 is physical pin 12 pwm1 is GPIO pin 19 is physical pin 13 Example ---------- >pwm = HardwarePWM(0, hz=20) >pwm.start(100) > >pwm.change_duty_cycle(50) >pwm.change_frequency(50) > >pwm.stop() Notes -------- - If you get "write error: Invalid argument" - you have to set duty_cycle to 0 before changing period - /sys/ pwm interface described here: https://jumpnowtek.com/rpi/Using-the-Raspberry-Pi-Hardware-PWM-timers.html """ _duty_cycle: float _hz: float chippath: str = "/sys/class/pwm/pwmchip0" def __init__(self, pwm_channel: int, hz: float) -> None: if pwm_channel not in {0, 1}: raise HardwarePWMException("Only channel 0 and 1 are available on the Rpi.") self.pwm_channel = pwm_channel self.pwm_dir = f"{self.chippath}/pwm{self.pwm_channel}" self._duty_cycle = 0 if not self.is_overlay_loaded(): raise HardwarePWMException( "Need to add 'dtoverlay=pwm-2chan' to /boot/config.txt and reboot" ) if not self.is_export_writable(): raise HardwarePWMException(f"Need write access to files in '{self.chippath}'") if not self.does_pwmX_exists(): self.create_pwmX() while True: try: self.change_frequency(hz) break except PermissionError: continue def is_overlay_loaded(self) -> bool: return os.path.isdir(self.chippath) def is_export_writable(self) -> bool: return os.access(os.path.join(self.chippath, "export"), os.W_OK) def does_pwmX_exists(self) -> bool: return os.path.isdir(self.pwm_dir) def echo(self, message: int, file: str) -> None: with open(file, "w") as f: f.write(f"{message}\n") def create_pwmX(self) -> None: self.echo(self.pwm_channel, os.path.join(self.chippath, "export")) def start(self, initial_duty_cycle: float) -> None: self.change_duty_cycle(initial_duty_cycle) self.echo(1, os.path.join(self.pwm_dir, "enable")) def stop(self) -> None: self.change_duty_cycle(0) self.echo(0, os.path.join(self.pwm_dir, "enable")) def change_duty_cycle(self, duty_cycle: float) -> None: """ a value between 0 and 100 0 represents always low. 100 represents always high. """ if not (0 <= duty_cycle <= 100): raise HardwarePWMException("Duty cycle must be between 0 and 100 (inclusive).") self._duty_cycle = duty_cycle per = 1 / float(self._hz) per *= 1000 # now in milliseconds per *= 1_000_000 # now in nanoseconds dc = int(per * duty_cycle / 100) self.echo(dc, os.path.join(self.pwm_dir, "duty_cycle")) def change_frequency(self, hz: float) -> None: if hz < 0.1: raise HardwarePWMException("Frequency can't be lower than 0.1 on the Rpi.") self._hz = hz # we first have to change duty cycle, since https://stackoverflow.com/a/23050835/1895939 original_duty_cycle = self._duty_cycle if self._duty_cycle: self.change_duty_cycle(0) per = 1 / float(self._hz) per *= 1000 # now in milliseconds per *= 1_000_000 # now in nanoseconds self.echo(int(per), os.path.join(self.pwm_dir, "period")) self.change_duty_cycle(original_duty_cycle)
/rpi_hardware_pwm-0.1.4-py3-none-any.whl/rpi_hardware_pwm/__init__.py
0.596081
0.205635
__init__.py
pypi
import numpy as np class Buffer: """ Implenets a circular queue, but optimized around inserting and removing chunks of data at a time. This makes all the indexing logic suprisingly intricate but hey, its fast. Necessary because python isnt's well know for its speed in loops all the iterative logic is delegated to numpy, which should be fast. Or maybe I'm pre-optimizing. Its not like I ran benchmarks. """ def __init__(self, length: int): self.max_length: int = length self.arr = np.zeros(self.max_length, dtype=np.float) self.start: int = 0 self.length: int = 0 def push(self, data: np.ndarray): from_start = 0 from_length = len(data) if from_length >= self.max_length: # easy case, fill the whole array and reset self.arr[0:self.max_length] = data[-self.max_length:] self.start = 0 self.length = self.max_length return # We might truncate from the end if we add all these items. If so, adjust the array truncate = from_length + self.length - self.max_length if truncate > 0: self.length -= truncate self.start = (self.start + truncate) % self.max_length first_start = (self.start + self.length) % self.max_length first_end = min(first_start + from_length, self.max_length) first_length = first_end - first_start self.arr[first_start:first_start + first_length] = data[from_start: from_start + first_length] if first_length < from_length: delta = from_length - first_length self.arr[0: delta] = data[from_start + first_length: from_start + first_length + delta] self.length += from_length def read(self, amount: int) -> np.ndarray: if amount > self.length: amount = self.length ret = np.zeros(amount) first_end = amount + self.start first_length = amount if first_end > self.max_length: first_length = self.max_length - self.start first_end = self.max_length if first_end - self.start > amount: first_end = self.start + amount ret[0:first_length] = self.arr[self.start:first_end] ret[first_length:] = self.arr[0:amount-first_length] return ret def pop(self, amount: int): ret = self.read(amount) amount = len(ret) if amount == self.length: self.start = 0 self.length = 0 else: self.length -= amount self.start = (self.start + amount) % self.max_length return ret
/rpi_intercom-0.0.19-py3-none-any.whl/rpi_intercom/circular_buffer.py
0.68721
0.47658
circular_buffer.py
pypi
from __future__ import annotations import dataclasses from typing import Callable import threading class MissingGPIOLibraryError(Exception): pass try: from RPi import GPIO as gpio except ImportError: raise MissingGPIOLibraryError( "Could not import RPi.GPIO. If this code is running on a raspberry pi, " "make sure that the rpi-gpio library is installed. You may install it " "by running `pip install rpi-gpio`." ) class NotInRestingStateError(Exception): pass @dataclasses.dataclass class RotaryEncoder: _clk_pin: int _dt_pin: int increment: Callable[[], None] decrement: Callable[[], None] def __post_init__(self) -> None: self._clk_state = False self._dt_state = False self._last_resting_state = False self._state_lock = threading.Lock() def __enter__(self) -> RotaryEncoder: gpio.setup(self._clk_pin, gpio.IN, pull_up_down=gpio.PUD_DOWN) gpio.setup(self._dt_pin, gpio.IN, pull_up_down=gpio.PUD_DOWN) self._clk_state = self._get_clk_state() self._dt_state = self._get_dt_state() self._last_resting_state = self._current_resting_state() gpio.add_event_detect(self._clk_pin, gpio.BOTH, callback=self._on_clk_changed) gpio.add_event_detect(self._dt_pin, gpio.BOTH, callback=self._on_dt_changed) return self def __exit__(self, exc_type: object, exc_val: object, exc_tb: object) -> None: gpio.remove_event_detect(self._clk_pin) gpio.remove_event_detect(self._dt_pin) gpio.cleanup((self._clk_pin, self._dt_pin)) def _get_clk_state(self) -> bool: return bool(gpio.input(self._clk_pin)) def _get_dt_state(self) -> bool: return bool(gpio.input(self._dt_pin)) def _is_resting_state(self) -> bool: return self._clk_state == self._dt_state def _current_resting_state(self) -> bool: if not self._is_resting_state(): raise NotInRestingStateError() return self._clk_state def _did_dial_move(self) -> bool: if self._is_resting_state() and self._current_resting_state() != self._last_resting_state: self._last_resting_state = self._current_resting_state() return True return False def _on_clk_changed(self, channel: object) -> None: with self._state_lock: self._dt_state = self._get_dt_state() self._clk_state = self._get_clk_state() if not self._did_dial_move(): return self.decrement() def _on_dt_changed(self, channel: object) -> None: with self._state_lock: self._dt_state = self._get_dt_state() self._clk_state = self._get_clk_state() if not self._did_dial_move(): return self.increment()
/rpi_ky_040-0.1.0-py3-none-any.whl/rpi_ky_040/_ky_040.py
0.777131
0.178992
_ky_040.py
pypi
from smbus import SMBus from time import sleep ALIGN_FUNC = { 'left': 'ljust', 'right': 'rjust', 'center': 'center'} CLEAR_DISPLAY = 0x01 ENABLE_BIT = 0b00000100 LINES = { 1: 0x80, 2: 0xC0, 3: 0x94, 4: 0xD4} LCD_BACKLIGHT = 0x08 LCD_NOBACKLIGHT = 0x00 class LCD(object): def __init__(self, address=0x27, bus=1, width=20, rows=4, backlight=True): self.address = address self.bus = SMBus(bus) self.delay = 0.0005 self.rows = rows self.width = width self.backlight_status = backlight self.write(0x33) self.write(0x32) self.write(0x06) self.write(0x0C) self.write(0x28) self.write(CLEAR_DISPLAY) sleep(self.delay) def _write_byte(self, byte): self.bus.write_byte(self.address, byte) self.bus.write_byte(self.address, (byte | ENABLE_BIT)) sleep(self.delay) self.bus.write_byte(self.address,(byte & ~ENABLE_BIT)) sleep(self.delay) def write(self, byte, mode=0): backlight_mode = LCD_BACKLIGHT if self.backlight_status else LCD_NOBACKLIGHT self._write_byte(mode | (byte & 0xF0) | backlight_mode) self._write_byte(mode | ((byte << 4) & 0xF0) | backlight_mode) def text(self, text, line, align='left'): self.write(LINES.get(line, LINES[1])) text, other_lines = self.get_text_line(text) text = getattr(text, ALIGN_FUNC.get(align, 'ljust'))(self.width) for char in text: self.write(ord(char), mode=1) if other_lines and line <= self.rows - 1: self.text(other_lines, line + 1, align=align) def backlight(self, turn_on=True): self.backlight_status = turn_on self.write(0) def get_text_line(self, text): line_break = self.width if len(text) > self.width: line_break = text[:self.width + 1].rfind(' ') if line_break < 0: line_break = self.width return text[:line_break], text[line_break:].strip() def clear(self): self.write(CLEAR_DISPLAY)
/rpi-lcd-0.0.3.tar.gz/rpi-lcd-0.0.3/rpi_lcd/__init__.py
0.45423
0.152631
__init__.py
pypi
import io import time import typing import gpiozero from mates.controller import MatesController from mates.data import * from mates.constants import * from mates.commands import MatesCommand class RPiMatesController(MatesController): """ A class representing the Raspberry Pi Python Mates Serial controller. Attributes ---------- reset_output_device: gpiozero.DigitalOutputDevice - driver of reset pin for Mates device mates_reset_pin_index: int - pin number used to drive a hard reset. """ def __init__(self, portName: str, resetPinIndex: typing.Union[int, str]=4, resetActiveHigh: bool=False, debugStream: io.TextIOWrapper=None, debugFileLength: int=50): """ Constructs all the necessary attributes associated with an instance of a Mates Controller Object. Args: portName: str - the name of the port to be opened. Example: /dev/ttyUSB0 for linux. resetPinIndex: int, string - index of pin connected to reset pin of Mates device. resetActiveHigh: bool - whether the reset pin is driven from logic low, to logic high to reset the device. debugStream: io.TextIOWrapper - Text file object to write debugging code to, supply of none will result in no debugging. Examples include sys.stdout, open('log.txt', 'r+') debugFileLength: int - Determines the extent of debug history kept with respect to lines in a file, given a circular log. O indicates full history kept with no circular logging. Users must be careful here to manage storage space effectively. """ self.mates_reset_pin_index = resetPinIndex self.reset_output_device = gpiozero.DigitalOutputDevice( resetPinIndex, active_high=resetActiveHigh, initial_value=False) super().__init__(portName, self.resetFunc, debugStream, debugFileLength) def resetFunc(self): self.reset_output_device.on() time.sleep(0.1) self.reset_output_device.off() if __name__ == '__main__': print("rpi mates controller module")
/rpi-mates-controller-1.0.2.tar.gz/rpi-mates-controller-1.0.2/src/rpi_mates/controller.py
0.610802
0.282388
controller.py
pypi
import logging import re from enum import Enum from fractions import Fraction from rpi_metar.leds import GREEN, RED, BLUE, MAGENTA, YELLOW, BLACK, ORANGE log = logging.getLogger(__name__) class FlightCategory(Enum): VFR = GREEN IFR = RED MVFR = BLUE LIFR = MAGENTA UNKNOWN = YELLOW OFF = BLACK MISSING = ORANGE def get_conditions(metar_info): """Returns the visibility, ceiling, wind speed, and gusts for a given airport from some metar info.""" log.debug(metar_info) visibility = ceiling = None speed = gust = 0 # Visibility # Match metric visibility and convert to SM match = re.search(r'(?P<CAVOK>CAVOK)|(\s(?P<visibility>\d{4}|\/{4})\s)', metar_info) if match: if match.group('visibility'): try: visibility = float(match.group('visibility')) / 1609 except ValueError: visibility = None if match.group('CAVOK'): visibility = 10 # Match SM Visibility # We may have fractions, e.g. 1/8SM or 1 1/2SM # Or it will be whole numbers, e.g. 2SM # There's also variable wind speeds, followed by vis, e.g. 300V360 1/2SM match = re.search(r'(?P<visibility>\b(?:\d+\s+)?\d+(?:/\d)?)SM', metar_info) if match: visibility = match.group('visibility') try: visibility = float(sum(Fraction(s) for s in visibility.split())) except ZeroDivisionError: visibility = None # Ceiling match = re.search(r'(VV|BKN|OVC)(?P<ceiling>\d{3})', metar_info) if match: ceiling = int(match.group('ceiling')) * 100 # It is reported in hundreds of feet # Wind info match = re.search(r'\b\d{3}(?P<speed>\d{2,3})G?(?P<gust>\d{2,3})?KT', metar_info) if match: speed = int(match.group('speed')) gust = int(match.group('gust')) if match.group('gust') else 0 return (visibility, ceiling, speed, gust) def get_flight_category(visibility, ceiling): """Converts weather conditions into a category.""" log.debug('Finding category for %s, %s', visibility, ceiling) if visibility is None and ceiling is None: return FlightCategory.UNKNOWN # Unlimited ceiling if visibility and ceiling is None: ceiling = 10000 # http://www.faraim.org/aim/aim-4-03-14-446.html try: if visibility < 1 or ceiling < 500: return FlightCategory.LIFR elif 1 <= visibility < 3 or 500 <= ceiling < 1000: return FlightCategory.IFR elif 3 <= visibility <= 5 or 1000 <= ceiling <= 3000: return FlightCategory.MVFR elif visibility > 5 and ceiling > 3000: return FlightCategory.VFR except (TypeError, ValueError): log.exception('Failed to get flight category from {vis}, {ceil}'.format( vis=visibility, ceil=ceiling ))
/rpi_metar-0.4.1.tar.gz/rpi_metar-0.4.1/rpi_metar/wx.py
0.680029
0.188324
wx.py
pypi
from RPi import GPIO import logging log = logging.getLogger(__name__) # The two pins that the encoder uses (BCM numbering). GPIO_A = 23 GPIO_B = 25 class RotaryEncoder: def __init__(self, callback, gpio_a=GPIO_A, gpio_b=GPIO_B): self.last_gpio = None self.gpio_a = gpio_a self.gpio_b = gpio_b self.callback = callback self.level_a = 0 self.level_b = 0 GPIO.setmode(GPIO.BCM) GPIO.setup(self.gpio_a, GPIO.IN, pull_up_down=GPIO.PUD_UP) GPIO.setup(self.gpio_b, GPIO.IN, pull_up_down=GPIO.PUD_UP) GPIO.add_event_detect(self.gpio_a, GPIO.BOTH, self._callback) GPIO.add_event_detect(self.gpio_b, GPIO.BOTH, self._callback) def destroy(self): GPIO.remove_event_detect(self.gpio_a) GPIO.remove_event_detect(self.gpio_b) GPIO.cleanup() def reset(self): self.last_gpio = None self.level_a = 0 self.level_b = 0 def _callback(self, channel): level = GPIO.input(channel) log.debug('{channel} = {level}'.format(channel=channel, level=level)) if channel == self.gpio_a: self.level_a = level else: self.level_b = level if level != 1: return # When both inputs are at 1, we'll fire a callback. If A was the most recent pin set high, # it'll be forward, and if B was the most recent pin set high, it'll be reverse. if channel != self.last_gpio: # debounce self.last_gpio = channel log.debug('set last_gpio to {channel}'.format(channel=channel)) if channel == self.gpio_a and self.level_b == 1: log.debug('A is set and B was already set, callback(1)') self.callback(1) self.reset() elif channel == self.gpio_b and self.level_a == 1: log.debug('B is set and A was already set, callback(-1)') self.callback(-1) self.reset()
/rpi_metar-0.4.1.tar.gz/rpi_metar-0.4.1/rpi_metar/encoder.py
0.597256
0.168823
encoder.py
pypi
import csv import logging import re import requests import time from pkg_resources import resource_filename from retrying import retry from xmltodict import parse as parsexml log = logging.getLogger(__name__) def chunks(l, n): """Yield successive n-sized chunks from l.""" for i in range(0, len(l), n): yield l[i:i + n] class METARSource: @retry(wait_exponential_multiplier=1000, wait_exponential_max=10000, stop_max_attempt_number=10) def _query(self): """Queries the NOAA METAR service.""" log.info(self.url) try: response = requests.get(self.url, timeout=10.0) response.raise_for_status() except: # noqa log.exception('Metar query failure.') raise return response class NOAA(METARSource): URL = ( 'https://{subdomain}.aviationweather.gov/adds/dataserver_current/httpparam' '?dataSource=metars' '&requestType=retrieve' '&format=xml' '&hoursBeforeNow=2' '&mostRecentForEachStation=true' '&stationString={airport_codes}' ) def __init__(self, airport_codes, subdomain='www'): self.airport_codes = airport_codes self.subdomain = subdomain def get_metar_info(self): """Queries the NOAA METAR service.""" metars = {} # NOAA can only handle so much at once, so split into chunks. # Even though we can issue larger chunk sizes, sometimes data is missing from the returned # results. Smaller chunks seem to help... for chunk in chunks(self.airport_codes, 250): self.url = self.URL.format(airport_codes=','.join(chunk), subdomain=self.subdomain) response = self._query() try: response = parsexml(response.text)['response']['data']['METAR'] if not isinstance(response, list): response = [response] except: # noqa log.exception('Metar response is invalid.') raise finally: # ...but with more requests, we should be nice and wait a bit before the next time.sleep(1.0) for m in response: metars[m['station_id'].upper()] = m return metars class NOAABackup(NOAA): def __init__(self, airport_codes): super(NOAABackup, self).__init__(airport_codes, subdomain='bcaws') class SkyVector(METARSource): URL = ( 'https://skyvector.com/api/dLayer' '?ll1={lat1},{lon1}' # lower left '&ll2={lat2},{lon2}' # upper right '&layers=metar' ) def _find_coordinates(self): data = {} file_name = resource_filename('rpi_metar', 'data/us-airports.csv') with open(file_name, newline='') as csvfile: reader = csv.reader(csvfile) for row in reader: airport_code, lat, lon = row if airport_code in self.airport_codes: data[airport_code] = (lat, lon) self.data = data lat1 = min((float(lat) for lat, _ in data.values())) lon1 = min((float(lon) for _, lon in data.values())) lat2 = max((float(lat) for lat, _ in data.values())) lon2 = max((float(lon) for _, lon in data.values())) # skyvector either isn't inclusive, or our data doesn't match theirs. Regardless, we # must expand the search area slightly. lat1, lon1 = map(lambda x: x - 0.5, [lat1, lon1]) lat2, lon2 = map(lambda x: x + 0.5, [lat2, lon2]) self.url = SkyVector.URL.format(lat1=lat1, lon1=lon1, lat2=lat2, lon2=lon2) def __init__(self, airport_codes): # Set lat / long info for the request... self.airport_codes = [code.upper() for code in airport_codes] self._find_coordinates() def get_metar_info(self): response = self._query() try: data = response.json()['weather'] except: # noqa log.exception('Metar response is invalid.') raise """Sample response: [{'a': '01h 02m ago', 'd': '2018-08-22 18:56:00', 'i': '0VFR.png', 'lat': '40.4518278', 'lon': '-105.0113361', 'm': 'KFNL 221856Z AUTO VRB03KT 6SM HZ CLR 23/14 A3025 RMK AO2 SLP194 T02280139 PNO $', 'n': 'FT COLLINS/LOVEL', 's': 'KFNL', 't': None}, ... ] """ # Make the return match the format of the other sources. metars = {} for item in data: if item['s'] in self.airport_codes: metars[item['s'].upper()] = {'raw_text': item['m']} return metars class BOM(METARSource): """Queries the BOM website service.""" URL = 'http://www.bom.gov.au/aviation/php/process.php' def __init__(self, airport_codes): self.airport_codes = ','.join(airport_codes) def get_metar_info(self): payload = { 'keyword': self.airport_codes, 'type': 'search', 'page': 'TAF', } r = requests.post(self.URL, data=payload) matches = re.finditer(r'(?:METAR |SPECI )(?P<METAR>(?P<CODE>\w{4}).*?)(?:<br />|<h3>)', r.text) metars = {} for match in matches: info = match.groupdict() metars[info['CODE'].upper()] = {'raw_text': info['METAR']} return metars
/rpi_metar-0.4.1.tar.gz/rpi_metar-0.4.1/rpi_metar/sources.py
0.625896
0.213931
sources.py
pypi
import smbus import struct class PiUSV: """ Driver for reading status flags and parameters from RPI USV+ Raspberry Pi - USV+. https://www.reichelt.de/raspberry-pi-usv-rpi-usv-p169883.html """ status_definition = { 'usb_power': 0x01, 'external_power': 0x02, 'battery_low': 0x04, 'battery_charging': 0x08, 'battery_full': 0x10, 'button_s1': 0x20, } parameter_names = [ 'battery_voltage', 'device_current', 'device_voltage', 'usb_voltage', 'external_voltage', ] def __init__(self, bus): self.bus = bus self.address = 0x18 def read(self): """ Read both parameters and status flags and return as dictionary. """ # Read parameters and status. status = self.read_status() parameters = self.read_parameters() # Merge parameters and status. data = {} data.update(parameters) data.update(status) return data def read_status(self): """ Read and decode all status flags. """ # Signal reading status flags. self.bus.write_byte(self.address, 0x00) # Read status byte. status_byte = self.bus.read_byte(self.address) # Decode status byte into dictionary with names. status = {} for status_name, status_mask in self.status_definition.items(): status[status_name] = bool((status_byte & status_mask) == status_mask) return status def read_parameters(self): """ Read and decode all parameters. ["U_Batt (V)", "I_Rasp (A)", "U_Rasp (V)", "U_USB (V)", "U_ext (V)"]. """ # Signal reading parameters. self.bus.write_byte(self.address, 0x02) # Read raw values into bytearray buffer. buffer = bytearray() for _ in range(10): item = self.bus.read_byte(self.address) buffer.append(item) # Decode binary data. # https://docs.python.org/3/library/struct.html values_raw = struct.unpack('>hhhhh', buffer) # Apply scaling. values = map(lambda x: x / 1000.0, values_raw) # Mix-in parameter names. data = dict(zip(self.parameter_names, values)) return data def read_firmware_version(self): """ Read firmware version. """ # Signal reading firmware version. self.bus.write_byte(self.address, 0x01) # Read 12 characters. version = '' for _ in range(12): version += chr(self.bus.read_byte(self.address)) return version def main(): bus = smbus.SMBus(1) # Patch status definition for old firmwares. Unklar! #PiUSV.status_definition['usb_power'] = 0x02 #PiUSV.status_definition['external_power'] = 0x01 piusv = PiUSV(bus) print('Firmware version:', piusv.read_firmware_version()) print('Data:', piusv.read()) if __name__ == '__main__': main()
/rpi-piusv-0.1.0.tar.gz/rpi-piusv-0.1.0/rpi_piusv.py
0.561936
0.452113
rpi_piusv.py
pypi
from __future__ import annotations from enum import Enum from typing import Callable from raspi_gpio import GPIO from .pin_manager import tickables, managers class LedState(Enum): off = 0 on = 1 blink = 2 fast_blink = 3 class LedManager: _get_pin_state: Callable[[], LedState] _last_led_state: LedState = LedState.off def __init__(self, pin: int): self.pin = pin GPIO.setup(pin, GPIO.OUT) GPIO.output(pin, GPIO.LOW) def tick(self, period: float = 1.0): '''period must be in range 0.0...1.0''' match self._last_led_state: case LedState.off: GPIO.output(self.pin, GPIO.LOW) case LedState.on: GPIO.output(self.pin, GPIO.HIGH) case LedState.blink: GPIO.output(self.pin, GPIO.LOW if period < 0.5 else GPIO.HIGH) case LedState.fast_blink: GPIO.output(self.pin, GPIO.LOW if period % 0.2 < 0.1 else GPIO.HIGH) def update(self, *args, **kwargs): self._last_led_state = self._get_pin_state(*args, **kwargs) # decorator def __call__(self, get_pin_state: Callable[..., LedState]) -> LedManager: global managers self._get_pin_state = get_pin_state managers.append(self) tickables.append(self) return self class RGBLedState(Enum): off = 0 red = 1 yellow = 2 green = 3 blue = 4 class RGBLedManager: _get_pin_state: Callable[[], RGBLedState] _last_led_state: RGBLedState = RGBLedState.off def __init__(self, red_pin: int, green_pin: int, blue_pin: int): self.red_pin = red_pin self.green_pin = green_pin self.blue_pin = blue_pin GPIO.setup(red_pin, GPIO.OUT) GPIO.setup(green_pin, GPIO.OUT) GPIO.setup(blue_pin, GPIO.OUT) GPIO.output(red_pin, GPIO.LOW) GPIO.output(green_pin, GPIO.LOW) GPIO.output(blue_pin, GPIO.LOW) def tick(self, period: float = 1.0): '''period must be in range 0.0...1.0''' match self._last_led_state: case RGBLedState.off: GPIO.output(self.red_pin, GPIO.LOW) GPIO.output(self.green_pin, GPIO.LOW) GPIO.output(self.blue_pin, GPIO.LOW) case RGBLedState.red: GPIO.output(self.red_pin, GPIO.HIGH) GPIO.output(self.green_pin, GPIO.LOW) GPIO.output(self.blue_pin, GPIO.LOW) case RGBLedState.yellow: GPIO.output(self.red_pin, GPIO.HIGH if period < 0.5 else GPIO.LOW) GPIO.output(self.green_pin, GPIO.HIGH if period < 0.5 else GPIO.LOW) GPIO.output(self.blue_pin, GPIO.LOW) case RGBLedState.green: GPIO.output(self.red_pin, GPIO.LOW) GPIO.output(self.green_pin, GPIO.HIGH) GPIO.output(self.blue_pin, GPIO.LOW) case RGBLedState.blue: GPIO.output(self.red_pin, GPIO.LOW) GPIO.output(self.green_pin, GPIO.LOW) GPIO.output(self.blue_pin, GPIO.HIGH) def update(self, *args, **kwargs): self._last_led_state = self._get_pin_state(*args, **kwargs) # decorator def __call__(self, get_pin_state: Callable[..., RGBLedState]) -> RGBLedManager: global managers self._get_pin_state = get_pin_state managers.append(self) return self
/rpi_reactive_gpio-0.1.8.tar.gz/rpi_reactive_gpio-0.1.8/rpi_reactive_gpio/leds.py
0.823009
0.289924
leds.py
pypi
import os import re import dateutil import matplotlib as mpl mpl.use('tkagg') import matplotlib.pyplot as plt import matplotlib.dates as md # My modules from . import RpiTempmonWork as Work class MatplotGraph(object): """ class with 6 methods to display graphs with matplotlib Methods: (1) init: pass (2) graph_log_data: Define data to draw graphs based on user choice (3) display_menu: Method to display a menu for user to select graph type (4) draw_graph: Draw a graph from logdata (5) graph_live_data: Draw live data graphs (6) Plot_now: Called from method graph_live_data to draw graph """ def __init__(self, name): """init class with name and define self.selection variable""" self.name = name self.selection = 0 def graph_log_data(self, destlog): """define data to draw graphs from logdata""" # display menu return user selection self.display_menu() # get data from log file function data_func timelist, unixlist, cpulist, gpulist, cpu_uselist, ramlist, swaplist\ = Work.data_func(destlog, True) if re.match('[1-4]', self.selection): yaxislist = timelist elif re.match('[5-8]', self.selection): yaxislist = unixlist if (self.selection == '1' or self.selection == '5'): plotlabel1 = 'CPU' plotlabel2 = 'GPU' ylabel = 'Temperature (degrees)' title = 'CPU & GPU temperature of RPi' self.draw_graph(yaxislist, cpulist, gpulist, plotlabel1, plotlabel2, ylabel, title) elif (self.selection == '2' or self.selection == '6'): plotlabel1 = 'CPU temp' plotlabel2 = 'CPU usage' ylabel = 'CPU usage(%) / Temperature (degrees)' title = 'CPU temperature and usage RPi' self.draw_graph(yaxislist, cpulist, cpu_uselist, plotlabel1, plotlabel2, ylabel, title) elif (self.selection == '3' or self.selection == '7'): plotlabel1 = 'RAM' plotlabel2 = 'SWAP' ylabel = 'Memory (% used)' title = 'RAM & Swap memory usage of RPi' self.draw_graph(yaxislist, ramlist, swaplist, plotlabel1, plotlabel2, ylabel, title) elif (self.selection == '4' or self.selection == '8'): plotlabel1 = 'CPU' ylabel = 'CPU (% used)' title = 'CPU usage of RPi' self.draw_graph(yaxislist, cpu_uselist, False, plotlabel1, False, ylabel, title) else: Work.msg_func("red", "Error: graph_log_data: Bad selection value") return def display_menu(self): """ method to display a menu for user to select graph""" os.system('clear') menu = [] menu.append("CPU and GPU Temperature versus Time-date") menu.append("CPU Temperature and CPU usage versus Time-date") menu.append("RAM and Swap memory usage versus Time-date") menu.append("CPU usage versus Time-date") menu.append("CPU and GPU Temperature versus Epoch time") menu.append("CPU Temperature and CPU usage versus Epoch time") menu.append("RAM and Swap memory usage versus Epoch time") menu.append("CPU usage versus Epoch time") menu.append("CPU usage versus live Time") menu.append("GPU Temperature versus live Time") menu.append("RAM usage versus live Time") menu.append("CPU GPU & RAM usage versus live time") menu.append("Exit") try: while True: print("\n") Work.msg_func("blue", "RPI_tempmon :: Graph Menu Options") Work.msg_func("line", "") for number, string in enumerate(menu): print(number+1, string) Work.msg_func("line", "") self.selection = (input("Please Select:")) if int(self.selection) <= 8: return elif self.selection == '9': self.graph_live_data("CPU") break elif self.selection == '10': self.graph_live_data("GPU") break elif self.selection == '11': self.graph_live_data("RAM") break elif self.selection == '12': self.graph_all_live_data() break elif self.selection == '13': quit() else: Work.msg_func("red", "\n ** Warning : Unknown Option Selected! **") Work.msg_func("anykey", "") os.system('clear') except ValueError as error: print(error) Work.msg_func("red", "Error: Wrong menu Input: Integer only : Try Again") quit() def draw_graph(self, timelist, yaxis_list1, yaxis_list2, plot_label1, plot_label2, yaxis_label, graph_title): """ Method to draw graphs two modes, single and doulbe yaxis """ # convert to ints as strings cause issue with graph in new matlib version yaxis_list1 = list(map(float, yaxis_list1)) if plot_label2: # single plot graph mode yaxis_list2 = list(map(float, yaxis_list2)) plt.xticks(rotation=90) plt.xticks(fontsize=6) plt.subplots_adjust(bottom=0.2) axisx = plt.gca() # check user input for time date or unix epoch for yaxis if self.selection == 0: mydates = timelist plt.xlabel('TestRuns') elif re.match('[1-4]', self.selection): mydates = [dateutil.parser.parse(s) for s in timelist] axisx.set_xticks(mydates) xfmt = md.DateFormatter('%m/%d %H:%M') axisx.xaxis.set_major_formatter(xfmt) plt.xlabel('Date time stamp (DD-MM HH:MM)') elif re.match('[5-8]', self.selection): mydates = timelist plt.xlabel('Unix epoch time (seconds)') axisx.xaxis.label.set_color('red') axisx.yaxis.label.set_color('red') plt.plot(mydates, yaxis_list1, label=plot_label1, color='green', marker='x') if plot_label2: # single plot graph mode plt.plot(mydates, yaxis_list2, label=plot_label2, marker='*') plt.ylabel(yaxis_label) plt.title(graph_title, color='green') plt.legend(loc='upper right', fancybox=True, shadow=True) plt.grid(True) plt.show() def graph_all_live_data(self): """ Draw a live graph of pi GPU/CPU/RAM """ print("Drawing graph of all data usage versus live time") print("Press CTRL+c to quit.") try: time_cpu_axis = [] time_ram_axis = [] time_gpu_axis = [] yaxis_cpu_data = 0 yaxis_ram_data = 0 yaxis_gpu_data = 0 plt.ion() labels = () # pre-load dummy data for i in range(0, 150): time_cpu_axis.append(.5) time_ram_axis.append(.5) time_gpu_axis.append(.5) while True: # get data yaxis_cpu_data = Work.get_cpu_use() yaxis_cpu_data = float(yaxis_cpu_data) yaxis_ram_data = Work.get_ram_info() yaxis_ram_data = float(yaxis_ram_data) ostemp = os.popen('vcgencmd measure_temp').readline() yaxis_gpu_data = (ostemp.replace("temp=", "").replace("'C\n", "")) yaxis_gpu_data = float(yaxis_gpu_data) # update the graph labels = "GPU Temp + CPU & RAM usage", "CPU-% RAM-% GPU-'C", "CPU-%", "RAM-%", "GPU-'C" time_cpu_axis.append(yaxis_cpu_data) time_ram_axis.append(yaxis_ram_data) time_gpu_axis.append(yaxis_gpu_data) time_cpu_axis.pop(0) time_ram_axis.pop(0) time_gpu_axis.pop(0) self.plot_all_now(time_cpu_axis, time_ram_axis, time_gpu_axis, labels) plt.pause(2) except Exception as error: print(error) Work.msg_func("bold", "Real-time matplotlib plot shutdown") quit() def plot_all_now(self, time_cpu_axis, time_ram_axis, time_gpu_axis, labels): """ Called from method graph_all_live_data to draw graph""" title, y_label, plot_cpu_label, plot_ram_label, plot_gpu_label = labels plt.clf() plt.ylim([1, 100]) plt.ylabel(y_label, color='red') plt.title(self.name + title, color='green') plt.grid(True) plt.xlabel("Time (last 300 seconds)", color='red') plt.plot(time_cpu_axis, color='blue', marker='', label=plot_cpu_label) plt.plot(time_ram_axis, color='red', marker='', label=plot_ram_label) plt.plot(time_gpu_axis, color='green', marker='', label=plot_gpu_label) plt.legend(loc='upper right', fancybox=True, shadow=True) plt.show() def graph_live_data(self, choice): """ Draw a live graph of pi GPU or CPU or RAM """ try: time_axis = [] yaxis_data = 0 plt.ion() labels = () # pre-load dummy data for i in range(0, 150): time_axis.append(.5) while True: if choice == "GPU": ostemp = os.popen('vcgencmd measure_temp').readline() yaxis_data = (ostemp.replace("temp=", "").replace("'C\n", "")) labels = (" GPU live temp", "Temperature ('C)", "GPU") yaxis_data = float(yaxis_data) time_axis.append(yaxis_data) time_axis.pop(0) self.plot_now(time_axis, labels) elif choice == 'CPU': yaxis_data = Work.get_cpu_use() yaxis_data = float(yaxis_data) labels = (" CPU live usage", "Usage (%)", "CPU") time_axis.append(yaxis_data) time_axis.pop(0) self.plot_now(time_axis, labels) elif choice == 'RAM': yaxis_data = Work.get_ram_info() yaxis_data = float(yaxis_data) labels = (" RAM live usage", "Usage (%)", "RAM") time_axis.append(yaxis_data) time_axis.pop(0) self.plot_now(time_axis, labels) plt.pause(2) except Exception as error: print(error) Work.msg_func("bold", "Real-time matplotlib plot shutdown") quit() def plot_now(self, timeaxis, labels): """ Called from method graph_live_data to draw graph""" title, y_label, plot_label = labels plt.clf() plt.ylim([1, 100]) plt.ylabel(y_label, color='red') plt.title(self.name + title, color='green') plt.grid(True) plt.xlabel("Time (last 300 seconds)", color='red') plt.plot(timeaxis, color='blue', marker='*', label=plot_label) plt.legend(loc='upper right', fancybox=True, shadow=True) plt.show() def importtest(text): """import print test statement""" # print(text) pass # ===================== MAIN =============================== if __name__ == '__main__': importtest("main") else: importtest("Imported {}".format(__name__)) # ===================== END ===============================
/rpi_tempmon.py-2.3.tar.gz/rpi_tempmon.py-2.3/rpiTempMod/RpiTempmonGraph.py
0.534127
0.402304
RpiTempmonGraph.py
pypi
import logging from datetime import datetime, timedelta from collections import deque from threading import Lock def _clamp(value, minvalue, maxvalue): if value < minvalue: return minvalue elif value > maxvalue: return maxvalue else: return value class Measurement(): def __init__(self, timestamp=None, temperature=None, humidity=None, datapoints=1): self._timestamp = timestamp self._temperature = temperature self._humidity = humidity self._datapoints = datapoints @property def timestamp(self): return self._timestamp @timestamp.setter def timestamp(self, t): self._timestamp = t @property def temperature(self): return self._temperature @temperature.setter def temperature(self, temperature): self._temperature = temperature @property def humidity(self): return self._humidity def humidity_bounded(self): """Returns the humidity bounded to the valid range [0.0-1.0] or None.""" if self._humidity is None: return None return _clamp(self.humidity, 0.0, 1.00) @humidity.setter def humidity(self, humidity): self._humidity = humidity @property def datapoints(self): return self._datapoints @datapoints.setter def datapoints(self, datapoints): self._datapoints = datapoints def is_valid(self): return self.temperature is not None and self.humidity is not None def __str__(self): humidity = self.humidity_bounded() humiditystring="---.-" if humidity is not None: humiditystring="{:5.1F}".format(100*humidity) temperaturestring="+/- -.-" if self.temperature is not None: temperaturestring="{: 5.1F}".format(self.temperature) return "{} {}°C {}%".format(self.timestamp or "(invalid)", temperaturestring, humiditystring) def __repr__(self): return "{} ({})".format(self.__str__(), self.datapoints) class TemperatureSensor(): def __init__(self): self._logger = logging.getLogger('tempsens') self._measurements = deque(maxlen=int(60/15*20)) # up to 20min measurements at one every 15 seconds self._lock = Lock() def log(self): return self._logger def get_1min_average(self, timestamp): with self._lock: measurements = self._measurements_in_timespan(timestamp, 60) value = self._average_for_measurements(measurements) return value def get_5min_average(self, timestamp): with self._lock: measurements = self._measurements_in_timespan(timestamp, 5*60) value = self._average_for_measurements(measurements) return value def get_15min_average(self, timestamp): with self._lock: measurements = self._measurements_in_timespan(timestamp, 15*60) value = self._average_for_measurements(measurements) return value def measurements(self): with self._lock: return self._measurements def capacity(self): return self._measurements.maxlen def _measurements_in_timespan(self, start, maxage): end = start - timedelta(seconds=maxage) return list(filter(lambda x: x._timestamp <= start and x._timestamp > end, self._measurements)) def _average_for_measurements(self, measurements): result = Measurement() if not measurements: return Measurement() count = len(measurements) result = Measurement(datapoints=0) for measurement in measurements: result.temperature = (result.temperature or 0.0) + (measurement.temperature or 0.0) result.humidity = (result.humidity or 0.0) + (measurement.humidity or 0.0) result.datapoints += measurement.datapoints result.timestamp = measurements[0].timestamp result.temperature = result.temperature/count result.humidity = result.humidity/count return result def add_measurement(self, measurement): with self._lock: self._measurements.append(measurement) self.log().debug("New measurement: {}".format(measurement))
/rpi-thingamajigs-0.6.202302122050.tar.gz/rpi-thingamajigs-0.6.202302122050/rpithingamajigs/temperature_sensor/temperature_sensor.py
0.892796
0.328341
temperature_sensor.py
pypi
import RPi.GPIO as GPIO class tlc59711: def __init__(self, clkPin, datPin, numDrivers=1, globalBrightness=0x7F): # pins (self.__clk, self.__dat) = (clkPin, datPin) # flags self.__BCr = self.__BCg = self.__BCb = globalBrightness for p in ([self.__dat, self.__clk]): GPIO.setup(p, GPIO.OUT) # number of drivers self.__numDrivers = numDrivers # initialize PwmBuffer self.__pwmBuffer = [0x000 for i in range(0, 12)] def _WriteMSB(self, d): b = 0x80 # 12 bits per channel, send MSB first while b: GPIO.output(self.__clk, False) if (b & d): GPIO.output(self.__dat, True) else: GPIO.output(self.__dat, False) GPIO.output(self.__clk, True) b = b >> 1 def _Write(self): cmd = 0x25 cmd <<= 5 cmd |= 0x16 cmd <<= 7 cmd |= self.__BCr cmd <<= 7 cmd |= self.__BCb cmd <<= 7 cmd |= self.__BCg for n in range(0, self.__numDrivers): self._WriteMSB(cmd >> 24) self._WriteMSB(cmd >> 16) self._WriteMSB(cmd >> 8) self._WriteMSB(cmd) # 12 channels per TLC59711 for c in range(11, -1, -1): self._WriteMSB(self.__pwmBuffer[n*12 + c] >> 8) self._WriteMSB(self.__pwmBuffer[n*12 + c]) def _SetPWM(self, chan, pwm): if(chan > 12*self.__numDrivers): return self.__pwmBuffer[chan] = pwm def SetPWM(self, chan, pwm): self._SetPWM(chan, pwm) self._Write() def SetLED(self, lednum, r, g, b): self._SetPWM(lednum * 3, r) self._SetPWM(lednum * 3+1, g) self._SetPWM(lednum * 3+2, b) self._Write() def SetGlobalBrightness(self, brightness): if(brightness >= 0 and brightness <= 0x7F): self.__BCr = self.__BCg = self.__BCb = brightness self._Write()
/rpi-tlc59711-1.0.0.tar.gz/rpi-tlc59711-1.0.0/tlc59711/tlc59711.py
0.417628
0.178848
tlc59711.py
pypi
from time import sleep from wiringpi import wiringPiSetupGpio, pinMode, digitalRead, digitalWrite, GPIO wiringPiSetupGpio() TM1637_CMD1 = 0x40 # 0x40 data command TM1637_CMD2 = 0xc0 # 0xC0 address command TM1637_CMD3 = 0x80 # 0x80 display control command TM1637_DSP_ON = 0x08 # 0x08 display on TM1637_DELAY = 0.00000001 # 10us delay between clk/dio pulses TM1637_MSB = 0x80 # msb is the decimal point or the colon depending on your display # 0-9, a-z, blank, dash, star _SEGMENTS = bytearray( b'\x3F\x06\x5B\x4F\x66\x6D\x7D\x07\x7F\x6F\x77\x7C\x39\x5E\x79\x71\x3D\x76\x06\x1E\x76\x38\x55\x54\x3F\x73\x67' b'\x50\x6D\x78\x3E\x1C\x2A\x76\x6E\x5B\x00\x40\x63') class TM1637(object): """Library for quad 7-segment LED modules based on the TM1637 LED driver.""" def __init__(self, clk, dio, brightness=7): self.clk = clk self.dio = dio if not 0 <= brightness <= 7: raise ValueError("Brightness out of range") self._brightness = brightness pinMode(self.clk, GPIO.OUTPUT) pinMode(self.dio, GPIO.OUTPUT) digitalWrite(self.clk, 0) digitalWrite(self.dio, 0) def _start(self): digitalWrite(self.clk, GPIO.HIGH) digitalWrite(self.dio, GPIO.HIGH) digitalWrite(self.dio, GPIO.LOW) digitalWrite(self.clk, GPIO.LOW) def _stop(self): digitalWrite(self.clk, GPIO.LOW) digitalWrite(self.dio, GPIO.LOW) digitalWrite(self.clk, GPIO.HIGH) digitalWrite(self.dio, GPIO.HIGH) def _write_data_cmd(self): # automatic address increment, normal mode self._start() self._write_byte(TM1637_CMD1) self._stop() def _write_dsp_ctrl(self): # display on, set brightness self._start() self._write_byte(TM1637_CMD3 | TM1637_DSP_ON | self._brightness) self._stop() def _write_byte(self, b): for i in range(8): digitalWrite(self.dio,(b >> i) & 1) sleep(TM1637_DELAY) digitalWrite(self.clk, GPIO.HIGH) sleep(TM1637_DELAY) digitalWrite(self.clk, GPIO.LOW) sleep(TM1637_DELAY) digitalWrite(self.clk, GPIO.LOW) sleep(TM1637_DELAY) digitalWrite(self.clk, GPIO.HIGH) sleep(TM1637_DELAY) digitalWrite(self.clk, GPIO.LOW) def brightness(self, val=None): """Set the display brightness 0-7.""" # brightness 0 = 1/16th pulse width # brightness 7 = 14/16th pulse width if val is None: return self._brightness if not 0 <= val <= 7: raise ValueError("Brightness out of range") self._brightness = val self._write_data_cmd() self._write_dsp_ctrl() def write(self, segments, pos=0): """Display up to 6 segments moving right from a given position. The MSB in the 2nd segment controls the colon between the 2nd and 3rd segments.""" if not 0 <= pos <= 5: raise ValueError("Position out of range") self._write_data_cmd() self._start() self._write_byte(TM1637_CMD2 | pos) for seg in segments: self._write_byte(seg) self._stop() self._write_dsp_ctrl() @staticmethod def encode_digit(digit): """Convert a character 0-9, a-f to a segment.""" return _SEGMENTS[digit & 0x0f] @staticmethod def encode_char(char): """Convert a character 0-9, a-z, space, dash or star to a segment.""" o = ord(char) if o == 32: return _SEGMENTS[36] # space if o == 42: return _SEGMENTS[38] # star/degrees if o == 45: return _SEGMENTS[37] # dash if 65 <= o <= 90: return _SEGMENTS[o - 55] # uppercase A-Z if 97 <= o <= 122: return _SEGMENTS[o - 87] # lowercase a-z if 48 <= o <= 57: return _SEGMENTS[o - 48] # 0-9 raise ValueError("Character out of range: {:d} '{:s}'".format(o, chr(o))) def encode_string(self, string): """Convert an up to 4 character length string containing 0-9, a-z, space, dash, star to an array of segments, matching the length of the source string.""" segments = bytearray(len(string)) for i in range(len(string)): segments[i] = self.encode_char(string[i]) return segments def hex(self, val): """Display a hex value 0x0000 through 0xffff, right aligned.""" string = '{:04x}'.format(val & 0xffff) self.write(self.encode_string(string)) def number(self, num): """Display a numeric value -999 through 9999, right aligned.""" # limit to range -999 to 9999 num = max(-999, min(num, 9999)) string = '{0: >4d}'.format(num) self.write(self.encode_string(string)) def numbers(self, num1, num2, colon=True): """Display two numeric values -9 through 99, with leading zeros and separated by a colon.""" num1 = max(-9, min(num1, 99)) num2 = max(-9, min(num2, 99)) segments = self.encode_string('{0:0>2d}{1:0>2d}'.format(num1, num2)) if colon: segments[1] |= 0x80 # colon on self.write(segments) def temperature(self, num): if num < -9: self.show('lo') # low elif num > 99: self.show('hi') # high else: string = '{0: >2d}'.format(num) self.write(self.encode_string(string)) self.write([_SEGMENTS[38], _SEGMENTS[12]], 2) # degrees C def dec_temperature(self, num): if num < -9.9: # limit to single digit negatives self.write([0, 0, 0, 0]) self.show('lo') # low elif num > 99.9: self.write([0, 0, 0, 0]) self.show('hi') # high else: intval = abs(int(num/1)) if num == 0: # exact zero seg1 = 0b00000000 seg2 = self.encode_digit(0) seg3 = self.encode_digit(0) else: if intval < 10 and num > 0: # single digit positive seg1 = 0b00000000 seg2 = self.encode_digit(intval) elif num < 0: # negative seg1 = 0b01000000 # '-' seg2 = self.encode_digit(intval) else: # two digit (can only be positive) seg1 = self.encode_digit(int(intval/10)) seg2 = self.encode_digit(int(num-(int(intval/10)*10))) try: seg3 = self.encode_digit(int(str(num).split('.')[1][0])) # will fail if there is no decimal point except: seg3 = self.encode_digit(0) segments = [seg1, seg2, seg3, _SEGMENTS[38]] # segments and degree character segments[1] |= 0x80 # colon as decimal self.write(segments) def show(self, string, colon=False): segments = self.encode_string(string) if len(segments) > 1 and colon: segments[1] |= 128 self.write(segments[:4]) def scroll(self, string, delay=250): segments = string if isinstance(string, list) else self.encode_string(string) data = [0] * 8 data[4:0] = list(segments) for i in range(len(segments) + 5): self.write(data[0 + i:4 + i]) sleep(delay / 1000) class TM1637Decimal(TM1637): """Library for quad 7-segment LED modules based on the TM1637 LED driver. This class is meant to be used with decimal display modules (modules that have a decimal point after each 7-segment LED). """ def encode_string(self, string): """Convert a string to LED segments. Convert an up to 4 character length string containing 0-9, a-z, space, dash, star and '.' to an array of segments, matching the length of the source string.""" segments = bytearray(len(string.replace('.', ''))) j = 0 for i in range(len(string)): if string[i] == '.' and j > 0: segments[j - 1] |= TM1637_MSB continue segments[j] = self.encode_char(string[i]) j += 1 return segments
/rpi_tm1637-1.3.4-py3-none-any.whl/tm1637.py
0.595257
0.225907
tm1637.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import click import collections import logging import numpy as np import os from caffe2.proto import caffe2_pb2 from caffe2.python import core import caffe2.contrib.tensorboard.tensorboard_exporter as tb_exporter try: # tensorboard>=1.14.0 from tensorboard.compat.proto.summary_pb2 import Summary, HistogramProto from tensorboard.compat.proto.event_pb2 import Event from tensorboard.summary.writer.event_file_writer import EventFileWriter as FileWriter except ImportError: from tensorflow.core.framework.summary_pb2 import Summary, HistogramProto from tensorflow.core.util.event_pb2 import Event try: # tensorflow>=1.0.0 from tensorflow.summary import FileWriter except ImportError: # tensorflow<=0.12.1 from tensorflow.train import SummaryWriter as FileWriter class Config(object): HEIGHT = 600 ASPECT_RATIO = 1.6 CODE_TEMPLATE = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load() > <div style="height:{height}px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """ IFRAME_TEMPLATE = """ <iframe seamless style="width:{width}px;height:{height}px;border:0" srcdoc="{code}"> </iframe> """ def _show_graph(graph_def): import IPython.display code = CODE_TEMPLATE.format( data=repr(str(graph_def)), id='graph' + str(np.random.rand()), height=Config.HEIGHT) iframe = IFRAME_TEMPLATE.format( code=code.replace('"', '&quot;'), width=Config.HEIGHT * Config.ASPECT_RATIO, height=Config.HEIGHT + 20) IPython.display.display(IPython.display.HTML(iframe)) def visualize_cnn(cnn, **kwargs): g = tb_exporter.cnn_to_graph_def(cnn, **kwargs) _show_graph(g) def visualize_net(nets, **kwargs): g = tb_exporter.nets_to_graph_def(nets, **kwargs) _show_graph(g) def visualize_ops(ops, **kwargs): g = tb_exporter.ops_to_graph_def(ops, **kwargs) _show_graph(g) @click.group() def cli(): pass def write_events(tf_dir, events): writer = FileWriter(tf_dir, len(events)) for event in events: writer.add_event(event) writer.flush() writer.close() def graph_def_to_event(step, graph_def): return Event( wall_time=step, step=step, graph_def=graph_def.SerializeToString()) @cli.command("tensorboard-graphs") @click.option("--c2-netdef", type=click.Path(exists=True, dir_okay=False), multiple=True) @click.option("--tf-dir", type=click.Path(exists=True)) def tensorboard_graphs(c2_netdef, tf_dir): log = logging.getLogger(__name__) log.setLevel(logging.INFO) def parse_net_def(path): import google.protobuf.text_format net_def = caffe2_pb2.NetDef() with open(path) as f: google.protobuf.text_format.Merge(f.read(), net_def) return core.Net(net_def) graph_defs = [tb_exporter.nets_to_graph_def([parse_net_def(path)]) for path in c2_netdef] events = [graph_def_to_event(i, graph_def) for (i, graph_def) in enumerate(graph_defs, start=1)] write_events(tf_dir, events) log.info("Wrote %s graphs to logdir %s", len(events), tf_dir) @cli.command("tensorboard-events") @click.option("--c2-dir", type=click.Path(exists=True, file_okay=False), help="Root directory of the Caffe2 run") @click.option("--tf-dir", type=click.Path(writable=True), help="Output path to the logdir used by TensorBoard") def tensorboard_events(c2_dir, tf_dir): np.random.seed(1701) log = logging.getLogger(__name__) log.setLevel(logging.INFO) S = collections.namedtuple('S', ['min', 'max', 'mean', 'std']) def parse_summary(filename): try: with open(filename) as f: rows = [(float(el) for el in line.split()) for line in f] return [S(*r) for r in rows] except Exception as e: log.exception(e) return None def get_named_summaries(root): summaries = [ (fname, parse_summary(os.path.join(dirname, fname))) for dirname, _, fnames in os.walk(root) for fname in fnames ] return [(n, s) for (n, s) in summaries if s] def inferred_histo(summary, samples=1000): np.random.seed( hash( summary.std + summary.mean + summary.min + summary.max ) % np.iinfo(np.int32).max ) samples = np.random.randn(samples) * summary.std + summary.mean samples = np.clip(samples, a_min=summary.min, a_max=summary.max) (hist, edges) = np.histogram(samples) upper_edges = edges[1:] r = HistogramProto( min=summary.min, max=summary.max, num=len(samples), sum=samples.sum(), sum_squares=(samples * samples).sum()) r.bucket_limit.extend(upper_edges) r.bucket.extend(hist) return r def named_summaries_to_events(named_summaries): names = [n for (n, _) in named_summaries] summaries = [s for (_, s) in named_summaries] summaries = list(zip(*summaries)) def event(step, values): s = Summary() scalar = [ Summary.Value( tag="{}/{}".format(name, field), simple_value=v) for name, value in zip(names, values) for field, v in value._asdict().items()] hist = [ Summary.Value( tag="{}/inferred_normal_hist".format(name), histo=inferred_histo(value)) for name, value in zip(names, values) ] s.value.extend(scalar + hist) return Event(wall_time=int(step), step=step, summary=s) return [event(step, values) for step, values in enumerate(summaries, start=1)] named_summaries = get_named_summaries(c2_dir) events = named_summaries_to_events(named_summaries) write_events(tf_dir, events) log.info("Wrote %s events to logdir %s", len(events), tf_dir) if __name__ == "__main__": cli()
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/contrib/tensorboard/tensorboard.py
0.555073
0.161816
tensorboard.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import numpy as np import cPickle as pickle from collections import OrderedDict from caffe2.proto import caffe2_pb2 from caffe2.python import workspace, core, scope import logging logging.basicConfig() log = logging.getLogger("AnyExpOnTerm") log.setLevel(logging.DEBUG) def initialize_params_from_file( model, weights_file, num_xpus, opts, broadcast_computed_param=False, reset_epoch=False): start_epoch, lr, best_metric = initialize_master_xpu_model_params( model, weights_file, opts, reset_epoch) broadcast_parameters(opts, model, num_xpus, broadcast_computed_param) return start_epoch, lr, best_metric def initialize_master_xpu_model_params(model, weights_file, opts, reset_epoch): log.info("Initializing model params from file: {}".format(weights_file)) with open(weights_file, 'r') as fopen: blobs = pickle.load(fopen) if 'blobs' in blobs: blobs = blobs['blobs'] start_epoch = 0 best_metric = float('-inf') if 'epoch' in blobs: log.info('epoch {} is found in model file'.format(blobs['epoch'])) if not reset_epoch: start_epoch = blobs['epoch'] else: log.info('Reset epoch') else: log.info('no epoch is found in model file') lr = opts['model_param']['base_learning_rate'] if 'lr' in blobs: lr = blobs['lr'] if 'best_metric' in blobs and not reset_epoch: best_metric = blobs['best_metric'] if model is not None: log.info('initialize model parameters using weights file: {}'.format( weights_file )) ws_blobs = workspace.Blobs() unscoped_blob_names = OrderedDict() for blob in model.GetAllParams(): unscoped_blob_names[unscope_name(str(blob))] = True root_xpu_id = opts['distributed']['first_xpu_id'] device = opts['distributed']['device'] caffe2_pb2_DEVICE =\ caffe2_pb2.CUDA if opts['distributed']['device'] == 'gpu'\ else caffe2_pb2.CPU with core.NameScope('{}_{}'.format(device, root_xpu_id)): with core.DeviceScope(core.DeviceOption(caffe2_pb2_DEVICE, 0)): for unscoped_blob_name in unscoped_blob_names.keys(): scoped_blob_name = scoped_name(unscoped_blob_name) if unscoped_blob_name not in blobs: log.info('{:s} not found'.format(unscoped_blob_name)) continue log.info( '{:s} loaded from weights file into: {:s}'.format( unscoped_blob_name, scoped_blob_name ) ) if scoped_blob_name in ws_blobs: ws_blob = workspace.FetchBlob(scoped_blob_name) if not ws_blob.shape == blobs[unscoped_blob_name].shape: log.info( ('Workspace blob {} with shape {} does ' 'not match weights file shape {}').format( unscoped_blob_name, ws_blob.shape, blobs[unscoped_blob_name].shape) ) else: workspace.FeedBlob( scoped_blob_name, blobs[unscoped_blob_name].astype( np.float32, copy=False)) else: log.info('Skip initializing model parameters from file: {}'.format( weights_file )) log.info('Complete initialize_master_xpu_model_params') return start_epoch, lr, best_metric def broadcast_parameters(opts, model, num_xpus, broadcast_computed_param=False): if num_xpus == 1: log.info("only 1 device. Skip parameter broadcast") return all_params = [model.GetParams()] if broadcast_computed_param: all_params.append(model.GetComputedParams()) caffe2_pb2_DEVICE =\ caffe2_pb2.CUDA if opts['distributed']['device'] == 'gpu'\ else caffe2_pb2.CPU for params in all_params: assert len(params) % num_xpus == 0, \ "Current model doesn't match device number when loading checkpoint" params_per_xpu = int(len(params) / num_xpus) for idx in range(params_per_xpu): blobs = [param for param in params[idx::params_per_xpu]] data = workspace.FetchBlob(blobs[0]) log.info('Broadcasting {} to'.format(str(blobs[0]))) for i, p in enumerate(blobs[1:]): log.info(' |-> {}'.format(str(p))) with core.DeviceScope(core.DeviceOption(caffe2_pb2_DEVICE, i+1)): workspace.FeedBlob(p, data) log.info("Complete parameter broadcast") def save_model_params(is_checkpoint, model, checkpoint_path, epoch, opts, best_metric): # best_metric=float('-inf') if checkpoint_path is None: return None try: save_model_params_blob( model, checkpoint_path, epoch, opts, best_metric ) except Exception as e: log.warning('Exception from save_model_params {}'.format(str(e))) return checkpoint_path def save_model_params_blob(model, params_file, epoch, opts, best_metric): # best_metric=float('-inf') log.info("Saving model params...") root_xpu_id = opts['distributed']['first_xpu_id'] device = opts['distributed']['device'] save_params = [str(param) for param in model.GetParams('{}_{}'.format(device, root_xpu_id))] save_computed_params = [str(param) for param in model.GetComputedParams('{}_{}' .format(device, root_xpu_id))] save_blobs = {} save_blobs['epoch'] = epoch save_blobs['best_metric'] = best_metric save_blobs['lr'] = \ workspace.FetchBlob('{}_{}/lr'.format(device, root_xpu_id)) for param in save_params + save_computed_params: scoped_blob_name = str(param) unscoped_blob_name = unscope_name(scoped_blob_name) if unscoped_blob_name not in save_blobs: save_blobs[unscoped_blob_name] = workspace.FetchBlob( scoped_blob_name) log.debug( '{:s} -> {:s}'.format(scoped_blob_name, unscoped_blob_name)) log.info('to weights file {}'.format(params_file)) try: with open(params_file, 'w') as fwrite: pickle.dump(dict(blobs=save_blobs), fwrite, pickle.HIGHEST_PROTOCOL) except IOError as e: log.error('I/O error({0}): {1}'.format(e.errno, e.strerror)) def unscope_name(blob_name): return blob_name[blob_name.rfind(scope._NAMESCOPE_SEPARATOR) + 1:] def scoped_name(blob_name): return scope.CurrentNameScope() + blob_name
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/contrib/playground/checkpoint.py
0.516108
0.183392
checkpoint.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import logging logging.basicConfig() log = logging.getLogger("AnyExp") log.setLevel(logging.DEBUG) # For more depths, add the block config here BLOCK_CONFIG = { 18: (2, 2, 2, 2), 34: (3, 4, 6, 3), 50: (3, 4, 6, 3), 101: (3, 4, 23, 3), 152: (3, 8, 36, 3), 200: (3, 32, 36, 3), 264: (3, 64, 36, 3), 284: (3, 32, 64, 3), } def gen_forward_pass_builder_fun(self, model, dataset, is_train): split = 'train' if is_train else 'test' opts = self.opts def model_creator(model, loss_scale): model, softmax, loss = resnet_imagenet_create_model( model=model, data='data', labels='label', split=split, opts=opts, dataset=dataset, ) return [loss] return model_creator def resnet_imagenet_create_model(model, data, labels, split, opts, dataset): model_helper = ResNetModelHelper(model, split, opts) opts_depth = opts['model_param']['num_layer'] engine = opts['model_param']['engine'] log.info(' | ResNet-{} Imagenet'.format(opts_depth)) assert opts_depth in BLOCK_CONFIG.keys(), \ 'Block config is not defined for specified model depth. Please check.' (n1, n2, n3, n4) = BLOCK_CONFIG[opts_depth] num_features = 2048 residual_block = model_helper.bottleneck_block if opts_depth in [18, 34]: num_features = 512 residual_block = model_helper.basic_block num_classes = 1000 conv_blob = model.Conv( data, 'conv1', 3, 64, 7, stride=2, pad=3, weight_init=('MSRAFill', {}), bias_init=('ConstantFill', {'value': 0.}), no_bias=0, engine=engine ) test_mode = False if split in ['test', 'val']: test_mode = True bn_blob = model.SpatialBN( conv_blob, 'res_conv1_bn', 64, # does not appear to affect test_loss performance # epsilon=1e-3, epsilon=opts['model_param']['bn_epsilon'], # momentum=0.1, momentum=opts['model_param']['bn_momentum'], is_test=test_mode, ) relu_blob = model.Relu(bn_blob, bn_blob) max_pool = model.MaxPool(relu_blob, 'pool1', kernel=3, stride=2, pad=1) # TODO: This can be further optimized by passing dim_in, dim_out = features, # dim_out = features * 4 if opts_depth in [50, 101, 152, 200, 264, 284]: blob_in, dim_in = model_helper.residual_layer( residual_block, max_pool, 64, 256, stride=1, num_blocks=n1, prefix='res2', dim_inner=64 ) blob_in, dim_in = model_helper.residual_layer( residual_block, blob_in, dim_in, 512, stride=2, num_blocks=n2, prefix='res3', dim_inner=128 ) blob_in, dim_in = model_helper.residual_layer( residual_block, blob_in, dim_in, 1024, stride=2, num_blocks=n3, prefix='res4', dim_inner=256 ) blob_in, dim_in = model_helper.residual_layer( residual_block, blob_in, dim_in, 2048, stride=2, num_blocks=n4, prefix='res5', dim_inner=512 ) elif opts_depth in [18, 34]: blob_in, dim_in = model_helper.residual_layer( residual_block, max_pool, 64, 64, stride=1, num_blocks=n1, prefix='res2', ) blob_in, dim_in = model_helper.residual_layer( residual_block, blob_in, dim_in, 128, stride=2, num_blocks=n2, prefix='res3', ) blob_in, dim_in = model_helper.residual_layer( residual_block, blob_in, dim_in, 256, stride=2, num_blocks=n3, prefix='res4', ) blob_in, dim_in = model_helper.residual_layer( residual_block, blob_in, dim_in, 512, stride=2, num_blocks=n4, prefix='res5', ) pool_blob = model.AveragePool(blob_in, 'pool5', kernel=7, stride=1) loss_scale = 1. / opts['distributed']['num_xpus'] / \ opts['distributed']['num_shards'] loss = None fc_blob = model.FC( pool_blob, 'pred', num_features, num_classes, # does not appear to affect test_loss performance # weight_init=('GaussianFill', {'std': opts.fc_init_std}), # bias_init=('ConstantFill', {'value': 0.}) weight_init=None, bias_init=None) softmax, loss = model.SoftmaxWithLoss( [fc_blob, labels], ['softmax', 'loss'], scale=loss_scale) model.Accuracy(['softmax', labels], 'accuracy') return model, softmax, loss class ResNetModelHelper(): def __init__(self, model, split, opts): self.model = model self.split = split self.opts = opts self.engine = opts['model_param']['engine'] # shortcut type B def add_shortcut(self, blob_in, dim_in, dim_out, stride, prefix): if dim_in == dim_out: return blob_in conv_blob = self.model.Conv( blob_in, prefix, dim_in, dim_out, kernel=1, stride=stride, weight_init=("MSRAFill", {}), bias_init=('ConstantFill', {'value': 0.}), no_bias=1, engine=self.engine ) test_mode = False if self.split in ['test', 'val']: test_mode = True bn_blob = self.model.SpatialBN( conv_blob, prefix + "_bn", dim_out, # epsilon=1e-3, # momentum=0.1, epsilon=self.opts['model_param']['bn_epsilon'], momentum=self.opts['model_param']['bn_momentum'], is_test=test_mode, ) return bn_blob def conv_bn( self, blob_in, dim_in, dim_out, kernel, stride, prefix, group=1, pad=1, ): conv_blob = self.model.Conv( blob_in, prefix, dim_in, dim_out, kernel, stride=stride, pad=pad, group=group, weight_init=("MSRAFill", {}), bias_init=('ConstantFill', {'value': 0.}), no_bias=1, engine=self.engine ) test_mode = False if self.split in ['test', 'val']: test_mode = True bn_blob = self.model.SpatialBN( conv_blob, prefix + "_bn", dim_out, epsilon=self.opts['model_param']['bn_epsilon'], momentum=self.opts['model_param']['bn_momentum'], is_test=test_mode, ) return bn_blob def conv_bn_relu( self, blob_in, dim_in, dim_out, kernel, stride, prefix, pad=1, group=1, ): bn_blob = self.conv_bn( blob_in, dim_in, dim_out, kernel, stride, prefix, group=group, pad=pad ) return self.model.Relu(bn_blob, bn_blob) # 3(a)this block uses multi-way group conv implementation that splits blobs def multiway_bottleneck_block( self, blob_in, dim_in, dim_out, stride, prefix, dim_inner, group ): blob_out = self.conv_bn_relu( blob_in, dim_in, dim_inner, 1, 1, prefix + "_branch2a", pad=0, ) conv_blob = self.model.GroupConv_Deprecated( blob_out, prefix + "_branch2b", dim_inner, dim_inner, kernel=3, stride=stride, pad=1, group=group, weight_init=("MSRAFill", {}), bias_init=('ConstantFill', {'value': 0.}), no_bias=1, engine=self.engine ) test_mode = False if self.split in ['test', 'val']: test_mode = True bn_blob = self.model.SpatialBN( conv_blob, prefix + "_branch2b_bn", dim_out, epsilon=self.opts['model_param']['bn_epsilon'], momentum=self.opts['model_param']['bn_momentum'], is_test=test_mode, ) relu_blob = self.model.Relu(bn_blob, bn_blob) bn_blob = self.conv_bn( relu_blob, dim_inner, dim_out, 1, 1, prefix + "_branch2c", pad=0 ) if self.opts['model_param']['custom_bn_init']: self.model.param_init_net.ConstantFill( [bn_blob + '_s'], bn_blob + '_s', value=self.opts['model_param']['bn_init_gamma']) sc_blob = self.add_shortcut( blob_in, dim_in, dim_out, stride, prefix=prefix + "_branch1" ) sum_blob = self.model.net.Sum([bn_blob, sc_blob], prefix + "_sum") return self.model.Relu(sum_blob, sum_blob) # 3(c) this block uses cudnn group conv op def group_bottleneck_block( self, blob_in, dim_in, dim_out, stride, prefix, dim_inner, group ): blob_out = self.conv_bn_relu( blob_in, dim_in, dim_inner, 1, 1, prefix + "_branch2a", pad=0, ) blob_out = self.conv_bn_relu( blob_out, dim_inner, dim_inner, 3, stride, prefix + "_branch2b", group=group ) bn_blob = self.conv_bn( blob_out, dim_inner, dim_out, 1, 1, prefix + "_branch2c", pad=0 ) if self.opts['model_param']['custom_bn_init']: self.model.param_init_net.ConstantFill( [bn_blob + '_s'], bn_blob + '_s', value=self.opts['model_param']['bn_init_gamma']) sc_blob = self.add_shortcut( blob_in, dim_in, dim_out, stride, prefix=prefix + "_branch1" ) sum_blob = self.model.net.Sum([bn_blob, sc_blob], prefix + "_sum") return self.model.Relu(sum_blob, sum_blob) # bottleneck residual layer for 50, 101, 152 layer networks def bottleneck_block( self, blob_in, dim_in, dim_out, stride, prefix, dim_inner, group=None ): blob_out = self.conv_bn_relu( blob_in, dim_in, dim_inner, 1, 1, prefix + "_branch2a", pad=0, ) blob_out = self.conv_bn_relu( blob_out, dim_inner, dim_inner, 3, stride, prefix + "_branch2b", ) bn_blob = self.conv_bn( blob_out, dim_inner, dim_out, 1, 1, prefix + "_branch2c", pad=0 ) if self.opts['model_param']['custom_bn_init']: self.model.param_init_net.ConstantFill( [bn_blob + '_s'], bn_blob + '_s', value=self.opts['model_param']['bn_init_gamma']) sc_blob = self.add_shortcut( blob_in, dim_in, dim_out, stride, prefix=prefix + "_branch1" ) sum_blob = self.model.net.Sum([bn_blob, sc_blob], prefix + "_sum") return self.model.Relu(sum_blob, sum_blob) # basic layer for the 18 and 34 layer networks and the CIFAR data netwrorks def basic_block( self, blob_in, dim_in, dim_out, stride, prefix, dim_inner=None, group=None, ): blob_out = self.conv_bn_relu( blob_in, dim_in, dim_out, 3, stride, prefix + "_branch2a" ) bn_blob = self.conv_bn( blob_out, dim_out, dim_out, 3, 1, prefix + "_branch2b", pad=1 ) sc_blob = self.add_shortcut( blob_in, dim_in, dim_out, stride, prefix=prefix + "_branch1" ) sum_blob = self.model.net.Sum([bn_blob, sc_blob], prefix + "_sum") return self.model.Relu(sum_blob, sum_blob) def residual_layer( self, block_fn, blob_in, dim_in, dim_out, stride, num_blocks, prefix, dim_inner=None, group=None ): # prefix is something like: res2, res3, etc. # each res layer has num_blocks stacked for idx in range(num_blocks): block_prefix = "{}_{}".format(prefix, idx) block_stride = 2 if (idx == 0 and stride == 2) else 1 blob_in = block_fn( blob_in, dim_in, dim_out, block_stride, block_prefix, dim_inner, group ) dim_in = dim_out return blob_in, dim_in
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/contrib/playground/resnetdemo/explicit_resnet_forward.py
0.618665
0.336767
explicit_resnet_forward.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, dataio from caffe2.python.task import TaskGroup import logging logger = logging.getLogger(__name__) class _QueueReader(dataio.Reader): def __init__(self, wrapper, num_dequeue_records=1): assert wrapper.schema is not None, ( 'Queue needs a schema in order to be read from.') dataio.Reader.__init__(self, wrapper.schema()) self._wrapper = wrapper self._num_dequeue_records = num_dequeue_records def setup_ex(self, init_net, exit_net): exit_net.CloseBlobsQueue([self._wrapper.queue()], 0) def read_ex(self, local_init_net, local_finish_net): self._wrapper._new_reader(local_init_net) dequeue_net = core.Net('dequeue') fields, status_blob = dequeue( dequeue_net, self._wrapper.queue(), len(self.schema().field_names()), field_names=self.schema().field_names(), num_records=self._num_dequeue_records) return [dequeue_net], status_blob, fields def read(self, net): net, _, fields = self.read_ex(net, None) return net, fields class _QueueWriter(dataio.Writer): def __init__(self, wrapper): self._wrapper = wrapper def setup_ex(self, init_net, exit_net): exit_net.CloseBlobsQueue([self._wrapper.queue()], 0) def write_ex(self, fields, local_init_net, local_finish_net, status): self._wrapper._new_writer(self.schema(), local_init_net) enqueue_net = core.Net('enqueue') enqueue(enqueue_net, self._wrapper.queue(), fields, status) return [enqueue_net] class QueueWrapper(dataio.Pipe): def __init__(self, handler, schema=None, num_dequeue_records=1): dataio.Pipe.__init__(self, schema, TaskGroup.LOCAL_SETUP) self._queue = handler self._num_dequeue_records = num_dequeue_records def reader(self): return _QueueReader( self, num_dequeue_records=self._num_dequeue_records) def writer(self): return _QueueWriter(self) def queue(self): return self._queue class Queue(QueueWrapper): def __init__(self, capacity, schema=None, name='queue', num_dequeue_records=1): # find a unique blob name for the queue net = core.Net(name) queue_blob = net.AddExternalInput(net.NextName('handler')) QueueWrapper.__init__( self, queue_blob, schema, num_dequeue_records=num_dequeue_records) self.capacity = capacity self._setup_done = False def setup(self, global_init_net): assert self._schema, 'This queue does not have a schema.' self._setup_done = True global_init_net.CreateBlobsQueue( [], [self._queue], capacity=self.capacity, num_blobs=len(self._schema.field_names()), field_names=self._schema.field_names()) def enqueue(net, queue, data_blobs, status=None): if status is None: status = net.NextName('status') # Enqueueing moved the data into the queue; # duplication will result in data corruption queue_blobs = [] for blob in data_blobs: if blob not in queue_blobs: queue_blobs.append(blob) else: logger.warning("Need to copy blob {} to enqueue".format(blob)) queue_blobs.append(net.Copy(blob)) results = net.SafeEnqueueBlobs([queue] + queue_blobs, queue_blobs + [status]) return results[-1] def dequeue(net, queue, num_blobs, status=None, field_names=None, num_records=1): if field_names is not None: assert len(field_names) == num_blobs data_names = [net.NextName(name) for name in field_names] else: data_names = [net.NextName('data', i) for i in range(num_blobs)] if status is None: status = net.NextName('status') results = net.SafeDequeueBlobs( queue, data_names + [status], num_records=num_records) results = list(results) status_blob = results.pop(-1) return results, status_blob def close_queue(step, *queues): close_net = core.Net("close_queue_net") for queue in queues: close_net.CloseBlobsQueue([queue], 0) close_step = core.execution_step("%s_step" % str(close_net), close_net) return core.execution_step( "%s_wraper_step" % str(close_net), [step, close_step])
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/queue_util.py
0.618204
0.158597
queue_util.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.proto import hsm_pb2 ''' Hierarchical softmax utility methods that can be used to: 1) create TreeProto structure given list of word_ids or NodeProtos 2) create HierarchyProto structure using the user-inputted TreeProto ''' def create_node_with_words(words, name='node'): node = hsm_pb2.NodeProto() node.name = name for word in words: node.word_ids.append(word) return node def create_node_with_nodes(nodes, name='node'): node = hsm_pb2.NodeProto() node.name = name for child_node in nodes: new_child_node = node.children.add() new_child_node.MergeFrom(child_node) return node def create_hierarchy(tree_proto): max_index = 0 def create_path(path, word): path_proto = hsm_pb2.PathProto() path_proto.word_id = word for entry in path: new_path_node = path_proto.path_nodes.add() new_path_node.index = entry[0] new_path_node.length = entry[1] new_path_node.target = entry[2] return path_proto def recursive_path_builder(node_proto, path, hierarchy_proto, max_index): node_proto.offset = max_index path.append([max_index, len(node_proto.word_ids) + len(node_proto.children), 0]) max_index += len(node_proto.word_ids) + len(node_proto.children) if hierarchy_proto.size < max_index: hierarchy_proto.size = max_index for target, node in enumerate(node_proto.children): path[-1][2] = target max_index = recursive_path_builder(node, path, hierarchy_proto, max_index) for target, word in enumerate(node_proto.word_ids): path[-1][2] = target + len(node_proto.children) path_entry = create_path(path, word) new_path_entry = hierarchy_proto.paths.add() new_path_entry.MergeFrom(path_entry) del path[-1] return max_index node = tree_proto.root_node hierarchy_proto = hsm_pb2.HierarchyProto() path = [] max_index = recursive_path_builder(node, path, hierarchy_proto, max_index) return hierarchy_proto
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/hsm_util.py
0.663342
0.207215
hsm_util.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, workspace from caffe2.proto import caffe2_pb2 from caffe2.python.onnx.workspace import Workspace from collections import namedtuple from six import string_types OpSchema = workspace.C.OpSchema def namedtupledict(typename, field_names, *args, **kwargs): field_names_map = {n: i for i, n in enumerate(field_names)} # Some output names are invalid python identifier, e.g. "0" kwargs.setdefault('rename', True) data = namedtuple(typename, field_names, *args, **kwargs) def getitem(self, key): if isinstance(key, string_types): key = field_names_map[key] return super(type(self), self).__getitem__(key) data.__getitem__ = getitem return data class _Functional(object): def __getattribute__(self, op_type): def op_func(*inputs, **args): ws = Workspace() schema = OpSchema.get(op_type) input_prefix = 'input_' output_prefix = 'output_' def get_name_list(prefix, num, max_num): return [prefix + str(x) for x in range(min(num, max_num))] input_names, output_names = [], [] input_names = get_name_list( input_prefix, len(inputs), schema.max_input ) # verify the length of input name is in range # of schema num_input = len(input_names) if num_input > schema.max_input or num_input < \ schema.min_input or not schema.num_inputs_allowed(num_input): raise ValueError( "Functional C2: Number of inputs not in \ range: {} - {} or not allowed." .format(schema.min_input, schema.max_input) ) if 'num_output' in args: num_output = args['num_output'] if num_output > schema.max_output or \ num_output < schema.min_output or \ not schema.num_outputs_allowed(num_output) or \ not schema.num_inputs_outputs_allowed(num_input, num_output): raise ValueError( "Functional C2: Number of output \ not in range: {} - {} or not allowed" .format(schema.min_output, schema.max_output) ) output_names = get_name_list( output_prefix, num_output, schema.max_output ) args.pop('num_output') calculated = schema.CalculateOutput(num_input) if not output_names and calculated != -1: output_names = get_name_list( output_prefix, calculated, schema.max_output ) if not output_names: max_output = schema.max_output # For an op with max_output == inf # and no Output defined in schema # user should pass output_size explicitly if schema.inf == max_output: raise ValueError( "For operators with max_output == inf,\ user should pass num_output explicitly." ) output_names = get_name_list( output_prefix, max_output, max_output ) # There could be input-output inplace enforcement; replace the # output names with input ones if such enforcements exist for i in range(len(input_names)): for j in range(len(output_names)): if schema.inplace_enforced(i, j): output_names[j] = input_names[i] op = core.CreateOperator( op_type, input_names, output_names, **args ) device_option = args.get('device_option', core.DeviceOption(caffe2_pb2.CPU)) with core.DeviceScope(device_option): for i, input_blob in enumerate(inputs): ws.FeedBlob(input_names[i], input_blob) # RunOperator ws.RunOperatorOnce(op) output_values = [ws.FetchBlob(x) for x in output_names] return namedtupledict('output', output_names)(*output_values) return op_func Functional = _Functional()
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/functional.py
0.668988
0.186465
functional.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, workspace from future.utils import viewitems, viewkeys def recurrent_net( net, cell_net, inputs, initial_cell_inputs, links, timestep=None, scope=None, outputs_with_grads=(0,), recompute_blobs_on_backward=None, forward_only=False, ): ''' net: the main net operator should be added to cell_net: cell_net which is executed in a recurrent fasion inputs: sequences to be fed into the recurrent net. Currently only one input is supported. It has to be in a format T x N x (D1...Dk) where T is lengths of the sequence. N is a batch size and (D1...Dk) are the rest of dimentions initial_cell_inputs: inputs of the cell_net for the 0 timestamp. Format for each input is: (cell_net_input_name, external_blob_with_data) links: a dictionary from cell_net input names in moment t+1 and output names of moment t. Currently we assume that each output becomes an input for the next timestep. timestep: name of the timestep blob to be used. If not provided "timestep" is used. scope: Internal blobs are going to be scoped in a format <scope_name>/<blob_name> If not provided we generate a scope name automatically outputs_with_grads : position indices of output blobs which will receive error gradient (from outside recurrent network) during backpropagation recompute_blobs_on_backward: specify a list of blobs that will be recomputed for backward pass, and thus need not to be stored for each forward timestep. forward_only: if True, only forward steps are executed ''' assert len(inputs) == 1, "Only one input blob is supported so far" input_blobs = [str(i[0]) for i in inputs] initial_input_blobs = [str(x[1]) for x in initial_cell_inputs] op_name = net.NextName('recurrent') def s(name): # We have to manually scope due to our internal/external blob # relationships. scope_name = op_name if scope is None else scope return "{}/{}".format(str(scope_name), str(name)) # determine inputs that are considered to be references # it is those that are not referred to in inputs or initial_cell_inputs known_inputs = [str(b) for b in input_blobs + initial_input_blobs] known_inputs += [str(x[0]) for x in initial_cell_inputs] if timestep is not None: known_inputs.append(str(timestep)) references = [ core.BlobReference(b) for b in cell_net.Proto().external_input if b not in known_inputs] inner_outputs = list(cell_net.Proto().external_output) # These gradients are expected to be available during the backward pass inner_outputs_map = {o: o + '_grad' for o in inner_outputs} # compute the backward pass of the cell net if not forward_only: backward_ops, backward_mapping = core.GradientRegistry.GetBackwardPass( cell_net.Proto().op, inner_outputs_map) backward_mapping = {str(k): v for k, v in viewitems(backward_mapping)} backward_cell_net = core.Net("RecurrentBackwardStep") del backward_cell_net.Proto().op[:] if recompute_blobs_on_backward is not None: # Insert operators to re-compute the specified blobs. # They are added in the same order as for the forward pass, thus # the order is correct. recompute_blobs_on_backward = {str(b) for b in recompute_blobs_on_backward} for op in cell_net.Proto().op: if not recompute_blobs_on_backward.isdisjoint(set(op.output)): backward_cell_net.Proto().op.extend([op]) # This fires if other outputs than the declared # are computed by the ops that are recomputed assert set(op.output).issubset(recompute_blobs_on_backward) backward_cell_net.Proto().op.extend(backward_ops) # compute blobs used but not defined in the backward pass backward_ssa, backward_blob_versions = core.get_ssa( backward_cell_net.Proto()) undefined = core.get_undefined_blobs(backward_ssa) # also add to the output list the intermediate outputs of fwd_step that # are used by backward. ssa, blob_versions = core.get_ssa(cell_net.Proto()) scratches = [ blob for blob, ver in viewitems(blob_versions) if (ver > 0 and blob in undefined and blob not in cell_net.Proto().external_output) ] backward_cell_net.Proto().external_input.extend(scratches) backward_cell_net.Proto().type = 'simple' else: backward_cell_net = None all_inputs = [i[1] for i in inputs] + [ x[1] for x in initial_cell_inputs] + references all_outputs = [] cell_net.Proto().type = 'simple' # Internal arguments used by RecurrentNetwork operator # Links are in the format blob_name, recurrent_states, offset. # In the moment t we know that corresponding data block is at # t + offset position in the recurrent_states tensor forward_links = [] backward_links = [] # Aliases are used to expose outputs to external world # Format (internal_blob, external_blob, offset) # Negative offset stands for going from the end, # positive - from the beginning aliases = [] # States held inputs to the cell net recurrent_states = [] for cell_input, _ in initial_cell_inputs: cell_input = str(cell_input) # Recurrent_states is going to be (T + 1) x ... # It stores all inputs and outputs of the cell net over time. # Or their gradients in the case of the backward pass. state = s(cell_input + "_states") states_grad = state + "_grad" cell_output = links[str(cell_input)] forward_links.append((cell_input, state, 0)) forward_links.append((cell_output, state, 1)) aliases.append((state, cell_output + "_all", 1)) aliases.append((state, cell_output + "_last", -1)) all_outputs.extend([cell_output + "_all", cell_output + "_last"]) recurrent_states.append(state) if backward_cell_net is not None: backward_links.append((cell_output + "_grad", states_grad, 1)) backward_cell_net.Proto().external_input.append( str(cell_output) + "_grad") recurrent_input_grad = cell_input + "_grad" if not backward_blob_versions.get(recurrent_input_grad, 0): # If nobody writes to this recurrent input gradient, we need # to make sure it gets to the states grad blob after all. # We do this by using backward_links which triggers an alias # This logic is being used for example in a SumOp case backward_links.append( (backward_mapping[cell_input], states_grad, 0)) else: backward_links.append((recurrent_input_grad, states_grad, 0)) for input_t, input_blob in inputs: forward_links.append((str(input_t), str(input_blob), 0)) if backward_cell_net is not None: for input_t, input_blob in inputs: backward_links.append(( backward_mapping[str(input_t)], str(input_blob) + "_grad", 0 )) backward_cell_net.Proto().external_input.extend( cell_net.Proto().external_input) backward_cell_net.Proto().external_input.extend( cell_net.Proto().external_output) def unpack_triple(x): if x: a, b, c = zip(*x) return a, b, c return [], [], [] # Splitting to separate lists so we can pass them to c++ # where we ensemle them back link_internal, link_external, link_offset = unpack_triple(forward_links) alias_src, alias_dst, alias_offset = unpack_triple(aliases) recurrent_inputs = [str(x[1]) for x in initial_cell_inputs] # Make sure that recurrent gradients accumulate with internal gradients # (if a blob in the backward_cell_net receives gradient from both an # external connection as well as from within the backward_cell_net, # those gradients need to be added together, rather than one overwriting # the other) if backward_cell_net is not None: proto = backward_cell_net.Proto() operators = [] while len(proto.op) > 0: op = proto.op[-1] proto.op.remove(op) operators.append(op) for op in operators[::-1]: proto.op.extend([op]) for j, output_blob in enumerate(op.output): if output_blob in proto.external_input: # In place operation won't cause issues because it takes # existing value of a blob into account if output_blob in op.input: continue output_blob = core.BlobReference(output_blob) accum_blob = output_blob + "_accum" proto.op[-1].output[j] = str(accum_blob) backward_cell_net.Sum( [output_blob, accum_blob], [output_blob], ) def map_to_dual_list(m): return [str(x) for x in list(m.keys())] + \ [str(x) for x in list(m.values())] backward_args = {} if backward_cell_net is not None: backward_mapping_keys = set(viewkeys(backward_mapping)) backward_link_internal, backward_link_external, backward_link_offset = \ unpack_triple(backward_links) params = [x for x in references if x in backward_mapping_keys] param_grads = [ str(backward_mapping[x]) for x in references if x in backward_mapping_keys ] if recompute_blobs_on_backward is None: recompute_blobs_on_backward = set() backward_args = { 'param': [all_inputs.index(p) for p in params], 'backward_link_internal': [str(l) for l in backward_link_internal], 'backward_link_external': [str(l) for l in backward_link_external], 'backward_link_offset': backward_link_offset, 'outputs_with_grads': outputs_with_grads, 'recompute_blobs_on_backward': [ str(b) for b in recompute_blobs_on_backward ], 'param_grads': param_grads, } if len(backward_cell_net.Proto().op) != 0: backward_args['backward_step_net'] = backward_cell_net.Proto() results = net.RecurrentNetwork( all_inputs, all_outputs + [s("step_workspaces")], alias_src=alias_src, alias_dst=[str(a) for a in alias_dst], alias_offset=alias_offset, recurrent_states=recurrent_states, initial_recurrent_state_ids=[ all_inputs.index(i) for i in recurrent_inputs ], link_internal=[str(l) for l in link_internal], link_external=[str(l) for l in link_external], link_offset=link_offset, enable_rnn_executor=1, step_net=cell_net.Proto(), timestep="timestep" if timestep is None else str(timestep), **backward_args ) # Restore net type since 'rnn' is not recognized outside RNNs cell_net.Proto().type = 'simple' # The last output is a list of step workspaces, # which is only needed internally for gradient propogation return results[:-1] def set_rnn_executor_config(rnn_op, num_threads=None, max_cuda_streams=None): from caffe2.proto import caffe2_pb2 assert rnn_op.type in {'RecurrentNetwork', 'RecurrentNetworkGradient'} def add_arg(s, v): a = caffe2_pb2.Argument() a.name = "rnn_executor." + s a.i = v rnn_op.arg.extend([a]) if num_threads is not None: add_arg('num_threads', num_threads) if max_cuda_streams is not None: add_arg('max_cuda_streams', max_cuda_streams) def retrieve_step_blobs(net, prefix='rnn'): ''' Retrieves blobs from step workspaces (which contain intermediate recurrent network computation for each timestep) and puts them in the global workspace. This allows access to the contents of this intermediate computation in python. Returns the list of extracted blob names. net: the net from which the step workspace blobs should be extracted prefix: prefix to append to extracted blob names when placing them in the global workspace ''' count = 1 output_list = [] for op in net.Proto().op: if op.type == "RecurrentNetwork": blob_name = prefix + "_" + str(count) count = count + 1 scratch_workspaces_blob_name = op.output[-1] workspace.RunOperatorOnce( core.CreateOperator( "RecurrentNetworkBlobFetcher", [scratch_workspaces_blob_name], [blob_name], prefix=prefix ) ) output_list += workspace.FetchBlob(blob_name).tolist() return output_list
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/recurrent.py
0.695958
0.422534
recurrent.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import caffe2.python._import_c_extension as C from caffe2.python import core from caffe2.proto import caffe2_pb2 import os from subprocess import Popen, PIPE import errno class NNModule(object): def __init__(self, net=None, device_map=None): if net is not None: serialized_proto = None if isinstance(net, core.Net): serialized_proto = net.Proto().SerializeToString() elif isinstance(net, caffe2_pb2.NetDef): serialized_proto = net.SerializeToString() # Distributed if device_map is not None: serialized_device_map = {} for k in device_map: serialized_device_map[k] = device_map[k].SerializeToString() self._NNModule = C.NNModuleFromProtobufDistributed(serialized_proto, serialized_device_map) # Default elif serialized_proto: self._NNModule, self._OpList = C.NNModuleFromProtobuf(serialized_proto) else: raise Exception( "NNModule can be constructed with core.Net or caffe2_pb2.NetDef types" ) else: self._NNModule = C.NNModule() @property def dataFlow(self): return self._NNModule.dataFlow() @property def controlFlow(self): return self._NNModule.getExecutionOrder() @property def nodes(self): return self._NNModule.dataFlow().nodes @property def operators(self): return self._NNModule.dataFlow().operators @property def tensors(self): return self._NNModule.dataFlow().tensors def createNode(self, val): return self._NNModule.dataFlow().createNode(val) def deleteNode(self, node): return self._NNModule.dataFlow().deleteNode(node) def createEdge(self, a, b): return self._NNModule.dataFlow().createEdge(a, b) def deleteEdge(self, a, b=None): if b: self._NNModule.dataFlow().deleteEdge(a, b) else: self._NNModule.dataFlow().deleteEdge(a) def replaceNode(self, old_node, new_node): return self._NNModule.dataFlow().replaceNode(old_node, new_node) def replaceProducer(self, tensor, new_producer): C.replaceProducer(tensor, new_producer) def replaceAllUsesWith(self, old_tensor, new_tensor): C.replaceAllUsesWith(old_tensor, new_tensor) def replaceAsConsumer(self, old_consumer, new_consumer): C.replaceAsConsumer(old_consumer, new_consumer) def replaceSubgraph(self, subgraph, new_node, inputs, outputs): self._NNModule.replaceSubgraph(subgraph, new_node, inputs, outputs) def deleteSubgraph(self, subgraph): self._NNModule.deleteSubgraph(subgraph) def createUniqueDataNode(self, prefix="_unique"): return self._NNModule.createUniqueDataNode(prefix) def convertToCaffe2Proto(self, old_proto=None): if not old_proto: old_proto = caffe2_pb2.NetDef() output = self._NNModule.convertToCaffe2Proto(old_proto) new_proto = caffe2_pb2.NetDef() new_proto.ParseFromString(output) return new_proto def match(self, pattern): for n in self.dataFlow.getMutableNodes(): m = C.matchSubgraph(n, pattern) if m: yield m def render(s): s = str(s) cmd_exists = lambda x: any( os.access(os.path.join(path, x), os.X_OK) for path in os.environ["PATH"].split(os.pathsep) ) if cmd_exists("graph-easy"): p = Popen("graph-easy", stdin=PIPE) try: p.stdin.write(s.encode("utf-8")) except IOError as e: if e.errno == errno.EPIPE or e.errno == errno.EINVAL: pass else: # Raise any other error. raise p.stdin.close() p.wait() else: print(s) NeuralNetOperator = C.NeuralNetOperator Operator = C.NeuralNetOperator NeuralNetData = C.NeuralNetData Data = C.NeuralNetData NNSubgraph = C.NNSubgraph NNMatchGraph = C.NNMatchGraph Graph = C.Graph Annotation = C.Annotation
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/nomnigraph.py
0.53607
0.162048
nomnigraph.py
pypi
from __future__ import absolute_import, division, print_function, unicode_literals from collections import defaultdict import caffe2.python.nomnigraph as ng from caffe2.python import core, utils def transpose_network(nn): """ Convert all Convolutions operators which are in the NCHW order to NHWC order and also transform their inputs and outputs so that the rest of the graph is not affected. """ # track the incoming tensors into NHWC2NCHW operators incoming = {} # output tensor -> input tensor # track outgoing tensors from NCHW2NHWC operators outgoing = defaultdict(lambda: []) # input tensor -> list of operators dfg = nn.dataFlow orig_nodes = [x for x in nn.nodes] for node in orig_nodes: if node.isOperator() and node.name == "Conv": arg_dict = utils.ArgsToDict(node.annotation.operator_def.arg) # a missing "order" argument implies default NCHW order if "order" in arg_dict and arg_dict["order"] != "NCHW": continue inputs = [x for x in node.inputs] assert len(inputs) >= 2, "Conv operator should have two inputs" outputs = [x for x in node.outputs] assert len(outputs) >= 1, "Conv operator should have an output" for inp in inputs: nn.deleteEdge(inp, node) for outp in outputs: nn.deleteEdge(node, outp) # only the first two inputs of the Convolution the data and the # weights need to be transformed for idx in range(2): new_inp = nn.createUniqueDataNode(inputs[idx].name) transp = dfg.createNode(ng.NeuralNetOperator("NCHW2NHWC")) nn.createEdge(inputs[idx], transp) nn.createEdge(transp, new_inp) outgoing[inputs[idx]].append(transp) inputs[idx] = new_inp for idx in range(len(outputs)): new_outp = nn.createUniqueDataNode(outputs[idx].name) transp = dfg.createNode(ng.NeuralNetOperator("NHWC2NCHW")) nn.createEdge(transp, outputs[idx]) nn.createEdge(new_outp, transp) incoming[outputs[idx]] = new_outp outputs[idx] = new_outp # create a new Convolution with identical arguments as the original # one except for the order arg_dict["order"] = "NHWC" new_node = nn.createNode(core.CreateOperator("Conv", [], [], **arg_dict)) for inp in inputs: nn.createEdge(inp, new_node) for outp in outputs: nn.createEdge(new_node, outp) nn.deleteNode(node) # finally, we will compress # case 1: # X -> NHWC2NCHW -> Y -> NCHW2NHWC -> Z1 ; Y -> NCHW2NHWC -> Z2 # to: # X -> NHWC2NCHW -> Y and replace Z1 with X and replace Z2 with X # And case 2: # Y -> NCHW2NHWC -> Z1 ; Y -> NCHW2NHWC -> Z2 # to: # Y -> NCHW2NHWC -> Z1 and replace Z2 with Z1 # orig_tensor is one of the tensors in the original graph in NCHW order for orig_tensor in outgoing: # new_tensor is identical to orig_tensor except the order is NHWC if orig_tensor in incoming: # case 1 (see above) new_tensor = incoming[orig_tensor] else: # case 2 (see above) out_ops = outgoing[orig_tensor] new_tensor = out_ops[0].outputs[0] outgoing[orig_tensor] = out_ops[1:] for opnode in outgoing[orig_tensor]: # there should only be one output, so this iteration is overkill for out in opnode.outputs: nn.replaceAllUsesWith(out, new_tensor) nn.deleteNode(out) nn.deleteNode(opnode)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/nomnigraph_transformations.py
0.827689
0.472014
nomnigraph_transformations.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.proto.caffe2_pb2 import OperatorDef, NetDef from caffe2.python.checkpoint import Job from caffe2.python.core import Net, ExecutionStep, Plan from caffe2.python.task import Task, TaskGroup, WorkspaceType, TaskOutput from collections import defaultdict from contextlib import contextmanager from copy import copy from future.utils import viewkeys from itertools import chain from six import binary_type, text_type class Visitor(object): @classmethod def register(cls, Type): if not(hasattr(cls, 'visitors')): cls.visitors = {} else: assert Type not in cls.visitors, \ '{} already registered!'.format(Type) def _register(func): cls.visitors[Type] = func return func return _register def __call__(self, obj, *args, **kwargs): if obj is None: return Type = type(obj) if Type not in self.__class__.visitors: raise TypeError('%s: unsupported object type: %s' % ( self.__class__.__name__, Type)) func = self.__class__.visitors[Type] return func(self, obj, *args, **kwargs) class Analyzer(Visitor): PREFIXES_TO_IGNORE = {'distributed_ctx_init'} def __init__(self): self.workspaces = defaultdict(lambda: defaultdict(lambda: 0)) self.workspace_ctx = [] @property def workspace(self): return self.workspace_ctx[-1] @contextmanager def set_workspace(self, node=None, ws=None, do_copy=False): if ws is not None: ws = ws elif node is not None: ws = self.workspaces[str(node)] else: ws = self.workspace if do_copy: ws = copy(ws) self.workspace_ctx.append(ws) yield ws del self.workspace_ctx[-1] def define_blob(self, blob): self.workspace[blob] += 1 def need_blob(self, blob): if any(blob.startswith(p) for p in Analyzer.PREFIXES_TO_IGNORE): return assert blob in self.workspace, 'Blob undefined: %s' % blob @Analyzer.register(OperatorDef) def analyze_op(analyzer, op): for x in op.input: analyzer.need_blob(x) for x in op.output: analyzer.define_blob(x) @Analyzer.register(Net) def analyze_net(analyzer, net): for x in net.Proto().op: analyzer(x) @Analyzer.register(ExecutionStep) def analyze_step(analyzer, step): proto = step.Proto() with analyzer.set_workspace(do_copy=proto.create_workspace): if proto.report_net: with analyzer.set_workspace(do_copy=True): analyzer(step.get_net(proto.report_net)) all_new_blobs = set() substeps = step.Substeps() + [step.get_net(n) for n in proto.network] for substep in substeps: with analyzer.set_workspace( do_copy=proto.concurrent_substeps) as ws_in: analyzer(substep) if proto.should_stop_blob: analyzer.need_blob(proto.should_stop_blob) if proto.concurrent_substeps: new_blobs = set(viewkeys(ws_in)) - set(viewkeys(analyzer.workspace)) assert len(all_new_blobs & new_blobs) == 0, ( 'Error: Blobs created by multiple parallel steps: %s' % ( ', '.join(all_new_blobs & new_blobs))) all_new_blobs |= new_blobs for x in all_new_blobs: analyzer.define_blob(x) @Analyzer.register(Task) def analyze_task(analyzer, task): # check that our plan protobuf is not too large (limit of 64Mb) step = task.get_step() plan = Plan(task.node) plan.AddStep(step) proto_len = len(plan.Proto().SerializeToString()) assert proto_len < 2 ** 26, ( 'Due to a protobuf limitation, serialized tasks must be smaller ' 'than 64Mb, but this task has {} bytes.' % proto_len) is_private = task.workspace_type() != WorkspaceType.GLOBAL with analyzer.set_workspace(do_copy=is_private): analyzer(step) @Analyzer.register(TaskGroup) def analyze_task_group(analyzer, tg): for task in tg.tasks_by_node().tasks(): with analyzer.set_workspace(node=task.node): analyzer(task) @Analyzer.register(Job) def analyze_job(analyzer, job): analyzer(job.init_group) analyzer(job.epoch_group) def analyze(obj): """ Given a Job, visits all the execution steps making sure that: - no undefined blobs will be found during execution - no blob with same name is defined in concurrent steps """ Analyzer()(obj) class Text(object): def __init__(self): self._indent = 0 self._lines_in_context = [0] self.lines = [] @contextmanager def context(self, text): if text is not None: self.add('with %s:' % text) self._indent += 4 self._lines_in_context.append(0) yield if text is not None: if self._lines_in_context[-1] == 0: self.add('pass') self._indent -= 4 del self._lines_in_context[-1] def add(self, text): self._lines_in_context[-1] += 1 self.lines.append((' ' * self._indent) + text) def __str__(self): return '\n'.join(self.lines) class Printer(Visitor, Text): def __init__(self, factor_prefixes=False, c2_syntax=True): super(Visitor, self).__init__() super(Text, self).__init__() self.factor_prefixes = factor_prefixes self.c2_syntax = c2_syntax self.c2_net_name = None def _sanitize_str(s): if isinstance(s, text_type): sanitized = s elif isinstance(s, binary_type): sanitized = s.decode('ascii', errors='ignore') else: sanitized = str(s) if len(sanitized) < 64: return "'%s'" % sanitized else: return "'%s'" % sanitized[:64] + '...<+len=%d>' % (len(sanitized) - 64) def _arg_val(arg): if arg.HasField('f'): return str(arg.f) if arg.HasField('i'): return str(arg.i) if arg.HasField('s'): return _sanitize_str(arg.s) if arg.floats: return str(list(arg.floats)) if arg.ints: return str(list(arg.ints)) if arg.strings: return str([_sanitize_str(s) for s in arg.strings]) return '[]' def commonprefix(m): "Given a list of strings, returns the longest common prefix" if not m: return '' s1 = min(m) s2 = max(m) for i, c in enumerate(s1): if c != s2[i]: return s1[:i] return s1 def format_value(val): if isinstance(val, list): return '[%s]' % ', '.join("'%s'" % str(v) for v in val) else: return str(val) def factor_prefix(vals, do_it): vals = [format_value(v) for v in vals] prefix = commonprefix(vals) if len(vals) > 1 and do_it else '' joined = ', '.join(v[len(prefix):] for v in vals) return '%s[%s]' % (prefix, joined) if prefix else joined def call(op, inputs=None, outputs=None, factor_prefixes=False): if not inputs: inputs = '' else: inputs_v = [a for a in inputs if not isinstance(a, tuple)] inputs_kv = [a for a in inputs if isinstance(a, tuple)] inputs = ', '.join( x for x in chain( [factor_prefix(inputs_v, factor_prefixes)], ('%s=%s' % kv for kv in inputs_kv), ) if x ) call = '%s(%s)' % (op, inputs) return call if not outputs else '%s = %s' % ( factor_prefix(outputs, factor_prefixes), call) def format_device_option(dev_opt): if not dev_opt or not ( dev_opt.device_type or dev_opt.device_id or dev_opt.node_name): return None return call( 'DeviceOption', [dev_opt.device_type, dev_opt.device_id, "'%s'" % dev_opt.node_name]) @Printer.register(OperatorDef) def print_op(text, op): args = [(a.name, _arg_val(a)) for a in op.arg] dev_opt_txt = format_device_option(op.device_option) if dev_opt_txt: args.append(('device_option', dev_opt_txt)) if text.c2_net_name: text.add(call( text.c2_net_name + '.' + op.type, [list(op.input), list(op.output)] + args)) else: text.add(call( op.type, list(op.input) + args, op.output, factor_prefixes=text.factor_prefixes)) for arg in op.arg: if arg.HasField('n'): with text.context('arg: %s' % arg.name): text(arg.n) @Printer.register(NetDef) def print_net_def(text, net_def): if text.c2_syntax: text.add(call('core.Net', ["'%s'" % net_def.name], [net_def.name])) text.c2_net_name = net_def.name else: text.add('# net: %s' % net_def.name) for op in net_def.op: text(op) if text.c2_syntax: text.c2_net_name = None @Printer.register(Net) def print_net(text, net): text(net.Proto()) def _get_step_context(step): proto = step.Proto() if proto.should_stop_blob: return call('loop'), False if proto.num_iter and proto.num_iter != 1: return call('loop', [proto.num_iter]), False if proto.num_concurrent_instances > 1: return ( call('parallel', [('num_instances', proto.num_concurrent_instances)]), len(step.Substeps()) > 1) concurrent = proto.concurrent_substeps and len(step.Substeps()) > 1 if concurrent: return call('parallel'), True if proto.report_net: return call('run_once'), False return None, False @Printer.register(ExecutionStep) def print_step(text, step): proto = step.Proto() step_ctx, do_substep = _get_step_context(step) with text.context(step_ctx): if proto.report_net: with text.context(call('report_net', [proto.report_interval])): text(step.get_net(proto.report_net)) substeps = step.Substeps() + [step.get_net(n) for n in proto.network] for substep in substeps: sub_proto = ( substep.Proto() if isinstance(substep, ExecutionStep) else None) if sub_proto is not None and sub_proto.run_every_ms: substep_ctx = call( 'reporter', [str(substep), ('interval_ms', sub_proto.run_every_ms)]) elif do_substep: title = ( 'workspace' if sub_proto is not None and sub_proto.create_workspace else 'step') substep_ctx = call(title, [str(substep)]) else: substep_ctx = None with text.context(substep_ctx): text(substep) if proto.should_stop_blob: text.add(call('yield stop_if', [proto.should_stop_blob])) def _print_task_output(x): assert isinstance(x, TaskOutput) return 'Output[' + ', '.join(str(x) for x in x.names) + ']' @Printer.register(Task) def print_task(text, task): outs = ', '.join(_print_task_output(o) for o in task.outputs()) context = [('node', task.node), ('name', task.name), ('outputs', outs)] with text.context(call('Task', context)): text(task.get_step()) @Printer.register(TaskGroup) def print_task_group(text, tg, header=None): with text.context(header or call('TaskGroup')): for task in tg.tasks_by_node().tasks(): text(task) @Printer.register(Job) def print_job(text, job): text(job.init_group, 'Job.current().init_group') text(job.epoch_group, 'Job.current().epoch_group') with text.context('Job.current().stop_conditions'): for out in job.stop_conditions: text.add(_print_task_output(out)) text(job.download_group, 'Job.current().download_group') text(job.exit_group, 'Job.current().exit_group') def to_string(obj, **kwargs): """ Given a Net, ExecutionStep, Task, TaskGroup or Job, produces a string with detailed description of the execution steps. """ printer = Printer(**kwargs) printer(obj) return str(printer) def debug_net(net): """ Given a Net, produce another net that logs info about the operator call before each operator execution. Use for debugging purposes. """ assert isinstance(net, Net) debug_net = Net(str(net)) assert isinstance(net, Net) for op in net.Proto().op: text = Text() print_op(op, text) debug_net.LogInfo(str(text)) debug_net.Proto().op.extend([op]) return debug_net
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/net_printer.py
0.697506
0.213972
net_printer.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, workspace from caffe2.python.dataio import Reader, Writer from caffe2.python.schema import ( Struct, from_blob_list, from_column_list, InitEmptyRecord) import numpy as np class _DatasetReader(Reader): def __init__(self, dataset, name, batch_size=1, enforce_batch_size=False): """Don't call this directly. Instead, use dataset.reader()""" Reader.__init__(self, dataset.content()) self.dataset = dataset self.name = name or (dataset.name + '_cursor') self.batch_size = batch_size self.enforce_batch_size = enforce_batch_size self.cursor = None def setup_ex(self, init_net, exit_net): if self.cursor is None: self.cursor = init_net.CreateTreeCursor( [], init_net.NextScopedBlob(self.name), fields=self.dataset.fields) def read(self, read_net): assert self.cursor, 'setup not called.' content = self.dataset.content() with core.NameScope(read_net.NextName(self.name)): fields = read_net.ReadNextBatch( [self.cursor] + content.field_blobs(), content.field_names(), batch_size=self.batch_size, enforce_batch_size=self.enforce_batch_size) fields = core.output_to_list(fields) return (read_net.IsEmpty([fields[0]]), fields) def reset(self, net): net.ResetCursor([self.cursor], []) class _DatasetRandomReader(Reader): def __init__(self, dataset, name, indices, batch_size=1, loop_over=False, enforce_batch_size=False): """Don't call this directly. Instead, use dataset.random_reader()""" Reader.__init__(self, dataset.content()) self.dataset = dataset self.cursor = None self.name = name or (dataset.name + '_cursor') self.indices = indices self.batch_size = batch_size self.loop_over = loop_over self.enforce_batch_size = enforce_batch_size def setup_ex(self, init_net, exit_net): if self.cursor is None: self.cursor = init_net.CreateTreeCursor( [], init_net.NextScopedBlob(self.name), fields=self.dataset.fields) def reset(self, net): net.ResetCursor([self.cursor], []) def computeoffset(self, net): self.reset(net) offsets = net.ComputeOffset( [self.cursor] + self.dataset.content().field_blobs(), 'offsets') self.offsets = offsets def sort_and_shuffle(self, net, sort_by_field=None, shuffle_size=1, batch_size=1): # no sorting by default content = self.dataset.content() sort_by_field_idx = -1 if sort_by_field: assert sort_by_field in content.field_names(), ( 'Must be valid field.') sort_by_field_idx = content.field_names().index(sort_by_field) self.reset(net) indices = net.SortAndShuffle( [self.cursor] + content.field_blobs(), 'indices', sort_by_field_idx=sort_by_field_idx, shuffle_size=shuffle_size, batch_size=batch_size) self.indices = indices def read(self, read_net): assert self.cursor, 'setup_ex not called' assert self.indices, 'sort_and_shuffle not called' assert self.offsets, 'computeoffset not called' content = self.dataset.content() with core.NameScope(read_net.NextName(self.name)): fields = read_net.ReadRandomBatch( [self.cursor, self.indices, self.offsets] + ( content.field_blobs()), content.field_names(), batch_size=self.batch_size, enforce_batch_size=self.enforce_batch_size, loop_over=self.loop_over) fields = core.output_to_list(fields) return (read_net.IsEmpty([fields[0]]), fields) class _DatasetWriter(Writer): def __init__(self, content): """Don't call this directly. Use dataset.writer() instead.""" self._content = content self.mutex = None def setup_ex(self, init_net, exit_net): if self.mutex is None: self.mutex = init_net.CreateMutex([]) def write(self, writer_net, fields): """ Add operations to `net` that append the blobs in `fields` to the end of the dataset. An additional operator will also be added that checks the consistency of the data in `fields` against the dataset schema. Args: writer_net: The net that will contain the Append operators. fields: A list of BlobReference to be appeneded to this dataset. """ assert self.mutex is not None, 'setup not called.' field_blobs = self._content.field_blobs() assert len(fields) == len(field_blobs), ( 'Expected %s fields, got %s.' % (len(field_blobs), len(fields))) writer_net.CheckDatasetConsistency( fields, [], fields=self._content.field_names()) writer_net.AtomicAppend( [self.mutex] + field_blobs + list(fields), field_blobs) def commit(self, finish_net): """Commit is a no-op for an in-memory dataset.""" pass def Const(net, value, dtype=None, name=None): """ Create a 'constant' by first creating an external input in the given net, and then feeding the corresponding blob with its provided value in the current workspace. The name is automatically generated in order to avoid clashes with existing blob names. """ assert isinstance(net, core.Net), 'net must be a core.Net instance.' value = np.array(value, dtype=dtype) blob = net.AddExternalInput(net.NextName(prefix=name)) workspace.FeedBlob(str(blob), value) return blob def execution_step_with_progress(name, init_net, substeps, rows_read): # progress reporter report_net = core.Net('report_net') report_net.Print([rows_read], []) return core.execution_step( name, substeps, report_net=report_net, concurrent_substeps=True, report_interval=5) class Dataset(object): """Represents an in-memory dataset with fixed schema. Use this to store and iterate through datasets with complex schema that fit in memory. Iterating through entries of this dataset is very fast since the dataset is stored as a set of native Caffe2 tensors, thus no type conversion or deserialization is necessary. """ def __init__(self, fields, name=None): """Create an un-initialized dataset with schema provided by `fields`. Before this dataset can be used, it must be initialized, either by `init_empty` or `init_from_dataframe`. Args: fields: either a schema.Struct or a list of field names in a format compatible with the one described in schema.py. name: optional name to prepend to blobs that will store the data. """ assert isinstance(fields, list) or isinstance(fields, Struct), ( 'fields must be either a Struct or a list of raw field names.') if isinstance(fields, list): fields = from_column_list(fields) self.schema = fields self.fields = fields.field_names() self.field_types = fields.field_types() self.name = name or 'dataset' self.field_blobs = fields.field_blobs() if fields.has_blobs() else None def trim(self, net, multiple_of): """ Trims the contents of this dataset so that the number of records is multiple of the given argument. """ net.TrimDataset( self.field_blobs, self.field_blobs, fields=self.fields, multiple_of=multiple_of) def init_empty(self, init_net): """Initialize the blobs for this dataset with empty values. Empty arrays will be immediately fed into the current workspace, and `init_net` will take those blobs as external inputs. """ self.field_blobs = InitEmptyRecord( init_net, self.schema.clone_schema()).field_blobs() def init_from_dataframe(self, net, dataframe): """Initialize the blobs for this dataset from a Pandas dataframe. Each column of the dataframe will be immediately fed into the current workspace, and the `net` will take this blobs as external inputs. """ assert len(self.fields) == len(dataframe.columns) self.field_blobs = [ Const(net, dataframe.as_matrix([col]).flatten(), name=field) for col, field in enumerate(self.fields)] def get_blobs(self): """ Return the list of BlobReference pointing to the blobs that contain the data for this dataset. """ assert self return self.field_blobs def content(self): """ Return a Record of BlobReferences pointing to the full content of this dataset. """ return from_blob_list(self.schema, self.field_blobs) def field_names(self): """Return the list of field names for this dataset.""" return self.fields def field_types(self): """ Return the list of field dtypes for this dataset. If a list of strings, not a schema.Struct, was passed to the constructor, this will return a list of dtype(np.void). """ return self.field_types def reader(self, init_net=None, cursor_name=None, batch_size=1, enforce_batch_size=False): """Create a Reader object that is used to iterate through the dataset. This will append operations to `init_net` that create a TreeCursor, used to iterate through the data. NOTE: Currently, it is not safe to append to a dataset while reading. Args: init_net: net that will be run once to create the cursor. cursor_name: optional name for the blob containing a pointer to the cursor. batch_size: how many samples to read per iteration. Returns: A _DatasetReader that can be used to create operators that will iterate through the dataset. """ assert self.field_blobs, 'Dataset not initialized.' reader = _DatasetReader(self, cursor_name, batch_size, enforce_batch_size) if init_net is not None: reader.setup_ex(init_net, None) return reader def random_reader(self, init_net=None, indices=None, cursor_name=None, batch_size=1, loop_over=False, enforce_batch_size=False): """Create a Reader object that is used to iterate through the dataset. NOTE: The reader order depends on the order in indices. Args: init_net: net that will be run once to create the cursor. indices: blob of reading order cursor_name: optional name for the blob containing a pointer to the cursor. batch_size: how many samples to read per iteration. loop_over: repeat the dataset indefinitely (in the same order) Returns: A DatasetReader that can be used to create operators that will iterate through the dataset according to indices. """ assert self.field_blobs, 'Dataset not initialized.' reader = _DatasetRandomReader( self, cursor_name, indices, batch_size, loop_over, enforce_batch_size) if init_net is not None: reader.setup_ex(init_net, None) return reader def writer(self, init_net=None): """Create a Writer that can be used to append entries into the dataset. NOTE: Currently, it is not safe to append to a dataset while reading from it. NOTE: Currently implementation of writer is not thread safe. TODO: fixme Args: init_net: net that will be run once in order to create the writer. (currently not used) """ assert self.field_blobs, 'Dataset not initialized.' writer = _DatasetWriter(self.content()) if init_net is not None: writer.setup_ex(init_net, None) return writer
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/dataset.py
0.842637
0.223801
dataset.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, context from caffe2.python.task import Task, TaskGroup from caffe2.python.control_ops_util import add_if_op, add_while_op @context.define_context() class NetBuilder(object): """ Scope-driven mechanism for building nets, loops and conditional blocks. Arguments: name: NetBuilder's name initial_scope: list of blobs that are available for reading/writing Example: from caffe2.python.net_builder import NetBuilder, ops with NetBuilder() as nb: c = ops.Const(5) d = ops.Const(0) with ops.loop(): ops.stop_if(ops.LE([c, ops.Const(0)])) ops.Add([c, ops.Const(-1)], [c]) with ops.If(ops.GE([c, ops.Const(3)])): ops.Add([d, ops.Const(10)], [d]) ops.Print(c, []) ops.Print(d, []) step = core.to_execution_step(nb) """ def __init__(self, name=None, initial_scope=None, _stop_blob_required=False, _stop_blob=None, _fullname=None, _use_control_ops=False): parent = NetBuilder.current(required=False) assert not _fullname or not name, 'Cannot set both _fullname and name' assert not _use_control_ops or \ (not _stop_blob_required and not _stop_blob), \ 'Stop blobs are not used with control operators' self.name = _fullname or '/'.join( n for n in (parent.name if parent else None, name) if n ) self._frozen = False self._current_net = None self._children = [] if parent: # make sure parent has an up to date lexical scope computed parent._update_lexical_scope() self._init_lexical_scope = set(parent._lexical_scope) if parent else set() if initial_scope: self._init_lexical_scope |= set([str(b) for b in initial_scope]) self._lexical_scope = set(self._init_lexical_scope) self._stop_blob = _stop_blob self._stop_blob_required = _stop_blob_required self._use_control_ops = _use_control_ops def stop_blob(self): """ Returns the BlobReference to the stop_blob of this NetBuilder. If one is not yet available, creates one. This function assumes that the stop_blob() will be used immediatelly in the current net, so it doesn't initialize it if the current net is the first of the builder. """ assert not self._use_control_ops, \ 'Stop blobs are not used with control operators' if self._stop_blob is None: net = self.current_net() self._stop_blob = core.BlobReference( net.NextName('stop_blob'), net=net) net.Const(False, blob_out=self._stop_blob) if self._current_net != self._children[0]: self._children.insert(0, core.Net('stop_blob_init')) self._children[0].Const(False, blob_out=self._stop_blob) return self._stop_blob def stop_if(self, blob): assert not self._use_control_ops, \ 'Stop blobs are not used with control operators' stop_blob = self.stop_blob() ops.Or([stop_blob, blob], [stop_blob]) self._current_net = None def _assert_mutable(self): assert not self._frozen, ( 'This NetBuilder (%s) has been built already.' % self.name) def _update_lexical_scope(self): """ Updates lexical scope based on the current list of children. Lexical scope contains names of blobs that are currently available and were introduced in the net builder """ self._lexical_scope = set(self._init_lexical_scope) for child in self._children: if isinstance(child, core.Net): self._lexical_scope |= child.UsedBlobNames() elif isinstance(child, NetBuilder) and child._use_control_ops: self._lexical_scope |= child._lexical_scope def _reset_children(self): self._current_net = None self._children = [] self._lexical_scope = set(self._init_lexical_scope) def add(self, child): self._assert_mutable() if self._use_control_ops: assert isinstance(child, core.Net) or ( isinstance(child, NetBuilder) and child._use_control_ops), \ "Expected Net or NetBuilder with control ops" self._current_net = None self._children.append(child) # to-do : check it's not a dag net if isinstance(child, core.Net): self._current_net = child self._update_lexical_scope() return child def current_net(self, name=None): self._assert_mutable() if self._current_net is None or name is not None: self.add(core.Net(name)) return self._current_net def freeze(self): for child in self._children: if hasattr(child, 'freeze'): child.freeze() self._current_net = None self._frozen = True def get(self): self.freeze() return self._children def __exit__(self, etype, *args): if self._use_control_ops and len(self._children) > 0: _children = self._children self._reset_children() merged_net = NetBuilder.merge_nets( _children, self._lexical_scope) assert merged_net, "Expected a non-empty merge of children" self._children = [merged_net] self.freeze() if etype is not None: return assert (not self._stop_blob_required) or self._stop_blob is not None, ( 'This NetBuilder (%s) requires a stop condition ' % self.name + 'to be set with `stop` or `stop_if`') @staticmethod def merge_nets(nets_or_builders, outer_blob_names): # Only nets or builders with control ops are allowed. # Need to pay attention to external outputs, e.g. # ... # IfNet1 (cond_blob): # (Net1) # X = 1 # IfNet2 (...): # X = X + 1 # ... # In this example there're two children in then branch of IfNet1: # a subnet Net1 that creates blob X and sets its value to one, and # a net builder IfNet2 that (conditionally) increments X. # From IfNet2's point of view X is an external input # and output blob, it will be put into IfNet2 net's external_output. # At the same time, from the point of view of IfNet1 X is purely local. # Net.AppendNet just merges external outputs of the networks, so # without checking this the result of Net1.AppendNet(IfNet2's net) # would have blob X in external_output net = None for n in nets_or_builders: cur = None if isinstance(n, NetBuilder): assert n._use_control_ops, \ "Merging of NetBuilder supported only for control ops" nets = n.get() assert len(nets) == 1 and isinstance(nets[0], core.Net), \ "Invalid control op net builder" cur = nets[0] else: assert isinstance(n, core.Net) cur = n if net: net.AppendNet(cur) else: net = cur if net: # correct external output external_outputs = [o for o in net.Proto().external_output if o in outer_blob_names] net.Proto().external_output[:] = external_outputs return net def __str__(self): return self.name or 'Un-named NetBuilder' class Operations(object): """ Operations to be used in the context of a NetBuilder. """ def net(self, net=None, name=None): """ Retrieves the current net, or add a new net to the builder. Args: net: If provided, add the given net to the active builder. Else, returns the current Net or creates a new one as needed. name: if provided, creates a new Net with given name and makes it the new current net of the active builder. Cannot be provided if net is provided. """ assert name is None or net is None, ( 'Cannot provide both `net` and `name`.') if net is not None: NetBuilder.current().add(net) return net return NetBuilder.current().current_net(name=name) def __getattr__(self, op_type): """ Adds an operator call to the currently active Net. """ if op_type.startswith('__'): raise AttributeError() # We want hasattr to work properly even if no context is active. if NetBuilder.current(required=False) is None: raise AttributeError('No active NetBuilder.') return getattr(self.net(), op_type) def task_group(self): """ Creates a local task group which will execute as the next step of the current NetBuilder. """ from caffe2.python import task group = NetBuilder.current() with task.Cluster(): with task.Node('local'): tg = task.TaskGroup() group.add(tg) return tg def stop(self): """ Stop execution of the current execution step. Example: ops.Print(a, 0) ops.stop() ops.Print(b, 0) In the example, 'b' will never be printed. """ return self.stop_if(ops.Const(True)) def stop_if(self, blob): """ Stop execution of the current execution step if the condition `blob` is met. Example: ops.Print(a, 0) ops.stop_if(ops.LE([x, ops.Const(0)])) ops.Print(b, 0) In the example, 'b' will only be printed if the value of scalar tensor 'x' is greater than 0. """ return NetBuilder.current().stop_if(blob) def loop(self, iters=None, name=None): """ Creates a NetBuilder that will execute in a loop as the next step of the current NetBuilder. If `iters` is provided, the loop will execute for `iters` iterations and then stop. `iters` can be a constant or a BlobReference. If `iters` is not provided, the loop will execute until `ops.stop` or `ops.stop_if` is called. Examples: a = ops.Const(5) with ops.loop(): ops.stop_if(ops.LE([a, ops.Const(0)])) ops.Print(a, 0) ops.Add([a, ops.Const(-1)], [a]) Above, 'a' will be printed 5 times, with values 5 to 1. with ops.loop(10) as loop: ops.LogInfo(loop.iter()) This will print the numbers from 0 to 9. x = ops.Add([ops.Const(10), ops.Const(10)]) with ops.loop(x) as loop: ops.LogInfo(loop.iter()) This will print the numbers from 0 to 19. """ return NetBuilder.current().add(_Loop(iters, name=name)) def stop_guard(self, has_stopped_blob=None, name=None): """ Creates a NetBuilder that will execute once as the next step of the current NetBuilder. After execution, a bool tensor will indicate whether the inner execution was halted with `stop` or `stop_if`. Example: a = ops.Const(True) with ops.stop_guard() as sg1: ops.stop_if(a) ops.Print(ops.Const('did not stop')) b = ops.Const(False) with ops.stop_guard() as sg2: ops.stop_if(b) ops.Print(ops.Const('did not stop')) ops.Print(sg1.has_stopped(), []) ops.Print(sg2.has_stopped(), []) In the example, 'did not stop' will be printed once, followed by True and False. """ return NetBuilder.current().add( _StopGuard(has_stopped_blob=has_stopped_blob, name=name)) def If(self, cond, name=None): """ Creates a NetBuilder that will execute once as the next step of the current NetBuilder if the blob `cond` is True. Example: with ops.If(ops.Const(True)): ops.Print(ops.Const('Will print')) with ops.If(ops.Const(False)): ops.Print(ops.Const('Wont print')) The example will print 'Will print' once. """ return NetBuilder.current().add(_RunIf(cond, name=name)) def IfNet(self, cond, name=None): """ Same as If, but uses 'If' operator instead of execution step logic """ return NetBuilder.current().add(_RunIfNet(cond, name=name)) def Else(self, name=None): """ Else branch of IfNet, has to be specified immediately after IfNet. Example: with ops.IfNet(ops.LT([x, y])): ... with ops.Else(): ... """ return _RunElseNet(name=name) def WhileNet(self, name=None): """ NetBuilder for 'While' control operator """ return NetBuilder.current().add(_RunWhileNet(name=name)) def Condition(self, name=None): """ Loop's condition, executed within WhileNet context """ assert isinstance(NetBuilder.current(), _RunWhileNet), \ "Use of Condition outside of WhileNet" return _RunWhileCondition(name=name) def task_init(self): """ Defines operations that will be executed once at task startup. Useful when implementing processors, that don't have access to the Task top-level structure. This setup will be run only once, even if multiple instances of the task will run in parallel. For instance-local initialization, use `task_instance_init` instead. Example: def my_processor(rec): with ops.task_init(): one = ops.Const(1) two = ops.Const(1) return Tuple( ops.Add(rec[0](), zero), ops.Add(rec[1](), two)) """ setup = _SetupBuilder(_SetupBuilder.INIT) self.net().add_attribute(Task.TASK_SETUP, setup) return setup def task_exit(self): """ Define operations to be executed once at task shutdown. Useful when implementing processors, that don't have access to the Task top-level structure. This shutdown will be run only once, after all concurrent instances of the task have already finished. For instance-local shutdown, use `task_instance_exit` instead. Example: def read_queue(queue): with ops.task_exit(): queue.close(ops.net()) return queue.read(ops.net()) """ setup = _SetupBuilder(_SetupBuilder.EXIT) self.net().add_attribute(Task.TASK_SETUP, setup) return setup def task_instance_init(self): """ Defines operations that will be executed once at startup of each instance of a task. This can be seen as "thread_local" initialization. It is guaranteed to run only after all `task_init` logic finishes. This setup will be run concurrently for each instance of a task. For global task initialization, use `task_init` instead. """ setup = _SetupBuilder(_SetupBuilder.INIT) self.net().add_attribute(Task.TASK_INSTANCE_SETUP, setup) return setup def task_instance_exit(self): """ Defines operations that will be executed once at shutdown of each instance of a task. This can be seen as "thread_local" finalization. This shutdown will be run concurrently for each instance of a task. For global task shutdown, use `task_exit` instead. """ setup = _SetupBuilder(_SetupBuilder.EXIT) self.net().add_attribute(Task.TASK_INSTANCE_SETUP, setup) return setup def local_init(self): """ Similar to `task_init`, but executes at TaskGroup's startup instead, before any task of the group starts executing. This will run only once on each node, before initialization of any task, so it can be used e.g. to initialize blobs shared across tasks. """ setup = _SetupBuilder(_SetupBuilder.INIT) self.net().add_attribute(TaskGroup.LOCAL_SETUP, setup) return setup def local_exit(self, name=None): """ Similar to `task_exit`, but executes at TaskGroup's exit instead, after all tasks of the group finished execution. This will run only once on each node. """ setup = _SetupBuilder(_SetupBuilder.EXIT, name) self.net().add_attribute(TaskGroup.LOCAL_SETUP, setup) return setup def task_reporter(self, interval_ms=1000, name=None): """ Define operations to be executed at every time interval from task start-up to finish. These operations are guaranteed to execute at least once after all other operations of the task are finished. Example: with ops.task_reporter(interval_ms=10000): ops.LogInfo('10s elapsed') """ return _ReporterBuilder(interval_ms, net=self.net(), name=name) def local_reporter(self, interval_ms=1000, name=None): """ Similar to task_report, but operations defined within this block will run repeatedly for as long as any of the tasks in the current TaskGroup have not finished. """ return _ReporterBuilder(interval_ms, name=name) ops = Operations() class _ReporterBuilder(NetBuilder): def __init__(self, interval_ms, net=None, name=None): NetBuilder.__init__(self, name) self._net = net self.interval_ms = interval_ms def __exit__(self, etype, *args): if etype is None: step = core.to_execution_step(self) step.RunEveryMillis(self.interval_ms) if self._net: self._net.add_attribute(Task.REPORT_STEP, step) else: TaskGroup.current().report_step( step, interval_ms=self.interval_ms) NetBuilder.__exit__(self, etype, *args) class _SetupBuilder(NetBuilder): INIT = 'init' EXIT = 'exit' def __init__(self, type, name=None): NetBuilder.__init__(self, name) self.type = type def setup(self, net): if self.type == _SetupBuilder.INIT: return core.to_execution_step(self) def exit(self, net): if self.type == _SetupBuilder.EXIT: return core.to_execution_step(self) class _RunOnce(NetBuilder): def __init__(self, name=None): NetBuilder.__init__(self, name) def __exit__(self, etype, *args): if etype is None and self._stop_blob is not None: ops.stop() NetBuilder.__exit__(self, etype, *args) class _StopGuard(_RunOnce): def __init__(self, has_stopped_blob=None, name=None): _RunOnce.__init__(self, name) self._stopped = has_stopped_blob self._ran = False def __enter__(self): r = _RunOnce.__enter__(self) self._stopped = ops.Const(True, blob_out=self._stopped) return r def __exit__(self, etype, *args): if etype is None: self._ran = True ops.Const(False, blob_out=self._stopped) _RunOnce.__exit__(self, etype, *args) def has_stopped(self): """ Return a blob that will be set to scalar bool `True` after this net builder ran, iff it was halted early. """ assert self._ran, 'Context not used yet.' return self._stopped class _Loop(NetBuilder): def __init__(self, iters=None, name=None): NetBuilder.__init__(self, name, _stop_blob_required=True) if iters is not None: self._inc = ops.Const(1) self._iter = ops.Const(0) self._num_iters = ( iters if isinstance(iters, core.BlobReference) else ops.Const(iters)) else: self._num_iters = None def iter(self): assert self._num_iters is not None, ( 'This loop does not have a number of iterations.') assert self._iter is not None, ( 'iter() must be called from inside the loop context') return self._iter def __enter__(self): builder = NetBuilder.__enter__(self) if self._num_iters is not None: ops.stop_if(ops.GE([self._iter, self._num_iters])) return builder def __exit__(self, type, *args): if type is None and self._num_iters is not None: self.current_net().Add([self._iter, self._inc], [self._iter]) NetBuilder.__exit__(self, type, *args) class _RunIf(_RunOnce): def __init__(self, cond_blob=None, name=None, _already_ran=None): _RunOnce.__init__(self, name) assert cond_blob or _already_ran self._is_else = cond_blob is None if _already_ran is None: self._else_blob = ops.Not(cond_blob) self._already_ran = ops.Const(False) else: self._already_ran = _already_ran self._else_blob = _already_ran if cond_blob is None else ( ops.Or([_already_ran, ops.Not(cond_blob)])) def __enter__(self): r = _RunOnce.__enter__(self) ops.stop_if(self._else_blob) ops.Const(True, blob_out=self._already_ran) return r def Elif(self, cond, name=None): assert not self._is_else, 'Else not allowed for an Else.' return NetBuilder.current().add(_RunIf( cond, name=name or self.name, _already_ran=self._already_ran)) def Else(self, name=None): assert not self._is_else, 'Elif not allowed for an Else.' return NetBuilder.current().add( _RunIf(name=name or self.name, _already_ran=self._already_ran)) class _RunIfNet(NetBuilder): """ Generates a single net that uses If operator """ def __init__(self, cond_blob, name=None): NetBuilder.__init__(self, name=name, _use_control_ops=True) assert cond_blob, 'Conditional blob is not specified for an If net' self._cond_blob = cond_blob self._then_net = None self._else_net = None def add(self, child): return NetBuilder.add(self, child) def __exit__(self, type, *args): if type is None: _then_nets = self._children self._reset_children() self._then_net = NetBuilder.merge_nets( _then_nets, self._lexical_scope) if not self._then_net: self._then_net = core.Net('empty_then_net') if_net = core.Net(self.name + '/if_net') add_if_op(if_net, self._cond_blob, self._lexical_scope, self._then_net, self._else_net) self._current_net = if_net self._children = [if_net] NetBuilder.__exit__(self, type, *args) class _RunElseNet(NetBuilder): """ Else branch for _RunIfNet builder """ def __init__(self, name=None): NetBuilder.__init__(self, name=name, _use_control_ops=True) parent = NetBuilder.current(required=False) assert parent and len(parent._children) > 0 and \ isinstance(parent._children[-1], _RunIfNet), \ 'Invalid use of Else builder' self._if_builder = parent._children[-1] def __exit__(self, type, *args): if type is None: _else_nets = self._children self._reset_children() self._if_builder._else_net = NetBuilder.merge_nets( _else_nets, self._lexical_scope) if self._if_builder._else_net: if_else_net = core.Net(self.name + '/if_else_net') add_if_op( if_else_net, self._if_builder._cond_blob, self._lexical_scope, self._if_builder._then_net, self._if_builder._else_net) self._if_builder._current_net = if_else_net self._if_builder._children = [if_else_net] NetBuilder.__exit__(self, type, *args) class _RunWhileNet(NetBuilder): """ Generates a single net that uses While operator """ def __init__(self, name=None): NetBuilder.__init__(self, name=name, _use_control_ops=True) self._cond_builder = None def __exit__(self, type, *args): if type is None: assert self._cond_builder, \ 'Condition builder must be specified in While op' _cond_blob = self._cond_builder._cond_blob _cond_net = self._cond_builder._cond_net loop_body = self._children self._reset_children() loop_body_net = NetBuilder.merge_nets( loop_body, self._lexical_scope) if not loop_body_net: loop_body_net = core.Net('empty_loop_body_net') while_net = core.Net(self.name + '/while_net') add_while_op(while_net, _cond_blob, self._lexical_scope, loop_body_net, _cond_net) self._current_net = while_net self._children = [while_net] NetBuilder.__exit__(self, type, *args) class _RunWhileCondition(NetBuilder): """ Computes loop's condition, used in the context of WhileNet. Last operator must have a single scalar boolean output that will be used as a condition value, no other blobs created in the condition net are visible outside of it """ def __init__(self, name=None): NetBuilder.__init__(self, name=name, _use_control_ops=True) parent = NetBuilder.current(required=False) assert parent and isinstance(parent, _RunWhileNet), \ 'Invalid use of loop condition builder' assert not parent._cond_builder, \ 'Multiple loop condition builders specified' assert len(parent._children) == 0, \ 'Condition definition must be specified before the loop\'s body' parent._cond_builder = self self._cond_blob = None self._cond_net = None def __exit__(self, type, *args): if type is None: condition_body = self._children self._reset_children() self._cond_net = NetBuilder.merge_nets( condition_body, self._lexical_scope) assert self._cond_net, 'Invalid loop condition specified' assert len(self._cond_net.Proto().op) > 0, 'Invalid condition net' last_op = self._cond_net.Proto().op[-1] assert len(last_op.output) == 1, 'Invalid condition net' self._cond_blob = core.BlobReference(name=last_op.output[0], net=None) self._current_net = self._cond_net self._children = [self._cond_net] NetBuilder.__exit__(self, type, *args)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/net_builder.py
0.734881
0.152852
net_builder.py
pypi
import numpy as np from matplotlib import cm, pyplot def ChannelFirst(arr): """Convert a HWC array to CHW.""" ndim = arr.ndim return arr.swapaxes(ndim - 1, ndim - 2).swapaxes(ndim - 2, ndim - 3) def ChannelLast(arr): """Convert a CHW array to HWC.""" ndim = arr.ndim return arr.swapaxes(ndim - 3, ndim - 2).swapaxes(ndim - 2, ndim - 1) class PatchVisualizer(object): """PatchVisualizer visualizes patches. """ def __init__(self, gap=1): self.gap = gap def ShowSingle(self, patch, cmap=None): """Visualizes one single patch. The input patch could be a vector (in which case we try to infer the shape of the patch), a 2-D matrix, or a 3-D matrix whose 3rd dimension has 3 channels. """ if len(patch.shape) == 1: patch = patch.reshape(self.get_patch_shape(patch)) elif len(patch.shape) > 2 and patch.shape[2] != 3: raise ValueError("The input patch shape isn't correct.") # determine color if len(patch.shape) == 2 and cmap is None: cmap = cm.gray pyplot.imshow(patch, cmap=cmap) return patch def ShowMultiple(self, patches, ncols=None, cmap=None, bg_func=np.mean): """Visualize multiple patches. In the passed in patches matrix, each row is a patch, in the shape of either n*n, n*n*1 or n*n*3, either in a flattened format (so patches would be a 2-D array), or a multi-dimensional tensor. We will try our best to figure out automatically the patch size. """ num_patches = patches.shape[0] if ncols is None: ncols = int(np.ceil(np.sqrt(num_patches))) nrows = int(np.ceil(num_patches / float(ncols))) if len(patches.shape) == 2: patches = patches.reshape( (patches.shape[0], ) + self.get_patch_shape(patches[0]) ) patch_size_expand = np.array(patches.shape[1:3]) + self.gap image_size = patch_size_expand * np.array([nrows, ncols]) - self.gap if len(patches.shape) == 4: if patches.shape[3] == 1: # gray patches patches = patches.reshape(patches.shape[:-1]) image_shape = tuple(image_size) if cmap is None: cmap = cm.gray elif patches.shape[3] == 3: # color patches image_shape = tuple(image_size) + (3, ) else: raise ValueError("The input patch shape isn't expected.") else: image_shape = tuple(image_size) if cmap is None: cmap = cm.gray image = np.ones(image_shape) * bg_func(patches) for pid in range(num_patches): row = pid // ncols * patch_size_expand[0] col = pid % ncols * patch_size_expand[1] image[row:row+patches.shape[1], col:col+patches.shape[2]] = \ patches[pid] pyplot.imshow(image, cmap=cmap, interpolation='nearest') pyplot.axis('off') return image def ShowImages(self, patches, *args, **kwargs): """Similar to ShowMultiple, but always normalize the values between 0 and 1 for better visualization of image-type data. """ patches = patches - np.min(patches) patches /= np.max(patches) + np.finfo(np.float64).eps return self.ShowMultiple(patches, *args, **kwargs) def ShowChannels(self, patch, cmap=None, bg_func=np.mean): """ This function shows the channels of a patch. The incoming patch should have shape [w, h, num_channels], and each channel will be visualized as a separate gray patch. """ if len(patch.shape) != 3: raise ValueError("The input patch shape isn't correct.") patch_reordered = np.swapaxes(patch.T, 1, 2) return self.ShowMultiple(patch_reordered, cmap=cmap, bg_func=bg_func) def get_patch_shape(self, patch): """Gets the shape of a single patch. Basically it tries to interpret the patch as a square, and also check if it is in color (3 channels) """ edgeLen = np.sqrt(patch.size) if edgeLen != np.floor(edgeLen): # we are given color patches edgeLen = np.sqrt(patch.size / 3.) if edgeLen != np.floor(edgeLen): raise ValueError("I can't figure out the patch shape.") return (edgeLen, edgeLen, 3) else: edgeLen = int(edgeLen) return (edgeLen, edgeLen) _default_visualizer = PatchVisualizer() """Utility functions that directly point to functions in the default visualizer. These functions don't return anything, so you won't see annoying printouts of the visualized images. If you want to save the images for example, you should explicitly instantiate a patch visualizer, and call those functions. """ class NHWC(object): @staticmethod def ShowSingle(*args, **kwargs): _default_visualizer.ShowSingle(*args, **kwargs) @staticmethod def ShowMultiple(*args, **kwargs): _default_visualizer.ShowMultiple(*args, **kwargs) @staticmethod def ShowImages(*args, **kwargs): _default_visualizer.ShowImages(*args, **kwargs) @staticmethod def ShowChannels(*args, **kwargs): _default_visualizer.ShowChannels(*args, **kwargs) class NCHW(object): @staticmethod def ShowSingle(patch, *args, **kwargs): _default_visualizer.ShowSingle(ChannelLast(patch), *args, **kwargs) @staticmethod def ShowMultiple(patch, *args, **kwargs): _default_visualizer.ShowMultiple(ChannelLast(patch), *args, **kwargs) @staticmethod def ShowImages(patch, *args, **kwargs): _default_visualizer.ShowImages(ChannelLast(patch), *args, **kwargs) @staticmethod def ShowChannels(patch, *args, **kwargs): _default_visualizer.ShowChannels(ChannelLast(patch), *args, **kwargs)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/visualize.py
0.844473
0.6973
visualize.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import numpy as np from caffe2.python import core, workspace, net_drawer from caffe2.proto import caffe2_pb2 def getGradientForOp(op): return core.GradientRegistry.GetGradientForOp( op, [s + '_grad' for s in op.output]) def _get_grad_blob(grad_map, input_to_check): grad_blob = grad_map[input_to_check] if isinstance(grad_blob, core.BlobReference): return workspace.blobs[grad_blob] # If grad_blob is not a single blob, it should be a gradient slice. # To make it comparable with the estimiated gradient which is dense, # we need to first convert grad_blob to dense gradient. assert isinstance(grad_blob, core.GradientSlice) dense_grad = 'tmp_dense_grad' sparse_to_dense_op = core.CreateOperator( 'SparseToDense', [grad_blob.indices, grad_blob.values, input_to_check], dense_grad, ) workspace.RunOperatorOnce(sparse_to_dense_op) return workspace.blobs[dense_grad] def _get_grad(net, outputs, outputs_with_grad, input_values, inputs_with_grads): grad_net = net.Clone(net.Name() + "_copy") grad_map = grad_net.AddGradientOperators(outputs_with_grad) for name, value in (input_values or {}).items(): workspace.blobs[name] = value for input_to_check in inputs_with_grads: assert input_to_check in grad_map, ( '{} has no gradient, cannot check net gradient.'.format( input_to_check)) assert str(input_to_check) in workspace.blobs workspace.RunNetOnce(grad_net) forward_results = [(output, workspace.blobs[output]) for output in outputs] grads = {input_to_check: _get_grad_blob(grad_map, input_to_check) for input_to_check in inputs_with_grads} return forward_results, grads, grad_net def _assert_close(value1, value2, threshold, err_msg=''): np.testing.assert_allclose( value1, value2, atol=threshold, rtol=threshold, err_msg=err_msg, ) delta = np.abs(value1 - value2).flatten() return np.mean(delta), max(delta) class NetGradientChecker(object): @staticmethod def CompareNets(nets, outputs, outputs_with_grad_ids, inputs_with_grads, input_values=None, threshold=0.0000001, print_net_images=False): def _get_output_with_grad_names(net_outputs): return [net_outputs[i] for i in outputs_with_grad_ids] if print_net_images: for i, net in enumerate(nets): png = net_drawer.GetPydotGraph(net).create_png() with open("caffe2_net_forward_" + str(i) + net.Name() + ".png", 'wb') \ as f: f.write(png) results = [ _get_grad(net, net_outputs, _get_output_with_grad_names(net_outputs), input_values, inputs_with_grads) for net, net_outputs in zip(nets, outputs) ] if print_net_images: _, _, backward_nets = zip(*results) for i, net in enumerate(backward_nets): png = net_drawer.GetPydotGraph(net).create_png() with open("caffe2_net_" + str(i) + net.Name() + ".png", 'wb') \ as f: f.write(png) first_net_results, first_net_grads, _ = results[0] for net_results, net_grads, _ in results[1:]: assert len(net_results) == len(first_net_results) for idx, ((blob1, blob_value1), (blob2, blob_value2)) in enumerate( zip(first_net_results, net_results)): _assert_close( blob_value1, blob_value2, threshold, err_msg="Different forward pass results for output id {}. " "Corresponding output blobs: {} and {}".format( idx, blob1, blob2)) assert net_grads.keys() == first_net_grads.keys() for blob, blob_grad_value in net_grads.items(): _assert_close( first_net_grads[blob], blob_grad_value, threshold, err_msg="Different gradients for input {}".format(blob)) @staticmethod def Check(net, outputs_with_grad, input_values, input_to_check, step_size=0.0001, threshold=0.05, print_net=True): net_results, net_grads, full_net = _get_grad( net, [], outputs_with_grad, input_values, [input_to_check]) analytic_grad = net_grads[input_to_check] def GetLoss(new_value): workspace.blobs[input_to_check] = new_value workspace.RunNetOnce(full_net) return sum([ workspace.blobs[output] for output in outputs_with_grad ]).sum() def GetValue(dim, delta): input_value = input_values[input_to_check].copy() input_value.flat[dim] += delta return input_value grad_estimate = np.zeros_like(input_values[input_to_check]) for dim in range(input_values[input_to_check].size): pos_loss = GetLoss(GetValue(dim, step_size)) neg_loss = GetLoss(GetValue(dim, -step_size)) grad_estimate.flat[dim] = (pos_loss - neg_loss) / step_size / 2 err_msg = "Error in gradient check for net_copy {}".format( net.Name()) if print_net: err_msg += ": {}".format(net.Proto()) return _assert_close(analytic_grad, grad_estimate, threshold, err_msg) class GradientChecker: """A gradient checker in Python. This is not the most efficient way to check gradients, as the Python interface will involve a lot of copies back and forth operations. Use at your own risk. """ def __init__( self, stepsize, threshold, device_option=None, workspace_name="gradient_check", input_device_options=None, ): self._stepsize = stepsize self._threshold = threshold self._device_option = device_option or caffe2_pb2.DeviceOption() self._workspace_name = workspace_name if input_device_options is None: self._input_device_options = {} else: self._input_device_options = input_device_options def GetLossAndGrad( self, op, grad_ops, inputs, input_names, input_to_check, grad_name, outputs_with_grads ): for i in range(len(inputs)): workspace.FeedBlob(input_names[i], inputs[i], self._input_device_options.get( input_names[i], self._device_option)) x = inputs[input_to_check] # Run. workspace.RunOperatorOnce(op) loss = 0. # Get Loss and feed in the gradients, run gradient ops. for idx in outputs_with_grads: name = op.output[idx] arr = workspace.FetchBlob(name) loss += (arr**2).sum() workspace.FeedBlob(name + '_grad', arr, self._device_option) loss /= 2. # Run gradient ops workspace.RunOperatorsOnce(grad_ops) # Get gradients if isinstance(grad_name, core.GradientSlice): workspace.FeedBlob('zeros', np.zeros_like(x, dtype=np.float32)) workspace.FeedBlob('ones', np.ones(1, dtype=np.float32)) gv_cpu_op = core.CreateOperator( 'EnsureCPUOutput', grad_name.values, grad_name.values + '_cpu', device_option=self._device_option ) gi_cpu_op = core.CreateOperator( 'EnsureCPUOutput', grad_name.indices, grad_name.indices + '_cpu', device_option=self._device_option ) sparse_to_dense_op = core.CreateOperator( 'ScatterWeightedSum', [ 'zeros', 'ones', grad_name.indices + '_cpu', grad_name.values + '_cpu', 'ones' ], 'zeros', ) workspace.RunOperatorOnce(gv_cpu_op) workspace.RunOperatorOnce(gi_cpu_op) workspace.RunOperatorOnce(sparse_to_dense_op) grad = workspace.FetchBlob('zeros') else: grad = workspace.FetchBlob(grad_name) return loss, grad def CheckSimple( self, op, inputs, input_to_check, outputs_with_grads, grad_ops=None, input_device_options=None ): """Checks the operator in a very simple fashion by stacking a sum of squares on the top. Inputs: op: the operator to be checked. inputs: the input data in numpy arrays. input_to_check: an index specifying which input blob we should check. outputs_with_grads: indices specifying which output blobs will we need to check gradients with. For these outputs, we will collect a squared sum and also feed in their gradients. grad_operator: the gradient operator. If not given, we will get the gradient operator from the gradient registry. input_device_options: an optional mapping from input names to DeviceOptions (to override the default DeviceOption) Outputs: boolean: True if it passes, False if it does not pass. """ # Entering the checker workspace old_ws_name = workspace.CurrentWorkspace() if self._workspace_name != old_ws_name: workspace.SwitchWorkspace(self._workspace_name, True) op.device_option.CopyFrom(self._device_option) if grad_ops is None: # TODO(jiayq): use the gradient registration instead of the old # hack. grad_ops, g_input = getGradientForOp(op) _input_device_options = input_device_options or \ core.InferOpBlobDevicesAsDict(op)[0] # First, feed in the input. for i, arr in enumerate(inputs): workspace.FeedBlob( op.input[i], arr, _input_device_options.get( op.input[i], self._device_option)) # Get the loss and gradient for the original. grad_name = g_input[input_to_check] loss, grad = self.GetLossAndGrad( op, grad_ops, inputs, op.input, input_to_check, grad_name, outputs_with_grads ) grad_estimate = np.zeros_like(inputs[input_to_check]) if grad_estimate.shape != grad.shape: raise Exception( "Mismatched gradient shapes: estimated ({}), grad ({})".format( grad_estimate.shape, grad.shape)) dims_to_check = inputs[input_to_check].size for current_dim in range(dims_to_check): # Positive gradient inputs[input_to_check].flat[current_dim] += self._stepsize pos_loss, _ = self.GetLossAndGrad( op, grad_ops, inputs, op.input, input_to_check, grad_name, outputs_with_grads ) # Negative gradient inputs[input_to_check].flat[current_dim] -= self._stepsize * 2 neg_loss, _ = self.GetLossAndGrad( op, grad_ops, inputs, op.input, input_to_check, grad_name, outputs_with_grads ) # Recover the value inputs[input_to_check].flat[current_dim] += self._stepsize grad_estimate.flat[current_dim] = ( pos_loss - neg_loss) / self._stepsize / 2 # Now, check correctness fail_mat = ~np.isclose( grad, grad_estimate, atol=self._threshold, rtol=self._threshold) if np.any(fail_mat): idx = np.flatnonzero(fail_mat) print('Failed. [idx, grad, grad_estimate] are:') print(np.vstack([idx, grad.flat[idx], grad_estimate.flat[idx]]).T) ret = False else: ret = True # After finishing, cleaning up things. if self._workspace_name != old_ws_name: # We reset the workspace to make sure everything intermediate is # cleaned up. Note that there is no need to delete a workspace - # when empty it takes a very limited amount of memory. workspace.ResetWorkspace() workspace.SwitchWorkspace(old_ws_name) return ret, grad, grad_estimate
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/gradient_checker.py
0.763087
0.499451
gradient_checker.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, scope, workspace from caffe2.python.helpers.db_input import db_input from caffe2.python.modeling import parameter_info from caffe2.python.modeling.parameter_sharing import ( parameter_sharing_context, ) from caffe2.python.optimizer_context import ( OptimizerContext, DEFAULT_OPTIM, ) from caffe2.python.regularizer_context import RegularizerContext from future.utils import viewitems, viewkeys from itertools import chain import logging import six # _known_working_ops are operators that do not need special care. _known_working_ops = [ "Accuracy", "Adam", "Add", "Adagrad", "SparseAdagrad", "Adadelta", "SparseAdadelta", "AveragedLoss", "Cast", "Checkpoint", "ConstantFill", "Copy", "CopyGPUToCPU", "CopyCPUToGPU", "DequeueBlobs", "EnsureCPUOutput", "ExpandDims", "Flatten", "FlattenToVec", "LabelCrossEntropy", "LearningRate", "MakeTwoClass", "MatMul", "NCCLAllreduce", "NHWC2NCHW", "PackSegments", "Print", "PRelu", "ReduceFrontSum", "Scale", "ScatterWeightedSum", "Sigmoid", "SortedSegmentSum", "Snapshot", # Note: snapshot is deprecated, use Checkpoint "Softmax", "SoftmaxWithLoss", "SquaredL2Distance", "Squeeze", "StopGradient", "Summarize", "Tanh", "Transpose", "UnpackSegments", "WeightedSum", "YellowFin" ] class ModelHelper(object): """A helper model so we can manange models more easily. It contains net def and parameter storages. You can add an Operator yourself, e.g. model = model_helper.ModelHelper(name="train_net") # init your weight and bias as w and b w = model.param_init_net.XavierFill(...) b = model.param_init_net.ConstantFill(...) fc1 = model.FC([input, w, b], output, **kwargs) or you can use helper functions in brew module without manually defining parameter initializations and operators. model = model_helper.ModelHelper(name="train_net") fc1 = brew.fc(model, input, output, dim_in, dim_out, **kwargs) """ def __init__(self, name=None, init_params=True, allow_not_known_ops=True, skip_sparse_optim=False, param_model=None, arg_scope=None): self.name = name or "model" self.net = core.Net(self.name) if param_model is not None: self.param_init_net = param_model.param_init_net self.param_to_grad = param_model.param_to_grad self.params = param_model.params self._parameters_info = param_model._parameters_info self._computed_params = param_model._computed_params else: self.param_init_net = core.Net(self.name + '_init') self.param_to_grad = {} self.params = [] self._parameters_info = {} self._computed_params = [] self._param_info_deprecated = [] self._devices = [] self.gradient_ops_added = False self.init_params = init_params self.allow_not_known_ops = allow_not_known_ops self.skip_sparse_optim = skip_sparse_optim self.weights = [] self.biases = [] self._arg_scope = { 'order': "NCHW", 'use_cudnn': True, 'cudnn_exhaustive_search': False, } if arg_scope is not None: # Please notice value as None is not acceptable. We are not checking it # here because we already have check in MakeArgument. self._arg_scope.update(arg_scope) @property def arg_scope(self): return self._arg_scope def get_name(self): return self.name def _infer_param_shape(self, param): for op in self.param_init_net.Proto().op: if str(param) in op.output: for arg in op.arg: if arg.name == "shape": return list(arg.ints) return None def _update_param_info_deprecated(self): assert len(self._param_info_deprecated) <= len(self.params) for param in self.params[len(self._param_info_deprecated):]: if not isinstance(param, core.BlobReference): raise ValueError( "Param %s must be a BlobReference!" % str(param)) self._param_info_deprecated.append(parameter_info.ParameterInfo( param_id=len(self._param_info_deprecated), param=param, shape=self._infer_param_shape(param))) for info in self._param_info_deprecated: info.grad = self.param_to_grad.get(info.name) def _normalize_tags(self, tags): tags = tags or [] return set(tags) if isinstance(tags, list) else set([tags]) def create_param(self, param_name, shape, initializer, tags=None): """ Creates parameter with a given name and initializer. If param_name is instance of BlobRefernce - then this blob will be used to store parameter (no any logic will affect it's location). If param_name is instance of a string type, then the final blob will be created in the CurrentNameScope with the respect of all parameter sharing logic, i.e. 'resolved_name_scope/param_name'. Parameter sharing logic is going to override CurrentNameScope according to the rules that are specified through ParameterSharing contexts, all ParameterSharing contexts are applied recursively until there are no extra overrides present, where on each step the best match will be applied first. The following examples should clarify the way ParameterSharing logic works: As an example if this function is called with parameter 'w': a. Call from some scope 'global_scope' with no Parameter sharing: 'global_scope/w' b. Call from scope 'scope_b', with override {'scope_b': 'scope_a'}: 'scope_a/w' c. Call from scope 'scope_a', with override {'scope_a': ''}: 'scope_a/w' d. Call from scope 'scope_b/shared', with overrides {'scope_b/shared': 'scope_b', 'scope_b': 'scope_a'}: 'scope_a/w' d. Call from scope 'scope_b/unshared', with overrides {'scope_b/shared': 'scope_b', 'scope_b': 'scope_a'}: 'scope_a/unshared/w' """ # ParameterSharing works only for case when param_name is instance of # a string type. If param_name is a BlobReference - no attempt for # ParameterSharing will be applied. if isinstance(param_name, core.BlobReference): param_name = str(param_name) elif isinstance(param_name, six.string_types): # Parameter name will be equal to current Namescope that got # resolved with the respect of parameter sharing of the scopes. param_name = parameter_sharing_context.get_parameter_name( param_name) else: raise TypeError("Unsupported type for param_name") if param_name in self._parameters_info: assert self._parameters_info[param_name].shape == shape return self._parameters_info[param_name].blob param_info = initializer.create_param( param_name=core.BlobReference(param_name), init_net=self.param_init_net, shape=shape, ) optim_context = OptimizerContext.current() for tag in self._normalize_tags(tags): if optim_context.has_optimizer(tag): # param_info will check optimizer has not been set param_info.optimizer = optim_context.get_optimizer(tag) if not param_info.optimizer and optim_context.has_optimizer(DEFAULT_OPTIM): param_info.optimizer = optim_context.get_optimizer(DEFAULT_OPTIM) reg_context = RegularizerContext.current() param_info.regularizer = reg_context self._parameters_info[param_name] = param_info # Add param to legacy structs as well, so all other functions for # parameters are still working. self.AddParameter(param_info.blob, tags) return param_info.blob def get_param_info(self, param): assert isinstance(param, core.BlobReference), \ "Param {} is not a BlobReference".format(param) return self._parameters_info.get(param, None) # This method is deprecated, use create_param method which # also does parameter initialization when needed def add_param_DEPRECATED(self, param, key=None, shape=None, length=None): logging.warning("add_param method is DEPRECATED") self._update_param_info_deprecated() self.AddParameter(param) if key is not None and self.net.input_record() is not None: idx = self.net.input_record().field_blobs().index(key) key = self.net.input_record().field_names()[idx] shape = shape if shape is not None else self._infer_param_shape(param) if not isinstance(param, core.BlobReference): raise ValueError("Param %s must be a BlobReference!" % str(param)) self._param_info_deprecated.append(parameter_info.ParameterInfo( param_id=len(self._param_info_deprecated), param=param, shape=shape, key=key, length=length, )) return self._param_info_deprecated[-1] def AddParameter(self, param, tags=None): assert isinstance(param, core.BlobReference) tags = self._normalize_tags(tags) if parameter_info.ParameterTags.COMPUTED_PARAM in tags: self._computed_params.append(param) else: self.params.append(param) if parameter_info.ParameterTags.WEIGHT in tags: self.weights.append(param) if parameter_info.ParameterTags.BIAS in tags: self.biases.append(param) @staticmethod def _NormalizeNamescope(namescope): if namescope is None: return scope.CurrentNameScope() elif namescope == '' or namescope.endswith(scope._NAMESCOPE_SEPARATOR): return namescope else: return namescope + scope._NAMESCOPE_SEPARATOR def GetParams(self, namescope=None, top_scope=False): ''' Returns the params in current namescope ''' namescope = ModelHelper._NormalizeNamescope(namescope) if namescope == '': return self.params[:] else: return [p for p in self.params if p.GetNameScope().startswith(namescope)] def Proto(self): return self.net.Proto() def InitProto(self): return self.param_init_net.Proto() def RunAllOnGPU(self, *args, **kwargs): self.param_init_net.RunAllOnGPU(*args, **kwargs) self.net.RunAllOnGPU(*args, **kwargs) def CreateDB(self, blob_out, db, db_type, **kwargs): dbreader = self.param_init_net.CreateDB( [], blob_out, db=db, db_type=db_type, **kwargs) return dbreader def AddGradientOperators(self, *args, **kwargs): if self.gradient_ops_added: raise RuntimeError("You cannot run AddGradientOperators twice.") self.Validate() self.gradient_ops_added = True self.grad_map = self.net.AddGradientOperators(*args, **kwargs) self.param_to_grad = self.get_param_to_grad(self.params) # Populate ParameterInfo for all parameters if missing # and add gradient blob information. So optimizers can use it for param, grad in self.param_to_grad.items(): param_info = self.get_param_info(param) if param_info: param_info.grad = grad else: self._parameters_info[param] = parameter_info.ParameterInfo( param_id=None, param=param, grad=grad, ) return self.grad_map def get_param_to_grad(self, params): ''' Given a list of parameters returns a dict from a parameter to a corresponding gradient ''' param_to_grad = {} if not self.gradient_ops_added: raise RuntimeError("You need to run AddGradientOperators first.") # We need to use empty namescope when creating the gradients # to prevent duplicating the namescope prefix for gradient blobs. for p in params: if str(p) in self.grad_map: param_to_grad[p] = self.grad_map[str(p)] return param_to_grad def GetOptimizationParamInfo(self, params=None): ''' Returns a map for param => grad. If params is not specified, all parameters will be considered. ''' if not self.gradient_ops_added: raise RuntimeError("Need to call AddGradientOperators first") param_to_grad = self.param_to_grad if params: param_to_grad = self.get_param_to_grad(params) return [ self.get_param_info(param) for param, grad in viewitems(param_to_grad) if ( not self.skip_sparse_optim or not isinstance(grad, core.GradientSlice) ) ] def _Validate(self): ''' Check for duplicate params ''' params_list = [str(p) for p in self.params] params_set = set(params_list) dupes = [] if len(params_set) != len(params_list): params_list = sorted(params_list) for j, p in enumerate(params_list): if j > 0 and params_list[j - 1] == p: if p not in dupes: dupes.append(p) return dupes def Validate(self): dupes = self._Validate() assert dupes == [], "Duplicate params: {}".format(dupes) def GetComputedParams(self, namescope=None): ''' Returns the computed params in current namescope. 'Computed params' are such parameters that are not optimized via gradient descent but are directly computed from data, such as the running mean and variance of Spatial Batch Normalization. ''' namescope = ModelHelper._NormalizeNamescope(namescope) if namescope == '': return self._computed_params[:] else: return [p for p in self._computed_params if p.GetNameScope().startswith(namescope)] def GetAllParams(self, namescope=None): return self.GetParams(namescope) + self.GetComputedParams(namescope) def TensorProtosDBInput( self, unused_blob_in, blob_out, batch_size, db, db_type, **kwargs ): """TensorProtosDBInput.""" assert len(unused_blob_in) == 0, \ """You cannot pass reader to model_helper.TensorProtosDBInput. Use model.net.TensorProtosDBInput instead to create the op.""" return db_input( self, blob_out, batch_size, db, db_type, **kwargs) def GetDevices(self): assert len(self._devices) > 0, \ "Use data_parallel_model to run model on multiple GPUs." return self._devices def __getattr__(self, op_type): """Catch-all for all other operators, mostly those without params.""" if op_type.startswith('__'): raise AttributeError(op_type) if not core.IsOperator(op_type): raise AttributeError( 'Method ' + op_type + ' is not a registered operator.' + ' Did you mean: [' + ','.join(workspace.C.nearby_opnames(op_type)) + ']' ) if op_type not in _known_working_ops: if not self.allow_not_known_ops: raise AttributeError( "Operator {} is not known to be safe".format(op_type)) logging.warning("You are creating an op that the ModelHelper " "does not recognize: {}.".format(op_type)) return self.net.__getattr__(op_type) def __dir__(self): return sorted(set(chain( dir(type(self)), viewkeys(self.__dict__), _known_working_ops ))) def GetCompleteNet(self): r""" Return param_init_net + net Net. Returns: 'core.Net' containing param_init_net and net """ new_net = self.param_init_net.Clone( self.name + "_complete_net", keep_schema=True) # add init net info to debug info for op in new_net.Proto().op: op.debug_info = op.debug_info + "/param_init_net" new_net.AppendNet(self.net) # keep the execution optimization if self.net.Proto().HasField("type"): new_net.Proto().type = self.net.Proto().type return new_net def ConstructInitTrainNetfromNet(self, net): r""" construct init net and train net from complete_net Inputs: net: 'core.Net' containing param_init_net and train net """ param_op_mask = [] train_op_mask = [] for idx, op in enumerate(net.Proto().op): if op.debug_info.endswith("/param_init_net"): param_op_mask.append(idx) else: train_op_mask.append(idx) self.param_init_net = net.Clone( net.Name() + "/generated_param_init_net", keep_schema=True, op_id_mask=param_op_mask, update_external_list=True, ) self.net = net.Clone( net.Name() + "/generated_net", keep_schema=True, op_id_mask=train_op_mask, update_external_list=True, ) def ExtractPredictorNet( net_proto, input_blobs, output_blobs, device=None, renames=None, disabled_inputs=None, ): ''' Takes a model net for training and returns a net which can be used for prediction. For example, all gradient operators and input operators are removed. @param net_proto protobuf of the net you want to process (net.Proto()) @param input_blobs list/set of blob names that are the inputs of predictor @param output_blobs list/set of blob names that are outputs of predictor @param device optional device option that is assigned @param renames dictionary of blob name to a new name (optional) @param disabled_inputs optional set of blobs that are 'switched off'. This will cause branches with those blobs as inputs to be removed ''' predict_net = core.Net(net_proto.name + "_predict") predict_proto = predict_net.Proto() orig_external_inputs = set(net_proto.external_input) orig_external_outputs = set(net_proto.external_output) input_blobs = {str(b) for b in input_blobs} known_blobs = set(orig_external_inputs).union(input_blobs) output_blobs = {str(b) for b in output_blobs} external_inputs = set(input_blobs) external_outputs = set(output_blobs) if renames is None: renames = {} if disabled_inputs is not None: known_blobs = known_blobs - set(disabled_inputs) ops = list(net_proto.op) # Find the range of ops that we should include try: first_op_with_input = min( [ j for j in range(len(ops)) if input_blobs.intersection(ops[j].input) and ops[j].type != 'StopGradient' ] ) except ValueError: raise Exception("No ops with input={}".format(input_blobs)) try: last_op_with_output = max( [ j for j in range(len(ops)) if output_blobs.intersection(ops[j].output) ] ) except ValueError: raise Exception("No ops with output={}".format(output_blobs)) def validate_op(op): # Check that the op does not have is_test = 0 set. This is a common # pitfall with SpatialBN op, at lest. for arg in op.arg: if arg.name == "is_test" and arg.i == 0: raise Exception( "An operator had is_test=0, did you try to extract a " + "predictor from a train model (instead of test model)?" + " Op was: {}".format(str(op)) ) def rename_list(proto_list): # proto lists don't support assignments new_list = proto_list[:] for j, b in enumerate(new_list): if b in renames: new_list[j] = renames[b] del proto_list[:] proto_list.extend(new_list) # Iterate through the ops and only include those whose inputs # we can satisfy. for op in ops[first_op_with_input:(last_op_with_output + 1)]: if known_blobs.issuperset(op.input): # Special handling for recurrent nets # TODO: when standard argument type for "nets" is introduced, # this can be more general if op.type == 'RecurrentNetwork': for arg in op.arg: if arg.name == 'backward_step_net': arg.ClearField(str('n')) elif arg.name == 'step_net': for step_op in arg.n.op: rename_list(step_op.input) rename_list(step_op.output) if device is not None: step_op.device_option.device_type = device.device_type step_op.device_option.device_id = device.device_id rename_list(arg.n.external_input) rename_list(arg.n.external_output) # Add additional external inputs external_inputs.update( set(arg.n.external_input).intersection( orig_external_inputs ) ) if device is not None: op.device_option.device_type = device.device_type op.device_option.device_id = device.device_id validate_op(op) predict_proto.op.extend([op]) known_blobs.update(op.output) external_inputs.update( set(op.input).intersection(orig_external_inputs) ) external_outputs.update( set(op.output).intersection(orig_external_outputs) ) else: logging.debug( "Op {} had unknown inputs: {}".format( op.type, set(op.input).difference(known_blobs) ) ) # Predictor net's external inputs and outputs include only those # that are part of this net. predict_proto.external_input.extend(external_inputs) predict_proto.external_output.extend(external_outputs) rename_list(predict_proto.external_input) rename_list(predict_proto.external_output) renamed_input_blobs = [] for b in input_blobs: if b in renames: renamed_input_blobs.append(renames[b]) else: renamed_input_blobs.append(b) for op in predict_proto.op: rename_list(op.input) rename_list(op.output) return predict_net, list( set(predict_proto.external_input) - set(renamed_input_blobs) )
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/model_helper.py
0.759939
0.191328
model_helper.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core def get_external_blob_names(net, lexical_scope): """ Returns a set of blobs a given net depends on and a set of output blobs that are written by the net Inputs: net - net to return input/output blobs for; lexical_scope - all external blob names visible to the net """ # Use the blobs that are actually read/written to as external inputs/outputs net_proto = net.Proto() net_ssa, _ = core.get_ssa(net_proto) input_names = core.get_undefined_blobs(net_ssa) for input_name in input_names: assert str(input_name) in lexical_scope, \ "Input blob " + input_name + " is undefined" output_names = set() for op in net_proto.op: for output in op.output: if output in lexical_scope: output_names.add(output) return input_names, output_names def add_if_op(if_net, cond_blob, lexical_scope, then_net, else_net=None): """ A helper function to add an If op to the net. Automatically determines whether blobs in the then/else subnets are external (from the outer workspace) or local (visible only inside subnet's workspace) based on lexical scope - set of all outer blob names visible to the 'If' operator. All the blobs in then/else subnets with names matching a name in lexical scope and all the blobs that are first used as the operators' inputs are considered outer blobs - these blobs must exist in the outer workspace, then/else subnets can read their values and new values written into these blobs will be visible outside of the 'If' operator. All other blobs are local - exist only within inner workspaces for then/else. Inputs: if_net - net to add an If op to; cond_blob - scalar bool blob reference, used as If condition; lexical_scope - a set of outer blob names visible to then/else branches; then_net/else_net - nets (core.Net) for then/else branches """ then_input_blob_names, then_output_blob_names = get_external_blob_names( then_net, lexical_scope) else_input_blob_names = set() else_output_blob_names = set() if else_net: else_input_blob_names, else_output_blob_names = get_external_blob_names( else_net, lexical_scope) input_blob_names = then_input_blob_names | else_input_blob_names output_blob_names = then_output_blob_names | else_output_blob_names if_inputs = [cond_blob] if_inputs += [core.BlobReference(name=b, net=None) for b in input_blob_names] if_outputs = [core.BlobReference(name=b, net=None) for b in output_blob_names] do_then_net = core.Net('do_then_net') then_input_blobs = \ [core.BlobReference(name=b, net=None) for b in then_input_blob_names] then_output_blobs = \ [core.BlobReference(name=b, net=None) for b in then_output_blob_names] then_input_output_names_ordered = [ str(b) for b in (then_input_blobs + then_output_blobs)] then_outer_blob_names = list(then_input_blob_names | then_output_blob_names) then_outer_blob_names_idx = [ then_input_output_names_ordered.index(b) for b in then_outer_blob_names] # make sure to use net's name to have unique blob name across multiple subnets do_then_workspace_blob = if_net.NextScopedBlob(if_net.Name() + '/workspace_if_then') then_input_blobs.append(do_then_workspace_blob) then_output_blobs.append(do_then_workspace_blob) # make sure that added workspace pointer blobs are in if inputs/outputs if_inputs.append(do_then_workspace_blob) if_outputs.append(do_then_workspace_blob) do_then_net.Do( then_input_blobs, then_output_blobs, net=then_net.Proto(), inner_blobs=then_outer_blob_names, outer_blobs_idx=then_outer_blob_names_idx) do_then_net.AddExternalOutput(*then_output_blobs) if_args = {} if_args['then_net'] = do_then_net.Proto() do_else_workspace_blob = None if else_net: do_else_net = core.Net('do_else_net') else_input_blobs = \ [core.BlobReference(name=b, net=None) for b in else_input_blob_names] else_output_blobs = \ [core.BlobReference(name=b, net=None) for b in else_output_blob_names] else_input_output_names_ordered = [ str(b) for b in (else_input_blobs + else_output_blobs)] else_outer_blob_names = list(else_input_blob_names | else_output_blob_names) else_outer_blob_names_idx = [ else_input_output_names_ordered.index(b) for b in else_outer_blob_names] do_else_workspace_blob = \ if_net.NextScopedBlob(if_net.Name() + '/workspace_if_else') else_input_blobs.append(do_else_workspace_blob) else_output_blobs.append(do_else_workspace_blob) # make sure that added workspace pointer blobs are in if inputs/outputs if_inputs.append(do_else_workspace_blob) if_outputs.append(do_else_workspace_blob) do_else_net.Do( else_input_blobs, else_output_blobs, net=else_net.Proto(), inner_blobs=else_outer_blob_names, outer_blobs_idx=else_outer_blob_names_idx) do_else_net.AddExternalOutput(*else_output_blobs) if_args['else_net'] = do_else_net.Proto() if_net.CreateScope([], [do_then_workspace_blob]) if do_else_workspace_blob: if_net.CreateScope([], [do_else_workspace_blob]) if_net.If(if_inputs, if_outputs, **if_args) if_net.AddExternalOutput(*if_outputs) def add_while_op( while_net, cond_blob, lexical_scope, loop_body_net, condition_body_net=None): """ A helper function to add a While op to the net. Same rules for determining outer and inner blobs as for the 'If' operator apply for the 'While' operator loop and condition subnets. If specified, condition net is executed in a separate workspace before the first and after each iteration, the last operator must have a single scalar boolean output that is written into the condition blob. Inputs: while_net - net to add a While op to; cond_blob - scalar bool blob reference, used as a stop condition; lexical_scope - a set of outer blob names visible to the loop's body; loop_body_net - net to execute on each iteration; condition_body_net - net to compute condition value """ input_blob_names, output_blob_names = get_external_blob_names( loop_body_net, lexical_scope) # Since it's possible that loop is not going to run even once # we have to add loop's external outputs into inputs input_blob_names |= output_blob_names loop_inputs = [core.BlobReference(name=b, net=None) for b in input_blob_names] loop_outputs = [core.BlobReference(name=b, net=None) for b in output_blob_names] while_inputs = [cond_blob] + loop_inputs while_outputs = [] + loop_outputs do_loop_body_net = core.Net('do_loop_body_net') loop_input_output_names_ordered = [ str(b) for b in (loop_inputs + loop_outputs)] loop_body_outer_blob_names = list(input_blob_names | output_blob_names) loop_body_outer_blob_names_idx = [ loop_input_output_names_ordered.index(b) for b in loop_body_outer_blob_names] do_loop_body_workspace_blob = \ while_net.NextScopedBlob(while_net.Name() + '/workspace_loop_body') loop_inputs.append(do_loop_body_workspace_blob) loop_outputs.append(do_loop_body_workspace_blob) # make sure that added workspace pointer blobs are in While inputs/outputs while_inputs.append(do_loop_body_workspace_blob) while_outputs.append(do_loop_body_workspace_blob) do_loop_body_net.Do( loop_inputs, loop_outputs, net=loop_body_net.Proto(), inner_blobs=loop_body_outer_blob_names, outer_blobs_idx=loop_body_outer_blob_names_idx, copy_external_blobs=True) do_loop_body_net.AddExternalOutput(*loop_outputs) while_args = {} while_args['loop_net'] = do_loop_body_net.Proto() cond_workspace_blob = None if condition_body_net: cond_input_blob_names, cond_output_blob_names = get_external_blob_names( condition_body_net, lexical_scope) # make sure condition blob is written by condition net and is # visible outside of it found_condition_output = False for op in condition_body_net.Proto().op: if str(cond_blob) in op.output: found_condition_output = True break assert found_condition_output, \ "Condition net does not write into condition blob" if str(cond_blob) not in cond_output_blob_names: cond_output_blob_names.add(str(cond_blob)) cond_inputs = [core.BlobReference(name=b, net=None) for b in cond_input_blob_names] assert str(cond_blob) in cond_output_blob_names, \ 'Condition blob expected in condition net output' cond_outputs = [core.BlobReference(name=b, net=None) for b in cond_output_blob_names] condition_net = core.Net('do_loop_condition_net') cond_input_output_names_ordered = [ str(b) for b in (cond_inputs + cond_outputs)] cond_body_outer_blob_names = \ list(cond_input_blob_names | cond_output_blob_names) cond_body_outer_blob_names_idx = [ cond_input_output_names_ordered.index(b) for b in cond_body_outer_blob_names] cond_workspace_blob = \ while_net.NextScopedBlob(while_net.Name() + '/workspace_loop_cond') cond_inputs.append(cond_workspace_blob) cond_outputs.append(cond_workspace_blob) condition_net.Do( cond_inputs, cond_outputs, net=condition_body_net.Proto(), inner_blobs=cond_body_outer_blob_names, outer_blobs_idx=cond_body_outer_blob_names_idx) condition_net.AddExternalOutput(*cond_outputs) while_args['cond_net'] = condition_net.Proto() while_inputs += [b for b in cond_inputs if str(b) not in input_blob_names] while_outputs += [b for b in cond_outputs if str(b) not in output_blob_names] if str(cond_blob) not in lexical_scope: while_net.ConstantFill( [], cond_blob, dtype=core.DataType.BOOL, value=False) while_net.CreateScope([], [do_loop_body_workspace_blob]) if cond_workspace_blob: while_net.CreateScope([], [cond_workspace_blob]) while_net.While(while_inputs, while_outputs, **while_args) while_net.AddExternalOutput(*while_outputs)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/control_ops_util.py
0.687525
0.271258
control_ops_util.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core from caffe2.python.dataio import Reader from caffe2.python.schema import Scalar, Struct, data_type_for_dtype class TextFileReader(Reader): """ Wrapper around operators for reading from text files. """ def __init__(self, init_net, filename, schema, num_passes=1, batch_size=1): """ Create op for building a TextFileReader instance in the workspace. Args: init_net : Net that will be run only once at startup. filename : Path to file to read from. schema : schema.Struct representing the schema of the data. Currently, only support Struct of strings and float32. num_passes : Number of passes over the data. batch_size : Number of rows to read at a time. """ assert isinstance(schema, Struct), 'Schema must be a schema.Struct' for name, child in schema.get_children(): assert isinstance(child, Scalar), ( 'Only scalar fields are supported in TextFileReader.') field_types = [ data_type_for_dtype(dtype) for dtype in schema.field_types()] Reader.__init__(self, schema) self._reader = init_net.CreateTextFileReader( [], filename=filename, num_passes=num_passes, field_types=field_types) self._batch_size = batch_size def read(self, net): """ Create op for reading a batch of rows. """ blobs = net.TextFileReaderRead( [self._reader], len(self.schema().field_names()), batch_size=self._batch_size) if type(blobs) is core.BlobReference: blobs = [blobs] is_empty = net.IsEmpty( [blobs[0]], core.ScopedBlobReference(net.NextName('should_stop')) ) return (is_empty, blobs)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/text_file_reader.py
0.855021
0.239183
text_file_reader.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import argparse import subprocess import sys class Trie(object): """A simple class that represents a Trie.""" def __init__(self, name): """Initializes a Trie object.""" self.name = name self.size = 0 self.dictionary = {} def GetSymbolTrie(target, nm_command, max_depth): """Gets a symbol trie with the passed in target. Args: target: the target binary to inspect. nm_command: the command to run nm. max_depth: the maximum depth to create the trie. """ # Run nm to get a dump on the strings. proc = subprocess.Popen( [nm_command, '--radix=d', '--size-sort', '--print-size', target], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) nm_out, _ = proc.communicate() if proc.returncode != 0: print('NM command failed. Output is as follows:') print(nm_out) sys.exit(1) # Run c++filt to get proper symbols. proc = subprocess.Popen(['c++filt'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) out, _ = proc.communicate(input=nm_out) if proc.returncode != 0: print('c++filt failed. Output is as follows:') print(out) sys.exit(1) # Splits the output to size and function name. data = [] for line in out.split('\n'): if line: content = line.split(' ') if len(content) < 4: # This is a line not representing symbol sizes. skip. continue data.append([int(content[1]), ' '.join(content[3:])]) symbol_trie = Trie('') for size, name in data: curr = symbol_trie for c in name: if c not in curr.dictionary: curr.dictionary[c] = Trie(curr.name + c) curr = curr.dictionary[c] curr.size += size if len(curr.name) > max_depth: break symbol_trie.size = sum(t.size for t in symbol_trie.dictionary.values()) return symbol_trie def MaybeAddColor(s, color): """Wrap the input string to the xterm green color, if color is set. """ if color: return '\033[92m{0}\033[0m'.format(s) else: return s def ReadableSize(num): """Get a human-readable size.""" for unit in ['B', 'KB', 'MB', 'GB']: if abs(num) <= 1024.0: return '%3.2f%s' % (num, unit) num /= 1024.0 return '%.1f TB' % (num,) # Note(jiayq): I know, I know, this is a recursive function, but it is # convenient to write. def PrintTrie(trie, prefix, max_depth, min_size, color): """Prints the symbol trie in a readable manner. """ if len(trie.name) == max_depth or not trie.dictionary.keys(): # If we are reaching a leaf node or the maximum depth, we will print the # result. if trie.size > min_size: print('{0}{1} {2}'.format( prefix, MaybeAddColor(trie.name, color), ReadableSize(trie.size))) elif len(trie.dictionary.keys()) == 1: # There is only one child in this dictionary, so we will just delegate # to the downstream trie to print stuff. PrintTrie( trie.dictionary.values()[0], prefix, max_depth, min_size, color) elif trie.size > min_size: print('{0}{1} {2}'.format( prefix, MaybeAddColor(trie.name, color), ReadableSize(trie.size))) keys_with_sizes = [ (k, trie.dictionary[k].size) for k in trie.dictionary.keys()] keys_with_sizes.sort(key=lambda x: x[1]) for k, _ in keys_with_sizes[::-1]: PrintTrie( trie.dictionary[k], prefix + ' |', max_depth, min_size, color) def main(argv): if not sys.platform.startswith('linux'): raise RuntimeError('Currently this tool only supports Linux.') parser = argparse.ArgumentParser( description="Tool to inspect binary size.") parser.add_argument( '--max_depth', type=int, default=10, help='The maximum depth to print the symbol tree.') parser.add_argument( '--min_size', type=int, default=1024, help='The mininum symbol size to print.') parser.add_argument( '--nm_command', type=str, default='nm', help='The path to the nm command that the tool needs.') parser.add_argument( '--color', action='store_true', help='If set, use ascii color for output.') parser.add_argument( '--target', type=str, help='The binary target to inspect.') args = parser.parse_args(argv) if not args.target: raise RuntimeError('You must specify a target to inspect.') symbol_trie = GetSymbolTrie( args.target, args.nm_command, args.max_depth) PrintTrie(symbol_trie, '', args.max_depth, args.min_size, args.color) if __name__ == '__main__': main(sys.argv[1:])
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/binarysize.py
0.68763
0.318287
binarysize.py
pypi
import numpy as np from caffe2.proto import caffe2_pb2 from caffe2.python import workspace def OnGPU(gpu_id): """A utility function that returns a device option protobuf of the specified gpu id. """ device_option = caffe2_pb2.DeviceOption() device_option.device_type = workspace.GpuDeviceType device_option.device_id = gpu_id return device_option def OnCPU(): device_option = caffe2_pb2.DeviceOption() device_option.device_type = caffe2_pb2.CPU return device_option def Allreduce(net, blobs, reduced_affix="_reduced", gpu_indices=None): """The general Allreduce interface that reroutes the function calls. CPUs and AMD GPUs are not supported because GetGpuPeerAccessPattern is called to get gpu peer access pattern. """ if gpu_indices is None: gpu_indices = list(range(len(blobs))) if len(gpu_indices) != len(blobs): raise RuntimeError( "gpu_indices length and blobs length mismatch: %d vs %d" % (len(gpu_indices), len(blobs)) ) pattern = workspace.GetGpuPeerAccessPattern() if len(blobs) == 2 and pattern.shape[0] >= 2 and np.all(pattern[:2, :2]): return Allreduce2(net, blobs, reduced_affix, gpu_indices) elif len(blobs) == 4 and pattern.shape[0] >= 4 and np.all(pattern[:4, :4]): return Allreduce4(net, blobs, reduced_affix, gpu_indices) elif len(blobs) == 4 and pattern.shape[0] >= 4 and np.all(pattern[:2, :2]) and np.all(pattern[2:4, 2:4]): return Allreduce4Group2(net, blobs, reduced_affix, gpu_indices) elif len(blobs) == 8 and pattern.shape[0] >= 8 and np.all(pattern[:8, :8]): return Allreduce8(net, blobs, reduced_affix, gpu_indices) else: return AllreduceFallback(net, blobs, reduced_affix, gpu_indices) def Allreduce2(net, blobs, reduced_affix, gpu_indices): """Allreduce for 2 gpus. Algorithm: 0r <- 0 + 1, 1r <- 0r, where r means "reduced" """ a, b = blobs gpu_a, gpu_b = gpu_indices a_reduced = net.Add([a, b], a + reduced_affix, device_option=OnGPU(gpu_a)) b_reduced = a_reduced.Copy( [], b + reduced_affix, device_option=OnGPU(gpu_b) ) return a_reduced, b_reduced def Allreduce4(net, blobs, reduced_affix, gpu_indices): """Allreduce for 4 gpus. Algorithm: 2 level reduction. 0r <- 0 + 1, 2r <- 2 + 3 0r <- 0r + 2r 2r <- 0r, 1r <- 0r, 3r <- 2r """ a, b, c, d = blobs gpu_a, gpu_b, gpu_c, gpu_d = gpu_indices # a_reduced <- a+b, c_reduced <- c + d a_reduced = net.Add( [a, b], str(a) + reduced_affix, device_option=OnGPU(gpu_a) ) c_reduced = net.Add( [c, d], str(c) + reduced_affix, device_option=OnGPU(gpu_c) ) # a_reduced <- a_reduced + c_reduced a_reduced = a_reduced.Add(c_reduced, a_reduced, device_option=OnGPU(gpu_a)) # broadcast a_reduced to c_reduced c_reduced = a_reduced.Copy([], c_reduced, device_option=OnGPU(gpu_c)) # broadcast to b and d b_reduced = a_reduced.Copy( [], str(b) + reduced_affix, device_option=OnGPU(gpu_b) ) d_reduced = c_reduced.Copy( [], str(d) + reduced_affix, device_option=OnGPU(gpu_d) ) return a_reduced, b_reduced, c_reduced, d_reduced def Allreduce4Group2(net, blobs, reduced_affix, gpu_indices): """Allreduce for 4 gpus where peer access are enabled in {0,1} and {2,3} Algorithm: 2 level reduction. 0r <- 0 + 1, 2r <- 2 + 3 0r <- 0r + 2r 2r <- 0r, 1r <- 0r, 3r <- 2r """ a, b, c, d = blobs gpu_a, gpu_b, gpu_c, gpu_d = gpu_indices # a_reduced <- a+b, c_reduced <- c + d a_reduced = net.Add( [a, b], str(a) + reduced_affix, device_option=OnGPU(gpu_a) ) c_reduced = net.Add( [c, d], str(c) + reduced_affix, device_option=OnGPU(gpu_c) ) # copy from c_reduce(gpu_c) to c_reduce_copy(gpu_a) c_reduced_copy = c_reduced.Copy( [], str(c_reduced) + '_copy', device_option=OnGPU(gpu_a) ) # a_reduced <- a_reduced + c_reduced_copy a_reduced = a_reduced.Add(c_reduced_copy, a_reduced, device_option=OnGPU(gpu_a)) # broadcast a_reduced to c_reduced c_reduced = a_reduced.Copy([], c_reduced, device_option=OnGPU(gpu_c)) # broadcast to b and d b_reduced = a_reduced.Copy( [], str(b) + reduced_affix, device_option=OnGPU(gpu_b) ) d_reduced = c_reduced.Copy( [], str(d) + reduced_affix, device_option=OnGPU(gpu_d) ) return a_reduced, b_reduced, c_reduced, d_reduced def Allreduce8(net, blobs, reduced_affix, gpu_indices): """Allreduce for 8 gpus. Algorithm: 3 level reduction. 0r <- 0 + 1, 2r <- 2 + 3, 4r <- 4 + 5, 6r <- 6 + 7 0r <- 0r + 2r, 4r <- 4r + 6r 0r <- 0r + 4r 4r <- 0r 2r <- 0r, 6r <- 4r 1r <- 0r, 3r <- 2r, 5r <- 4r, 7r <- 6r """ reduced = [None] * 8 # Reduction level 1 for i in [0, 2, 4, 6]: reduced[i] = net.Add( [blobs[i], blobs[i + 1]], blobs[i] + reduced_affix, device_option=OnGPU(gpu_indices[i]) ) # Reduction level 2 for i in [0, 4]: reduced[i] = net.Add( [reduced[i], reduced[i + 2]], str(blobs[i]) + reduced_affix, device_option=OnGPU(gpu_indices[i]) ) # Reduction level 3: this involves a copy. reduced_4_copy = reduced[4].Copy( [], str(reduced[4]) + '_copy', device_option=OnGPU(gpu_indices[0]) ) reduced[0] = reduced[0].Add( reduced_4_copy, reduced[0], device_option=OnGPU(gpu_indices[0]) ) # Broadcast level 1 reduced[4] = reduced[0].Copy( [], reduced[4], device_option=OnGPU(gpu_indices[4]) ) # Broadcast level 2 for i in [2, 6]: reduced[i] = reduced[i - 2].Copy( [], reduced[i], device_option=OnGPU(gpu_indices[i]) ) # Broadcast level 3 for i in [1, 3, 5, 7]: reduced[i] = reduced[i - 1].Copy( [], blobs[i] + reduced_affix, device_option=OnGPU(gpu_indices[i]) ) return reduced def AllreduceFallback(net, blobs, reduced_affix, gpu_indices): """A fallback option for Allreduce with no assumption on p2p. Algorithm: a flat operation on gpu 0 0r <- 0 0r <- 0r + i for i in gpu_indices[1:] ir <- 0r for i in gpu_indices[1:] """ reduced = [None] * len(gpu_indices) if reduced_affix != '': # copy first reduced[0] = net.Copy( blobs[0], blobs[0] + reduced_affix, device_option=OnGPU(gpu_indices[0]) ) else: reduced[0] = blobs[0] # do temp copy and add temp_name = reduced[0] + '_temp_copy' for i in range(1, len(gpu_indices)): temp = net.Copy( blobs[i], temp_name, device_option=OnGPU(gpu_indices[0]) ) reduced[0] = net.Add( [temp, reduced[0]], reduced[0], device_option=OnGPU(gpu_indices[0]) ) # Broadcast to everyone else for i in range(1, len(gpu_indices)): reduced[i] = net.Copy( reduced[0], blobs[i] + reduced_affix, device_option=OnGPU(gpu_indices[i]) ) return reduced
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/muji.py
0.602179
0.424233
muji.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import networkx as nx import collections import time import copy from caffe2.python import workspace, core from caffe2.proto import caffe2_pb2 import enum import logging from future.utils import viewitems, viewvalues import caffe2.python._import_c_extension as C log = logging.getLogger("memonger") log.setLevel(logging.INFO) LiveRange = collections.namedtuple('LiveRange', ["defined", "used", "size"]) def share_grad_blobs( net, losses, param_grads, namescope, dont_share_blobs=None, share_activations=False, blob_shapes=None, ): ''' Implements similar optimization as Torch's shareGradInput(): for the gradients that are passed between layers, share blobs between operators when possible. This yields significant memory savings with deep networks. Returns an optimized protobuf (assign to net._net) ''' def is_grad_blob(b): name = str(b) # Note: need to look at _{namescope} pattern as it matches # to handle the auto-split gradients return name.endswith("_grad") and (name.startswith(namescope) or name.startswith("_" + namescope)) and name not in param_grads def is_grad_op(op): # TODO: something smarter for b in list(op.input) + list(op.output): if is_grad_blob(b): return True return False log.warn("NOTE: Executing memonger to optimize gradient memory") # Collect ops that have something to do with gradients if namescope != "" and not namescope.endswith("/"): namescope += "/" netproto = copy.deepcopy(net.Proto()) activations = [] external_output = set(net.Proto().external_output) # Hacky way to get activations, think of a better way for op in net.Proto().op: for b in op.output: if b + "_w" in op.input and b not in external_output: activations.append(b) # Remove last activations, as they are usually accessed externally activations = set(activations[:-2]) # Gradient ops grad_op_indices = [] for idx, op in enumerate(netproto.op): if (is_grad_op(op)): grad_op_indices.append(idx) shared_blobs = set() for op in net.Proto().op: for b in list(op.input) + list(op.output): if is_grad_blob(b) or (share_activations and b in activations): shared_blobs.add(b) start_time = time.time() optim_str = C.memonger_compute_blob_recycling_for_dag( netproto.SerializeToString(), [str(s).encode('utf-8') for s in losses], grad_op_indices, set(str(s).encode('utf-8') for s in shared_blobs), namescope.encode('utf-8'), set() if dont_share_blobs is None else dont_share_blobs, {} if blob_shapes is None else blob_shapes ) log.info("Memonger memory optimization took {} secs".format( time.time() - start_time), ) optim = caffe2_pb2.NetDef() optim.ParseFromString(optim_str) assert verify_graph_equality(net.Proto(), optim), \ "Memonger graph is not equal to original." assert verify_inplace_blobs(net.Proto(), optim), \ "Inplace assignments differ in memonger net." return optim def optimize_inference_for_dag(net, input_blobs, namescope=""): netproto = copy.deepcopy(net.Proto()) external_input = set(net.Proto().external_input) external_output = set(net.Proto().external_output) def is_activation_blob(b): return b not in external_input and b not in external_output activation_blobs = set() seen_as_output = set() ops = list(net.Proto().op) op_indices = [index for index, op in enumerate(net.Proto().op)] # Sanity check: check that all external inputs are properly accounted # and that no gradient ops are included in 'net' for op in ops: for b in op.input: if is_activation_blob(b): activation_blobs.add(b) if b not in seen_as_output: assert False, "{} not in external input".format(b) for b in op.output: if is_activation_blob(b): activation_blobs.add(b) seen_as_output = seen_as_output.union(set(op.output)) assert not op.is_gradient_op, \ "You can only pass inference-only nets to optimize_inference_for_dag" start_time = time.time() optim_str = C.memonger_compute_blob_recycling_for_dag( netproto.SerializeToString(), [str(s).encode('utf-8') for s in input_blobs], op_indices, set(str(s).encode('utf-8') for s in activation_blobs), namescope.encode('utf-8'), set(), {} ) log.info("Memonger memory optimization took {} secs".format( time.time() - start_time), ) optim = caffe2_pb2.NetDef() optim.ParseFromString(optim_str) assert verify_graph_equality(net.Proto(), optim), \ "Memonger graph is not equal to original." assert verify_inplace_blobs(net.Proto(), optim), \ "Inplace assignments differ in memonger net." return optim def estimate_memory_usage(protos, shapes, types, devicescope): import numpy as np ''' Estimate memory usage of a model. This is an estimate because we assume a single threaded execution and miss some internal memory usage of operators. Only estimates the memory for a given device scope. Also, currently it does not handle correctly if blob sizes vary during execution, as it uses only the final blob size. Returns (total, highwater, by op type) memory allocation in bytes. ''' sizeofs = { caffe2_pb2.TensorProto.DOUBLE: 8, caffe2_pb2.TensorProto.FLOAT: 4, caffe2_pb2.TensorProto.FLOAT16: 2, caffe2_pb2.TensorProto.INT32: 4, caffe2_pb2.TensorProto.INT8: 1, caffe2_pb2.TensorProto.UINT8: 1, caffe2_pb2.TensorProto.UINT16: 2, caffe2_pb2.TensorProto.INT16: 2, caffe2_pb2.TensorProto.BOOL: 1, caffe2_pb2.TensorProto.INT64: 8, } def split_net(proto): ops = [op for op in proto.op if op.device_option == devicescope or op.type in {"Free", "Alias"}] del proto.op[:] proto.op.extend(ops) return proto def num_bytes(blob): if blob not in shapes or blob not in types: log.warning("Unknown blob encountered: {}".format(blob)) return 0 sizeof = sizeofs[types[blob]] return sizeof * np.prod(shapes[blob]) protos = [split_net(proto) for proto in protos] allocs_by_ops = collections.defaultdict(lambda: 0) # Evaluate current_allocated = 0 max_allocated = 0 total_allocated = 0 allocated = set() for proto in protos: for op in proto.op: if op.type == "Free" or op.type == "Alias": for o in op.output: if o in allocated: current_allocated -= num_bytes(o) allocated.remove(o) else: for output in op.output: if output not in allocated: nbytes = num_bytes(output) total_allocated += nbytes current_allocated += nbytes max_allocated = max(max_allocated, current_allocated) allocated.add(output) allocs_by_ops[op.type] += nbytes return (total_allocated, max_allocated, allocs_by_ops) def release_blobs_when_used(netproto, dont_free_blobs, selector_fun=None): ''' Insert Free-ops after a blob has been used the last time, so that its memory can be reclaimed. Use this only with efficient caching memory managers (such as CUB, --caffe2_cuda_memory_pool=cub). Blobs used with Alias op won't be freed. @dont_free_blobs: is a set of blobs that should not be freed @selector_fun: optional lambda that return True if blob name can be released. Use for easy special filtering, like excluding blobs with "loss" in the name. Returns a new protobuffer. To use with a model, use: model.net._net = memonger.release_blobs_when_used(..) ''' input_blobs = set() can_release = set() alias_blobs = set() netproto = copy.deepcopy(netproto) for op in netproto.op: if op.type == 'Alias': alias_blobs.add(op.input[0]) continue for inp in op.input: input_blobs.add(inp) for outp in op.output: if outp not in input_blobs: if selector_fun is None or selector_fun(outp): can_release.add(outp) # Remove such blobs that are not input at all and external outputs can_release = can_release - set(netproto.external_output) can_release = can_release.intersection(input_blobs) can_release = can_release - dont_free_blobs can_release = can_release - alias_blobs ops = list(netproto.op) # .. then find last use of each can-release blob, and insert a Free op for j in reversed(range(0, len(netproto.op))): op = netproto.op[j] for inp in op.input: if inp in can_release: can_release.remove(inp) ops.insert(j + 1, core.CreateOperator("Free", [inp], [inp])) del netproto.op[:] netproto.op.extend(ops) return netproto def _find_source_nodes(g): ''' Return nodes without predecessors ''' ret = [] for cn in g: cur_pred = list(g.predecessors(cn)) if not cur_pred: ret.append(cn) return ret def _find_target_nodes(g): ''' Return nodes without successors ''' ret = [] for cn in g: cur_succ = list(g.successors(cn)) if not cur_succ: ret.append(cn) return ret def _add_single_target_ifneeded(g): targets = _find_target_nodes(g) assert len(targets) >= 1 if len(targets) == 1: return g ret = copy.deepcopy(g) def _next_available_idx(g): ret = -1 for cn in g: if cn > ret: ret = cn ret += 1 return ret target_node_idx = _next_available_idx(g) ret.add_node(target_node_idx) for cn in targets: ret.add_edge(cn, target_node_idx) return ret def _get_path(pred_list, dist_list): ''' Get the path from nx.bellman_ford()'s output ''' # distances are negative assert all(dist_list[x] <= 0 for x in dist_list) # node with longest distance to source is the target target = min(dist_list, key=lambda x: dist_list[x]) ret = [] cur = target while cur is not None: ret.append(cur) # Hack to get networkx 2.0 happy: it uses list in pred. # TODO(tulloch): are there cases with multiple predecessors? try: cur = pred_list[cur][0] except TypeError: cur = pred_list[cur] return list(reversed(ret)) def _get_longest_paths(g, source_nodes): ''' Get the longest path for nodes in 'source_nodes' Find with bellman_ford() by setting weight = -1 ''' ng = copy.deepcopy(g) for u, v in ng.edges(): ng[u][v]["weight"] = -1 ret = {} for cn in source_nodes: pred, dist = nx.bellman_ford(ng, cn, weight="weight") path = _get_path(pred, dist) assert path[0] == cn assert len(path) - 1 == -dist[path[-1]] ret[cn] = path return ret def _build_tree(paths): ''' Build a tree for given paths based on common elements. Last elements of all paths are the same, which is the root of the tree. ''' assert all(cp[-1] == paths[0][-1] for cp in paths) g = nx.DiGraph() node_set = {y for x in paths for y in x} g.add_nodes_from(node_set) for cp in paths: for ce in zip(cp[0:-1], cp[1:]): g.add_edge(ce[1], ce[0]) root = paths[0][-1] _compute_tree_height(g, root) return (g, root) def _compute_tree_height(g, root): ''' Compute the heights of the tree for all nodes Height of leaves are 0 ''' def _get_height(root): children = list(g.successors(root)) height = 0 if children: child_heights = [_get_height(x) for x in children] height = max(child_heights) + 1 g.node[root]["height"] = height return height _get_height(root) def _sort_tree_leaves(g, root): ''' For each node, sort its child nodes based on the height of the nodes. Return the leaf nodes of the tree after sorting. ''' def _get_height(root): return g.node[root]["height"] def _get_sorted_leaves(root): children = list(g.successors(root)) if not children: return [root] child_heights = [_get_height(x) for x in children] order = sorted(range(len(children)), key=lambda x: child_heights[x]) ret = [] for co in order: cr = children[co] ret += _get_sorted_leaves(cr) return ret return _get_sorted_leaves(root) def topological_sort_traversal_longest_path(g): ''' The graph 'g' may contain several source nodes (nodes without incoming edge), which could be in any order and still be a valid topological sorting result. We would like to arrange these source nodes so that the average live spans of the computed blobs are shorter. The idea is to sort the source nodes based on the length of their path to the target node so that the one with longer path is used first. This is done by: - Add a single target node if there are multiple target nodes in 'g'. - Find the longest path between each source and the target node. - Convert the longest paths to a tree with the target node being the root and source nodes being the leaves. - Sort the nodes of the tree based on the height of the tree. ''' gt = _add_single_target_ifneeded(g) source_nodes = _find_source_nodes(gt) lpaths = _get_longest_paths(gt, source_nodes) tree, root = _build_tree(list(viewvalues(lpaths))) sorted_sources = _sort_tree_leaves(tree, root) assert(sorted(sorted_sources) == sorted(source_nodes)) if nx.__version__ < '2.0': ret = nx.topological_sort(g, sorted_sources) else: # Manually making a sorted descendent list dependency_order = list(sorted_sources) seen_nodes = set(sorted_sources) for s in sorted_sources: desc = nx.descendants(g, s) for d in desc: if d not in seen_nodes: seen_nodes.add(d) dependency_order.append(d) sort_key = dict((v, len(dependency_order) - i) for i, v in enumerate(dependency_order)) ret = nx.algorithms.dag.lexicographical_topological_sort( g, key=lambda x: sort_key[x]) ret = list(ret) assert(len(ret) == len(g.node)) return ret def topological_sort_traversal(g): return list(nx.topological_sort(g)) def compute_ranges(linearized_ops, blob_sizes=None): if not blob_sizes: log.warning('Provide blob sizes to get more accurate assignments.') blobs = collections.defaultdict( lambda: LiveRange(defined=None, used=None, size=None)) for i, op in enumerate(linearized_ops): for blob in op.input: used = blobs[blob].used if used is None: used = i else: used = max(used, i) blobs[blob] = blobs[blob]._replace(used=used) blob_size = blob_sizes[blob] if blob_sizes else None assert not blob_sizes or blob_size is not None blobs[blob] = blobs[blob]._replace(size=blob_size) for blob in op.output: defined = blobs[blob].defined if defined is None: defined = i else: defined = min(defined, i) blobs[blob] = blobs[blob]._replace(defined=defined) blob_size = blob_sizes[blob] if blob_sizes else None assert not blob_sizes or blob_size is not None blobs[blob] = blobs[blob]._replace(size=blob_size) return blobs def is_compatible(candidate_range, assignment, static_blobs): (name, range_) = assignment[-1] if name in static_blobs: return False if candidate_range.defined is None or range_.defined is None \ or range_.used is None: return False return candidate_range.defined > range_.used def compute_blob_assignments(assignments): blob_assignments = {} for assignment in assignments: if len(assignment) == 1: continue last_blob, _ = assignment[-1] for (blob, _) in assignment: blob_assignments[blob] = last_blob return blob_assignments def _get_max_size(assignment): if not assignment: return 0 ret = max([x[1].size for x in assignment]) ret = 0 if ret is None else ret return ret def get_memory_usage(assignments): ret = 0 for cur in assignments: ret += _get_max_size(cur) return ret def compute_assignments_greedy(ranges_sorted, init_assignments=None): assignments = init_assignments or [] visited = {y[0] for x in assignments for y in x} for (name, range_) in ranges_sorted: if name in visited: continue assigned = False best_assignment = 0 min_dist = float("inf") candidate_size = range_.size or 0 for idx, assignment in enumerate(assignments): if is_compatible(range_, assignment, []): assigned = True dist = abs(_get_max_size(assignment) - candidate_size) if dist < min_dist: min_dist = dist best_assignment = idx if assigned: assignment = assignments[best_assignment] assignment.append((name, range_)) else: assignments.append([(name, range_)]) return assignments def _get_count(assignments): ''' Return number of blobs in assignments ''' if assignments: return sum([len(x) for x in assignments]) return 0 def compute_assignments_dp(ranges_sorted, init_assignment, counter=None): ''' Compute assignment for blobs in 'ranges_sorted' on top of 'init_assignment' using dynamic programming + recursion. ranges_sorted: blobs sorted by 'used' init_assignment: assignment to start with, blobs in 'ranges_sorted' should not be used in 'init_assignment' Using f(b, k, init) to represent the best assignment for blobs b[0:k] given initial assignment 'init', we have f(b, k, init) = f(b, j, init) + find_best(b[j:k], f(b, j, init)) where j is the index of the last best assignment that is independent of blob b[k - 1] (b[k - 1] is compatible with all assignments in f(b, j, init)), and find_best(b1, init1) gives the best assignment for blobs in 'b1' based on the initial assignment 'init1', and blobs b1[0:-1] should be incompatible with b1[-1]. f(b, len(b), []) gives the best assignment for blobs 'b'. For find_best(b, init), since b[0:-1] are not compatible with b[-1], we could reduce it to a smaller problem to find best assignment for b[0:-1] as find_best(b, init) = min { f(b[0:-1], len(b) - 1, init - x) + [x, b[-1]] for x in init, or f(b[0:-1], len(b) - 1, init) + [b[-1]] } where min{} gives the assignment with minimum memory usage. ''' def _get_compatible_prev(candidate_range, best_assignments, cur_idx): ''' Find closest position k of best_assignments that is independent of candidate_range that candiate_range is compatible with all assignments in best_assignments[k]. Return -1 if not found. ''' def is_compatible_all(candidate_range, assignments): ''' return true if compatible for all assignments in assignments ''' return all([is_compatible(candidate_range[1], x, []) for x in assignments]) ii = cur_idx - 1 while ii >= 0: cba = best_assignments[ii] if is_compatible_all(candidate_range, cba): return ii ii -= 1 return -1 def _find_best(ranges, init_assignment, prev_best_assignment, counter): ''' Find the best assignment for blobs 'ranges' given an initialized assignment 'init_assignment'. Blobs in ranges[0:-1] should be incompatible with blob range[-1]. 'prev_best_assignment': best assignment for blobs in ranges[:-1] By assigning ranges[-1] to each assignment k in 'init_assignment' or in a new assignment, the problem becomes a smaller problem to find the best assignment for ranges[0:-1] given the initial assignment init_assigment[0:k, (k+1):-1]. ''' # Blob to check find_range = ranges[-1] # Blobs in ranges[0:-1] are incompatible with ranges[-1] so that we can # reduce it to a smaller problem. assert all(not is_compatible(x[1], [find_range], []) for x in ranges[0:-1]) sz = len(init_assignment) best_candidates = [] # Try to assign 'find_range' to each assignment in init_assignment for ii in range(sz): if not is_compatible(find_range[1], init_assignment[ii], []): continue cur_best = copy.deepcopy(init_assignment) cur_best[ii].append(find_range) if len(ranges) > 1: cur_best_tmp = [x for i, x in enumerate(cur_best) if i != ii] # reduce to a smaller dp problem cur_best_tmp = compute_assignments_dp( ranges[:-1], cur_best_tmp, counter) cur_best = cur_best_tmp + [cur_best[ii]] best_candidates.append(cur_best) # Try to put 'find_range' in a new assignment best_candidates.append(prev_best_assignment + [[find_range]]) ret = min(best_candidates, key=lambda x: get_memory_usage(x)) return ret if not counter: counter = [0] counter[0] += 1 if counter and counter[0] % 5000 == 0: rs = [ranges_sorted[0][1].defined, ranges_sorted[-1][1].used] log.info('Finding assignments {} ({} -> {})...'.format( counter[0], rs[0], rs[1])) init_assignment = init_assignment or [] # best_assignments[k]: best assignments for first k blobs ranges_sorted[0:(k+1)] best_assignments = [] # Find best assignment for blobs ranges_sorted[0:ii] for ii, cur_range in enumerate(ranges_sorted): # closest best_assignment that is independent of ranges_sorted[ii] prev_idx = _get_compatible_prev(cur_range, best_assignments, ii) prev_best = copy.deepcopy(init_assignment) if prev_idx < 0 else \ copy.deepcopy(best_assignments[prev_idx]) # Need to find best assignment for blobs in 'ranges_part' ranges_part = ranges_sorted[(prev_idx + 1):(ii + 1)] cur_best = _find_best( ranges_part, prev_best, best_assignments[-1] if best_assignments else init_assignment, counter) assert _get_count(cur_best) == _get_count(prev_best) + len(ranges_part) best_assignments.append(copy.deepcopy(cur_best)) assert len(best_assignments) == len(ranges_sorted) best = best_assignments[-1] return best def get_updated_ranges(ranges, max_live=None): ''' Set LiveRange.defined = -1 if it is None Set LiveRange.used = max_live if it is None Set LiveRanee.size = 1 if it is None ''' def _get_max_live(ranges): max_live = max(x[1].used for x in ranges if x[1].used) + 1 return max_live def _update_range(x, max_live, size): cx = x if x[1].defined is None: cx = (cx[0], cx[1]._replace(defined=-1)) if x[1].used is None: cx = (cx[0], cx[1]._replace(used=max_live)) if x[1].size is None: cx = (cx[0], cx[1]._replace(size=size)) return cx if max_live is None: max_live = _get_max_live(ranges) ranges = [_update_range(x, max_live, 1) for x in ranges] return ranges def compute_assignments(ranges, static_blobs, algo): ''' algo: Method used to find assignments (AssignmentAlgorithm.GREEDY or AssignmentAlgorithm.DYNAMIC_PROGRAMMING). AssignmentAlgorithm.DYNAMIC_PROGRAMMING gives optimal solution at the cost of more computation. AssignmentAlgorithm.GREEDY may be better in the case 'blob_sizes' is not provided. ''' # Sort the ranges based on when they are last used. # If LiveRange.used is None, then the blob is never used and could # be consumed externally. Sort these to the end of the list as opposed # to the beginning so that they can be shared as well. ranges = sorted( viewitems(ranges), key=lambda p: (p[1].used is None, p[1].used), ) # Update None values ranges = get_updated_ranges(ranges) # Sharable blobs ranges_sharable = [x for x in ranges if x[0] not in static_blobs] # Static blobs, not sharable ranges_static = [x for x in ranges if x[0] in static_blobs] log.info("Total sharable blobs {}".format(len(ranges_sharable))) best_assignment = [] if algo == AssignmentAlgorithm.DYNAMIC_PROGRAMMING: best_assignment = compute_assignments_dp(ranges_sharable, []) elif algo == AssignmentAlgorithm.GREEDY: best_assignment = compute_assignments_greedy(ranges_sharable, []) else: assert "Invalid algo name {}".format(algo) best_assignment += [[x] for x in ranges_static] # verify_assignments(best_assignment) return best_assignment def verify_assignments(assignments): for cur in assignments: for x, y in zip(cur[0:-1], cur[1:]): assert x[1].used < y[1].defined def compute_interference_graph(ops): g = nx.DiGraph() for i, op in enumerate(ops): g.add_node(i, op=op) for i, parent_op in enumerate(ops): for j, child_op in enumerate(ops): if i >= j: continue if any(output in child_op.input for output in parent_op.output): deps = set(child_op.input).intersection(parent_op.output) g.add_edge(i, j, deps=deps) assert nx.is_directed_acyclic_graph(g), child_op return g Optimization = collections.namedtuple( 'Optimization', ['net', 'assignments', 'blob_assignments']) def apply_assignments(net, blob_assignments): def canonical_name(blob): if blob not in blob_assignments: return blob return blob_assignments[blob] for op in net.op: # Descend into subnets of the recurrent network if op.type.startswith('RecurrentNetwork'): apply_recurrent_blob_assignments(op, blob_assignments, canonical_name) for i, input_ in enumerate(op.input): op.input[i] = canonical_name(input_) for i, output in enumerate(op.output): op.output[i] = canonical_name(output) def apply_recurrent_blob_assignments(op, blob_assignments, canonical_name): log.debug("Applying assignments to recurrent op: {}".format(op.type)) step_args = [a for a in op.arg if a.name.endswith("step_net")] for step_arg in step_args: apply_assignments(step_arg.n, blob_assignments) for i, einp in enumerate(step_arg.n.external_input): if einp in blob_assignments: step_arg.n.external_input[i] = canonical_name(einp) # Store renamings for blob, renamed in viewitems(blob_assignments): if blob in list(op.input) + list(op.output): a = caffe2_pb2.Argument() a.name = blob + ".rename" a.s = str(renamed).encode("ascii") op.arg.extend([a]) class AssignmentAlgorithm(enum.Enum): GREEDY = 0 DYNAMIC_PROGRAMMING = 1 def optimize_inference_fast(net, static_blobs): optim = caffe2_pb2.NetDef() optim_str = C.memonger_optimize_inference_net( net.SerializeToString(), [str(s).encode('utf-8') for s in static_blobs] ) optim.ParseFromString(optim_str) return optim def optimize_interference(net, static_blobs, ordering_function=topological_sort_traversal, blob_sizes=None, algo=AssignmentAlgorithm.GREEDY): """ ordering_function: topological_sort_traversal or topological_sort_traversal_longest_path. topological_sort_traversal_longest_path gives better results but needs a bit more computation. algo: Method used to find assignments (AssignmentAlgorithm.GREEDY or AssignmentAlgorithm.DYNAMIC_PROGRAMMING). AssignmentAlgorithm.DYNAMIC_PROGRAMMING gives optimal solution at the cost of more computation. AssignmentAlgorithm.GREEDY may be better in the case 'blob_sizes' is not provided. """ """ 1) Use a BFS traversal of the execution graph to generate an ordering of the node executions. 2) Generate use-def ranges for each `blob` in the BFS traversal order. 3) Assign blobs to `canonical blobs` 4) Rename blobs to canonical blobs """ net = copy.deepcopy(net) g = compute_interference_graph(net.op) ordering = ordering_function(g) linearized_ops = [net.op[i] for i in ordering] # Reorder ops in net based on the computed linearlized order. # If the graph has multiple topological orderings and if the NetDef's # ordering differs from the order used to compute ranges, then the # runtime might end up overwriting blobs before they are used. del net.op[:] net.op.extend(linearized_ops) ranges = compute_ranges(linearized_ops, blob_sizes) assignments = compute_assignments(ranges, static_blobs, algo) blob_assignments = compute_blob_assignments(assignments) apply_assignments(net, blob_assignments) return Optimization( net=net, blob_assignments=blob_assignments, assignments=assignments) def verify_inplace_blobs(net_a, net_b): """ Verifies that net_a and net_b have the same in-place blob assignments. Particularly, that memonger did not add an in-place assignment when that did not exist before. """ def get_inplaces(op): out = list(op.output) inplaces = [] for j, inp in enumerate(op.input): if inp in out: inplaces.append([j, out.index(inp)]) return inplaces for op_a, op_b in zip(net_a.op, net_b.op): if op_a.type != op_b.type: return False if get_inplaces(op_a) != get_inplaces(op_b): return False return True def verify_graph_equality(net_a, net_b): """ Determines if the execution of two graphs are identical. That is, all inputs blobs are mapped to the same output blobs for each operator in their respective positions. This is meant to check the output of memonger with the original graph. It assumes that the nets have same external input and output. O(E) runtime + O(1) amortized cost to hash for python dict """ def parent_list(ops): parent_list = [[] for _ in ops] edge_owner = {} for i, op in enumerate(ops): for blob in op.input: parent_id = edge_owner.get(blob) if parent_id is not None: parent_list[i].append(parent_id) for blob in op.output: edge_owner[blob] = i return parent_list # Operator wise equality checks if (len(net_a.op) != len(net_b.op)): return False for op_a, op_b in zip(net_a.op, net_b.op): if (op_a.type != op_b.type or op_a.device_option != op_b.device_option or op_a.engine != op_b.engine): return False # Print debug info parent_list_a = parent_list(net_a.op) parent_list_b = parent_list(net_b.op) if parent_list_a != parent_list_b: j = 0 for a, b in zip(parent_list_a, parent_list_b): if a != b: print("Difference {} vs {} \n {}".format( j, net_a.op[j], net_b.op[j])) print("Parents: {} vs {}".format(a, b)) j += 1 # Net wise equality check return parent_list_a == parent_list_b Statistics = collections.namedtuple( 'Statistics', ['baseline_nbytes', 'optimized_nbytes']) def blob_nbytes(blob): sz = 0 try: sz = workspace.FetchBlob(blob).nbytes except Exception: log.warning('Error when fetching blob {}'.format(blob)) return sz def compute_statistics(assignments): blob_bytes = { blob: blob_nbytes(blob) for assignment in assignments for (blob, _) in assignment} baseline_nbytes = sum(viewvalues(blob_bytes)) optimized_nbytes = sum( max(blob_bytes[blob] for (blob, _) in assignment) for assignment in assignments) return Statistics( baseline_nbytes=baseline_nbytes, optimized_nbytes=optimized_nbytes) def collect_blob_sizes(net): blobs = {} for op in net.op: for blob in op.input: blobs[blob] = blob_nbytes(blob) for blob in op.output: blobs[blob] = blob_nbytes(blob) return blobs
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/memonger.py
0.548432
0.272896
memonger.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import os import logging from caffe2.python import core, context from caffe2.python.net_builder import ops from caffe2.python.task import ( final_output, Node, Task, TaskGroup, TaskOutput, WorkspaceType, ) logger = logging.getLogger(__name__) @context.define_context() class Job(object): """ A Job defines three TaskGroups: the `init_group`, the `epoch_group` and the `exit_group` which will be run by a JobRunner. The `init_group` will be run only once at startup. Its role is to initialize globally persistent blobs such as model weights, accumulators and data file lists. The `epoch_group` will be run in a loop after init_group. The loop will exit when any of the stop signals added with `add_stop_condition` is True at the end of an epoch. The download_group will be run only once, after all the executions of epoch_group finish. Its role is to collect the distribute scattered parameters back after training. The `exit_group` will be run only once at the very end of the job, the role of this group is to save the results of training in the end of the job. Jobs are context-driven, so that Tasks can be added to the active Job without having to explicitly pass the job object around. Example of usage: def build_reader(partitions): with Job.current().init_group: reader = HiveReader(init_reader, ..., partitions) Task(step=init_reader) with Job.current().epoch_group: limited_reader = ReaderWithLimit(reader, num_iter=10000) data_queue = pipe(limited_reader, num_threads=8) Job.current().add_stop_condition(limited_reader.data_finished()) return data_queue def build_hogwild_trainer(reader, model): with Job.current().init_group: Task(step=model.param_init_net) with Job.current().epoch_group: pipe(reader, processor=model, num_threads=8) with Job.current().exit_group: Task(step=model.save_model_net) with Job() as job: reader = build_reader(partitions) model = build_model(params) build_hogwild_trainer(reader, model) """ def __init__(self, init_group=None, epoch_group=None, download_group=None, exit_group=None, stop_conditions=None, nodes_to_checkpoint=None): self.init_group = init_group or TaskGroup( workspace_type=WorkspaceType.GLOBAL) self.epoch_group = epoch_group or TaskGroup() self.download_group = download_group or TaskGroup() self.exit_group = exit_group or TaskGroup() self.stop_conditions = stop_conditions or [] self._nodes_to_checkpoint = nodes_to_checkpoint def nodes_to_checkpoint(self): if self._nodes_to_checkpoint: return self._nodes_to_checkpoint else: return self.init_group.used_nodes() def compile(self, session_class): self._nodes_to_checkpoint = self.nodes_to_checkpoint() self.init_group = session_class.compile(self.init_group) self.epoch_group = session_class.compile(self.epoch_group) self.download_group = session_class.compile(self.download_group) self.exit_group = session_class.compile(self.exit_group) def __enter__(self): self.epoch_group.__enter__() return self def __exit__(self, *args): self.epoch_group.__exit__() def add_stop_condition(self, output): if isinstance(output, core.BlobReference): t = Task(outputs=[output], group=self.epoch_group) output = t.outputs()[0] assert isinstance(output, TaskOutput) self.stop_conditions.append(output) def get_ckpt_filename(node_name, epoch): """Returns the checkpoint filename. Args: node_name: A string. The name of the node. epoch: An integer. The checkpoint epoch. Returns: ckpt_filename: A string. The filename of the checkpoint. """ return node_name + '.' + str(epoch) def db_name(epoch, node_name, db_prefix, path_prefix=None): """Returns the full db name where checkpoint files are saved. Args: epoch: An integer. The checkpoint epoch. node_name: A string. The name of the node. db_prefix: A string. The prefix used to construct full db name. path_prefix: A string. Optional param used to construct db name or path where checkpoint files are are stored. Returns: db_name: A string. The absolute path of full_db_name where checkpoint files are saved """ if path_prefix: db_name = path_prefix + get_ckpt_filename(node_name, epoch) else: ckpt_filename = get_ckpt_filename(node_name, epoch) db_name = os.path.join(db_prefix, ckpt_filename) return db_name class CheckpointManager(object): """ Controls saving and loading of workspaces on every epoch boundary of a job. If a CheckpointManager instance is passed to JobRunner, then JobRunner will call `init`, `read` and `save` at different moments in between epoch runs. Args: db_prefix: The prefix used to construct full db name. Since `absolute_path` is set to True, this will be used as db_name in SaveOp. node_name: Name of the node where this checkpoint_manager is used. db_type: Type of database to use for storing checkpoint. metadata_handler: An optional object capable of reading/writing checkpoint info in storage of choice. """ BLOB_NAMES = "blob_names" def __init__(self, db_prefix, node_name, db_type, metadata_handler=None): self._db_prefix = db_prefix self._node_name = node_name self._db_type = db_type self._metadata_handler = metadata_handler # make sure these blobs are the first in the checkpoint file. self._net = core.Net('!!checkpoint_mngr') self._blob_names = self._net.AddExternalInput(self.BLOB_NAMES) self._names_output = None self._path_prefix = None self._path_type = None self._current_db_name = None self._current_checkpoint_duration = None """ Initialize the checkpoint manager. Determines all blobs that need to be saved or loads from a checkpoint. Args: nodes: An array of nodes where this checkpoint manager is running. Should only contain a single node. retrieve_from_epoch: Set to a number to load blobs from this epoch. path_prefix: Used to construct db name or path where checkpoint files are stored. path_type: Indicate the type of path where checkpoint files are stored. """ def init( self, nodes=None, retrieve_from_epoch=None, path_prefix=None, path_type=None ): """ Build a Task that will be run once after the job's `init_group` is run. This task will determine which blobs need to be checkpointed. If retrieve_from_epoch is not None, then the checkpoint metadata is retrieved from a previously saved checkpoint. """ assert nodes is None or len(nodes) == 1, ( 'CheckpointManager only supports single node.') with Task(outputs=[self._blob_names]) as task: if retrieve_from_epoch is None: ops.GetAllBlobNames( [], self._blob_names, include_shared=False) else: full_db_name = db_name(retrieve_from_epoch, self._node_name, self._db_prefix, path_prefix) db_type = path_type or self._db_type logger.info("Initializing checkpoints from = %s" % full_db_name) ops.Load( [], self._blob_names, db=full_db_name, db_type=db_type, absolute_path=True, keep_device=True, ) self._names_output = task.outputs()[0] return task def blob_list(self): assert self._names_output return self._names_output.fetch().tolist() def _timed_task(self, cp_op_name, add_op): """ Build a Task that will measure the time span of checkpoint operations, once operation is done, time can be read from _current_checkpoint_duration. Args: cp_op_name: A string name of the checkpoint operation. add_op: A functor to add the checkpoint operation. Returns: A task with timer. """ with Task(name=cp_op_name) as task: with ops.task_init(): timer = ops.TimerBegin([], counter_name=self._node_name) add_op() with ops.task_exit(): time_span_blob = ops.TimerGetAndEnd(timer) self._current_checkpoint_duration = final_output(time_span_blob) return task def collect_checkpoint_stats(self, stats): """ Add one checkpoint stats into the stats. Args: stats: A dict of checkpoint stats that will be reported. """ if self._current_db_name and self._current_checkpoint_duration: stats[self._current_db_name] = self._current_checkpoint_duration.fetch()[0] else: logger.info( "Failed to collect checkpoint stats: {}".format( self._current_db_name ) ) def load(self, epoch, path_prefix=None, path_type=None): """ Build a Task that will be run by JobRunner when the job is to be resumed from a given epoch. This task will run a Load op that will load and deserialize all relevant blobs from a persistent storage. """ self._current_db_name = db_name( epoch, self._node_name, self._db_prefix, path_prefix ) db_type = path_type or self._db_type logger.info("Loading checkpoints from = %s" % self._current_db_name) def add_op(): ops.Load( [], self.blob_list(), db=self._current_db_name, db_type=db_type, absolute_path=True, keep_device=True, ) return self._timed_task('checkpoint_load', add_op) def load_blobs_from_checkpoint(self, blob_names, epoch): """ Builds a Task that loads only the necessary blobs from a checkpoint of the given epoch. The necessary blobs are given in the blob_names argument. Args: blob_names: A list of strings. Each string is the name of a blob. epoch: The checkpoint epoch to load from. Returns: A Task which loads the specified blobs from the checkpoint of the given epoch. """ self._current_db_name = db_name(epoch, self._node_name, self._db_prefix) logger.info('Load from %s' % self._current_db_name) def add_op(): ops.Load( [], blob_names, db=self._current_db_name, db_type=self._db_type, absolute_path=True, allow_incomplete=True) return self._timed_task('checkpoint_partial_load', add_op) def check_db_exists(self, epoch): logger.info('Check existence of %s' % db_name(epoch, self._node_name, self._db_prefix)) with Task() as task: existence = ops.Const(False) ops.DBExists( [], [existence], db_name=db_name(epoch, self._node_name, self._db_prefix), db_type=self._db_type, absolute_path=True) task.add_output(existence) return task def report_checkpoint_stats(self, action_name): """ Report checkpoint operation stats for current node. Args: action_name: A string of the name of checkpoint operation. """ all_stats = {} self.collect_checkpoint_stats(all_stats) if self._metadata_handler: self._metadata_handler.report(action_name, all_stats) def save(self, epoch): """ Build a Task that is run once after `init_group` and after each epoch is run. This will execute a Save ops to serialize and persist blobs present in the global workspace. """ self._current_db_name = db_name(epoch, self._node_name, self._db_prefix) logger.info('Saving to %s' % self._current_db_name) def add_op(): ops.Save( self.blob_list(), [], db=self._current_db_name, db_type=self._db_type, absolute_path=True) return self._timed_task('checkpoint_save', add_op) def write_checkpoint_metadata(self, epoch): """ Write metadata for checkpoint Args: epoch: An integer. The epoch-id for which checkpoint metadata is written """ if self._metadata_handler is not None: self._metadata_handler.write(epoch=epoch) def get_resume_from_epoch_id(self, user_epoch=None): """ Identify the epoch-id from which Job must resume Args: user_epoch: An integer. Optional parameter for user to explicitly identify the epoch-id to load checkpoint from Returns: epoch: the epoch-id to load checkpoints from or None if no checkpoints were written """ last_epoch = user_epoch if self._metadata_handler is not None: last_epoch = self._metadata_handler.last_epoch(user_epoch=user_epoch) return last_epoch def set_params(self, nodes, path_prefix=None, path_type=None): """Set parameters associated with CP manager Args: nodes: An array of nodes where this checkpoint manager is running. path_prefix: Used to construct db name or path where checkpoint files are stored. path_type: Indicate the type of path where checkpoint files are stored. """ if path_prefix: self._path_prefix = path_prefix if path_type: self._path_type = path_type if self._metadata_handler: self._metadata_handler.set_params( db_prefix=self._db_prefix, db_type=self._db_type, node_names=[str(self._node_name)], path_prefix=self._path_prefix, path_type=self._path_type) def cp_accessible(self, epoch=None): """Returns True if Checkpoint data is accessible Args: epoch: An integer. The epoch of the checkpoint. If None, it implies we need to check if checkpoint directory is accessible Returns: is_cp_accessible: A boolean. Returns True if Checkpoint data is accessible """ if self._metadata_handler is not None: return self._metadata_handler.cp_accessible(epoch) else: return True class MultiNodeCheckpointManager(object): """ Coordinates checkpointing and checkpointing across multiple nodes. Each of `init`, `load` and `save` will build TaskGroups which will trigger checkpointing on each of the nodes involved in a distributed job. Args: db_prefix: The prefix used to construct full db name. Since `absolute_path` is set to True, this will be used as db_name in SaveOp. db_type: Type of database to use for storing checkpoint. metadata_handler: An optional object capable of reading/writing checkpoint info in storage of choice. """ def __init__(self, db_prefix, db_type, metadata_handler=None): self._node_managers = None self._db_prefix = db_prefix self._db_type = db_type self._metadata_handler = metadata_handler self._path_prefix = None self._path_type = None def _task_group(self, func, *args, **kw): assert self._node_managers is not None, 'init must be called first.' with TaskGroup(WorkspaceType.GLOBAL) as task_group: for node, manager in self._node_managers: with Node(node): func(manager, *args, **kw) return task_group """ Args: nodes: An array of nodes where this checkpoint manager is running. retrieve_from_epoch: Set to a number to load blobs from this epoch. path_prefix: Used to construct db name or path where checkpoint files are stored. path_type: Indicate the type of path where checkpoint files are stored. """ def init( self, nodes, retrieve_from_epoch=None, path_prefix=None, path_type=None ): if self._node_managers is not None: assert [node for node, _ in self._node_managers] == nodes return TaskGroup(WorkspaceType.GLOBAL) self._node_managers = [] for node in nodes: with Node(node): manager = CheckpointManager( db_prefix=self._db_prefix, node_name=str(node), db_type=self._db_type) self._node_managers.append((node, manager)) return self._task_group( CheckpointManager.init, nodes=[node], retrieve_from_epoch=retrieve_from_epoch, path_prefix=path_prefix, path_type=path_type) def load(self, epoch, path_prefix=None, path_type=None): return self._task_group( CheckpointManager.load, epoch, path_prefix=path_prefix, path_type=path_type) def load_blobs_locally(self, nodes, blob_names, epoch, session): """Loads the necessary blobs from the checkpoints to the current node. Args: blob_names: A list of strings. Each string is the name of a blob. epoch: An integer. The checkpoint epoch to load from. session: A Session object to execute the Load ops. """ if self._node_managers is not None: assert [node for node, _ in self._node_managers] == nodes else: self._node_managers = [] for node in nodes: with Node(node): manager = CheckpointManager( db_prefix=self._db_prefix, node_name=str(node), db_type=self._db_type) self._node_managers.append((node, manager)) assert self._node_managers is not None, 'must initialize node managers' for _, manager in self._node_managers: existence_task = manager.check_db_exists(epoch) session.run(existence_task) existence = existence_task.outputs()[0].fetch() if not existence: logger.info('DB %s does not exist!' % db_name(epoch, manager._node_name, manager._db_prefix)) return False load_task = manager.load_blobs_from_checkpoint(blob_names, epoch) session.run(load_task) logger.info('Successfully loaded from checkpoints.') return True def get_ckpt_db_name(self, node_name, epoch): """Returns the DB name of the given node and the given epoch. The DB name is effectively the checkpoint path of the given node and the given epoch. Args: node_name: A string. The node name of interest. epoch: An integer. The epoch of the checkpoint. Returns: checkpoint_db_name: A string. The checkpoint path of the given node and the given epoch. """ for node, manager in self._node_managers: if str(node) == node_name: return db_name(epoch, manager._node_name, manager._db_prefix) def report_checkpoint_stats(self, action_name): """ Report the checkpoint stats for all the nodes, we need to aggregate all the node's stats together so that we know which node's checkpoint operation dominates. Args: action_name: A string of the name of checkpoint operation. """ all_stats = {} for _, manager in self._node_managers: manager.collect_checkpoint_stats(all_stats) logger.debug("checkpoint stats: {}".format(all_stats)) if self._metadata_handler: self._metadata_handler.report(action_name, all_stats) def save(self, epoch): """ Build a Task that will execute a Save ops to serialize and persist blobs present in the global workspace. """ return self._task_group(CheckpointManager.save, epoch) def write_checkpoint_metadata(self, epoch): """ Write metadata for checkpoint Args: epoch: An integer. The epoch-id for which checkpoint metadata is written """ if self._metadata_handler is not None: self._metadata_handler.write(epoch=epoch) def get_resume_from_epoch_id(self, user_epoch=None): """ Identify the epoch-id from which Job must resume Args: user_epoch: An integer. Optional parameter for user to explicitly identify the epoch-id to load checkpoint from Returns: epoch: the epoch-id to load checkpoints from or None if no checkpoints were written """ last_epoch = user_epoch if self._metadata_handler is not None: last_epoch = self._metadata_handler.last_epoch(user_epoch=user_epoch) return last_epoch def set_params(self, nodes, path_prefix=None, path_type=None): """Set parameters associated with CP manager Args: nodes: An array of nodes where this checkpoint manager is running. path_prefix: Used to construct db name or path where checkpoint files are stored. path_type: Indicate the type of path where checkpoint files are stored. """ self._node_names = [str(node) for node in nodes] if path_prefix: self._path_prefix = path_prefix if path_type: self._path_type = path_type if self._metadata_handler: self._metadata_handler.set_params( db_prefix=self._db_prefix, db_type=self._db_type, node_names=self._node_names, path_prefix=self._path_prefix, path_type=self._path_type) def cp_accessible(self, epoch=None): """Returns True if Checkpoint data is accessible Args: epoch: An integer. The epoch of the checkpoint. If None, it implies we need to check if checkpoint directory is accessible Returns: is_cp_accessible: A boolean. Returns True if Checkpoint data is accessible """ if self._metadata_handler is not None: return self._metadata_handler.cp_accessible(epoch) else: return True class UploadTaskGroupBuilder(object): """A simple class to upload checkpoints.""" def build(self, epoch, checkpoint_manager): """Builds the task group to upload checkpoints. Args: epoch: An integer. The checkpoint epoch to be uploaded. checkpoint_manager: Can be a CheckpointManager for single machine or a MultiNodeCheckpointManager for multi-machine. The manager that initializes/saves/loads checkpoints. Raises: NotImplementedError: This base class only has the interface, the implementation will be in the subclasses. """ raise NotImplementedError() class JobRunner(object): """ Implement the runtime logic for jobs with checkpointing at the level of epoch. Can be used to run either single-host or distributed jobs. Job runner is a callable to be called once from the master, passing a session as an argument. This call will block until the Job execution is complete. If a checkpoint_manager is passed, checkpoints will be taken after initialization and after each epoch execution. If, in addition, `resume_from_epoch` is an epoch number, the corresponding checkpoint will be loaded and job execution will continue from the given epoch. In this case, the job's init_group will not be run. Refer to checkpoint_test.py for an example. """ def __init__(self, job, checkpoint_manager=None, resume_from_epoch=None, upload_task_group_builder=None): """Initializes the JobRunner. Args: job: A Job object. The job to be executed. checkpoint_manager: Can be a CheckpointManager for single machine or a MultiNodeCheckpointManager for multi-machine. The manager that initializes/saves/loads checkpoints. resume_from_epoch: An integer. The epoch to resume from. upload_task_group_builder: A subclass of the UploadTaskGroupBuilder. Creates a task group to upload checkpoints. """ self.resume_from_epoch = resume_from_epoch self.checkpoint_manager = checkpoint_manager self.job = job self.upload_task_group_builder = upload_task_group_builder def train(self, session): """Runs the training flow. Args: session: A Session object. Valid choises are: LocalSession, LocalHostScheduler, and DistributedSession. It is used to execute one TaskGroup a time. """ # identify the epoch we must resume from if self.checkpoint_manager: self.checkpoint_manager.set_params(nodes=self.job.nodes_to_checkpoint()) self.resume_from_epoch = self.checkpoint_manager.\ get_resume_from_epoch_id(self.resume_from_epoch) if self.resume_from_epoch is not None: logger.info('Resuming from epoch {}'.format(self.resume_from_epoch)) # Initialize all the nodes. from_scratch = self.resume_from_epoch is None if from_scratch: session.run(self.job.init_group) if self.checkpoint_manager: logger.info('Preparing checkpoints ...') session.run(self.checkpoint_manager.init( self.job.nodes_to_checkpoint(), retrieve_from_epoch=self.resume_from_epoch)) # Save the first checkpoint before training starts, or resume from # a previously saved checkpoint. if from_scratch: self.save_checkpoints(0, session) else: logger.info('Loading checkpoints for epoch {} ...'.format( self.resume_from_epoch)) session.run( self.checkpoint_manager.load(self.resume_from_epoch)) self.checkpoint_manager.report_checkpoint_stats('checkpoint_load') logger.info('Checkpoint loaded') logger.info("Finished initializing") # Start training. epoch = 1 if from_scratch else self.resume_from_epoch + 1 while True: logger.info('Starting epoch %d' % epoch) session.run(self.job.epoch_group) logger.info('Finished epoch %d' % epoch) stop_conditions = [o.fetch() for o in self.job.stop_conditions] if self.checkpoint_manager: self.save_checkpoints(epoch, session) if any(stop_conditions): logger.info('Stopping') break epoch += 1 logger.info('Finished training') # Upload the checkpoints. if (self.upload_task_group_builder): upload_task_group = self.upload_task_group_builder.build( epoch, self.checkpoint_manager) session.run(upload_task_group) logger.info('Finished uploading the checkpoints') # Download the parameters to save session.run(self.job.download_group) logger.info('Finished downloading the parameters') # Finally run the exit step to save nets session.run(self.job.exit_group) logger.info('Finished running the exit group') return epoch def load_blobs_from_checkpoints(self, blob_names, epoch, session): """Loads the necessary blobs from the checkpoints. Checkpoints store the snapshots of the workspace in each node. Sometimes we only need to load a subset of the blobs from the checkpoints. One common scenario is to load only the model blobs from the checkpoints for evaluation purpose. Given the names of the necessary blobs, this function goes over all the checkpoints of all the nodes, but only loads the blobs specified in the blob_names to the current workspace. Args: blob_names: A list of strings. Each string is the name of a blob. epoch: An integer. The checkpoint epoch to load from. session: A Session object to execute the load ops. Raises: ValueError: When the checkpoint manager is invalid. """ if not self.checkpoint_manager: raise ValueError('Checkpoint manager is None') logger.info('Loading checkpoint for epoch {} ...'.format(epoch)) result = self.checkpoint_manager.load_blobs_locally( self.job.nodes_to_checkpoint(), blob_names, epoch, session) self.checkpoint_manager.report_checkpoint_stats('checkpoint_partial_load') return result def save_checkpoints(self, epoch, session): """Triggers operation to save checkpoints This method will trigger the Save ops to serialize and persist the blobs present in the global workspaace. Args: epoch: An integer. The checkpoint epoch-id that we are saving. session: A Session object to execute the save ops. Raises: ValueError: When the checkpoint manager is invalid. """ if not self.checkpoint_manager: raise ValueError('Checkpoint manager is None') try: is_accessible = self.checkpoint_manager.cp_accessible(epoch=None) if is_accessible: logger.info('Saving checkpoints for epoch {}'.format(epoch)) session.run(self.checkpoint_manager.save(epoch)) self.checkpoint_manager.write_checkpoint_metadata(epoch) logger.info('Checkpoints saved') self.checkpoint_manager.report_checkpoint_stats('checkpoint_save') else: logger.warning("Checkpoint files cannot be accessed!") except Exception as ex: logger.warning("Unable to write checkpoint for epoch {}. Error={}". format(epoch, ex)) def epoch_limiter(job, num_epochs): """ Creates a task that will output True when a given number of epochs has finished. """ with job.init_group: init_net = core.Net('epoch_counter_init') counter = init_net.CreateCounter([], init_count=num_epochs - 1) Task(step=init_net) with job.epoch_group: epoch_net = core.Net('epoch_countdown') finished = epoch_net.CountDown(counter) output = Task(step=epoch_net, outputs=finished).outputs()[0] job.add_stop_condition(output)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/checkpoint.py
0.718496
0.315591
checkpoint.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import functools from caffe2.python import brew, rnn_cell class GRUCell(rnn_cell.RNNCell): def __init__( self, input_size, hidden_size, forget_bias, # Currently unused! Values here will be ignored. memory_optimization, drop_states=False, linear_before_reset=False, **kwargs ): super(GRUCell, self).__init__(**kwargs) self.input_size = input_size self.hidden_size = hidden_size self.forget_bias = float(forget_bias) self.memory_optimization = memory_optimization self.drop_states = drop_states self.linear_before_reset = linear_before_reset # Unlike LSTMCell, GRUCell needs the output of one gate to feed into another. # (reset gate -> output_gate) # So, much of the logic to calculate the reset gate output and modified # output gate input is set here, in the graph definition. # The remaining logic lives in in gru_unit_op.{h,cc}. def _apply( self, model, input_t, seq_lengths, states, timestep, extra_inputs=None, ): hidden_t_prev = states[0] # Split input tensors to get inputs for each gate. input_t_reset, input_t_update, input_t_output = model.net.Split( [ input_t, ], [ self.scope('input_t_reset'), self.scope('input_t_update'), self.scope('input_t_output'), ], axis=2, ) # Fully connected layers for reset and update gates. reset_gate_t = brew.fc( model, hidden_t_prev, self.scope('reset_gate_t'), dim_in=self.hidden_size, dim_out=self.hidden_size, axis=2, ) update_gate_t = brew.fc( model, hidden_t_prev, self.scope('update_gate_t'), dim_in=self.hidden_size, dim_out=self.hidden_size, axis=2, ) # Calculating the modified hidden state going into output gate. reset_gate_t = model.net.Sum( [reset_gate_t, input_t_reset], self.scope('reset_gate_t') ) reset_gate_t_sigmoid = model.net.Sigmoid( reset_gate_t, self.scope('reset_gate_t_sigmoid') ) # `self.linear_before_reset = True` matches cudnn semantics if self.linear_before_reset: output_gate_fc = brew.fc( model, hidden_t_prev, self.scope('output_gate_t'), dim_in=self.hidden_size, dim_out=self.hidden_size, axis=2, ) output_gate_t = model.net.Mul( [reset_gate_t_sigmoid, output_gate_fc], self.scope('output_gate_t_mul') ) else: modified_hidden_t_prev = model.net.Mul( [reset_gate_t_sigmoid, hidden_t_prev], self.scope('modified_hidden_t_prev') ) output_gate_t = brew.fc( model, modified_hidden_t_prev, self.scope('output_gate_t'), dim_in=self.hidden_size, dim_out=self.hidden_size, axis=2, ) # Add input contributions to update and output gate. # We already (in-place) added input contributions to the reset gate. update_gate_t = model.net.Sum( [update_gate_t, input_t_update], self.scope('update_gate_t'), ) output_gate_t = model.net.Sum( [output_gate_t, input_t_output], self.scope('output_gate_t_summed'), ) # Join gate outputs and add input contributions gates_t, _gates_t_concat_dims = model.net.Concat( [ reset_gate_t, update_gate_t, output_gate_t, ], [ self.scope('gates_t'), self.scope('_gates_t_concat_dims'), ], axis=2, ) if seq_lengths is not None: inputs = [hidden_t_prev, gates_t, seq_lengths, timestep] else: inputs = [hidden_t_prev, gates_t, timestep] hidden_t = model.net.GRUUnit( inputs, list(self.get_state_names()), forget_bias=self.forget_bias, drop_states=self.drop_states, sequence_lengths=(seq_lengths is not None), ) model.net.AddExternalOutputs(hidden_t) return (hidden_t,) def prepare_input(self, model, input_blob): return brew.fc( model, input_blob, self.scope('i2h'), dim_in=self.input_size, dim_out=3 * self.hidden_size, axis=2, ) def get_state_names(self): return (self.scope('hidden_t'),) def get_output_dim(self): return self.hidden_size GRU = functools.partial(rnn_cell._LSTM, GRUCell)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/gru_cell.py
0.874023
0.309141
gru_cell.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, workspace from caffe2.python.task import Cluster, Task, TaskGroup, WorkspaceType class CompiledRunnable(object): """ Wrapper for compiled runnable returned from session.compile() """ def __init__(self, obj, session_class): self.obj = obj self.session_class = session_class class Session(object): """ Allows to run Nets, ExecutionSteps, Plans, Tasks and TaskGroups. A session can potentially run in multiple nodes concurrently. Example: from core import Net from caffe2.python.task import Task, TaskGroup, WorkspaceType net = Net('test1') net.Add([net.Const(1), net.Const(2)]) net2 = net.Clone() step = core.execution_step('step1', [net2]) with TaskGroup(WorkspaceType.GLOBAL) as init_tg: with Node('node1'): n1setup = net.Net('n1setup') n1msg = n1setup.Const('Hello from node 1.') Task(step=n1setup) with TaskGroup() as private_tg: with Node('node1'): n1 = net.Net('n1') n1.Print(n1msg, 0) Task(step=n1) with Node('node2'): n2 = net.Net('n2') n2.Print(n2.Const('Hello from node 2.'), 0) Task(step=n2) session = LocalSession() session.run(net) session.run(step) session.run(init_tg) session.run(private_tg) Global Workspace: At the beginning of the session, a global workspace is created and kept alive for the duration of the session. Private Workspace: Tasks can be run either directly on the global workspace, or they can instantiate a private child workspace that is released after each run. Blob visibility: Tasks running in different nodes in parallel will always run under different workspaces, so it must be assumed that they won't be able to access each other's blobs. Tasks running on the same node will follow Workspace hierarchy rules: tasks running on separate private workspaces will only be able to share blobs defined on a common parent Workspace. """ _compiled_cache = {} def __init__(self): self._open = True def is_open(self): return self._open @classmethod def compile(cls, runnable, workspace_type=None, setup_net_list=None): if isinstance(runnable, CompiledRunnable): assert cls == runnable.session_class, ( 'Runnable was compiled for different session type. ' + 'Need: %s, got: %s' % ( cls.__name__, runnable.session_class.__name__)) return runnable if runnable in cls._compiled_cache: return cls._compiled_cache[runnable] if isinstance(runnable, TaskGroup): if workspace_type: if runnable.workspace_type(): assert runnable.workspace_type() == workspace_type, \ "Require {} but already have {}".format( workspace_type, runnable.workspace_type()) else: runnable._workspace_type = workspace_type tg = runnable else: if workspace_type is None: workspace_type = WorkspaceType.GLOBAL tg = TaskGroup(workspace_type=workspace_type) if isinstance(runnable, Task): tg.add(runnable) elif isinstance(runnable, core.ExecutionStep): tg.add(Task(step=runnable)) elif isinstance(runnable, core.Plan): # ExecutionSteps in Plan() object is supposed to run sequentially, while # tasks in TaskGroup run in parallel. So if we have multiple # ExecutionSteps in Plan() object, we choose to have a root # ExecutionStep to wrap all ExecutionSteps. assert len(runnable.Steps()) > 0 if len(runnable.Steps()) == 1: tg.add(Task(step=runnable.Steps()[0])) else: # Task takes a list of ExecutionSteps and automatically wrap into # a root ExecutionStep tg.add(Task(step=runnable.Steps())) else: step = core.execution_step('runnable', runnable) tg.add(Task(step=step)) compiled = CompiledRunnable( cls._compile_task_group(tg, setup_net_list), session_class=cls) cls._compiled_cache[runnable] = compiled return compiled def run(self, runnable, workspace_type=None, setup_net_list=None): """Run the given runnable. Args: runnable: Object recognized by the Session. Currently, we support TaskGroup, Task, Plan, ExecutionStep, and Net. workspace_type: A string defined in the WorkspaceType object. setup_net_list: A list of Net objects or a list of NetDef protos. So far this is only used by the DistributedSession, in which we need to pass a list of special nets to setup the master. """ assert self.is_open(), 'Session is closed.' assert runnable is not None, 'Got a none runnable.' self._run_compiled(self.compile(runnable, workspace_type, setup_net_list).obj) def close(self): if self.is_open(): self._do_close() self._open = False def fetch_output(self, output): raise NotImplementedError() def _run_compiled(self, task_group): raise NotImplementedError() @classmethod def _compile_task_group(cls, task_group, setup_net_list=None): return task_group def _do_close(self): pass def __enter__(self): assert self._open, 'Session already closed.' return self def __exit__(self, ex_type, value, traceback): if ex_type is None: self.close() class LocalSession(Session): """ Session that runs in a single node. Tasks are all remapped to run in parallel in the 'local' node. Currently, LocalSession runs all parallel tasks in the same workspace, but this behavior may change in the future. Only tasks pointing to the same logical node are guaranteed to always run in the same workspace. """ def __init__(self, ws=None): Session.__init__(self) self._ws = ws or workspace.C.Workspace.current @classmethod def _compile_task_group(cls, task_group, setup_net_list=None): with Cluster(): task = task_group.to_task() plan = core.Plan('task_group_plan') plan.AddStep(task.get_step()) return (plan, task.output_list(), task.workspace_type) def _run_compiled(self, compiled): plan, output_list, workspace_type = compiled # make sure the output blobs belong to the parent workspace outputs = [] for name in output_list.names(): self._ws.create_blob(str(name)) outputs.append(core.BlobReference(str(name))) output_list.set_values(outputs, _fetch_func=self._fetch_output) task_ws = ( workspace.C.Workspace(self._ws) if workspace_type == WorkspaceType.PRIVATE else self._ws) with workspace.WorkspaceGuard(task_ws): task_ws.run(plan) def _fetch_output(self, output): return self._ws.blobs[str(output)].fetch()
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/session.py
0.587707
0.290503
session.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, scope, workspace, _import_c_extension as C from caffe2.python.dataio import Reader from caffe2.python.dataset import Dataset from caffe2.python.schema import from_column_list import os class DBFileReader(Reader): default_name_suffix = 'db_file_reader' """Reader reads from a DB file. Example usage: db_file_reader = DBFileReader(db_path='/tmp/cache.db', db_type='LevelDB') Args: db_path: str. db_type: str. DB type of file. A db_type is registed by `REGISTER_CAFFE2_DB(<db_type>, <DB Class>)`. name: str or None. Name of DBFileReader. Optional name to prepend to blobs that will store the data. Default to '<db_name>_<default_name_suffix>'. batch_size: int. How many examples are read for each time the read_net is run. loop_over: bool. If True given, will go through examples in random order endlessly. field_names: List[str]. If the schema.field_names() should not in alphabetic order, it must be specified. Otherwise, schema will be automatically restored with schema.field_names() sorted in alphabetic order. """ def __init__( self, db_path, db_type, name=None, batch_size=100, loop_over=False, field_names=None, ): assert db_path is not None, "db_path can't be None." assert db_type in C.registered_dbs(), \ "db_type [{db_type}] is not available. \n" \ "Choose one of these: {registered_dbs}.".format( db_type=db_type, registered_dbs=C.registered_dbs(), ) self.db_path = os.path.expanduser(db_path) self.db_type = db_type self.name = name or '{db_name}_{default_name_suffix}'.format( db_name=self._extract_db_name_from_db_path(), default_name_suffix=self.default_name_suffix, ) self.batch_size = batch_size self.loop_over = loop_over # Before self._init_reader_schema(...), # self.db_path and self.db_type are required to be set. super(DBFileReader, self).__init__(self._init_reader_schema(field_names)) self.ds = Dataset(self._schema, self.name + '_dataset') self.ds_reader = None def _init_name(self, name): return name or self._extract_db_name_from_db_path( ) + '_db_file_reader' def _init_reader_schema(self, field_names=None): """Restore a reader schema from the DB file. If `field_names` given, restore scheme according to it. Overwise, loade blobs from the DB file into the workspace, and restore schema from these blob names. It is also assumed that: 1). Each field of the schema have corresponding blobs stored in the DB file. 2). Each blob loaded from the DB file corresponds to a field of the schema. 3). field_names in the original schema are in alphabetic order, since blob names loaded to the workspace from the DB file will be in alphabetic order. Load a set of blobs from a DB file. From names of these blobs, restore the DB file schema using `from_column_list(...)`. Returns: schema: schema.Struct. Used in Reader.__init__(...). """ if field_names: return from_column_list(field_names) assert os.path.exists(self.db_path), \ 'db_path [{db_path}] does not exist'.format(db_path=self.db_path) with core.NameScope(self.name): # blob_prefix is for avoiding name conflict in workspace blob_prefix = scope.CurrentNameScope() workspace.RunOperatorOnce( core.CreateOperator( 'Load', [], [], absolute_path=True, db=self.db_path, db_type=self.db_type, load_all=True, add_prefix=blob_prefix, ) ) col_names = [ blob_name[len(blob_prefix):] for blob_name in workspace.Blobs() if blob_name.startswith(blob_prefix) ] schema = from_column_list(col_names) return schema def setup_ex(self, init_net, finish_net): """From the Dataset, create a _DatasetReader and setup a init_net. Make sure the _init_field_blobs_as_empty(...) is only called once. Because the underlying NewRecord(...) creats blobs by calling NextScopedBlob(...), so that references to previously-initiated empty blobs will be lost, causing accessibility issue. """ if self.ds_reader: self.ds_reader.setup_ex(init_net, finish_net) else: self._init_field_blobs_as_empty(init_net) self._feed_field_blobs_from_db_file(init_net) self.ds_reader = self.ds.random_reader( init_net, batch_size=self.batch_size, loop_over=self.loop_over, ) self.ds_reader.sort_and_shuffle(init_net) self.ds_reader.computeoffset(init_net) def read(self, read_net): assert self.ds_reader, 'setup_ex must be called first' return self.ds_reader.read(read_net) def _init_field_blobs_as_empty(self, init_net): """Initialize dataset field blobs by creating an empty record""" with core.NameScope(self.name): self.ds.init_empty(init_net) def _feed_field_blobs_from_db_file(self, net): """Load from the DB file at db_path and feed dataset field blobs""" assert os.path.exists(self.db_path), \ 'db_path [{db_path}] does not exist'.format(db_path=self.db_path) net.Load( [], self.ds.get_blobs(), db=self.db_path, db_type=self.db_type, absolute_path=True, source_blob_names=self.ds.field_names(), ) def _extract_db_name_from_db_path(self): """Extract DB name from DB path E.g. given self.db_path=`/tmp/sample.db`, it returns `sample`. Returns: db_name: str. """ return os.path.basename(self.db_path).rsplit('.', 1)[0]
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/db_file_reader.py
0.845337
0.184768
db_file_reader.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.proto import caffe2_pb2 from caffe2.python import workspace, core, utils, rnn_cell, model_helper from caffe2.python import recurrent import argparse import numpy as np import time import logging logging.basicConfig() log = logging.getLogger("lstm_bench") log.setLevel(logging.DEBUG) def generate_data(T, shape, num_labels, fixed_shape): ''' Fill a queue with input data ''' log.info("Generating T={} sequence batches".format(T)) generate_input_init_net = core.Net('generate_input_init') queue = generate_input_init_net.CreateBlobsQueue( [], "inputqueue", num_blobs=1, capacity=T, ) label_queue = generate_input_init_net.CreateBlobsQueue( [], "labelqueue", num_blobs=1, capacity=T, ) workspace.RunNetOnce(generate_input_init_net) generate_input_net = core.Net('generate_input') generate_input_net.EnqueueBlobs([queue, "scratch"], ["scratch"]) generate_input_net.EnqueueBlobs([label_queue, "label_scr"], ["label_scr"]) np.random.seed(2603) entry_counts = [] for t in range(T): if (t % (max(10, T // 10)) == 0): print("Generating data {}/{}".format(t, T)) # Randomize the seqlength random_shape = ( [np.random.randint(1, shape[0])] + shape[1:] if t > 0 and not fixed_shape else shape ) X = np.random.rand(*random_shape).astype(np.float32) batch_size = random_shape[1] L = num_labels * batch_size labels = (np.random.rand(random_shape[0]) * L).astype(np.int32) workspace.FeedBlob("scratch", X) workspace.FeedBlob("label_scr", labels) workspace.RunNetOnce(generate_input_net.Proto()) entry_counts.append(random_shape[0] * random_shape[1]) log.info("Finished data generation") return queue, label_queue, entry_counts def create_model(args, queue, label_queue, input_shape): model = model_helper.ModelHelper(name="LSTM_bench") seq_lengths, target = \ model.net.AddExternalInputs( 'seq_lengths', 'target', ) input_blob = model.net.DequeueBlobs(queue, "input_data") labels = model.net.DequeueBlobs(label_queue, "label") init_blobs = [] if args.implementation in ["own", "static", "static_dag"]: T = None if "static" in args.implementation: assert args.fixed_shape, \ "Random input length is not static RNN compatible" T = args.seq_length print("Using static RNN of size {}".format(T)) for i in range(args.num_layers): hidden_init, cell_init = model.net.AddExternalInputs( "hidden_init_{}".format(i), "cell_init_{}".format(i) ) init_blobs.extend([hidden_init, cell_init]) output, last_hidden, _, last_state = rnn_cell.LSTM( model=model, input_blob=input_blob, seq_lengths=seq_lengths, initial_states=init_blobs, dim_in=args.input_dim, dim_out=[args.hidden_dim] * args.num_layers, scope="lstm1", memory_optimization=args.memory_optimization, forward_only=args.forward_only, drop_states=True, return_last_layer_only=True, static_rnn_unroll_size=T, ) if "dag" in args.implementation: print("Using DAG net type") model.net.Proto().type = 'dag' model.net.Proto().num_workers = 4 elif args.implementation == "cudnn": # We need to feed a placeholder input so that RecurrentInitOp # can infer the dimensions. init_blobs = model.net.AddExternalInputs("hidden_init", "cell_init") model.param_init_net.ConstantFill([], input_blob, shape=input_shape) output, last_hidden, _ = rnn_cell.cudnn_LSTM( model=model, input_blob=input_blob, initial_states=init_blobs, dim_in=args.input_dim, dim_out=args.hidden_dim, scope="cudnnlstm", num_layers=args.num_layers, ) else: assert False, "Unknown implementation" weights = model.net.UniformFill(labels, "weights") softmax, loss = model.net.SoftmaxWithLoss( [model.Flatten(output), labels, weights], ['softmax', 'loss'], ) if not args.forward_only: model.AddGradientOperators([loss]) # carry states over for init_blob in init_blobs: model.net.Copy(last_hidden, init_blob) sz = args.hidden_dim if args.implementation == "cudnn": sz *= args.num_layers workspace.FeedBlob(init_blob, np.zeros( [1, args.batch_size, sz], dtype=np.float32 )) if args.rnn_executor: for op in model.net.Proto().op: if op.type.startswith('RecurrentNetwork'): recurrent.set_rnn_executor_config( op, num_threads=args.rnn_executor_num_threads, max_cuda_streams=args.rnn_executor_max_cuda_streams, ) return model, output def Caffe2LSTM(args): T = args.data_size // args.batch_size input_blob_shape = [args.seq_length, args.batch_size, args.input_dim] queue, label_queue, entry_counts = generate_data(T // args.seq_length, input_blob_shape, args.hidden_dim, args.fixed_shape) workspace.FeedBlob( "seq_lengths", np.array([args.seq_length] * args.batch_size, dtype=np.int32) ) model, output = create_model(args, queue, label_queue, input_blob_shape) workspace.RunNetOnce(model.param_init_net) workspace.CreateNet(model.net) start_time = time.time() num_iters = T // args.seq_length total_iters = 0 # Run the Benchmark log.info("------ Warming up ------") workspace.RunNet(model.net.Proto().name) if (args.gpu): log.info("Memory stats:") stats = utils.GetGPUMemoryUsageStats() log.info("GPU memory:\t{} MB".format(stats['max_total'] / 1024 / 1024)) log.info("------ Starting benchmark ------") start_time = time.time() last_time = time.time() for iteration in range(1, num_iters, args.iters_to_report): iters_once = min(args.iters_to_report, num_iters - iteration) total_iters += iters_once workspace.RunNet(model.net.Proto().name, iters_once) new_time = time.time() log.info( "Iter: {} / {}. Entries Per Second: {}k.".format( iteration, num_iters, np.sum(entry_counts[iteration:iteration + iters_once]) / (new_time - last_time) // 100 / 10, ) ) last_time = new_time log.info("Done. Total EPS excluding 1st iteration: {}k {}".format( np.sum(entry_counts[1:]) / (time.time() - start_time) // 100 / 10, " (with RNN executor)" if args.rnn_executor else "", )) if (args.gpu): log.info("Memory stats:") stats = utils.GetGPUMemoryUsageStats() log.info("GPU memory:\t{} MB".format(stats['max_total'] / 1024 / 1024)) if (stats['max_total'] != stats['total']): log.warning( "Max usage differs from current total usage: {} > {}". format(stats['max_total'], stats['total']) ) log.warning("This means that costly deallocations occurred.") return time.time() - start_time @utils.debug def Benchmark(args): return Caffe2LSTM(args) def GetArgumentParser(): parser = argparse.ArgumentParser(description="LSTM benchmark.") parser.add_argument( "--hidden_dim", type=int, default=800, help="Hidden dimension", ) parser.add_argument( "--input_dim", type=int, default=40, help="Input dimension", ) parser.add_argument( "--batch_size", type=int, default=128, help="The batch size." ) parser.add_argument( "--seq_length", type=int, default=20, help="Max sequence length" ) parser.add_argument( "--data_size", type=int, default=1000000, help="Number of data points to generate" ) parser.add_argument( "--iters_to_report", type=int, default=20, help="Number of iteration to report progress" ) parser.add_argument( "--gpu", action="store_true", help="Run all on GPU", ) parser.add_argument( "--implementation", type=str, default="own", help="'cudnn', 'own', 'static' or 'static_dag'", ) parser.add_argument( "--fixed_shape", action="store_true", help=("Whether to randomize shape of input batches. " "Static RNN requires fixed shape"), ) parser.add_argument( "--memory_optimization", action="store_true", help="Whether to use memory optimized LSTM or not", ) parser.add_argument( "--forward_only", action="store_true", help="Whether to run only forward pass" ) parser.add_argument( "--num_layers", type=int, default=1, help="Number of LSTM layers. All output dimensions are going to be" "of hidden_dim size", ) parser.add_argument( "--rnn_executor", action="store_true", help="Whether to use RNN executor" ) parser.add_argument( "--rnn_executor_num_threads", type=int, default=None, help="Number of threads used by CPU RNN Executor" ) parser.add_argument( "--rnn_executor_max_cuda_streams", type=int, default=None, help="Maximum number of CUDA streams used by RNN executor on GPU" ) return parser if __name__ == '__main__': args, extra_args = GetArgumentParser().parse_known_args() rnn_executor_opt = 1 if args.rnn_executor else 0 workspace.GlobalInit([ 'caffe2', '--caffe2_log_level=0', '--caffe2_print_blob_sizes_at_exit=0', '--caffe2_rnn_executor={}'.format(rnn_executor_opt), '--caffe2_gpu_memory_tracking=1'] + extra_args) device = core.DeviceOption( workspace.GpuDeviceType if args.gpu else caffe2_pb2.CPU, 4) with core.DeviceScope(device): Benchmark(args)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/lstm_benchmark.py
0.718989
0.272962
lstm_benchmark.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import string import argparse import numpy as np from caffe2.python.model_helper import ModelHelper from caffe2.python.predictor import mobile_exporter from caffe2.python import core, workspace, brew, utils def parse_kwarg(kwarg_str): key, value = map(string.strip, kwarg_str.split("=", 1)) try: value = int(value) except ValueError: try: value = float(value) except ValueError: pass return key, value def main(args): # User defined keyword arguments kwargs = {"order": "NCHW"} kwargs.update(dict(args.kwargs)) model = ModelHelper(name=args.benchmark_name) op_type = args.operator # assumes a brew type op name input_name = args.input_name output_name = args.output_name iters = int(args.iters) for i in range(iters): input_blob_name = input_name + (str(i) if i > 0 and args.chain else '') output_blob_name = output_name + str(i + 1) add_op = getattr(brew, op_type) add_op(model, input_blob_name, output_blob_name, **kwargs) if args.chain: input_name, output_name = output_name, input_name workspace.RunNetOnce(model.param_init_net) extra_init_net_ops = [] def make_blob_on_context(blob_name, blob_data, context): if context.upper() != "CPU": blob_name_modified = "{}_CPU".format(blob_name) else: # CPU case is simple blob_name_modified = blob_name fill_op = core.CreateOperator( "GivenTensorFill", [], [blob_name_modified], arg=[ utils.MakeArgument("shape", blob_data.shape), utils.MakeArgument("values", blob_data) ] ) extra_init_net_ops.append(fill_op) # We need to create CPU blobs and add some copy operations in # the init_net if context.upper() == "OPENGL": copy_op = core.CreateOperator("CopyToOpenGL", [blob_name_modified], [blob_name]) extra_init_net_ops.append(copy_op) for unparsed_blob in args.blob: name, unparsed_dims = unparsed_blob.split('=') dims = [int(d) for d in unparsed_dims.split(',')] np_input = np.random.rand(*dims).astype(np.float32) make_blob_on_context(name, np_input, args.context) init_net, predict_net = mobile_exporter.Export( workspace, model.net, model.params ) init_net.op.extend(extra_init_net_ops) # Handle manual rewrite if args.context.upper() == "OPENGL": old_ops = [op for op in predict_net.op] del predict_net.op[:] for op in old_ops: op.type = 'OpenGL{}'.format(op.type) predict_net.op.extend(old_ops) if args.debug: print("init_net:") for op in init_net.op: print(" ", op.type, op.input, "-->", op.output) print("predict_net:") for op in predict_net.op: print(" ", op.type, op.input, "-->", op.output) with open(args.predict_net, 'wb') as f: f.write(predict_net.SerializeToString()) with open(args.init_net, 'wb') as f: f.write(init_net.SerializeToString()) if __name__ == "__main__": parser = argparse.ArgumentParser( description="Utilitity to generate Caffe2 benchmark models.") parser.add_argument("operator", help="Caffe2 operator to benchmark.") parser.add_argument("-b", "--blob", help="Instantiate a blob --blob name=dim1,dim2,dim3", action='append') parser.add_argument("--context", help="Context to run on.", default="CPU") parser.add_argument("--kwargs", help="kwargs to pass to operator.", nargs="*", type=parse_kwarg, default=[]) parser.add_argument("--init_net", help="Output initialization net.", default="init_net.pb") parser.add_argument("--predict_net", help="Output prediction net.", default="predict_net.pb") parser.add_argument("--benchmark_name", help="Name of the benchmark network", default="benchmark") parser.add_argument("--input_name", help="Name of the input blob.", default="data") parser.add_argument("--output_name", help="Name of the output blob.", default="output") parser.add_argument("--iters", help="Number of iterations to run the operator.", default="1") parser.add_argument("-d", "--debug", help="Print debug information.", action='store_true') parser.add_argument("-c", "--chain", help="Chain ops together (create data dependencies)", action='store_true') args = parser.parse_args() main(args)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/benchmark_generator.py
0.624179
0.236968
benchmark_generator.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.proto import caffe2_pb2 from caffe2.python import workspace, core, utils, model_helper import argparse import numpy as np import time import logging logging.basicConfig() log = logging.getLogger("embedding_generation_benchmark") log.setLevel(logging.DEBUG) def generate_data(T, batch_size, max_seq_length): ''' Fill a queue with input data ''' log.info("Generating T={} batches".format(T)) generate_input_init_net = core.Net('generate_input_init') queue = generate_input_init_net.CreateBlobsQueue( [], "inputqueue", num_blobs=1, capacity=T, ) workspace.RunNetOnce(generate_input_init_net) generate_input_net = core.Net('generate_input') generate_input_net.EnqueueBlobs([queue, "scratch"], ["scratch"]) np.random.seed(2603) for t in range(T): if (t % (max(10, T // 10)) == 0): log.info("Generating data {}/{}".format(t, T)) X = np.tile(np.arange(max_seq_length), [batch_size, 1]).transpose() workspace.FeedBlob("scratch", X) workspace.RunNetOnce(generate_input_net.Proto()) log.info("Finished data generation") return queue def generate_embedding_table(vocab_size, embedding_size): log.info("Generating embedding table with dimensions {}" .format([vocab_size, embedding_size])) generate_table_net = core.Net('generate_table') table = generate_table_net.GaussianFill( [], ['embedding_table'], shape=[vocab_size, embedding_size], ) workspace.RunNetOnce(generate_table_net) return table def create_model(args, queue, embedding_table, embedding_size): model = model_helper.ModelHelper(name='embedding_generation_bench') input_blob = model.net.DequeueBlobs(queue, 'input_data') if args.implementation == 'sinusoid': model.net.SinusoidPositionEncoding( [input_blob], ['output'], embedding_size=embedding_size ) else: model.net.Gather( [embedding_table, input_blob], ['output'], ) return model def Caffe2EmbeddingGeneration(args): T = args.data_size // args.batch_size queue = generate_data(T, args.batch_size, args.seq_length) embedding_table = None if args.implementation == 'table': embedding_table = generate_embedding_table( args.seq_length, args.embedding_size, ) model = create_model(args, queue, embedding_table, args.embedding_size) workspace.RunNetOnce(model.param_init_net) workspace.CreateNet(model.net) start_time = time.time() num_iters = T total_iters = 0 # Run the Benchmark log.info("------ Warming up ------") workspace.RunNet(model.net.Proto().name) log.info("------ Starting benchmark ------") start_time = time.time() last_time = time.time() for iteration in range(1, num_iters, args.iters_to_report): iters_once = min(args.iters_to_report, num_iters - iteration) total_iters += iters_once workspace.RunNet(model.net.Proto().name, iters_once) new_time = time.time() log.info( "Iter: {} / {}. Embeddings Generated Per Second: {}k.".format( iteration, num_iters, (iters_once * args.batch_size * args.seq_length) / (new_time - last_time) // 100 / 10, ) ) last_time = new_time total_per_sec = (num_iters - 1) * args.batch_size * args.seq_length total_per_sec = total_per_sec / (time.time() - start_time) // 100 / 10 log.info("Done. Total embeddings generated per second " + "excluding 1st iteration: {}k".format(total_per_sec)) return time.time() - start_time @utils.debug def Benchmark(args): return Caffe2EmbeddingGeneration(args) def GetArgumentParser(): parser = argparse.ArgumentParser( description="Embedding generation benchmark." ) parser.add_argument( "--embedding_size", type=int, default=512, help="Embedding size", ) parser.add_argument( "--batch_size", type=int, default=16, help="The batch size." ) parser.add_argument( "--data_size", type=int, default=10000, help="Number of sequences to generate" ) parser.add_argument( "--seq_length", type=int, default=128, help="Max sequence length" ) parser.add_argument( "--iters_to_report", type=int, default=20, help="Number of iterations to report progress" ) parser.add_argument( "--implementation", type=str, default="sinusoid", help="'table' or 'sinusoid'", ) return parser if __name__ == '__main__': args, extra_args = GetArgumentParser().parse_known_args() workspace.GlobalInit([ 'caffe2', '--caffe2_log_level=0', '--caffe2_print_blob_sizes_at_exit=0'] + extra_args) device = core.DeviceOption(caffe2_pb2.CPU) with core.DeviceScope(device): Benchmark(args)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/embedding_generation_benchmark.py
0.750553
0.207978
embedding_generation_benchmark.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core from caffe2.python.schema import Field, Struct, from_blob_list import numpy as np import time class Reader(object): """ Reader is an abstract class to be implemented in order to provide operations capable of iterating through a dataset or stream of data. A Reader must implement at least one operation, `read`, which adds operations to a net that read the next batch of data. Readers can optionally support the `reset` operation, which is useful when multiple passes over the data are required. """ def __init__(self, schema=None): if schema is not None: assert isinstance(schema, Field) self._schema = schema def schema(self): assert self._schema is not None, 'Schema not provided for this reader.' return self._schema def _set_schema(self, schema): self._schema = schema def setup_ex(self, init_net, finish_net): """Setup nets to run at task initialization and cleanup time. Args: global_init_net: A net invoked at task init time. global_finish_net: A net invoked at task cleanup time. """ pass def read_ex(self, local_init_net, local_finish_net): read_net = core.Net('reader_body') return ([read_net], ) + self.read(read_net) def read_record_ex(self, local_init_net, local_finish_net): nets, should_stop, fields = self.read_ex( local_init_net, local_finish_net) if self._schema: fields = from_blob_list(self._schema, fields) return nets, should_stop, fields def read(self, read_net): """Append operations to read_net that will read a batch from the underlying data soruce. Operations added to `read_net` must be thread safe and atomic, that is, it should be possible to clone `read_net` and run multiple instances of it in parallel. Args: read_net: the net that will be appended with read operations Returns: A tuple (should_stop, fields), with: should_stop: BlobReference pointing to a boolean scalar blob that indicates whether the read operation was succesfull or whether the end of data has been reached. fields: A tuple of BlobReference containing the latest batch of data that was read. """ raise NotImplementedError('Readers must implement `read`.') def reset(self, net): """Append operations to `net` that will reset the reader. This can be used to read the data multiple times. Not all readers support this operation. """ raise NotImplementedError('This reader cannot be resetted.') def read_record(self, read_net): should_stop, fields = self.read(read_net) if self._schema: fields = from_blob_list(self._schema, fields) return should_stop, fields def execution_step(self, reader_net_name=None, external_should_stop=None): """Create an execution step with a net containing read operators. The execution step will contain a `stop_blob` that knows how to stop the execution loop when end of data was reached. E.g.: read_step, fields = reader.execution_step() consume_net = core.Net('consume') consume_net.Print(fields[0], []) p = core.Plan('reader') p.AddStep(read_step.AddNet(consume_net)) core.RunPlan(p) Args: reader_net_name: (optional) the name of the reader_net to be created. The execution step will be named accordingly. Returns: A tuple (read_step, fields), with: read_step: A newly created execution step containing a net with read operations. The step will have `stop_blob` set, in order to stop the loop on end of data. fields: A tuple of BlobReference containing the latest batch of data that was read. """ reader_net = core.Net(reader_net_name or 'reader') should_stop, fields = self.read_record(reader_net) if external_should_stop is not None: should_stop = reader_net.Or([external_should_stop, should_stop]) read_step = core.execution_step( '{}_step'.format(reader_net_name), reader_net, should_stop_blob=should_stop) return (read_step, fields) class Writer(object): """ Writer is an abstract class to be implemented in order to provide operations capable of feeding a data stream or a dataset. A Writer must implement 2 operations: `write`, which adds operations to a net that write the write batch of data, and `commit`, which adds operations to a net in order to indicate that no more data will be written. """ _schema = None def schema(self): return self._schema def write(self, writer_net, fields): """Add operations to `writer_net` that write the next batch of data. Operations added to the net must be thread-safe and unique, that is: multiple writers must be able to write to the dataset in parallel. Args: fields: a tuple of BlobReference containing the batch of data to write. """ raise NotImplementedError('Writers must implement write.') def write_record(self, writer_net, fields): if isinstance(fields, Field): self._schema = fields fields = fields.field_blobs() self.write(writer_net, fields) def setup_ex(self, init_net, finish_net): """Experimental, don't use yet""" self.commit(finish_net) def write_ex(self, fields, local_init_net, local_finish_net, stop_blob): """Experimental extension to the interface. Don't use yet""" write_net = core.Net('write_net') self.write(write_net, fields) return [write_net] def write_record_ex( self, fields, local_init_net, local_finish_net, stop_blob=None): """Experimental extension to the interface. Don't use yet.""" if isinstance(fields, Field): self._schema = fields fields = fields.field_blobs() if stop_blob is None: stop_blob = local_init_net.NextName("dequeue_status") write_nets = self.write_ex( fields, local_init_net, local_finish_net, stop_blob) return (write_nets, stop_blob) def commit(self, finish_net): """Add operations to `finish_net` that signal end of data. This must be implemented by all Writers, but may be no-op for some of them. """ pass class ReaderBuilder(object): """ Allow usage of a reader in distributed fashion. """ def schema(self): raise NotImplementedError() def setup(self, **kwargs): """ Optionally, perform one-time setup before calling new_reader(). Subclass should make sure this function is only called once. """ raise NotImplementedError() def new_reader(self, **kwargs): raise NotImplementedError() class PipedReaderBuilder(ReaderBuilder): """ReaderBuilder that modifies underlying builder by calling `piper` function on each new reader produced, and return the result of the function. This way, it is possible to append data processing pipelines that will be replicated for each reader that gets created. E.g.: PipedReaderBuilder( ReaderBuilder(...), lambda reader: pipe(reader, processor=my_proc)) """ def __init__(self, builder, piper): self._builder = builder self._piper = piper def schema(self): return self._builder.schema() def setup(self, **kwargs): return self._builder.setup(**kwargs) def new_reader(self, **kwargs): # Passing everything down since you could wrap a PipedReaderBuilder in # another PipedReaderBuilder output = self._piper( reader=self._builder.new_reader(**kwargs), **kwargs ) return output if isinstance(output, Reader) else output.reader() class Pipe(object): def __init__(self, schema=None, obj_key=None): self._num_writers = 0 self._num_readers = 0 self._schema = schema self._obj_key = obj_key def schema(self): return self._schema def setup(self, global_init_net): pass def reader(self): raise NotImplementedError() def writer(self): raise NotImplementedError() def num_readers(self): return self._num_readers def num_writers(self): return self._num_writers def _new_writer(self, writer_schema, writer_init_net): if writer_schema is not None and self._schema is None: self._schema = writer_schema self._num_writers += 1 if self._obj_key is not None: writer_init_net.add_attribute(self._obj_key, self) def _new_reader(self, reader_init_net): self._num_readers += 1 if self._obj_key is not None: reader_init_net.add_attribute(self._obj_key, self) class CounterReader(Reader): """ Reader that produces increasing integers. """ def __init__(self): Reader.__init__(self, schema=Struct(('iter', np.int64))) self.counter = None self.should_stop = None def setup_ex(self, global_init_net, global_finish_net): if self.counter is None: self.counter = global_init_net.CreateCounter([], init_count=0) self.should_stop = global_init_net.ConstantFill( [], shape=[], dtype=core.DataType.BOOL, value=False) def read_ex(self, local_init_net, local_finish_net): count_net = core.Net('limited_reader_counter') value = count_net.CountUp([self.counter], 1) return [count_net], self.should_stop, [value] class ReaderWithLimitBase(Reader): """Abstract Reader constrained by certain conditions. Base class for Reader classes which check for certain conditions to stop further processing (e.g. max number of iterations or time limit). Also produces a boolean blob (data_finished) that can be used to see if the reader exausted all input data (true) or stopped for another reason (false). """ def __init__(self, reader): Reader.__init__(self, schema=reader._schema) self.reader = reader self.net = core.Net('reader_with_limit') self._data_finished = self.net.AddExternalInput( self.net.NextName('data_finished')) self.should_stop = None def setup_ex(self, global_init_net, global_finish_net): global_init_net.ConstantFill( [], [self._data_finished], shape=[], value=False, dtype=core.DataType.BOOL) self.reader.setup_ex(global_init_net, global_finish_net) self.setup_limiter(global_init_net, global_finish_net) def read_ex(self, local_init_net, local_finish_net): """Reads from an underlying Reader class, but may stop due to additional constraints. Build and return network(s) to read data from a Reader with additional constraints, depending on which derived class is used. Derived classes implement setup_limited and check_limiter_condition which determine the nature of the constraint imposed on the reader, e.g. iteration limits or time limit. Args: local_init_net: A net invoked at task instance init time (Once per parallel thread). local_finish_net: A net invoked at task instance cleanup time (Once per parallel thread). """ # Check if limiting constraint is met. stop_condition_net = core.Net('limited_reader_condition') should_stop = self.check_limiter_condition(stop_condition_net) # Call original reader. nets, local_data_finished, fields = self.reader.read_ex( local_init_net, local_finish_net) self._set_schema(self.reader._schema) # Check if original reader is done. check_done_net = core.Net('limited_reader_post') # Copy to the same blob as the counter output to trigger reader # stopping - this is ok because execution will check should_stop_blob # after every single operation, so it has already been checked on this # iteration by this point. check_done_net.Copy(local_data_finished, should_stop) # Update externally-accessible flag indicating if reader is done check_done_net.Or([self._data_finished, local_data_finished], [self._data_finished]) return [stop_condition_net] + nets + [check_done_net], should_stop, fields def setup_limiter(self, global_init_net, global_finish_net): """Configure task level init/cleanup nets required to implement limit condition. Must be implemented by subclass. Args: global_init_net: A net invoked at task init time. global_finish_net: A net invoked at task cleanup time. """ raise NotImplementedError("Subclass must implement `setup_limiter`") def check_limiter_condition(self, stop_condition_net): """Configure a net that is invoked between reading batches to see if limit condition is met. Must be implemented by subclass. Args: stop_condition_net: A net invoked to evaluate an early termination condition. """ raise NotImplementedError("Subclass must implement `check_limiter_condition") def data_finished(self): """ Return a blob that can be checked after the end of the reading task, which will contain a scalar float indicating whether the underlying reader has been exhausted (True) or whether we stopped because reached the limit of iterations (False). """ return self._data_finished class ReaderWithLimit(ReaderWithLimitBase): """Reader that stops after `num_iter` batches. If `num_iter` <= 0 or is None, reverts to an unconstrained reader that exports a boolean blob indicating that the reader has exhausted the data steam. """ def __init__(self, reader, num_iter=1): """Class initializer. Args: reader: The underlying reader object doing the actual read. num_iter: Number of batches to read. If `None`, the class reverts to a normal reader except that it also produces a data_finished blob as a side effect to indicate whether the input stream is exhausted. """ super(ReaderWithLimit, self).__init__(reader) self.counter = None self.num_iter = num_iter if self.num_iter is not None: self.counter = self.net.AddExternalInput( self.net.NextName('counter')) def setup_limiter(self, global_init_net, global_finish_net): if self.counter: global_init_net.CreateCounter( [], [self.counter], init_count=int(self.num_iter)) def check_limiter_condition(self, stop_condition_net): if self.counter: return stop_condition_net.CountDown([self.counter], 1) else: return stop_condition_net.ConstantFill( [], 1, shape=[], value=False, dtype=core.DataType.BOOL) def CountUntil(num_iter): return ReaderWithLimit(CounterReader(), num_iter) class ReaderWithTimeLimit(ReaderWithLimitBase): """Reader that stops after `duration` seconds. If `duration` <= 0 or is None, reverts to an unconstrained reader that exports a boolean blob indicating that the reader has exhausted the data steam. """ def __init__(self, reader, duration=0): """Class initializer. Args: reader: The underlying reader object doing the actual read. duration: Number of seconds to read. If un-specified, None, or <= 0, the class reverts to a normal reader except that it also produces a data_finished blob as a side effect to indicate whether the input stream is exhausted. """ super(ReaderWithTimeLimit, self).__init__(reader) self.timer = None self.duration = duration self.duration_ns_blob = None def setup_limiter(self, global_init_net, global_finish_net): if self.duration is not None and self.duration > 0: duration_ns = int(self.duration * (10**9)) self.timer = global_init_net.TimerBegin( [], counter_name='epoch_timer') start_time = global_init_net.TimerGet(self.timer) self.duration_ns_blob = global_init_net.ConstantFill( [start_time], value=duration_ns) global_finish_net.TimerEnd([self.timer], []) def check_limiter_condition(self, stop_condition_net): if self.duration: time_elapsed = stop_condition_net.TimerGet(self.timer) return stop_condition_net.GE( [time_elapsed, self.duration_ns_blob], str(self.should_stop)) else: return stop_condition_net.ConstantFill( [], 1, shape=[], value=False, dtype=core.DataType.BOOL ) class ReaderWithDelay(Reader): """Test reader class that inserts a delay between reading batches.""" def __init__(self, reader, delay): Reader.__init__(self, schema=reader._schema) self.reader = reader self.delay = delay def setup_ex(self, global_init_net, global_finish_net): self.reader.setup_ex(global_init_net, global_finish_net) def read_ex(self, local_init_net, local_finish_net): read_net = core.Net("reader_body") def sleep_op(*args, **argd): time.sleep(self.delay) read_net.Python(sleep_op)([], []) return ([read_net],) + self.reader.read(read_net) class CompositeReader(Reader): """ Base class for a reader that wrap multiple readers, e.g., reading from multiple sources simultaneously. """ def __init__(self, names, readers): """ Args: names: list[str] names of readers; used as schema keys readers: list[Reader] Reader instances, must have schema """ assert len(names) == len(readers) super(CompositeReader, self).__init__(schema=Struct(*[ (name, reader.schema()) for name, reader in zip(names, readers) ])) self._names = names self._readers = readers def setup_ex(self, init_net, finish_net): for reader in self._readers: reader.setup_ex(init_net, finish_net) def read_ex(self, local_init_net, local_finish_net): """ Stops when one of the reader finished """ # First, instantiate all the reader nets fields = [] stop_blobs = [] all_sub_read_nets = [] for name, reader in zip(self._names, self._readers): sub_read_nets, should_stop, record = reader.read_record_ex( local_init_net, local_finish_net) stop_blobs.append(should_stop) all_sub_read_nets.append(sub_read_nets) fields.extend(record.field_blobs()) read_nets = [] # Use the stop blob of the last reader as stop blob of composite reader. local_should_stop = stop_blobs[-1] for name, sub_read_nets, stop_blob in zip(self._names, all_sub_read_nets, stop_blobs): read_nets.extend(sub_read_nets) if stop_blob == local_should_stop: # Skip adding stop net because Or([A, A], A) doesn't pass operator # schema check continue stop_net = core.Net("{}_stop".format(name)) stop_net.Or([local_should_stop, stop_blob], local_should_stop) read_nets.append(stop_net) return read_nets, local_should_stop, fields def reset(self, net): for reader in self._readers: reader.reset(net) class CompositeReaderBuilder(ReaderBuilder): """ A reader builder for CompositeReader """ def __init__(self, names, reader_builders): """ Args: names: list[str] names of readers; used as schema keys reader_builders: list[ReaderBuilder] ReaderBuilder instances; must have schema """ super(CompositeReaderBuilder, self).__init__() self._names = names self._reader_builders = reader_builders self._schema = Struct(*[ (name, reader_builder.schema()) for name, reader_builder in zip(names, reader_builders) ]) def schema(self): return self._schema def setup(self, **kwargs): data_finished_blobs = {} # limiter is stateful; it can only be used once. Since # CompositeReader stops when one of the reader stops, # this is fine. if "limiter" in kwargs: limiter = kwargs.pop("limiter") else: limiter = None for i, reader_builder in enumerate(self._reader_builders): if i == len(self._reader_builders) - 1 and limiter is not None: # The limiter must be applied to the last reader so that the # batch counter is incremented only if every reader has data kwargs["limiter"] = limiter sub_reader_data_finished_blobs = reader_builder.setup(**kwargs) overlapping_keys = set(data_finished_blobs.keys()) & set(sub_reader_data_finished_blobs.keys()) overlapping_values = set(data_finished_blobs.values()) & set(sub_reader_data_finished_blobs.values()) assert overlapping_keys == set(), "Overlapping keys: {}".format(overlapping_keys) assert overlapping_values == set(), "Overlapping values: {}".format(overlapping_values) data_finished_blobs.update(sub_reader_data_finished_blobs) return data_finished_blobs def new_reader(self, **kwargs): readers = [] for reader_builder in self._reader_builders: reader = reader_builder.new_reader(**kwargs) if isinstance(reader, Reader): pass elif hasattr(reader, 'reader'): reader = reader.reader() else: raise ValueError('reader must be an instance of Reader or Pipe') readers.append(reader) multi_reader = CompositeReader(self._names, readers) assert multi_reader.schema() == self._schema return multi_reader
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/dataio.py
0.916687
0.420659
dataio.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals ''' This module provides a python-land multithreaded data input mechanism for Caffe2 nets. Basic usage is as follows: coordinator = data_workers.init_data_input_workers( net, ["data", "label"], my_fetch_fun, batch_size=32, input_source_name="train", dont_rebatch=False ) ... coordinator.start() First argument is the Caffe2 net (or model helper), and second argument is list of input blobs that are to be fed. Argument 'input_source_name' is used to distinguish different sources of data, such as train or test data. This is to ensure the data does not get mixed up, although two nets would share blobs. To do the actual data loading, one defines a "fetcher function" that has call signature my_fetch_fun(worker_id, batch_size) Optionally, one can define a "init function" that is called once before threads start, and has call signature: my_init_fun(data_coordinator, global_coordinator) If dont_rebatch is set to True, the data input is not batched into equal sized chunks but data directly provided by fetchers is used. 'batch_columns' can be used to specify which dimension is the batch dimension, for each of the inputs. Default is 0 for all iputs. 'timeout' is the timeout in seconds after which if no data is available, the net will fail (default 600s = 10 mins). This function returns a list of numpy arrays corresponding to the different input blobs. In the example above, it would return two arrays, one for the data blob and another for the labels. These arrays can have arbitrary number of elements (i.e they do not need to match the batch size). The batch size is provided for the function as a hint only. For example, fetcher function could download images from a remote service or load random images from a directory on a file system. For a dummy example, see the data_workers_test unit test. Note that for data_parallel_models, init_data_input_workers will be called for each GPU. Note that the 'coordinator' returned by the function is same each time. ''' try: import Queue except ImportError: # Py3 import queue as Queue from itertools import chain import logging import threading import numpy as np import time from caffe2.python import workspace, core, scope, utils from caffe2.proto import caffe2_pb2 from caffe2.python.parallel_workers import Metrics, State, \ WorkerCoordinator, GlobalWorkerCoordinator, Worker, run_worker log = logging.getLogger("data_workers") log.setLevel(logging.INFO) LOG_INT_SECS = 60 def get_worker_ids(num_workers): return list(range(0, num_workers)) def init_data_input_workers( net, input_blob_names, fetch_fun, batch_size, num_worker_threads=2, input_source_name="train", max_buffered_batches=800, init_fun=None, external_loggers=None, dont_rebatch=False, batch_columns=None, timeout=600 ): global global_coordinator device_option = scope.CurrentDeviceScope() if (device_option is None): device_option = caffe2_pb2.DeviceOption(device_type=caffe2_pb2.CPU) metrics = Metrics(external_loggers) batch_feeder = BatchFeeder( net, input_blob_names, batch_size, device_option, scope.CurrentNameScope(), input_source_name, global_coordinator.get_queue(input_source_name, max_buffered_batches), metrics, dont_rebatch, batch_columns, timeout=timeout ) # Launch fetch worker threads worker_ids = [ global_coordinator.get_new_worker_id() for i in range(num_worker_threads) ] # Create coordinator object coordinator = WorkerCoordinator( input_source_name, worker_ids, init_fun, batch_feeder) workers = [ threading.Thread( target=run_worker, name="data_workers fetcher id {}".format(worker_id), args=[coordinator, DataWorker(coordinator, worker_id, fetch_fun, metrics, batch_size, batch_feeder)], ) for worker_id in worker_ids ] workers.append(threading.Thread( target=enqueuer, name="Enqueuer {} {}".format(input_source_name, scope.CurrentNameScope()), args=[coordinator, batch_feeder])) coordinator._workers = workers global_coordinator.add(coordinator) return global_coordinator class BatchFeeder(State): def __init__(self, net, input_blob_names, batch_size, device_option, namescope, input_source_name, queue, metrics, dont_rebatch, batch_columns, timeout=600): self._counter = 0 self._input_blob_names = input_blob_names self._batch_size = batch_size self._internal_queue = queue self._queues = [] self._device_option = device_option self._namescope = namescope self._timeout = timeout self._input_source_name = input_source_name self._c2_queue_capacity = 4 self._create_caffe2_queues(net) self._create_caffe2_ops(net) self._inputs = 0 self._prev_seconds = 0 self._last_warning = time.time() self._dont_rebatch = dont_rebatch self._init_scratch() self._metrics = metrics if batch_columns is None: batch_columns = [0 for _ in input_blob_names] self._batch_columns = batch_columns def start(self): self._inputs = 0 self._prev_seconds = time.time() def stop(self): try: for q in self._queues: workspace.RunOperatorOnce( core.CreateOperator("CloseBlobsQueue", [q], []) ) finally: self._log_inputs_per_interval(0, force=True) def cleanup(self): utils.ResetBlobs(self._scratch_blob.values()) utils.ResetBlobs(self._scratch_status.values()) def _get(self, data_input_coordinator): start_time = time.time() last_warning = time.time() while data_input_coordinator.is_active(): try: return self._internal_queue.get(block=True, timeout=0.5) except Queue.Empty: if time.time() - last_warning > 10.0: log.warning("** Data input is slow: (still) no data in {} secs.".format( time.time() - start_time)) last_warning = time.time() continue return None def _validate_chunk(self, chunk): if chunk is None: log.warning("Fetcher function returned None") return False assert len(chunk) == len(self._input_blob_names), \ "Expecting data blob for each input" for d in chunk: assert isinstance(d, np.ndarray), \ "Fetcher function must return a numpy array" if not self._dont_rebatch: j = 1 for d in chunk[1:]: assert d.shape[self._batch_columns[j]] == \ chunk[0].shape[self._batch_columns[0]], \ "Each returned input must have equal number of samples" j += 1 if len(chunk) == 0: log.warning("Worker provided zero length input") return False return True def put(self, chunk, data_input_coordinator): if not self._validate_chunk(chunk): return while data_input_coordinator.is_active(): try: qsize = self._internal_queue.qsize() if qsize < 2 and (time.time() - self._last_warning) > LOG_INT_SECS: log.warning("Warning, data loading lagging behind: " + "queue size={}, name={}".format(qsize, self._input_source_name)) self._last_warning = time.time() self._counter += 1 self._internal_queue.put(chunk, block=True, timeout=0.5) self._log_inputs_per_interval(chunk[0].shape[0]) return except Queue.Full: log.debug("Queue full: stalling fetchers...") continue def _enqueue_batch_direct(self, data_input_coordinator): data = self._get(data_input_coordinator) if data is None: return if data_input_coordinator.is_active(): for b, q, c in zip(self._input_blob_names, self._queues, data): self._enqueue(b, q, c) def _enqueue_batch(self, data_input_coordinator): ''' This pulls data from the python-side queue and collects them into batch-sized pieces, unless dont_rebatch is set to true. ''' if self._dont_rebatch: self._enqueue_batch_direct(data_input_coordinator) return cur_batch = [np.array([]) for d in self._input_blob_names] first_batch_col = self._batch_columns[0] # Collect data until we have a full batch size while ( cur_batch[0].shape[0] == 0 or cur_batch[0].shape[first_batch_col] < self._batch_size ) and data_input_coordinator.is_active(): chunk = self._get(data_input_coordinator) if chunk is None: continue for j, chunk_elem in enumerate(chunk): if cur_batch[j].shape[0] == 0: cur_batch[j] = chunk_elem.copy() else: cur_batch[j] = np.append( cur_batch[j], chunk_elem, axis=self._batch_columns[j] ) start_time = time.time() try: # Return data over the batch size back to queue if cur_batch[0].shape[0] > 0 and cur_batch[0].shape[ first_batch_col ] > self._batch_size: leftover = [] trimmed_batch = [] for j, b in enumerate(cur_batch): [c, l] = np.split( b, [self._batch_size], axis=self._batch_columns[j] ) leftover.append(l) trimmed_batch.append(c) cur_batch = trimmed_batch try: self._internal_queue.put(leftover, block=False) except Queue.Full: pass assert cur_batch[0].shape[first_batch_col] == self._batch_size if data_input_coordinator.is_active(): for b, q, c in zip( self._input_blob_names, self._queues, cur_batch ): self._enqueue(b, q, c) finally: self._metrics.put_metric('enqueue_time', time.time() - start_time) def _init_scratch(self): self._scratch_blob = {} self._scratch_status = {} for blob_name in self._input_blob_names: scratch_name = self._namescope + blob_name + \ "_scratch_" + self._input_source_name self._scratch_blob[blob_name] = core.BlobReference(scratch_name) self._scratch_status[blob_name] = core.BlobReference( scratch_name + "_status" ) # Feed empty arrays to the scratch blobs here, so that there won't be # race conditions when calling FeedBlob (which calls wworkspace # CreateBlob()) from enqueue threads for b in chain( self._scratch_blob.values(), self._scratch_status.values() ): workspace.FeedBlob( b, np.array([]).astype(np.float32), device_option=self._device_option, ) def _enqueue(self, blob_name, queue, data_arr): ''' Enqueue the correctly sized batch arrays to Caffe2's queue. ''' workspace.FeedBlob( self._scratch_blob[blob_name], data_arr, device_option=self._device_option ) op = core.CreateOperator( "SafeEnqueueBlobs", [queue, self._scratch_blob[blob_name]], [self._scratch_blob[blob_name], self._scratch_status[blob_name]], device_option=self._device_option ) workspace.RunOperatorOnce(op) def _create_caffe2_queues(self, net): ''' Creates queues on caffe2 side ''' def create_queue(queue_name, num_blobs, capacity): workspace.RunOperatorOnce( core.CreateOperator( "CreateBlobsQueue", [], [queue_name], num_blobs=1, capacity=capacity)) return core.ScopedBlobReference(queue_name) for blob_name in self._input_blob_names: qname = blob_name + "_c2queue" + "_" + self._input_source_name q = create_queue( qname, num_blobs=1, capacity=self._c2_queue_capacity ) self._queues.append(q) def _create_caffe2_ops(self, net): ''' Creates dequeue-ops on caffe2 side ''' for q, blob_name in zip(self._queues, self._input_blob_names): # Add operator to the Caffe2 network to dequeue net.DequeueBlobs(q, blob_name, timeout_secs=float(self._timeout)) def _log_inputs_per_interval(self, inputs, force=False): self._inputs += inputs current_seconds = time.time() delta_seconds = current_seconds - self._prev_seconds if delta_seconds >= LOG_INT_SECS or force: inputs_per_sec = int(self._inputs / delta_seconds) qsize = self._internal_queue.qsize() log.info("{}/{}: {} inputs/sec".format( self._input_source_name, self._namescope, inputs_per_sec, )) log.info("-- queue: {} batches".format(qsize)) # log and reset perf metrics self._metrics.put_metric( 'inputs_per_sec', inputs_per_sec, False) self._metrics.put_metric('queue_size', qsize, False) self._metrics.put_metric( 'time_elapsed', delta_seconds, False) self._metrics.log_metrics() self._metrics.reset_metrics() self._inputs = 0 self._prev_seconds = current_seconds class GlobalCoordinator(GlobalWorkerCoordinator): def __init__(self): GlobalWorkerCoordinator.__init__(self) self._queues = {} def get_queue(self, queue_name, max_buffered_batches): assert isinstance(max_buffered_batches, int) if queue_name not in self._queues: self._queues[queue_name] = Queue.Queue(maxsize=max_buffered_batches) return self._queues[queue_name] def reset_data_input(self, namescope, name, net, batch_size): log.info("Reset data input {}, batch size {}: ".format(name, batch_size)) for c in self._coordinators: if c._worker_name == name and c._state._namescope == namescope: c._state._batch_size = batch_size c._state._create_caffe2_ops(net) class DataWorker(Worker): def __init__( self, coordinator, worker_id, worker_fun, metrics, batch_size, batch_feeder ): Worker.__init__(self, coordinator, worker_id, worker_fun=worker_fun, metrics=metrics) self._batch_size = batch_size self._batch_feeder = batch_feeder def run(self): input_data = self._worker_fun(self._worker_id, self._batch_size) self._batch_feeder.put(input_data, self._coordinator) def finish(self): self._metrics.put_metric( 'fetcher_time', time.time() - self._start_time) global_coordinator = GlobalCoordinator() def enqueuer(coordinator, batch_feeder): while coordinator.is_active(): batch_feeder._enqueue_batch(coordinator)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/data_workers.py
0.747155
0.414129
data_workers.py
pypi
import numpy as np import copy from caffe2.python import workspace from caffe2.python.core import InferOpBlobDevicesAsDict from future.utils import viewitems class DeviceChecker(object): """A device checker in Python to check consistency across multiple devices. This is not the most efficient way to check devices, as the Python interface will involve a lot of copies back and forth operations. Use at your own risk. """ def __init__(self, threshold, device_options): self._threshold = threshold self._device_options = device_options def CheckSimple(self, op, inputs, outputs_to_check, input_device_options=None): """Checks the operator with different device implementations. Inputs: op: the operator to be checked. inputs: the input data in numpy arrays. outputs_to_check: the outputs to check between devices. input_device_options: a mapping from input name to a device to use (instead of self._device_options) Outputs: boolean: True if it passes, False if it does not pass. """ op = copy.deepcopy(op) # Entering the checker workspace old_ws_name = workspace.CurrentWorkspace() results = [] workspace.SwitchWorkspace("_device_check_", True) for i, device_option in enumerate(self._device_options): op.device_option.CopyFrom(device_option) _input_device_options = input_device_options or \ InferOpBlobDevicesAsDict(op)[0] print(_input_device_options) for i, arr in enumerate(inputs): workspace.FeedBlob( op.input[i], np.array(arr), _input_device_options.get(op.input[i], device_option) ) workspace.RunOperatorOnce(op) results.append( [workspace.FetchBlob(op.output[idx]) for idx in outputs_to_check]) # Everything is done, reset the workspace. workspace.ResetWorkspace() # After running on all devices, check correctness success = True for i in range(1, len(self._device_options)): for j in range(len(outputs_to_check)): x = results[i][j] y = results[0][j] if not np.allclose(x, y, atol=self._threshold, rtol=self._threshold): print('Failure in checking device option {}' ' and output {}. The outputs are:' .format(i, op.output[outputs_to_check[j]])) print(x.flatten()) print(y.flatten()) print(np.max(np.abs(x - y))) success = False # else: # print ('Passed device pair (0, %d), %s %s' % # (i, outputs_to_check[j], y.shape)) workspace.SwitchWorkspace(old_ws_name) return success def CheckNet(self, net, inputs=None, blobs_to_check=None, ignore=None): """Checks a network by inspecting all of its intermediate results, and see if things match. """ if inputs is None: inputs = {} if ignore is None: ignore = set() old_ws_name = workspace.CurrentWorkspace() results = [] if blobs_to_check is None: blobs_to_check = sum([list(op.output) for op in net.op], []) blobs_to_check = [b for b in blobs_to_check if b not in ignore] workspace.SwitchWorkspace("_device_check_", True) for device_option in self._device_options: for name, arr in viewitems(inputs): # print 'feeding', name workspace.FeedBlob(name, arr, device_option) for op in net.op: op.device_option.CopyFrom(device_option) workspace.RunNetOnce(net) results.append( [workspace.FetchBlob(name) for name in blobs_to_check] ) # After running on all devices, check correctness success = True for i in range(1, len(results)): for j in range(len(blobs_to_check)): x = results[i][j] y = results[0][j] if not np.allclose(x, y, atol=self._threshold, rtol=self._threshold): print('Failure in checking device option {}' ' and output {}. The outputs are:' .format(i, blobs_to_check[j])) print(x.flatten()) print(y.flatten()) print(np.max(np.abs(x - y))) success = False # else: # print ('Passed device pair (%d, %d), %s %s: %s' % # (i, j, blobs_to_check[j], y.shape, # str(y.flatten()))) workspace.SwitchWorkspace(old_ws_name) return success
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/device_checker.py
0.441312
0.434341
device_checker.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core from caffe2.python.dataio import Reader, Writer from caffe2.python.schema import ( Struct, Field, from_column_list) class _QueueReader(Reader): def __init__(self, blobs_queue, schema, name=None): """Don't call this directly. Instead, use dataset.reader()""" super(_QueueReader, self).__init__(schema) self.blobs_queue = blobs_queue self.name = name def read(self, read_net): with core.NameScope(read_net.NextName(self.name)): status = read_net.NextName() fields = read_net.SafeDequeueBlobs( self.blobs_queue, self._schema.field_names() + [status]) return (fields[-1], fields[:-1]) class _QueueWriter(Writer): def __init__(self, blobs_queue, schema): self.blobs_queue = blobs_queue self.schema = schema def write(self, writer_net, fields): if isinstance(fields, Field): fields = fields.field_blobs() writer_net.CheckDatasetConsistency( fields, [], fields=self.schema.field_names()) status = writer_net.NextName() writer_net.SafeEnqueueBlobs( [self.blobs_queue] + fields, fields + [status]) return status class RecordQueue(object): """ The class is used to feed data with some process from a reader into a queue and provider a reader interface for data fetching from the queue. """ def __init__(self, fields, name=None, capacity=1, enforce_unique_name=False, num_threads=1): assert isinstance(fields, list) or isinstance(fields, Struct), ( 'fields must be either a Struct or a list of raw field names.') if isinstance(fields, list): fields = from_column_list(fields) self.schema = fields self.name = name or 'queue' self.num_threads = num_threads num_blobs = len(self.schema.field_names()) init_net = core.Net(self.name + '/init_net') self.blobs_queue = init_net.CreateBlobsQueue( [], 1, capacity=capacity, num_blobs=num_blobs, enforce_unique_name=enforce_unique_name) core.workspace.RunNetOnce(init_net) self.writer = _QueueWriter(self.blobs_queue, self.schema) reader_name = self.name + '_reader' self.reader = _QueueReader(self.blobs_queue, self.schema, reader_name) exit_net = core.Net(self.name + '/exit_net') exit_net.CloseBlobsQueue(self.blobs_queue, 0) self.exit_step = core.execution_step( '{}_close_step'.format(str(exit_net)), exit_net) def build(self, reader, process=None): """ Build the producer_step to feed data from reader into the queue, and return the reader interface. Inputs: reader: read data which will be stored in the queue. process: preprocess data before enqueue. Outputs: reader: reader to fetch the data from the queue. producer_step: the step insert the data into the queue. Should be run with comsume_step together. exit_step: the step to close queue schema: the schema for the reader. """ producer_steps = [] for i in range(self.num_threads): name = 'reader_' + str(i) net_reader = core.Net(name) should_stop, fields = reader.read_record(net_reader) step_read = core.execution_step(name, net_reader) name = 'queue_writer' + str(i) net_prod = core.Net(name) field_blobs = fields.field_blobs() if process: field_blobs = process(net_prod, fields).field_blobs() self.writer.write(net_prod, field_blobs) step_prod = core.execution_step(name, net_prod) step = core.execution_step( 'producer_' + str(i), [step_read, step_prod], should_stop_blob=should_stop) producer_steps.append(step) producer_step = core.execution_step( 'producers', producer_steps, concurrent_substeps=True) return self.reader, producer_step, self.exit_step, self.schema
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/record_queue.py
0.627267
0.169337
record_queue.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import brew class AttentionType: Regular, Recurrent, Dot, SoftCoverage = tuple(range(4)) def s(scope, name): # We have to manually scope due to our internal/external blob # relationships. return "{}/{}".format(str(scope), str(name)) # c_i = \sum_j w_{ij}\textbf{s}_j def _calc_weighted_context( model, encoder_outputs_transposed, encoder_output_dim, attention_weights_3d, scope, ): # [batch_size, encoder_output_dim, 1] attention_weighted_encoder_context = brew.batch_mat_mul( model, [encoder_outputs_transposed, attention_weights_3d], s(scope, 'attention_weighted_encoder_context'), ) # [batch_size, encoder_output_dim] attention_weighted_encoder_context, _ = model.net.Reshape( attention_weighted_encoder_context, [ attention_weighted_encoder_context, s(scope, 'attention_weighted_encoder_context_old_shape'), ], shape=[1, -1, encoder_output_dim], ) return attention_weighted_encoder_context # Calculate a softmax over the passed in attention energy logits def _calc_attention_weights( model, attention_logits_transposed, scope, encoder_lengths=None, ): if encoder_lengths is not None: attention_logits_transposed = model.net.SequenceMask( [attention_logits_transposed, encoder_lengths], ['masked_attention_logits'], mode='sequence', ) # [batch_size, encoder_length, 1] attention_weights_3d = brew.softmax( model, attention_logits_transposed, s(scope, 'attention_weights_3d'), engine='CUDNN', axis=1, ) return attention_weights_3d # e_{ij} = \textbf{v}^T tanh \alpha(\textbf{h}_{i-1}, \textbf{s}_j) def _calc_attention_logits_from_sum_match( model, decoder_hidden_encoder_outputs_sum, encoder_output_dim, scope, ): # [encoder_length, batch_size, encoder_output_dim] decoder_hidden_encoder_outputs_sum = model.net.Tanh( decoder_hidden_encoder_outputs_sum, decoder_hidden_encoder_outputs_sum, ) # [encoder_length, batch_size, 1] attention_logits = brew.fc( model, decoder_hidden_encoder_outputs_sum, s(scope, 'attention_logits'), dim_in=encoder_output_dim, dim_out=1, axis=2, freeze_bias=True, ) # [batch_size, encoder_length, 1] attention_logits_transposed = brew.transpose( model, attention_logits, s(scope, 'attention_logits_transposed'), axes=[1, 0, 2], ) return attention_logits_transposed # \textbf{W}^\alpha used in the context of \alpha_{sum}(a,b) def _apply_fc_weight_for_sum_match( model, input, dim_in, dim_out, scope, name, ): output = brew.fc( model, input, s(scope, name), dim_in=dim_in, dim_out=dim_out, axis=2, ) output = model.net.Squeeze( output, output, dims=[0], ) return output # Implement RecAtt due to section 4.1 in http://arxiv.org/abs/1601.03317 def apply_recurrent_attention( model, encoder_output_dim, encoder_outputs_transposed, weighted_encoder_outputs, decoder_hidden_state_t, decoder_hidden_state_dim, attention_weighted_encoder_context_t_prev, scope, encoder_lengths=None, ): weighted_prev_attention_context = _apply_fc_weight_for_sum_match( model=model, input=attention_weighted_encoder_context_t_prev, dim_in=encoder_output_dim, dim_out=encoder_output_dim, scope=scope, name='weighted_prev_attention_context', ) weighted_decoder_hidden_state = _apply_fc_weight_for_sum_match( model=model, input=decoder_hidden_state_t, dim_in=decoder_hidden_state_dim, dim_out=encoder_output_dim, scope=scope, name='weighted_decoder_hidden_state', ) # [1, batch_size, encoder_output_dim] decoder_hidden_encoder_outputs_sum_tmp = model.net.Add( [ weighted_prev_attention_context, weighted_decoder_hidden_state, ], s(scope, 'decoder_hidden_encoder_outputs_sum_tmp'), ) # [encoder_length, batch_size, encoder_output_dim] decoder_hidden_encoder_outputs_sum = model.net.Add( [ weighted_encoder_outputs, decoder_hidden_encoder_outputs_sum_tmp, ], s(scope, 'decoder_hidden_encoder_outputs_sum'), broadcast=1, ) attention_logits_transposed = _calc_attention_logits_from_sum_match( model=model, decoder_hidden_encoder_outputs_sum=decoder_hidden_encoder_outputs_sum, encoder_output_dim=encoder_output_dim, scope=scope, ) # [batch_size, encoder_length, 1] attention_weights_3d = _calc_attention_weights( model=model, attention_logits_transposed=attention_logits_transposed, scope=scope, encoder_lengths=encoder_lengths, ) # [batch_size, encoder_output_dim, 1] attention_weighted_encoder_context = _calc_weighted_context( model=model, encoder_outputs_transposed=encoder_outputs_transposed, encoder_output_dim=encoder_output_dim, attention_weights_3d=attention_weights_3d, scope=scope, ) return attention_weighted_encoder_context, attention_weights_3d, [ decoder_hidden_encoder_outputs_sum, ] def apply_regular_attention( model, encoder_output_dim, encoder_outputs_transposed, weighted_encoder_outputs, decoder_hidden_state_t, decoder_hidden_state_dim, scope, encoder_lengths=None, ): weighted_decoder_hidden_state = _apply_fc_weight_for_sum_match( model=model, input=decoder_hidden_state_t, dim_in=decoder_hidden_state_dim, dim_out=encoder_output_dim, scope=scope, name='weighted_decoder_hidden_state', ) # [encoder_length, batch_size, encoder_output_dim] decoder_hidden_encoder_outputs_sum = model.net.Add( [weighted_encoder_outputs, weighted_decoder_hidden_state], s(scope, 'decoder_hidden_encoder_outputs_sum'), broadcast=1, use_grad_hack=1, ) attention_logits_transposed = _calc_attention_logits_from_sum_match( model=model, decoder_hidden_encoder_outputs_sum=decoder_hidden_encoder_outputs_sum, encoder_output_dim=encoder_output_dim, scope=scope, ) # [batch_size, encoder_length, 1] attention_weights_3d = _calc_attention_weights( model=model, attention_logits_transposed=attention_logits_transposed, scope=scope, encoder_lengths=encoder_lengths, ) # [batch_size, encoder_output_dim, 1] attention_weighted_encoder_context = _calc_weighted_context( model=model, encoder_outputs_transposed=encoder_outputs_transposed, encoder_output_dim=encoder_output_dim, attention_weights_3d=attention_weights_3d, scope=scope, ) return attention_weighted_encoder_context, attention_weights_3d, [ decoder_hidden_encoder_outputs_sum, ] def apply_dot_attention( model, encoder_output_dim, # [batch_size, encoder_output_dim, encoder_length] encoder_outputs_transposed, # [1, batch_size, decoder_state_dim] decoder_hidden_state_t, decoder_hidden_state_dim, scope, encoder_lengths=None, ): if decoder_hidden_state_dim != encoder_output_dim: weighted_decoder_hidden_state = brew.fc( model, decoder_hidden_state_t, s(scope, 'weighted_decoder_hidden_state'), dim_in=decoder_hidden_state_dim, dim_out=encoder_output_dim, axis=2, ) else: weighted_decoder_hidden_state = decoder_hidden_state_t # [batch_size, decoder_state_dim] squeezed_weighted_decoder_hidden_state = model.net.Squeeze( weighted_decoder_hidden_state, s(scope, 'squeezed_weighted_decoder_hidden_state'), dims=[0], ) # [batch_size, decoder_state_dim, 1] expanddims_squeezed_weighted_decoder_hidden_state = model.net.ExpandDims( squeezed_weighted_decoder_hidden_state, squeezed_weighted_decoder_hidden_state, dims=[2], ) # [batch_size, encoder_output_dim, 1] attention_logits_transposed = model.net.BatchMatMul( [ encoder_outputs_transposed, expanddims_squeezed_weighted_decoder_hidden_state, ], s(scope, 'attention_logits'), trans_a=1, ) # [batch_size, encoder_length, 1] attention_weights_3d = _calc_attention_weights( model=model, attention_logits_transposed=attention_logits_transposed, scope=scope, encoder_lengths=encoder_lengths, ) # [batch_size, encoder_output_dim, 1] attention_weighted_encoder_context = _calc_weighted_context( model=model, encoder_outputs_transposed=encoder_outputs_transposed, encoder_output_dim=encoder_output_dim, attention_weights_3d=attention_weights_3d, scope=scope, ) return attention_weighted_encoder_context, attention_weights_3d, [] def apply_soft_coverage_attention( model, encoder_output_dim, encoder_outputs_transposed, weighted_encoder_outputs, decoder_hidden_state_t, decoder_hidden_state_dim, scope, encoder_lengths, coverage_t_prev, coverage_weights, ): weighted_decoder_hidden_state = _apply_fc_weight_for_sum_match( model=model, input=decoder_hidden_state_t, dim_in=decoder_hidden_state_dim, dim_out=encoder_output_dim, scope=scope, name='weighted_decoder_hidden_state', ) # [encoder_length, batch_size, encoder_output_dim] decoder_hidden_encoder_outputs_sum_tmp = model.net.Add( [weighted_encoder_outputs, weighted_decoder_hidden_state], s(scope, 'decoder_hidden_encoder_outputs_sum_tmp'), broadcast=1, ) # [batch_size, encoder_length] coverage_t_prev_2d = model.net.Squeeze( coverage_t_prev, s(scope, 'coverage_t_prev_2d'), dims=[0], ) # [encoder_length, batch_size] coverage_t_prev_transposed = brew.transpose( model, coverage_t_prev_2d, s(scope, 'coverage_t_prev_transposed'), ) # [encoder_length, batch_size, encoder_output_dim] scaled_coverage_weights = model.net.Mul( [coverage_weights, coverage_t_prev_transposed], s(scope, 'scaled_coverage_weights'), broadcast=1, axis=0, ) # [encoder_length, batch_size, encoder_output_dim] decoder_hidden_encoder_outputs_sum = model.net.Add( [decoder_hidden_encoder_outputs_sum_tmp, scaled_coverage_weights], s(scope, 'decoder_hidden_encoder_outputs_sum'), ) # [batch_size, encoder_length, 1] attention_logits_transposed = _calc_attention_logits_from_sum_match( model=model, decoder_hidden_encoder_outputs_sum=decoder_hidden_encoder_outputs_sum, encoder_output_dim=encoder_output_dim, scope=scope, ) # [batch_size, encoder_length, 1] attention_weights_3d = _calc_attention_weights( model=model, attention_logits_transposed=attention_logits_transposed, scope=scope, encoder_lengths=encoder_lengths, ) # [batch_size, encoder_output_dim, 1] attention_weighted_encoder_context = _calc_weighted_context( model=model, encoder_outputs_transposed=encoder_outputs_transposed, encoder_output_dim=encoder_output_dim, attention_weights_3d=attention_weights_3d, scope=scope, ) # [batch_size, encoder_length] attention_weights_2d = model.net.Squeeze( attention_weights_3d, s(scope, 'attention_weights_2d'), dims=[2], ) coverage_t = model.net.Add( [coverage_t_prev, attention_weights_2d], s(scope, 'coverage_t'), broadcast=1, ) return ( attention_weighted_encoder_context, attention_weights_3d, [decoder_hidden_encoder_outputs_sum], coverage_t, )
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/attention.py
0.902443
0.193623
attention.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import os from caffe2.python import core from caffe2.python.db_file_reader import DBFileReader from caffe2.python.pipeline import pipe from caffe2.python.task import Cluster, TaskGroup class CachedReader(DBFileReader): default_name_suffix = 'cached_reader' """Reader with persistent in-file cache. Example usage: cached_reader = CachedReader( reader, db_path='/tmp/cache.db', db_type='LevelDB', ) build_cache_step = cached_reader.build_cache_step() with LocalSession() as session: session.run(build_cache_step) Every time new CachedReader is created, it's expected that db_path exists before calling .setup_ex(...) and .read(...). If db_path doesn't exist, it's expected build_cache_step to be called first to build a cache at db_path. build_cache_step will check existence of provided db_path and in case it's missing will initialize it by reading data from original reader. All consequent attempts to read will ignore original reader (i.e. no additional data will be read from it). Args: original_reader: Reader. If provided, it's the original reader used to build the cache file. db_path: str. Optional Args: db_type: str. DB type of file. A db_type is registed by `REGISTER_CAFFE2_DB(<db_type>, <DB Class>)`. Default to 'LevelDB'. name: str or None. Name of CachedReader. Optional name to prepend to blobs that will store the data. Default to '<db_name>_<default_name_suffix>'. batch_size: int. How many examples are read for each time the read_net is run. Defaults to 100. loop_over: bool. If True given, will go through examples in random order endlessly. Defaults to False. """ def __init__( self, original_reader, db_path, db_type='LevelDB', name=None, batch_size=100, loop_over=False, ): assert original_reader is not None, "original_reader can't be None" self.original_reader = original_reader super(CachedReader, self).__init__( db_path, db_type, name, batch_size, loop_over, ) def _init_reader_schema(self, *args, **kwargs): """Prepare the reader schema. Since an original reader is given, use it's schema as ground truth. Returns: schema: schema.Struct. Used in Reader.__init__(...). """ return self.original_reader._schema def build_cache_step(self, overwrite=False): """Build a step for generating cache DB file. If self.db_path exists and not overwritting, build an empty step. Overwise, build a step as follows. Pipe original reader to the _DatasetWriter, so that dataset field blobs are populated. Then save these blobs into a file. Args: overwrite: bool. If true, ignore the existing file and build a new one overwritting the existing one anyway. Returns: build_cache_step: ExecutionStep. The step to be run for building a cache DB file. """ if os.path.exists(self.db_path) and not overwrite: # cache already exists, no need to rebuild it return core.execution_step('build_step', []) init_net = core.Net('init') self._init_field_blobs_as_empty(init_net) with Cluster(), core.NameScope(self.name), TaskGroup() as copy_tg: pipe(self.original_reader, self.ds.writer(), num_threads=16) copy_step = copy_tg.to_task().get_step() save_net = core.Net('save') self._save_field_blobs_to_db_file(save_net) return core.execution_step('build_cache', [init_net, copy_step, save_net]) def _save_field_blobs_to_db_file(self, net): """Save dataset field blobs to a DB file at db_path""" net.Save( self.ds.get_blobs(), [], db=self.db_path, db_type=self.db_type, blob_name_overrides=self.ds.field_names(), absolute_path=True, )
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/cached_reader.py
0.8119
0.222362
cached_reader.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np """ The following methods are various utility methods for using the Tensor-Train decomposition, or TT-decomposition introduced by I. V. Oseledets (2011) in his paper (http://epubs.siam.org/doi/abs/10.1137/090752286). Broadly speaking, these methods are used to replace fully connected layers in neural networks with Tensor-Train layers introduced by A. Novikov et. al. (2015) in their paper (http://arxiv.org/abs/1509.06569). More details about each of the methods are provided in each respective docstring. """ def init_tt_cores(inp_sizes, out_sizes, tt_ranks, seed=1234): """ Initialize randomized orthogonalized TT-cores. This method should be used when a TT-layer is trained from scratch. The sizes of each of the cores are specified by the inp_sizes and out_sizes, and the respective tt_ranks will dictate the ranks of each of the cores. Note that a larger set of tt_ranks will result in slower computation but will result in more accurate approximations. The size of the ith core is: tt_ranks[i] * inp_sizes[i] * out_sizes[i] * tt_ranks[i + 1]. Note that the following relationships of lengths of each input is expected: len(inp_sizes) == len(out_sizes) == len(tt_ranks) - 1. Args: inp_sizes: list of the input dimensions of the respective cores out_sizes: list of the output dimensions of the respective cores tt_ranks: list of the ranks of the respective cores seed: integer to seed the random number generator Returns: cores: One-dimensional list of cores concatentated along an axis """ np.random.seed(seed) # Assert that the sizes of each input is correct assert(len(inp_sizes) == len(out_sizes)), \ "The number of input dimensions (" + str(len(inp_sizes)) + \ ") must be equal to the number of output dimensions (" + \ str(len(out_sizes)) + ")." assert(len(tt_ranks) == len(inp_sizes) + 1), \ "The number of tt-ranks (" + str(len(tt_ranks)) + ") must be " + \ "one more than the number of input and output dims (" + \ str(len(out_sizes)) + ")." # Convert to numpy arrays inp_sizes = np.array(inp_sizes) out_sizes = np.array(out_sizes) tt_ranks = np.array(tt_ranks) # Initialize the cores array cores_len = np.sum( inp_sizes * out_sizes * tt_ranks[1:] * tt_ranks[:-1]) cores = np.zeros(cores_len) cores_idx = 0 rv = 1 # Compute the full list of cores by computing each individual one for i in range(inp_sizes.shape[0]): shape = [tt_ranks[i], inp_sizes[i], out_sizes[i], tt_ranks[i + 1]] # Precompute the shape of each core tall_shape = (np.prod(shape[:3]), shape[3]) # Randomly initialize the current core using a normal distribution curr_core = np.dot(rv, np.random.normal( 0, 1, size=(shape[0], np.prod(shape[1:])))) curr_core = curr_core.reshape(tall_shape) # Orthogonalize the initialized current core and append to cores list if i < inp_sizes.shape[0] - 1: curr_core, rv = np.linalg.qr(curr_core) cores[cores_idx:cores_idx + curr_core.size] = curr_core.flatten() cores_idx += curr_core.size # Normalize the list of arrays using this Glarot trick glarot_style = (np.prod(inp_sizes) * np.prod(tt_ranks))**(1.0 / inp_sizes.shape[0]) return (0.1 / glarot_style) * np.array(cores).astype(np.float32) def matrix_to_tt(W, inp_sizes, out_sizes, tt_ranks): """ Convert a matrix into the TT-format. This method will consume a 2D weight matrix such as those used in fully connected layers in a neural network and will compute the TT-decomposition of the weight matrix and return the TT-cores of the resulting computation. This method should be used when converting a trained, fully connected layer, into a TT-layer for increased speed and decreased parameter size. The size of the ith core is: tt_ranks[i] * inp_sizes[i] * out_sizes[i] * tt_ranks[i + 1]. Note that the following relationships of lengths of each input is expected: len(inp_sizes) == len(out_sizes) == len(tt_ranks) - 1. We also require that np.prod(inp_sizes) == W.shape[0] and that np.prod(out_sizes) == W.shape[1]. Args: W: two-dimensional weight matrix numpy array representing a fully connected layer to be converted to TT-format; note that the weight matrix is transposed before decomposed because we want to emulate the X * W^T operation that the FC layer performs. inp_sizes: list of the input dimensions of the respective cores out_sizes: list of the output dimensions of the respective cores tt_ranks: list of the ranks of the respective cores Returns: new_cores: One-dimensional list of cores concatentated along an axis """ # Assert that the sizes of each input is correct assert(len(inp_sizes) == len(out_sizes)), \ "The number of input dimensions (" + str(len(inp_sizes)) + \ ") must be equal to the number of output dimensions (" + \ str(len(out_sizes)) + ")." assert(len(tt_ranks) == len(inp_sizes) + 1), \ "The number of tt-ranks (" + str(len(tt_ranks)) + ") must be " + \ "one more than the number of input and output dimensions (" + \ str(len(out_sizes)) + ")." assert(W.shape[0] == np.prod(inp_sizes)), \ "The product of the input sizes (" + str(np.prod(inp_sizes)) + \ ") must be equal to first dimension of W (" + str(W.shape[0]) + ")." assert(W.shape[1] == np.prod(out_sizes)), \ "The product of the output sizes (" + str(np.prod(out_sizes)) + \ ") must be equal to second dimension of W (" + str(W.shape[1]) + ")." # W is transposed so that the multiplication X * W^T can be computed, just # as it is in the FC layer. W = W.transpose() # Convert to numpy arrays inp_sizes = np.array(inp_sizes) out_sizes = np.array(out_sizes) tt_ranks = np.array(tt_ranks) # Copy the original weight matrix in order to permute and reshape the weight # matrix. In addition, the inp_sizes and out_sizes are combined to a single # sizes array to use the tt_svd helper method, which only consumes a single # sizes array. W_copy = W.copy() total_inp_size = inp_sizes.size W_copy = np.reshape(W_copy, np.concatenate((inp_sizes, out_sizes))) order = np.repeat(np.arange(0, total_inp_size), 2) + \ np.tile([0, total_inp_size], total_inp_size) W_copy = np.transpose(W_copy, axes=order) W_copy = np.reshape(W_copy, inp_sizes * out_sizes) # Use helper method to convert the W matrix copy into the preliminary # cores array. cores = tt_svd(W_copy, inp_sizes * out_sizes, tt_ranks) # Permute the dimensions of each of the cores to be compatible with the # TT-layer. new_cores = np.zeros(cores.shape).astype(np.float32) idx = 0 for i in range(len(inp_sizes)): shape = (tt_ranks[i], inp_sizes[i], out_sizes[i], tt_ranks[i + 1]) current_core = cores[idx:idx + np.prod(shape)].reshape(shape) current_core = current_core.transpose((1, 3, 0, 2)) new_cores[new_cores.shape[0] - idx - np.prod(shape): new_cores.shape[0] - idx] \ = current_core.flatten() idx += np.prod(shape) return new_cores def tt_svd(W, sizes, tt_ranks): """ Helper method for the matrix_to_tt() method performing the TT-SVD decomposition. Uses the TT-decomposition algorithm to convert a matrix to TT-format using multiple reduced SVD operations. Args: W: two-dimensional weight matrix representing a fully connected layer to be converted to TT-format preprocessed by the matrix_to_tt() method. sizes: list of the dimensions of each of the cores tt_ranks: list of the ranks of the respective cores Returns: cores: One-dimensional list of cores concatentated along an axis """ assert(len(tt_ranks) == len(sizes) + 1) C = W.copy() total_size = sizes.size core = np.zeros(np.sum(tt_ranks[:-1] * sizes * tt_ranks[1:]), dtype='float32') # Compute iterative reduced SVD operations and store each resulting U matrix # as an individual core. pos = 0 for i in range(0, total_size - 1): shape = tt_ranks[i] * sizes[i] C = np.reshape(C, [shape, -1]) U, S, V = np.linalg.svd(C, full_matrices=False) U = U[:, 0:tt_ranks[i + 1]] S = S[0:tt_ranks[i + 1]] V = V[0:tt_ranks[i + 1], :] core[pos:pos + tt_ranks[i] * sizes[i] * tt_ranks[i + 1]] = U.ravel() pos += tt_ranks[i] * sizes[i] * tt_ranks[i + 1] C = np.dot(np.diag(S), V) core[pos:pos + tt_ranks[total_size - 1] * sizes[total_size - 1] * tt_ranks[total_size]] = C.ravel() return core # TODO(Surya) Write a method to convert an entire network where all fully # connected layers are replaced by an TT layer. def fc_net_to_tt_net(net): pass
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/tt_core.py
0.888305
0.700959
tt_core.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core, scope from caffe2.proto import caffe2_pb2 def _get_weights(model, namescope=None): if namescope is None: namescope = scope.CurrentNameScope() if namescope == '': return model.weights[:] else: return [w for w in model.weights if w.GetNameScope() == namescope] def iter(model, blob_out, **kwargs): if 'device_option' in kwargs: del kwargs['device_option'] model.param_init_net.ConstantFill( [], blob_out, shape=[1], value=0, dtype=core.DataType.INT64, device_option=core.DeviceOption(caffe2_pb2.CPU, 0), **kwargs ) return model.net.Iter(blob_out, blob_out, **kwargs) def accuracy(model, blob_in, blob_out, **kwargs): dev = kwargs['device_option'] if 'device_option' in kwargs \ else scope.CurrentDeviceScope() is_cpu = dev is None or dev.device_type == caffe2_pb2.CPU # We support top_k > 1 only on CPU if not is_cpu and 'top_k' in kwargs and kwargs['top_k'] > 1: pred_host = model.net.CopyGPUToCPU(blob_in[0], blob_in[0] + "_host") label_host = model.net.CopyGPUToCPU(blob_in[1], blob_in[1] + "_host") # Now use the Host version of the accuracy op model.net.Accuracy( [pred_host, label_host], blob_out, device_option=core.DeviceOption(caffe2_pb2.CPU, 0), **kwargs ) else: model.net.Accuracy(blob_in, blob_out) def add_weight_decay(model, weight_decay): """Adds a decay to weights in the model. This is a form of L2 regularization. Args: weight_decay: strength of the regularization """ if weight_decay <= 0.0: return wd = model.param_init_net.ConstantFill( [], 'wd', shape=[1], value=weight_decay ) ONE = model.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0) for param in _get_weights(model): # Equivalent to: grad += wd * param grad = model.param_to_grad[param] model.net.WeightedSum( [grad, ONE, param, wd], grad, )
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/helpers/train.py
0.807309
0.161386
train.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core from caffe2.python.modeling import initializers from caffe2.python.modeling.parameter_info import ParameterTags def _ConvBase( model, is_nd, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=None, bias_init=None, WeightInitializer=None, BiasInitializer=None, group=1, transform_inputs=None, use_cudnn=False, order="NCHW", cudnn_exhaustive_search=False, ws_nbytes_limit=None, float16_compute=False, **kwargs ): kernels = [] if is_nd: if not isinstance(kernel, list): kernels = [kernel] else: kernels = kernel else: if isinstance(kernel, list): assert len(kernel) == 2, "Conv support only a 2D kernel." kernels = kernel else: kernels = [kernel] * 2 requested_engine = kwargs.get('engine') if requested_engine is not None: if use_cudnn and requested_engine != 'CUDNN': raise ValueError( 'When use_cudnn=True, the only engine you can specify is ' '"CUDNN"') elif not use_cudnn and requested_engine == 'CUDNN': raise ValueError( 'When use_cudnn=False, the only engine you can specify is ' '""') if use_cudnn: kwargs['engine'] = 'CUDNN' kwargs['exhaustive_search'] = cudnn_exhaustive_search if ws_nbytes_limit: kwargs['ws_nbytes_limit'] = ws_nbytes_limit use_bias =\ False if ("no_bias" in kwargs and kwargs["no_bias"]) else True blob_out = blob_out or model.net.NextName() weight_shape = [dim_out] if order == "NCHW": weight_shape.append(int(dim_in / group)) weight_shape.extend(kernels) else: weight_shape.extend(kernels) weight_shape.append(int(dim_in / group)) WeightInitializer = initializers.update_initializer( WeightInitializer, weight_init, ("XavierFill", {}) ) BiasInitializer = initializers.update_initializer( BiasInitializer, bias_init, ("ConstantFill", {}) ) if not model.init_params: WeightInitializer = initializers.ExternalInitializer() BiasInitializer = initializers.ExternalInitializer() weight = model.create_param( param_name=blob_out + '_w', shape=weight_shape, initializer=WeightInitializer, tags=ParameterTags.WEIGHT ) if use_bias: bias = model.create_param( param_name=blob_out + '_b', shape=[dim_out, ], initializer=BiasInitializer, tags=ParameterTags.BIAS ) if use_bias: inputs = [blob_in, weight, bias] else: inputs = [blob_in, weight] if transform_inputs is not None: transform_inputs(model, blob_out, inputs) # Enable float 16 compute kernel (relevant for CUDA) if float16_compute: kwargs['float16_compute'] = True # For the operator, we no longer need to provide the no_bias field # because it can automatically figure this out from the number of # inputs. if 'no_bias' in kwargs: del kwargs['no_bias'] if group != 1: kwargs['group'] = group if is_nd: return model.net.Conv( inputs, blob_out, kernels=kernels, order=order, **kwargs) else: if isinstance(kernel, list): return model.net.Conv( inputs, blob_out, kernel_h=kernel[0], kernel_w=kernel[1], order=order, **kwargs) else: return model.net.Conv( inputs, blob_out, kernel=kernel, order=order, **kwargs) def conv_nd( model, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=None, bias_init=None, WeightInitializer=None, BiasInitializer=None, group=1, transform_inputs=None, order="NCHW", **kwargs ): """N-dimensional convolution for inputs with NCHW storage order. """ assert order == "NCHW", "ConvNd only supported for NCHW storage." return _ConvBase(model, True, blob_in, blob_out, dim_in, dim_out, kernel, weight_init, bias_init, WeightInitializer, BiasInitializer, group, transform_inputs, order=order, **kwargs) def conv( model, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=None, bias_init=None, WeightInitializer=None, BiasInitializer=None, group=1, transform_inputs=None, **kwargs ): """2-dimensional convolution. """ return _ConvBase(model, False, blob_in, blob_out, dim_in, dim_out, kernel, weight_init, bias_init, WeightInitializer, BiasInitializer, group, transform_inputs, **kwargs) def conv_transpose( model, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=None, bias_init=None, use_cudnn=False, order="NCHW", cudnn_exhaustive_search=False, ws_nbytes_limit=None, **kwargs ): """ConvTranspose. """ weight_init = weight_init if weight_init else ('XavierFill', {}) bias_init = bias_init if bias_init else ('ConstantFill', {}) blob_out = blob_out or model.net.NextName() weight_shape = ( [dim_in, dim_out, kernel, kernel] if order == "NCHW" else [dim_in, kernel, kernel, dim_out] ) if model.init_params: weight = model.param_init_net.__getattr__(weight_init[0])( [], blob_out + '_w', shape=weight_shape, **weight_init[1] ) bias = model.param_init_net.__getattr__(bias_init[0])( [], blob_out + '_b', shape=[dim_out, ], **bias_init[1] ) else: weight = core.ScopedBlobReference( blob_out + '_w', model.param_init_net) bias = core.ScopedBlobReference( blob_out + '_b', model.param_init_net) model.AddParameter(weight, ParameterTags.WEIGHT) model.AddParameter(bias, ParameterTags.BIAS) if use_cudnn: kwargs['engine'] = 'CUDNN' kwargs['exhaustive_search'] = cudnn_exhaustive_search if ws_nbytes_limit: kwargs['ws_nbytes_limit'] = ws_nbytes_limit return model.net.ConvTranspose( [blob_in, weight, bias], blob_out, kernel=kernel, order=order, **kwargs ) def group_conv( model, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=None, bias_init=None, group=1, **kwargs ): """Group Convolution. This is essentially the same as Conv with a group argument passed in. We specialize this for backward interface compatibility. """ return conv(model, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=weight_init, bias_init=bias_init, group=group, **kwargs) def group_conv_deprecated( model, blob_in, blob_out, dim_in, dim_out, kernel, weight_init=None, bias_init=None, group=1, use_cudnn=False, order="NCHW", cudnn_exhaustive_search=False, ws_nbytes_limit=None, **kwargs ): """GroupConvolution's deprecated interface. This is used to simulate a group convolution via split and concat. You should always use the new group convolution in your new code. """ weight_init = weight_init if weight_init else ('XavierFill', {}) bias_init = bias_init if bias_init else ('ConstantFill', {}) use_bias = False if ("no_bias" in kwargs and kwargs["no_bias"]) else True if use_cudnn: kwargs['engine'] = 'CUDNN' kwargs['exhaustive_search'] = cudnn_exhaustive_search if ws_nbytes_limit: kwargs['ws_nbytes_limit'] = ws_nbytes_limit if dim_in % group: raise ValueError("dim_in should be divisible by group.") if dim_out % group: raise ValueError("dim_out should be divisible by group.") splitted_blobs = model.net.DepthSplit( blob_in, ['_' + blob_out + '_gconv_split_' + str(i) for i in range(group)], dimensions=[int(dim_in / group) for i in range(group)], order=order ) weight_shape = ( [dim_out / group, dim_in / group, kernel, kernel] if order == "NCHW" else [dim_out / group, kernel, kernel, dim_in / group] ) # Make sure that the shapes are of int format. Especially for py3 where # int division gives float output. weight_shape = [int(v) for v in weight_shape] conv_blobs = [] for i in range(group): if model.init_params: weight = model.param_init_net.__getattr__(weight_init[0])( [], blob_out + '_gconv_%d_w' % i, shape=weight_shape, **weight_init[1] ) if use_bias: bias = model.param_init_net.__getattr__(bias_init[0])( [], blob_out + '_gconv_%d_b' % i, shape=[int(dim_out / group)], **bias_init[1] ) else: weight = core.ScopedBlobReference( blob_out + '_gconv_%d_w' % i, model.param_init_net) if use_bias: bias = core.ScopedBlobReference( blob_out + '_gconv_%d_b' % i, model.param_init_net) model.AddParameter(weight, ParameterTags.WEIGHT) if use_bias: model.AddParameter(bias, ParameterTags.BIAS) if use_bias: inputs = [weight, bias] else: inputs = [weight] if 'no_bias' in kwargs: del kwargs['no_bias'] conv_blobs.append( splitted_blobs[i].Conv( inputs, blob_out + '_gconv_%d' % i, kernel=kernel, order=order, **kwargs ) ) concat, concat_dims = model.net.Concat( conv_blobs, [blob_out, "_" + blob_out + "_concat_dims"], order=order ) return concat
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/helpers/conv.py
0.681409
0.247192
conv.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import scope from caffe2.python.modeling.parameter_info import ParameterTags from caffe2.proto import caffe2_pb2 from caffe2.python.modeling import initializers def lrn(model, blob_in, blob_out, order="NCHW", use_cudnn=False, **kwargs): """LRN""" dev = kwargs['device_option'] if 'device_option' in kwargs \ else scope.CurrentDeviceScope() is_cpu = dev is None or dev.device_type == caffe2_pb2.CPU if use_cudnn and (not is_cpu): kwargs['engine'] = 'CUDNN' blobs_out = blob_out else: blobs_out = [blob_out, "_" + blob_out + "_scale"] lrn = model.net.LRN( blob_in, blobs_out, order=order, **kwargs ) if use_cudnn and (not is_cpu): return lrn else: return lrn[0] def softmax(model, blob_in, blob_out=None, use_cudnn=False, **kwargs): """Softmax.""" if use_cudnn: kwargs['engine'] = 'CUDNN' if blob_out is not None: return model.net.Softmax(blob_in, blob_out, **kwargs) else: return model.net.Softmax(blob_in, **kwargs) def instance_norm(model, blob_in, blob_out, dim_in, order="NCHW", **kwargs): blob_out = blob_out or model.net.NextName() # Input: input, scale, bias # Output: output, saved_mean, saved_inv_std # scale: initialize with ones # bias: initialize with zeros def init_blob(value, suffix): return model.param_init_net.ConstantFill( [], blob_out + "_" + suffix, shape=[dim_in], value=value) scale, bias = init_blob(1.0, "s"), init_blob(0.0, "b") model.AddParameter(scale, ParameterTags.WEIGHT) model.AddParameter(bias, ParameterTags.BIAS) blob_outs = [blob_out, blob_out + "_sm", blob_out + "_siv"] if 'is_test' in kwargs and kwargs['is_test']: blob_outputs = model.net.InstanceNorm( [blob_in, scale, bias], [blob_out], order=order, **kwargs) return blob_outputs else: blob_outputs = model.net.InstanceNorm( [blob_in, scale, bias], blob_outs, order=order, **kwargs) # Return the output return blob_outputs[0] def spatial_bn(model, blob_in, blob_out, dim_in, init_scale=1., init_bias=0., ScaleInitializer=None, BiasInitializer=None, RunningMeanInitializer=None, RunningVarianceInitializer=None, order="NCHW", **kwargs): blob_out = blob_out or model.net.NextName() # Input: input, scale, bias, est_mean, est_inv_var # Output: output, running_mean, running_inv_var, saved_mean, # saved_inv_var # scale: initialize with init_scale (default 1.) # bias: initialize with init_bias (default 0.) # est mean: zero # est var: ones if model.init_params: scale_init = ("ConstantFill", {'value': init_scale}) bias_init = ("ConstantFill", {'value': init_bias}) rm_init = ("ConstantFill", {'value': 0.0}) riv_init = ("ConstantFill", {'value': 1.0}) ScaleInitializer = initializers.update_initializer( ScaleInitializer, scale_init, ("ConstantFill", {}) ) BiasInitializer = initializers.update_initializer( BiasInitializer, bias_init, ("ConstantFill", {}) ) RunningMeanInitializer = initializers.update_initializer( RunningMeanInitializer, rm_init, ("ConstantFill", {}) ) RunningVarianceInitializer = initializers.update_initializer( RunningVarianceInitializer, riv_init, ("ConstantFill", {}) ) else: ScaleInitializer = initializers.ExternalInitializer() BiasInitializer = initializers.ExternalInitializer() RunningMeanInitializer = initializers.ExternalInitializer() RunningVarianceInitializer = initializers.ExternalInitializer() scale = model.create_param( param_name=blob_out + '_s', shape=[dim_in], initializer=ScaleInitializer, tags=ParameterTags.WEIGHT ) bias = model.create_param( param_name=blob_out + '_b', shape=[dim_in], initializer=BiasInitializer, tags=ParameterTags.BIAS ) running_mean = model.create_param( param_name=blob_out + '_rm', shape=[dim_in], initializer=RunningMeanInitializer, tags=ParameterTags.COMPUTED_PARAM ) running_inv_var = model.create_param( param_name=blob_out + '_riv', shape=[dim_in], initializer=RunningVarianceInitializer, tags=ParameterTags.COMPUTED_PARAM ) blob_outs = [blob_out, running_mean, running_inv_var, blob_out + "_sm", blob_out + "_siv"] if 'is_test' in kwargs and kwargs['is_test']: blob_outputs = model.net.SpatialBN( [blob_in, scale, bias, blob_outs[1], blob_outs[2]], [blob_out], order=order, **kwargs) return blob_outputs else: blob_outputs = model.net.SpatialBN( [blob_in, scale, bias, blob_outs[1], blob_outs[2]], blob_outs, order=order, **kwargs) # Return the output return blob_outputs[0] def spatial_gn(model, blob_in, blob_out, dim_in, init_scale=1., init_bias=0., ScaleInitializer=None, BiasInitializer=None, RunningMeanInitializer=None, RunningVarianceInitializer=None, order="NCHW", **kwargs): ''' Group normalizes the input, cf. https://arxiv.org/abs/1803.08494. ''' blob_out = blob_out or model.net.NextName() # Input: input, scale, bias # Output: output, group_mean, group_inv_std # scale: initialize with init_scale (default 1.) # [recommendation: set init_scale = 0. in the last layer for each res block] # bias: initialize with init_bias (default 0.) if model.init_params: scale_init = ("ConstantFill", {'value': init_scale}) bias_init = ("ConstantFill", {'value': init_bias}) ScaleInitializer = initializers.update_initializer( ScaleInitializer, scale_init, ("ConstantFill", {}) ) BiasInitializer = initializers.update_initializer( BiasInitializer, bias_init, ("ConstantFill", {}) ) else: ScaleInitializer = initializers.ExternalInitializer() BiasInitializer = initializers.ExternalInitializer() scale = model.create_param( param_name=blob_out + '_s', shape=[dim_in], initializer=ScaleInitializer, tags=ParameterTags.WEIGHT ) bias = model.create_param( param_name=blob_out + '_b', shape=[dim_in], initializer=BiasInitializer, tags=ParameterTags.BIAS ) blob_outs = [blob_out, blob_out + "_mean", blob_out + "_std"] blob_outputs = model.net.GroupNorm( [blob_in, scale, bias], blob_outs, **kwargs) # Return the output return blob_outputs[0] def layer_norm( model, blob_in, blob_out, dim_in, axis=1, epsilon=1e-4, initial_scale=1.0, initial_bias=0.0, ): ''' Layer normalizes the input, cf. https://arxiv.org/pdf/1607.06450.pdf. Args: blob_in: The input blob to layer normalize. blob_out: The layer normalized output blob. dim_in: The dimension of the scale and bias. For example, if blob_in is a 2D design matrix and axis is 1, this would be the number of columns. axis: (optional) The axis to normalize. Typically the feature axis. Defaults to 1. epsilon: (optional) A small value used for numerical stability in calculation. Defaults to 1e-4. initial_scale: (optional) The initial value for the learned scale parameter. Defaults to 1.0 initial_bias: (optional) The initial value for the learned bias parameter of the layerwise standard deviation. Defaults to 0.0. Returns: A 3-tuple consisting of: - The layer normalized input blob. - The mean of the input blob across the given axis. - The standard deviation of the input blob acress the given axis. ''' # The learned multiplicative scale or "gain". scale = model.create_param( param_name='{}_scale'.format(blob_out), shape=[dim_in] if isinstance(dim_in, int) else dim_in, initializer=initializers.Initializer( 'ConstantFill', value=initial_scale, ), tags=ParameterTags.WEIGHT, ) # The learned additive bias or "shift". bias = model.create_param( param_name='{}_bias'.format(blob_out), shape=[dim_in] if isinstance(dim_in, int) else dim_in, initializer=initializers.Initializer( 'ConstantFill', value=initial_bias, ), tags=ParameterTags.BIAS, ) normalized, mean, std = model.net.LayerNorm( [blob_in, scale, bias], [blob_out, blob_out + "_mean", blob_out + "_std"], axis=axis, epsilon=epsilon, elementwise_affine=True, ) return normalized, mean, std def moments_with_running_stats(model, blob_in, blob_out, dim_in, RunningMeanInitializer=None, RunningVarianceInitializer=None, order="NCHW", **kwargs): if model.init_params: rm_init = ("ConstantFill", {'value': 0.0}) riv_init = ("ConstantFill", {'value': 1.0}) RunningMeanInitializer = initializers.update_initializer( RunningMeanInitializer, rm_init, ("ConstantFill", {}) ) RunningVarianceInitializer = initializers.update_initializer( RunningVarianceInitializer, riv_init, ("ConstantFill", {}) ) else: RunningMeanInitializer = initializers.ExternalInitializer() RunningVarianceInitializer = initializers.ExternalInitializer() running_mean = model.create_param( param_name=blob_out + '_rm', shape=[dim_in], initializer=RunningMeanInitializer, tags=ParameterTags.COMPUTED_PARAM ) # this is just running variance running_inv_var = model.create_param( param_name=blob_out + '_riv', shape=[dim_in], initializer=RunningVarianceInitializer, tags=ParameterTags.COMPUTED_PARAM ) blob_outs = [blob_out + "_sm", blob_out + "_sv"] if order == 'NCHW': blob_outputs = model.net.Moments( [blob_in], blob_outs, axes=[0, 2, 3], order=order, keepdims=False, **kwargs) elif order == 'NHWC': blob_outputs = model.net.Moments( [blob_in], blob_outs, axes=[0, 1, 2], order=order, keepdims=False, **kwargs) return blob_outputs
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/helpers/normalization.py
0.817028
0.201774
normalization.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.python import core from caffe2.python.modeling import initializers from caffe2.python.modeling.parameter_info import ParameterTags def _FC_or_packed_FC( model, op_call, blob_in, blob_out, dim_in, dim_out, weight_init=None, bias_init=None, WeightInitializer=None, BiasInitializer=None, enable_tensor_core=False, float16_compute=False, **kwargs ): WeightInitializer = initializers.update_initializer( WeightInitializer, weight_init, ("XavierFill", {}) ) BiasInitializer = initializers.update_initializer( BiasInitializer, bias_init, ("ConstantFill", {}) ) if not model.init_params: WeightInitializer = initializers.ExternalInitializer() BiasInitializer = initializers.ExternalInitializer() blob_out = blob_out or model.net.NextName() bias_tags = [ParameterTags.BIAS] if 'freeze_bias' in kwargs: bias_tags.append(ParameterTags.COMPUTED_PARAM) weight = model.create_param( param_name=blob_out + '_w', shape=[dim_out, dim_in], initializer=WeightInitializer, tags=ParameterTags.WEIGHT ) bias = model.create_param( param_name=blob_out + '_b', shape=[dim_out, ], initializer=BiasInitializer, tags=bias_tags ) # enable TensorCore by setting appropriate engine if enable_tensor_core: kwargs['engine'] = 'TENSORCORE' # Enable float 16 compute kernel (relevant for CUDA) if float16_compute: kwargs['float16_compute'] = True return op_call([blob_in, weight, bias], blob_out, **kwargs) def fc(model, *args, **kwargs): return _FC_or_packed_FC(model, model.net.FC, *args, **kwargs) def packed_fc(model, *args, **kwargs): return _FC_or_packed_FC(model, model.net.PackedFC, *args, **kwargs) def fc_decomp( model, blob_in, blob_out, dim_in, dim_out, rank_approx=5, weight_init=None, bias_init=None, WeightInitializer=None, BiasInitializer=None, **kwargs ): """FC_Decomp version Here we assume that the rank of original input is bigger than 5. """ WeightInitializer = initializers.update_initializer( WeightInitializer, weight_init, ("XavierFill", {}) ) BiasInitializer = initializers.update_initializer( BiasInitializer, bias_init, ("ConstantFill", {}) ) blob_out = blob_out or model.net.NextName() u = model.create_param( param_name=blob_out + '_u', shape=[dim_out, rank_approx], initializer=WeightInitializer, ) v = model.create_param( param_name=blob_out + '_v', shape=[dim_in, rank_approx], initializer=WeightInitializer, ) bias = model.create_param( param_name=blob_out + '_b', shape=[dim_out, ], initializer=BiasInitializer, ) return model.net.FC_Decomp([blob_in, u, v, bias], blob_out, **kwargs) def fc_prune( model, blob_in, blob_out, dim_in, dim_out, weight_init=None, bias_init=None, mask_init=None, threshold=0.00001, need_compress_rate=False, comp_lb=0.05, **kwargs ): """FC_Prune version Runnable so far. Great!:) """ weight_init = weight_init if weight_init else ('XavierFill', {}) bias_init = bias_init if bias_init else ('ConstantFill', {}) mask_init = mask_init if mask_init else ('ConstantFill', {}) blob_out = blob_out or model.net.NextName() compress_rate = blob_out + '_compress_rate' if model.init_params: compress_lb = model.param_init_net.ConstantFill( [], blob_out + '_lb', shape=[1], value=comp_lb ) weight = model.param_init_net.__getattr__(weight_init[0])( [], blob_out + '_w', shape=[dim_out, dim_in], **weight_init[1] ) mask = model.param_init_net.ConstantFill( [], blob_out + '_m', shape=[dim_out, dim_in], value=1.0 ) ag_dw = model.param_init_net.__getattr__(mask_init[0])( [], blob_out + '_ag_dw', shape=[dim_out, dim_in], **mask_init[1] ) bias = model.param_init_net.__getattr__(bias_init[0])( [], blob_out + '_b', shape=[dim_out, ], **bias_init[1] ) mask_seq = model.param_init_net.__getattr__(mask_init[0])( [], blob_out + '_mask_seq', shape=[dim_out, dim_in], **mask_init[1] ) thres = model.param_init_net.ConstantFill( [], blob_out + '_thres', shape=[1], value=threshold ) else: compress_lb = core.ScopedBlobReference( blob_out + '_lb', model.param_init_net) weight = core.ScopedBlobReference( blob_out + '_w', model.param_init_net) bias = core.ScopedBlobReference( blob_out + '_b', model.param_init_net) mask = core.ScopedBlobReference( blob_out + '_m', model.param_init_net) ag_dw = core.ScopedBlobReference( blob_out + '_ag_dw', model.param_init_net) mask_seq = core.ScopedBlobReference( blob_out + '_mask_seq', model.param_init_net) thres = core.ScopedBlobReference( blob_out + '_thres', model.param_init_net) model.AddParameter(weight) model.AddParameter(bias) if need_compress_rate: return model.net.FC_Prune([blob_in, weight, mask, bias, ag_dw, mask_seq, thres, compress_lb], [blob_out, compress_rate], **kwargs) else: return model.net.FC_Prune([blob_in, weight, mask, bias, ag_dw, mask_seq, thres, compress_lb], blob_out, **kwargs) def fc_sparse( model, blob_in, blob_out, w_csr, iw, jw, bias, **kwargs ): """FC_Sparse: Only takes in allocated weights""" if not (w_csr and iw and jw and bias): print("Warning...") model.AddParameter(w_csr) model.AddParameter(iw) model.AddParameter(jw) model.AddParameter(bias) return model.net.FC_Sparse([blob_in, w_csr, iw, jw, bias], blob_out, **kwargs)
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/helpers/fc.py
0.737158
0.234571
fc.py
pypi
from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from caffe2.proto import caffe2_pb2 from onnx.backend.base import namedtupledict from caffe2.python.onnx.workspace import Workspace import caffe2.python._import_c_extension as C import io import logging import time log = logging.getLogger(__name__) def c2_native_run_op(op_def, inputs): ws = Workspace() if isinstance(inputs, dict): for key, value in inputs.items(): ws.FeedBlob(key, value, op_def.device_option) else: assert(len(op_def.input) == len(inputs)) for key, value in zip(op_def.input, inputs): ws.FeedBlob(key, value, op_def.device_option) ws.RunOperatorOnce(op_def) output_names = op_def.output output_values = [ws.FetchBlob(name) for name in output_names] return ws, namedtupledict('Outputs', output_names)(*output_values) def c2_native_run_net(init_net, predict_net, inputs, debug_arg=None): ws = Workspace() if init_net: ws.RunNetOnce(init_net) if isinstance(inputs, dict): for key, value in inputs.items(): ws.FeedBlob(key, value, predict_net.device_option) else: uninitialized = [input_name for input_name in predict_net.external_input if not ws.HasBlob(input_name)] if len(uninitialized) == len(inputs): for key, value in zip(uninitialized, inputs): ws.FeedBlob(key, value, predict_net.device_option) else: # If everything is initialized, # we just initialized the first len(inputs) external_input. # Added some extra logging to help debug sporadic sandcastle fails if len(inputs) > len(predict_net.external_input): print("c2_native_run_net assert. len(inputs)=", len(inputs), "len(predict_net.external_input)=", len(predict_net.external_input)) print("debug_arg: ", debug_arg) print("predict_net ", type(predict_net), ":", predict_net) print("inputs ", type(inputs), ":", inputs) assert(len(inputs) <= len(predict_net.external_input)) for i in range(len(inputs)): ws.FeedBlob(predict_net.external_input[i], inputs[i], predict_net.device_option) ws.RunNetOnce(predict_net) output_names = predict_net.external_output output_values = [ws.FetchBlob(name) for name in output_names] return ws, namedtupledict('Outputs', output_names)(*output_values) def load_caffe2_net(file): net = caffe2_pb2.NetDef() with open(file, "rb") as f: net.ParseFromString(f.read()) return net def save_caffe2_net(net, file, output_txt=False): with open(file, "wb") as f: f.write(net.SerializeToString()) if output_txt: with open(file + "txt", "w") as f: f.write(str(net)) def benchmark_caffe2_model(init_net, predict_net, warmup_iters=3, main_iters=10, layer_details=True): ''' Run the benchmark net on the target model. Return the execution time per iteration (millisecond). ''' ws = Workspace() if init_net: ws.RunNetOnce(init_net) ws.CreateNet(predict_net) results = ws.BenchmarkNet(predict_net.name, warmup_iters, main_iters, layer_details) del ws return results[0] def benchmark_pytorch_model(model, inputs, training=False, warmup_iters=3, main_iters=10, verbose=False): ''' Run the model several times, and measure the execution time. Return the execution time per iteration (millisecond). ''' for _i in range(warmup_iters): model(*inputs) total_pytorch_time = 0.0 for _i in range(main_iters): ts = time.time() model(*inputs) te = time.time() total_pytorch_time += te - ts log.info("The PyTorch model execution time per iter is {} milliseconds, " "{} iters per second.".format(total_pytorch_time / main_iters * 1000, main_iters / total_pytorch_time)) return total_pytorch_time * 1000 / main_iters
/rpi_torch-1.5.0-cp37-cp37m-linux_armv7l.whl/caffe2/python/onnx/helper.py
0.60288
0.264724
helper.py
pypi