.. source: docs/index.rst (f2hex/patchwork, BSD-2-Clause)

=========
Patchwork
=========
.. include:: ../README.rst
Roadmap
=======
While Patchwork has been released with a major version number to signal
adherence to semantic versioning, it's still early in development and has not
fully achieved its design vision yet.
We expect it to gain maturity in tandem with the adoption and development of
Fabric 2. It's also highly likely that Patchwork will see a few major releases
as its API (and those of its sister library, `invocations
<https://invocations.readthedocs.io>`_) matures.
Contents
========
.. toctree::
:glob:
:maxdepth: 1
*
API documentation
=================
.. toctree::
:glob:
api/*
.. source: docs/source/info.rst (gelles-brandeis/tapqir, Apache-2.0)

General Information
===================
About Tapqir
------------
Single-molecule fluorescence microscopy is widely used in vitro to study the biochemical and physical mechanisms
of the protein and nucleic acid macromolecular “machines” that perform essential biological functions. The simplest
such technique is multi-wavelength colocalization, which is sometimes called CoSMoS (co-localization single-molecule
spectroscopy). In CoSMoS, formation and/or dissociation of molecular complexes is observed by total internal
reflection fluorescence (TIRF) or other forms of single-molecule fluorescence microscopy by observing the colocalization
of two or more macromolecular components each labeled with a different color of fluorescent dye. Analysis of the dynamics
observed in the microscope is then used to define the quantitative kinetic mechanism of the process being studied.
Reliable analysis of CoSMoS data remains a significant challenge to the effective and more widespread use of the
technique. Existing analysis methods are at least partially subjective and require painstaking manual tuning.
Data analysis is usually the slowest and most laborious part of a CoSMoS project.
Tapqir is a computer program for rigorous statistical classification and analysis of image data from CoSMoS experiments.
The program has multiple advantageous features:
* Tapqir maximizes extraction of useful information by globally fitting experimental images to a causal probabilistic
model that explicitly accounts for all important physical and chemical aspects of CoSMoS image formation. The fitting
employs Bayesian inference, incorporating appropriate levels of prior knowledge (or lack of knowledge) for all parameters.
* Existing methods produce a binary spot/no-spot classification that does not convey the uncertainties inherent in
interpreting the low signal-to-noise single-molecule images. Tapqir instead produces spot probability estimates that
accurately convey experimental uncertainty at each individual time point. These probability estimates can then be
used to perform more reliable downstream kinetic and thermodynamic analyses.
* Tapqir has been thoroughly validated by measuring its performance on simulated image datasets.
* Tapqir is a fully objective method; we have shown that it works without manual parameter tweaking on both simulated and
experiment-derived data sets with a wide range of signal, noise, and non-specific binding characteristics.
Citation
--------
Initial development and validation of Tapqir is described in
|DOI|
Ordabayev YA, Friedman LJ, Gelles J, Theobald DL. *Bayesian machine learning analysis of single-molecule
fluorescence colocalization images*. bioRxiv. 2021 Oct.
If you publish work that uses Tapqir, please consider citing this article.
License
-------
This project is licensed under the `Apache License 2.0 <https://www.apache.org/licenses/LICENSE-2.0.txt>`_.
By submitting a pull request to this project, you agree to license your contribution under the Apache
License 2.0 to this project.
Backend
-------
Tapqir's model is implemented in `Pyro`_, a Python-based probabilistic programming language
(PPL) (`Bingham et al., 2019`_). Probabilistic programming is a relatively new paradigm in
which probabilistic models are expressed in a high-level language that allows easy formulation,
modification, and automated inference.
Pyro relies on the `PyTorch`_ numeric library for vectorized math operations on the GPU and
automatic differentiation. We also use the `KeOps`_ library for kernel operations on the GPU
without memory overflows.
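As a flavor of what this looks like in practice, a Pyro model is an ordinary Python function
that samples latent variables and conditions on observed data. The toy sketch below is
illustrative only and is not Tapqir's actual model:

.. code-block:: python

    import pyro
    import pyro.distributions as dist

    def model(data):
        # Toy example: infer a global background intensity from pixel data.
        background = pyro.sample("background", dist.HalfNormal(10.0))
        with pyro.plate("pixels", data.shape[0]):
            pyro.sample("obs", dist.Normal(background, 1.0), obs=data)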
.. _Bingham et al., 2019: https://jmlr.org/papers/v20/18-403.html
.. _Pyro: https://pyro.ai/
.. _KeOps: https://www.kernel-operations.io/keops/index.html
.. _PyTorch: https://pytorch.org/
.. |DOI| image:: https://img.shields.io/badge/DOI-10.1101%2F2021.09.30.462536-blue
:target: https://doi.org/10.1101/2021.09.30.462536
:alt: DOI
.. source: docs/usage.rst (huntzhan/papernote, MIT)

=====
Usage
=====
To use papernote in a project::
import papernote
| 9.125 | 31 | 0.60274 |
b202fc22759414a1589cdd88ebbf7ff6087b1cce | 521 | rst | reStructuredText | doc/main/tbb_userguide/Flow_Graph_resource_tips.rst | dishasrivastavapuresoftware/oneTBB | 34dd71e7238d278a807a0380311f39102a916223 | [
"Apache-2.0"
] | 1,736 | 2020-03-17T20:23:25.000Z | 2022-03-31T16:01:44.000Z | doc/main/tbb_userguide/Flow_Graph_resource_tips.rst | dishasrivastavapuresoftware/oneTBB | 34dd71e7238d278a807a0380311f39102a916223 | [
"Apache-2.0"
] | 461 | 2020-03-18T00:48:29.000Z | 2022-03-31T08:29:50.000Z | doc/main/tbb_userguide/Flow_Graph_resource_tips.rst | dishasrivastavapuresoftware/oneTBB | 34dd71e7238d278a807a0380311f39102a916223 | [
"Apache-2.0"
] | 376 | 2020-03-19T06:15:59.000Z | 2022-03-25T06:26:31.000Z | .. _Flow_Graph_resource_tips:
Flow Graph Tips for Limiting Resource Consumption
=================================================
You may want to control the number of messages allowed to enter parts of
your graph, or control the maximum number of tasks in the work pool.
There are several mechanisms available for limiting resource consumption
in a flow graph.
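For example, a ``limiter_node`` caps the number of messages in flight through a subgraph.
The sketch below is a minimal illustration (it is not part of this guide, and the
threshold of 4 is arbitrary)::

   #include <oneapi/tbb/flow_graph.h>
   #include <iostream>

   int main() {
       using namespace oneapi::tbb::flow;
       graph g;

       // At most 4 messages may pass the limiter at a time.
       limiter_node<int> limiter(g, 4);

       // The worker emits a continue_msg so its completion can be wired back.
       function_node<int, continue_msg> worker(g, unlimited, [](int v) {
           std::cout << "processing " << v << "\n";
           return continue_msg();
       });

       make_edge(limiter, worker);
       make_edge(worker, limiter.decrementer()); // release one slot per item

       for (int i = 0; i < 100; ++i)
           limiter.try_put(i);
       g.wait_for_all();
   }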
.. toctree::
:maxdepth: 4
../tbb_userguide/use_limiter_node
../tbb_userguide/use_concurrency_limits
../tbb_userguide/create_token_based_system

.. source: Docs/Book/source/rdkit.Chem.MolStandardize.rst (rdkit, BSD-3-Clause)

rdkit.Chem.MolStandardize module
================================
Submodules
----------
.. toctree::
rdkit.Chem.MolStandardize.rdMolStandardize
.. source: docs/api/pycatia/navigator_interfaces/marker_3Ds.rst (pycatia, MIT)

.. _Marker_3ds:
pycatia.navigator_interfaces.marker_3Ds
=======================================
.. automodule:: pycatia.navigator_interfaces.marker_3Ds
:members:

.. source: docs/source/Technical Specifications/Objective.rst (MySmile/mysmile, BSD-3-Clause)

Objective
=========
MySmile is a lightweight open-source CMS based on Django. The project is focused on content, usability, and SEO.
It is addressed particularly to small sites that provide information about a person or business.
It sits somewhere between a blog and a homepage: the information on such a site does not change as often as on a blog, but more often than on a homepage.
MySmile includes the following:
* Admin panel for content managing
* SEO: friendly url, meta, HTML5 semantic, Google analytics, etc
* Flexible design configuration
Examples of site structures for which MySmile will be the best choice are as follows:
* Small business site with pages: "Main", "Products", "Technologies", "Partners", "Contacts".
* Personal site with pages: "Main", "Education", "Hobbies", "Contacts".
* Conference site with pages: "Main", "Schedule", "Speakers", "Accommodation", "Contacts".
* Quick start of Python and Django.
.. image:: _static/images/site_diagram.png
| 49.55 | 154 | 0.750757 |
9bd81098aa7f2449a58392d13b9db6f12e7c846b | 1,905 | rst | reStructuredText | elements/iso/README.rst | takahashinobuyuki/diskimage-builder | b32a3096be775dd5f2f48b923a71c1a509c54090 | [
"Apache-2.0"
] | 3 | 2015-07-15T08:39:29.000Z | 2018-11-12T14:22:21.000Z | elements/iso/README.rst | takahashinobuyuki/diskimage-builder | b32a3096be775dd5f2f48b923a71c1a509c54090 | [
"Apache-2.0"
] | 8 | 2017-11-18T19:53:44.000Z | 2020-05-06T12:43:45.000Z | diskimage_builder/elements/iso/README.rst | sitedata/diskimage-builder | 1ac31afd6297c2a9a0673b0cde17e18230c3b977 | [
"Apache-2.0"
] | 5 | 2015-07-16T14:20:25.000Z | 2019-09-30T08:27:09.000Z | ===
iso
===
Generates a bootable ISO image from the kernel/ramdisk generated by the
elements ``baremetal``, ``ironic-agent`` or ``ramdisk``. It uses isolinux to boot on BIOS
machines and grub to boot on EFI machines.
This element has been tested on the following distro(s):
* ubuntu
* fedora
* debian
**NOTE**: For other distros, please make sure the ``isolinux.bin`` file
exists at ``/usr/lib/syslinux/isolinux.bin``.
baremetal element
-----------------
When used with the ``baremetal`` element, this generates a bootable ISO image
named ``<image-name>-boot.iso`` booting the generated kernel and ramdisk.
It also automatically appends kernel command-line argument
'root=UUID=<uuid-of-the-root-partition>'. Any more kernel command-line
arguments required may be provided by specifying them in
``DIB_BOOT_ISO_KERNEL_CMDLINE_ARGS``.
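For example, an illustrative build might look like the following (the element list
and kernel arguments depend entirely on your image)::

    export DIB_BOOT_ISO_KERNEL_CMDLINE_ARGS="console=ttyS0,115200"
    disk-image-create -o my-image ubuntu baremetal iso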
**NOTE**: It uses pre-built efiboot.img by default to work for UEFI machines.
This is because of a bug in the latest version of grub [1]. The user may choose
to avoid using the pre-built binary and build efiboot.img on their own machine
by setting the environment variable DIB\_UEFI\_ISO\_BUILD\_EFIBOOT to 1 (this
might work only on certain versions of grub). The current efiboot.img was
generated by the method build\_efiboot\_img() in 100-build-iso on
Ubuntu 13.10 with grub 2.00-19ubuntu2.1.
ramdisk element
---------------
When used with the ``ramdisk`` element, this generates a bootable ISO image
named ``<image-name>.iso`` booting the generated kernel and ramdisk. It also
automatically appends kernel command-line argument 'boot\_method=vmedia'
which is required for Ironic drivers ``iscsi_ilo``.
ironic-agent element
--------------------
When used with the ``ironic-agent`` element, this generates a bootable ISO image named ``<image-name>.iso`` which boots the agent kernel and agent ramdisk.
**REFERENCES**
[1] https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1378658
.. source: docs/data/qhkc/index.rst (mssdk, BSD-3-Clause)

mssdk 奇货可查
=================
The mssdk QHKC (奇货可查) module provides detailed documentation of the data interfaces offered by QHKC.
.. toctree::
:maxdepth: 2
commodity.md
broker.md
index_data.md
fundamental.md
tools.md
fund.md

.. source: manuals/sources/devtools/debug_messages.rst (logtalk3, Apache-2.0)

``debug_messages``
==================
By default, ``debug`` and ``debug(Group)`` messages are only printed
when the ``debug`` flag is turned on. These messages are also suppressed
when compiling code with the ``optimize`` flag turned on. This tool
supports selective enabling of ``debug`` and ``debug(Group)`` messages
in normal and debug modes.
API documentation
-----------------
This tool API documentation is available at:
`../../docs/library_index.html#debug-messages <../../docs/library_index.html#debug-messages>`__
For general information on debugging, open in a web browser the
following link and consult the debugging section of the User Manual:
`../../manuals/userman/debugging.html <../../manuals/userman/debugging.html>`__
Loading
-------
This tool can be loaded using the query:
::
| ?- logtalk_load(debug_messages(loader)).
Testing
-------
To test this tool, load the ``tester.lgt`` file:
::
| ?- logtalk_load(debug_messages(tester)).
Usage
-----
The tool provides two sets of predicates. The first set allows enabling
and disabling of all ``debug`` and ``debug(Group)`` messages for a given
component. The second set allows enabling and disabling of
``debug(Group)`` messages for a given group and component for
fine-grained control.
Upon loading the tool, all debug messages are skipped. The user is then
expected to use the tool API to selectively enable the messages that
will be printed. As an example, consider the following object, part of an
``xyz`` component:
::
:- object(foo).
:- public([bar/0, baz/0]).
:- uses(logtalk, [print_message/3]).
bar :-
print_message(debug(bar), xyz, @'bar/0 called').
baz :-
print_message(debug(baz), xyz, @'baz/0 called').
:- end_object.
Assuming the object ``foo`` is compiled and loaded in normal or debug
mode, after also loading this tool, ``bar/0`` and ``baz/0`` messages
will not print any debug messages:
::
| ?- {debug_messages(loader), foo}.
...
yes
| ?- foo::(bar, baz).
yes
We can then enable all debug messages for the ``xyz`` component:
::
| ?- debug_messages::enable(xyz).
yes
| ?- foo::(bar, baz).
bar/0 called
baz/0 called
yes
Or we can selectively enable only debug messages for a specific group:
::
| ?- debug_messages::disable(xyz).
yes
| ?- debug_messages::enable(xyz, bar).
yes
| ?- foo::(bar, baz).
bar/0 called
yes
.. source: news/page-buffer.rst (h5py, BSD-3-Clause)

New features
------------
* Enable setting the file space page size when creating new HDF5 files. A new named argument ``fs_page_size`` is added to the ``File()`` class.
* Enable HDF5 page buffering, a low-level caching feature that may improve overall I/O performance in some cases. Three new named arguments are added to the ``File()`` class: ``page_buf_size``, ``min_meta_keep``, and ``min_raw_keep``.
* Get and reset HDF5 page buffering statistics. Available as the low-level API of the ``FileID`` class.
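A minimal sketch of how these arguments fit together (the file name and sizes are
illustrative; page buffering requires a file created with the ``page`` file-space
strategy, and the low-level method name is assumed from the usual ``H5F`` mapping):

.. code-block:: python

    import h5py
    import numpy as np

    # Create a file using the PAGE file-space strategy with 16 KiB pages.
    with h5py.File("paged.h5", "w", fs_strategy="page", fs_page_size=16384) as f:
        f["data"] = np.arange(1000)

    # Reopen it with a 1 MiB page buffer enabled.
    with h5py.File("paged.h5", "r", page_buf_size=1024 * 1024) as f:
        data = f["data"][:]
        stats = f.id.get_page_buffering_stats()  # low-level FileID API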
Exposing HDF5 functions
-----------------------
* ``H5Freset_page_buffering_stats``
* ``H5Fget_page_buffering_stats``
* ``H5Pset_file_space_page_size``
* ``H5Pget_file_space_page_size``
* ``H5Pset_page_buffer_size``
* ``H5Pget_page_buffer_size``
| 43.941176 | 231 | 0.72423 |
654067ddc87861b8e8662d32d30058e52459af35 | 368 | rst | reStructuredText | docs/examine_files.rst | hobnobpirate/examine_files | ac1c6c9331124aa187b1dff022e106aca404cec7 | [
"MIT"
] | null | null | null | docs/examine_files.rst | hobnobpirate/examine_files | ac1c6c9331124aa187b1dff022e106aca404cec7 | [
"MIT"
] | null | null | null | docs/examine_files.rst | hobnobpirate/examine_files | ac1c6c9331124aa187b1dff022e106aca404cec7 | [
"MIT"
] | null | null | null | examine\_files package
======================
Submodules
----------
examine\_files.examine\_files module
------------------------------------
.. automodule:: examine_files.examine_files
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: examine_files
:members:
:undoc-members:
:show-inheritance:
.. source: docs/account-ref.rst (canvasapi, MIT)

=======
Account
=======
.. autoclass:: canvasapi.account.Account
:members:
===================
AccountNotification
===================
.. autoclass:: canvasapi.account.AccountNotification
:members:
=============
AccountReport
=============
.. autoclass:: canvasapi.account.AccountReport
:members:
====
Role
====
.. autoclass:: canvasapi.account.Role
:members:
===========
SSOSettings
===========
.. autoclass:: canvasapi.account.SSOSettings
:members:

.. source: stack.rst (gtk3-tutorial, CC0-1.0)

Stack
=====
The Stack widget is similar to a :doc:`notebook` in that it shows only one child at a time.
The Stack on its own does not provide any way for the user to change which child is visible, so if this functionality is required, the :doc:`stackswitcher` should also be used.
===========
Constructor
===========
A new Stack is constructed with::
GtkWidget *stack = gtk_stack_new();
=======
Methods
=======
A child can be added to the Stack using::
gtk_stack_add_named(GTK_STACK(stack), child, name);
gtk_stack_add_titled(GTK_STACK(stack), child, name, title);
The *child* parameter should be specified, with a typical child being a container. The *name* parameter is specified to identify the child. The *title* argument is displayed on the StackSwitcher if in use.
The visible child can be set by declaring the child and name using::
gtk_stack_set_visible_child(GTK_STACK(stack), child);
gtk_stack_set_visible_child_name(GTK_STACK(stack), name);
The currently visible child and name can be retrieved using the methods::
gtk_stack_get_visible_child(GTK_STACK(stack));
gtk_stack_get_visible_child_name(GTK_STACK(stack));
The Stack can be made homogeneous, so that the same size is requested for all children, using::
gtk_stack_set_homogeneous(GTK_STACK(stack), homogeneous);
The vertical and horizontal homogeneous setting can be controlled individually via::
gtk_stack_set_vhomogeneous(GTK_STACK(stack), homogeneous);
gtk_stack_set_hhomogeneous(GTK_STACK(stack), homogeneous);
The transition animation used when switching between children can be set with::
gtk_stack_set_transition_type(GTK_STACK(stack), transition);
The *transition* value can be set to any of the following:
* ``GTK_STACK_TRANSITION_TYPE_NONE``
* ``GTK_STACK_TRANSITION_TYPE_CROSSFADE``
* ``GTK_STACK_TRANSITION_TYPE_SLIDE_RIGHT``
* ``GTK_STACK_TRANSITION_TYPE_SLIDE_LEFT``
* ``GTK_STACK_TRANSITION_TYPE_SLIDE_UP``
* ``GTK_STACK_TRANSITION_TYPE_SLIDE_DOWN``
* ``GTK_STACK_TRANSITION_TYPE_SLIDE_LEFT_RIGHT``
* ``GTK_STACK_TRANSITION_TYPE_SLIDE_UP_DOWN``
* ``GTK_STACK_TRANSITION_TYPE_OVER_UP``
* ``GTK_STACK_TRANSITION_TYPE_OVER_DOWN``
* ``GTK_STACK_TRANSITION_TYPE_OVER_LEFT``
* ``GTK_STACK_TRANSITION_TYPE_OVER_RIGHT``
* ``GTK_STACK_TRANSITION_TYPE_UNDER_UP``
* ``GTK_STACK_TRANSITION_TYPE_UNDER_DOWN``
* ``GTK_STACK_TRANSITION_TYPE_UNDER_LEFT``
* ``GTK_STACK_TRANSITION_TYPE_UNDER_RIGHT``
* ``GTK_STACK_TRANSITION_TYPE_OVER_UP_DOWN``
* ``GTK_STACK_TRANSITION_TYPE_OVER_DOWN_UP``
* ``GTK_STACK_TRANSITION_TYPE_OVER_LEFT_RIGHT``
* ``GTK_STACK_TRANSITION_TYPE_OVER_RIGHT_LEFT``
Transition animation times can also be set by calling::
gtk_stack_set_transition_duration(GTK_STACK(stack), duration);
The *duration* parameter should be specified in milliseconds.
=======
Example
=======
Below is an example of a Stack:
.. literalinclude:: _examples/stack.c
Download: :download:`Stack <_examples/stack.c>`
.. source: CHANGELOG.rst (nano-python, MIT)

Changelog
=========
Version 2.0.1 (2018-02-17)
--------------------------
- Add "id" parameter to `nano.rpc.Client.send` method to avoiding dup sends
Version 2.0.0 (2018-02-11)
--------------------------
- Nano rebrand - the `raiblocks` module has been replaced with `nano`
- Pypi package name is now `nano-python`
- Use `import nano` instead of `import raiblocks`
- rpc client has been renamed from `nano.rpc.RPCClient` to `nano.rpc.Client`
Version 1.1.0 (2018-02-11)
--------------------------
- RPC no longer automatically retries backend calls since this could lead to
double sends when a `send` call was issued twice due to a timeout
- RPC `.delegators()` now returns voting weights as integers instead of strings
.. source: Doc/src/hash/k12.rst (pycryptodome, Unlicense)

KangarooTwelve
==============
KangarooTwelve is an *extendable-output function* (XOF) based on the Keccak permutation,
which is also the basis for SHA-3.
As an XOF, KangarooTwelve is a generalization of a cryptographic hash function.
It is not limited to creating fixed-length digests (e.g., SHA-256 will always output exactly 32 bytes):
it produces digests of any length, and it can be used as a Pseudo Random Generator (PRG).
Output bits do **not** depend on the output length.
KangarooTwelve is not standardized. However, an RFC_ is being written.
It provides 128 bits of security against (second) pre-image attacks when the output is at least 128 bits long.
It provides the same security level against collision attacks when the output is at least 256 bits long.
In addition to hashing, KangarooTwelve allows for domain separation
via a customization string (``custom`` parameter to :func:`Crypto.Hash.KangarooTwelve.new`).
.. hint::
For instance, if you are using KangarooTwelve in two applications,
by picking different customization strings you can ensure
that they will never end up using the same digest in practice.
The important factor is that the strings are different;
what the strings say does not matter.
In the following example, we extract 26 bytes (208 bits) from the XOF::
>>> from Crypto.Hash import KangarooTwelve as K12
>>>
>>> kangaroo = K12.new(custom=b'Email Signature')
>>> kangaroo.update(b'Some data')
>>> print(kangaroo.read(26).hex())
61e571c51da64228a85d495f3546c43a4dd2c1fd5de87e45dc58
.. _RFC: https://datatracker.ietf.org/doc/draft-irtf-cfrg-kangarootwelve/
.. automodule:: Crypto.Hash.KangarooTwelve
:members:
.. source: docs/locale/ru_RU/source/videos.rst (fabric-docs-i18n, CC-BY-4.0)

Tutorial Videos
===============
All of the tutorial videos are available on the Hyperledger Fabric YouTube channel.
.. raw:: html
<iframe width="560" height="315" src="https://www.youtube.com/embed/ZgKAahU3FcM?list=PLfuKAwZlKV0_--JYykteXjKyq0GA9j_i1" frameborder="0" allowfullscreen></iframe>
<br/><br/>
These are video presentations by the developers demonstrating various v1 features and components, such as:
the ledger, channels, the gossip protocol, software development kits (SDKs), chaincode, the Membership Service Provider, and much more.
.. source: Help/prop_test/TIMEOUT_AFTER_MATCH.rst (CMake 3.23, BSD-3-Clause)

TIMEOUT_AFTER_MATCH
-------------------
.. versionadded:: 3.6
Change a test's timeout duration after a matching line is encountered
in its output.
Usage
^^^^^
.. code-block:: cmake
add_test(mytest ...)
set_property(TEST mytest PROPERTY TIMEOUT_AFTER_MATCH "${seconds}" "${regex}")
Description
^^^^^^^^^^^
Allow a test ``seconds`` to complete after ``regex`` is encountered in
its output.
When the test outputs a line that matches ``regex`` its start time is
reset to the current time and its timeout duration is changed to
``seconds``. Prior to this, the timeout duration is determined by the
:prop_test:`TIMEOUT` property or the :variable:`CTEST_TEST_TIMEOUT`
variable if either of these are set. Because the test's start time is
reset, its execution time will not include any time that was spent
waiting for the matching output.
:prop_test:`TIMEOUT_AFTER_MATCH` is useful for avoiding spurious
timeouts when your test must wait for some system resource to become
available before it can execute. Set :prop_test:`TIMEOUT` to a longer
duration that accounts for resource acquisition and use
:prop_test:`TIMEOUT_AFTER_MATCH` to control how long the actual test
is allowed to run.
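For example (the timeout values and regular expression below are illustrative):

.. code-block:: cmake

    add_test(NAME wait_for_db COMMAND run_db_test)
    set_tests_properties(wait_for_db PROPERTIES TIMEOUT 300)
    # Once the test prints a line matching "database ready", it has 20 seconds
    # left to finish.
    set_property(TEST wait_for_db PROPERTY TIMEOUT_AFTER_MATCH "20" "database ready")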
If the required resource can be controlled by CTest you should use
:prop_test:`RESOURCE_LOCK` instead of :prop_test:`TIMEOUT_AFTER_MATCH`.
This property should be used when only the test itself can determine
when its required resources are available.
.. source: api/autoapi/dia2/IDiaEnumDebugStreamData/index.rst (lucasvfventura/Docs, Apache-2.0)
IDiaEnumDebugStreamData Interface
=================================
.. contents::
:local:
Syntax
------
.. code-block:: csharp
public interface IDiaEnumDebugStreamData
GitHub
------
`View on GitHub <https://github.com/aspnet/testing/blob/master/src/Microsoft.Dnx.TestHost/DIA/IDiaEnumDebugStreamData.cs>`_
.. dn:interface:: dia2.IDiaEnumDebugStreamData
Methods
-------
.. dn:interface:: dia2.IDiaEnumDebugStreamData
:noindex:
:hidden:
.. dn:method:: dia2.IDiaEnumDebugStreamData.Clone(out dia2.IDiaEnumDebugStreamData)
:type ppenum: dia2.IDiaEnumDebugStreamData
.. code-block:: csharp
void Clone(out IDiaEnumDebugStreamData ppenum)
.. dn:method:: dia2.IDiaEnumDebugStreamData.GetEnumerator()
:rtype: System.Collections.IEnumerator
.. code-block:: csharp
IEnumerator GetEnumerator()
.. dn:method:: dia2.IDiaEnumDebugStreamData.Item(System.UInt32, System.UInt32, out System.UInt32, out System.Byte)
:type index: System.UInt32
:type cbData: System.UInt32
:type pcbData: System.UInt32
:type pbData: System.Byte
.. code-block:: csharp
void Item(uint index, uint cbData, out uint pcbData, out byte pbData)
.. dn:method:: dia2.IDiaEnumDebugStreamData.Next(System.UInt32, System.UInt32, out System.UInt32, out System.Byte, out System.UInt32)
:type celt: System.UInt32
:type cbData: System.UInt32
:type pcbData: System.UInt32
:type pbData: System.Byte
:type pceltFetched: System.UInt32
.. code-block:: csharp
void Next(uint celt, uint cbData, out uint pcbData, out byte pbData, out uint pceltFetched)
.. dn:method:: dia2.IDiaEnumDebugStreamData.Reset()
.. code-block:: csharp
void Reset()
.. dn:method:: dia2.IDiaEnumDebugStreamData.Skip(System.UInt32)
:type celt: System.UInt32
.. code-block:: csharp
void Skip(uint celt)
Properties
----------
.. dn:interface:: dia2.IDiaEnumDebugStreamData
:noindex:
:hidden:
.. dn:property:: dia2.IDiaEnumDebugStreamData.count
:rtype: System.Int32
.. code-block:: csharp
int count { get; }
.. dn:property:: dia2.IDiaEnumDebugStreamData.name
:rtype: System.String
.. code-block:: csharp
string name { get; }
.. source: README.rst (fortran_magic, BSD-3-Clause)

Fortran magic
=============
Compile and import everything from a Fortran code cell, using f2py.
The contents of the cell are written to a `.f90` file in the
directory `IPYTHONDIR/fortran` using a filename with the hash of the
code. This file is then compiled. The resulting module
is imported and all of its symbols are injected into the user's
namespace.
:author: Martín Gaitán <gaitan@gmail.com>
:homepage: https://github.com/mgaitan/fortran_magic
:example: see `this notebook <http://nbviewer.ipython.org/urls/raw.github.com/mgaitan/fortran_magic/master/example_notebook.ipynb>`_
Install
=======
Install the extension with `%install_ext` ::
In[1]: %install_ext https://raw.github.com/mgaitan/fortran_magic/master/fortranmagic.py
Usage
=====
Once it's installed, you can load it with `%load_ext fortranmagic`. Then put your Fortran code in a cell starting with the cell magic `%%fortran`.
For example::
In[2]: %load_ext fortranmagic
In[3]: %%fortran
subroutine f1(x, y, z)
real, intent(in) :: x,y
real, intent(out) :: z
z = sin(x+y)
end subroutine f1
Every symbol is automatically imported. So `f1` is already available::
In[4]: f1(1.0, 2.1415)
Out[4]: 9.26574066397734e-05
.. source: AUTHORS.rst (pyfx, MIT)

====================
Project contributors
====================
* Joseph Melettukunnel <jmelett@gmail.com>
* Jonathan Stoppani <jonathan@stoppani.name>
* Esteban Zacharzewski <zacha.eg@gmail.com>
.. source: Documentation/source/Cpp/lib3mf_ColorGroup.rst (lib3mf, BSD-2-Clause)
CColorGroup
====================================================================================================
.. cpp:class:: Lib3MF::CColorGroup : public CResource
.. cpp:function:: Lib3MF_uint32 GetCount()
Retrieves the count of base materials in this Color Group.
:returns: returns the count of colors within this color group.
.. cpp:function:: void GetAllPropertyIDs(std::vector<Lib3MF_uint32> & PropertyIDsBuffer)
Returns all the PropertyIDs of all colors within this group.
:param PropertyIDsBuffer: PropertyID of the color in the color group.
.. cpp:function:: Lib3MF_uint32 AddColor(const sColor & TheColor)
Adds a new value.
:param TheColor: The new color
:returns: PropertyID of the new color within this color group.
.. cpp:function:: void RemoveColor(const Lib3MF_uint32 nPropertyID)
Removes a color from the color group.
:param nPropertyID: PropertyID of the color to be removed from the color group.
.. cpp:function:: void SetColor(const Lib3MF_uint32 nPropertyID, const sColor & TheColor)
Sets a color value.
:param nPropertyID: PropertyID of a color within this color group.
:param TheColor: The color
.. cpp:function:: sColor GetColor(const Lib3MF_uint32 nPropertyID)
Gets a color value.
:param nPropertyID: PropertyID of a color within this color group.
:returns: The color
.. cpp:type:: std::shared_ptr<CColorGroup> Lib3MF::PColorGroup
Shared pointer to CColorGroup to easily allow reference counting.
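A minimal usage sketch with the dynamic C++ bindings (``loadLibrary``, ``CreateModel``
and ``AddColorGroup`` come from the wrapper API; the ``sColor`` member names are
assumed from the generated type headers):

.. code-block:: cpp

    #include "lib3mf_implicit.hpp"

    int main() {
        Lib3MF::PWrapper wrapper = Lib3MF::CWrapper::loadLibrary();
        Lib3MF::PModel model = wrapper->CreateModel();
        Lib3MF::PColorGroup colorGroup = model->AddColorGroup();

        Lib3MF::sColor red; // assumed members: m_Red, m_Green, m_Blue, m_Alpha
        red.m_Red = 255; red.m_Green = 0; red.m_Blue = 0; red.m_Alpha = 255;

        Lib3MF_uint32 redID = colorGroup->AddColor(red);      // new PropertyID
        Lib3MF::sColor fetched = colorGroup->GetColor(redID); // read it back
        (void)fetched;
        return 0;
    }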
.. source: docs/avi_ipamdnsproviderprofile.rst (ansible-collection-alb, CNRI-Python)

.. vmware.alb.avi_ipamdnsproviderprofile:
**********************************************
vmware.alb.avi_ipamdnsproviderprofile
**********************************************
**Module for setup of IpamDnsProviderProfile Avi RESTful Object**
.. contents::
:local:
:depth: 1
Synopsis
--------
- This module is used to configure IpamDnsProviderProfile object.
- More examples at (https://github.com/avinetworks/devops).
Parameters
----------
.. raw:: html
<table border=0 cellpadding=0 class="documentation-table">
<tr>
<th colspan="2">Parameter</th>
<th>Choices/<font color="blue">Defaults</font></th>
<th width="100%">Comments</th>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>state</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
<ul style="margin: 0; padding: 0">
<li>absent</li>
<li><div style="color: blue"><b>present</b> ←</div></li>
</ul>
</td>
<td>
<div style="font-size: small">
- The state that should be applied on the entity.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>avi_api_update_method</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
<ul style="margin: 0; padding: 0">
<li><div style="color: blue"><b>put</b> ←</div></li>
<li>patch</li>
</ul>
</td>
<td>
<div style="font-size: small">
- Default method for object update is HTTP PUT.
</div>
<div style="font-size: small">
- Setting to patch will override that behavior to use HTTP PATCH.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>avi_api_patch_op</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
<ul style="margin: 0; padding: 0">
<li><div style="color: blue"><b>add</b> ←</div></li>
<li>replace</li>
<li>delete</li>
<li>remove</li>
</ul>
</td>
<td>
<div style="font-size: small">
- Patch operation to use when using avi_api_update_method as patch.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>avi_patch_path</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td></td>
<td>
<div style="font-size: small">
- Patch path to use when using avi_api_update_method as patch.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>avi_patch_value</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td></td>
<td>
<div style="font-size: small">
- Patch value to use when using avi_api_update_method as patch.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>allocate_ip_in_vrf</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">bool</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- If this flag is set, only allocate ip from networks in the virtual service vrf.
</div>
<div style="font-size: small">
- Applicable for avi vantage ipam only.
</div>
<div style="font-size: small">
- Field introduced in 17.2.4.
</div>
<div style="font-size: small">
- Default value when not specified in API or module is interpreted by Avi Controller as False.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>aws_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is aws.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>azure_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is microsoft azure.
</div>
<div style="font-size: small">
- Field introduced in 17.2.1.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>configpb_attributes</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Protobuf versioning for config pbs.
</div>
<div style="font-size: small">
- Field introduced in 21.1.1.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>custom_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is custom.
</div>
<div style="font-size: small">
- Field introduced in 17.1.1.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>gcp_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is google cloud.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>infoblox_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is infoblox.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>internal_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is avi.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>labels</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">list</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Key value pairs for granular object access control.
</div>
<div style="font-size: small">
- Also allows for classification and tagging of similar objects.
</div>
<div style="font-size: small">
- Field deprecated in 20.1.5.
</div>
<div style="font-size: small">
- Field introduced in 20.1.2.
</div>
<div style="font-size: small">
- Maximum of 4 items allowed.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>markers</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">list</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- List of labels to be used for granular rbac.
</div>
<div style="font-size: small">
- Field introduced in 20.1.5.
</div>
<div style="font-size: small">
- Allowed in basic edition, essentials edition, enterprise edition.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>name</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
<div style="font-size: small">
<b>required: true</b>
</div>
</td>
<td>
<div style="font-size: small">
- Name for the ipam/dns provider profile.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>oci_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details for oracle cloud.
</div>
<div style="font-size: small">
- Field introduced in 18.2.1,18.1.3.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>openstack_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details if type is openstack.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>proxy_configuration</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Field introduced in 17.1.1.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>tenant_ref</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- It is a reference to an object of type tenant.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>tencent_profile</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">dict</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Provider details for tencent cloud.
</div>
<div style="font-size: small">
- Field introduced in 18.2.3.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>type</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
<div style="font-size: small">
<b>required: true</b>
</div>
</td>
<td>
<div style="font-size: small">
- Provider type for the ipam/dns provider profile.
</div>
<div style="font-size: small">
- Enum options - IPAMDNS_TYPE_INFOBLOX, IPAMDNS_TYPE_AWS, IPAMDNS_TYPE_OPENSTACK, IPAMDNS_TYPE_GCP, IPAMDNS_TYPE_INFOBLOX_DNS, IPAMDNS_TYPE_CUSTOM,
</div>
<div style="font-size: small">
- IPAMDNS_TYPE_CUSTOM_DNS, IPAMDNS_TYPE_AZURE, IPAMDNS_TYPE_OCI, IPAMDNS_TYPE_TENCENT, IPAMDNS_TYPE_INTERNAL, IPAMDNS_TYPE_INTERNAL_DNS,
</div>
<div style="font-size: small">
- IPAMDNS_TYPE_AWS_DNS, IPAMDNS_TYPE_AZURE_DNS.
</div>
<div style="font-size: small">
- Allowed in basic(allowed values- ipamdns_type_internal) edition, essentials(allowed values- ipamdns_type_internal) edition, enterprise edition.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>url</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Avi controller URL of the object.
</div>
</td>
</tr>
<tr>
<td colspan="2">
<div class="ansibleOptionAnchor" id="parameter-"></div>
<b>uuid</b>
<a class="ansibleOptionLink" href="#parameter-" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">str</span>
</div>
</td>
<td>
</td>
<td>
<div style="font-size: small">
- Uuid of the ipam/dns provider profile.
</div>
</td>
</tr>
</table>
<br/>
Examples
--------
.. code-block:: yaml
- hosts: localhost
connection: local
collections:
- vmware.alb
vars:
avi_credentials:
username: "{{ username }}"
password: "{{ password }}"
controller: "{{ controller }}"
api_version: "{{ api_version }}"
tasks:
- name: Create IPAM DNS provider setting
avi_ipamdnsproviderprofile:
avi_credentials: "{{ avi_credentials }}"
internal_profile:
dns_service_domain:
- domain_name: ashish.local
num_dns_ip: 1
pass_through: true
record_ttl: 100
- domain_name: guru.local
num_dns_ip: 1
pass_through: true
record_ttl: 200
ttl: 300
name: Ashish-DNS
tenant_ref: /api/tenant?name=Demo
type: IPAMDNS_TYPE_INTERNAL
Authors
~~~~~~~
- Gaurav Rastogi (grastogi@vmware.com)
- Sandeep Bandi (sbandi@vmware.com)
- Amol Shinde (samol@vmware.com)
.. source: source/Photoshop/TextFont/typename.rst (photoshop-docs, MIT)

.. _TextFont.typename:
================================================
TextFont.typename
================================================
:ref:`string` **typename**
Description
-----------
The class name of the object.
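A short illustrative ExtendScript snippet reading this property::

    var font = app.fonts[0];
    alert(font.typename); // "TextFont"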
.. source: api/autoapi/Microsoft/AspNetCore/Mvc/DataAnnotations/Internal/MvcDataAnnotationsMvcOptionsSetup/index.rst (aspnet Docs, Apache-2.0)
MvcDataAnnotationsMvcOptionsSetup Class
=======================================
Sets up default options for :any:`Microsoft.AspNetCore.Mvc.MvcOptions`\.
Namespace
:dn:ns:`Microsoft.AspNetCore.Mvc.DataAnnotations.Internal`
Assemblies
* Microsoft.AspNetCore.Mvc.DataAnnotations
----
.. contents::
:local:
Inheritance Hierarchy
---------------------
* :dn:cls:`System.Object`
* :dn:cls:`Microsoft.Extensions.Options.ConfigureOptions{Microsoft.AspNetCore.Mvc.MvcOptions}`
* :dn:cls:`Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup`
Syntax
------
.. code-block:: csharp
public class MvcDataAnnotationsMvcOptionsSetup : ConfigureOptions<MvcOptions>, IConfigureOptions<MvcOptions>
.. dn:class:: Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup
:hidden:
.. dn:class:: Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup
Constructors
------------
.. dn:class:: Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup
:noindex:
:hidden:
.. dn:constructor:: Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup.MvcDataAnnotationsMvcOptionsSetup(System.IServiceProvider)
:type serviceProvider: System.IServiceProvider
.. code-block:: csharp
public MvcDataAnnotationsMvcOptionsSetup(IServiceProvider serviceProvider)
Methods
-------
.. dn:class:: Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup
:noindex:
:hidden:
.. dn:method:: Microsoft.AspNetCore.Mvc.DataAnnotations.Internal.MvcDataAnnotationsMvcOptionsSetup.ConfigureMvc(Microsoft.AspNetCore.Mvc.MvcOptions, System.IServiceProvider)
:type options: Microsoft.AspNetCore.Mvc.MvcOptions
:type serviceProvider: System.IServiceProvider
.. code-block:: csharp
public static void ConfigureMvc(MvcOptions options, IServiceProvider serviceProvider)
.. source: docs/_include/mooring.rst (WEC-Sim, Apache-2.0)
This section provides an overview of WEC-Sim's mooring class features; for more information about the mooring class code structure, refer to :ref:`man/code_structure:Mooring Class`.
Floating WEC systems are often connected to mooring lines to keep the device in position. WEC-Sim allows the user to model the mooring dynamics in the simulation by specifying the mooring matrix or coupling with MoorDyn. To include mooring connections, the user can use the mooring block (i.e., Mooring Matrix block or MoorDyn block) given in the WEC-Sim library under Moorings lib and connect it between the body and the Global reference frame.
Refer to the :ref:`man/advanced_features:RM3 with MoorDyn`, and the :ref:`webinar4` for more information.
MoorDyn is hosted on a separate `MoorDyn repository <https://github.com/WEC-Sim/moorDyn>`_. It must be downloaded separately, and all files and folders should be placed in the ``$WECSIM/functions/moorDyn`` directory.
Mooring Matrix
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the mooring matrix block is used, the user first needs to initiate the mooring class by setting :code:`mooring(i) = mooringClass('mooring name')` in the WEC-Sim input file (``wecSimInputFile.m``). Typically, the mooring connection location also needs to be specified, :code:`mooring(i).ref = [1x3]` (the default connection location is ``[0 0 0]``). The user can also define the mooring matrix properties in the WEC-Sim input file using:
* Mooring stiffness matrix - :code:`mooring(i).matrix.k = [6x6]` in [N/m]
* Mooring damping matrix - :code:`mooring(i).matrix.c = [6x6]` in [Ns/m]
* Mooring pretension - :code:`mooring(i).matrix.preTension = [1x6]` in [N]
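Putting these together, a minimal mooring-matrix setup in ``wecSimInputFile.m`` could look as follows (the connection location and stiffness value below are purely illustrative placeholders):

.. code-block:: matlab

    % Illustrative mooring matrix setup; all numbers are placeholders
    mooring(1) = mooringClass('mooring');          % initialize the mooring class
    mooring(1).ref = [0 0 -1];                     % connection location
    mooring(1).matrix.k = zeros(6,6);              % stiffness matrix [N/m]
    mooring(1).matrix.k(3,3) = 1e4;                % e.g. heave stiffness only
    mooring(1).matrix.c = zeros(6,6);              % damping matrix [Ns/m]
    mooring(1).matrix.preTension = [0 0 0 0 0 0];  % pretension [N]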
.. Note::
"i" indicates the mooring number. More than one mooring can be specified in the WEC-Sim model when the mooring matrix block is used.
MoorDyn
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the MoorDyn block is used, the user needs to initiate the mooring class by setting :code:`mooring = mooringClass('mooring name')` in the WEC-Sim input file (``wecSimInputFile.m``), followed by the number of mooring lines that are defined in MoorDyn (``mooring(1).moorDynLines = <Number of mooring lines>``).
A mooring folder that includes a moorDyn input file (``lines.txt``) is required in the simulation folder.
.. Note::
WEC-Sim/MoorDyn coupling only allows one mooring configuration in the simulation.
RM3 with MoorDyn
""""""""""""""""""""""""""""""
This section describes how to simulate a mooring-connected WEC system in WEC-Sim using MoorDyn. In this example case, the RM3 two-body floating point absorber is connected to a three-point catenary mooring system with an angle of 120° between the lines. The RM3 with MoorDyn folder is located under the `WEC-Sim Applications <https://github.com/WEC-Sim/WEC-Sim_Applications>`_ repository.
* **WEC-Sim Simulink Model**: Start out by following the instructions on how to model the :ref:`man/tutorials:Two-Body Point Absorber (RM3)`. To couple WEC-Sim with MoorDyn, the MoorDyn Block is added in parallel to the constraint block.
.. _WECSimmoorDyn:
.. figure:: /_static/images/WECSimMoorDyn.png
:width: 320pt
:align: center
* **WEC-Sim Input File**: In the ``wecSimInputFile.m`` file, the user needs to initiate the mooring class and define the number of mooring lines.
.. _WECSimInputMoorDyn:
.. rli:: https://raw.githubusercontent.com/WEC-Sim/WEC-Sim_Applications/master/Mooring/MoorDyn/wecSimInputFile.m
:language: matlab
* **MoorDyn Input File**: A mooring folder that includes a MoorDyn input file (``lines.txt``) is created. The MoorDyn input file (``lines.txt``) is shown in the figure below. More details on how to set up the MoorDyn input file are described in the MoorDyn User Guide :cite:`Hall2015MoorDynGuide`.
.. _moorDynInput:
.. figure:: /_static/images/moorDynInput.png
:width: 400pt
:align: center
* **Simulation and Post-processing**: Simulation and post-processing follow the same process as described in the Tutorial section.
.. Note::
You may need to install the MinGW-w64 compiler to run this simulation.
| 59.376812 | 446 | 0.739077 |
a3da527886a3ff3bed1e26ee0beae06186cb0dfc | 2,133 | rst | reStructuredText | source/monster/glabrezu.rst | rptb1/dnd-srd-sphinx | 0cbe362834d41021dfd368b718e8efe7591a229f | [
"OML"
] | 1 | 2021-12-11T01:08:40.000Z | 2021-12-11T01:08:40.000Z | source/monster/glabrezu.rst | rptb1/dnd-srd-sphinx | 0cbe362834d41021dfd368b718e8efe7591a229f | [
"OML"
] | null | null | null | source/monster/glabrezu.rst | rptb1/dnd-srd-sphinx | 0cbe362834d41021dfd368b718e8efe7591a229f | [
"OML"
] | null | null | null | Glabrezu
~~~~~~~~
.. https://stackoverflow.com/questions/11984652/bold-italic-in-restructuredtext
.. raw:: html
<style type="text/css">
span.bolditalic {
font-weight: bold;
font-style: italic;
}
</style>
.. role:: bi
:class: bolditalic
*Large fiend (demon), chaotic evil*
**Armor Class** 17 (natural armor)
**Hit Points** 157 (15d10 + 75)
**Speed** 40 ft.
+-----------+-----------+-----------+-----------+-----------+-----------+
| STR | DEX | CON | INT | WIS | CHA |
+===========+===========+===========+===========+===========+===========+
| 20 (+5) | 15 (+2) | 21 (+5) | 19 (+4) | 17 (+3) | 16 (+3) |
+-----------+-----------+-----------+-----------+-----------+-----------+
**Saving Throws** Str +9, Con +9, Wis +7, Cha +7
**Damage Resistances** cold, fire, lightning; bludgeoning, piercing, and
slashing from nonmagical attacks
**Damage Immunities** poison
**Condition Immunities** :ref:`poisoned`
**Senses** truesight 120 ft., passive Perception 13
**Languages** Abyssal, telepathy 120 ft.
**Challenge** 9 (5,000 XP)
:bi:`Innate Spellcasting`. The glabrezu's spellcasting ability is
Intelligence (spell save DC 16). The glabrezu can innately cast the
following spells, requiring no material components:
At will: *darkness, detect magic, dispel magic*
1/day each: *confusion, fly, power word stun*
:bi:`Magic Resistance`. The glabrezu has advantage on saving throws
against spells and other magical effects.
Actions
^^^^^^^
:bi:`Multiattack`. The glabrezu makes four attacks: two with its pincers
and two with its fists. Alternatively, it makes two attacks with its
pincers and casts one spell.
.. index:: grappled; by glabrezu pincer
:bi:`Pincer`. *Melee Weapon Attack:* +9 to hit, reach 10 ft., one
target. *Hit:* 16 (2d10 + 5) bludgeoning damage. If the target is a
Medium or smaller creature, it is :ref:`grappled` (escape DC 15). The glabrezu
has two pincers, each of which can grapple only one target.
:bi:`Fist`. *Melee Weapon Attack:* +9 to hit, reach 5 ft., one target.
*Hit:* 7 (2d4 + 2) bludgeoning damage.
| 27.346154 | 79 | 0.598687 |
125853f2d128ccde57172d3339f4574df8348fd3 | 661 | rst | reStructuredText | README.rst | q60/zhenyan | f0b11231ab4f0b04f4643a87becdcdf9e7cb5f16 | [
"MIT"
] | 8 | 2021-08-23T19:55:10.000Z | 2022-01-09T09:50:07.000Z | README.rst | q60/zhenyan | f0b11231ab4f0b04f4643a87becdcdf9e7cb5f16 | [
"MIT"
] | null | null | null | README.rst | q60/zhenyan | f0b11231ab4f0b04f4643a87becdcdf9e7cb5f16 | [
"MIT"
] | null | null | null | zhēnyán |v|
===========
**箴言** (IPA: [ʈ͡ʂən⁵⁵ jɛn³⁵]) is a random quote-fetching
console utility written in V.
|screenshot|
Installing
----------
+ Use latest pre-built binary from `releases <https://github.com/q60/zhenyan/releases>`__
Building and installing manually
--------------------------------
You need *GNU make* and *V* to build **箴言**.
.. code-block:: sh
make
sudo make install
Running
-------
.. code-block:: sh
zhenyan
Uninstalling
------------
.. code-block:: sh
sudo make uninstall
.. |screenshot| image:: https://i.imgur.com/SOKASzJ.png
.. |v| image:: https://img.shields.io/badge/-V-FFFFFF?style=for-the-badge&logo=v
| 16.525 | 89 | 0.612708 |
802ee5aed3aacc37c1205dc3c94951870accff38 | 13,518 | rst | reStructuredText | docs/main.rst | akavel/metar | a5d4a5432fb96042a813376ba764e0b6fb5094ad | [
"MIT"
] | 15 | 2020-01-09T08:38:36.000Z | 2021-11-03T03:58:09.000Z | docs/main.rst | akavel/metar | a5d4a5432fb96042a813376ba764e0b6fb5094ad | [
"MIT"
] | 2 | 2021-02-26T17:49:24.000Z | 2021-03-27T00:02:03.000Z | docs/main.rst | akavel/metar | a5d4a5432fb96042a813376ba764e0b6fb5094ad | [
"MIT"
] | 1 | 2021-02-26T09:59:12.000Z | 2021-02-26T09:59:12.000Z | =================
Metar Processing Details
=================
* `Reader Processing and Error Handling`_
* `Metadata Structure`_
* `Required & Common Sections`_
* `Image Section`_
* `The Meta Section`_
* `Ranges Section`_
* `Xmp Section`_
* `Exif Section`_
* `Iptc Section`_
* `Key Names`_
* `Scan Disk for Images`_
Reader Processing and Error Handling
====================================
To read metadata, metar loops through its readers, jpeg and tiff
(in the future there could be many more readers, png, gif, etc).
The reader quickly checks that the file is one that it
understands by looking at the first few bytes of the file. For
unrecognized images it generates an UnknownFormatError and the
next reader runs.
If the reader understands the image, it reads the file's bytes
and generates the metadata. Image file formats are complicated
and have many different versions. The reader might not know how
to read every part of the file, especially while under
development. If the reader comes across a part of the file it
doesn't know how to read, it marks that part as unknown and
continues. You can see the unknown parts in the ranges section
lines marked with an asterisk.
As the reader is processing the metadata it might encounter a
problem in the image that it cannot recover from. In this case it
generates a NotSupportedError and the error is recorded in the meta
section problems list. Then the next reader runs.
If no reader understands the image file, an empty string is
returned for the metadata.
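This control flow can be summarized with a short sketch (illustrative
Python rather than metar's actual Nim source; all names here are
hypothetical)::

  class UnknownFormatError(Exception):
      """The reader does not recognize the file format."""

  class NotSupportedError(Exception):
      """The reader hit an unrecoverable problem while decoding."""

  def get_metadata(filename, readers):
      """Try each reader in turn; return '' if none understands the file."""
      problems = []
      for reader in readers:
          try:
              return reader.read(filename, problems)
          except UnknownFormatError:
              continue  # not this reader's format, try the next one
          except NotSupportedError as err:
              # recorded in the meta section's problems list
              problems.append([reader.name, str(err)])
      return ""  # no reader understood the image file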
Metadata Structure
==================
Metar returns metadata as an ordered JSON dictionary. The
dictionary consists of key/value pairs called sections. The key
is the section name and the value contains the section
information. Example section keys: "meta", "xmp", "iptc", "ranges".
In JSON pseudo-notation::
metadata = {
key: sectionValue, # section 1
key: sectionValue, # section 2
key: sectionValue, # section 3
...
}
A section value is either an ordered dictionary or a list of dictionaries::
sectionValue = {} or
sectionValue = [{}, {}, {},...]
A sectionValue dictionary contains strings, numbers, lists or dictionaries::
  dict = {
    key: number,
    key: string,
    key: [], # list
    key: {}, # dictionary
    ...
  }
A list contains strings, numbers, lists or dictionaries::
list = [number, string, [], {}, ...]
Unlike regular JSON, no booleans or nulls are used.
Required & Common Sections
==========================
The metadata always contains the following three sections and
these have a clearly defined format as documented below:
* image
* ranges
* meta
The xmp, iptc and exif sections are common image metadata formats
and you may see them in both jpeg and tiff.
* xmp
* iptc
* exif
Depending on the reader and the image contents, you may see other
sections as well.
Image Section
=================
The image section always exists and contains information about
the images inside the image file. The section must have at least
one image. Jpeg files typically have one, Tiff files typically
have two or more. You can look here to determine the number of
images, their dimensions and the byte offsets of the image
pixels.
Here is a sample image section for a dng image::
========== image ==========
-- 1 --
width = 256
height = 171
pixels = [[37312, 168640]]
-- 2 --
width = 3596
height = 2360
pixels = [[261420, 6777513]]
-- 3 --
width = 1024
height = 683
pixels = [[168640, 261420]]
Image Fields:
* width -- the width of the image in pixels.
* height -- the height of the image in pixels.
* pixels -- a list of file offsets telling where the image pixels
are in the file. Each tuple is a half open interval, [start,
finish).
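For example, you can slice the pixel bytes of the first image out of
the file with the half-open offsets above (a minimal Python sketch)::

  with open("image.dng", "rb") as fh:
      data = fh.read()

  start, finish = 37312, 168640         # first [start, finish) entry of pixels
  thumbnail_bytes = data[start:finish]  # finish is one past the last byte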
The Meta Section
=================
The meta section always exists and it contains information about
the environment.
Here is a sample meta section::
========== meta ==========
filename = "image.jpg"
reader = "jpeg"
size = 2198
version = "0.0.4"
nimVersion = "0.19.0"
os = "macosx"
cpu = "amd64"
problems = []
readers = ["jpeg", "tiff"]
These fields always exist:
* filename -- the basename of the image file.
* reader -- the metar reader that generated the metadata.
* size -- the image file size in bytes.
* version -- the metar version number following Semantic
Versioning 2.0.0, see https://semver.org/. When new sections
and fields are added, the minor version number is
incremented. If any previous required section or field is
removed or modified that is an incompatible change and the
major version number is increased. Care is taken to only make
backward compatible changes.
* nimVersion -- the nim compiler used to build metar.
* os -- the system OS.
* cpu -- the system CPU.
* problems -- a list of problems, for example: [['jpeg', "corrupt
file at offset 2345"]]. Each problem entry contains the reader
name, and the error message. You will see entries when a reader
identified the file as one it understands but it encountered a
unrecoverable problem when decoding the file.
* readers -- the list of available readers. The readers are
processed in the order listed.
Ranges Section
=================
The ranges section always exists. It describes the file as a list
of byte ranges. You can determine where each section exists in the
file. It shows the unknown as well as the known ranges.
Here is a sample ranges section from a Jpeg image::
========== ranges ==========
SOI (0, 2)
APP0 (2, 20)
APPE (20, 36)
exif (36, 46) id
exif (46, 54) header
exif (54, 162) entries
exif (122, 2182) Padding(59932)
exif (2182, 2191) ImageDescription(270)
gap* (2191, 2192) 1 gap byte: 00 .
exif (2192, 2198) Make(271)
exif (2198, 2212) Model(272)
exif (2212, 2232) ModifyDate(306)
exif (2232, 2240) Artist(315)
gap* (2240, 4664) 2424 gap bytes: 6E 6F 6E 00 43 61 6E 6F... non.Cano
exif (4664, 4682) XPTitle(40091)
exif (4682, 4750) XPComment(40092)
gap* (4750, 4796) 46 gap bytes: 68 00 69 00 73 00 20 00... h.i.s. .
iptc (4796, 4818) header
APPD* (4796, 4948) Iptc: marker not 0x1c.
iptc* (4818, 4824) unknown header bytes
iptc (4824, 4826) header
iptc (4826, 4843) 65
iptc (4843, 4856) Keywords(25)
iptc (4856, 4866) Keywords(25)
iptc (4866, 4877) Keywords(25)
iptc (4877, 4919) Description(120)
iptc (4919, 4933) Title(5)
iptc (4933, 4947) Headline(105)
APP2* (4948, 5526)
xmp (5526, 11794)
DQT (11794, 11863)
DQT (11863, 11932)
SOF0 (11932, 11951)
DHT (11951, 11984)
DHT (11984, 12167)
DHT (12167, 12200)
DHT (12200, 12383)
DRI (12383, 12389)
SOS (12389, 12403)
scans (12403, 758218)
EOI (758218, 758220)
Each line describes a byte range of the file. The lines are
sorted.
Range columns:
* the first column is the name of the range. Often it is a
section name. You can see where the section comes from in the
file. If the reader leaves out a range, it appears here as a gap
range and is marked with an asterisk.
* the next optional column is an asterisk. The asterisk means
the reader did not understand this part of the file.
* the next column is [start, finish), where start is the offset of the
  beginning of the range and finish is one past the end.
* the next optional column is a description of the range.
Xmp Section
=================
The Extensible Metadata Platform (XMP) is an ISO standard for
storing metadata in files. The format incorporates the exif, iptc and
other metadata, so it is the most complete. It is an XML format
that metar converts to a key/value dictionary.
Here is a sample xmp section from a dng image::
========== xmp ==========
xpacket:begin = ""
xpacket:id = "W5M0MpCehiHzreSzNTczkc9d"
crs:Version = "3.2"
crs:RawFileName = "IMG_6093.dng"
crs:WhiteBalance = "As Shot"
crs:Temperature = "5000"
crs:Tint = "0"
crs:Exposure = "-0.20"
--snip--
exif:Function = "False"
exif:RedEyeMode = "False"
aux:SerialNumber = "620423455"
aux:LensInfo = "24/1 70/1 0/0 0/0"
aux:Lens = "24.0-70.0 mm"
aux:ImageNumber = "205"
aux:FlashCompensation = "0/1"
xap:MetadataDate = "2014-10-14T20:32:57-07:00"
dc:creator = ["unknown"]
xpacket:end = "w'?"
xmlns:x = "adobe:ns:meta/"
x:xmptk = "XMP toolkit 3.0-28, framework 1.6"
xmlns:rdf = "http://www.w3.org/1999/02/22-rdf-syn...
xmlns:iX = "http://ns.adobe.com/iX/1.0/"
xmlns:crs = "http://ns.adobe.com/camera-raw-setti...
xmlns:exif = "http://ns.adobe.com/exif/1.0/"
xmlns:aux = "http://ns.adobe.com/exif/1.0/aux/"
xmlns:pdf = "http://ns.adobe.com/pdf/1.3/"
xmlns:photoshop = "http://ns.adobe.com/photoshop/1.0/"
xmlns:tiff = "http://ns.adobe.com/tiff/1.0/"
xmlns:xap = "http://ns.adobe.com/xap/1.0/"
xmlns:dc = "http://purl.org/dc/elements/1.1/"
Exif Section
=================
Exchangeable image file format (Exif) is a standard metadata
format used by digital cameras and other devices. It is encoded in the
file using TIFF tags.
Note:
As you can see from the example data below, a lot of the
information doesn't mean much to the casual user. You can puzzle
out the meaning of some of the fields like the date/time, version
number and ISO, but others like exposure time and fnumber mean
little. Metar extracts and shows the file metadata content with
very little interpretation. Metar's current focus is to extract and
decode as much information as it can from the files. Interpreting at
a higher level can be implemented by post-processing metar's metadata.
Here is a sample exif section from a dng image::
========== exif4 ==========
offset = 36962
next = 0
ExposureTime(33434) = [[1, 40]]
FNumber(33437) = [[28, 10]]
ExposureProgram(34850) = [2]
ISO(34855) = [100]
ExifVersion(36864) = [48, 50, 50, 49]
DateTimeOriginal(36867) = ["2014:10:04 06:14:16"]
CreateDate(36868) = ["2014:10:04 06:14:16"]
ShutterSpeedValue(37377) = [[5321928, 1000000]]
ApertureValue(37378) = [[2970854, 1000000]]
ExposureCompensation(37380) = [[0, 2]]
MeteringMode(37383) = [1]
Flash(37385) = [16]
FocalLength(37386) = [[27, 1]]
FocalPlaneXResolution2(41486) = [[3504000, 885]]
FocalPlaneYResolution2(41487) = [[2336000, 590]]
FocalPlaneResolutionUnit2(41488) = [2]
CustomRendered(41985) = [0]
ExposureMode(41986) = [0]
WhiteBalance(41987) = [1]
SceneCaptureType(41990) = [0]
Iptc Section
=================
International Press Telecommunications Council (IPTC)
standardized the metadata exchanged between news agencies and
newspapers around 1990.
Here is a sample iptc section from an image::
========== iptc ==========
City(90) = ["", "", "", "", "", "", "City (Core) (ref2016)"]
Description(120) = "The description aka caption (ref2016)"
CaptionWriter(122) = "Description Writer (ref2016)"
Headline(105) = "The Headline (ref2016)"
Instructions(40) = "An Instruction (ref2016)"
Photographer(80) = "Creator1 (ref2016)"
Photographer's Job Title(85) = "Creator's Job Title (ref2016)"
Credit(110) = "Credit Line (ref2016)"
Source(115) = "Source (ref2016)"
Title(5) = "The Title (ref2016)"
DateCreated(55) = "20161121"
60 = "160101+0000"
Location(92) = "Sublocation (Core) (ref2016)"
ProvinceState(95) = "Province/State (Core) (ref2016)"
Country(101) = "Country (Core) (ref2016)"
CountryCode(100) = "R16"
Reference(103) = "Job Id (ref2016)"
Keywords(25) = ["Keyword1ref2016", "Keyword2ref2016", "Keyword3ref2016"]
Copyright(116) = "Copyright (Notice) 2016 IPTC - www.i...
IntellectualGenre(4) = "A Genre (ref2016)"
12 = ["IPTC:1ref2016", "IPTC:2ref2016", "IPTC:3ref2016"]
Key Names
=================
The metadata keys are often numbers to reflect the actual data in
the file. You can convert these numbers to more human readable
names using the keyName procedure.
For example the iptc copyright key is "116". The keyName
procedure will convert it to "Copyright". The getMetadata
procedure calls keyName and combines that with the original
number, for example, "Copyright(116)".
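Conceptually the conversion works like the following hypothetical
Python sketch (metar itself is written in Nim, and the mapping shown
is only a tiny fragment)::

  IPTC_KEY_NAMES = {5: "Title", 25: "Keywords", 116: "Copyright"}

  def key_name(section, key):
      """Return a readable name for a numeric key, when one is known."""
      name = IPTC_KEY_NAMES.get(key) if section == "iptc" else None
      return "%s(%d)" % (name, key) if name else str(key)

  print(key_name("iptc", 116))  # -> Copyright(116)
  print(key_name("iptc", 60))   # -> 60 (unknown keys keep their number)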
Scan Disk for Images
====================
You can use metar to scan your disk and count the image files it
recognizes. The following command counts how many images are in
your home folder on Linux. It uses the find command to list all
the files in your home folder, then feeds them to metar. It uses
grep, sort and uniq to organize them by image type. On my
machine there are 5523 jpegs and 2207 tiff files::
find ~ -type f -print0 | xargs -0 bin/metar | grep '^reader =' | sort | uniq -c
5523 reader = "jpeg"
2207 reader = "tiff"
The ranges section marks unknown ranges with an asterisk. As a
metar developer you may want to find areas to improve. You can
search for these unknown areas in all your files. For example, to
search all the files in the testfiles folder, use a command
similar to the following::
find testfiles -type f | xargs bin/metar | grep '^[a-zA-Z0-9]\+\* \|^file:'
The output is shown below. In this test several unknown ranges
were found: the APPD section has an unknown marker byte, the iptc
section has an unknown header, APP2 is unknown, and there are
some unknown gaps::
...
file: testfiles/IMG_6093.JPG
gap* (2191, 2192) 1 gap byte: 00 .
gap* (2240, 4664) 2424 gap bytes: 6E 6F 6E 00 43 61 6E 6F... non.Cano
gap* (4750, 4796) 46 gap bytes: 68 00 69 00 73 00 20 00... h.i.s. .
APPD* (4796, 4948) Iptc: marker not 0x1c.
iptc* (4818, 4824) unknown header bytes
APP2* (4948, 5526)
...
| 32.26253 | 81 | 0.684495 |
3755ef964ec65bf12321b85e566324e3a23b5b2c | 1,408 | rst | reStructuredText | docs/index.rst | xlab-si/iac-scanner-docs | c218f075494efc438b98ea3f43b64f51744730ca | [
"Apache-2.0"
] | 1 | 2022-03-01T13:26:00.000Z | 2022-03-01T13:26:00.000Z | docs/index.rst | xlab-si/iac-scanner-docs | c218f075494efc438b98ea3f43b64f51744730ca | [
"Apache-2.0"
] | null | null | null | docs/index.rst | xlab-si/iac-scanner-docs | c218f075494efc438b98ea3f43b64f51744730ca | [
"Apache-2.0"
] | null | null | null | ***************************************
Welcome to IaC Scanner's documentation!
***************************************
The following documentation explains the **IaC Scanner**.
.. toctree::
:caption: Table of Contents
:hidden:
01-intro
02-runner
03-saas
.. toctree::
:caption: More info
:hidden:
04-contact
GitHub <https://github.com/xlab-si/iac-scan-runner>
..
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _Contact:
=======
Contact
=======
You can contact the xOpera team by sending an email to xopera@xlab.si.
The list of contributors:
Current members:
- Matija Cankar (https://github.com/cankarm/): project design and ideas
- Anže Luzar (https://github.com/anzoman/): IaC Scan Runner, this documentation
.. _Acknowledgments:
===============
Acknowledgments
===============
This project is being developed by the xOpera team from the `XLAB`_ company.
You can find more information about the `xOpera`_ project and xOpera tools in `xOpera's documentation`_.
- This work is being supported by the European Union’s Horizon 2020 research and innovation programme
(grant no. 101000162, `PIACERE`_).
.. _XLAB: https://www.xlab.si/
.. _xOpera: https://www.xlab.si/solutions/orchestrator/
.. _xOpera's documentation: https://xlab-si.github.io/xopera-docs/
.. _PIACERE: https://www.piacere-project.eu/
| 23.081967 | 101 | 0.638494 |
88227deacba7bee08838e0986243ef63dc103119 | 123 | rst | reStructuredText | docs/api/index.rst | ap98nb26u/pyvisa | 6c36592c1bc26fc49785a43160cd6f27623a50fc | [
"MIT"
] | 393 | 2017-06-01T04:22:32.000Z | 2022-03-28T18:39:50.000Z | docs/api/index.rst | ap98nb26u/pyvisa | 6c36592c1bc26fc49785a43160cd6f27623a50fc | [
"MIT"
] | 462 | 2017-05-31T00:49:12.000Z | 2022-03-27T07:43:40.000Z | docs/api/index.rst | ap98nb26u/pyvisa | 6c36592c1bc26fc49785a43160cd6f27623a50fc | [
"MIT"
] | 138 | 2017-05-31T04:10:07.000Z | 2022-03-10T17:49:07.000Z | .. _api:
===
API
===
.. toctree::
:maxdepth: 1
visalibrarybase
resourcemanager
resources
constants
| 8.2 | 19 | 0.577236 |
bfe18d6e8abaf3f2ab03bab4b35177700e9ca3d8 | 961 | rst | reStructuredText | doc/source/ray-air/use-pretrained-model.rst | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | doc/source/ray-air/use-pretrained-model.rst | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | 12 | 2022-03-05T05:37:28.000Z | 2022-03-19T07:14:43.000Z | doc/source/ray-air/use-pretrained-model.rst | jianoaix/ray | 1701b923bc83905f8961c06a6a173e3eba46a936 | [
"Apache-2.0"
] | null | null | null | .. _use-pretrained-model:
Use a pretrained model for batch or online inference
=====================================================
Ray AIR moves end-to-end machine learning workloads seamlessly through the construct of ``Checkpoint``. A ``Checkpoint``
is the output of training and tuning as well as the input to downstream inference tasks.

Having said that, it is entirely possible and supported to use Ray AIR in a piecemeal fashion.
If you already have a model trained elsewhere, you can use Ray AIR for downstream tasks such as batch and
online inference. To do that, you need to convert the pretrained model, together with any preprocessing
steps, into a ``Checkpoint``.

To facilitate this, we have prepared framework-specific ``to_air_checkpoint`` helper functions.
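As an illustrative sketch (the exact import path and signature vary by framework and Ray version; treat the names below as assumptions rather than the definitive API):

.. code-block:: python

    import torch.nn as nn

    # Framework-specific helper; this import path is an assumption.
    from ray.train.torch import to_air_checkpoint

    model = nn.Linear(2, 1)  # stands in for a model trained elsewhere
    checkpoint = to_air_checkpoint(model)
    # The resulting checkpoint can now be passed to batch or online
    # inference components.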
Examples:
.. literalinclude:: doc_code/use_pretrained_model.py
:language: python
:start-after: __use_pretrained_model_start__
:end-before: __use_pretrained_model_end__
| 41.782609 | 118 | 0.75026 |
a79c4af7431927509a018404a0dc3f2740205da0 | 119 | rst | reStructuredText | docs/api/raredecay.tools.dev_tool.rst | jonas-eschle/raredecay | 6285f91e0819d01c80125f50b24e60ee5353ae2e | [
"Apache-2.0"
] | 7 | 2016-11-19T17:28:07.000Z | 2020-12-29T19:49:37.000Z | docs/api/raredecay.tools.dev_tool.rst | mayou36/raredecay | 5b319ada66ebe54f81e216efad81fc9f06237a30 | [
"Apache-2.0"
] | 23 | 2017-03-13T19:13:58.000Z | 2021-05-30T21:48:50.000Z | docs/api/raredecay.tools.dev_tool.rst | jonas-eschle/raredecay | 6285f91e0819d01c80125f50b24e60ee5353ae2e | [
"Apache-2.0"
] | 5 | 2016-12-17T19:24:13.000Z | 2021-05-31T14:32:34.000Z | dev\_tool
=========
.. automodule:: raredecay.tools.dev_tool
:members:
:undoc-members:
:show-inheritance:
| 14.875 | 40 | 0.621849 |
8419610cd1c1acd313ebc79117ad13887d70f894 | 562 | rst | reStructuredText | devstack/README.rst | haydemtzl/stx-gui | c2cf6456940cef34059e754be022b871a5001de6 | [
"Apache-2.0"
] | null | null | null | devstack/README.rst | haydemtzl/stx-gui | c2cf6456940cef34059e754be022b871a5001de6 | [
"Apache-2.0"
] | null | null | null | devstack/README.rst | haydemtzl/stx-gui | c2cf6456940cef34059e754be022b871a5001de6 | [
"Apache-2.0"
] | null | null | null | =================================
Stx-gui dashboard devstack plugin
=================================
This directory contains the stx-gui devstack plugin
To enable the plugin, add the following to your local.conf:
enable_plugin stx-gui <stx-gui GITURL> [GITREF]
where
<stx-gui GITURL> is the URL of stx-gui repository
[GITREF] is an optional git ref (branch/ref/tag). The default is master
For example:
enable_plugin stx-gui https://git.openstack.org/openstack/stx-gui
Note:
So far, this plugin is setup to work with openstack pike.
| 24.434783 | 75 | 0.653025 |
a9947216fb5b2a677f28a8afe189909dd9a3fcc5 | 1,026 | rst | reStructuredText | doc/source/software.rst | Booritas/slideio | fdee97747cc73f087a5538aef6a0315ec75becca | [
"BSD-3-Clause"
] | 6 | 2021-01-25T15:21:31.000Z | 2022-03-07T09:23:37.000Z | doc/source/software.rst | Booritas/slideio | fdee97747cc73f087a5538aef6a0315ec75becca | [
"BSD-3-Clause"
] | 3 | 2020-12-30T16:21:42.000Z | 2022-03-07T09:23:18.000Z | doc/source/software.rst | Booritas/slideio | fdee97747cc73f087a5538aef6a0315ec75becca | [
"BSD-3-Clause"
] | null | null | null | Used Software
=============
- `OpenCV (Open Source Computer Vision Library) <https://opencv.org>`_
- `GDAL library <https://gdal.org>`_
- `boost library <https://boost.org>`_
- `conan c++ package manager <https://conan.io>`_
- `pybind11 (python binding library) <https://github.com/pybind/pybind11>`_
- `libtiff library <http://libtiff.org>`_
- `libjpeg library <http://libjpeg.sourceforge.net/>`_
- `openjpeg library <https://www.openjpeg.org/>`_
- `gtest library <https://github.com/google/googletest>`_
- `tinyxml2 library <https://github.com/leethomason/tinyxml2>`_
- `pole library <http://www.dimin.net/software/pole/>`_
- `expat xml parser <https://libexpat.github.io/>`_
Used development environments
==============================
- `JetBrains <https://www.jetbrains.com/?from=slideio>`_
- `Microsoft Visual Studio <https://visualstudio.microsoft.com/>`_
- `Visual Studio Code <https://code.visualstudio.com/>`_
.. image:: images/jetbrains.png
:width: 200px
:target: https://www.jetbrains.com/?from=slideio
| 39.461538 | 75 | 0.687135 |
298f0237296840bf84876e2f840ea9e5a4b37b41 | 4,807 | rst | reStructuredText | src/whatsnew/3.1.rst | camtauxe/couchdb-documentation | b622dc9eb653bde0452e9edb2f433927df7b8c39 | [
"Apache-2.0"
] | null | null | null | src/whatsnew/3.1.rst | camtauxe/couchdb-documentation | b622dc9eb653bde0452e9edb2f433927df7b8c39 | [
"Apache-2.0"
] | null | null | null | src/whatsnew/3.1.rst | camtauxe/couchdb-documentation | b622dc9eb653bde0452e9edb2f433927df7b8c39 | [
"Apache-2.0"
] | null | null | null | .. Licensed under the Apache License, Version 2.0 (the "License"); you may not
.. use this file except in compliance with the License. You may obtain a copy of
.. the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
.. License for the specific language governing permissions and limitations under
.. the License.
.. _release/3.1.x:
============
3.1.x Branch
============
.. contents::
:depth: 1
:local:
.. _release/3.1.1:
Version 3.1.1
=============
Features and Enhancements
-------------------------
.. rst-class:: open
* :ghissue:`3102`, :ghissue:`1600`, :ghissue:`2877`, :ghissue:`2041`: When a
client disconnects unexpectedly, CouchDB will no longer log a "``normal :
unknown``" error. Bring forth the rainbows.
.. figure:: ../../images/gf-gnome-rainbows.png
:align: center
:alt: The Gravity Falls gnome pukes some rainbows for us.
* :ghissue:`3109`: Drilldown parameters for text index searches may now be
specified as a list of lists, to avoid having to define this redundantly
in a single query. (Some languages don't have this facility.)
* :ghissue:`3132`: The new ``[chttpd] buffer_response`` option can be enabled
to delay the start of a response until the end has been calculated. This
increases memory usage, but simplifies client error handling as it
eliminates the possibility that a response may be deliberately
terminated midway through, due to a timeout. This config value may be
changed at runtime, without impacting any in-flight responses.
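  For example, to enable it (an illustrative ``local.ini`` snippet)::

      [chttpd]
      buffer_response = true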
Performance
-----------
Bugfixes
--------
* :ghissue:`2935`: The replicator now correctly picks jobs to restart during
rescheduling, where previously with high load it may have failed to try to
restart crashed jobs.
* :ghissue:`2981`: When handling extremely large documents (≥50MB), CouchDB
can no longer time out on a ``gen_server:call`` if bypassing the IOQ.
* :ghissue:`2941`: CouchDB will no longer fail to compact databases if it
finds files from a 2.x compaction process (prior to an upgrade) on disk.
* :ghissue:`2955`: CouchDB now sends the correct CSP header to ensure
  Fauxton operates correctly with newer browsers.
* :ghissue:`3061`, :ghissue:`3080`: The `couch_index` server won't crash
and log errors if a design document is deleted while that index is
building, or when a ddoc is added immediately after database creation.
* :ghissue:`3078`: CouchDB now checks for and complains correctly about
invalid parameters on database creation.
* :ghissue:`3090`: CouchDB now encodes URLs correctly when
  encoding the ``atts_since`` query string.
* :ghissue:`2953`: Some parameters not allowed for text-index queries on
  partitioned databases are now properly validated and rejected.
* :ghissue:`3118`: Text-based search indexes may now be cleaned up
correctly, even if the design document is now invalid.
* :ghissue:`3121`: ``fips`` is now only reported in the welcome message
if FIPS mode was enabled at boot (such as in ``vm.args``).
* :ghissue:`3128`: Using :method:`COPY` to copy a document will no longer
return a JSON result with two ``ok`` fields.
* :ghissue:`3138`: Malformed URLs in replication requests or documents
will no longer throw an error.
Other
-----
* JS tests skip faster now.
* More JS tests ported into elixir: ``reader_acl``, ``reduce_builtin``,
``reduce_false``, ``rev_stemming``, ``update_documents``,
``view_collation_raw``, ``view_compaction``, all the
``view_multi_key`` tests, ``view_sandboxing``,
``view_update_seq``.
.. _release/3.1.0:
Version 3.1.0
=============
Features and Enhancements
-------------------------
.. rst-class:: open
* :ghissue:`2648`: Authentication via :ref:`JSON Web Token (JWT) <api/auth/jwt>`. Full
documentation is at the friendly link.
* :ghissue:`2770`: CouchDB now supports linking against SpiderMonkey 68, the current
Mozilla SpiderMonkey ESR release. This provides direct support for packaging on the
latest operating system variants, including Ubuntu 20.04 "Focal Fossa."
* A new Fauxton release is included, with updated dependencies, and a new optional
CouchDB news page.
Performance
-----------
.. rst-class:: open
* :ghissue:`2754`: Optimized compactor performance, resulting in a 40% speed improvement
when document revisions approach the ``revs_limit``. The fixes also include additional
metrics on size tracking during the sort and copy phases, accessible via the
:get:`GET /_active_tasks </active_tasks>` endpoint.
* A big bowl of candy! OK, no, not really. If you got this far...thank you for reading.
| 34.833333 | 88 | 0.719992 |
9f567b5a5cb8d79871c67b92559504fac6cdc5df | 3,043 | rst | reStructuredText | docs/source/output/output.rst | LukasK13/ESBO-ETC | d1db999f1670f2777c5227d79629d421f03e5393 | [
"Apache-2.0"
] | null | null | null | docs/source/output/output.rst | LukasK13/ESBO-ETC | d1db999f1670f2777c5227d79629d421f03e5393 | [
"Apache-2.0"
] | null | null | null | docs/source/output/output.rst | LukasK13/ESBO-ETC | d1db999f1670f2777c5227d79629d421f03e5393 | [
"Apache-2.0"
] | null | null | null | .. _output:
******
Output
******
The results of the computation are printed as a table to the command line after each run of ESBO-ETC.
An exemplary output is shown below for the calculation of the SNR::
┏━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ # ┃ Exposure Time ┃ SNR ┃
┡━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ 1 │ 2.3000e+03 s │ 2.3838e-02 │
└──────┴───────────────┴────────────┘
Depending on the computed quantity (SNR, exposure time, sensitivity), the layout may vary.
Besides the general output of the computation result, some more information is written to files in the output directory (see also :ref:`output_dir`).
The structure depends on the detector type used.
Imager Detector
===============
The signal, background, read noise and dark current in number of collected electrons are written as matrices to separate files.
These files can be either CSV or FITS files depending on the settings in the configuration file.
The data written to these files is reduced to the relevant region containing the photometric aperture.
Nevertheless, the reduction strategy is written to the file header to allow a lossless restoration of the original pixel matrix.
An exemplary CSV file may look like this::
# Signal in electrons
# Range reduced to nonzero values.
# The origin is in the top left corner, starting with 0.
# Column index range: 507 - 516
# Row index range: 507 - 516
#
1.804512872189814043e-01, 1.807562922319483345e-01, ...
1.807562922319483345e-01, 1.810618127424260260e-01, ...
... , ... , ...
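The header information is enough to restore the full pixel matrix, for example with the following minimal Python sketch (assuming numpy and a known full detector shape)::

    import numpy as np

    def restore_full_frame(path, detector_shape):
        """Embed a reduced CSV matrix back into the full pixel grid."""
        cols = rows = None
        with open(path) as fh:
            for line in fh:
                if line.startswith("# Column index range:"):
                    cols = [int(v) for v in line.split(":")[1].split("-")]
                elif line.startswith("# Row index range:"):
                    rows = [int(v) for v in line.split(":")[1].split("-")]
        reduced = np.genfromtxt(path, delimiter=",", comments="#")
        full = np.zeros(detector_shape)
        # The header ranges are inclusive, hence the +1 on the upper bounds.
        full[rows[0]:rows[1] + 1, cols[0]:cols[1] + 1] = reduced
        return full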
Heterodyne Instrument
=====================
In the case of the heterodyne instrument, the spectral signal temperature, background temperature, RMS noise temperature and SNR are written to a CSV file in the output directory.
An exemplary output file is shown below.
+--------------------+------------------------+----------------------------+---------------------------+----------------------+
| wavelength [nm] | Signal Temperature [K] | Background Temperature [K] | RMS Noise Temperature [K] | SNR [-] |
+====================+========================+============================+===========================+======================+
| 207499.99999999997 | 0.002272235573155022 | 133.25311622347617 | 0.09403710578848441 | 0.024163180636656255 |
+--------------------+------------------------+----------------------------+---------------------------+----------------------+
| 207500.15818541934 | 0.002272264945847538 | 133.25312130031648 | 0.09403717788245403 | 0.02416347445781345 |
+--------------------+------------------------+----------------------------+---------------------------+----------------------+
| ... | ... | ... | ... | ... |
+--------------------+------------------------+----------------------------+---------------------------+----------------------+
| 55.327273 | 175 | 0.495235 |
ad35ec99b4458b50a538df51552423850e91c587 | 351 | rst | reStructuredText | doc/source/_static/documentation.rst | ambros-gleixner/rubberband | 72dd935dbc4bed93860fdcaa0cbe752bcbd6e395 | [
"MIT"
] | 4 | 2018-03-25T15:01:20.000Z | 2020-06-22T14:34:01.000Z | doc/source/_static/documentation.rst | ambros-gleixner/rubberband | 72dd935dbc4bed93860fdcaa0cbe752bcbd6e395 | [
"MIT"
] | 41 | 2016-12-19T21:17:41.000Z | 2021-12-13T19:50:34.000Z | doc/source/_static/documentation.rst | ambros-gleixner/rubberband | 72dd935dbc4bed93860fdcaa0cbe752bcbd6e395 | [
"MIT"
] | 1 | 2017-10-06T13:52:57.000Z | 2017-10-06T13:52:57.000Z | Documentation
=============
How to build the documentation?
-------------------------------
You can compile it like this::
source venv/bin/activate
cd doc/
make doc
where `venv` is the name of your virtual environment.
Convention
----------
The rubberband documentation uses `PEP257 <http://www.python.org/dev/peps/pep-0257/>`_ style.
| 17.55 | 93 | 0.623932 |
07bd2c6865aab9ac433f4c39aaea03975a9abe40 | 5,126 | rst | reStructuredText | reference/commands/consumer/config.rst | bryceschober/docs | 0b2cbd5dd711cff27076b90e7eea5c09a6131e55 | [
"MIT"
] | null | null | null | reference/commands/consumer/config.rst | bryceschober/docs | 0b2cbd5dd711cff27076b90e7eea5c09a6131e55 | [
"MIT"
] | null | null | null | reference/commands/consumer/config.rst | bryceschober/docs | 0b2cbd5dd711cff27076b90e7eea5c09a6131e55 | [
"MIT"
] | null | null | null |
.. _conan_config:
conan config
============
.. code-block:: bash
$ conan config [-h] {rm,set,get,install} ...
Manages Conan configuration. Edits the conan.conf or installs config files.
.. code-block:: text
positional arguments:
{rm,set,get,install} sub-command help
rm Remove an existing config element
set Set a value for a configuration item
get Get the value of configuration item
install install a full configuration from a local or remote
zip file
optional arguments:
-h, --help show this help message and exit
**Examples**
- Change the logging level to 10:
.. code-block:: bash
$ conan config set log.level=10
- Get the logging level:
.. code-block:: bash
$ conan config get log.level
$> 10
.. _conan_config_install:
conan config install
--------------------
The ``conan config install`` command is intended to share the Conan client configuration. For example, in a company or organization,
it is important to have common ``settings.yml``, ``profiles``, etc.

It retrieves a zip file from a local directory or URL and applies the files to the local Conan configuration.

The zip may contain only a subset of all the allowed configuration files; only the files present will be
replaced. The exception is the **conan.conf** file: only the variables declared in the zipped ``conan.conf`` file
are applied, and the rest of the local variables are kept.

Profile files will be overwritten if already present, but no other profile files that the user
has on the local machine will be deleted.
All files in the zip will be copied to the conan home directory.
These are the special files and the rules applied to merge them:
+--------------------------------+----------------------------------------------------------------------+
| File | How it is applied |
+================================+======================================================================+
| profiles/MyProfile | Overrides the local ~/.conan/profiles/MyProfile if already exists |
+--------------------------------+----------------------------------------------------------------------+
| settings.yml | Overrides the local ~/.conan/settings.yml |
+--------------------------------+----------------------------------------------------------------------+
| remotes.txt | Overrides remotes. Will remove remotes that are not present in file |
+--------------------------------+----------------------------------------------------------------------+
| config/conan.conf | Merges the variables, overriding only the declared variables |
+--------------------------------+----------------------------------------------------------------------+
The file *remotes.txt* is the only file listed above which does not have a direct counterpart in
the ``~/.conan`` folder. Its format is a list of entries, one on each line, with the form
.. code-block:: text
[remote name] [remote url] [bool]
where ``[bool]`` (either ``True`` or ``False``) indicates whether SSL should be used to verify that remote.
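For example, a *remotes.txt* defining two remotes could read as follows (the remote names and URLs here are purely illustrative):

.. code-block:: text

    conan-center https://conan.bintray.com True
    company-remote http://conan.example.com:9300 False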
The local cache *registry.txt* file contains the remotes definitions, as well as the mapping from packages
to remotes. In general it is not a good idea to add it to the installed files. That being said, the remote
definitions part of the *registry.txt* file uses the format required for *remotes.txt*, so you may find it
provides a helpful starting point when writing a *remotes.txt* to be packaged in a Conan
client configuration.
The specified URL will be stored in the ``general.config_install`` variable of the ``conan.conf`` file,
so subsequent calls to the :command:`conan config install` command don't need to specify the URL.
**Examples**:
- Install the configuration from a URL:
.. code-block:: bash
$ conan config install http://url/to/some/config.zip
The :command:`conan config install` command stores the specified URL in the ``general.config_install`` variable of *conan.conf*.
- Install the configuration from a Git repository:
.. code-block:: bash
$ conan config install http://github.com/user/conan_config/.git
You can also force the git download by using :command:`--type git` (in case it is not deduced from the URL automatically):
.. code-block:: bash
$ conan config install http://github.com/user/conan_config/.git --type git
- Install from a URL skipping SSL verification:
.. code-block:: bash
$ conan config install http://url/to/some/config.zip --verify-ssl=False
This will disable the SSL check of the certificate. This option defaults to ``True``.
- Refresh the configuration again:
.. code-block:: bash
$ conan config install
There is no need to specify the URL again; it is already stored.
- Install the configuration from a local path:
.. code-block:: bash
$ conan config install /path/to/some/config.zip
| 37.97037 | 124 | 0.59208 |
47ef6c9085f19101eb90eff26834be7a645d998c | 2,451 | rst | reStructuredText | buildbot/README.rst | cyborginstitute/stack | 563a30efd3aba7387b91f5ad0be28c380f84f23b | [
"MIT"
] | 3 | 2015-01-07T07:06:46.000Z | 2021-04-03T15:26:37.000Z | buildbot/README.rst | cyborginstitute/stack | 563a30efd3aba7387b91f5ad0be28c380f84f23b | [
"MIT"
] | null | null | null | buildbot/README.rst | cyborginstitute/stack | 563a30efd3aba7387b91f5ad0be28c380f84f23b | [
"MIT"
] | 1 | 2021-04-03T15:26:42.000Z | 2021-04-03T15:26:42.000Z | ===============================================
Cyborg Institute Default Buildbot Configuration
===============================================
Synopsis
--------
This base configuration provides a very clear, very sparse framework
for developing your very own `Buildbot`_ configuration, in a sane,
organized and manageable fashion without potentially confusing
defaults. This base configuration is directly derived from the
`metabbotcfg`_ configuration file.
Full documentation will eventually reside at
`cyborginstitute.org/projects/stack/buildbot`_, which currently
mirrors the README.
Use
---
1. Make a directory for all your buildbot files. Such as: ::
mkdir ~/buildbot
2. Make a sub-directory for all "build master" configuration. This is
where the primary logic and code for your build bot config
lives. For example: ::
mkdir ~/buildbot/master
3. Copy this directory into the ``~/buildbot/master/``
directory. Modify the following as needed: ::
mv ~/downloads/buildbotcfg ~/buildbot/master/config
4. Create a symbolic link to make everything work as needed: ::
ln -s ~/buildbot/master/config/master.cfg ~/buildbot/master/master.cfg
5. In the ``~/buildbot/master/`` directory run the ``upgrade-master``
command as follows: ::
buildbot upgrade-master
6. Make modifications to all ``.py`` files in this directory as
needed. The comments and the full documentation will help. Indeed,
seeing an existing buildbot configuration like the `metabbotcfg`_
or the `Mozilla Buildbot Configurations`_ will be useful for
developing your own build system.
You should make a practice of storing changes to the
``~/buildbot/master/config`` files in a revision control system.
7. Start the buildbot in the ``~/buildbot/master/`` directory: ::
buildbot start
Rejoice. You have a buildbot. Also familiarize yourself with the
`Buildbot`_ documentation for more information about setting up your
own buildbot configuration.
If you have any issues with this, please send a message to the `cyborg
institute listserv`_.
.. _`Buildbot`: http://buildbot.net
.. _`cyborginstitute.org/projects/stack/buildbot`: http://cyborginstitute.org/projects/stack/buildbot
.. _`metabbotcfg`: https://github.com/buildbot/metabbotcfg
.. _`Mozilla Buildbot Configurations`: https://github.com/mozilla/buildbot-configs
.. _`cyborg institute listserv`: http://lists.cyborginstitute.net/listinfo/institute
| 35.014286 | 101 | 0.729498 |
eb3535b1dae3541d36ce538ff9fb4581fb54c6ca | 2,299 | rst | reStructuredText | docs/modules/zend.module.manager.best.practices.rst | ezimuel/zf2-documentation | a2b7532ed4417acd5feb43016eddc3c33dd002b5 | [
"Zend-2.0"
] | null | null | null | docs/modules/zend.module.manager.best.practices.rst | ezimuel/zf2-documentation | a2b7532ed4417acd5feb43016eddc3c33dd002b5 | [
"Zend-2.0"
] | null | null | null | docs/modules/zend.module.manager.best.practices.rst | ezimuel/zf2-documentation | a2b7532ed4417acd5feb43016eddc3c33dd002b5 | [
"Zend-2.0"
] | null | null | null |
.. _zend.module-manager.best-practices:
Best Practices when Creating Modules
====================================
When creating a ZF2 module, there are some best practices you should keep in mind.
- **Keep the init() method lightweight.** Be conservative with the actions you perform in the ``init()`` and ``onBootstrap()`` methods of your ``Module`` class. These methods are run for **every** page request, and should not perform anything heavy. As a rule of thumb, registering event listeners is an appropriate task to perform in these methods. Such lightweight tasks will generally not have a measurable impact on the performance of your application, even with many modules enabled. It is considered bad practice to utilize these methods for setting up or configuring instances of application resources such as a database connection, application logger, or mailer. Tasks such as these are better served through the service manager capabilities of Zend Framework 2. A minimal sketch illustrating this rule follows this list.
- **Do not perform writes within a module.** You should **never** code your module to perform or expect any writes within the module's directory. Once installed, the files within a module's directory should always match the distribution verbatim. Any user-provided configuration should be performed via overrides in the Application module or via application-level configuration files. Any other required filesystem writes should be performed in some writeable path that is outside of the module's directory.
There are two primary advantages to following this rule. First, any modules which attempt to write within themselves will not be compatible with phar packaging. Second, by keeping the module in sync with the upstream distribution, updates via mechanisms such as Git will be simple and trouble-free. Of course, the Application module is a special exception to this rule, as there is typically no upstream distribution for this module, and it's unlikely you would want to run this package from within a phar archive.
- **Utilize a vendor prefix for module names.** To avoid module naming conflicts, you are encouraged to prefix your module namespace with a vendor prefix. As an example, the (incomplete) developer tools module distributed by Zend is named "ZendDeveloperTools" instead of simply "DeveloperTools".
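As a minimal sketch of the first rule above (the namespace, event name and listener body are illustrative placeholders rather than prescribed names):

.. code-block:: php

    <?php
    namespace VendorExample;

    class Module
    {
        public function onBootstrap($e)
        {
            // Lightweight work only: this method runs on every page request.
            $events = $e->getApplication()->getEventManager();
            $events->attach('dispatch', function ($event) {
                // React to the dispatch event here.
            });

            // Heavy resources (database connections, loggers, mailers)
            // belong in service manager factories, not in this method.
        }
    }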
| 121 | 770 | 0.788604 |
734abf0b58742005591a54f52688cd036f81b72f | 131 | rst | reStructuredText | docs/rst/rsgui.install.rst | EHRI/rspub-gui | 8f9a7105ba7ccfb5bda777d3a12b09505ac0c863 | [
"Apache-2.0"
] | null | null | null | docs/rst/rsgui.install.rst | EHRI/rspub-gui | 8f9a7105ba7ccfb5bda777d3a12b09505ac0c863 | [
"Apache-2.0"
] | 1 | 2017-04-20T08:08:13.000Z | 2017-04-22T14:20:54.000Z | docs/rst/rsgui.install.rst | isabella232/rspub-gui | 8f9a7105ba7ccfb5bda777d3a12b09505ac0c863 | [
"Apache-2.0"
] | 2 | 2017-02-15T09:04:06.000Z | 2021-06-22T08:20:51.000Z | Installation
============
.. toctree::
rsgui.install.wininst.rst
rsgui.install.winfold.rst
rsgui.install.macinst.rst
| 14.555556 | 29 | 0.648855 |
3aef7c3609732642fbcf3f265e913af27dfd0cf5 | 7,739 | rst | reStructuredText | docs/averaging-api.rst | JoshVStaden/codex-africanus | 4a38994431d51510b1749fa0e4b8b6190b8b530f | [
"BSD-3-Clause"
] | null | null | null | docs/averaging-api.rst | JoshVStaden/codex-africanus | 4a38994431d51510b1749fa0e4b8b6190b8b530f | [
"BSD-3-Clause"
] | null | null | null | docs/averaging-api.rst | JoshVStaden/codex-africanus | 4a38994431d51510b1749fa0e4b8b6190b8b530f | [
"BSD-3-Clause"
] | null | null | null | ---------
Averaging
---------
Routines for averaging visibility data.
Time and Channel Averaging
--------------------------
The routines in this section average row-based samples by:
1. Averaging samples of consecutive **time** values into bins defined
   by a period of :code:`time_bin_secs` seconds.
2. Averaging channel data into equally sized bins of :code:`chan_bin_size`.
In order to achieve this, a **baseline x time** ordering is established
over the input data where **baseline** corresponds to the
unique **(ANTENNA1, ANTENNA2)** pairs and **time** corresponds
to the unique, monotonically increasing **TIME** values
associated with the rows of a Measurement Set.
======== === === === === ===
Baseline T0 T1 T2 T3 T4
======== === === === === ===
(0, 0) 0.1 0.2 0.3 0.4 0.5
(0, 1) 0.1 0.2 0.3 0.4 0.5
(0, 2) 0.1 0.2 X 0.4 0.5
(1, 1) 0.1 0.2 0.3 0.4 0.5
(1, 2) 0.1 0.2 0.3 0.4 0.5
(2, 2) 0.1 0.2 0.3 0.4 0.5
======== === === === === ===
It is possible for times or baselines to be missing. In the above
example, T2 is missing for baseline (0, 2).
.. warning::
The above requires unique lexicographical
combinations of (TIME, ANTENNA1, ANTENNA2). This can usually
be achieved by suitably partitioning input data on indexing rows,
DATA_DESC_ID and SCAN_NUMBER in particular.
For each baseline, adjacent time's are assigned to a bin
if :math:`h_c - h_e/2 - (l_c - l_e/2) <` :code:`time_bin_secs`, where
:math:`h_c` and :math:`l_c` are the upper and lower time and
:math:`h_e` and :math:`l_e` are the upper and lower intervals,
taken from the **INTERVAL** column.
Note that no distinction is made between flagged and unflagged data
when establishing the endpoints in the bin.
The reason for this is that the `Measurement Set v2.0 Specification
<https://casa.nrao.edu/Memos/229.html>`_ specifies that
**TIME** and **INTERVAL** columns
are defined as containing the *nominal*
time and period at which the visibility was sampled.
This means that their values include valid, flagged and missing data.
Thus, averaging a
regular high-resolution **baseline x htime** grid should produce
a regular low-resolution **baseline x ltime** grid (**htime > ltime**)
in the presence of bad data.
By contrast, other columns such as **TIME_CENTROID**
and **EXPOSURE** contain the *effective* time and period as
they exclude missing and bad data. Their increased accuracy,
and therefore variability means that they are unsuitable for
establishing a grid over the data.
To summarise, the averaged times in each bin establish a map:
- from possibly unordered input rows.
- to a reduced set of output rows ordered by
averaged :code:`(TIME, ANTENNA1, ANTENNA2)`.
Flagged Data Handling
~~~~~~~~~~~~~~~~~~~~~
Both **FLAG_ROW** and **FLAG** columns may be supplied to the averager,
but they should be consistent with each other. The averager will throw
an exception if this is not the case, rather than making an assumption as
to which is correct.
When provided with flags, the averager will output averages
for bins that are completely flagged.
Part of the reason for this is that the Measurement Set v2.0 Specification specifies that
the **TIME** and **INTERVAL** columns represent the *nominal* time and interval
values.
This means that they should represent valid as well as flagged or missing data
in their computation.
By contrast, most other columns such as **TIME_CENTROID** and **EXPOSURE**,
contain the *effective* values and should only include valid, unflagged data.
To support this:
1. **TIME** and **INTERVAL** are averaged using both flagged and
unflagged samples.
2. Other columns, such as **TIME_CENTROID** are handled as follows:
1. If the bin contains some unflagged data, only this data
is used to calculate average.
2. If the bin is completely flagged, the average of all samples
(which are all flagged) will be used.
3. In both cases, a completely flagged bin will have its flag set.
4. To support the two cases, twice the memory of the output array
is required to track both averages, but only one array of merged
values is returned.
Guarantees
~~~~~~~~~~
1. Averaged output data will be lexicographically ordered by
:code:`(TIME, ANTENNA1, ANTENNA2)`
2. **TIME** and **INTERVAL** columns always contain the
   *nominal* average and sum and therefore contain both flagged or missing
   and unflagged data.
3. Other columns will contain the *effective*
average and will contain only valid data *except* when
all data in the bin is flagged.
4. Completely flagged bins will be set as flagged in
both the *nominal* and *effective* case.
5. Certain columns are averaged, while others are summed,
or simply assigned to the last value in the bin in the case
of antenna indices.
6. **Visibility data** is averaged by multiplying and dividing
by **WEIGHT_SPECTRUM** or **WEIGHT** or natural weighting,
in order of priority.
.. math::
\frac{\sum v_i w_i}{\sum w_i}
7. **SIGMA_SPECTRUM** is averaged by multiplying and dividing
by **WEIGHT_SPECTRUM** or **WEIGHT** or natural weighting,
in order of priority and availability.
**SIGMA** is only averaged with **WEIGHT** or natural weighting.
.. math::
\sqrt{\frac{\sum w_i^2 \sigma_i^2}{(\sum w_i)^2}}
The following table summarizes the handling of each
column in the main Measurement Set table:
=============== ================= ============================ ===========
Column Unflagged/Flagged Aggregation Method Required
sample handling
=============== ================= ============================ ===========
TIME Nominal Mean Yes
INTERVAL Nominal Sum Yes
ANTENNA1 Nominal Assigned to Last Input Yes
ANTENNA2 Nominal Assigned to Last Input Yes
TIME_CENTROID Effective Mean No
EXPOSURE Effective Sum No
FLAG_ROW Effective Set if All Inputs Flagged No
UVW Effective Mean No
WEIGHT Effective Sum No
SIGMA Effective Weighted Mean No
DATA (vis) Effective Weighted Mean No
FLAG Effective Set if All Inputs Flagged No
WEIGHT_SPECTRUM Effective Sum No
SIGMA_SPECTRUM Effective Weighted Mean No
=============== ================= ============================ ===========
The following SPECTRAL_WINDOW sub-table columns are averaged as follows:
=============== ============================
Column Aggregation Method
=============== ============================
CHAN_FREQ Mean
CHAN_WIDTH Sum
EFFECTIVE_BW Sum
RESOLUTION Sum
=============== ============================
Dask Implementation
~~~~~~~~~~~~~~~~~~~
The dask implementation chunks data up by row and channel and
averages each chunk independently of values in other chunks. This should
be kept in mind if one wishes to maintain a particular ordering
in the output dask arrays.
Typically, Measurement Set data is monotonically ordered in time. To
maintain this guarantee in output dask arrays,
the chunks will need to be separated by distinct time values.
Practically speaking this means that the first and second chunk
should not both contain value time 0.1, for example.
Numpy
~~~~~
.. currentmodule:: africanus.averaging
.. autosummary::
time_and_channel
.. autofunction:: time_and_channel
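A short usage sketch on dummy data follows (positional arguments in the order shown above; treat the optional keyword names as assumptions based on the conventions in this section):

.. code-block:: python

    import numpy as np
    from africanus.averaging import time_and_channel

    rows, chans, corrs = 10, 16, 4
    time = np.linspace(0.1, 1.0, rows)        # monotonically increasing
    interval = np.full(rows, 0.1)
    antenna1 = np.zeros(rows, dtype=np.int32)
    antenna2 = np.ones(rows, dtype=np.int32)
    vis = np.random.random((rows, chans, corrs)).astype(np.complex64)

    avg = time_and_channel(time, interval, antenna1, antenna2,
                           vis=vis, time_bin_secs=0.3, chan_bin_size=4)

    # Averaged outputs are exposed as attributes named after the columns.
    print(avg.time.shape, avg.vis.shape)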
Dask
~~~~
.. currentmodule:: africanus.averaging.dask
.. autosummary::
time_and_channel
.. autofunction:: time_and_channel
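The dask version is called the same way, but with dask arrays whose row chunks should be separated by distinct time values as discussed above (again, a sketch under the same naming assumptions):

.. code-block:: python

    import dask.array as da
    import numpy as np
    from africanus.averaging.dask import time_and_channel

    rows, chans, corrs = 10, 16, 4
    time = da.from_array(np.linspace(0.1, 1.0, rows), chunks=5)
    interval = da.full(rows, 0.1, chunks=5)
    antenna1 = da.zeros(rows, dtype=np.int32, chunks=5)
    antenna2 = da.ones(rows, dtype=np.int32, chunks=5)
    vis = da.random.random((rows, chans, corrs),
                           chunks=(5, chans, corrs)).astype(np.complex64)

    avg = time_and_channel(time, interval, antenna1, antenna2,
                           vis=vis, time_bin_secs=0.3, chan_bin_size=4)
    avg_vis = avg.vis.compute()  # evaluate the lazy result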
| 35.995349 | 79 | 0.653185 |
a6c4fb8baca1de02e83ffc27cc0d54044b11806d | 168 | rst | reStructuredText | doc/api/genetic_code/genetic_code.rst | StephenRogers1/cogent3 | 1116a0ab14d9c29a560297205546714e2db1896c | [
"BSD-3-Clause"
] | null | null | null | doc/api/genetic_code/genetic_code.rst | StephenRogers1/cogent3 | 1116a0ab14d9c29a560297205546714e2db1896c | [
"BSD-3-Clause"
] | 9 | 2021-07-07T19:10:31.000Z | 2022-01-23T08:31:30.000Z | doc/api/genetic_code/genetic_code.rst | genomematt/cogent3 | 7594710560a148164d64fdb9231aefd3f5b33ee2 | [
"BSD-3-Clause"
] | null | null | null | :mod:`genetic_code`
===================
.. currentmodule:: cogent3.core.genetic_code
.. autosummary::
:toctree: classes
:template: class.rst
GeneticCode
| 15.272727 | 44 | 0.613095 |
cf2628a32a3ca784086b9074d91cedbf1a0d6b54 | 566 | rst | reStructuredText | doc/ccnx.rst | chris-wood/ccn-onpath-simulation-ccnsim | a3baf7fd3131a9e37744985e3c8a7178a89e9324 | [
"BSD-2-Clause"
] | null | null | null | doc/ccnx.rst | chris-wood/ccn-onpath-simulation-ccnsim | a3baf7fd3131a9e37744985e3c8a7178a89e9324 | [
"BSD-2-Clause"
] | null | null | null | doc/ccnx.rst | chris-wood/ccn-onpath-simulation-ccnsim | a3baf7fd3131a9e37744985e3c8a7178a89e9324 | [
"BSD-2-Clause"
] | null | null | null | .. include:: replace.txt
CCNx 1.0 Protocols (CCNx)
---------------------------------------
Model Description
*****************
Design
++++++
Scope and Limitations
+++++++++++++++++++++
Future Work
+++++++++++
No announced plans.
..
References
++++++++++
..
Usage
*****
..
Examples
++++++++
..
Helpers
+++++++
..
Attributes
++++++++++
..
Tracing
+++++++
..
Logging
+++++++
..
Caveats
+++++++
..
Validation
**********
Unit tests
++++++++++
Larger-scale performance tests
++++++++++++++++++++++++++++++
| 8.575758 | 39 | 0.371025 |
07293d24d566b0e3ab9d8236e5d1a7aee225e5de | 6,471 | rst | reStructuredText | docs/index.rst | paulchubatyy/github3.py | a264cc1470c973a06c60533e982f77a52ba19443 | [
"BSD-3-Clause"
] | null | null | null | docs/index.rst | paulchubatyy/github3.py | a264cc1470c973a06c60533e982f77a52ba19443 | [
"BSD-3-Clause"
] | null | null | null | docs/index.rst | paulchubatyy/github3.py | a264cc1470c973a06c60533e982f77a52ba19443 | [
"BSD-3-Clause"
] | null | null | null | github3.py
==========
Release v\ |version|.
github3.py is a wrapper for the `GitHub API`_ written in Python. The design of
github3.py is centered around having a logical organization of the methods
needed to interact with the API. Let me demonstrate this with a code example.
Example
-------
Let's get information about a user::
from github3 import login
gh = login('sigmavirus24', password='<password>')
sigmavirus24 = gh.me()
# <User [sigmavirus24:Ian Stapleton Cordasco]>
print(sigmavirus24.name)
# Ian Stapleton Cordasco
print(sigmavirus24.login)
# sigmavirus24
print(sigmavirus24.followers_count)
# 4
for f in gh.followers():
print(str(f))
kennethreitz = gh.user('kennethreitz')
# <User [kennethreitz:Kenneth Reitz]>
print(kennethreitz.name)
print(kennethreitz.login)
print(kennethreitz.followers_count)
followers = [str(f) for f in gh.followers('kennethreitz')]
More Examples
~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
examples/two_factor_auth
examples/oauth
examples/gist
examples/git
examples/github
examples/issue
examples/iterators.rst
examples/logging
examples/octocat
.. links
.. _GitHub API: http://developer.github.com
Modules
-------
.. toctree::
:maxdepth: 1
api
auths
events
gists
git
github
issues
models
notifications
orgs
pulls
repos
search_structs
structs
users
Internals
~~~~~~~~~
For objects you're not likely to see in practice. This is useful if you ever
feel the need to contribute to the project.
.. toctree::
:maxdepth: 1
models
decorators
Installation
------------
.. code-block:: sh
$ pip install github3.py
# OR:
$ git clone git://github.com/sigmavirus24/github3.py.git github3.py
$ cd github3.py
$ python setup.py install
Dependencies
~~~~~~~~~~~~
- requests_ by Kenneth Reitz
- uritemplate_ by Ian Stapleton Cordasco
.. _requests: https://github.com/kennethreitz/requests
.. _uritemplate: https://github.com/sigmavirus24/uritemplate
Contributing
------------
I'm maintaining two public copies of the project. The first can be found on
GitHub_ and the second on BitBucket_. I would prefer pull requests to take
place on GitHub, but feel free to do them via BitBucket. Please make sure to
add yourself to the list of contributors in AUTHORS.rst, especially if you're
going to be working on the list below.
.. links
.. _GitHub: https://github.com/sigmavirus24/github3.py
.. _BitBucket: https://bitbucket.org/icordasc/github3.py/src
Contributor Friendly Work
~~~~~~~~~~~~~~~~~~~~~~~~~
In order of importance:
Documentation
I know I'm not the best at writing documentation so if you want to clarify
or correct something, please do so.
Examples
Have a clever example that takes advantage of github3.py? Feel free to
share it.
Running the Unittests
~~~~~~~~~~~~~~~~~~~~~
The tests are generally run using tox. Tox can be installed like so::
pip install tox
We test against PyPy and the following versions of Python:
- 2.6
- 2.7
- 3.2
- 3.3
- 3.4
If you simply run ``tox`` it will run tests against all of these versions of
python and run ``flake8`` against the codebase as well. If you want to run
against one specific version, you can do::
tox -e py34
And if you want to run tests against a specific file, you can do::
tox -e py34 -- tests/uni/test_github.py
To run the tests, ``tox`` uses ``py.test`` so you can pass any options or
parameters to ``py.test`` after specifying ``--``. For example, you can get
more verbose output by doing::
tox -e py34 -- -vv
.. toctree::
testing
Contact
-------
- Twitter: @\ sigmavirus24_
- Private email: graffatcolmingov [at] gmail
- Mailing list: github3.py [at] librelist.com
.. _sigmavirus24: https://twitter.com/sigmavirus24
Latest Version's Changes
------------------------
.. include:: ../LATEST_VERSION_NOTES.rst
The full history of the project is available as well.
.. toctree::
project_changelog
Testimonials
------------
.. raw:: html
<blockquote class="twitter-tweet"><p>gotta hand it to @<a
href="https://twitter.com/sigmavirus24">sigmavirus24</a> ... github3.py is
really well written. It will soon be powering the github stuff on
@<a href="https://twitter.com/workforpie">workforpie</a>
    </p>&mdash; Brad Montgomery (@bkmontgomery) <a
href="https://twitter.com/bkmontgomery/status/325644863561400320">April 20, 2013</a>
</blockquote>
<blockquote class="twitter-tweet"><p>awesome github v3 api wrapper in
python <a href="https://t.co/PhD0Aj5X"
    title="https://github.com/sigmavirus24/github3.py">github.com/sigmavirus24/g…</a></p>&mdash;
Mahdi Yusuf (@myusuf3) <a
href="https://twitter.com/myusuf3/status/258571050927915008">October 17,
2012</a></blockquote>
<blockquote class="twitter-tweet">
<p>@<a href="https://twitter.com/sigmavirus24">sigmavirus24</a> github3 is
awesome! Made my life much easier tonight, which is a very good
thing.</p>— Mike Grouchy (@mgrouchy) <a
href="https://twitter.com/mgrouchy/status/316370772782350336">March 26,
2013</a></blockquote>
<blockquote class="twitter-tweet" data-conversation="none">
<p>@<a href="https://twitter.com/sigmavirus24">sigmavirus24</a> "There are
so many Python client libraries for GitHub API, I tried all of them, and
my conclusion is: github3.py is the best."</p>— Hong Minhee
(@hongminhee) <a
href="https://twitter.com/hongminhee/status/315295733899210752">March 23,
2013</a></blockquote>
<blockquote class="twitter-tweet">
<p>@<a href="https://twitter.com/sigmavirus24">sigmavirus24</a> I cannot
wait to use your github package for <a
href="https://twitter.com/search/%23zci">#zci</a>. Do you have it packaged
for debian by any chance?</p>— Zygmunt Krynicki (@zygoon) <a
href="https://twitter.com/zygoon/status/316608301527887872">March 26,
2013</a></blockquote>
<blockquote class="twitter-tweet">
<p>Developing against github3.py's API is a joy, kudos to @<a
href="https://twitter.com/sigmavirus24">sigmavirus24</a></p>—
Alejandro Gomez (@dialelo) <a
href="https://twitter.com/dialelo/status/316846075015229440">March 27,
2013</a></blockquote>
<script async src="//platform.twitter.com/widgets.js"
charset="utf-8"></script>
| 25.277344 | 96 | 0.683975 |
059aa6c92165984dad4b885fd5acf55f529435b2 | 589 | rst | reStructuredText | README.rst | Mycroft-eus/mycroft-stt-plugin-elhuyar | 845e60665062c94a480ba370e443498a957567a6 | [
"Apache-2.0"
] | null | null | null | README.rst | Mycroft-eus/mycroft-stt-plugin-elhuyar | 845e60665062c94a480ba370e443498a957567a6 | [
"Apache-2.0"
] | null | null | null | README.rst | Mycroft-eus/mycroft-stt-plugin-elhuyar | 845e60665062c94a480ba370e443498a957567a6 | [
"Apache-2.0"
] | null | null | null | mycroft-stt-plugin-elhuyar
==========================
This STT service for Mycroft requires credentials for accessing the Elhuyar STT API. Access to the API is free for developers.
Configuration parameters (only the credentials are mandatory; other parameters use their default values) ::
"stt": {
"module": "elhuyar_stt",
"elhuyar_stt": {
"api_id": "insert_your_api_id_here",
"api_key": "insert_your_api_key_here"
}
}
Installation
------------
::
mycroft-pip install mycroft-stt-plugin-elhuyar
License
-------
Apache-2.0
| 21.814815 | 126 | 0.631579 |
5f0b8cc0d790d7b72280e32bc814321b45f0b627 | 365 | rst | reStructuredText | cmake/share/cmake-3.3/Help/prop_dir/LISTFILE_STACK.rst | htfy96/htscheme | b44c9f9672f69d9b3c2eb1c80969bcfcfec9990f | [
"MIT"
] | 5 | 2015-07-07T01:30:37.000Z | 2020-08-14T10:45:01.000Z | cmake/share/cmake-3.3/Help/prop_dir/LISTFILE_STACK.rst | htfy96/htscheme | b44c9f9672f69d9b3c2eb1c80969bcfcfec9990f | [
"MIT"
] | null | null | null | cmake/share/cmake-3.3/Help/prop_dir/LISTFILE_STACK.rst | htfy96/htscheme | b44c9f9672f69d9b3c2eb1c80969bcfcfec9990f | [
"MIT"
] | null | null | null | LISTFILE_STACK
--------------
The current stack of listfiles being processed.
This property is mainly useful when trying to debug errors in your
CMake scripts. It returns a list of what list files are currently
being processed, in order. So if one listfile does an INCLUDE command
then that is effectively pushing the included listfile onto the stack.
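For example, a minimal sketch reading the property into a variable with
``get_directory_property``::

 get_directory_property(_stack LISTFILE_STACK)
 message(STATUS "listfile stack: ${_stack}")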
| 36.5 | 71 | 0.758904 |
610fc760341a486b4fe3938dd003151576822166 | 3,259 | rst | reStructuredText | lib/Doctrine/Dbal/doc/troubleshooting.rst | webburza/rollerworks-search | 8cc0e1b45df2eec84a17720dd18854c4abf76003 | [
"MIT"
] | null | null | null | lib/Doctrine/Dbal/doc/troubleshooting.rst | webburza/rollerworks-search | 8cc0e1b45df2eec84a17720dd18854c4abf76003 | [
"MIT"
] | null | null | null | lib/Doctrine/Dbal/doc/troubleshooting.rst | webburza/rollerworks-search | 8cc0e1b45df2eec84a17720dd18854c4abf76003 | [
"MIT"
] | null | null | null | Troubleshooting
===============
Why am I not getting any results
--------------------------------
Make sure you have properly configured the column mapping for the WhereBuilder.
If you are still not getting any results check the following:
#. Are there actual records in the database?
#. Are you using a result limiter like ``WHERE id = ?`` in the query?
If so try running the query without it.
#. Using multiple tables (with JOIN) will only work when all the tables
have a positive match, try using ``LEFT JOIN`` to ignore any missing
matches.
#. Are you using any custom Conversions? Check if they are missing quotes
   or are quoting an integer value. *SQLite doesn't work well with quoted
   integer values.*
#. Try using a smaller SearchCondition and make sure the values you are
searching are actually existent.
Didn't any of this work? Ask for help at the `RollerworksSearch Gitter channel`_.
Why am I getting duplicate results?
-----------------------------------
This is a known problem and is not fixable; it happens when you select from
multiple tables using a JOIN and at least one record matches in multiple
tables.
For example, say you have a head table called "users", and each user has "contacts".
These contacts are linked to the user by the "userId". If you search for
a user by its contacts, multiple contacts may be found that all point to
the same userId. Because of how relational databases work, you get the
contacts list and the linked user. But these records are flat, so one
row can contain the data of multiple tables, and therefore you get duplicate
results. *Even if you don't select from any of the contacts fields.*
How to solve this?
~~~~~~~~~~~~~~~~~~
There are a few possibilities you can consider, all of which have their
pros and cons.
#. You can use ``DISTINCT`` to remove any duplicate records, but this will
not work when you select from other columns of JOINED tables.
#. You can use ``DISTINCT`` to remove any duplicate userId's, giving you
   a list of non-duplicate userId's, but you then need to perform an
   additional query to get all the users with the found userId's
   (see the sketch after this list).
#. You can remove the duplicate values yourself using a PHP script.
But again this will not work when you need the columns from JOINING
tables.
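For instance, a minimal sketch of the second approach, reusing the table and
column names from the example above (the ``WHERE`` condition stands in for
whatever condition the search produced):

.. code-block:: sql

    SELECT DISTINCT u.userId
    FROM users AS u
    LEFT JOIN contacts AS c ON c.userId = u.userId
    WHERE c.name = 'example';

    -- Second query, using the ids found above:
    SELECT * FROM users WHERE userId IN (...);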
If this all seems like a bit too much work, you may want to consider
using `Doctrine ORM`_, which does the part of transforming flat
data to an array/object graph for you. And there is also a
`RollerworksSearch Doctrine ORM extension`_ for searching with
Doctrine ORM!
.. caution::
You may be tempted to use something like:
.. code-block:: sql
SELECT * FROM users AS u LEFT JOIN contacts as c ON c.userid = u.userid GROUP BY userId
**STOP!** This will not work because the behavior on the other columns
is unspecified. MySQL will only accept this query when you did not enable
strict-mode, but just because it's not giving any errors doesn't mean
it will work as expected.
.. _`RollerworksSearch Gitter channel`: https://gitter.im/rollerworks/RollerworksSearch
.. _`Doctrine ORM`: http://www.doctrine-project.org/projects/orm.html
.. _`RollerworksSearch Doctrine ORM extension`: https://github.com/rollerworks/rollerworks-search-doctrine-orm
| 42.881579 | 110 | 0.736729 |
a1c522f39cbba8955746b9b91faf4f30abec3c5b | 272 | rst | reStructuredText | docs/kudos.rst | EduPonz/jenkins_badges | 5dadb44d1df3dfa27cb1a1ec75c939de962a009e | [
"MIT"
] | 11 | 2017-12-22T23:38:02.000Z | 2020-12-01T08:34:51.000Z | docs/kudos.rst | EduPonz/jenkins_badges | 5dadb44d1df3dfa27cb1a1ec75c939de962a009e | [
"MIT"
] | 4 | 2017-07-18T11:48:07.000Z | 2021-01-27T16:37:00.000Z | docs/kudos.rst | EduPonz/jenkins_badges | 5dadb44d1df3dfa27cb1a1ec75c939de962a009e | [
"MIT"
] | 11 | 2018-01-03T23:05:52.000Z | 2020-11-20T11:00:39.000Z | Kudos
------
- Idea came from mnpk's `jenkins-coverage-badge <https://github.com/mnpk/jenkins-coverage-badge>`_ written in nodeJS.
- `shields.io <https://shields.io/>`_ for providing scalable badges over a clean API
- `Jenkins <https://jenkins.io/>`_ for being...jenkins
| 38.857143 | 117 | 0.720588 |
eb38e0e5aaf18f3c821b0934ec0b152524fdd30f | 69 | rst | reStructuredText | docs/source/plugindocs/trim_whitespace.rst | saksham1115/mediagoblin | 41302ad2b622b340caeb13339338ab3a5d0f7e6b | [
"CC0-1.0"
] | 60 | 2015-01-17T01:19:47.000Z | 2021-09-17T01:25:47.000Z | docs/source/plugindocs/trim_whitespace.rst | saksham1115/mediagoblin | 41302ad2b622b340caeb13339338ab3a5d0f7e6b | [
"CC0-1.0"
] | 12 | 2015-02-03T09:14:42.000Z | 2020-12-04T12:18:03.000Z | docs/source/plugindocs/trim_whitespace.rst | saksham1115/mediagoblin | 41302ad2b622b340caeb13339338ab3a5d0f7e6b | [
"CC0-1.0"
] | 23 | 2015-08-18T01:32:50.000Z | 2021-09-05T23:22:55.000Z | .. include:: ../../../mediagoblin/plugins/trim_whitespace/README.rst
| 34.5 | 68 | 0.710145 |
0ea6793f3bcc76e87aa0c6e55f226673f017bad1 | 1,875 | rst | reStructuredText | docs/command_reference/instruction/upload.rst | kurusugawa-computer/annofab-cli | 8edad492d439bc8fe64e9471464f545d07aba8b7 | [
"MIT"
] | 9 | 2019-07-22T23:54:05.000Z | 2020-11-05T06:26:04.000Z | docs/command_reference/instruction/upload.rst | kurusugawa-computer/annofab-cli | 8edad492d439bc8fe64e9471464f545d07aba8b7 | [
"MIT"
] | 389 | 2019-07-03T04:39:11.000Z | 2022-03-28T14:06:11.000Z | docs/command_reference/instruction/upload.rst | kurusugawa-computer/annofab-cli | 8edad492d439bc8fe64e9471464f545d07aba8b7 | [
"MIT"
] | 1 | 2021-08-30T14:22:04.000Z | 2021-08-30T14:22:04.000Z | =================================
instruction upload
=================================
Description
=================================
Registers an HTML file as the instructions (work guide).
Examples
=================================
Basic usage
--------------------------
Specify the path of the HTML file to register as the instructions with ``--html``.
If the src attribute of an img element refers to a local image (i.e. it has no http, https or data scheme), that image is uploaded as an image of the instructions as well.
.. code-block:: html
:caption: instruction.html
<html>
<head></head>
<body>
    Sample instructions
<img src="lenan.png">
</body>
</html>
.. code-block::
$ annofabcli instruction upload --project_id prj1 --html instruction.html
Note: Registering a Confluence page as AnnoFab instructions
------------------------------------------------------------------------
Create the HTML file by following the steps below.
1. Using Confluence's export feature, export the page that you want to register as instructions.
2. Replace the ``site.css`` contained in the exported zip with https://raw.githubusercontent.com/kurusugawa-computer/annofab-cli/master/docs/command_reference/instruction/upload/site.css .
   This is because, by default, table borders and background colors are not displayed.
3. Apply the styles of the exported HTML to style attributes, because style sheets cannot be registered in AnnoFab instructions.
    1. Open the exported file in Chrome.
    2. In the Console tab of the Chrome developer tools, run the following JavaScript to copy the computed styles of table-related elements into their style attributes.
.. code-block:: javascript
elms = document.querySelectorAll("table,thead,tbody,tfoot,caption,colgroup,col,tr,td,th");
for (let e of elms) {
s = window.getComputedStyle(e);
e.style.background = s.background;
e.style.color = s.color;
e.style.border = s.border;
e.style.borderCollapse = s.borderCollapse
}
1. Chrome開発ツールのElementタブで、html要素をコピー(Copy outerHTML)して、HTMLファイルを上書きする。
Usage Details
=================================
.. argparse::
:ref: annofabcli.instruction.upload_instruction.add_parser
:prog: annofabcli instruction upload
:nosubcommands:
:nodefaultconst:
| 26.785714 | 169 | 0.627733 |
0ef555bbd51baa1123dd003c49b9567b9700044f | 252 | rst | reStructuredText | docs/index.rst | grignards/serpyco | d2a0363d89065220c2863642bc155e81638f7f9b | [
"MIT"
] | 11 | 2020-03-22T17:09:01.000Z | 2022-03-30T09:39:33.000Z | docs/index.rst | grignards/serpyco | d2a0363d89065220c2863642bc155e81638f7f9b | [
"MIT"
] | null | null | null | docs/index.rst | grignards/serpyco | d2a0363d89065220c2863642bc155e81638f7f9b | [
"MIT"
] | 2 | 2021-12-06T13:13:03.000Z | 2021-12-21T09:55:29.000Z | ==================================
Welcome to Serpyco's documentation
==================================
.. highlight:: python
.. include:: ../README.rst
.. toctree::
:maxdepth: 2
:caption: Contents:
getting_started
benchmark
api | 16.8 | 34 | 0.472222 |
ecb8c31f5f8901a1e0fd026f5d274e4bfa11c5b3 | 125 | rst | reStructuredText | doc/source/tutorials.rst | kapteyn-astro/kapteyn | f12332cfd567c7c0da40628dcfc7b297971ee636 | [
"BSD-3-Clause"
] | 3 | 2016-04-28T08:55:33.000Z | 2018-07-23T18:35:58.000Z | doc/source/tutorials.rst | kapteyn-astro/kapteyn | f12332cfd567c7c0da40628dcfc7b297971ee636 | [
"BSD-3-Clause"
] | 2 | 2020-07-23T12:28:37.000Z | 2021-07-13T18:26:06.000Z | doc/source/tutorials.rst | kapteyn-astro/kapteyn | f12332cfd567c7c0da40628dcfc7b297971ee636 | [
"BSD-3-Clause"
] | 3 | 2017-05-03T14:01:08.000Z | 2020-07-23T12:23:28.000Z | Tutorials
=========
.. toctree::
:maxdepth: 2
wcstutorial
maputilstutorial
kmpfittutorial
tabarraytutorial
| 10.416667 | 19 | 0.656 |
566461b8a7f32d06d9e877e3a88f1d15da871482 | 198 | rst | reStructuredText | docs/snakefiles/utils.rst | graingert/snakemake | 4a63a26959cd0626e2113e48b4d4fe5a5421d954 | [
"MIT"
] | 1,326 | 2019-10-04T15:11:20.000Z | 2022-03-31T18:39:40.000Z | docs/snakefiles/utils.rst | graingert/snakemake | 4a63a26959cd0626e2113e48b4d4fe5a5421d954 | [
"MIT"
] | 1,496 | 2019-10-04T15:15:12.000Z | 2022-03-31T23:14:33.000Z | docs/snakefiles/utils.rst | graingert/snakemake | 4a63a26959cd0626e2113e48b4d4fe5a5421d954 | [
"MIT"
] | 375 | 2019-10-08T21:28:51.000Z | 2022-03-28T18:44:36.000Z | .. _snakefiles-utils:
=====
Utils
=====
The module ``snakemake.utils`` provides a collection of helper functions for common tasks in Snakemake workflows. Details can be found in :ref:`utils-api`.
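For example, one of these helpers pins a minimum Snakemake version (a minimal sketch)::

    from snakemake.utils import min_version

    # Abort early when the workflow runs under an older Snakemake.
    min_version("6.0")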
| 24.75 | 155 | 0.727273 |
a96a8e9e1f4a1c19255811d0dffb7a8d60451de4 | 210 | rst | reStructuredText | docs/source2/generated/statsmodels.sandbox.distributions.extras.pdf_moments_st.rst | GreatWei/pythonStates | c4a9b326bfa312e2ae44a70f4dfaaf91f2d47a37 | [
"BSD-3-Clause"
] | 76 | 2019-12-28T08:37:10.000Z | 2022-03-29T02:19:41.000Z | docs/source2/generated/statsmodels.sandbox.distributions.extras.pdf_moments_st.rst | cluterdidiw/statsmodels | 543037fa5768be773a3ba31fba06e16a9edea46a | [
"BSD-3-Clause"
] | 11 | 2015-07-22T22:11:59.000Z | 2020-10-09T08:02:15.000Z | docs/source2/generated/statsmodels.sandbox.distributions.extras.pdf_moments_st.rst | cluterdidiw/statsmodels | 543037fa5768be773a3ba31fba06e16a9edea46a | [
"BSD-3-Clause"
] | 35 | 2020-02-04T14:46:25.000Z | 2022-03-24T03:56:17.000Z | statsmodels.sandbox.distributions.extras.pdf\_moments\_st
=========================================================
.. currentmodule:: statsmodels.sandbox.distributions.extras
.. autofunction:: pdf_moments_st | 35 | 59 | 0.604762 |
7e0fc7d89b3451fd2d21c576dc5b1ce27fa12239 | 1,018 | rst | reStructuredText | docs/tips/download_machine.rst | fabiosangregorio/scopus | b8e72df8881d6eb24788ab1012387c55b088d2f2 | [
"MIT"
] | 11 | 2019-07-03T03:35:48.000Z | 2021-07-19T06:04:19.000Z | docs/tips/download_machine.rst | fabiosangregorio/scopus | b8e72df8881d6eb24788ab1012387c55b088d2f2 | [
"MIT"
] | null | null | null | docs/tips/download_machine.rst | fabiosangregorio/scopus | b8e72df8881d6eb24788ab1012387c55b088d2f2 | [
"MIT"
] | 6 | 2020-02-10T20:49:54.000Z | 2021-03-26T18:00:26.000Z | Download Machine
~~~~~~~~~~~~~~~~
Often one is interested in downloading (and caching) many more items than one key allows. Users would then either have to wait a week until the key resets, or change the key in the :doc:`configuration file <../configuration/>`.
However, it is also possible to programmatically change the API key that pybliometrics should use once a :ref:`Scopus429Error <Scopus429Error>` occurs:
.. code-block:: python
>>> from pybliometrics.scopus import config, AuthorRetrieval
>>> from pybliometrics.scopus.exception import Scopus429Error
>>> _keys = ["key1", "key2", "key3"]
    >>> try:
    ...     au = AuthorRetrieval("16656197000")
    ... except Scopus429Error:
    ...     # Use the last item of _keys, drop it and assign it as
    ...     # the current API key
    ...     config["Authentication"]["APIKey"] = _keys.pop()
    ...     au = AuthorRetrieval("16656197000")
>>> # continue with normal code
Of course, any other class instead of AuthorRetrieval will work as well.
| 46.272727 | 222 | 0.679764 |
9f3e78a7451e4e8b24527fda65eb68243d7f0ee5 | 3,695 | rst | reStructuredText | CHANGELOG.rst | EOX-A/OpenSarToolk | b0c24065cef282cb433b5f0c14eb535c6c7cbf0c | [
"MIT"
] | 1 | 2020-04-03T17:01:16.000Z | 2020-04-03T17:01:16.000Z | CHANGELOG.rst | EOX-A/OpenSarToolk | b0c24065cef282cb433b5f0c14eb535c6c7cbf0c | [
"MIT"
] | 10 | 2020-05-29T16:21:17.000Z | 2021-03-05T12:23:57.000Z | CHANGELOG.rst | EOX-A/OpenSarToolk | b0c24065cef282cb433b5f0c14eb535c6c7cbf0c | [
"MIT"
] | 1 | 2021-09-24T10:53:39.000Z | 2021-09-24T10:53:39.000Z | #########
Changelog
#########
-------------------
0.9.10 - 2021-06-08
-------------------
* update Dockerfile
* fix tests
* update scihub API hub Endpoint to **https://apihub.copernicus.eu/apihub**
* updated download test product from scihub to a 2021 product
-----
0.9.9
-----
* testing against ``python 3.8``
* Properly tested ``GDAL 3.x.x`` compatibility
* switched from TRAVIS CI to GITLAB Actions
* update some ``is`` statements and ``np.bool`` to just ``bool`` as recommended by numpy
* update ``Dockerfile`` and upload newer image
* now as a part of the CI
* includes ``SNAP 8.0.0``
* removed ``OrfeoToolbox`` completely for now as it does not easily support python 3.8(maybe forever)
* ``libgfortran5`` package is required for most of SLC processing
-----
0.9.8
-----
* Minor version upgrades to avoid conflicts with ``GDAL3``-dependent packages
-----
0.9.7
-----
* added asf search
* set asf search as primary search
* remove get_paths from creo and onda DIAS from Sentinel1Scene class
* remove peps from s1scene
* remove onda and peps download functions as they are not tested
* updated SLC routine to current PhiLab version
* adjusted burst processing to EOX-A release
* adjusted ard_json for slc to match PhiLab
* added Coherence test and updated polarimetric test
* Animation from time series update
* will fix properly in 0.10 along with the new mosaicing etc.
* Quickfix for GRD VV VH timeseries
* will fix properly in 0.10 along with the new mosaicing etc.
-----
0.9.6
-----
* added single geotiff output for GRD processing
* GRD batch returns updated inventory with processed products
-----
0.9.5
-----
* 2020-04-16
* temporary EOX version, core functions should work for both SLC and GRD
* Project GRD and burst processing goes up to (including) Timescans
* Mosaic creation currently under construction and will raise Warning if triggered in the project
* changed retrying module to retry
* reason: could not find how to get log/error output of the retrying module
* Conda installation is far away
* updated Timeseries and Timescan for the current GRD processing
* tests run only on 1 Product, TODO test on multiple
* TODO test on burst(s)
* Timeseries extent.shp to extent.gpkg
* godale for batch Download
* added get_bursts_by_polygon in s1.burst_inventory function
* added np_binary_erosion in helpers.raster function
* added Depre. Warning to PEPS and ONDA, currently not available
* scihub and ASF as default search and dl
* added a DownloadError as general custom DL error
* added SLC processing to the Sentinel1Scene class
* also added a test for it
* burst batch processing now returns dict with processed out_files
* GRD s1 scene now returns also "bs" and "ls" paths in dictionary
* also conversion of GRD and SLC to RGB GeoTiffs (core functions now in ost.s1.ard_to_rgb.py)
* defined a default OST Geotiff profile for rasterio in settings.py
* Project class now gets HERBERT for search and download as default
* renamed the number of cores to be used for GPT and regular concurrency to:
* config_dict['gpt_max_workers'] (down to the wrappers!)
* config_dict['godale_max_workers']
* Layover shadow masks converted to gpkg vector file
* GRD batch processing now returns updated inventory with paths to ard products
* concurrency with godale instead of multiprocessing
* add silent mode to ost.helpers.helpers run_command (no gpt output in stdout)
* burst_ts.py added it should contain all timeseries and timescan related functions
* batch/concurrency of these functions will be handled by burst_batch.py
-----
0.9.4
-----
* SLC burst processing there but in development
* GRD processing there
* no tests
* pre 2020 version of the OST | 38.092784 | 105 | 0.742896 |
a109ac8be0080657b2da6d3729bc5c5a1f2c63c4 | 7,724 | rst | reStructuredText | docs/source/playbooks.rst | ddimatos/ibm_zos_sysauto | 1a372e6dd7592fd14ae6b9252e865efcbe8fa7bb | [
"Apache-2.0"
] | null | null | null | docs/source/playbooks.rst | ddimatos/ibm_zos_sysauto | 1a372e6dd7592fd14ae6b9252e865efcbe8fa7bb | [
"Apache-2.0"
] | null | null | null | docs/source/playbooks.rst | ddimatos/ibm_zos_sysauto | 1a372e6dd7592fd14ae6b9252e865efcbe8fa7bb | [
"Apache-2.0"
] | null | null | null | .. ...........................................................................
.. © Copyright IBM Corporation 2020 .
.. ...........................................................................
=======================
Playbooks
=======================
An `Ansible playbook`_ consists of organized instructions that define the
work to be performed on a managed node (host) with Ansible.
Sample playbooks that demonstrate how to use the collection content are **included**
in the **IBM Z System Automation collection**. You can find the samples on `Github`_ or in the collection's playbooks
directory included with the installation.
For more information about the collection's installation and hierarchy, refer to the `Installation`_ documentation of this collection.
The sample playbooks can be run with the ``ansible-playbook`` command with some
modifications to the **inventory**, **ansible.cfg** and variable files.
Ansible Configuration
=====================
The Ansible configuration file ``ansible.cfg`` can override nearly all ``ansible-playbook`` configurations.
Included in the `playbooks directory`_ is a sample `ansible.cfg`_ that can supplement ``ansible-playbook`` with a little modification.
You can modify the following configuration statement to refer to your own installation path for the collection:
.. code-block:: yaml
collections_paths = ~/.ansible/collections:/usr/share/ansible/collections
For more information about available configurations for ``ansible.cfg``, see `Ansible Configuration Settings`_.
Inventory
=========
Ansible works with multiple managed nodes (hosts) at the same time, using a list or group of lists known as an `inventory`_.
Once the inventory is defined, you can use `patterns`_ to select the hosts, or groups, you want Ansible to run against.
Included in the `playbooks directory`_ is a sample inventory file `hosts`_ that with little modification
can be used to manage the target z/OS systems. This inventory file should be included when running the sample playbook.
.. code-block:: yaml
[sample]
sampleHost1
sampleHost2
* **sample**: An example of host grouping.
* **sampleHost1**: Nickname for the target z/OS system. You can modify it to refer to your own z/OS system.
Host Vars
=========
You can supply host variables either in the inventory file or in a separate variable file. Storing separate host and group
variable files may help you organize your variable values more easily.
Included in the `playbooks directory`_ are variable files in the directory `host_vars`_ one for each host nickname (provided in the hosts inventory file).
* `sampleHost1.yaml`_: It contains the variables for host ``sampleHost1``:
.. code-block:: yaml
sa_service_hostname: your.host.name
sa_service_port: port_number
sa_service_protocol: http or https
* **sa_service_hostname**: The value of this property identifies the host name of the IBM Z System Automation Operations REST server.
* **sa_service_port**: The value of this property identifies the port number of the IBM Z System Automation Operations REST server.
* **sa_service_protocol**: The value (http or https) of this property identifies if you have configured your IBM Z System Automation Operations REST server to use SSL.
Refer to the `configuration`_ of the System Automation Operations REST Server for details about these settings.
Variables
=========
We supply sample variables in a variables file. Storing separate host and variables files may help you organize your variable values more easily.
Included in the `playbooks directory`_ is a sample variables file in the directory `vars`_.
* `vars.yaml`_: It contains the variables for the playbooks:
.. code-block:: yaml
templateName: name_of_the_template
subsystem: subsystem_name
system: system_name
job: jobname
# procedure: procedureName
# comment: comment
# group: group
# sdesc: "a short description"
* **templateName**: The value of this property specifies the template name that will be used to create the dynamic resource. This parameter is mandatory.
* **subsystem**: The value of this property specifies the subsystem name of the new resource. This parameter is mandatory.
* **system**: The value of this property specifies the system where the resource will be created. This parameter is mandatory.
* **job**: The value of this property specifies the job name of the new resource. This parameter is mandatory.
* **procedure**: The value of this property specifies the procedure name used by the new resource. This parameter is optional.
* **comment**: The value of this property specifies a comment to be associated with the creation of the new resource. This parameter is optional.
* **group**: The value of this property specifies the automation name of the application group (APG) that will host the new resource. This parameter is optional.
* **sdesc**: The value of this property specifies a short description of the new resource. This parameter is optional.
Sample Playbooks
================
.. toctree::
:maxdepth: 1
:glob:
playbooks/sample_pb_create_dynres
playbooks/sample_pb_delete_dynres
Run the Playbooks
=================
The sample playbooks must be run from the playbooks directory of the installed collection: ``~/.ansible/collections/ansible_collections/ibm/ibm_zos_sysauto/playbooks/``
Use the `ansible-playbook`_ command to run a sample playbook. The command syntax is:
.. code-block:: sh
$ ansible-playbook [-i hosts] sample_pb_*.yaml
for example:
.. code-block:: sh
$ ansible-playbook -i hosts sample_pb_create_dynres.yaml
To adjust the logging verbosity, include the ``-v`` option with the `ansible-playbook`_ command. You can append more letter ``v``'s, for example, ``-v``, ``-vv``, ``-vvv``, or ``-vvvv``, to obtain more details in case a connection failed.
Each letter ``v`` increases the logging verbosity similar to the traditional logging levels, such as INFO, WARN, ERROR, or DEBUG.
.. _Ansible Playbook:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#playbooks-intro
.. _Github:
https://github.com/ansible-collections/ibm_zos_sysauto/tree/main/playbooks
.. _playbooks directory:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks
.. _Installation:
installation.html
.. _ansible.cfg:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks/ansible.cfg
.. _Ansible Configuration Settings:
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
.. _inventory:
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
.. _patterns:
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html#intro-patterns
.. _hosts:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks/hosts
.. _host_vars:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks/host_vars/
.. _vars:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks/vars/
.. _vars.yaml:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks/vars/vars.yaml
.. _sampleHost1.yaml:
https://github.com/ansible-collections/ibm_zos_sysauto/blob/main/playbooks/host_vars/sampleHost1.yaml
.. _ansible-playbook:
https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html
.. _configuration:
https://www.ibm.com/support/knowledgecenter/de/SSWRCJ_4.2.0/com.ibm.safos.doc_4.2/InstallPlan/JCL_procedure_embeded.html | 43.886364 | 234 | 0.729933 |
bb67886a9f110292ad0ac6622c708cd7e51ecf4c | 2,249 | rst | reStructuredText | docs/source/refs/misc.rst | gwappa/amorphys | adaf4c24846fb725f3d803ac74e6e4be58a3e1bb | [
"MIT"
] | null | null | null | docs/source/refs/misc.rst | gwappa/amorphys | adaf4c24846fb725f3d803ac74e6e4be58a3e1bb | [
"MIT"
] | 22 | 2019-10-27T14:08:34.000Z | 2019-11-10T21:55:20.000Z | docs/source/refs/misc.rst | gwappa/amorphys | adaf4c24846fb725f3d803ac74e6e4be58a3e1bb | [
"MIT"
] | null | null | null | Miscellaneous
==============
dataset
--------
.. py:class:: dataset
   a class to represent the metadata for this project dataset as a whole,
   including license information based on a :py:class:`license` object.
   A typical :py:class:`dataset` would look something like the example below:
.. literalinclude:: /_static/misc/dataset.json
:language: javascript
:caption: example dataset object
.. py:attribute:: name
a required ``string`` property representing the identifiable name of this dataset as a whole.
.. py:attribute:: description
a required ``string`` property representing the description of this dataset as a whole.
.. py:attribute:: keywords
an optional array of ``string`` objects representing the free keywords for this dataset.
.. py:attribute:: license
a required :py:class:`license` object corresponding to the license clause of this dataset publication.
.. py:attribute:: references
a required array of :py:class:`citation` objects referring to the articles related to this dataset.
credit
-------
.. py:class:: credit
a ``string`` role of a :py:class:`contributor`, specified in terms of
`contributor roles <https://dictionary.casrai.org/Contributor_Roles>`_
(as it is defined in the `CRediT taxonomy <https://www.casrai.org/credit.html>`_).
It must be one of the followings:
- **conceptualization**
- **project-administration**
- **supervision**
- **funding-acquisition**
- **investigation**
- **methodology**
- **software**
- **formal-analysis**
- **visualization**
- **writing-original-draft**
- **writing-review-editing**
- **data-curation**
funding
--------
.. py:class:: funding
:py:class:`funding` represents a funding source from an :py:class:`individual`.
.. py:attribute:: source
a required property referencing an :py:class:`individual`, indicating the funding body.
.. py:attribute:: provided_to
a required array of :py:class:`person` instances indicating to whom the funding is supposed to be given.
.. py:attribute:: id
a required ``string`` property that identifies the funding from the :py:attr:`source`.
| 28.1125 | 112 | 0.665629 |
6d80039587b8593b0d266a701f45145dc97505f6 | 1,086 | rst | reStructuredText | configs/twig.rst | fizedoc/FizeView | cb17a84549c713e73db6be89b3f89d83f77db06a | [
"MIT"
] | null | null | null | configs/twig.rst | fizedoc/FizeView | cb17a84549c713e73db6be89b3f89d83f77db06a | [
"MIT"
] | null | null | null | configs/twig.rst | fizedoc/FizeView | cb17a84549c713e73db6be89b3f89d83f77db06a | [
"MIT"
] | null | null | null | ========
Twig
========
Twig is a popular and rigorous template engine. Many IDEs provide syntax highlighting for Twig, making it very developer-friendly.
+---------------+-----------------------------------------------------------+---------+--------------+
|Parameter      |Description                                                |Optional |Default       |
+===============+===========================================================+=========+==============+
|view           |Template directory                                         |Yes      |'./view'      |
+---------------+-----------------------------------------------------------+---------+--------------+
|suffix         |Template file extension                                    |Yes      |'twig'        |
+---------------+-----------------------------------------------------------+---------+--------------+
|cache          |Cache directory                                            |Yes      |'./runtime'   |
+---------------+-----------------------------------------------------------+---------+--------------+
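For instance, a configuration array matching the defaults above might look
like this (a sketch; how the array is passed to FizeView's Twig handler is
not shown here):

.. code-block:: php

    $config = [
        'view'   => './view',    // template directory
        'suffix' => 'twig',      // template file extension
        'cache'  => './runtime', // compiled-template cache directory
    ];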
.. note::
    Install `twig/twig` via Composer to enable Twig.
    For other parameters, please refer to the `Twig manual <https://twig.symfony.com/doc/3.x/>`_
ca6dcf9cd1ce81b3290554473e763f7524755c52 | 282 | rst | reStructuredText | doc/build/changelog/unreleased_14/5397.rst | iamalbert/sqlalchemy | 17306220498cea779f360d03eeb5aadac3ccb59f | [
"MIT"
] | null | null | null | doc/build/changelog/unreleased_14/5397.rst | iamalbert/sqlalchemy | 17306220498cea779f360d03eeb5aadac3ccb59f | [
"MIT"
] | null | null | null | doc/build/changelog/unreleased_14/5397.rst | iamalbert/sqlalchemy | 17306220498cea779f360d03eeb5aadac3ccb59f | [
"MIT"
] | null | null | null | .. change::
:tags: bug, documentation, mysql
:tickets: 5397
Added support for the ``ssl_check_hostname=`` parameter in mysql connection
URIs and updated the mysql dialect documentation regarding secure
connections. Original pull request courtesy of Jerry Zhao.
| 35.25 | 79 | 0.744681 |
727f4946e4ca4753eb014d7d630a0fcd5aa705a8 | 5,873 | rst | reStructuredText | README.rst | omnivore/esky | 39e77deacf3e48d4d5d7a46701d9ef0f5593820d | [
"BSD-3-Clause"
] | 1 | 2019-07-11T14:50:07.000Z | 2019-07-11T14:50:07.000Z | README.rst | omnivore/esky | 39e77deacf3e48d4d5d7a46701d9ef0f5593820d | [
"BSD-3-Clause"
] | null | null | null | README.rst | omnivore/esky | 39e77deacf3e48d4d5d7a46701d9ef0f5593820d | [
"BSD-3-Clause"
] | null | null | null |
esky: keep frozen apps fresh
Esky is an auto-update framework for frozen Python applications. It provides
a simple API through which apps can find, fetch and install updates, and a
bootstrapping mechanism that keeps the app safe in the face of failed or
partial updates.
Esky is currently capable of freezing apps with py2exe, py2app, cxfreeze and
bbfreeze. Adding support for other freezer programs should be straightforward;
patches will be gratefully accepted.
The main interface is the 'Esky' class, which represents a frozen app. An Esky
must be given the path to the top-level directory of the frozen app, and a
'VersionFinder' object that it will use to search for updates. Typical usage
for an app automatically updating itself would look something like this::
if hasattr(sys,"frozen"):
app = esky.Esky(sys.executable,"http://example.com/downloads/")
app.auto_update()
A simple default VersionFinder is provided that hits a specified URL to get
a list of available versions. More sophisticated implementations will likely
be added in the future, and you're encouraged to develop a custom VersionFinder
subclass to meet your specific needs.
The real trick is freezing your app in a format suitable for use with esky.
You'll almost certainly want to use the "bdist_esky" distutils command, and
should consult its docstring for full details; the following is an example
of a simple setup.py script using esky::
from esky import bdist_esky
from distutils.core import setup
setup(name="appname",
version="1.2.3",
scripts=["appname/script1.py","appname/gui/script2.pyw"],
options={"bdist_esky":{"includes":["mylib"]}},
)
Invoking this setup script would create an esky for "appname" version 1.2.3::
#> python setup.py bdist_esky
...
...
#> ls dist/
appname-1.2.3.linux-i686.zip
#>
The contents of this zipfile can be extracted to the filesystem to give a
fully working application. If made available online then it can also be found,
downloaded and used as an upgrade by older versions of the application.
When you find you need to move beyond the simple logic of Esky.auto_update()
(e.g. to show feedback in the GUI) then the following properties and methods
are available on the Esky class:
app.version: the current best available version.
app.active_version: the currently-executing version, or None
if the esky isn't for the current app.
app.find_update(): find the best available update, or None
if no updates are available.
app.fetch_version(v): fetch the specified version into local storage.
app.install_version(v): install and activate the specified version.
app.uninstall_version(v): (try to) uninstall the specified version; will
fail if the version is currently in use.
app.cleanup(): (try to) clean up various partly-installed
or old versions lying around the app dir.
app.reinitialize(): re-initialize internal state after changing
the installed version.
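For example, a minimal sketch of a manual update flow built from these
methods (error handling and GUI feedback are left out, and the URL is a
placeholder)::

    import sys
    import esky

    if hasattr(sys, "frozen"):
        app = esky.Esky(sys.executable, "http://example.com/downloads/")
        version = app.find_update()
        if version is not None:
            app.fetch_version(version)    # hook GUI progress reporting here
            app.install_version(version)
            app.reinitialize()
        app.cleanup()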
If updating an application that is not writable by normal users, esky has the
ability to gain root privileges through the use of a helper program. The
following methods control this behaviour:
app.has_root(): check whether esky currently has root privs.
app.get_root(): escalate to root privs by spawning helper app.
app.drop_root(): kill helper app and drop root privileges
When properly installed, the on-disk layout of an app managed by esky looks
like this::
prog.exe - esky bootstrapping executable
appdata/ - container for all the esky magic
appname-X.Y.platform/ - specific version of the application
prog.exe - executable(s) as produced by freezer module
library.zip - pure-python frozen modules
pythonXY.dll - python DLL
esky-files/ - esky control files
bootstrap/ - files not yet moved into bootstrapping env
bootstrap-manifest.txt - list of files expected in bootstrap env
lockfile.txt - lockfile to block removal of in-use versions
...other deps...
updates/ - work area for fetching/unpacking updates
This is also the layout of the zipfiles produced by bdist_esky. The
"appname-X.Y" directory is simply a frozen app directory with some extra
control information generated by esky.
To install a new version "appname-X.Z", esky performs the following steps:
* extract it into a temporary directory under "updates"
* move all bootstrapping files into "appname-X.Z.platform/esky-files/bootstrap"
* atomically rename it into the main directory as "appname-X.Z.platform"
* move contents of "appname-X.Z.platform/esky-files/bootstrap" into the main dir
* remove the "appname-X.Z.platform/esky-files/bootstrap" directory
To uninstall an existing version "appname-X.Y", esky does the following:
* remove files used by only that version from the bootstrap env
* rename its "bootstrap-manifest.txt" file to "bootstrap-manifest-old.txt"
Where such facilities are provided by the operating system, this process is
performed within a filesystem transaction. Nevertheless, the esky bootstrapping
executable is able to detect and recover from a failed update should such an
unfortunate situation arise.
To clean up after failed or partial updates, applications should periodically
call the "cleanup" method on their esky. This removes uninstalled versions
and generally tries to tidy up in the main application directory.
| 44.492424 | 79 | 0.700153 |
0dcee45fc5a67abaeb0ce6f1ee75374857c7514d | 1,558 | rst | reStructuredText | api/autoapi/Microsoft/AspNet/Mvc/WebApiCompatShim/HttpRequestMessageFeature/index.rst | lucasvfventura/Docs | ea93e685c737236ab08d5444065cc550bba17afa | [
"Apache-2.0"
] | 2 | 2017-12-12T05:08:17.000Z | 2021-02-08T10:15:42.000Z | api/autoapi/Microsoft/AspNet/Mvc/WebApiCompatShim/HttpRequestMessageFeature/index.rst | lucasvfventura/Docs | ea93e685c737236ab08d5444065cc550bba17afa | [
"Apache-2.0"
] | null | null | null | api/autoapi/Microsoft/AspNet/Mvc/WebApiCompatShim/HttpRequestMessageFeature/index.rst | lucasvfventura/Docs | ea93e685c737236ab08d5444065cc550bba17afa | [
"Apache-2.0"
] | 3 | 2017-12-12T05:08:29.000Z | 2022-02-02T08:39:25.000Z |
HttpRequestMessageFeature Class
===============================
.. contents::
:local:
Inheritance Hierarchy
---------------------
* :dn:cls:`System.Object`
* :dn:cls:`Microsoft.AspNet.Mvc.WebApiCompatShim.HttpRequestMessageFeature`
Syntax
------
.. code-block:: csharp
public class HttpRequestMessageFeature : IHttpRequestMessageFeature
GitHub
------
`View on GitHub <https://github.com/aspnet/mvc/blob/master/src/Microsoft.AspNet.Mvc.WebApiCompatShim/HttpRequestMessage/HttpRequestMessageFeature.cs>`_
.. dn:class:: Microsoft.AspNet.Mvc.WebApiCompatShim.HttpRequestMessageFeature
Constructors
------------
.. dn:class:: Microsoft.AspNet.Mvc.WebApiCompatShim.HttpRequestMessageFeature
:noindex:
:hidden:
.. dn:constructor:: Microsoft.AspNet.Mvc.WebApiCompatShim.HttpRequestMessageFeature.HttpRequestMessageFeature(Microsoft.AspNet.Http.HttpContext)
:type httpContext: Microsoft.AspNet.Http.HttpContext
.. code-block:: csharp
public HttpRequestMessageFeature(HttpContext httpContext)
Properties
----------
.. dn:class:: Microsoft.AspNet.Mvc.WebApiCompatShim.HttpRequestMessageFeature
:noindex:
:hidden:
.. dn:property:: Microsoft.AspNet.Mvc.WebApiCompatShim.HttpRequestMessageFeature.HttpRequestMessage
:rtype: System.Net.Http.HttpRequestMessage
.. code-block:: csharp
public HttpRequestMessage HttpRequestMessage { get; set; }
| 16.752688 | 151 | 0.66303 |
43723cd55535940a722e01a79e9b7ee8a2e3787b | 2,655 | rst | reStructuredText | docs/index.rst | Nurul-GC/pynotify2 | aa22209387c7e7ea521c1c80601acfcfb17bb609 | [
"BSD-2-Clause"
] | 1 | 2021-02-01T02:09:35.000Z | 2021-02-01T02:09:35.000Z | docs/index.rst | Nurul-GC/pynotify2 | aa22209387c7e7ea521c1c80601acfcfb17bb609 | [
"BSD-2-Clause"
] | null | null | null | docs/index.rst | Nurul-GC/pynotify2 | aa22209387c7e7ea521c1c80601acfcfb17bb609 | [
"BSD-2-Clause"
] | null | null | null | notify2 API documentation
=========================
notify2 is - or was - a package to display desktop notifications on Linux.
Those are the little bubbles which tell a user about e.g. new emails.
notify2 is *deprecated*. Here are some alternatives:
- `desktop_notify <https://pypi.org/project/desktop-notify/>`_ is a newer module doing essentially the same thing.
- If you're writing a GTK application, you may want to use GNotification
(`intro <https://developer.gnome.org/GNotification/>`__, `Python API <https://lazka.github.io/pgi-docs/#Gio-2.0/classes/Notification.html>`__).
- For simple cases, you can run ``notify-send`` as a subprocess.
The `py-notifier <https://pypi.org/project/py-notifier/>`__ package provides a simple Python API around this, and can also display notifications on Windows.
notify2 is a replacement for pynotify which can be used from different GUI toolkits
and from programs without a GUI. The API is largely the same as that of pynotify,
but some less important parts are left out.
Notifications are sent to a notification daemon over `D-Bus <http://www.freedesktop.org/wiki/Software/dbus/>`_,
according to the `Desktop notifications spec <http://people.gnome.org/~mccann/docs/notification-spec/notification-spec-latest.html>`_,
and the server is responsible for displaying them to the user. So your application
has limited control over when and how a notification appears.
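For instance, a minimal usage sketch (the icon name is illustrative)::

    import notify2

    notify2.init("myapp")   # connect to the session bus

    n = notify2.Notification("Summary", "Message body",
                             "notification-message-im")
    n.show()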
.. toctree::
:maxdepth: 1
license
.. module:: notify2
.. autofunction:: init
.. autofunction:: get_server_caps
.. autofunction:: get_server_info
Creating and showing notifications
----------------------------------
.. autoclass:: Notification
.. automethod:: show
.. automethod:: update
.. automethod:: close
Extra parameters
----------------
.. class:: Notification
.. automethod:: set_urgency
.. automethod:: set_timeout
.. automethod:: set_category
.. automethod:: set_location
.. automethod:: set_icon_from_pixbuf
.. automethod:: set_hint
.. automethod:: set_hint_byte
Callbacks
---------
To receive callbacks, you must have set a D-Bus event loop when you called
:func:`init`.
.. class:: Notification
.. automethod:: connect
.. automethod:: add_action
Constants
---------
.. data:: URGENCY_LOW
URGENCY_NORMAL
URGENCY_CRITICAL
Urgency levels to pass to :meth:`Notification.set_urgency`.
.. data:: EXPIRES_DEFAULT
EXPIRES_NEVER
Special expiration times to pass to :meth:`Notification.set_timeout`.
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 26.029412 | 158 | 0.694162 |
c7e75bd7204f0d0f6334d68880cbcb02317a3790 | 1,370 | rst | reStructuredText | docs/overview.rst | litchfield/merchant | e4fba8a88a326bbde39c26e937c17d5283817320 | [
"BSD-3-Clause"
] | 332 | 2015-01-02T17:52:27.000Z | 2022-03-15T11:34:30.000Z | docs/overview.rst | Jbdealord/merchant | 38b20af698b4653168161ad91c2279ab5516b9d5 | [
"BSD-3-Clause"
] | 29 | 2015-01-19T09:45:22.000Z | 2021-12-13T19:39:38.000Z | docs/overview.rst | Jbdealord/merchant | 38b20af698b4653168161ad91c2279ab5516b9d5 | [
"BSD-3-Clause"
] | 93 | 2015-01-05T23:26:48.000Z | 2022-03-15T11:34:33.000Z | Merchant: Pluggable and Unified API for Payment Processors
-----------------------------------------------------------
Merchant_ is a django_ app that offers a uniform API and pluggable interface to
interact with a variety of payment processors. It is heavily inspired by Ruby's
ActiveMerchant_.
Overview
---------
Simple how to::
# settings.py
# Authorize.Net settings
AUTHORIZE_LOGIN_ID = "..."
AUTHORIZE_TRANSACTION_KEY = "..."
# PayPal settings
PAYPAL_TEST = True
PAYPAL_WPP_USER = "..."
PAYPAL_WPP_PASSWORD = "..."
PAYPAL_WPP_SIGNATURE = "..."
# views.py or wherever you want to use it
    >>> from billing import get_gateway, CreditCard
    >>> g1 = get_gateway("authorize_net")
>>>
    >>> cc = CreditCard(first_name="Test",
    ...                 last_name="User",
... month=10, year=2011,
... number="4222222222222",
... verification_value="100")
>>>
>>> response1 = g1.purchase(100, cc, options = {...})
>>> response1
{"status": "SUCCESS", "response": <AuthorizeNetAIMResponse object>}
>>>
>>> g2 = get_gateway("pay_pal")
>>>
>>> response2 = g2.purchase(100, cc, options = {...})
>>> response2
{"status": "SUCCESS", "response": <PayPalNVP object>}
.. _Merchant: http://github.com/agiliq/merchant
.. _ActiveMerchant: http://activemerchant.org/
.. _django: http://www.djangoproject.com/
| 29.782609 | 82 | 0.592701 |
b10276ff759ae8ce565b97ded896e906cd072d6a | 23,352 | rst | reStructuredText | docs/languages/en/modules/zend.pdf.interactive-features.rst | sasezaki/zf2-documentation | c55465a104fdb48704ac3a061fcb1911fe83bcb1 | [
"BSD-3-Clause"
] | 1 | 2019-06-13T16:05:46.000Z | 2019-06-13T16:05:46.000Z | docs/languages/en/modules/zend.pdf.interactive-features.rst | wdalmut/zf2-documentation | 8d898c2ccef2ade657562ff6d8367991588c41ac | [
"BSD-3-Clause"
] | null | null | null | docs/languages/en/modules/zend.pdf.interactive-features.rst | wdalmut/zf2-documentation | 8d898c2ccef2ade657562ff6d8367991588c41ac | [
"BSD-3-Clause"
] | null | null | null | .. _zend.pdf.interactive-features:
Interactive Features
====================
.. _zend.pdf.pages.interactive-features.destinations:
Destinations
------------
A destination defines a particular view of a document, consisting of the following items:
- The page of the document to be displayed.
- The location of the document window on that page.
- The magnification (zoom) factor to use when displaying the page.
Destinations may be associated with outline items (:ref:`Document Outline (bookmarks)
<zend.pdf.pages.interactive-features.outlines>`), annotations (:ref:`Annotations
<zend.pdf.pages.interactive-features.annotations>`), or actions (:ref:`Actions
<zend.pdf.pages.interactive-features.actions>`). In each case, the destination specifies the view of the document
to be presented when the outline item or annotation is opened or the action is performed. In addition, the optional
document open action can be specified.
.. _zend.pdf.pages.interactive-features.destinations.types:
Supported Destination Types
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following types are supported by the ``Zend_Pdf`` component.
.. _zend.pdf.pages.interactive-features.destinations.types.zoom:
Zend_Pdf_Destination_Zoom
^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with the coordinates (left, top) positioned at the upper-left corner of the window and
the contents of the page magnified by the factor zoom.
A destination object may be created using the ``Zend_Pdf_Destination_Zoom::create($page, $left = null, $top = null, $zoom
= null)`` method (see the example below).
Where:
- ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
- ``$left`` is a left edge of the displayed page (float).
- ``$top`` is a top edge of the displayed page (float).
- ``$zoom`` is a zoom factor (float).
``NULL``, specified for the ``$left``, ``$top`` or ``$zoom`` parameter, means "use the current viewer application value".
``Zend_Pdf_Destination_Zoom`` class also provides the following methods:
- ``Float`` ``getLeftEdge()``;
- ``setLeftEdge(float $left)``;
- ``Float`` ``getTopEdge()``;
- ``setTopEdge(float $top)``;
- ``Float`` ``getZoomFactor()``;
- ``setZoomFactor(float $zoom)``;
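For example, a minimal sketch creating a zoom destination for the first page
of a loaded document (the file name and coordinates are placeholders):

.. code-block:: php

   $pdf  = Zend_Pdf::load('document.pdf');
   $page = $pdf->pages[0];

   // Show the page with its upper-left corner at (0, 842) and a 150% zoom.
   $destination = Zend_Pdf_Destination_Zoom::create($page, 0, 842, 1.5);

   // The accessors listed above allow the view to be adjusted later.
   $destination->setZoomFactor(2.0);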
.. _zend.pdf.pages.interactive-features.destinations.types.fit:
Zend_Pdf_Destination_Fit
^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with its contents magnified just
enough to fit the entire page within the window both horizontally and vertically. If the required horizontal and
vertical magnification factors are different, use the smaller of the two, centering the page within the window in
the other dimension.
A destination object may be created using the ``Zend_Pdf_Destination_Fit::create($page)`` method.
Where ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
.. _zend.pdf.pages.interactive-features.destinations.types.fit-horizontally:
Zend_Pdf_Destination_FitHorizontally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with the vertical coordinate top positioned at the top edge of the window and the
contents of the page magnified just enough to fit the entire width of the page within the window.
A destination object may be created using the ``Zend_Pdf_Destination_FitHorizontally::create($page, $top)`` method.
Where:
- ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
- ``$top`` is a top edge of the displayed page (float).
``Zend_Pdf_Destination_FitHorizontally`` class also provides the following methods:
- ``Float`` ``getTopEdge()``;
- ``setTopEdge(float $top)``;
.. _zend.pdf.pages.interactive-features.destinations.types.fit-vertically:
Zend_Pdf_Destination_FitVertically
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with the horizontal coordinate left positioned at the left edge of the window and the
contents of the page magnified just enough to fit the entire height of the page within the window.
A destination object may be created using the ``Zend_Pdf_Destination_FitVertically::create($page, $left)`` method.
Where:
- ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
- ``$left`` is a left edge of the displayed page (float).
``Zend_Pdf_Destination_FitVertically`` class also provides the following methods:
- ``Float`` ``getLeftEdge()``;
- ``setLeftEdge(float $left)``;
.. _zend.pdf.pages.interactive-features.destinations.types.fit-rectangle:
Zend_Pdf_Destination_FitRectangle
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with its contents magnified just enough to fit the rectangle specified by the
coordinates left, bottom, right, and top entirely within the window both horizontally and vertically. If the
required horizontal and vertical magnification factors are different, use the smaller of the two, centering the
rectangle within the window in the other dimension.
Destination object may be created using ``Zend_Pdf_Destination_FitRectangle::create($page, $left, $bottom, $right,
$top)`` method.
Where:
- ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
- ``$left`` is a left edge of the displayed page (float).
- ``$bottom`` is a bottom edge of the displayed page (float).
- ``$right`` is a right edge of the displayed page (float).
- ``$top`` is a top edge of the displayed page (float).
``Zend_Pdf_Destination_FitRectangle`` class also provides the following methods:
- ``Float`` ``getLeftEdge()``;
- ``setLeftEdge(float $left)``;
- ``Float`` ``getBottomEdge()``;
- ``setBottomEdge(float $bottom)``;
- ``Float`` ``getRightEdge()``;
- ``setRightEdge(float $right)``;
- ``Float`` ``getTopEdge()``;
- ``setTopEdge(float $top)``;
.. _zend.pdf.pages.interactive-features.destinations.types.fit-bounding-box:
Zend_Pdf_Destination_FitBoundingBox
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with its contents magnified just enough to fit its bounding box entirely within the
window both horizontally and vertically. If the required horizontal and vertical magnification factors are
different, use the smaller of the two, centering the bounding box within the window in the other dimension.
Destination object may be created using ``Zend_Pdf_Destination_FitBoundingBox::create($page, $left, $bottom,
$right, $top)`` method.
Where ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
.. _zend.pdf.pages.interactive-features.destinations.types.fit-bounding-box-horizontally:
Zend_Pdf_Destination_FitBoundingBoxHorizontally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with the vertical coordinate top positioned at the top edge of the window and the
contents of the page magnified just enough to fit the entire width of its bounding box within the window.
Destination object may be created using ``Zend_Pdf_Destination_FitBoundingBoxHorizontally::create($page, $top)``
method.
Where:
- ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
- ``$top`` is a top edge of the displayed page (float).
``Zend_Pdf_Destination_FitBoundingBoxHorizontally`` class also provides the following methods:
- ``Float`` ``getTopEdge()``;
- ``setTopEdge(float $top)``;
.. _zend.pdf.pages.interactive-features.destinations.types.fit-bounding-box-vertically:
Zend_Pdf_Destination_FitBoundingBoxVertically
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display the specified page, with the horizontal coordinate left positioned at the left edge of the window and the
contents of the page magnified just enough to fit the entire height of its bounding box within the window.
Destination object may be created using ``Zend_Pdf_Destination_FitBoundingBoxVertically::create($page, $left)``
method.
Where:
- ``$page`` is a destination page (a ``Zend_Pdf_Page`` object or a page number).
- ``$left`` is a left edge of the displayed page (float).
``Zend_Pdf_Destination_FitBoundingBoxVertically`` class also provides the following methods:
- ``Float`` ``getLeftEdge()``;
- ``setLeftEdge(float $left)``;
.. _zend.pdf.pages.interactive-features.destinations.types.named:
Zend_Pdf_Destination_Named
^^^^^^^^^^^^^^^^^^^^^^^^^^
All destinations listed above are "Explicit Destinations".
In addition, a *PDF* document may contain a dictionary of such destinations, which may be used to reference
them from outside the *PDF* (e.g. '``http://www.mycompany.com/document.pdf#chapter3``').
``Zend_Pdf_Destination_Named`` objects make it possible to refer to destinations from the document's named destinations dictionary.
A named destination object may be created using the ``Zend_Pdf_Destination_Named::create(string $name)`` method.
The ``Zend_Pdf_Destination_Named`` class provides only one additional method:
``String`` ``getName()``;
.. _zend.pdf.pages.interactive-features.destinations.processing:
Document level destination processing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``Zend_Pdf`` class provides a set of destination processing methods.
Each destination object (including named destinations) can be resolved using the
``resolveDestination($destination)`` method. It returns the corresponding ``Zend_Pdf_Page`` object if the destination
target is found, or ``NULL`` otherwise.
The ``Zend_Pdf::resolveDestination()`` method also takes an optional boolean parameter
``$refreshPageCollectionHashes``, which is ``TRUE`` by default. It forces the ``Zend_Pdf`` object to refresh its internal
page collection hashes, since the document pages list may be updated by the user through the ``Zend_Pdf::$pages`` property
(:ref:`Working with Pages <zend.pdf.pages>`). It may be turned off for performance reasons if it's known that the
document pages list hasn't changed since the last method request.
The complete list of named destinations can be retrieved using the ``Zend_Pdf::getNamedDestinations()`` method. It returns
an array of ``Zend_Pdf_Target`` objects, which are actually either explicit destinations or GoTo actions
(:ref:`Actions <zend.pdf.pages.interactive-features.actions>`).
The ``Zend_Pdf::getNamedDestination(string $name)`` method returns the specified named destination (an explicit destination
or a GoTo action).
The *PDF* document's named destinations dictionary may be updated with the ``Zend_Pdf::setNamedDestination(string $name,
$destination)`` method, where ``$destination`` is either an explicit destination (any destination except
``Zend_Pdf_Destination_Named``) or a GoTo action.
If ``NULL`` is specified in place of ``$destination``, the specified named destination is removed.
.. note::
Unresolvable named destinations are automatically removed from a document when the document is saved.
.. _zend.pdf.interactive-features.destinations.example-1:
.. rubric:: Destinations usage example
.. code-block:: php
:linenos:
$pdf = new Zend_Pdf();
$page1 = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
$page2 = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
$page3 = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
// Page created, but not included into pages list
$pdf->pages[] = $page1;
$pdf->pages[] = $page2;
$destination1 = Zend_Pdf_Destination_Fit::create($page2);
$destination2 = Zend_Pdf_Destination_Fit::create($page3);
// Returns $page2 object
$page = $pdf->resolveDestination($destination1);
// Returns null, page 3 is not included into document yet
$page = $pdf->resolveDestination($destination2);
$pdf->setNamedDestination('Page2', $destination1);
$pdf->setNamedDestination('Page3', $destination2);
// Returns $destination2
$destination = $pdf->getNamedDestination('Page3');
// Returns $destination1
$pdf->resolveDestination(Zend_Pdf_Destination_Named::create('Page2'));
// Returns null, page 3 is not included into document yet
$pdf->resolveDestination(Zend_Pdf_Destination_Named::create('Page3'));
.. _zend.pdf.pages.interactive-features.actions:
Actions
-------
Instead of simply jumping to a destination in the document, an annotation or outline item can specify an action for
the viewer application to perform, such as launching an application, playing a sound, or changing an annotation's
appearance state.
.. _zend.pdf.pages.interactive-features.actions.types:
Supported action types
^^^^^^^^^^^^^^^^^^^^^^
The following action types are recognized while loading a *PDF* document:
- ``Zend_Pdf_Action_GoTo``- go to a destination in the current document.
- ``Zend_Pdf_Action_GoToR``- go to a destination in another document.
- ``Zend_Pdf_Action_GoToE``- go to a destination in an embedded file.
- ``Zend_Pdf_Action_Launch``- launch an application or open or print a document.
- ``Zend_Pdf_Action_Thread``- begin reading an article thread.
- ``Zend_Pdf_Action_URI``- resolve a *URI*.
- ``Zend_Pdf_Action_Sound``- play a sound.
- ``Zend_Pdf_Action_Movie``- play a movie.
- ``Zend_Pdf_Action_Hide``- hides or shows one or more annotations on the screen.
- ``Zend_Pdf_Action_Named``- execute an action predefined by the viewer application:
- **NextPage**- Go to the next page of the document.
- **PrevPage**- Go to the previous page of the document.
- **FirstPage**- Go to the first page of the document.
- **LastPage**- Go to the last page of the document.
- ``Zend_Pdf_Action_SubmitForm``- send data to a uniform resource locator.
- ``Zend_Pdf_Action_ResetForm``- set fields to their default values.
- ``Zend_Pdf_Action_ImportData``- import field values from a file.
- ``Zend_Pdf_Action_JavaScript``- execute a JavaScript script.
- ``Zend_Pdf_Action_SetOCGState``- set the state of one or more optional content groups.
- ``Zend_Pdf_Action_Rendition``- control the playing of multimedia content (begin, stop, pause, or resume a playing
rendition).
- ``Zend_Pdf_Action_Trans``- update the display of a document, using a transition dictionary.
- ``Zend_Pdf_Action_GoTo3DView``- set the current view of a 3D annotation.
Only ``Zend_Pdf_Action_GoTo`` and ``Zend_Pdf_Action_URI`` actions can currently be created by the user.
A GoTo action object can be created using the ``Zend_Pdf_Action_GoTo::create($destination)`` method, where
``$destination`` is a ``Zend_Pdf_Destination`` object or a string which can be used to identify a named destination.
The ``Zend_Pdf_Action_URI::create($uri[, $isMap])`` method has to be used to create a URI action (see the *API*
documentation for details). The optional ``$isMap`` parameter is set to ``FALSE`` by default.
.. _zend.pdf.pages.interactive-features.actions.chaining:
Actions chaining
^^^^^^^^^^^^^^^^
Action objects can be chained using the ``Zend_Pdf_Action::$next`` public property.
It's an array of ``Zend_Pdf_Action`` objects, which may in turn have their own sub-actions.
The ``Zend_Pdf_Action`` class implements the RecursiveIterator interface, so child actions may be iterated recursively:
.. code-block:: php
:linenos:
$pdf = new Zend_Pdf();
$page1 = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
$page2 = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
// Page created, but not included into pages list
$page3 = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
$pdf->pages[] = $page1;
$pdf->pages[] = $page2;
$action1 = Zend_Pdf_Action_GoTo::create(
Zend_Pdf_Destination_Fit::create($page2));
$action2 = Zend_Pdf_Action_GoTo::create(
Zend_Pdf_Destination_Fit::create($page3));
$action3 = Zend_Pdf_Action_GoTo::create(
Zend_Pdf_Destination_Named::create('Chapter1'));
$action4 = Zend_Pdf_Action_GoTo::create(
Zend_Pdf_Destination_Named::create('Chapter5'));
$action2->next[] = $action3;
$action2->next[] = $action4;
$action1->next[] = $action2;
$actionsCount = 1; // Note! Iteration doesn't include top level action and
// walks through children only
$iterator = new RecursiveIteratorIterator(
$action1,
RecursiveIteratorIterator::SELF_FIRST);
foreach ($iterator as $chainedAction) {
$actionsCount++;
}
// Prints 'Actions in a tree: 4'
printf("Actions in a tree: %d\n", $actionsCount++);
.. _zend.pdf.pages.interactive-features.actions.open-action:
Document Open Action
^^^^^^^^^^^^^^^^^^^^
A special open action may specify a destination to be displayed or an action to be performed when the document is
opened.
The ``Zend_Pdf_Target Zend_Pdf::getOpenAction()`` method returns the current document open action (or ``NULL`` if the open
action is not set).
The ``setOpenAction(Zend_Pdf_Target $openAction = null)`` method sets the document open action, or clears it if
``$openAction`` is ``NULL``.
.. _zend.pdf.pages.interactive-features.outlines:
Document Outline (bookmarks)
----------------------------
A PDF document may optionally display a document outline on the screen, allowing the user to navigate interactively
from one part of the document to another. The outline consists of a tree-structured hierarchy of outline items
(sometimes called bookmarks), which serve as a visual table of contents to display the document's structure to the
user. The user can interactively open and close individual items by clicking them with the mouse. When an item is
open, its immediate children in the hierarchy become visible on the screen; each child may in turn be open or
closed, selectively revealing or hiding further parts of the hierarchy. When an item is closed, all of its
descendants in the hierarchy are hidden. Clicking the text of any visible item activates the item, causing the
viewer application to jump to a destination or trigger an action associated with the item.
The ``Zend_Pdf`` class provides the public property ``$outlines``, which is an array of ``Zend_Pdf_Outline`` objects.
.. code-block:: php
:linenos:
$pdf = Zend_Pdf::load($path);
// Remove outline item
unset($pdf->outlines[0]->childOutlines[1]);
// Set Outline to be displayed in bold
$pdf->outlines[0]->childOutlines[3]->setIsBold(true);
// Add outline entry
$pdf->outlines[0]->childOutlines[5]->childOutlines[] =
Zend_Pdf_Outline::create('Chapter 2', 'chapter_2');
$pdf->save($path, true);
Outline attributes may be retrieved or set using the following methods:
- ``string getTitle()``- get outline item title.
- ``setTitle(string $title)``- set outline item title.
- ``boolean isOpen()``- ``TRUE`` if the outline is open by default.
- ``setIsOpen(boolean $isOpen)``- set isOpen state.
- ``boolean isItalic()``- ``TRUE`` if the outline item is displayed in italic.
- ``setIsItalic(boolean $isItalic)``- set isItalic state.
- ``boolean isBold()``- ``TRUE`` if the outline item is displayed in bold.
- ``setIsBold(boolean $isBold)``- set isBold state.
- ``Zend_Pdf_Color_Rgb getColor()``- get outline text color (``NULL`` means black).
- ``setColor(Zend_Pdf_Color_Rgb $color)``- set outline text color (``NULL`` means black).
- ``Zend_Pdf_Target getTarget()``- get outline target (action or explicit or named destination object).
- ``setTarget(Zend_Pdf_Target|string $target)``- set outline target (action or destination). String may be used to
identify named destination. ``NULL`` means 'no target'.
- ``array getOptions()``- get outline attributes as an array.
- ``setOptions(array $options)``- set outline options. The following options are recognized: 'title', 'open',
'color', 'italic', 'bold', and 'target'.
A new outline may be created in two ways:
- ``Zend_Pdf_Outline::create(string $title[, Zend_Pdf_Target|string $target])``
- ``Zend_Pdf_Outline::create(array $options)``
Each outline object may have child outline items listed in the ``Zend_Pdf_Outline::$childOutlines`` public property.
It's an array of ``Zend_Pdf_Outline`` objects, so outlines are organized in a tree.
The ``Zend_Pdf_Outline`` class implements the RecursiveIterator interface, so child outlines may be recursively iterated using
a RecursiveIteratorIterator:
.. code-block:: php
:linenos:
$pdf = Zend_Pdf::load($path);
foreach ($pdf->outlines as $documentRootOutlineEntry) {
$iterator = new RecursiveIteratorIterator(
$documentRootOutlineEntry,
RecursiveIteratorIterator::SELF_FIRST
);
foreach ($iterator as $childOutlineItem) {
$OutlineItemTarget = $childOutlineItem->getTarget();
if ($OutlineItemTarget instanceof Zend_Pdf_Destination) {
if ($pdf->resolveDestination($OutlineItemTarget) === null) {
// Mark Outline item with unresolvable destination
// using RED color
$childOutlineItem->setColor(new Zend_Pdf_Color_Rgb(1, 0, 0));
}
        } else if ($OutlineItemTarget instanceof Zend_Pdf_Action_GoTo) {
            // Resolve the destination referenced by the GoTo action
            if ($pdf->resolveDestination(
                    $OutlineItemTarget->getDestination()) === null) {
// Mark Outline item with unresolvable destination
// using RED color
$childOutlineItem->setColor(new Zend_Pdf_Color_Rgb(1, 0, 0));
}
}
}
}
$pdf->save($path, true);
.. note::
All outline items with unresolved destinations (or destinations of GoTo actions) are updated when the document is
saved by setting their targets to ``NULL``. So the document will not be corrupted by removing pages referenced by
outlines.
.. _zend.pdf.pages.interactive-features.annotations:
Annotations
-----------
An annotation associates an object such as a note, sound, or movie with a location on a page of a PDF document, or
provides a way to interact with the user by means of the mouse and keyboard.
All annotations are represented by ``Zend_Pdf_Annotation`` abstract class.
An annotation may be attached to a page using the ``Zend_Pdf_Page::attachAnnotation(Zend_Pdf_Annotation $annotation)``
method.
Three types of annotations may currently be created by the user:
- ``Zend_Pdf_Annotation_Link::create($x1, $y1, $x2, $y2, $target)`` where ``$target`` is an action object or a
destination or string (which may be used in place of named destination object).
- ``Zend_Pdf_Annotation_Text::create($x1, $y1, $x2, $y2, $text)``
- ``Zend_Pdf_Annotation_FileAttachment::create($x1, $y1, $x2, $y2, $fileSpecification)``
A link annotation represents either a hypertext link to a destination elsewhere in the document or an action to be
performed.
A text annotation represents a "sticky note" attached to a point in the PDF document.
A file attachment annotation contains a reference to a file.
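As a short usage sketch combining the calls described above (the page, coordinates, destination name and note text below are arbitrary example values):

.. code-block:: php
   :linenos:

   $pdf  = new Zend_Pdf();
   $page = $pdf->newPage(Zend_Pdf_Page::SIZE_A4);
   $pdf->pages[] = $page;

   // Hypertext link over a rectangle, pointing to a named destination
   $link = Zend_Pdf_Annotation_Link::create(72, 700, 250, 720, 'Chapter1');
   $page->attachAnnotation($link);

   // "Sticky note" attached to a point of the page
   $note = Zend_Pdf_Annotation_Text::create(300, 700, 320, 720, 'Review this.');
   $page->attachAnnotation($note);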
The following methods are shared between all annotation types:
- ``setLeft(float $left)``
- ``float getLeft()``
- ``setRight(float $right)``
- ``float getRight()``
- ``setTop(float $top)``
- ``float getTop()``
- ``setBottom(float $bottom)``
- ``float getBottom()``
- ``setText(string $text)``
- ``string getText()``
The text annotation property is the text to be displayed for the annotation or, if this type of annotation does not
display text, an alternate description of the annotation's contents in human-readable form.
Link annotation objects also provide two additional methods:
- ``setDestination(Zend_Pdf_Target|string $target)``
- ``Zend_Pdf_Target getDestination()``
| 36.949367 | 115 | 0.718568 |
1e07aa805279fe42d87fa188304a04d196137bdd | 560 | rst | reStructuredText | docs/source/terrainbento.derived_models.model_040_basicCh.rst | mcflugen/terrainbento | 1b756477b8a8ab6a8f1275b1b30ec84855c840ea | [
"MIT"
] | null | null | null | docs/source/terrainbento.derived_models.model_040_basicCh.rst | mcflugen/terrainbento | 1b756477b8a8ab6a8f1275b1b30ec84855c840ea | [
"MIT"
] | null | null | null | docs/source/terrainbento.derived_models.model_040_basicCh.rst | mcflugen/terrainbento | 1b756477b8a8ab6a8f1275b1b30ec84855c840ea | [
"MIT"
] | null | null | null | terrainbento\.derived\_models\.model\_040\_basicCh package
==========================================================
.. automodule:: terrainbento.derived_models.model_040_basicCh
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
terrainbento\.derived\_models\.model\_040\_basicCh\.model\_040\_basicCh module
------------------------------------------------------------------------------
.. automodule:: terrainbento.derived_models.model_040_basicCh.model_040_basicCh
:members:
:undoc-members:
:show-inheritance:
| 26.666667 | 79 | 0.566071 |
9ef63bf55c863ece72294d80d5d38963c328d575 | 1,229 | rst | reStructuredText | README.rst | tmehlinger/okta-uuid | ccccf65dac55bafa475386d34d36f67148d63d63 | [
"MIT"
] | null | null | null | README.rst | tmehlinger/okta-uuid | ccccf65dac55bafa475386d34d36f67148d63d63 | [
"MIT"
] | null | null | null | README.rst | tmehlinger/okta-uuid | ccccf65dac55bafa475386d34d36f67148d63d63 | [
"MIT"
] | 1 | 2020-01-29T16:38:52.000Z | 2020-01-29T16:38:52.000Z | `okta-uuid`
===========
This is a simple module for turning Okta's user IDs (which *appear* to be
base62-encoded integers) into UUIDs, and vice versa. This is useful for
integrating Okta with systems or services where you don't necessarily want to
use string identifiers.
Installing
----------
First, make sure you're using Python 3.2 or newer.
Install from PyPI: ``pip install okta-uuid``
Developing
----------
* Create a virtualenv.
* Clone this repo.
* Install the requirements: ``python setup.py develop``.
* Hack away!
There's a (small!) test suite included. You can run it with ``python test.py``.
Using
-----
Get a UUID from an Okta ID:
.. code-block:: python
idstr = '00ABCD1234wxyz5678pq'
oid = okta_uuid.OktaUserId(idstr)
print(repr(oid))
print(oid)
print(oid.uuid)
# output:
#
# OktaUserId('00ABCD1234wxyz5678pq')
# 00ABCD1234wxyz5678pq
# cb406d76-d66a-6007-5001-36cc7b010000
Get an Okta ID from a UUID:
.. code-block:: python
idstr = '00ABCD1234wxyz5678pq'
oid = okta_uuid.OktaUserId(idstr)
new_oid = okta_uuid.OktaUserId.from_uuid(oid.uuid)
print(new_oid)
print(oid == new_oid)
# output:
#
# 00ABCD1234wxyz5678pq
# True
| 19.507937 | 79 | 0.676159 |
51a80dcadaa0b5e32389f105ee69896d7852cd4a | 182 | rst | reStructuredText | source/micro-controllers/esp32/esp32.rst | iocafe/iocafe-doc | b5b29fc55caedddc538bf7fdea1921cebca04822 | [
"MIT"
] | 1 | 2020-04-28T23:26:45.000Z | 2020-04-28T23:26:45.000Z | source/micro-controllers/esp32/esp32.rst | iocafe/iocafe-doc | b5b29fc55caedddc538bf7fdea1921cebca04822 | [
"MIT"
] | null | null | null | source/micro-controllers/esp32/esp32.rst | iocafe/iocafe-doc | b5b29fc55caedddc538bf7fdea1921cebca04822 | [
"MIT"
] | null | null | null | ESP32
==================================
.. toctree::
:maxdepth: 2
:caption: Contents:
200327-esp32-modules-recommended-by-espressif
200701-esp32-ota-software-updates
| 16.545455 | 48 | 0.576923 |
01b90e8520e85f48749ef9ce5d2932b886128bbe | 1,913 | rst | reStructuredText | manuals/sources/refman/directives/set_logtalk_flag_2.rst | sergio-castro/logtalk3 | 821cb1277cf144be36b52bef9d9f86c530f96fac | [
"Apache-2.0"
] | null | null | null | manuals/sources/refman/directives/set_logtalk_flag_2.rst | sergio-castro/logtalk3 | 821cb1277cf144be36b52bef9d9f86c530f96fac | [
"Apache-2.0"
] | null | null | null | manuals/sources/refman/directives/set_logtalk_flag_2.rst | sergio-castro/logtalk3 | 821cb1277cf144be36b52bef9d9f86c530f96fac | [
"Apache-2.0"
] | null | null | null | ..
This file is part of Logtalk <https://logtalk.org/>
Copyright 1998-2019 Paulo Moura <pmoura@logtalk.org>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.. index:: set_logtalk_flag/2
.. _directives_set_logtalk_flag_2:
set_logtalk_flag/2
==================
Description
-----------
::
set_logtalk_flag(Flag, Value)
Sets Logtalk flag values. The scope of this directive is the entity or
the source file containing it. For global scope, use the corresponding
:ref:`predicates_set_logtalk_flag_2` built-in predicate called from an
:ref:`directives_initialization_1` directive.
Template and modes
------------------
::
set_logtalk_flag(+atom, +nonvar)
Errors
------
| Flag is a variable:
| ``instantiation_error``
| Value is a variable:
| ``instantiation_error``
| Flag is not an atom:
| ``type_error(atom, Flag)``
| Flag is neither a variable nor a valid flag:
| ``domain_error(flag, Flag)``
| Value is not a valid value for flag Flag:
| ``domain_error(flag_value, Flag + Value)``
| Flag is a read-only flag:
| ``permission_error(modify, flag, Flag)``
Examples
--------
::
% turn off the compiler unknown entity warnings
% during the compilation of the source file:
:- set_logtalk_flag(unknown_entities, silent).
:- object(...).
% generate events for messages sent from this object:
:- set_logtalk_flag(events, allow).
... | 26.205479 | 75 | 0.69472 |
ac6374196770567502a650d28b2a774aa351c3cf | 6,454 | rst | reStructuredText | docs/mturk/run.rst | paulu/opensurfaces | 7f3e987560faa62cd37f821760683ccd1e053c7c | [
"MIT"
] | 137 | 2015-02-19T00:00:42.000Z | 2022-03-31T03:56:01.000Z | docs/mturk/run.rst | paulu/opensurfaces | 7f3e987560faa62cd37f821760683ccd1e053c7c | [
"MIT"
] | 20 | 2015-07-28T23:39:58.000Z | 2020-05-19T11:40:55.000Z | docs/mturk/run.rst | paulu/opensurfaces | 7f3e987560faa62cd37f821760683ccd1e053c7c | [
"MIT"
] | 49 | 2015-02-09T15:21:46.000Z | 2021-12-15T14:22:33.000Z | Running experiments
===================
This section outlines the steps in running experiments with our platform.
Sandbox experiments
-------------------
Before spending any money, you should verify that everything works in the MTurk
sandbox.
make sure you are in production mode
See :doc:`../setup` for instructions on how to set up the server and run in
production mode.
switch to sandbox mode
Set ``MTURK_SANDBOX = True`` in ``server/config/settings_local.py``.
This will use https://workersandbox.mturk.com instead of
https://www.mturk.com/.
configure the experiments
For the experiments that you want to run, edit the ``experiments.py`` file
in those apps (e.g. ``intrinsic/experiments.py``). The most important
parameter is ``auto_add_hits``. Set this to ``True`` if you want to run
this experiment.
start the experiments
The following commands will dispatch tasks to MTurk:
.. code-block:: bash
./manage.py mtconfigure
./manage.py mtconsume
See :ref:`mturk-commands` for documentation of these commands.
verify that it works
Navigate to http://workersandbox.mturk.com and find your task on the
sandbox marketplace.
Try submitting some results and check that they show up in the admin
submission view (http://YOUR_HOSTNAME/mturk/admin/submission/)
expire experiments
To expire all experiments, run:
.. code-block:: bash
./manage.py mtexpire '.*'
Paid experiments
----------------
This section outlines the step in running paid experiments with our platform.
make sure you are in production mode
See :doc:`../setup` for instructions on how to set up the server and run in
production mode.
disable sandbox mode
Set ``MTURK_SANDBOX = False`` in ``server/config/settings_local.py``. This
will switch to the main https://www.mturk.com/ server.
configure the experiments
For the experiments that you want to run, edit the ``experiments.py`` file
in those apps (e.g. ``intrinsic/experiments.py``). The most important
parameter is ``auto_add_hits``. Set this to ``True`` if you want to run
this experiment.
start the experiments
The following commands will dispatch tasks to MTurk:
.. code-block:: bash
./manage.py mtconfigure
./manage.py mtconsume
Note that if all input objects have received the specified number of
labels, then no work will be dispatched. See :ref:`mturk-commands` for
documentation of these commands.
monitor workers
Watch users by either setting up Google Analytics or viewing the server log:
.. code-block:: bash
tail -f run/gunicorn.log
It will take ~10min before you see the first submissions.
review submissions
There are two methods to reviewing submissions:
1. Automatically approve all submissions. When using both tutorials and
sentinels, I find that the proportion of high quality submissions is
high enough to approve all workers. While some bad work sneaks by I
find that it is not worth rejecting, since workers get upset and don't
like the uncertainty.
To approve all submissions, set ``MTURK_AUTO_APPROVE = True`` in
``server/config/settings_local.py``. This will approve with celery,
which could have a long delay. Workers like seeing instant approvals (I
found that my submission rate increased by 50-100%), so it is worth
running
.. code-block:: bash
./manage.py mtapprove_loop '.*'
while the experiment is running to automatically approve everything as
quickly as possible. The argument is a regular expression on the
Experiment ``slug`` (human-readable ID).
2. Manual review.
Unfortunately I haven't had time to update the admin interface to have
approve/reject buttons (since I always approve all submissions). You
can manually approve/reject by opening a Python shell on the server
(``./scripts/django_shell.sh``) and running the command:
.. code-block:: py
MtAssignment.objects.get(id='ID').approve(feedback='Thank you!')
or
.. code-block:: py
MtAssignment.objects.get(id='ID').reject(feedback='You made too many mistakes.')
where ``ID`` is the assignment ID.
I find that quality tends to be consistent within a worker, so you could
write a loop to iterate over known good workers and approve those:
.. code-block:: py
GOOD_WORKER_MTURK_IDS = [ ... ]
asst_qset = MtAssignment.objects.filter(
status='S', worker__mturk_worker_id__in=GOOD_WORKER_MTURK_IDS)
for asst in asst_qset:
try:
asst.approve(feedback='Thank you!')
except:
pass
See :class:`mturk.models.MtAssignment` for more assignment-related methods.
Note that approve/reject commands have a high chance of failing. The
Amazon MTurk server takes a while to recognize that a certain assignment is
ready for approval. The above scripts take this into account, so don't
worry about lots of errors in the celery logs regarding approvals.
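Since the Amazon server may not yet recognize an assignment as reviewable, a
simple retry wrapper helps when approving from the Python shell. This is a
minimal sketch; the helper name and retry parameters are our own, not part of
OpenSurfaces:

.. code-block:: py

    import time

    def approve_with_retry(asst, feedback='Thank you!', tries=5, delay=60):
        # MTurk may take a while to mark the assignment as reviewable,
        # so retry a few times with a delay between attempts.
        for _ in range(tries):
            try:
                asst.approve(feedback=feedback)
                return True
            except Exception:
                time.sleep(delay)
        return False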
grant bonuses
You can grant bonuses to assignments in the Python shell
(``./scripts/django_shell.sh``) with the command:
.. code-block:: py
MtAssignment.objects.get(id='ID').grant_bonus(price=0.10, reason='You did a great job')
where ``ID`` is the assignment ID.
Note that users are promised small bonuses for completing feedback. This
is automatically handled by the :meth:`mturk.models.MtAssignment.approve`
method.
stop experiments
To expire all experiments, run:
.. code-block:: bash
./manage.py mtexpire '.*'
sync status
OpenSurfaces stores a local copy of the status of each HIT and Assignment.
To make sure that local data is synchronized, run:
.. code-block:: bash
./manage.py mtsync
check account balance
To print your Amazon account balance to the console, run:
.. code-block:: bash
./manage.py mtbalance
CUBAM
If an experiment uses CUBAM to aggregate binary answers, run this to update
all labels:
.. code-block:: bash
./manage.py mtcubam
*Warning*: this will take several hours to run if you have millions of
labels.
To add your own experiment, see :doc:`../extending`.
| 31.028846 | 98 | 0.683607 |
b3da62847cd01e31336788329d59fa00832db384 | 8,246 | rst | reStructuredText | using_DEchildpages/using-preferences-menu.rst | CyVerse-learning-materials/de_manual | ff98cf63ecd048a178c3da01a8f9ac5835331755 | [
"CC-BY-4.0"
] | null | null | null | using_DEchildpages/using-preferences-menu.rst | CyVerse-learning-materials/de_manual | ff98cf63ecd048a178c3da01a8f9ac5835331755 | [
"CC-BY-4.0"
] | null | null | null | using_DEchildpages/using-preferences-menu.rst | CyVerse-learning-materials/de_manual | ff98cf63ecd048a178c3da01a8f9ac5835331755 | [
"CC-BY-4.0"
] | null | null | null | .. include:: cyverse_rst_defined_substitutions.txt
|CyVerse logo|_
|Home_Icon|_
`Learning Center Home <http://learning.cyverse.org/>`_
==========================
Using the Preferences Menu
==========================
---------------------------------------
Opening the Preferences settings window
---------------------------------------
.. |person_icon| image:: img/person_icon.png
:width: 2
.. |X-icon| image:: img/X-icon.png
Click |person_icon| at the top right of the window and then click **Preferences:**
.. image:: img/Preferences.jpg
--------------------
Changing Preferences
--------------------
- Email me when my analysis status change
Click **Notify me by email when my analysis status changes** to receive or stop receiving email notifications when the status of an analysis changes to completed or failed.
This is useful when you want to track the status of your analyses while outside of the DE.
For more information on notifications, see `Viewing and Deleting Notifications <https://wiki.cyverse.org/wiki/display/DEmanual/Viewing+and+Deleting+Notifications>`_
--------------------------------------------------------------------
Adding and Deleting Users from the Collaborators List in Preferences
--------------------------------------------------------------------
The Collaborators list is a "short list" of CyVerse users with whom you frequently share `data files and folders <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+and+Unsharing+Data+Files+and+Folders+in+the+DE>`_, `analyses <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+and+Unsharing+an+Analysis>`_, or `unpublished apps <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+your+App+or+Workflow+and+Editing+the+User+Manual>`_.
Although a user does not have to be in your Collaborators list to share a data file or folder, adding a user to your Collaborators list makes it easier to share.
-------------------------------------------------------
Adding a user to your Collaborators list in Preferences
-------------------------------------------------------
.. |CollaboratorsWindow| image:: img/CollaboratorsWindow.png
1. Click |person_icon| (Preferences) at the top right of the screen.
2. Click **Collaborators**:
|CollaboratorsWindow|
3. In the search field, enter all or part of the user's name (not case-sensitive).
4. In the results list, click the name to add the name to your Collaborators list.
5. Repeat for each user to add.
6. When done, click **OK**.
The user is now available in your Collaborators list the next time you want to `share selected data files and folders <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+Data+Files+and+Folders>`_ or `share an unpublished app <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+your+App+or+Workflow+and+Editing+the+User+Manual>`_ with a user.
-You can also share with users on the fly:
- Data items: `Sharing and Unsharing Data Files and Folders in the DE <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+and+Unsharing+Data+Files+and+Folders+in+the+DE>`_
- Analysis results: `Sharing and Unsharing an Analysis <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+and+Unsharing+an+Analysis>`_
- Unpublished apps: `Sharing your App or Workflow and Editing the User Manual <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+your+App+or+Workflow+and+Editing+the+User+Manual>`_
-----------------------------------------------------------
Deleting a user from your Collaborators list in Preferences
-----------------------------------------------------------
Removing a user from your Collaborators list removes the user only from the Collaborators list but retains the permission level you have granted for that user to each data item. See `Unsharing Files and Folders <https://wiki.cyverse.org/wiki/display/DEmanual/Unsharing+Files+and+Folders>`_ to removen access to the data items as well.
1. Click |person_icon| and then click **Collaborators**.
2. In the Collaborators window, click the checkbox for the user to remove from your Collaborators list, and then click |X-icon|.
3. Click **OK**.
You can also unshare a `shared data item <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+and+Unsharing+Data+Files+and+Folders+in+the+DE>`_, `shared analysis <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+and+Unsharing+an+Analysis>`_, or `shared unpublished app <https://wiki.cyverse.org/wiki/display/DEmanual/Sharing+your+App+or+Workflow+and+Editing+the+User+Manual>`_.
-----------------------------------------------------
Using DE System Messages and Important Announcements
-----------------------------------------------------
System messages provide information about updates and changes without having to leave the DE home page.
Important announcements are displayed at the top of the DE window. Once you have clicked the announcement at the top of the window, the announcement is no longer displayed.
System Messages are accessed from the Preferences menu.
`Notifications <https://wiki.cyverse.org/wiki/display/DEmanual/Viewing+and+Deleting+Notifications>`_ are also used in DE.
-----------------------
Viewing system messages
-----------------------
.. |closewindowicon| image:: img/CloseWindowIcon.png
1. Click |person_icon| (Preferences) at the top right of the screen.
2. Click **System Messages**:
.. image:: img/de/SystemMessages.png
3. In the System Messages list, select the message view.
4. When done, click |closewindowicon| at the top right to close the window.
-------------------------------
Viewing important announcements
-------------------------------
1. Click **Read it** in the announcement at the tip right of the window:
.. image:: img/ImportantAnnouncement.png
after reading the announcement, the message is no longer displayed.
--------------------------------------------------
Viewing the User Manual and Introduction to the DE
--------------------------------------------------
.. |PreferencesIcon| image:: img/de/DE-PreferencesIcon.jpg
You are viewing the DE User Manual now, but you can access it at any time. You also can view an introduction to the DE, which shows you a few pertinent points about the DE window and basics on how to use it.
1. Click the |PreferencesIcon| (Preferences) at the top right of the screen.
- To view the user manual, click **User Manual**.
- To view the introduction to the DE, click **Introduction**.
---------------------------------------------------
Viewing Information About the Discovery Environment
---------------------------------------------------
You can view information about the current release number, funding, and other pertinent information about the DE.
1. Log in to the CyVerse Discovery Environment.
2. Click |PreferencesIcon| (Preferences) at the top right of the screen.
3. Click About.
----
**Fix or improve this documentation:**
- On Github: |Github Repo Link|
- Send feedback: `Tutorials@CyVerse.org <Tutorials@CyVerse.org>`_
- Live chat/help: Click on the |intercom| on the bottom-right of the page for questions on documentation
----
|Home_Icon|_
`Learning Center Home <http://learning.cyverse.org/>`_
.. Comment: Place Images Below This Line
use :width: to give a desired width for your image
use :height: to give a desired height for your image
replace the image name/location and URL if hyperlinked
.. |Clickable hyperlinked image| image:: ./img/IMAGENAME.png
:width: 500
:height: 100
.. _CyVerse logo: http://learning.cyverse.org/
.. |Static image| image:: ./img/IMAGENAME.png
:width: 25
:height: 25
.. Comment: Place URLS Below This Line
# Use this example to ensure that links open in new tabs, avoiding
# forcing users to leave the document, and making it easy to update links
# In a single place in this document
.. |Substitution| raw:: html # Place this anywhere in the text you want a hyperlink
<a href="REPLACE_THIS_WITH_URL" target="blank">Replace_with_text</a>
.. |Github Repo Link| raw:: html
<a href="FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX_FIX" target="blank">Github Repo Link</a>
| 43.4 | 445 | 0.667354 |
97fa2f5c1fbf4e7a303f2a8483f79ebec9c8a67a | 6,169 | rst | reStructuredText | docs/demo.rst | chiao45/parpydtk2 | a65971466f74fb56cde1ed19c95aa05ccecbee92 | [
"MIT"
] | null | null | null | docs/demo.rst | chiao45/parpydtk2 | a65971466f74fb56cde1ed19c95aa05ccecbee92 | [
"MIT"
] | null | null | null | docs/demo.rst | chiao45/parpydtk2 | a65971466f74fb56cde1ed19c95aa05ccecbee92 | [
"MIT"
] | null | null | null | .. include:: links.txt
.. _demo:
A Demo
======
Here, we show a demo for transferring solutions between two meshes of the
unit square. We let the ``blue`` mesh participant run in parallel with two
cores, while the ``green`` side is treated as a serial mesh.
.. code-block:: python
:linenos:
import numpy as np
from mpi4py import MPI
from parpydtk2 import *
comm = MPI.COMM_WORLD
blue, green = create_imeshdb_pair(comm)
assert comm.size == 2
rank = comm.rank
For demo purposes, we construct the meshes globally.
.. code-block:: python
:lineno-start: 12
# create blue meshes on all processes
cob = np.empty(shape=(16, 3), dtype=float, order='C')
dh = 0.33333333333333333
index = 0
x = 0.0
for i in range(4):
y = 0.0
for j in range(4):
cob[index] = np.asarray([x, y, 0.0])
index += 1
y += dh
x += dh
# global IDs, one based
bgids = np.arange(16, dtype='int32')
bgids += 1
The ``blue`` side has 16 nodes; the following is for the ``green`` side:
.. code-block:: python
:lineno-start: 29
# create green meshes on all processes
cog = np.empty(shape=(36, 3), dtype=float, order='C')
dh = 0.2
index = 0
x = 0.0
for i in range(6):
y = 0.0
for j in range(6):
cog[index] = np.asarray([x, y, 0.0])
index += 1
y += dh
x += dh
# global IDs
ggids = np.arange(36, dtype='int32')
ggids += 1
The ``green`` participant has 36 nodes. The next step is to put the data in
the two mesh databases.
.. code-block:: python
:lineno-start: 46
# creating the mesh database
blue.begin_create()
# local vertices and global IDs
lcob = cob[8 * rank:8 * (rank + 1), :].copy()
lbgids = bgids[8 * rank:8 * (rank + 1)].copy()
blue.create_vertices(lcob)
blue.assign_gids(lbgids)
# do not use trivial global ID strategy
blue.finish_create(False)
blue.create_field('b')
As we can see, we distributed the mesh equally across the two cores, along with
the corresponding global IDs. In line 54, the ``False`` flag indicates that
the mesh database should use the user-provided global IDs.
.. warning::
Creating vertices and assigning global IDs must be called between
:py:func:`~parpydtk2.IMeshDB.begin_create` and
:py:func:`~parpydtk2.IMeshDB.finish_create`! Otherwise, exceptions are
thrown.
.. warning::
Creating fields must be done after
:py:func:`~parpydtk2.IMeshDB.finish_create`!
Here is the treatment for the "serial" participant:
.. code-block:: python
:lineno-start: 58
# NOTE that green is assumed to be serial mesh
green.begin_create()
# only create on master rank
if not rank:
green.create_vertices(cog)
# since green is serial, we just use the trivial global IDs
green.finish_create() # empty partition is resolved here
green.create_field('g') # must come after finish_create
assert green.has_empty()
As we can see, only the master process has data.
.. note::
The :ref:`duplicated <empty_part>` node is handled inside
:py:func:`~parpydtk2.IMeshDB.finish_create`
With the two participants ready, we can now create our
:py:class:`~parpydtk2.Mapper`.
.. code-block:: python
:lineno-start: 69
# create our analytical model, i.e. 10+sin(x)*cos(y)
bf = 10.0 + np.sin(lcob[:, 0]) * np.cos(lcob[:, 1])
gf = np.sin(cog[:, 0]) * np.cos(cog[:, 1]) + 10.0
# Construct our mapper
mapper = Mapper(blue=blue, green=green)
# using mmls, blue radius 1.0 green radius 0.6
mapper.method = MMLS
mapper.radius_b = 1.0
mapper.radius_g = 0.6
mapper.dimension = 2
# Mapper initialization region
mapper.begin_initialization()
mapper.register_coupling_fields(bf='b', gf='g', direct=B2G)
mapper.register_coupling_fields(bf='b', gf='g', direct=G2B)
mapper.end_initialization()
Lines 69-71 just create an analytical model for error analysis. Lines 77-80
set the parameters; for this case, we use MMLS (the default) with a ``blue`` side
radius of 1.0 and a ``green`` side radius of 0.6 for searching.
The important part is lines 83 to 86, particularly the function
:py:func:`~parpydtk2.Mapper.register_coupling_fields`. It takes three
parameters: the first two are string tokens that represent the data
fields in ``blue`` and ``green``, and the ``direct`` parameter indicates the transfer
direction, e.g. :py:attr:`~parpydtk2.B2G` stands for blue to green.
.. warning::
:py:func:`~parpydtk2.Mapper.register_coupling_fields` must be called within
:py:func:`~parpydtk2.Mapper.begin_initialization` and
:py:func:`~parpydtk2.Mapper.end_initialization`.
.. code-block:: python
:lineno-start: 88
# NOTE that the following only runs on green master mesh
if not rank:
green.assign_field('g', gf)
# Since green is serial and has empty partition, we must call this to
# resolve asynchronous values
green.resolve_empty_partitions('g')
The section above assigns values on the ``green`` participant. Notice that
it is a "serial" mesh, so we only assign values on the master process. However,
resolving the :ref:`duplicated <empty_part>` node is still needed; this is done in
line 93.
Finally, the solution transfer part is pretty straightforward:
.. code-block:: python
:lineno-start: 94
# solution transfer region
mapper.begin_transfer()
mapper.transfer_data(bf='b', gf='g', direct=G2B)
err_b = (bf - blue.extract_field('b'))/10
mapper.transfer_data(bf='b', gf='g', direct=B2G)
err_g = (gf - green.extract_field('g'))/10
mapper.end_transfer()
comm.barrier()
print(rank, 'blue L2-error=%.3e' % (np.linalg.norm(err_b)/np.sqrt(err_b.size)))
if rank == 0:
print(0, 'green L2-error=%.3e' % (np.linalg.norm(err_g)/np.sqrt(err_g.size)))
.. only:: html
This code can be obtained :download:`here parallel2serial.py<../examples/parallel2serial.py>`.
.. only:: latex
This code can be obtained `here parallel2serial.py <https://github.com/chiao45/parpydtk2/blob/parallel/examples/parallel2serial.py>`_.
| 29.658654 | 138 | 0.664613 |
b9e54217ffc21d2c466737681a2be7cf19e1a763 | 202 | rst | reStructuredText | doc/source/magni.afm.io._data_attachment.rst | SIP-AAU/Magni | 6328dc98a273506f433af52e6bd394754a844550 | [
"BSD-2-Clause"
] | 42 | 2015-02-09T10:17:26.000Z | 2021-12-21T09:38:04.000Z | doc/source/magni.afm.io._data_attachment.rst | SIP-AAU/Magni | 6328dc98a273506f433af52e6bd394754a844550 | [
"BSD-2-Clause"
] | 3 | 2015-03-20T12:00:40.000Z | 2015-03-20T12:01:16.000Z | doc/source/magni.afm.io._data_attachment.rst | SIP-AAU/Magni | 6328dc98a273506f433af52e6bd394754a844550 | [
"BSD-2-Clause"
] | 14 | 2015-04-28T03:08:32.000Z | 2021-07-24T13:29:24.000Z | magni.afm.io._data_attachment module
====================================
.. automodule:: magni.afm.io._data_attachment
:members:
:private-members:
:special-members:
:show-inheritance:
| 22.444444 | 45 | 0.584158 |
47ba77cc588c92468d634f3aa40e3c550510610f | 26,328 | rst | reStructuredText | papers/111_lu/paper.rst | InessaPawson/scipy_proceedings | 328df8991ea69e252555bff81a44f30376dfd07d | [
"FTL",
"OML"
] | 116 | 2015-04-07T08:02:48.000Z | 2022-03-17T15:33:13.000Z | papers/111_lu/paper.rst | InessaPawson/scipy_proceedings | 328df8991ea69e252555bff81a44f30376dfd07d | [
"FTL",
"OML"
] | 447 | 2015-02-10T15:51:58.000Z | 2022-03-30T00:57:27.000Z | papers/111_lu/paper.rst | InessaPawson/scipy_proceedings | 328df8991ea69e252555bff81a44f30376dfd07d | [
"FTL",
"OML"
] | 261 | 2015-05-14T00:47:30.000Z | 2022-03-12T14:41:37.000Z |
:author: Haw-minn Lu
:email: hlu@westhealth.org
:institution: Gary and Mary West Health Institute
:author: José Unpingco
:email: jhunpingco@westhealth.org
:institution: Gary and Mary West Health Institute
:bibliography: ourbib
=============================================================================
How PDFrw and fillable forms improves throughput at a Covid-19 Vaccine Clinic
=============================================================================
.. class:: abstract
PDFrw was used to prepopulate Covid-19 vaccination forms to improve the efficiency and integrity of the vaccination process in terms of federal and state privacy requirements. We will describe the vaccination process from the initial appointment, through the vaccination delivery, to the creation of subsequent required documentation. Although Python modules for PDF generation are common, they struggle with managing fillable forms where a fillable field may appear multiple times within the same form. Additionally, field types such as checkboxes, radio buttons, lists and combo boxes are not straightforward to programmatically fill. Another challenge is combining multiple *filled* forms while maintaining the integrity of the values of the fillable fields. Additionally, HIPAA compliance issues are discussed.
.. class:: keywords
acrobat documents, form filling, HIPAA compliance, COVID-19
Introduction
------------
The coronavirus pandemic has been one of the most disruptive nationwide events in living memory. The frail,
vulnerable, and elderly have been disproportionately affected by serious hospitalizations and deaths.
Notwithstanding the amazing pace of vaccine development, logistical problems can still inhibit large-scale
vaccine distribution, especially among the elderly. Vaccination centers typically require online
appointments to facilitate vaccine distribution by State and Federal governments, but many elderly do not
have Internet access or know how to make online appointments, or how to use online resources to coordinate
transportation to and from the vaccination site, as needed.
As a personal anecdote, when vaccinations were opened to all aged 65 and older, one of the authors tried to
get his parents vaccinated and discovered that the experience documented here :cite:`letters_2021` was
unfortunately typical and required regularly pinging the appointment website for a week to get an appointment.
However, beyond persistence, getting an appointment required monitoring the website to track when batches of new
appointments were released --- all tasks that require an uncommon knowledge of Internet infrastructure beyond
most patients, not just the elderly.
To help San Diego County with the vaccine rollout, the Gary and Mary West PACE (WestPACE)
center established a pop-up point of distribution (POD) for the COVID-19 vaccine
:cite:`press` specifically for the elderly with emphasis on those who are most vulnerable.
The success in the POD was reported in the local news media :cite:`knsd` :cite:`kpbs` and
prompted the State of California to ask WestPACE's sister organization (the
Gary and Mary West Health Institute) to develop a playbook for the deploying a pop-up POD
:cite:`pod`.
This paper describes the logistical challenges regarding the vaccination rollout
for WestPACE and focuses on the use of Python's :code:`PDFrw` module
to address real-world sensitive data issues with PDF documents.
This paper gives a little more background of the effort. Next the overall
infrastructure and information flow is described. Finally, a very detailed
discussion on the use of python and the :code:`PDFrw` library to address a
major bottleneck and volunteer pain point.
Background
----------
WestPACE operates a Program of All-Inclusive Care for the Elderly (PACE) center which
provides nursing-home-level care and wrap-around services such as transportation to the
most vulnerable elderly. To provide vaccinations to WestPACE patients as quickly as
possible, WestPACE tried to acquire suitable freezers (some vaccines require special cold
storage) instead of waiting for San Diego County to provide them; but, due to high-demand,
acquiring a suitably-sized freezer was very problematic. As a pivot, WestPACE opted to
acquire a freezer that was available but with excess capacity beyond what was needed for
just WestPACE, and then collaborated with the County to use this excess capacity to
establish a walk-up vaccination center for all San Diego senior citizens, in or out of WestPACE.
WestPACE coordinated with the local 2-1-1 organization responsible for coordination of
community health and disaster services. The 2-1-1 organization provided a call center with
in-person support for vaccine appointments and transportation coordination to and from
WestPACE. This immediately eased the difficulty of making online appointments and the burden
of transportation coordination. With these relationships in place, the vaccination clinic
went from concept to active vaccine distribution site in about two weeks
resulting in the successful vaccination of thousands of elderly.
Although this is a technical paper, this background describes the real impact technology
can make in the lives of the vulnerable and elderly in society
in a crisis situation.
Infrastructure
--------------
The goal of the WestPACE vaccine clinic was to provide a friendly environment to vaccinate
senior citizens. Because this was a nonprofit and volunteer effort, the clinic did not have any
pre-existing record management practices with corresponding IT infrastructure to handle
sensitive health information according to Health Insurance Portability and Accountability
Act (HIPAA) standards. One key obstacle is paperwork for appointments, questionnaires,
consent forms, and reminder cards (among others) that must be processed securely and at
speed, given the fierce demand for vaccines. Putting the burden of dealing with this
paperwork on the patients would be confusing for the patient and time-consuming and limit
the overall count of vaccinations delivered. Thus, the strategy was to use electronic
systems to handle Protected Health Information (PHI) wherever possible and comply with HIPAA
requirements :cite:`Moore269` for data encryption at rest and in-transit, including appropriate Business
Associate Agreements (BAA) for any cloud service providers :cite:`filkins`. For physical
paper, HIPAA requirements mean that PHI must always be kept in a locked room or a container
with restricted access.
.. figure:: diagram.pdf
Vaccination Pipeline :label:`fig:infrastructure`
Figure :ref:`fig:infrastructure` shows a high level view of the user experience and
information flow. Making appointments can be challenging, especially those with limited
caregiver support. Because the appointment systems were set up in a hurry, many user
interfaces were confusing and poorly designed. In the depicted pipeline, the person (or
caregiver) telephones the 2-1-1 call center and the live operator collects demographic and
health information, and coordinates any necessary travel arrangements, as needed. The
demographic and health information is entered into the appointment system managed by the
California Department of Public Health. The information is then downloaded to
the clinic from the appointment system the day before the scheduled vaccination. Next, a forms
packet is generated for every scheduled patient and consolidated into a PDF file that is
then printed and handed to the volunteers at the clinic. The packet consolidates documents
including consent forms, health forms, and CDC-provided vaccination cards.
When the patient arrives at the clinic, their forms are pulled and a
volunteer reviews the questions while correcting any errors. Once the
information is validated, the patient is directed to sign the
appropriate forms. The crucially efficient part is that the patient
and volunteer only have to *validate* previously collected information
instead of filling out multiple forms with redundant information. This
was crucial during peak demand so that most patients experienced less
than a five minute delay between arrival and vaccine
administration. While there was consideration of commercial services
to do the electronic form filling and electronic signatures, they were
discounted because these turned out to be too expensive and
time-consuming to set up.
Different entities such as 2-1-1 and the State of California handle certain elements of the
data pipeline, but strict HIPAA requirements are followed at each step. All clinic
communications with the State appointment system were managed through a properly
authenticated and encrypted system. The vaccine clinic utilized a pre-existing, cloud-based
HIPAA-compliant system, with corresponding BAAs. All sensitive data processing occurred on
this system. The system, which is described in :cite:`72_lu-proc-scipy-2020`, uses Python both on its own and in Jupyter notebooks.
Finally, the processed PDF forms were transferred using encryption to a server at the clinic
site where an authorized operator printed them out. The paper forms were placed in the
custody of a clinic volunteer until they were delivered to a back office for storage in a
locked cabinet, pursuant to health department regulations.
Though all aspects of the pipeline faced challenges, the pre-population of forms turned out
to be surprisingly difficult due to the lack of programmatic PDF tools that properly work
with fillable forms. The remainder of the paper discusses the challenges and provides
instructions on how to use Python to fill PDF forms for printing.
Programmatically Fill Forms
---------------------------
Programmatically filling in PDF forms can be a quick and accurate way to
disseminate forms. Bits and pieces can be found throughout the Internet and in places like Stack Overflow, but no
single source provides a complete answer. The *Medium* blog post by Vivsvaan
Sharma :cite:`sharma` is a good starting place. Another useful resource is the PDF 1.7
specification :cite:`pdf`. Since the deployment of the vaccine clinic, the details of the
form filling have been written up on WestHealth's blog :cite:`whblog`. The code is available on
GitHub as described below.
The following imports are used in the examples given below.
.. code:: python
import pdfrw
from pdfrw.objects.pdfstring import PdfString
from pdfrw.objects.pdfstring import BasePdfName
from pdfrw import PdfDict, PdfObject
Finding Your Way Around PDFrw and Fillable Forms
------------------------------------------------
Several examples of basic form filling code can be found on the
Internet, including the above-mentioned *Medium* blog post. The
following is a typical snippet which was taken largely from the blog post.
.. code:: python
pdf = pdfrw.PdfReader(file_path)
for page in pdf.pages:
annotations = page['/Annots']
if annotations is None:
continue
for annotation in annotations:
if annotation['/Subtype']=='/Widget':
if annotation['/T']:
key = annotation['/T'].to_unicode()
print (key)
The type of ``annotation['/T']`` is ``PdfString``. While some sources use
``[1:-1]`` to extract the string from a ``PdfString``, the ``to_unicode``
method is the proper way to extract the string. According to the PDF 1.7
specification § 12.5.6.19, all fillable form fields use widget annotations.
The check for ``annotation['/Subtype']`` filters the annotations
down to only widget annotations.
To set the value ``value``, a ``PdfString`` needs to be created by
encoding ``value`` with the ``encode`` method. The encoded
``PdfString`` is then used to update the ``annotation`` as
shown in the following code snippet.
.. code:: python
annotation.update(PdfDict(V=PdfString.encode(value)))
This converts ``value`` into a ``PdfString`` and updates the
``annotation``, creating a value for ``annotation['/V']``.
In addition, at the top level of the ``PdfReader`` object ``pdf``, the
``NeedAppearances`` property in the interactive form dictionary,
``AcroForm`` (see § 12.7.2), needs to be set; without it, the fields are updated but
will not necessarily display. To remedy this, the following code
snippet can be used.
.. code:: python
pdf.Root.AcroForm.update(PdfDict(
NeedAppearances=PdfObject('true')))
Multiple Fields with Same Name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Combining the code snippets provides a simple method for filling
in text fields, except when there are multiple instances of the same field. To
refer back to the clinic example, each patient's form packet comprised multiple
forms, each with a ``Name`` field. Some forms even had ``Name`` appear
twice, such as in a demographic section and then in a ``Print Name`` field
next to a signature line. If the code above were run on such a form,
the ``Name`` field would not show up.
Whenever multiple
fields occur with the same name, the situation is more complicated. One
way to deal with this is to simply rename the fields to be different,
such as ``Name-1`` and ``Name-2``, which is fine if the sole use of the
form is for automated form filling; it does, however, require access to a form authoring tool.
If the form is also to be used for manual filling, this approach would require the user to enter the
``Name`` multiple times.
When fields appear multiple times, the widget annotation does not have
the ``/T`` field but has a ``/Parent`` field. As it turns out, this ``/Parent``
contains the field name ``/T`` as well as the default value ``/V``.
Each ``/Parent`` has one entry in its ``/Kids`` array for each occurrence of the field.
To modify the code to handle repeated occurrences of a field, the following lines can be inserted:
.. code:: python
if not annotation['/T']:
annotation=annotation['/Parent']
These lines allow the inspection and modification of annotations that appear more than once. With this modification, the inspection code
becomes:
.. code:: python
pdf = pdfrw.PdfReader(file_path)
for page in pdf.pages:
annotations = page['/Annots']
if annotations is None:
continue
for annotation in annotations:
if annotation['/Subtype']=='/Widget':
if not annotation['/T']:
annotation=annotation['/Parent']
if annotation['/T']:
key = annotation['/T'].to_unicode()
print (key)
With this code, ``Name`` from the above example would be printed
multiple times, once for each
instance, but each instance points to the same ``/Parent``. With this
modification, the form filler actually fills the ``/Parent`` value
multiple times, but this has no impact since it overwrites the
default value with the same value.
Checkboxes
----------
In accordance with §12.7.4.2.3, the checkbox state can be set as
follows:
.. code:: python
def checkbox(annotation, value):
if value:
val_str = BasePdfName('/Yes')
else:
val_str = BasePdfName('/Off')
annotation.update(PdfDict(V=val_str))
This works if the export value of the checkbox is ``Yes``, which is the default, but not when the export value is something else. The easiest solution is to edit the form to ensure that the
export value of the checkbox is ``Yes`` and the default state of the box
is unchecked. The recommendation in the specification is that the export value
be set to ``Yes``. In the event that tools to make this change are not
available, the ``/V`` and ``/AS`` fields should be set to the export value
rather than ``Yes``. The export value can be
inspected by examining the appearance dictionary ``/AP``, specifically its ``/N``
field. Each annotation has up to three appearances in its appearance dictionary: ``/N``,
``/R`` and ``/D``, standing for *normal*, *rollover*, and *down* (§12.5.5). The latter two
govern the appearance during mouse interaction; the normal appearance governs
how the form is displayed and printed.
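
Putting this together, a small helper along the following lines (a
sketch, not from the original paper's listings) looks up the export
value from the normal appearance and uses it to check a box whose
export value is not ``Yes``:

.. code:: python

    def check_box(annotation):
        # The /N appearance dictionary maps each state name to an
        # appearance stream; the state other than /Off is the
        # checkbox's export value.
        states = [key for key in annotation['/AP']['/N'].keys()
                  if key != '/Off']
        export = states[0]
        # Per the specification, /AS must mirror /V for the check
        # mark to render.
        annotation.update(PdfDict(V=export, AS=export))
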
There may be circumstances where the form has checkboxes whose default state
is checked. In that case, in order to uncheck a box, the best practice is to delete
the ``/V`` as well as the ``/AS`` field from the dictionary.
According to the PDF specification for checkboxes, the appearance stream ``/AS`` should be
set to the same value as ``/V``. Failure to do so may mean that the checkboxes do not appear.
More Complex Forms
------------------
For the purpose of the vaccine clinic application, the filling of text fields and checkboxes
were all that were needed. However, for completeness, other form field types were studied
and solutions are given below.
Radio Buttons
~~~~~~~~~~~~~
Radio buttons are by far the most complex of the form entry types. Each widget links to
a ``/Kids`` array whose entries represent the buttons in the radio group. Each widget in a radio
group will link to the same \`kids'. Much like the \`parents' for the repeated form fields
with the same name, each kid only needs to be updated once, but
the same update can be applied multiple times if that simplifies the code.
In a nutshell, the value ``/V`` of each widget in a radio group needs to
be set to the export value of the button selected. In each kid, the
appearance stream ``/AS`` should be set to ``/Off`` except for the kid
corresponding to the export value. In order to identify the kid with its
corresponding export value, the ``/N`` field of
the appearance dictionary ``/AP`` needs to be examined just as was
done with the checkboxes.
The resulting code could look like the following:
.. code:: python
    def radio_button(annotation, value):
        for each in annotation['/Kids']:
            # determine the export value of each kid;
            # keys() returns a view in Python 3, so make a list first
            keys = list(each['/AP']['/N'].keys())
            keys.remove('/Off')
            export = keys[0]
            if f'/{value}' == export:
                val_str = BasePdfName(f'/{value}')
            else:
                val_str = BasePdfName('/Off')
            each.update(PdfDict(AS=val_str))
        annotation.update(PdfDict(
            V=BasePdfName(f'/{value}')))
Combo Boxes and Lists
~~~~~~~~~~~~~~~~~~~~~
Both combo boxes and lists are fields of the form type *choice*. Combo
boxes resemble drop-down menus, and lists are similar to list pickers in
HTML. Functionally, they are very similar in form filling. The value
``/V`` and appearance stream ``/AS`` need to be set to the exported
values. The ``/Opt`` field yields a list of lists associating the exported
value with the value that appears in the widget.
To set the combo box, the value needs to be set to the export
value.
.. code:: python
    def combobox(annotation, value):
        export = None
        for each in annotation['/Opt']:
            if each[1].to_unicode() == value:
                export = each[0].to_unicode()
        if export is None:
            err = f"Export Value: {value} Not Found"
            raise KeyError(err)
        pdfstr = PdfString.encode(export)
        annotation.update(PdfDict(V=pdfstr, AS=pdfstr))
Lists are structurally very similar. The list of exported values can be
found in the ``/Opt`` field. The main difference is that lists based on
their configuration can take multiple values. Multiple values can be set
with ``PDFrw`` by setting ``/V`` and ``/AS`` to a list of ``PdfString``\ s.
The code presented here uses two separate helpers, but because of the
similarity in structure between list boxes and combo boxes, they could
be combined into one function.
.. code:: python
def listbox(annotation, values):
pdfstrs=[]
for value in values:
export=None
for each in annotation['/Opt']:
if each[1].to_unicode()==value:
export = each[0].to_unicode()
if export is None:
err = f"Export Value: {value} Not Found"
raise KeyError(err)
pdfstrs.append(PdfString.encode(export))
annotation.update(PdfDict(V=pdfstrs, AS=pdfstrs))
Determining Form Field Types Programmatically
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While PDF authoring tools or visual inspection can identify each form field's type, the type can be determined programmatically as well. It is important to understand that fillable forms fall
into four field types: button (push buttons, checkboxes, and radio buttons), text, choice
(combo box and list box), and signature. They correspond to the following values of the ``/FT``
form type field of a given annotation: ``/Btn``, ``/Tx``, ``/Ch`` and ``/Sig``,
respectively. Since signature filling is not supported and the push button is a widget which
can cause an action but is not fillable, those corresponding types are omitted from
consideration.
To distinguish the types of buttons and choices, the form
flags ``/Ff`` field is examined. For radio buttons, the 16th bit is set. For combo
boxes, the 18th bit is set. Please note that ``annotation['/Ff']`` returns
a ``PdfObject``, which must be coerced into an ``int`` for
bit testing.
.. code:: python
def field_type(annotation):
ft = annotation['/FT']
ff = annotation['/Ff']
if ft == '/Tx':
return 'text'
if ft == '/Ch':
if ff and int(ff) & 1 << 17: # test 18th bit
return 'combo'
else:
return 'list'
if ft == '/Btn':
if ff and int(ff) & 1 << 15: # test 16th bit
return 'radio'
else:
return 'checkbox'
For completeness, the following ``text_form`` filler helper is
included.
.. code:: python
def text_form(annotation, value):
pdfstr = PdfString.encode(value)
annotation.update(PdfDict(V=pdfstr, AS=pdfstr))
This completes the building blocks for an automatic form filler.
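
One possible way to tie these helpers together (not part of the
original listings) is a small dispatcher that routes each annotation
to the matching filler based on ``field_type``:

.. code:: python

    FILLERS = {
        'text': text_form,
        'checkbox': checkbox,
        'radio': radio_button,
        'combo': combobox,
        'list': listbox,
    }

    def fill_annotation(annotation, value):
        # Signature and push-button fields have no filler and
        # are silently skipped.
        filler = FILLERS.get(field_type(annotation))
        if filler is not None:
            filler(annotation, value)
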
Consolidating Multiple Filled Forms
-----------------------------------
There are two problems with consolidating multiple filled forms. The
first problem is that when two PDF files are merged, fields with
matching names are
associated with each other. For instance, suppose John Doe were entered in
one form's name field and Jane Doe in the second. After combining the two forms, John Doe would
override the second form's name field and appear in both
forms. The second problem is that most simple command-line or
programmatic methods of combining two or more PDF files lose form data.
One solution is to "flatten" each PDF file. This is equivalent to
printing the file to PDF. In effect, this bakes in the filled form
values and does not permit further editing of the fields. Going even further,
one could render the PDFs as images if the only requirement is that the
combined files be printable. However, tools like
``ghostscript``, ``imagemagick``, and ``pdfunite`` don't do a good job of
preserving form data when rendering PDF files.
Form Field Name Collisions
~~~~~~~~~~~~~~~~~~~~~~~~~~
Combining multiple filled PDF files was an issue for the vaccine clinic because the same
form was filled out for multiple patients. The alternative of printing hundreds of
individual forms was infeasible. To combine a batch of PDF forms, all form field names must
be different. Thankfully, the solution is quite simple: in the process of filling out the
form using the code above, rename (set) the value of ``/T``.
.. code:: python
    def form_filler(in_path, data, out_path, suffix):
        pdf = pdfrw.PdfReader(in_path)
        for page in pdf.pages:
            annotations = page['/Annots']
            if annotations is None:
                continue
            for annotation in annotations:
                if annotation['/Subtype'] == '/Widget':
                    # handle repeated fields, as discussed above
                    if not annotation['/T']:
                        annotation = annotation['/Parent']
                    key = annotation['/T'].to_unicode()
                    if key in data:
                        pdfstr = PdfString.encode(data[key])
                        new_key = key + suffix
                        annotation.update(
                            PdfDict(V=pdfstr, T=new_key))
        pdf.Root.AcroForm.update(PdfDict(
            NeedAppearances=PdfObject('true')))
        pdfrw.PdfWriter().write(out_path, pdf)
Only a unique suffix needs to be supplied to each form. The suffix
can be as simple as a sequential number.
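
For illustration (the data and file names here are hypothetical), a
batch of patient records could then be rendered with sequential
suffixes:

.. code:: python

    for i, patient in enumerate(patients):
        # patient is a dict mapping field names such as 'Name'
        # to the values to fill in.
        form_filler('intake.pdf', patient,
                    f'intake_{i}.pdf', f'_{i}')
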
Combining the Files
~~~~~~~~~~~~~~~~~~~
Solutions for combining PDF files with ``PDFrw`` can be found on the Internet.
The following recipe is typical:
.. code:: python
    from pdfrw import PdfReader, PdfWriter

    writer = PdfWriter()
    for fname in files:
        r = PdfReader(fname)
        writer.addpages(r.pages)
    writer.write("output.pdf")
While the form data still exists in the output file, the rendering
information is lost, so the filled values won't show when the file is displayed or printed. The
problem comes from the fact that the written PDF does not have an
interactive form dictionary (see §12.7.2 of the PDF 1.7 specification).
In particular, the interactive forms dictionary contains the boolean
``NeedAppearances`` which needs to be set for fields to be shown. If the
forms being combined have different interactive form dictionaries, they
need to be merged. In this application where the source forms are identical among the various copies, any ``AcroForm`` dictionary can be used.
After obtaining the dictionary from ``pdf.Root.AcroForm`` (assuming the
``PdfReader`` object is stored in ``pdf``), it is not clear how to
add it to the ``PdfWriter`` object. The clue comes from a simple
recipe for copying a pdf file.
.. code:: python
pdf = PdfReader(in_file)
PdfWriter().write(out_file, pdf)
Examination of the underlying source code shows the second parameter ``pdf``
to be set to the attribute ``trailer`` of the ``PdfWriter`` object. Assuming ``acro_form``
contains the desired interactive form, the interactive form dictionary
can be added to the output document by using
``writer.trailer.Root.AcroForm = acro_form``.
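
Putting the pieces together, a merge that preserves a usable
interactive form dictionary could look like the following sketch
(assuming, as in the clinic application, that all source files were
produced from the same original form, and reusing the imports above):

.. code:: python

    writer = PdfWriter()
    acro_form = None
    for fname in files:
        reader = PdfReader(fname)
        writer.addpages(reader.pages)
        if acro_form is None:
            # The source forms are identical, so any AcroForm
            # dictionary (with NeedAppearances set) will do.
            acro_form = reader.Root.AcroForm
    writer.trailer.Root.AcroForm = acro_form
    writer.write("output.pdf")
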
Conclusion
----------
A complete functional version of this PDF form filler is open source
and can be found at WestHealth's GitHub repository
`https://github.com/WestHealth/pdf-form-filler
<https://github.com/WestHealth/pdf-form-filler>`_.
This process was able to produce large quantities of pre-populated forms for senior citizens seeking
COVID-19 vaccinations, relieving one of the bottlenecks that has plagued many other vaccine
clinics.
| 47.609403 | 817 | 0.731503 |
b193dad5cdf32044caefaaa3705a8350ff5e9d72 | 198 | rst | reStructuredText | sphinx/mods/requirements.rst | Boyne272/Thin_Section_Analsis | 91023267f96f709d62fa44d10ff5636d263e346c | [
"MIT"
] | 2 | 2020-01-15T09:02:04.000Z | 2020-01-15T09:02:30.000Z | docs/_sources/mods/requirements.rst.txt | msc-acse/acse-9-independent-research-project-Boyne272 | b6f52a189dbb1cfb53325793966e32ee39155e9e | [
"MIT"
] | null | null | null | docs/_sources/mods/requirements.rst.txt | msc-acse/acse-9-independent-research-project-Boyne272 | b6f52a189dbb1cfb53325793966e32ee39155e9e | [
"MIT"
] | 3 | 2019-08-27T12:44:14.000Z | 2020-01-15T09:02:41.000Z | Requirements
============
The following packages are requiered with only the stated versions tested.
* numpy==1.16.4
* matplotlib==3.0.3
* scipy==1.3.1
* torch==1.1.0
* scikit-image==0.15.0 | 22 | 75 | 0.646465 |
1345678e497412b6f782c8281f89f299b066f5eb | 2,509 | rst | reStructuredText | bessemer/software/apps/SAMtools.rst | jkwmoore/sheffield_hpc | 793a63d29915e3bd11a70d2f762ad5de9da49617 | [
"CC-BY-3.0"
] | null | null | null | bessemer/software/apps/SAMtools.rst | jkwmoore/sheffield_hpc | 793a63d29915e3bd11a70d2f762ad5de9da49617 | [
"CC-BY-3.0"
] | 22 | 2021-09-15T14:57:37.000Z | 2022-03-04T19:47:40.000Z | bessemer/software/apps/SAMtools.rst | jkwmoore/sheffield_hpc | 793a63d29915e3bd11a70d2f762ad5de9da49617 | [
"CC-BY-3.0"
] | null | null | null | .. _bessemer_SAMtools:
SAMtools
========
.. sidebar:: SAMtools
:Version: 1.9
:Dependencies: Easybuild foss-2018b toolchain
:URL: https://github.com/samtools/samtools/releases/
:Documentation: http://www.htslib.org/doc/samtools.html
SAM (Sequence Alignment/Map) format is a generic format for storing large nucleotide sequence alignments. SAM aims to be a format that is
- flexible enough to store all the alignment information generated by various alignment programs
- simple enough to be easily generated by alignment programs or converted from existing alignment formats
- compact in file size
- allows most of operations on the alignment to work on a stream without loading the whole alignment into memory
- allows the file to be indexed by genomic position to efficiently retrieve all reads aligning to a locus
SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
Usage
-----
SAMtools can be activated using the module file::
module load SAMtools/1.9-foss-2018b
Note: The module file also loads the compiler Easybuild foss-2018b toolchain (including GCC 7.3.0).
Test
----
Using the tutorial provided at http://quinlanlab.org/tutorials/samtools/samtools.html :
.. code-block:: console
$ cd ~
$ mkdir samtools-demo
$ cd samtools-demo
$ curl https://s3.amazonaws.com/samtools-tutorial/sample.sam.gz > sample.sam.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 371M 100 371M 0 0 29.3M 0 0:00:12 0:00:12 --:--:-- 33.2M
$ gzip -d sample.sam.gz
$ samtools view -S -b sample.sam > sample.bam
$ samtools view sample.bam | head
HWI-ST354R:351:C0UPMACXX:5:1115:20112:49057 99 1 861268 60 100M = 861543
375 TCCCTCACAGGGTCTGCCTCGGCTCTGCTCGCAGGGAAAAGTCTGAAGACGCTTATGTCCAAGGGGATCCTGCAGGTGCATCCTCCGATCTGCGACTGCC
CCCFFFFFHHHGFHJIIJJJJJIJJJJJJJJIIJJIIJJIGCHCHGGIGIIJIJGHGFFFFFFFDD@BDCCCDDDDDCDDECC@C9<@BBDDDDDDD59>
MC:Z:100M MD:Z:100 RG:Z:1719PC0017_51 NM:i:0 MQ:i:60 AS:i:100 XS:i:0
$ # Further steps truncated.
Installation notes
------------------
SAMtools was compiled using EasyBuild. The module file generated is
:download:`/usr/local/modulefiles/live/eb/all/SAMtools/1.9-foss-2018b </bessemer/software/modulefiles/SAMtools/1.9/1.9-foss-2018b>` and was
tested as per the tutorial above.
| 38.6 | 172 | 0.739737 |
bd22684fa78cb720f7d256190b3f34d1158c9dc6 | 1,222 | rst | reStructuredText | README.rst | ndzou/httpie-negotiate | ea86a28b5c3ace083c321c09ce53476761dafb4c | [
"BSD-3-Clause"
] | 3 | 2017-01-04T13:42:46.000Z | 2019-10-01T12:44:43.000Z | README.rst | ndzou/httpie-negotiate | ea86a28b5c3ace083c321c09ce53476761dafb4c | [
"BSD-3-Clause"
] | 1 | 2019-06-15T00:04:25.000Z | 2019-06-15T00:04:25.000Z | README.rst | ndzou/httpie-negotiate | ea86a28b5c3ace083c321c09ce53476761dafb4c | [
"BSD-3-Clause"
] | 1 | 2019-02-04T21:36:43.000Z | 2019-02-04T21:36:43.000Z | httpie-negotiate
===========
SPNEGO (GSS Negotiate) auth plugin for `HTTPie <https://github.com/jkbr/httpie>`_, based on Jakub's httpie-ntlm example.
Installation
------------
.. code-block:: bash
$ pip install httpie-negotiate
You should now see ``negotiate`` under ``--auth-type`` in ``$ http --help`` output.
Usage
-----
You need to have a valid Kerberos principal; run kinit first if necessary.
.. code-block:: bash
$ http --auth-type=negotiate --auth : https://example.org
Kerberos mutual authentication is REQUIRED by default and is recommended.
If you strictly require mutual authentication to be OPTIONAL or DISABLED, then you can use the ``HTTPIE_KERBEROS_MUTUAL`` environment variable.
.. code-block:: bash
   $ HTTPIE_KERBEROS_MUTUAL=OPTIONAL http --auth-type=negotiate --auth : https://example.org
   $ HTTPIE_KERBEROS_MUTUAL=DISABLED http --auth-type=negotiate --auth : https://example.org
You can also use `HTTPie sessions <https://github.com/jkbr/httpie#sessions>`_:
.. code-block:: bash
# Create session
$ http --session=logged-in --auth-type=negotiate --auth : https://example.org
# Re-use auth
$ http --session=logged-in POST https://example.org hello=world
| 26 | 143 | 0.704583 |
804714231e8c9bcf268c44bf8e2ea4e5286440e4 | 98 | rst | reStructuredText | docs/authors.rst | lnielsen/filededupe | 1bc4f8e838ca640802b0ce4d95e94f68ca0beb52 | [
"BSD-3-Clause"
] | null | null | null | docs/authors.rst | lnielsen/filededupe | 1bc4f8e838ca640802b0ce4d95e94f68ca0beb52 | [
"BSD-3-Clause"
] | null | null | null | docs/authors.rst | lnielsen/filededupe | 1bc4f8e838ca640802b0ce4d95e94f68ca0beb52 | [
"BSD-3-Clause"
] | null | null | null | .. include:: ../CHANGES.rst
License
=======
.. include:: ../LICENSE
.. include:: ../AUTHORS.rst
| 12.25 | 27 | 0.561224 |
20df9a5be92b039903d7e00b6f4d260deb7759aa | 308 | rst | reStructuredText | docs/api/reference/coldtype.fx.skia.rst | rohernandezz/coldtype | 724234fce454699a469d17b6c78ae50fa8138169 | [
"Apache-2.0"
] | 142 | 2020-06-12T17:01:58.000Z | 2022-03-16T23:21:37.000Z | docs/api/reference/coldtype.fx.skia.rst | rohernandezz/coldtype | 724234fce454699a469d17b6c78ae50fa8138169 | [
"Apache-2.0"
] | 35 | 2020-04-15T15:34:54.000Z | 2022-03-19T20:26:47.000Z | docs/api/reference/coldtype.fx.skia.rst | rohernandezz/coldtype | 724234fce454699a469d17b6c78ae50fa8138169 | [
"Apache-2.0"
] | 14 | 2020-06-23T18:56:46.000Z | 2022-03-31T15:54:56.000Z | coldtype.fx.skia
================
.. currentmodule:: coldtype.fx
A collection of chainable functions for rasterizing pens with various skia-python image-processing utilities.
.. autosummary::
coldtype.fx.skia.luma
coldtype.fx.skia.fill
coldtype.fx.skia.phototype
coldtype.fx.skia.potrace | 23.692308 | 109 | 0.720779 |
d20970aacc3d0ad7be5cb8a62c5f155043c6b6d6 | 3,101 | rst | reStructuredText | docs/source/community.rst | pietroastolfi/fury | d017680233dcd9e9e8a9083a9493fb1a496c7be5 | [
"BSD-3-Clause"
] | null | null | null | docs/source/community.rst | pietroastolfi/fury | d017680233dcd9e9e8a9083a9493fb1a496c7be5 | [
"BSD-3-Clause"
] | null | null | null | docs/source/community.rst | pietroastolfi/fury | d017680233dcd9e9e8a9083a9493fb1a496c7be5 | [
"BSD-3-Clause"
] | null | null | null | .. _community:
=========
Community
=========
Join Us!
--------
.. raw:: html
<ul style="list-style-type:none;">
<li style="display: block"><a href='https://discord.gg/6btFPPj'><i class="fa fa-discord fa-fw"></i> Discord</a></li>
<li style="display: block"><a href='https://mail.python.org/mailman3/lists/fury.python.org'><i class="fa fa-envelope fa-fw"></i> Mailing list</a></li>
<li style="display: block"><a href='https://github.com/fury-gl/fury'><i class="fa fa-github fa-fw"></i> Github</a></li>
<ul>
Contributors
------------
.. raw:: html
<div id="github_visualization_main_container">
<div class="github_visualization_visualization_container">
<div class="github_visualization_basic_stats_container">
<div class="github_visualization_basic_stats" id="github_visualization_repo_stars">
<span class="stat-value banner-start-link">{{ basic_stats["stargazers_count"] }}</span> Stars
<img class="basic_stat_icon" src="_static/images/stars.png">
</div>
<div class="github_visualization_basic_stats" id="github_visualization_repo_forks">
<span class="stat-value">{{ basic_stats["forks_count"] }}</span> Forks
<img class="basic_stat_icon" src="_static/images/forks.png">
</div>
<div class="github_visualization_basic_stats" id="github_visualization_repo_contributors_count">
<span class="stat-value">{{ contributors["total_contributors"] }}</span> Contributors
<img class="basic_stat_icon" src="_static/images/contributors.png">
</div>
<div class="github_visualization_basic_stats" id="github_visualization_repo_commits_count">
<span class="stat-value">{{ contributors["total_commits"] }}</span> Commits
<img class="basic_stat_icon" src="_static/images/commits.png">
</div>
</div>
<div id="github_visualization_contributors_wrapper">
{% for contributor in contributors["contributors"] %}
<a href="{{ contributor.html_url }}" target="_blank">
<div class="github_visualization_contributor_info">
<img class="github_visualization_contributor_img" src="{{ contributor.avatar_url }}">
{% if contributor.fullname %}
<span class="github_visualization_contributor_name">{{ contributor.fullname }}</span>
{% else %}
<span class="github_visualization_contributor_name">{{ contributor.username }}</span>
{% endif %}
<span class="github_visualization_contributor_commits">Commits: {{ contributor.nb_commits }}</span>
<span class="github_visualization_contributor_additions"> ++{{ contributor.total_additions }}</span>
<span class="github_visualization_contributor_deletions"> --{{contributor.total_deletions }}</span>
</div>
</a>
{% endfor %}
</div>
</div>
</div> | 50.836066 | 158 | 0.614318 |
caec226dd55c62188543bc07919ad70efe08ca5b | 37 | rst | reStructuredText | docs/source/advanced/dataio_m.rst | astro-friedel/yggdrasil | 5ecbfd083240965c20c502b4795b6dc93d94b020 | [
"BSD-3-Clause"
] | 22 | 2019-02-05T15:20:07.000Z | 2022-02-25T09:00:40.000Z | docs/source/advanced/dataio_m.rst | astro-friedel/yggdrasil | 5ecbfd083240965c20c502b4795b6dc93d94b020 | [
"BSD-3-Clause"
] | 48 | 2019-02-15T20:41:24.000Z | 2022-03-16T20:52:02.000Z | docs/source/advanced/dataio_m.rst | astro-friedel/yggdrasil | 5ecbfd083240965c20c502b4795b6dc93d94b020 | [
"BSD-3-Clause"
] | 16 | 2019-04-27T03:36:40.000Z | 2021-12-02T09:47:06.000Z | Matlab DataIO API
=================
| 9.25 | 17 | 0.405405 |
ab713d8f7006a333cbd1bc6ee81ff36fd0258cd0 | 2,245 | rst | reStructuredText | docs/source/windows/installation.rst | cloudmesh/windows | f089e9ef83ae1cae9b29c877c462130a7b93b5e8 | [
"Apache-2.0"
] | null | null | null | docs/source/windows/installation.rst | cloudmesh/windows | f089e9ef83ae1cae9b29c877c462130a7b93b5e8 | [
"Apache-2.0"
] | 1 | 2015-06-08T23:15:25.000Z | 2015-06-08T23:15:25.000Z | docs/source/windows/installation.rst | cloudmesh/windows | f089e9ef83ae1cae9b29c877c462130a7b93b5e8 | [
"Apache-2.0"
] | null | null | null | Cygwin Installation
=========================================================================================================
We install cygwin via chocolatey. To do so you first have to
install chocolatey.
Please open a cmd.exe window as administrator (you can do this as follows):
....
Step 1: Install Chocolatey
---------------------------------------------------------------------------------------------------------
You have to copy and paste **one** of the following comamnds into a terminal (cmd or PowerShell).
If you use cmd.exe::
C:> @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
Or if you prefer to use PowerShell you can say::
PS> iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
Step 2: Install Cygwin
---------------------------------------------------------------------------------------------------------
Put the following line into cmd.exe::
choco install --force -y cygwin
Note: if Cygwin is already installed, --force will reinstall it.
Step 3: Install apt-cyg
---------------------------------------------------------------------------------------------------------
Put the following command into Cygwin terminal (its shorcut can be found on your Desktop)::
lynx -source rawgit.com/transcode-open/apt-cyg/master/apt-cyg > apt-cyg
install apt-cyg /bin
Step 4: Install additional packages
---------------------------------------------------------------------------------------------------------
Run the following command in Cygwin terminal::
apt-cyg install wget curl connect-proxy emacs gedit openssh git graphviz grep make nano ncurses nc openssl ping pylint rsync keychain head vi vim which
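
To verify that individual packages were installed correctly, you can query them with ``cygcheck`` (shown here for two of the packages above)::

   cygcheck -c git make
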
Packages can be found `here`_
.. _here: https://cygwin.com/packages/package_list.html
We recommend that you install some packages that you may need in the
future. This includes the following packages.
Security::
   openssh
   keychain
Editors::
nano
emacs
vim
Development::
git
pylint
make
Linux::
ncurses
nc
ping
xtail
wget
Visualization::
graphviz
dialog
| 25.224719 | 202 | 0.564365 |
2370f84100fbffeb23563158f08e84d9607c2bad | 648 | rst | reStructuredText | docs/source/contribute.rst | alorence/django-modern-rpc | 00b70bb72df3f2cfd1594a7b21c09e0b31decb4e | [
"MIT"
] | 89 | 2016-10-03T15:42:30.000Z | 2022-03-05T20:41:34.000Z | docs/source/contribute.rst | alorence/django-modern-rpc | 00b70bb72df3f2cfd1594a7b21c09e0b31decb4e | [
"MIT"
] | 39 | 2017-02-10T16:44:38.000Z | 2022-03-10T09:38:16.000Z | docs/source/contribute.rst | alorence/django-modern-rpc | 00b70bb72df3f2cfd1594a7b21c09e0b31decb4e | [
"MIT"
] | 17 | 2017-06-28T17:06:45.000Z | 2022-02-13T17:03:25.000Z | ============
Get involved
============
There is many way to contribute to project development.
Report issues, suggest enhancements
===================================
If you find a bug, want to ask question about configuration or suggest an improvement to the project, feel free to use
`the issue tracker <https://github.com/alorence/django-modern-rpc/issues>`_. You will need a GitHub account.
Submit a pull request
=====================
If you improved something or fixed a bug by yourself in a fork, you can
`submit a pull request <https://github.com/alorence/django-modern-rpc/pulls>`_. We will be happy to review it before
doing a merge.
| 34.105263 | 118 | 0.682099 |
24bba20c5dc4005c07ad9470aba4ad546ea9a391 | 84 | rst | reStructuredText | docs/usingidle.rst | offmessage/blackpearl | ffbba460fe7fc7fe4d7e3466f5ff13ea0c081fc5 | [
"MIT"
] | null | null | null | docs/usingidle.rst | offmessage/blackpearl | ffbba460fe7fc7fe4d7e3466f5ff13ea0c081fc5 | [
"MIT"
] | null | null | null | docs/usingidle.rst | offmessage/blackpearl | ffbba460fe7fc7fe4d7e3466f5ff13ea0c081fc5 | [
"MIT"
] | null | null | null | .. _using-idle:
Using IDLE with **blackpearl**
==============================
| 14 | 30 | 0.392857 |
c413490e38c3a56636072c5ea69932282aa9aa52 | 2,774 | rst | reStructuredText | app/web/typo3/sysext/core/Documentation/Changelog/8.6/Feature-79622-IntroducingFrameClassForFluidStyledContent.rst | Foberm/cms | a7d0f9bd8b6213820f0f999b9d8ac2508463e9c1 | [
"MIT"
] | null | null | null | app/web/typo3/sysext/core/Documentation/Changelog/8.6/Feature-79622-IntroducingFrameClassForFluidStyledContent.rst | Foberm/cms | a7d0f9bd8b6213820f0f999b9d8ac2508463e9c1 | [
"MIT"
] | null | null | null | app/web/typo3/sysext/core/Documentation/Changelog/8.6/Feature-79622-IntroducingFrameClassForFluidStyledContent.rst | Foberm/cms | a7d0f9bd8b6213820f0f999b9d8ac2508463e9c1 | [
"MIT"
] | null | null | null | .. include:: ../../Includes.txt
==================================================================
Feature: #79622 - Introducing Frame Class for Fluid Styled Content
==================================================================
See :issue:`79622`
Description
===========
In CSS Styled Content it is possible to provide additional CSS classes
for the wrapping container element. This feature is now available
for Fluid Styled Content, too.
The default layout of Fluid Styled Content now passes the value of
`Frame Class` directly to the template and prefixes the value by default
with `frame-<key>`.
Implementation in Fluid Styled Content
--------------------------------------
.. code-block:: html
<div id="c{data.uid}" class="frame frame-{data.frame_class} ...">
...
</div>
Explanation of Keys and Effects of Frame Classes
-------------------------------------------------
=============== =============== ===================== ==================================================
Name Key CSS Class Additional Effects
=============== =============== ===================== ==================================================
Default default frame-default -
Ruler Before    ruler-before    frame-ruler-before    A ruler is added before the output.
Ruler After ruler-after frame-ruler-after A ruler is added after the output.
Indent indent frame-indent Margin of 15% is added to the left and right side.
Indent, 33/66% intent-left frame-indent-left Margin of 33% is added to the left side.
Indent, 66/33% indent-right frame-indent-right Margin of 33% is added to the right side.
No Frame none (none) No Frame is rendered.
=============== =============== ===================== ==================================================
Please note that you need to include the optional static template "Fluid Styled
Content Styling" to have a visual effect on the newly added CSS classes.
Edit Predefined Options
-----------------------
.. code-block:: typoscript
TCEFORM.tt_content.frame_class {
removeItems = default,ruler-before,ruler-after,indent,indent-left,indent-right,none
addItems {
superframe = LLL:EXT:extension/Resources/Private/Language/locallang.xlf:superframe
}
}
.. code-block:: php
    $GLOBALS['TCA']['tt_content']['columns']['frame_class']['config']['items'][] = [
        0 => 'LLL:EXT:extension/Resources/Private/Language/locallang.xlf:superframe',
        1 => 'superframe',
    ];
Impact
======
`Frame Class` is now available to all Fluid Styled Content elements.
.. index:: Fluid, Frontend
| 36.025974 | 110 | 0.526316 |
bea5ba175d38f4f52a31cd95bf8ff67162b1d728 | 184 | rst | reStructuredText | docs/ref/exec/all/network/load_balancer.rst | johnoneill98/idem-azurerm | 37e0743fd236f758b093f5898cc99e95ffde9889 | [
"Apache-2.0"
] | 6 | 2019-10-31T16:01:39.000Z | 2020-01-30T01:13:45.000Z | docs/ref/exec/all/network/load_balancer.rst | johnoneill98/idem-azurerm | 37e0743fd236f758b093f5898cc99e95ffde9889 | [
"Apache-2.0"
] | 69 | 2020-05-05T02:49:25.000Z | 2021-09-20T01:21:34.000Z | docs/ref/exec/all/network/load_balancer.rst | johnoneill98/idem-azurerm | 37e0743fd236f758b093f5898cc99e95ffde9889 | [
"Apache-2.0"
] | 3 | 2020-06-01T19:35:15.000Z | 2021-04-21T19:37:57.000Z | ==================================
exec.azurerm.network.load_balancer
==================================
.. automodule:: idem_azurerm.exec.azurerm.network.load_balancer
:members:
| 26.285714 | 63 | 0.494565 |
ae11c22c356f05ecbeda40564473e13a3084bc11 | 75 | rst | reStructuredText | doc/en/api/template.rst | jonathanverner/brython-jinja2 | cec6e16de1750203a858d0acf590f230fc3bf848 | [
"BSD-3-Clause"
] | 2 | 2020-09-13T17:51:55.000Z | 2020-11-25T18:47:12.000Z | doc/en/api/template.rst | jonathanverner/brython-jinja2 | cec6e16de1750203a858d0acf590f230fc3bf848 | [
"BSD-3-Clause"
] | 2 | 2020-11-25T19:18:15.000Z | 2021-06-01T21:48:12.000Z | doc/en/api/template.rst | jonathanverner/brython-jinja2 | cec6e16de1750203a858d0acf590f230fc3bf848 | [
"BSD-3-Clause"
] | null | null | null | Template
===========
.. automodule:: brython_jinja2.template
:members:
| 12.5 | 39 | 0.626667 |
b64280db6b6d393f7700455faa563d56d18462a6 | 125 | rst | reStructuredText | docs/content/core.rst | SDMStudio/sdms | 43a86973081ffd86c091aed69b332f0087f59361 | [
"MIT"
] | null | null | null | docs/content/core.rst | SDMStudio/sdms | 43a86973081ffd86c091aed69b332f0087f59361 | [
"MIT"
] | null | null | null | docs/content/core.rst | SDMStudio/sdms | 43a86973081ffd86c091aed69b332f0087f59361 | [
"MIT"
] | null | null | null |
Core
==========
This namespace contains some basic classes used in SDMS.
.. toctree::
:maxdepth: 3
core/spaces
| 10.416667 | 56 | 0.6 |
2a27178e87b9e6e97702acfd2c9d7bf1a73cebe2 | 18,928 | rst | reStructuredText | chef_master/source/integrate_chef_automate_saml.rst | emachnic/chef-web-docs | 0183aaa7ee1f59a72a965e6cb701d62013b3d487 | [
"CC-BY-3.0"
] | null | null | null | chef_master/source/integrate_chef_automate_saml.rst | emachnic/chef-web-docs | 0183aaa7ee1f59a72a965e6cb701d62013b3d487 | [
"CC-BY-3.0"
] | null | null | null | chef_master/source/integrate_chef_automate_saml.rst | emachnic/chef-web-docs | 0183aaa7ee1f59a72a965e6cb701d62013b3d487 | [
"CC-BY-3.0"
] | null | null | null | =====================================================
Integrate Chef Automate with SAML for Authentication
=====================================================
`[edit on GitHub] <https://github.com/chef/chef-web-docs/blob/master/chef_master/source/integrate_chef_automate_saml.rst>`__
.. tag chef_automate_mark
.. image:: ../../images/a2_docs_banner.svg
:target: https://automate.chef.io/docs
.. end_tag
.. tag EOL_a1
.. danger:: This documentation applies to a deprecated version of Chef Automate and will reach its `End-Of-Life on December 31, 2019 </https://docs.chef.io/versions.html#deprecated-products-and-versions>`__. See the `Chef Automate site <https://automate.chef.io/docs/quickstart/>`__ for current documentation. The new Chef Automate includes newer out-of-the-box compliance profiles, an improved compliance scanner with total cloud scanning functionality, better visualizations, role-based access control and many other features. The new Chef Automate is included as part of the Chef Automate license agreement and is `available via subscription <https://www.chef.io/pricing/>`_.
.. end_tag
Security Assertion Markup Language (SAML) is an XML-based, open-standard data format for exchanging authentication and authorization data
between parties, in particular, between an identity provider and a service provider. Chef Automate supports SAML-backed Single Sign On (SSO) as a
service provider, integrating with your chosen identity provider.
Configuring SAML for your Chef Automate enterprise
=====================================================
As an enterprise admin, you can configure a SAML Service to enable single sign on. To do this from the Chef Automate UI,
click on the ``Admin`` menu item. From the ``Admin`` screen, navigate to the SAML Setup tab. Once you are on the SAML Setup tab, you can configure the details
necessary to integrate Chef Automate and SAML.
This can either be done by supplying Chef Automate with your Identity Provider's metadata endpoint, or by manually entering the required
fields. Please note that both options require you to set a NameID Policy (explained below).
A Default Role for new users must be set in order to successfully set up a SAML service. Any combination of roles may be selected in the SCM Setup tab; all auto-provisioned users will be assigned these permissions when they first log in to Chef Automate.
.. note:: Metadata-driven SAML configuration enables Chef Automate to periodically update its SAML certificates from this metadata, enabling certificate rolling for signed SAML assertions.
Automatic SAML configuration through Identity Provider metadata
-----------------------------------------------------------------
To make Chef Automate configure SAML automatically from the metadata published by your Identity Provider, check the `Import Metadata` box and
enter its URL in the text field. For example, the metadata endpoint for Azure AD deployments is of the form ``https://login.microsoftonline.com/SOMEHASH/federationmetadata/2007-06/federationmetadata.xml``,
and Okta's metadata is similar to ``https://CORP.okta.com/app/SOMEHASH/sso/saml/metadata``. You should be able to look at the XML document served there,
and find that it starts with the following:
.. code-block:: xml
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" ID="_f4168057-a418-4b84-a250-29b25e927b73" entityID="https://sts.windows.net/1b218ca8-3694-4fcb-ac12-d2112c657830/">
Since it is uncommon to use CA-signed certificates for this, and the set of certificates retrieved from that endpoint is trusted in the
verification of SAML logins, it is crucial for establishing trust to use HTTPS for retrieving the metadata file.
Chef Automate will default to verifying the HTTPS endpoints certificate using your operating system's trusted certificate bundle. See the Trust SSL Certificate section of `Integrate Chef Automate with BitBucket </integrate_delivery_bitbucket.html>`__ for more information.
The periodic refresh can be controlled through ``delivery.rb``. The following are the default settings:
.. code-block:: ruby
auth['saml_metadata_refresh_interval'] = '1d'
auth['saml_metadata_retry_interval'] = '1m'
With these settings, the Identity Provider's metadata will be refreshed every day (`1d`), and if this request fails, Chef Automate will
wait one minute (`1m`) before trying again. On failure, a retry will be attempted five times total. If the retries don't succeed, the next
attempt to fetch the metadata will be at the next refresh interval.
Manual SAML configuration
---------------------------------------------------
Fill out the following fields to configure SAML SSO. These details can often be found through your Identity Provider's metadata file.
#. The Identity Provider's Id, which is a URL that uniquely identifies your SAML identity provider. This is found as an attribute entityId under the EntityDescriptor element. Copy this value and put it in the Identity Provider URL text box. SAML assertions sent to Chef Automate must match this value exactly in the <saml:Issuer> attribute of SAML assertions.
Metadata XML example:
.. code-block:: xml
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" ID="_0579740c-32a1-46a0-a8d0-fb583f0566e7" entityID="https://sts.windows.net/1b218ca8-3694-4fcb-ac12-d2112c657830/">
#. The Identity Provider's SSO Login location. This can be retrieved from the metadata file. Look for the SingleSignOnService element and the Binding and Location attributes in that element. Ensure that the binding is an HTTP-Redirect binding. This is currently the only SSO Login Binding type supported in Chef Automate. Copy this location and put it in the Single Sign-On Login URL text box.
Metadata XML example:
.. code-block:: xml
<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/1b218ca8-3694-4fcb-ac12-d2112c657830/saml2"/>
.. note:: There can be multiple SingleSignOnService tags, each with a different binding.
#. Selection of a Name Id Policy option. The Name Id Policy is used to request a specific user identification format from your Identity Provider (IdP). This can be left at "Default (No Policy)" if a specific format is not required, in which case the IdP will identify the user with its default configured Name Id Policy.
#. A certificate from the IdP is required to verify integrity and authenticity of SAML assertions. From your metadata file copy only the certificate information from the KeyInfo block of XML, leaving out the XML tags. Paste this information into the Identity Provider Certificate box.
Metadata XML example:
.. code-block:: xml
<KeyDescriptor use="signing">
<KeyInfo>
<X509Data>
<X509Certificate>
MIIC4jCCAcqgAwIBAgIQQNXrmzh…..
</X509Certificate>
</X509Data>
</KeyInfo>
</KeyDescriptor>
Removing SAML configuration
-----------------------------------------------
The SAML configuration UI also allows for the removal of SAML configuration from the system. In order to remove the configuration, navigate to the SAML Setup tab, and then click on the `Remove Configuration` button. After a confirmation prompt, the SAML configuration will be removed from Chef Automate. Once the configuration is removed, SAML users will no longer be able to log into Chef Automate.
.. note:: The SAML type accounts that may have been created will still continue to exist even after the SAML configuration has been removed.
Configuring your Identity Provider to accept SAML requests from Chef Automate
=================================================================================
To configure your IdP to accept SAML requests, you need the following:
* The entity identification, or the issuer. If you have not overridden this setting in your `delivery.rb` (see below), enter:
.. code-block:: none
https://<yourChefAutomateDomain>/api/v0/e/<yourEnterprise>/saml/metadata
* Assertion Consumer Service / Reply URL. This is where Chef Automate receives SAML assertions from the Identity Provider:
.. code-block:: none
https://<yourChefAutomateDomain>/api/v0/e/<yourEnterprise>/saml/consume
* Audience. This will be the metadata URL for Chef Automate:
.. code-block:: none
https://<yourChefAutomateDomain>/api/v0/e/<yourEnterprise>/saml/metadata
Chef Automate currently only supports a subset of existing SAML communication schemes. To ensure this works with your IdP, please
ensure these configuration options are set up:
* Check that the identity provider endpoints are configured to accept ``HTTP-Redirect`` from the service provider.
* Check that the identity provider is configured to use ``HTTP-POST`` to connect to the endpoints of the service provider.
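
To sanity-check the service provider side, the metadata document Chef Automate publishes can be fetched directly (the enterprise name and domain below are illustrative):

.. code-block:: bash

   $ curl https://delivery.corp.com/api/v0/e/MyEnterprise/saml/metadata
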
Enabling users to authenticate through SAML
=====================================================
By default, any users that authenticate successfully with the configured Identity Provider will be logged in: both users with
existing user accounts in Chef Automate that are set up for SAML authentication, and users hitherto unknown to Chef Automate,
which then get a user account created in Chef Automate automatically. It is also possible to migrate existing users, or to
create SAML users manually.
Auto-provisioned users
----------------------------------------------------
The new user's name will match their NameId value as reported by the Identity Provider (see below for the possible options).
Also note that changing the NameId Policy settings after users have been created automatically will lead to new user accounts being
created -- since their NameId no longer matches a user's username in Chef Automate.
These users will be assigned the default role(s) selected as part of the SAML configuration within the enterprise.
Migrating existing users and manual user creation
----------------------------------------------------
To use SAML for existing users, they can be migrated from Chef Automate or LDAP authentication. This can also be used to create SAML
users in Chef Automate before they have logged in with SAML for the first time (triggering auto-provisioning). For example, this allows you to grant a
user more roles in their enterprise. The username in Chef Automate must match the NameId, such as email address, of the user in their
Identity Provider. See `Notes on NameId Policy <#notes-on-nameid-policy>`_ for more information.
To migrate an account:
#. Click on the `Admin` menu item.
#. Click on the user you wish to edit.
#. The current authentication type will be highlighted. Change it to `SAML`.
#. Rename username to match the user's full email address associated with their SAML account.
#. Click `Save and Close`.
Chef Automate makes a SAML request to the Identity Provider with the NameIdPolicy Format of ``urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress``. Your Identity Provider must support NameIds in this format.
It is recommended that an administrator account remain a Chef Automate authenticated user. This will allow an administrator to access Chef Automate in the case of a SAML misconfiguration or problem with the SAML Identity Provider.
.. note:: For Okta users, Okta has to be configured to provide a user's first name, email address, and last name. When you are setting up SAML for Chef Automate, log into your Okta account. From the `Admin` tab, go to Applications -> Your Application -> General -> SAML Settings. Click the edit button, and then on step 2, "Configure SAML", in the section "ATTRIBUTE STATEMENTS (OPTIONAL)", set up the attribute mappings with the following values:
.. image:: ../../images/samlattributes.jpg
Notes on NameId Policy
=====================================================
The Name Id Policy is important because it identifies the user that the SAML assertion applies to. In order for Chef Automate to authenticate the user, the Name Id that the IdP returns must exactly match a Chef Automate SAML user name. In addition, it must match the user name that was entered at the Chef Automate login page. Therefore, the IdP or the SP must be configured with an appropriate Name Id Policy. In some cases, you (or your system administrator) may need to either negotiate or configure the Name Id Policy on the IdP itself.
Name Id mismatches will lead to logins that are successful as far as the IdP is concerned but do not log in the expected Chef Automate user. Instead, a new user will be provisioned with a username matching the returned NameId.
The following Name Id policies are not supported by Chef Automate: Transient, X509Subject.
For illustration purposes, below we discuss two common scenarios:
#. Configure Name Id Policy on the IdP side:
If you are using an IdP such as Okta, you can configure the Name Id Policy when your application is added to Okta . For more information, see
`<http://developer.okta.com/docs/guides/setting_up_a_saml_application_in_okta>`_.
In this case, you can leave the Name Id Policy setting on the Chef Automate side to "Default (No Policy)", since the IdP will always return what is pre-configured.
#. Configure Name Id Policy on the Chef Automate side:
On the other hand, you may be using an IdP (for example Microsoft Azure), that does not allow configuration of the Name Id Policy during application setup. For more information, see
`<https://azure.microsoft.com/en-us/documentation/articles/active-directory-authentication-scenarios/>`_.
In this case, you will need to request a specific Name Id Policy through the Chef Automate configuration - for example, 'Email Address'.
Notes on EntityId
=====================================================
By default, Chef Automate's SAML integration will use EntityId ``https://<yourChefAutomateDomain>/api/v0/e/<yourEnterprise>/saml/metadata``. This can be overridden in ``delivery.rb`` as follows:
.. code-block:: ruby
auth['saml_entity_id'] = 'https://delivery.corp.com/saml'
Workflow ('delivery') CLI
=====================================================
The Workflow CLI in Chef Automate (``delivery-cli``) can be used with SAML-authenticated users:
#. When SAML is configured, ``delivery token`` defaults to SAML-authenticating the user, and it will prompt the user to use their browser to login to Chef Automate:
.. code-block:: bash
   $ delivery token
   Chef Automate
Loading configuration from /path/to/project
Requesting Token
Press Enter to open a browser window to retrieve a new token. [ENTER]
Launching browser.
#. The Chef Automate CLI will then wait for the user to enter the token retrieved from the web interface:
.. code-block:: none
Enter token:
#. The token retrieved will then be verified and saved in the usual token store.
.. code-block:: none
Enter token: [enter oMMoQ9N7XXYHI6X6lV7GaxEjxEP4Yv1TafTx7hFWH1U=]
token: oMMoQ9N7XXYHI6X6lV7GaxEjxEP4Yv1TafTx7hFWH1U=
saved API token to: /Users/alice/.delivery/api-tokens
token: oMMoQ9N7XXYHI6X6lV7GaxEjxEP4Yv1TafTx7hFWH1U=
Verifying Token: valid
#. To log in as an internal user when SAML is configured, use the option ``--saml=false``
Enabling SAML proxying for Chef Server
=====================================================
The integration between the management console in Chef Infra Server and Chef Automate's SAML capabilities is done using OpenID Connect.
OpenID Connect Signing Key
-----------------------------------------------------
Chef Automate signs the ID token given to the management console following successful SAML authentication. To do that, a private signing key needs to be provided.
An alternate location can be configured in ``/etc/delivery/delivery.rb``:
.. code-block:: ruby
auth['oidc_signing_private_key'] = '/etc/delivery/oidc_signing_private_key.pem' # this is the default
If the file does not exist, a 2048-bit RSA key will be generated using OpenSSL (when running ``automate-ctl reconfigure``). You can also provide that RSA private key in PEM format yourself:
.. code-block:: none
/etc/delivery# cat > oidc_signing_private_key.pem <<EOF
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDfBg/WS60hE8k/
4R3qvcoiH3noL0mQ0rUEEsfXEEiXgg2Wr0Vt7p9bB7rGH/6BTxEscVQbcpmpHeFu
TNvuPsENy9thT5lNWVH6goO1O9MsasqfXbLoZYprV/lA2V32ol5DpCyN09ozO1u0
LhMhnDqEgOiYpDiGw2HQNR58AuBqTxWvbc7ML5muDJ3/K2bf40uAYkziZA2Nv2Z3
...
-----END PRIVATE KEY-----
EOF
/etc/delivery#
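
If you prefer to generate the key yourself rather than let ``automate-ctl reconfigure`` create one, a standard OpenSSL invocation along these lines (shown as an illustration) produces a suitable 2048-bit RSA key:

.. code-block:: bash

   openssl genrsa -out /etc/delivery/oidc_signing_private_key.pem 2048
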
You can verify that Chef Automate can read and parse your key by accessing ``https://<yourChefAutomateDomain>/api/v0/oidc/jwks``:
.. code-block:: bash
$ curl https://delivery.corp.com/api/v0/oidc/jwks | jq .
{
"keys": [
{
"alg": "RS256",
"e": "AQAB",
"kid": "1",
"kty": "RSA",
"n": "3wYP1kutIRPJP-Ed6r3KIh956C9JkNK1BBLH1xBIl4INlq9Fbe6fWwe6xh_-gU8RLHFUG3KZqR3hbkzb7j7BDcvbYU-ZTVlR-oKDtTvTLGrKn12y6GWKa1f5QNld9qJeQ6QsjdPaMztbtC4TIZw6hIDomKQ4hsNh0DUefALgak8Vr23OzC-Zrgyd_ytm3-NL
gGJM4mQNjb9md2eoUHh5iTpvbxCFQDA3LMBZje7Ls45mNvjC8wAX6b26fq1otoxmGeDiMoovjIFWp3tL3_KphTs0mDOoBQsEUA9FtZJXGBWQIyEibM5v9LBt43s8lJqAVMfVzSNW8uXKhBC9O7h2ZQ",
"use": "sig"
}
]
}
If no key is configured or the key file can't be read, the keys array will be empty: ``[]``.
Chef Infra Server as OpenID Connect client
---------------------------------------------------
To allow Chef Infra Server to act as an OpenID Connect client to Chef Automate, it needs to be known to Chef Automate. To achieve this, add the following to your ``/etc/delivery/delivery.rb``
.. code-block:: ruby
auth['oidc_clients'] = {
'manage-client-id' => {
'client_secret' => 'ohai',
'client_redirect_uri' => 'https://manage.corp.com/oidc/callback'
}
}
In the above snippet, the 'manage-client-id' should be a unique string for each Chef Infra Server whose management console will authenticate through SAML. Also, if you have multiple Chef Servers that will authenticate through SAML, you will need to create additional entries for the client id, the client secret and the client redirect URI in the section above for each one.
Configuration of Chef Server
-----------------------------------------------------
Note that all of the client-related values need to match the configuration in the Chef Infra Server management console.
See `Configuring for SAML Authentication </server_configure_saml.html>`__ for more details.
Troubleshooting
===================================================================
If you have problems with SAML configuration and integration, see the SAML section of `Troubleshooting Chef Automate Deployments </troubleshooting_chef_automate.html>`__ for debugging tips.
| 58.24 | 678 | 0.732143 |
15c0841e946c28df2eeef6ee1747083a8318ac00 | 2,872 | rst | reStructuredText | source/husky-env/src/husky/husky_base/CHANGELOG.rst | qobi/amazing-race | 4c8fc673ded6c3eff2bf81db612196f4b1768e49 | [
"MIT"
] | null | null | null | source/husky-env/src/husky/husky_base/CHANGELOG.rst | qobi/amazing-race | 4c8fc673ded6c3eff2bf81db612196f4b1768e49 | [
"MIT"
] | null | null | null | source/husky-env/src/husky/husky_base/CHANGELOG.rst | qobi/amazing-race | 4c8fc673ded6c3eff2bf81db612196f4b1768e49 | [
"MIT"
] | null | null | null | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package husky_base
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.3.1 (2018-08-02)
------------------
0.3.0 (2018-04-11)
------------------
* Fix default tyre radius
* changed the tire radius in base.launch to reflect a 13 inch Husky outdoor tire
* Remove defunct email address
* Updated maintainers.
* Update bringup for multirobot
* Update URDF for multirobot
* Move packages into monorepo for kinetic; strip out ur packages
* Contributors: Martin Cote, Paul Bovbel, Tony Baltovski, Wolfgang Merkt
0.2.6 (2016-10-03)
------------------
* Adding support for the UM7 IMU.
* Contributors: Tony Baltovski
0.2.5 (2015-12-31)
------------------
* Fix absolute value to handle negative rollover readings effectively
* Another bitwise fix, now for x86.
* Formatting
* Fix length complement check.
There's a subtle difference in how ~ is implemented in aarch64 which
causes this check to fail. The new implementation should work on x86
and ARM.
* Contributors: Mike Purvis, Paul Bovbel
0.2.4 (2015-07-08)
------------------
0.2.3 (2015-04-08)
------------------
* Integrate husky_customization workflow
* Contributors: Paul Bovbel
0.2.2 (2015-03-23)
------------------
* Fix package urls
* Contributors: Paul Bovbel
0.2.1 (2015-03-23)
------------------
* Add missing dependencies
* Contributors: Paul Bovbel
0.2.0 (2015-03-23)
------------------
* Add UR5_ENABLED envvar
* Contributors: Paul Bovbel
0.1.5 (2015-02-19)
------------------
* Fix duration cast
* Contributors: Paul Bovbel
0.1.4 (2015-02-13)
------------------
* Correct issues with ROS time discontinuities - now using monotonic time source
* Implement a sane retry policy for communication with MCU
* Contributors: Paul Bovbel
0.1.3 (2015-01-30)
------------------
* Update description and maintainers
* Contributors: Paul Bovbel
0.1.2 (2015-01-20)
------------------
* Fix library install location
* Contributors: Paul Bovbel
0.1.1 (2015-01-13)
------------------
* Add missing description dependency
* Contributors: Paul Bovbel
0.1.0 (2015-01-12)
------------------
* Fixed encoder overflow issue
* Ported to ros_control for Indigo release
* Contributors: Mike Purvis, Paul Bovbel, finostro
0.0.5 (2013-10-04)
------------------
* Mark the config directory to install.
0.0.4 (2013-10-03)
------------------
* Parameterize husky port in env variable.
0.0.3 (2013-09-24)
------------------
* Add launchfile check.
* removing imu processing by dead_reckoning.py
* removing dynamic reconfigure from dead_reckoning because it was only there for handling gyro correction
* adding diagnostic aggregator and its related config file under config/diag_agg.yaml
0.0.2 (2013-09-11)
------------------
* Fix diagnostic_msgs dependency.
0.0.1 (2013-09-11)
------------------
* New husky_base package for Hydro, which contains the nodes
formerly in husky_bringup.
| 25.415929 | 105 | 0.641713 |
ac156180ba696457cea11da0ef8237cebd81238b | 22,488 | rst | reStructuredText | docs/kubernetes/operations/tasks/backends/cvs_gcp.rst | ffilippopoulos/trident | 39a9d9167c3770e9aed4f221d23ae642bef40038 | [
"Apache-2.0"
] | null | null | null | docs/kubernetes/operations/tasks/backends/cvs_gcp.rst | ffilippopoulos/trident | 39a9d9167c3770e9aed4f221d23ae642bef40038 | [
"Apache-2.0"
] | null | null | null | docs/kubernetes/operations/tasks/backends/cvs_gcp.rst | ffilippopoulos/trident | 39a9d9167c3770e9aed4f221d23ae642bef40038 | [
"Apache-2.0"
] | null | null | null | #############################
Cloud Volumes Service for GCP
#############################
.. note::
The NetApp Cloud Volumes Service for GCP does not support CVS-Performance volumes less than 100 GiB in size, or CVS
volumes less than 1 TiB in size. To make it easier to deploy applications, Trident automatically creates volumes of
the minimum size if a too-small volume is requested. Future releases of the Cloud Volumes Service may remove this
restriction.
Preparation
-----------
To create and use a Cloud Volumes Service (CVS) for GCP backend, you will need:
* A `GCP account configured with NetApp CVS`_
* Project number of your GCP account
* GCP service account with the ``netappcloudvolumes.admin`` role
* API key file for your CVS service account
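If you use the ``gcloud`` command-line tool, the project number and API key file can be gathered
roughly as follows. This is a sketch, not an exact recipe: substitute your own project ID and
service-account name, and note that the ``roles/netappcloudvolumes.admin`` role string shown in
the binding is an assumption based on the role named above.

.. code-block:: bash

    # Look up the numeric project number (the "projectNumber" backend option)
    gcloud projects describe my-gcp-project --format='value(projectNumber)'

    # Grant the CVS admin role to an existing service account
    gcloud projects add-iam-policy-binding my-gcp-project \
        --member='serviceAccount:cloudvolumes-admin-sa@my-gcp-project.iam.gserviceaccount.com' \
        --role='roles/netappcloudvolumes.admin'

    # Download a JSON key file; its contents go verbatim into the "apiKey" field
    gcloud iam service-accounts keys create cvs-api-key.json \
        --iam-account='cloudvolumes-admin-sa@my-gcp-project.iam.gserviceaccount.com'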
Backend configuration options
-----------------------------
========================= ================================================================= =================================================
Parameter Description Default
========================= ================================================================= =================================================
version Always 1
storageDriverName "gcp-cvs"
backendName Custom name for the storage backend Driver name + "_" + part of API key
storageClass Type of storage. Choose from ``hardware`` [Performance Optimized] "hardware"
or ``software`` [Scale Optimized (beta)]
projectNumber GCP account project number
apiRegion CVS account region
apiKey API key for GCP service account with CVS admin role
proxyURL Proxy URL if proxy server required to connect to CVS Account
nfsMountOptions Fine-grained control of NFS mount options "nfsvers=3"
limitVolumeSize Fail provisioning if requested volume size is above this value "" (not enforced by default)
network GCP network used for CVS volumes "default"
serviceLevel The CVS service level for new volumes "standard"
debugTraceFlags Debug flags to use when troubleshooting.
E.g.: {"api":false, "method":true} null
========================= ================================================================= =================================================
.. warning::
Do not use ``debugTraceFlags`` unless you are troubleshooting and require a
detailed log dump.
The required value ``projectNumber`` may be found in the GCP web portal's Home screen. The ``apiRegion`` is the
GCP region where this backend will provision volumes. The ``apiKey`` is the JSON-formatted contents of a GCP
service account's private key file (copied verbatim into the backend config file). The service account must have
the ``netappcloudvolumes.admin`` role.
The ``storageClass`` is an optional parameter that can be used to choose the
desired `CVS service type <https://cloud.google.com/solutions/partners/netapp-cloud-volumes/service-types?hl=en_US>`_.
Users can choose from the base CVS service type [``storageClass=software``] or the CVS-Performance service
type [``storageClass=hardware``], which Trident uses by default. Make sure you specify an ``apiRegion`` that
provides the respective CVS ``storageClass`` in your backend definition.
.. note::
Trident's integration with the base
`CVS service type <https://cloud.google.com/solutions/partners/netapp-cloud-volumes/service-types?hl=en_US>`_
on GCP is a **beta feature**, not meant for production
workloads. Trident is **fully supported** with the CVS-Performance service type
and uses it by default.
The ``proxyURL`` config option must be used if a proxy server is needed to communicate with GCP. The proxy server
may be either an HTTP proxy or an HTTPS proxy. With an HTTPS proxy, certificate validation is skipped to allow the
use of self-signed certificates on the proxy server. Proxy servers with authentication enabled are not supported.
Each backend provisions volumes in a single GCP region. To create volumes in other regions, you can define additional
backends.
The serviceLevel values for CVS on GCP are ``standard``, ``premium``, and ``extreme``.
You can control how each volume is provisioned by default by setting the following options in the ``defaults``
section of the backend configuration, as shown in the configuration examples below.
========================= =============================================================== ================================================
Parameter Description Default
========================= =============================================================== ================================================
exportRule The export rule(s) for new volumes "0.0.0.0/0"
snapshotDir Controls visibility of the .snapshot directory "false"
snapshotReserve Percentage of volume reserved for snapshots "" (accept CVS default of 0)
size The size of new volumes "1T"
========================= =============================================================== ================================================
The ``exportRule`` value must be a comma-separated list of any combination of
IPv4 addresses or IPv4 subnets in CIDR notation.
.. note::
For all volumes created on a GCP backend, Trident will copy all labels present
on a :ref:`storage pool <gcp-virtual-storage-pool>` to the storage volume at
the time it is provisioned. Storage admins can define labels per storage pool
and group all volumes created per storage pool. This provides a convenient way
of differentiating volumes based on a set of customizable labels that are
provided in the backend configuration.
Example configurations
----------------------
**Example 1 - Minimal backend configuration for gcp-cvs driver**
.. code-block:: json
{
"version": 1,
"storageDriverName": "gcp-cvs",
"projectNumber": "012345678901",
"apiRegion": "us-west2",
"apiKey": {
"type": "service_account",
"project_id": "my-gcp-project",
"private_key_id": "1234567890123456789012345678901234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nXsYg6gyxy4zq7OlwWgLwGa==\n-----END PRIVATE KEY-----\n",
"client_email": "cloudvolumes-admin-sa@my-gcp-project.iam.gserviceaccount.com",
"client_id": "123456789012345678901",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/cloudvolumes-admin-sa%40my-gcp-project.iam.gserviceaccount.com"
}
}
**Example 2 - Backend configuration for gcp-cvs driver with the base CVS service type**
This example shows a backend definition that uses the base CVS service type, which
is meant for general-purpose workloads and provides light/moderate performance,
coupled with high zonal availability. This is a **beta** Trident integration that
can be used in test environments.
.. code-block:: json
{
"version": 1,
"storageDriverName": "gcp-cvs",
"projectNumber": "012345678901",
"storageClass": "software",
"apiRegion": "us-east4",
"apiKey": {
"type": "service_account",
"project_id": "my-gcp-project",
"private_key_id": "1234567890123456789012345678901234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nXsYg6gyxy4zq7OlwWgLwGa==\n-----END PRIVATE KEY-----\n",
"client_email": "cloudvolumes-admin-sa@my-gcp-project.iam.gserviceaccount.com",
"client_id": "123456789012345678901",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/cloudvolumes-admin-sa%40my-gcp-project.iam.gserviceaccount.com"
}
}
**Example 3 - Backend configuration for gcp-cvs driver with single service level**
This example shows a backend file that applies the same aspects to all Trident created storage in the GCP us-west2
region. This example also shows the usage of proxyURL config option in a backend file.
.. code-block:: json
{
"version": 1,
"storageDriverName": "gcp-cvs",
"projectNumber": "012345678901",
"apiRegion": "us-west2",
"apiKey": {
"type": "service_account",
"project_id": "my-gcp-project",
"private_key_id": "1234567890123456789012345678901234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nXsYg6gyxy4zq7OlwWgLwGa==\n-----END PRIVATE KEY-----\n",
"client_email": "cloudvolumes-admin-sa@my-gcp-project.iam.gserviceaccount.com",
"client_id": "123456789012345678901",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/cloudvolumes-admin-sa%40my-gcp-project.iam.gserviceaccount.com"
},
"proxyURL": "http://proxy-server-hostname/",
"nfsMountOptions": "vers=3,proto=tcp,timeo=600",
"limitVolumeSize": "10Ti",
"serviceLevel": "premium",
"defaults": {
"snapshotDir": "true",
"snapshotReserve": "5",
"exportRule": "10.0.0.0/24,10.0.1.0/24,10.0.2.100",
"size": "5Ti"
}
}
.. _gcp-virtual-storage-pool:
**Example 4 - Backend and storage class configuration for gcp-cvs driver with virtual storage pools**
This example shows the backend definition file configured with :ref:`Virtual Storage Pools <Virtual Storage Pools>`
along with StorageClasses that refer back to them.
In the sample backend definition file shown below, specific defaults are set for all storage pools, which set the
``snapshotReserve`` at 5% and the ``exportRule`` to 0.0.0.0/0. The virtual storage pools are defined in the
``storage`` section. In this example, each individual storage pool sets its own ``serviceLevel``, and some pools
overwrite the default values set above.
.. code-block:: json
{
"version": 1,
"storageDriverName": "gcp-cvs",
"projectNumber": "012345678901",
"apiRegion": "us-west2",
"apiKey": {
"type": "service_account",
"project_id": "my-gcp-project",
"private_key_id": "1234567890123456789012345678901234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nznHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m\nXsYg6gyxy4zq7OlwWgLwGa==\n-----END PRIVATE KEY-----\n",
"client_email": "cloudvolumes-admin-sa@my-gcp-project.iam.gserviceaccount.com",
"client_id": "123456789012345678901",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/cloudvolumes-admin-sa%40my-gcp-project.iam.gserviceaccount.com"
},
"nfsMountOptions": "vers=3,proto=tcp,timeo=600",
"defaults": {
"snapshotReserve": "5",
"exportRule": "0.0.0.0/0"
},
"labels": {
"cloud": "gcp"
},
"region": "us-west2",
"storage": [
{
"labels": {
"performance": "extreme",
"protection": "extra"
},
"serviceLevel": "extreme",
"defaults": {
"snapshotDir": "true",
"snapshotReserve": "10",
"exportRule": "10.0.0.0/24"
}
},
{
"labels": {
"performance": "extreme",
"protection": "standard"
},
"serviceLevel": "extreme"
},
{
"labels": {
"performance": "premium",
"protection": "extra"
},
"serviceLevel": "premium",
"defaults": {
"snapshotDir": "true",
"snapshotReserve": "10"
}
},
{
"labels": {
"performance": "premium",
"protection": "standard"
},
"serviceLevel": "premium"
},
{
"labels": {
"performance": "standard"
},
"serviceLevel": "standard"
}
]
}
The following StorageClass definitions refer to the above Virtual Storage Pools. Using the ``parameters.selector``
field, each StorageClass calls out which virtual pool(s) may be used to host a volume. The volume will have the
aspects defined in the chosen virtual pool.
The first StorageClass (``cvs-extreme-extra-protection``) will map to the first Virtual Storage Pool. This is the
only pool offering extreme performance with a snapshot reserve of 10%. The last StorageClass (``cvs-extra-protection``)
calls out any storage pool which provides a snapshot reserve of 10%. Trident will decide which Virtual Storage Pool is
selected and will ensure the snapshot reserve requirement is met.
.. code-block:: yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: cvs-extreme-extra-protection
provisioner: netapp.io/trident
parameters:
selector: "performance=extreme; protection=extra"
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: cvs-extreme-standard-protection
provisioner: netapp.io/trident
parameters:
selector: "performance=premium; protection=standard"
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: cvs-premium-extra-protection
provisioner: netapp.io/trident
parameters:
selector: "performance=premium; protection=extra"
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: cvs-premium
provisioner: netapp.io/trident
parameters:
selector: "performance=premium; protection=standard"
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: cvs-standard
provisioner: netapp.io/trident
parameters:
selector: "performance=standard"
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: cvs-extra-protection
provisioner: netapp.io/trident
parameters:
selector: "protection=extra"
allowVolumeExpansion: true
.. _GCP account configured with NetApp CVS: https://cloud.netapp.com/cloud-volumes-service-for-gcp?utm_source=NetAppTrident_ReadTheDocs&utm_campaign=Trident
========================================================================
Building Simple Models with Landlab's Gridding Library: A Tutorial Guide
========================================================================
When creating a two-dimensional simulation model, often the most time-consuming and
error-prone task involves writing the code to set up the underlying grid. Irregular
(or "unstructured") grids are especially tricky to implement. Landlab's **ModelGrid**
package makes this process much easier, by providing a set of library routines for
creating and managing a 2D grid, attaching data to the grid, performing common input
and output operations, and providing library functions that handle common numerical
operations such as calculating a field of gradients for a particular state variable.
By taking care of much of the overhead involved in writing grid-management code,
**ModelGrid** is designed to help you build 2D models quickly and efficiently, freeing you
to concentration on the science behind the code.
Some of the things you can do with **ModelGrid** include:
- Create and configure a structured or unstructured grid in a one or a few lines of code
- Create various data arrays attached to the grid
- Easily implement staggered-grid finite-difference / finite-volume schemes
- Calculate gradients in state variables in a single line
- Calculate net fluxes in/out of grid cells in a single line
- Set up and run "link-based" cellular automaton models
- Switch between structured and unstructured grids without needing to change the rest of
the code
- Develop complete, 2D numerical finite-volume or finite-difference models much more
quickly and efficiently than would be possible using straight C, Fortran, Matlab, or
Python code
Some of the Landlab capabilities that work with **ModelGrid** to enable easy numerical modeling include:
- Easily read in model parameters from a formatted text file
- Write grid and data output to netCDF files for import into open-source visualization
packages such as ParaView and VisIt
- Create grids from ArcGIS-formatted ascii files
- Create models by coupling together your own and/or pre-built process components
- Use models built by others from process components
This document provides a basic introduction to building applications using
**ModelGrid**. It covers: (1) how grids are represented, and (2) a set of tutorial examples
that illustrate how to build models using simple scripts.
How a Grid is Represented
=========================
Basic Grid Elements
-------------------
.. _grid:
.. figure:: images/grid_schematic_ab.png
:figwidth: 80%
:align: center
Figure 1: Elements of a model grid. The main grid elements are nodes, links, and faces.
Less commonly used elements include corners, patches, and junctions. In the
spring 2015 version of Landlab, **ModelGrid** can implement raster (a) and
Voronoi-Delaunay (b) grids, as well as radial and hexagonal grids (not shown).
   (Note that not all links and patches are shown, and only one representative cell is shaded.)
:ref:`Figure 1 <grid>` illustrates how **ModelGrid** represents a simulation grid. The
grid contains a set of *(x,y)* points called *nodes*. In a typical
finite-difference or finite-volume model, nodes are the locations at which one tracks
scalar state variables, such as water depth, land elevation, sea-surface elevation,
or temperature.
Each adjacent pair of nodes is connected by a line segment known as
a *link*. A link has both a position in space, denoted
by the coordinates of the two bounding nodes, and a direction: a link
runs from one node (known as its *from-node* or *tail-node*) to another (its *to-node* or *head-node*).
Every node in the grid interior is associated with a polygon known as a *cell* (illustrated,
for example, by the shaded square region in :ref:`Figure 1a <grid>`). Each cell is
bounded by a set of line segments known as *faces*, which it shares with its neighboring
cells.
In the simple case of a regular (raster) grid, the cells are square, the nodes
are the center points of the cells (:ref:`Figure 1a <grid>`), and the links and faces have
identical length (equal to the node spacing). In a Voronoi-Delaunay grid, the
cells are Voronoi polygons (also known as Theissen polygons)
(:ref:`Figure 1b <grid>`). In this case, each cell represents the surface area that
is closer to its own node than to any other node in the grid. The faces then
represent locations that are equidistant between two neighboring nodes. Other grid
configurations are possible as well. The spring 2015 version of Landlab includes
support for hexagonal and radial grids, which are specialized versions of the
Voronoi-Delaunay grid shown in :ref:`Figure 1b <grid>`. Note that the node-link-cell-face
topology is general enough to represent other types of grid; for example, one could use
**ModelGrid's** data structures to implement a quad-tree grid,
or a Delaunay-Voronoi grid in which cells are triangular elements with
nodes at their circumcenters.
Creating a grid is easy. The first step is to import Landlab's RasterModelGrid class (this
assumes you have installed landlab and are working in your favorite Python environment):
>>> from landlab import RasterModelGrid
Now, create a regular (raster) grid with 10 rows and 40 columns, with a node spacing (dx) of 5:
>>> mg = RasterModelGrid(10, 40, 5)
*mg* is a grid object. This grid has 400 ( 10*40 ) nodes. It has 750 ( 10*(40-1) + (10-1)*40 ) links.
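You can check these counts directly. The following is a quick sketch that assumes the grid exposes
``number_of_nodes`` and ``number_of_links`` properties:

>>> mg.number_of_nodes
400
>>> mg.number_of_links
750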
Adding Data to a Landlab Grid Element using Fields
--------------------------------------------------
Landlab has a data structure called *fields* that will store data associated with different types
of grid elements. Fields are convenient because 1) fields create data arrays of the proper length for
the associated data type and 2) fields attach these data to the grid, so that any piece of code that has
access to the grid also has access to the data stored in fields. Suppose you would like to
track the elevation at each node. The following code creates a data field (array) called *elevation*;
the number of elements in the array is the number of nodes:
>>> z = mg.add_zeros('node', 'elevation')
Here *z* is an array of zeros. You can verify that *z* has the same length as the number of nodes:
>>> len(z)
400
Note that *z* is a reference to, not a copy of, the data stored in the model field. This means that if you change *z*, you
also change the data in the ModelGrid's elevation field. You can also change values directly in the ModelGrid's
elevation field:
>>> mg.at_node['elevation'][5] = 1000
Now the sixth element in the model's elevation field array, and therefore in *z*, is equal to 1000. (Remember that the first
element of a Python array has an index of 0.)
You can see all of the field data at the nodes on *mg* with the following:
>>> mg.at_node.keys()
['elevation']
You may recognize this as a dictionary-type structure, where
the keys are the names (as strings) of the data arrays.
There are currently no data assigned to the links, as apparent by the following:
>>> mg.at_link.keys()
[]
Fields can store data at nodes, cells, links, faces, core_nodes, core_cells, active_links, and active_faces.
Core nodes and cells are ones on which the model is performing operations, and active links
connect two core nodes or a core node with an open boundary node. The meanings of core, boundary, active and inactive are
described in more detail in the *Managing Grid Boundaries* section below. Note that when initializing a field, the singular of the grid
element type is provided:
>>> veg = mg.add_ones('cell', 'percent_vegetation')
>>> mg.at_cell.keys()
['percent_vegetation']
Here *veg* is an array of ones that has the same length as the number of cells. Because there are
no cells around the edge of a grid, there are fewer cells than nodes:
>>> len(mg.at_cell['percent_vegetation'])
304
As you can see, fields are convenient because you don't have to keep track of how many nodes, links, cells, etc.
there are on the grid. Further, it is easy for any part of the code to query what data are already associated with the grid
and operate on these data.
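For instance, a routine that receives only the grid can still discover and use whatever data have been
attached to it. Here is a minimal sketch (``report_node_fields`` is a hypothetical helper written for
this guide, not part of Landlab):

>>> def report_node_fields(grid):
...     for name in grid.at_node.keys():
...         print(name + ': ' + str(len(grid.at_node[name])))
>>> report_node_fields(mg)
elevation: 400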
Representing Gradients in a Landlab Grid
----------------------------------------
Finite-difference and finite-volume models usually need to calculate spatial
gradients in one or more scalar variables, and often these gradients are
evaluated between pairs of adjacent nodes. ModelGrid makes these calculations
easier for programmers by providing built-in functions to calculate gradients
along links, and allowing applications to associate an array of gradient values
with their corresponding links or edges. The tutorial examples below illustrate how
this capability can be used to create models of processes such as diffusion and
overland flow.
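For example, using the grid ``mg`` and elevation array ``z`` created in the fields section above, one call
returns a gradient value for every active link (a sketch here; the diffusion tutorial below uses the same
method inside a time loop):

>>> g = mg.calculate_gradients_at_active_links(z)
>>> qs = -0.01*g    # e.g., a downslope soil flux proportional to the gradient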
Other Grid Elements
-------------------
The cell vertices are called *corners* (:ref:`Figure 1, solid squares <grid>`).
Each face is therefore a line segment connecting two corners. The intersection
of a face and a link (or directed edge) is known as a *junction*
(:ref:`Figure 1, open diamonds <grid>`). Often, it is useful to calculate scalar
values (say, ice thickness in a glacier) at nodes, and vector values (say, ice
velocity) at junctions. This approach is sometimes referred to as a
staggered-grid scheme. It lends itself naturally to finite-volume methods, in
which one computes fluxes of mass, momentum, or energy across cell faces, and
maintains conservation of mass within cells. (In the spring 2015 version of Landlab,
there are no supporting functions for the use of junctions.)
Notice that the links also enclose a set of polygons that are offset from the
cells. These secondary polygons are known as *patches* (:ref:`Figure 1,
dotted <grid>`). This means that any grid comprises two complementary tesselations: one
made of cells, and one made of patches. If one of these is a Voronoi
tessellation, the other is a Delaunay triangulation. For this reason, Delaunay
triangulations and Voronoi diagrams are said to be dual to one another: for any
given Delaunay triangulation, there is a unique corresponding Voronoi diagram. With **ModelGrid,** one can
create a mesh with Voronoi polygons as cells and Delaunay triangles as patches
(:ref:`Figure 1b <grid>`). Alternatively, with a raster grid, one simply has
two sets of square elements that are offset by half the grid spacing
(:ref:`Figure 1a <grid>`). Whatever the form of the tessellation, **ModelGrid** keeps
track of the geometry and topology of the grid.
Managing Grid Boundaries
========================
.. _raster4x5:
.. figure:: images/example_raster_grid.png
:figwidth: 80%
:align: center
Figure 2: Illustration of a simple four-row by five-column raster grid created with
:class:`~landlab.grid.raster.RasterModelGrid`. By default, all perimeter
nodes are tagged as open (fixed value) boundaries, and all interior cells
are tagged as core. An active link is one that connects either
two core nodes, or one core node and one open boundary node.
.. _raster4x5openclosed:
.. figure:: images/example_raster_grid_with_closed_boundaries.png
:figwidth: 80 %
:align: center
Figure 3: Illustration of a simple four-row by five-column raster grid with a
combination of open and closed boundaries.
An important component of any numerical model is the method for handling
boundary conditions. In general, it's up to the application developer to manage
boundary conditions for each variable. However, **ModelGrid** makes this task a bit
easier by tagging nodes that are treated as boundaries (*boundary nodes*) and those that are treated as regular nodes belonging to the interior computational domain (*core nodes*). It also allows you to de-activate ("close")
portions of the grid perimeter, so that they effectively act as walls.
Let's look first at how ModelGrid treats its own geometrical boundaries. The
outermost elements of a grid are nodes and links (as opposed to corners and
faces). For example, :ref:`Figure 2 <raster4x5>` shows a sketch of a regular
four-row by five-column grid created by RasterModelGrid. The edges of the grid
are composed of nodes and links. Only the inner six nodes have cells around
them; the remaining 14 nodes form the perimeter of the grid.
All nodes are tagged as either *boundary* or *core*. Those on the
perimeter of the grid are automatically tagged as boundary nodes. Nodes on the
inside are *core* by default, but it is possible to tag some of them as
*boundary* instead (this would be useful, for example, if you wanted to
represent an irregular region, such as a watershed, inside a regular grid). In the example
shown in :ref:`Figure 2 <raster4x5>`, all the interior nodes are *core*, and all
perimeter nodes are *open boundary*.
Boundary nodes are flagged as either *open* or *closed*, and links are tagged as
either *active* or *inactive*. An *active link*
is one that joins either two core nodes, or one *core* and one
*open boundary* node (:ref:`Figure 3 <raster4x5openclosed>`). You can use this
distinction in models to implement closed boundaries by performing flow
calculations only on active links, as the following simple example illustrates.
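In code, de-activating grid edges takes a single call. For instance, to close the top and left edges of a
small raster grid while leaving the bottom and right open (the four boolean arguments are ordered bottom,
right, top, left, as explained with the diffusion tutorial below):

>>> from landlab import RasterModelGrid
>>> grid = RasterModelGrid(4, 5)
>>> grid.set_closed_boundaries_at_grid_edges(False, False, True, True)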
Examples
========
This section illustrates Landlab's grid capabilities through the use of several examples,
including a 2D numerical model of diffusion and a model of overland-flow routing.
Example #1: Modeling Diffusion on a Raster Grid
-----------------------------------------------
The following is a simple example in which we use **ModelGrid** to build an explicit,
finite-volume, staggered-grid model of diffusion. The mathematics of diffusion describe
several different phenomena, including heat conduction in solids, chemical diffusion
of solutes, transport of momentum in a viscous shear flow, and transport of
soil on hillslopes. To make this example concrete, we will use hillslope evolution as
our working case study, though in fact the solution could apply to any of these systems.
To work through this example, you can type in and run the code below, or run the file
*diffusion_with_model_grid.py*, which is located in the Landlab developer distribution
under *docs/model_grid_guide*. The complete source code for the diffusion model is listed
below. After the listing, we will take a closer look at each
piece of the code in turn. Output from the diffusion model is shown in
:ref:`Figure 4 <diff1>`.
.. code-block:: python
#! /usr/env/python
"""
2D numerical model of diffusion, implemented using Landlab's ModelGrid module.
Provides a simple tutorial example of ModelGrid functionality.
Last updated GT May 2014
"""
from landlab import RasterModelGrid
import pylab, time
def main():
"""
In this simple tutorial example, the main function does all the work:
it sets the parameter values, creates and initializes a grid, sets up
the state variables, runs the main loop, and cleans up.
"""
# INITIALIZE
# User-defined parameter values
numrows = 20 # number of rows in the grid
numcols = 30 # number of columns in the grid
dx = 10.0 # grid cell spacing
kd = 0.01 # diffusivity coefficient, in m2/yr
uplift_rate = 0.0001 # baselevel/uplift rate, in m/yr
num_time_steps = 10000 # number of time steps in run
# Derived parameters
dt = 0.1*dx**2 / kd # time-step size set by CFL condition
# Create and initialize a raster model grid
mg = RasterModelGrid(numrows, numcols, dx)
# Set the boundary conditions
mg.set_closed_boundaries_at_grid_edges(False, False, True, True)
# Set up scalar values
z = mg.add_zeros('node', 'Elevation') # node elevations
# Get a list of the core cells
core_cells = mg.get_core_cell_node_ids()
# Display a message, and record the current clock time
print( 'Running diffusion_with_model_grid.py' )
print( 'Time-step size has been set to ' + str( dt ) + ' years.' )
start_time = time.time()
# RUN
# Main loop
for i in range(0, num_time_steps):
# Calculate the gradients and sediment fluxes
g = mg.calculate_gradients_at_active_links(z)
qs = -kd*g
# Calculate the net deposition/erosion rate at each node
dqsds = mg.calculate_flux_divergence_at_nodes(qs)
# Calculate the total rate of elevation change
dzdt = uplift_rate - dqsds
# Update the elevations
z[core_cells] = z[core_cells] + dzdt[core_cells] * dt
# FINALIZE
# Get a 2D array version of the elevations
zr = mg.node_vector_to_raster(z)
# Create a shaded image
pylab.close() # clear any pre-existing plot
im = pylab.imshow(zr, cmap=pylab.cm.RdBu, extent=[0,numcols*dx,0,numrows*dx],
origin='lower')
# add contour lines with labels
cset = pylab.contour(zr, extent=[0,numcols*dx,numrows*dx,0], hold='on',
origin='image')
pylab.clabel(cset, inline=True, fmt='%1.1f', fontsize=10)
# add a color bar on the side
cb = pylab.colorbar(im)
cb.set_label('Elevation in meters')
# add a title and axis labels
pylab.title('Simulated topography with uplift and diffusion')
pylab.xlabel('Distance (m)')
pylab.ylabel('Distance (m)')
# Display the plot
pylab.show()
print('Run time = '+str(time.time()-start_time)+' seconds')
if __name__ == "__main__":
main()
.. _diff1:
.. figure:: images/basic_diffusion_example.png
:figwidth: 80 %
:align: center
Figure 4: Output from the hillslope diffusion model.
Below we explore how the code works line-by-line.
Importing Packages
>>>>>>>>>>>>>>>>>>
.. code-block:: python
from landlab import RasterModelGrid
import pylab, time
We start by importing the grid class ``RasterModelGrid`` from the ``landlab`` package (note that the ``landlab`` package must first be installed; see instructions under :ref:`Installing Landlab <install>`). We'll also import ``pylab`` so we can plot the results, and ``time`` so we can report the time it takes to run the model.
Setting the User-Defined Parameters
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# User-defined parameter values
numrows = 20 # number of rows in the grid
numcols = 30 # number of columns in the grid
dx = 10.0 # grid cell spacing
kd = 0.01 # diffusivity coefficient, in m2/yr
uplift_rate = 0.0001 # baselevel/uplift rate, in m/yr
num_time_steps = 10000 # number of time steps in run
The first thing we'll do in the ``main()`` function is set a group of user-defined parameters. The size of the grid is set by ``numrows`` and ``numcols``, with cell spacing ``dx``. In this example, we have a 20 by 30 grid with 10 m grid spacing, so our domain represents a 200 by 300 m rectangular patch of land. The diffusivity coefficient ``kd`` describes the efficiency of soil creep, while the ``uplift_rate`` indicates how fast the land is rising relative to base level along its boundaries. Finally, we set how many time steps we want to compute.
Note that the code for our simple program lives inside a ``main()`` function. This isn't strictly necessary---we could have put the code in the file without a ``main()`` function and it would work just fine when we run it---but it is good Python practice, and will be helpful later on.
Calculating Derived Parameters
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Derived parameters
dt = 0.1*dx**2 / kd # time-step size set by CFL condition
Next, we calculate the values of parameters that are derived from the user-defined parameters. In this case, we have just one: the time-step size, which is set by the Courant-Friedrichs-Lewy condition for an explicit, finite-difference solution to the diffusion equation (to be on the safe side, we multiply the ratio :math:`\Delta x^2 / k_d` by 0.1 instead of the theoretical limit of 1/2). With the parameter values above, :math:`\Delta t = 1000` years, so our total run duration will be one million years. Remember, though, that the same code could be used for any diffusion application with a source term. For instance, we could model conductive heat flow, with :math:`k_d` representing thermal diffusivity and ``uplift_rate`` representing heat input by, for example, radioactive decay in the earth's crust.
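Plugging in the parameter values above makes the time step explicit:

.. math::

    \Delta t = 0.1 \frac{\Delta x^2}{k_d} = 0.1 \times \frac{(10\ \mathrm{m})^2}{0.01\ \mathrm{m^2/yr}} = 1000\ \mathrm{yr}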
Creating and Configuring the Grid
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Create and initialize a raster model grid
mg = RasterModelGrid(numrows, numcols, dx)
# Set the boundary conditions
mg.set_closed_boundaries_at_grid_edges(False, False, True, True)
Our model grid is created with a call to ``RasterModelGrid()``. (Object-oriented programmers will recognize this as the syntax for creating a new object---in this case a
raster model grid.) The variable ``mg`` now contains a ``RasterModelGrid`` object that has
been configured with 20 rows and 30 columns.
For our boundary conditions, we would like to keep the nodes along the bottom and right edges of the grid fixed at zero elevation. We also want to have the top and left boundaries represent ridge-lines with a fixed horizontal position and no flow of sediment in or out. To accomplish this, we call the ``set_closed_boundaries_at_grid_edges()`` method. (Note: the term *method* is object-oriented parlance for a function that belongs to a particular class of object, and is always called in reference to a particular object). The method takes four boolean arguments, which indicate whether there should be closed boundary condition on the bottom, right, top, and left sides of the grid. Here we have set the flag to ``True`` for the top and left sides. This means that the links connecting the interior nodes to the perimeter nodes along these two sides will be flagged as inactive, just as illustrated (with a smaller grid) in :ref:`Figure 3 <raster4x5openclosed>`. As we'll see in a moment, we will simply not bother to calculate any mass flux across these closed boundaries.
Creating Data
>>>>>>>>>>>>>
.. code-block:: python
# Set up scalar values
z = mg.add_zeros('node', 'Elevation') # node elevations
Our state variable, :math:`z(x,y,t)`, represents the land surface elevation. One of the unique aspects of ModelGrid is that grid-based variables like :math:`z` are represented as 1D rather than 2D Numpy arrays. Why do it this way, if we have a regular grid that naturally lends itself to 2D arrays? The answer is that we might want to have an irregular, unstructured grid, which is much easier to handle with 1D arrays of values. By using 1D arrays for all types of ModelGrid, we allow the user to switch seamlessly between structured and unstructured grids.
We create our data structure for :math:`z` values with ``add_zeros()``, a ModelGrid method that creates and returns a 1D Numpy array filled with zeros (behind the scenes, it also "attaches" the array to the grid; we'll see later why this is useful). The length of the array is equal to the number of nodes in the grid (:math:`20\times 30=600`), which makes sense because we want to have an elevation value associated with every node in the grid.
When we update elevation values, we will want to operate only on the core nodes. To help with this, the listing calls the ``get_core_cell_node_ids()`` method, which returns a 1D numpy array of integers representing the node ID numbers associated with all of the core nodes (of which there are :math:`18\times 28 = 504`). (The overland-flow example later in this guide obtains the same information through the ``core_nodes`` property; a *property* in Python is essentially a variable that belongs to an object, which you can access but not modify directly.) Finally, we display a message to tell the user that we're about to run and with what time step size.
Main Loop
>>>>>>>>>
.. code-block:: python
# Main loop
for i in range(0, num_time_steps):
Our model implements a finite-volume solution to the diffusion equation. The idea here is that we calculate sediment fluxes around the perimeter of each cell. We then integrate these fluxes forward in time to calculate the net change in volume, which is divided by the cell's surface area to obtain an equivalent change in height. The numerical solution is given by:
.. math::
\begin{equation}
\frac{d z_i}{dt} \approx \frac{z^{T+1}_i-z^T_i}{\Delta t}
= - \frac{1}{\Lambda_i} \sum_{j=1}^{N_i} \mathbf{q}_{Sij}^T \lambda_{ij}.
\label{eq:dzdt}
\end{equation}
Here, :math:`z_i^T` is the elevation at node :math:`i` at time step :math:`T`, :math:`t` is time, :math:`\Lambda_i` is the surface area of cell :math:`i`, :math:`N_i` is the number of nodes adjacent to :math:`i` (called the node's *neighbors*), :math:`\mathbf{q}_{Sij}^T` is the sediment flux per unit face width from cell :math:`i` to cell :math:`j`, and :math:`\lambda_{ij}` is the width of the face between cells :math:`i` and :math:`j`. The flux between a pair of adjacent cells is the product of the slope (positive upward) between their associated nodes, :math:`\mathbf{S}_{ij}`, and a transport coefficient, :math:`k_d`,
.. math::
\begin{equation}
\mathbf{q}_{Sij} = - k_d \mathbf{S}_{ij} = - k_d \frac{z_j-z_i}{L_{ij}}
\end{equation}
where :math:`L_{ij}` is the length of the link connecting nodes :math:`i` and :math:`j`. Notice that elevation values (which are scalars) are associated with nodes, while slopes and sediment fluxes (which are vectors) are associated with links and faces. If we want to think of the slopes and fluxes as being calculated at a particular point, that point is the junction between a link and its corresponding face :ref:`Figure 1 <grid>`.
Because we are using a regular (raster) grid with node spacing :math:`\Delta x`, the face width and link length are both equal to :math:`\Delta x` everywhere, and the cell area :math:`\Lambda=\Delta x^2`. This would not be true, however, for an unstructured grid.
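Substituting these raster values reduces the finite-volume scheme above to the familiar five-point explicit
form (written here with the uplift source term :math:`U` used in the tutorial code, and with
:math:`z_E, z_W, z_N, z_S` denoting the elevations of the four neighboring nodes):

.. math::

    z_i^{T+1} = z_i^T + \Delta t \left[ U + \frac{k_d}{\Delta x^2}
    \left( z_E^T + z_W^T + z_N^T + z_S^T - 4 z_i^T \right) \right]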
Calculating gradients and sediment fluxes
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Calculate the gradients and sediment fluxes
g = mg.calculate_gradients_at_active_links(z)
qs = -kd*g
In order to calculate new elevation values, the first quantity we need to know is the gradient (slope) values between all the node pairs. We can calculate this in a single line of code using ModelGrid's ``calculate_gradients_at_active_links()`` method. This method takes a single argument: a 1D numpy array of scalar values associated with nodes. The length of this array must be the same as the number of nodes in the grid. The method calculates the gradients in ``z`` between each pair of nodes. It returns a 1D numpy array, ``g`` (for gradient), the size of which is the same as the number of active links in the grid (the difference between active and inactive links is illustrated in :ref:`Figure 2 <raster4x5>` and :ref:`3 <raster4x5openclosed>`). The sign of each value of ``g`` is positive when the slope runs uphill from a link's *from-node* to its *to-node*, and negative otherwise.
To calculate the sediment fluxes, we multiply each gradient value by the transport coefficient ``kd``. The minus sign simply means that the sediment goes downhill: where the gradient is negative, the flux should be positive, and vice versa. Here, we are taking advantage of numpy's ability to perform mathematical operations on entire arrays in a single line of code, rather than having to write out a ``for`` loop. The line ``qs = -kd*g`` in our code multiplies ``ks`` by every value of ``g``, and returns the result as a numpy array the same size as ``g``.
Calculating net fluxes in and out of cells
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Calculate the net deposition/erosion rate at each node
dqsds = mg.calculate_flux_divergence_at_nodes(qs)
Now that we know the unit flux associated with each link and its corresponding cell face, the next thing we need to do is add up the total flux around the perimeter of each cell. In other words, we need to calculate the summation in equation above. Landlab allows us to do this in one line of code, by calling the ``calculate_flux_divergence_at_nodes()`` method. This method takes a single argument: a 1D Numpy array containing the flux per unit width at each face in the grid. The method multiplies each unit flux by its corresponding face width, adds up the fluxes across each face for each cell, and divides the result by the surface area of the cell. It returns a 1D Numpy array that contains the net rate of change of volume per unit cell area. The length of this array is the same as the number of nodes in the grid. We will store the result in ``dqsds``.
If the boundary nodes around the grid perimeter do not have associated cells, why do we bother calculating net fluxes for them? In fact, we do not need to; we could have called the method ``calculate_flux_divergence_at_core_cells()`` instead. This would have given us an array the length of the number of core cells, not nodes (every core node has a corresponding core cell). There are two reasons to do the net flux calculation at all nodes. The first is simply that the node-based method is slightly faster than the cell-based version. The second is that by using nodes, we retain some information about the flow of mass into the open boundary nodes. This could be useful in testing whether our model correctly balances mass (though we do not actually use that capability in this example).
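For reference, the core-cell variant mentioned above would be called like this (a sketch, assuming its
argument list mirrors that of ``calculate_flux_divergence_at_nodes()``):

.. code-block:: python

    # One net-flux value per core cell, rather than one per node
    dqsds_core = mg.calculate_flux_divergence_at_core_cells(qs)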
Updating elevations
>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Calculate the total rate of elevation change
dzdt = uplift_rate - dqsds
# Update the elevations
z[core_cells] = z[core_cells] + dzdt[core_cells] * dt
When we calculated flux divergence, we got back an array of numbers, ``dqsds``, that represents the rate of gain or loss of sediment volume per unit area at each node. Now we need to combine this information with the source term---representing vertical motion of the soil relative to the base level at the model's fixed boundaries---in order to calculate the total rate of elevation change at the nodes. Once we've calculated rates of change, we update all node elevations by simply multiplying ``dzdt`` by our time step size. We do not want to change the elevations of the boundary nodes, however, and so we perform the update only on the interior cells. Because we are using numpy arrays, we can isolate the core nodes simply by putting our array of node IDs for core nodes inside square brackets.
Plotting the Results
>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Get a 2D array version of the elevations
zr = mg.node_vector_to_raster(z)
# Create a shaded image
pylab.close() # clear any pre-existing plot
im = pylab.imshow(zr, cmap=pylab.cm.RdBu, extent=[0,numcols*dx,0,numrows*dx],
origin='lower')
# add contour lines with labels
cset = pylab.contour(zr, extent=[0,numcols*dx,numrows*dx,0], hold='on',
origin='image')
pylab.clabel(cset, inline=True, fmt='%1.1f', fontsize=10)
# add a color bar on the side
cb = pylab.colorbar(im)
cb.set_label('Elevation in meters')
# add a title and axis labels
pylab.title('Simulated topography with uplift and diffusion')
pylab.xlabel('Distance (m)')
pylab.ylabel('Distance (m)')
# Display the plot
pylab.show()
print('Run time = '+str(time.time()-start_time)+' seconds')
The last section of the ``main`` function plots the result of our calculation. We do this using pylab's ``imshow`` and ``contour`` functions to create a colored image of topography overlain by contours. To use these functions, we need our elevations to be ordered in a 2D array. We obtain a 2D array version of our ``z`` values through a call to RasterModelGrid's ``node_vector_to_raster()`` method.
Running the ``main()`` function
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
if __name__ == "__main__":
main()
The last two lines of code are standard Python syntax. They will execute the ``main`` function when the code is run, but not when the code is simply imported as a module.
That's it. The 2D diffusion code is less than 100 lines long. In fact, only about 20 of these actually implement the diffusion calculation; the rest handle plotting, comments, blank lines, etc.
Checking against the analytical solution
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
To test the diffusion model against an analytical solution, we can change the setup to have closed boundaries on two opposite sides, by modifying the ``set_closed_boundaries_at_grid_edges()`` call in the listing to read:
.. code:: python
mg.set_closed_boundaries_at_grid_edges(True, False, True, False)
This change makes the solution identical in the `y` direction, so that we can compare it with a 1D analytical solution. For a 1D steady state configuration with a constant source term (baselevel lowering) and two fixed boundaries, the elevation field is a parabola:
.. math::
    z(x') = \frac{U}{2k_d} \left( L^2 - x'^2 \right),
where :math:`L` is the half-length of the domain and :math:`x'` is a transformed :math:`x` coordinate such that :math:`x'=0` at the ridge crest. The numerical solution fits the analytical solution quite well (:ref:`Figure 5 <diffan>`).
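A comparison like the right panel of :ref:`Figure 5 <diffan>` can be generated along these lines. This is a
sketch only; it assumes the variables ``uplift_rate``, ``kd``, ``numrows``, ``numcols``, and ``dx`` from the
model, plus the final 2D elevation array ``zr``, are still in scope:

.. code-block:: python

    import numpy as np
    import pylab

    half_len = 0.5*(numcols - 1)*dx     # half-length L of the fixed-boundary span
    x = np.arange(numcols)*dx           # node x-coordinates along one row
    xprime = x - half_len               # transformed coordinate: x'=0 at the crest
    z_analytical = (uplift_rate/(2.*kd))*(half_len**2 - xprime**2)

    pylab.plot(x, zr[numrows//2, :], 'b.', label='numerical (middle row)')
    pylab.plot(x, z_analytical, 'r-', label='analytical')
    pylab.xlabel('Distance (m)')
    pylab.ylabel('Elevation (m)')
    pylab.legend()
    pylab.show()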
.. _diffan:
.. figure:: images/diffusion_raster_with_analytical.png
:scale: 50 %
:align: center
Figure 5: Output from the hillslope diffusion model, compared with the analytical solution (right, red curve).
Example #2: Overland Flow
-------------------------
In this second example, we look at an implementation of the storage-cell algorithm of Bates et al. (2010) [1]_ for modeling flood inundation. In this example, we will use a flat terrain, and prescribe a water depth of 2.5 meters at the left side of the grid. This will create a wave that travels from left to right across the grid. The output is shown in :ref:`Figure 6 <inundation>`.
.. _inundation:
.. figure:: images/inundation.png
:scale: 50%
:align: center
Figure 6: Simulated flood-wave propagation.
Overland Flow Code Listing
>>>>>>>>>>>>>>>>>>>>>>>>>>
The source code listed below can also be found in the file *overland_flow_with_model_grid.py*.
.. code-block:: python
#! /usr/env/python
"""
2D numerical model of shallow-water flow over topography, using the
Bates et al. (2010) algorithm for storage-cell inundation modeling.
Last updated GT May 2014
"""
from landlab import RasterModelGrid
import pylab, time
import numpy as np
def main():
"""
In this simple tutorial example, the main function does all the work:
it sets the parameter values, creates and initializes a grid, sets up
the state variables, runs the main loop, and cleans up.
"""
# INITIALIZE
# User-defined parameter values
numrows = 20
numcols = 100
dx = 50.
n = 0.03 # roughness coefficient
run_time = 1800 # duration of run, seconds
h_init = 0.001 # initial thin layer of water (m)
h_boundary = 2.5 # water depth at left side (m)
g = 9.8
alpha = 0.2 # time-step factor (ND; from Bates et al., 2010)
# Derived parameters
ten_thirds = 10./3. # pre-calculate 10/3 for speed
elapsed_time = 0.0 # total time in simulation
report_interval = 2. # interval to report progress (seconds)
next_report = time.time()+report_interval # next time to report progress
# Create and initialize a raster model grid
mg = RasterModelGrid(numrows, numcols, dx)
        # Set up boundaries. We'll have the right and left sides open, the top and
        # bottom closed. The water depth on the left will be 2.5 m, and on the right
        # just 1 mm.
mg.set_closed_boundaries_at_grid_edges(True, False, True, False)
# Set up scalar values
z = mg.add_zeros('node', 'Land_surface__elevation') # land elevation
h = mg.add_zeros('node', 'Water_depth') + h_init # water depth (m)
q = mg.create_active_link_array_zeros() # unit discharge (m2/s)
dhdt = mg.add_zeros('node', 'Water_depth_time_derivative')
# Left side has deep water
leftside = mg.left_edge_node_ids()
h[leftside] = h_boundary
# Get a list of the core nodes
core_nodes = mg.core_nodes
# Display a message
print( 'Running ...' )
start_time = time.time()
# RUN
# Main loop
while elapsed_time < run_time:
# Report progress
if time.time()>=next_report:
print('Time = '+str(elapsed_time)+' ('
+str(100.*elapsed_time/run_time)+'%)')
next_report += report_interval
# Calculate time-step size for this iteration (Bates et al., eq 14)
dtmax = alpha*mg.dx/np.sqrt(g*np.amax(h))
# Calculate the effective flow depth at active links. Bates et al. 2010
# recommend using the difference between the highest water-surface
# and the highest bed elevation between each pair of nodes.
zmax = mg.max_of_link_end_node_values(z)
w = h+z # water-surface height
wmax = mg.max_of_link_end_node_values(w)
hflow = wmax - zmax
# Calculate water-surface slopes
water_surface_slope = mg.calculate_gradients_at_active_links(w)
# Calculate the unit discharges (Bates et al., eq 11)
q = (q-g*hflow*dtmax*water_surface_slope)/ \
(1.+g*hflow*dtmax*n*n*abs(q)/(hflow**ten_thirds))
# Calculate water-flux divergence and time rate of change of water depth
# at nodes
dhdt = -mg.calculate_flux_divergence_at_nodes(q)
# Second time-step limiter (experimental): make sure you don't allow
# water-depth to go negative
if np.amin(dhdt) < 0.:
shallowing_locations = np.where(dhdt<0.)
time_to_drain = -h[shallowing_locations]/dhdt[shallowing_locations]
dtmax2 = alpha*np.amin(time_to_drain)
dt = np.min([dtmax, dtmax2])
else:
dt = dtmax
# Update the water-depth field
h[core_nodes] = h[core_nodes] + dhdt[core_nodes]*dt
# Update current time
elapsed_time += dt
# FINALIZE
# Get a 2D array version of the elevations
hr = mg.node_vector_to_raster(h)
# Create a shaded image
pylab.close() # clear any pre-existing plot
image_extent = [0, 0.001*dx*numcols, 0, 0.001*dx*numrows] # in km
im = pylab.imshow(hr, cmap=pylab.cm.RdBu, extent=image_extent)
pylab.xlabel('Distance (km)', fontsize=12)
pylab.ylabel('Distance (km)', fontsize=12)
# add contour lines with labels
cset = pylab.contour(hr, extent=image_extent)
pylab.clabel(cset, inline=True, fmt='%1.1f', fontsize=10)
# add a color bar on the side
cb = pylab.colorbar(im)
cb.set_label('Water depth (m)', fontsize=12)
# add a title
pylab.title('Simulated inundation')
# Display the plot
pylab.show()
print('Done.')
print('Total run time = '+str(time.time()-start_time)+' seconds.')
if __name__ == "__main__":
main()
Packages
>>>>>>>>
.. code-block:: python
from landlab import RasterModelGrid
import pylab, time
import numpy as np
For this program, we'll need ModelGrid as well as the pylab, time, and numpy packages.
User-Defined Parameters
>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# User-defined parameter values
numrows = 20
numcols = 100
dx = 50.
n = 0.03 # roughness coefficient
run_time = 1800 # duration of run, seconds
h_init = 0.001 # initial thin layer of water (m)
h_boundary = 2.5 # water depth at left side (m)
g = 9.8
alpha = 0.2 # time-step factor (ND; from Bates et al., 2010)
Several of the user-defined parameters are the same as those used in the diffusion example: the dimensions and cell size of our raster grid, and the duration of the run. Here the duration is in seconds. In addition, we need to specify the Manning roughness coefficient (``n``), the initial water depth (here set to 1 mm), the water depth along the left-hand boundary, gravitational acceleration, and a time-step factor.
Derived Parameters
>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Derived parameters
ten_thirds = 10./3. # pre-calculate 10/3 for speed
elapsed_time = 0.0 # total time in simulation
report_interval = 2. # interval to report progress (seconds)
next_report = time.time()+report_interval # next time to report progress
Here, we pre-calculate the value of 10/3 so as to avoid repeating a division operation throughout the main loop. We also set up some variables to track the progress of the run. The elapsed time refers here to model time. In this model, we use a variable time-step size, and so rather than counting through a predetermined number of iterations, we instead keep track of the elapsed run time and halt the simulation when we reach the desired run duration.
The ``report_interval`` refers to clock time rather than run time. Every two seconds of clock time, we will report the percentage completion to the user, so that he/she is aware that the run is progressing and has an idea how much more is left to go. The variable ``next_report`` keeps track of the next time (on the clock) to report progress to the user.
Setting up the grid and state variables
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Create and initialize a raster model grid
mg = RasterModelGrid(numrows, numcols, dx)
# Set up boundaries. We'll have the right and left sides open, the top and
# bottom closed. The water depth on the left will be 2.5 m, and on the right
# just 1 mm.
mg.set_closed_boundaries_at_grid_edges(True, False, True, False)
# Set up scalar values
z = mg.add_zeros('node', 'Land_surface__elevation') # land elevation
h = mg.add_zeros('node', 'Water_depth') + h_init # water depth (m)
q = mg.create_active_link_array_zeros() # unit discharge (m2/s)
dhdt = mg.add_zeros('node', 'Water_depth_time_derivative')
# Left side has deep water
leftside = mg.left_edge_node_ids()
h[leftside] = h_boundary
# Get a list of the core nodes
core_nodes = mg.core_nodes
Next, we create and configure a raster grid. In this example, we'll have the left and right boundaries open and the top and bottom closed; we set this up with the call to ``set_closed_boundaries_at_grid_edges()``.
Our key variables are as follows: land elevation, ``z`` (which remains constant and uniform at zero in this example), water depth, ``h`` (which starts out at ``h_init``), discharge per unit width, ``q``, and the rate of change of water depth, ``dhdt``. Three of these---elevation, depth, and :math:`dh/dt`---are scalars that are evaluated at nodes. The fourth, discharge, is evaluated at active links.
In this example, we will have the left boundary maintain a fixed water depth of 2.5 m. To accomplish this, we first obtain a list of the ID numbers of the boundary nodes that lie along the left grid edge by calling RasterModelGrid's ``left_edge_node_ids()`` method, which returns a Numpy array containing the IDs. We then use them to set the new depth values on the following line. Finally, we obtain a list of core node IDs, just as we did in the diffusion example.
Main loop, part 1
>>>>>>>>>>>>>>>>>
.. code-block:: python
# Main loop
while elapsed_time < run_time:
# Report progress
if time.time()>=next_report:
print('Time = '+str(elapsed_time)+' ('
+str(100.*elapsed_time/run_time)+'%)')
next_report += report_interval
# Calculate time-step size for this iteration (Bates et al., eq 14)
dtmax = alpha*mg.dx/np.sqrt(g*np.amax(h))
# Calculate the effective flow depth at active links. Bates et al. 2010
# recommend using the difference between the highest water-surface
# and the highest bed elevation between each pair of nodes.
zmax = mg.max_of_link_end_node_values(z)
w = h+z # water-surface height
wmax = mg.max_of_link_end_node_values(w)
hflow = wmax - zmax
# Calculate water-surface slopes
water_surface_slope = mg.calculate_gradients_at_active_links(w)
# Calculate the unit discharges (Bates et al., eq 11)
q = (q-g*hflow*dtmax*water_surface_slope)/ \
(1.+g*hflow*dtmax*n*n*abs(q)/(hflow**ten_thirds))
The main loop uses a ``while`` rather than a ``for`` loop because the time-step size is variable. We begin with a block of code that prints the percentage completion to the screen every two seconds. After this, we calculate a maximum time-step size using the formula of Bates et al. (2010) [1]_, which depends on the grid-cell spacing and on the shallow-water wave celerity, :math:`\sqrt{g h}`. For water depth, we use the maximum value in the grid, because it is this value that will have the greatest celerity and therefore be most restrictive.
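In symbols, the criterion computed by ``dtmax`` (Bates et al., eq. 14) is:

.. math::

    \Delta t_{max} = \alpha \frac{\Delta x}{\sqrt{g \, h_{max}}}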
The next several lines calculate unit discharge values along each active link. To do this, we need to know the effective water depth at each of these locations. Bates et al. (2010) [1]_ recommend using the difference between the highest water-surface elevation and the highest bed-surface elevation at each pair of adjacent nodes---that is, at each active link. To find these maximum values, we call the ``max_of_link_end_node_values()`` method, first with the bed elevation, ``z``, and then with the water-surface elevation, ``w``. The resulting effective flow depths at the active links are stored in a Numpy array called ``hflow``.
Calculating discharge also requires us to know the water-surface gradient at each active link. We find this by calling ``calculate_gradients_at_active_links`` and passing it the water-surface height. We then have everything we need to calculate the discharge values using the Bates et al. (2010) [1]_ formula, which is done with the line
.. code::
q = (q-g*hflow*dtmax*water_surface_slope)/ \
(1.+g*hflow*dtmax*n*n*abs(q)/(hflow**ten_thirds))
Main loop, part 2
>>>>>>>>>>>>>>>>>
.. code-block:: python
# Calculate water-flux divergence and time rate of change of water depth
# at nodes
dhdt = -mg.calculate_flux_divergence_at_nodes(q)
# Second time-step limiter (experimental): make sure you don't allow
# water-depth to go negative
if np.amin(dhdt) < 0.:
shallowing_locations = np.where(dhdt<0.)
time_to_drain = -h[shallowing_locations]/dhdt[shallowing_locations]
dtmax2 = alpha*np.amin(time_to_drain)
dt = np.min([dtmax, dtmax2])
else:
dt = dtmax
# Update the water-depth field
h[core_nodes] = h[core_nodes] + dhdt[core_nodes]*dt
# Update current time
elapsed_time += dt
Because we have no source term in the flow equations---we are assuming there is no rainfall or infiltration to add or remove water in each cell---the rate of depth change is equal to :math:`-\nabla q`, the divergence of water discharge. Just as in the diffusion example, we can calculate the flux divergence in a single line with help from the ``calculate_flux_divergence_at_nodes()`` method.
The next block of code provides a second limit on time-step size, designed to prevent water depth from becoming negative. At some locations in the grid, it is possible that the rate of change of water depth will be negative, meaning that the water depth is becoming shallower over time. If we were to extrapolate this shallowing too far into the future, by taking too big a time step, we could end up with negative water depth. To avoid this situation, we first determine whether there are any locations where ``dhdt`` is negative, using the Numpy ``amin`` function. If there are, we call the Numpy ``where`` function to obtain a list of the node IDs at which the water depth is shallowing. The next line calculates the time it would take to reach zero water thickness. We then find the minimum of these time intervals, and multiply it by the ``alpha`` time-step parameter. This ensures that we won't actually completely drain any cells of water. Finally, we determine which limiting time-step is smaller: ``dtmax``, which reflects the limitation due to fluid velocity, or ``dtmax2``, which is the limitation due to cell drainage. If no cells have :math:`dh/dt<0`, then we simply use the fluid-velocity time step size.
We then update the values of water depth at all core nodes. Finally, we increment the total elapsed time.
Plotting the results
>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Get a 2D array version of the elevations
hr = mg.node_vector_to_raster(h)
# Create a shaded image
pylab.close() # clear any pre-existing plot
image_extent = [0, 0.001*dx*numcols, 0, 0.001*dx*numrows] # in km
im = pylab.imshow(hr, cmap=pylab.cm.RdBu, extent=image_extent)
pylab.xlabel('Distance (km)', fontsize=12)
pylab.ylabel('Distance (km)', fontsize=12)
# add contour lines with labels
cset = pylab.contour(hr, extent=image_extent)
pylab.clabel(cset, inline=True, fmt='%1.1f', fontsize=10)
# add a color bar on the side
cb = pylab.colorbar(im)
cb.set_label('Water depth (m)', fontsize=12)
# add a title
pylab.title('Simulated inundation')
# Display the plot
pylab.show()
print('Done.')
print('Total run time = '+str(time.time()-start_time)+' seconds.')
The final portion of the code uses the RasterModelGrid ``node_vector_to_raster()`` method along with some Pylab functions to create a color image plus contour plot of the water depth at the end of the run. This part of the code is essentially the same as what we used in the diffusion example.
Example 3: Overland Flow using a DEM
------------------------------------
In the next example, we create a version of the storage-cell overland-flow model that uses a digital elevation model (DEM) for the topography, and has the flow fed by rain rather than by a boundary input. In walking through the code, we'll focus only on those aspects that are new. The code is set up to run for 40 minutes (2400 seconds) of flow, which takes about 78 seconds to execute on a 2.7 GHz Intel Core i7 processor. The complete code listing is below. Output is shown in :ref:`Figure 7 <olflowdem>`.
.. _olflowdem:
.. figure:: images/overland_flow_dem.png
:scale: 40%
:align: center
Figure 7: Output from a model of overland flow run on a DEM. Left: images showing
topography, and water depth at end of run. Right: hydrograph at catchment outlet.
.. code-block:: python
#! /usr/env/python
"""
2D numerical model of shallow-water flow over topography read from a DEM, using
the Bates et al. (2010) algorithm for storage-cell inundation modeling.
Last updated GT May 2014
"""
from landlab.io import read_esri_ascii
import time
import os
import pylab
import numpy as np
def main():
"""
In this simple tutorial example, the main function does all the work:
it sets the parameter values, creates and initializes a grid, sets up
the state variables, runs the main loop, and cleans up.
"""
# INITIALIZE
# User-defined parameter values
dem_name = 'ExampleDEM/west_bijou_gully.asc'
outlet_row = 82
outlet_column = 38
next_to_outlet_row = 81
next_to_outlet_column = 38
n = 0.06 # roughness coefficient (Manning's n)
h_init = 0.001 # initial thin layer of water (m)
g = 9.8 # gravitational acceleration (m/s2)
alpha = 0.2 # time-step factor (ND; from Bates et al., 2010)
run_time = 2400 # duration of run, seconds
rainfall_mmhr = 100 # rainfall rate, in mm/hr
rain_duration = 15*60 # rainfall duration, in seconds
# Derived parameters
rainfall_rate = (rainfall_mmhr/1000.)/3600. # rainfall in m/s
ten_thirds = 10./3. # pre-calculate 10/3 for speed
elapsed_time = 0.0 # total time in simulation
report_interval = 5. # interval to report progress (seconds)
next_report = time.time()+report_interval # next time to report progress
DATA_FILE = os.path.join(os.path.dirname(__file__), dem_name)
# Create and initialize a raster model grid by reading a DEM
print('Reading data from "'+str(DATA_FILE)+'"')
(mg, z) = read_esri_ascii(DATA_FILE)
print('DEM has ' + str(mg.number_of_node_rows) + ' rows, ' +
str(mg.number_of_node_columns) + ' columns, and cell size ' + str(mg.dx) + ' m')
# Modify the grid DEM to set all nodata nodes to inactive boundaries
mg.set_nodata_nodes_to_closed(z, 0) # set nodata nodes to inactive bounds
# Set the open boundary (outlet) cell. We want to remember the ID of the
# outlet node and the ID of the interior node adjacent to it. We'll make
# the outlet node an open boundary.
outlet_node = mg.grid_coords_to_node_id(outlet_row, outlet_column)
node_next_to_outlet = mg.grid_coords_to_node_id(next_to_outlet_row, next_to_outlet_column)
mg.set_fixed_value_boundaries(outlet_node)
# Set up state variables
h = mg.add_zeros('node', 'Water_depth') + h_init # water depth (m)
q = mg.create_active_link_array_zeros() # unit discharge (m2/s)
# Get a list of the core nodes
core_nodes = mg.core_nodes
# To track discharge at the outlet through time, we create initially empty
# lists for time and outlet discharge.
q_outlet = []
t = []
q_outlet.append(0.)
t.append(0.)
outlet_link = mg.active_link_connecting_node_pair(outlet_node, node_next_to_outlet)
# Display a message
print( 'Running ...' )
start_time = time.time()
# RUN
# Main loop
while elapsed_time < run_time:
# Report progress
if time.time()>=next_report:
print('Time = '+str(elapsed_time)+' ('
+str(100.*elapsed_time/run_time)+'%)')
next_report += report_interval
# Calculate time-step size for this iteration (Bates et al., eq 14)
dtmax = alpha*mg.dx/np.sqrt(g*np.amax(h))
# Calculate the effective flow depth at active links. Bates et al. 2010
# recommend using the difference between the highest water-surface
# and the highest bed elevation between each pair of cells.
zmax = mg.max_of_link_end_node_values(z)
w = h+z # water-surface height
wmax = mg.max_of_link_end_node_values(w)
hflow = wmax - zmax
# Calculate water-surface slopes
water_surface_slope = mg.calculate_gradients_at_active_links(w)
# Calculate the unit discharges (Bates et al., eq 11)
q = (q-g*hflow*dtmax*water_surface_slope)/ \
(1.+g*hflow*dtmax*n*n*abs(q)/(hflow**ten_thirds))
# Calculate water-flux divergence at nodes
dqds = mg.calculate_flux_divergence_at_nodes(q)
# Update rainfall rate
if elapsed_time > rain_duration:
rainfall_rate = 0.
# Calculate rate of change of water depth
dhdt = rainfall_rate-dqds
# Second time-step limiter (experimental): make sure you don't allow
# water-depth to go negative
if np.amin(dhdt) < 0.:
shallowing_locations = np.where(dhdt<0.)
time_to_drain = -h[shallowing_locations]/dhdt[shallowing_locations]
dtmax2 = alpha*np.amin(time_to_drain)
dt = np.min([dtmax, dtmax2])
else:
dt = dtmax
# Update the water-depth field
h[core_nodes] = h[core_nodes] + dhdt[core_nodes]*dt
h[outlet_node] = h[node_next_to_outlet]
# Update current time
elapsed_time += dt
# Remember discharge and time
t.append(elapsed_time)
q_outlet.append(q[outlet_link])
# FINALIZE
# Set the elevations of the nodata cells to the minimum active cell
# elevation (convenient for plotting)
z[np.where(z<=0.)] = 9999 # temporarily change their elevs ...
zmin = np.amin(z) # ... so we can find the minimum ...
z[np.where(z==9999)] = zmin # ... and assign them this value.
# Get a 2D array version of the water depths and elevations
hr = mg.node_vector_to_raster(h)
zr = mg.node_vector_to_raster(z)
# Clear previous plots
pylab.figure(1)
pylab.close()
pylab.figure(2)
pylab.close()
# Plot discharge vs. time
pylab.figure(1)
pylab.plot(np.array(t), np.array(q_outlet)*mg.dx)
pylab.xlabel('Time (s)')
pylab.ylabel('Q (m3/s)')
pylab.title('Outlet discharge')
# Plot topography
pylab.figure(2)
pylab.subplot(121)
im = pylab.imshow(zr, cmap=pylab.cm.RdBu,
extent=[0, mg.number_of_node_columns * mg.dx,
0, mg.number_of_node_rows * mg.dx])
cb = pylab.colorbar(im)
cb.set_label('Elevation (m)', fontsize=12)
pylab.title('Topography')
# Plot water depth
pylab.subplot(122)
im2 = pylab.imshow(hr, cmap=pylab.cm.RdBu,
extent=[0, mg.number_of_node_columns * mg.dx,
0, mg.number_of_node_rows * mg.dx])
pylab.clim(0, 0.25)
cb = pylab.colorbar(im2)
cb.set_label('Water depth (m)', fontsize=12)
pylab.title('Water depth')
# Display the plots
pylab.show()
print('Done.')
print('Total run time = '+str(time.time()-start_time)+' seconds.')
if __name__ == "__main__":
main()
Loading modules
>>>>>>>>>>>>>>>
.. code-block:: python
from landlab.io import read_esri_ascii
import time
import os
import pylab
import numpy as np
In order to import the DEM, we will use Landlab's ``read_esri_ascii`` function, so we need to import this. We also need the ``time`` module for timekeeping, ``os`` for manipulating path names, ``pylab`` for plotting, and ``numpy`` for numerical operations.
User-defined variables
>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# User-defined parameter values
dem_name = 'ExampleDEM/west_bijou_gully.asc'
outlet_row = 82
outlet_column = 38
next_to_outlet_row = 81
next_to_outlet_column = 38
n = 0.06 # roughness coefficient (Manning's n)
h_init = 0.001 # initial thin layer of water (m)
g = 9.8 # gravitational acceleration (m/s2)
alpha = 0.2 # time-step factor (ND; from Bates et al., 2010)
run_time = 2400 # duration of run, seconds
rainfall_mmhr = 100 # rainfall rate, in mm/hr
rain_duration = 15*60 # rainfall duration, in seconds
We will obtain topography from a 3-m resolution digital elevation model (DEM) of a small gully watershed in the West Bijou Creek drainage basin, east-central Colorado, USA. The drainage area of this catchment is about one hectare. The topography derives from airborne lidar data. The DEM is contained in an ArcInfo-format ascii file called *west_bijou_gully.asc*, located in the *ExampleDEM* folder.
In this example, we will allow flow through a single outlet cell, which we need to flag as a fixed-value boundary. We will also monitor discharge at the outlet. To accomplish these tasks, we need the row and column of the cell that will be used as the outlet and the cell next to it.
Our run will apply water as rainfall, with a rate given by ``rainfall_mmhr`` and a duration given by ``rain_duration``. In fact, in this simple model, we won't allow any infiltration, so the rainfall rate is actually a runoff rate.
Derived parameters
>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Derived parameters
rainfall_rate = (rainfall_mmhr/1000.)/3600. # rainfall in m/s
ten_thirds = 10./3. # pre-calculate 10/3 for speed
elapsed_time = 0.0 # total time in simulation
report_interval = 5. # interval to report progress (seconds)
next_report = time.time()+report_interval # next time to report progress
DATA_FILE = os.path.join(os.path.dirname(__file__), dem_name)
In this block of code, we convert the rainfall rate from millimeters per hour to meters per second. We also find the full path name of the input DEM by combining the pathname of the python code file (which is stored in ``__file__``) with the specified DEM file name. We take advantage of the ``dirname`` and ``join`` functions in the OS module.
Reading and initializing the DEM
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Create and initialize a raster model grid by reading a DEM
print('Reading data from "'+str(DATA_FILE)+'"')
(mg, z) = read_esri_ascii(DATA_FILE)
print('DEM has ' + str(mg.number_of_node_rows) + ' rows, ' +
str(mg.number_of_node_columns) + ' columns, and cell size ' + str(mg.dx) + ' m')
# Modify the grid DEM to set all nodata nodes to inactive boundaries
mg.set_nodata_nodes_to_closed(z, 0) # set nodata nodes to inactive bounds
Landlab's IO module allows us to read an ArcInfo ascii-format DEM with a call to the ``read_esri_ascii()`` method. The function creates and returns a ``RasterModelGrid`` of the correct size and resolution, as well as a Numpy array of node elevation values. In this example, we know that the DEM contains elevations for a small watershed; nodes outside the watershed have a no-data value of zero. We don't want any flow to cross the watershed perimeter except at a single outlet cell. The call to the ModelGrid method ``set_nodata_nodes_to_closed()`` accomplishes this by identifying all nodes for which the corresponding value in ``z`` equals the specified no-data value of zero.
Setting up the watershed outlet
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# Set the open boundary (outlet) cell. We want to remember the ID of the
# outlet node and the ID of the interior node adjacent to it. We'll make
# the outlet node an open boundary.
outlet_node = mg.grid_coords_to_node_id(outlet_row, outlet_column)
node_next_to_outlet = mg.grid_coords_to_node_id(next_to_outlet_row,
next_to_outlet_column)
mg.set_fixed_value_boundaries(outlet_node)
We will handle the outlet by keeping the water-surface slope the same as the bed-surface slope along the link that leads to the outlet boundary node. To accomplish this, the first thing we need to do is find the ID of the outlet node and the core node adjacent to it. We already know what the row and column numbers of these nodes are; to obtain the corresponding node ID, we use ModelGrid's ``grid_coords_to_node_id`` method. We then convert the outlet node to a fixed-value (i.e., open) boundary with the ``set_fixed_value_boundaries`` method. (Note that in doing this, we've converted what was a core node into a fixed boundary; had we converted a no-data node, we would end up with a waterfall at the outlet because the no-data nodes all have zero elevation, while the interior nodes all have elevations above 1600 m).
Preparing to track discharge at the outlet
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. code-block:: python
# To track discharge at the outlet through time, we create initially empty
# lists for time and outlet discharge.
q_outlet = []
t = []
q_outlet.append(0.)
t.append(0.)
outlet_link = mg.active_link_connecting_node_pair(outlet_node,
node_next_to_outlet)
For this model, it would be nice to track discharge through time at the watershed outlet. To do this, we create two new lists: one for the time corresponding to each iteration, and one for the outlet discharge. Using lists will be slightly slower than using pre-defined Numpy arrays, but avoids forcing us to guess how many iterations there will be (recall that time-step size depends on the flow conditions in any given iteration). We append zeros to each list to represent the starting condition. To find out which active link represents the watershed outlet, we use ModelGrid's ``active_link_connecting_node_pair()`` method. This method takes a pair of node IDs as arguments. If the nodes are connected by an active link, it returns the ID of that active link; otherwise, it returns ``ModelGrid.BAD_INDEX_VALUE``.
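If we wanted to be defensive about a mistyped outlet location, a guard along the following lines would catch the problem early (this is a sketch, not part of the original listing; it assumes the constant is reachable through the grid instance):

.. code-block:: python

    # Hypothetical guard: confirm the two nodes really do share an active
    # link before trusting the returned ID.
    outlet_link = mg.active_link_connecting_node_pair(outlet_node,
                                                      node_next_to_outlet)
    if outlet_link == mg.BAD_INDEX_VALUE:
        raise ValueError('outlet and its neighbor share no active link')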
Main loop
>>>>>>>>>
.. code-block:: python
# Update rainfall rate
if elapsed_time > rain_duration:
rainfall_rate = 0.
# Calculate rate of change of water depth
dhdt = rainfall_rate-dqds
Most of the main loop is identical to what we saw in Example 2, and here we will only list the parts that are new or different. One difference is that we now have a source term that represents rainfall and runoff. The code listed above sets the rainfall rate to zero when the elapsed time is greater than the rainfall duration. It also adds ``rainfall_rate`` as a source term when computing :math:`dh/dt`.
.. code-block:: python
# Update the water-depth field
h[core_nodes] = h[core_nodes] + dhdt[core_nodes]*dt
h[outlet_node] = h[node_next_to_outlet]
After updating water depth values for the core nodes, we also need to update the water depth at the outlet boundary so that it matches the depth at the adjacent node.
.. code-block:: python
# Remember discharge and time
t.append(elapsed_time)
q_outlet.append(q[outlet_link])
The last few lines in the main loop keep track of discharge at the outlet by appending the current time and discharge to their respective lists.
Plotting the result
>>>>>>>>>>>>>>>>>>>
The plotting section is similar to what we saw in the previous two examples. One difference is that we now use two figures: one for the topography and water depth, and one for outlet discharge over time. We also use Pylab's sub-plot capability to place images of topography and water depth side by side.
Using a Different Grid Type
---------------------------
As noted earlier, Landlab provides several different types of grid. Available grids (as of this writing) are listed in the table below. Grids are designed using Python classes, with
more specialized grids inheriting properties and behavior from more general types. The
class hierarchy is given in the second column, **Inherits from**.
======================= ======================= ================== ================
Grid type               Inherits from           Node arrangement   Cell geometry
======================= ======================= ================== ================
``RasterModelGrid``     ``ModelGrid``           raster             squares
``VoronoiDelaunayGrid`` ``ModelGrid``           Delaunay triangles Voronoi polygons
``HexModelGrid``        ``VoronoiDelaunayGrid`` triangular         hexagons
``RadialModelGrid``     ``VoronoiDelaunayGrid`` concentric         Voronoi polygons
======================= ======================= ================== ================
In a *VoronoiDelaunay* grid, a set of node coordinates is given as an initial condition. Landlab then forms a
Delaunay triangulation, so that the links between nodes are the edges of the triangles, and the cells are Voronoi polygons. A *HexModelGrid* is a special type of *VoronoiDelaunay* grid in which the Voronoi cells happen to be regular hexagons. In a *RadialModelGrid*, nodes
are created in concentric circles and then connected to form a Delaunay triangulation (again with Voronoi polygons as cells). The next example illustrates the use of a
*RadialModelGrid*.
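As a quick sketch of how the non-raster grids are created (the constructor arguments shown here are indicative only; check the Landlab API of your version for the exact signatures):

.. code-block:: python

    import numpy as np
    from landlab import HexModelGrid, VoronoiDelaunayGrid

    # VoronoiDelaunayGrid: hand in any scatter of node coordinates; Landlab
    # builds the Delaunay triangulation (links) and Voronoi polygons (cells).
    x = np.random.rand(100)
    y = np.random.rand(100)
    vdg = VoronoiDelaunayGrid(x, y)

    # HexModelGrid: nodes on a triangular lattice, cells are regular hexagons.
    hmg = HexModelGrid(5, 4)  # 5 rows of nodes, 4 nodes in the base row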
Hillslope diffusion with a radial grid
--------------------------------------
Suppose that we wanted to model the long-term evolution, via hillslope soil creep, of a volcanic island. A radial, semi-structured arrangement of grid nodes might be a good solution. To start, we'll look at the highly idealized case of a perfectly circular island that is subject to uniform baselevel lowering along its edges (as if it were shaped like a gigantic undersea column, and sea-level were steadily falling). We can implement such a model simply by making a few small changes to our previous diffusion-model code.
A radial model grid is defined by specifying a number of concentric "shells" of nodes with a given radial spacing. We'll change the portion of the original (raster) diffusion code that sets up the grid geometry to the following:
.. code-block:: python
# User-defined parameter values
num_shells=10 # number of radial "shells" in the grid
#numcols = 30 # not needed for a radial model grid
dr = 10.0 # grid cell spacing
Note that we have changed ``dx`` to ``dr``; ``dr`` represents the distance between concentric "shells" of nodes. To create a RadialModelGrid instead of a RasterModelGrid, we simply replace the name of the object ``RasterModelGrid`` with ``RadialModelGrid``.
.. code-block:: python
# Create and initialize a radial model grid
mg = RadialModelGrid(num_shells, dr)
Finally, because our grid is now no longer a simple raster, we need to modify our plotting code. Here we'll replace the original plotting commands
with the following:
.. code-block:: python
# Plot the points, colored by elevation
import numpy
maxelev = numpy.amax(z)
for i in range(mg.number_of_nodes):
mycolor = str(z[i]/maxelev)
pylab.plot(mg.node_x[i], mg.node_y[i], 'o', color=mycolor, ms=10)
mg.display_grid()
# Plot the points from the side, with analytical solution
pylab.figure(3)
L = num_shells*dr
xa = numpy.arange(-L, L+dr, dr)
z_analytical = (uplift_rate/(4*kd))*(L*L-xa*xa)
pylab.plot(mg.node_x, z, 'o')
pylab.plot(xa, z_analytical, 'r-')
pylab.xlabel('Distance from center (m)')
pylab.ylabel('Height (m)')
pylab.show()
The result of our run is shown below.
.. figure:: images/radial_example.png
:figwidth: 80 %
:scale: 50 %
:align: center
Figure 8: Hillslope diffusion model implemented with a radial model grid. (a) Nodes and links. Green nodes are active interior points, and red nodes are open boundaries. Active links in green; inactive links in black. Node gray shading is proportional to height. (b) Voronoi diagram, highlighting cells. Blue dots are nodes, and green circles are corners (cell vertices). Lines are faces (Voronoi polygon edges, sometimes called "Voronoi ridges"). Dashed lines show the orientation of undefined Voronoi edges. (c) Side view of the model, showing nodes (blue dots) in comparison with the analytical solution (red curve). All axes in meters.
Where to go next?
=================
All of the codes in these exercises are available in the Landlab distribution, under the folder *docs/model_grid_guide*.
.. [1] Bates, P., M. Horritt, and T. Fewtrell (2010), A simple inertial formulation of the shallow water equations for efficient two-dimensional flood inundation modelling, Journal of Hydrology, 387(1), 33–45.
| 50.62635 | 1,218 | 0.71651 |
ecb50e43c12139f920f81604a578e70eee627262 | 8,613 | rst | reStructuredText | content/articles/algorithme/tri/tri_insertion/tri_insertion.rst | napnac/napnac.fr | 2df4e2428c7a6be9e5a4f0a3cae7d89e78f8a368 | [
"MIT"
] | 1 | 2017-10-18T17:19:31.000Z | 2017-10-18T17:19:31.000Z | content/articles/algorithme/tri/tri_insertion/tri_insertion.rst | napnac/napnac.fr | 2df4e2428c7a6be9e5a4f0a3cae7d89e78f8a368 | [
"MIT"
] | 11 | 2019-03-21T06:17:36.000Z | 2022-02-20T12:12:55.000Z | content/articles/algorithme/tri/tri_insertion/tri_insertion.rst | napnac/napnac.fr | 2df4e2428c7a6be9e5a4f0a3cae7d89e78f8a368 | [
"MIT"
] | 1 | 2015-10-12T18:00:46.000Z | 2015-10-12T18:00:46.000Z | Introduction
------------
Insertion sort is a simple and intuitive comparison-based sorting
algorithm, though still with a complexity of :math:`O(N^2)`. You have
probably already used it without even realizing it: when sorting playing
cards, for example. It is a
`stable <https://en.wikipedia.org/wiki/Sorting_algorithm#Stability>`__,
`in-place <https://en.wikipedia.org/wiki/In-place_algorithm>`__ sorting
algorithm, and in practice the fastest one on small inputs.
Principle of the algorithm
--------------------------
The principle of insertion sort is to sort the elements of the array the
way you would sort cards:
- Take your shuffled cards in your hand.
- Form two sets of cards: one is the set of sorted cards, the other
holds the remaining (unsorted) cards.
- One by one, take a card from the unsorted set and insert it at its
correct place in the sorted set.
- Repeat this operation as long as there are cards left in the unsorted
set.
Example
-------
As an example, let's take the following sequence of numbers: 9, 2, 7, 1,
which we want to sort in ascending order with the insertion sort
algorithm:
*1st round*:
9 \| **2**, 7, 1 -> on the left is the sorted part of the array (the
first element is considered sorted, since it is alone in that part), and
on the right is the unsorted part. We take the first element of the
unsorted part, 2, and insert it at its place in the sorted part, that
is, to the left of 9.
*2nd round*:
2, 9 \| **7**, 1 -> we take 7 and place it between 2 and 9 in the sorted
part.
*3rd round*:
2, 7, 9 \| **1** -> we continue with 1, which we place at the beginning
of the first part.
1, 2, 7, 9
To insert an element into the sorted part, we scan from right to left as
long as the elements are greater than the one we want to insert.
To sum up the idea of the algorithm:
.. figure:: /img/algo/tri/tri_insertion/exemple_tri.png
:alt: Example of insertion sort
Example of insertion sort
The green part of the array is the sorted part, the element in blue is
the next unsorted element to place, and the white part is the unsorted
part.
Pseudo-code
-----------
.. code:: nohighlight
triInsertion:
For each unsorted element of the array
Shift to the right, within the sorted part, the elements that are
greater than the one we want to insert
Place our element at its position in the hole created this way
Complexity
----------
The insertion sort algorithm has a complexity of :math:`O(N^2)`:
- The first loop runs :math:`N - 1` times; we will simply call it
:math:`N` times, since the :math:`- 1` hardly matters.
- Shifting the elements of the sorted part takes :math:`i` steps (with
:math:`i` ranging from 0 to :math:`N`).
In the worst case we perform :math:`N^2` steps, so insertion sort has a
time complexity of :math:`O(N^2)`.
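Summing the shifts over all insertions makes the quadratic bound explicit:

.. math::

    \sum_{i=0}^{N-1} i = \frac{N(N-1)}{2} = O(N^2)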
Implementation
--------------
The C implementation of insertion sort:
[[secret="tri_insertion.c"]]
.. code:: c
#include <stdio.h>
#define TAILLE_MAX 1000
int tableau[TAILLE_MAX];
int taille;
void triInsertion(void)
{
int iTab;
for(iTab = 0; iTab < taille; ++iTab) {
int aInserer;
int position;
aInserer = tableau[iTab];
position = iTab;
while(position > 0 && aInserer < tableau[position - 1]) {
tableau[position] = tableau[position - 1];
--position;
}
tableau[position] = aInserer;
}
}
int main(void)
{
int iTab;
scanf("%d\n", &taille);
for(iTab = 0; iTab < taille; ++iTab)
scanf("%d ", &tableau[iTab]);
triInsertion();
for(iTab = 0; iTab < taille; ++iTab)
printf("%d ", tableau[iTab]);
printf("\n");
return 0;
}
[[/secret]]
The input to the sort:
.. code:: nohighlight
4
9 2 7 1
And as output, our sorted array:
.. code:: nohighlight
1 2 7 9
Improvements and variants
-------------------------
Using linked lists
~~~~~~~~~~~~~~~~~~
Insertion sort has to shift parts of the array many times to insert a
single element, which is a heavy and avoidable operation: we can use
`linked lists </algo/structure/liste_chainee.html>`__ to get around this
problem. Linked lists let us insert the element simply and faster;
however, since we still have to find where to place that element, the
complexity remains quadratic.
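As a minimal sketch (the node type and helper function below are illustrative, not code from this article), inserting a value into an already-sorted singly linked list rewires two pointers instead of shifting elements:

.. code:: c

    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Insert `value` into the sorted list starting at `head` and return
       the new head. No element is shifted: we only rewire two pointers. */
    struct node *insert_sorted(struct node *head, int value)
    {
        struct node *fresh = malloc(sizeof *fresh);
        if (fresh == NULL)
            return head; /* allocation failed, leave the list unchanged */
        fresh->value = value;

        if (head == NULL || value < head->value) {
            fresh->next = head; /* new smallest element becomes the head */
            return fresh;
        }

        struct node *cur = head;
        while (cur->next != NULL && cur->next->value <= value)
            cur = cur->next; /* walk to the insertion point */

        fresh->next = cur->next;
        cur->next = fresh;
        return head;
    }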
Shell sort
~~~~~~~~~~
Insertion sort is a very efficient algorithm on nearly sorted inputs,
and we can use this interesting property to improve it. Shell sort
(named after its inventor, Donald L. Shell) swaps values of the array
that lie a specific gap apart, so as to leave the array, in most cases,
almost sorted. Once we have this rearranged array, we apply our classic
insertion sort to it, and that sort will be much faster thanks to the
first step.
To compute the gap, we use the formula:
| :math:`Gap(N) = 3 \times Gap(N - 1) + 1`
| with :math:`Gap(0) = 0`
For example, say we want to sort the sequence of numbers 5, 8, 2, 9, 1, 3
in ascending order:
We compute the gaps as long as the result is smaller than the size of
the array.
| :math:`Gap(0) = 0`
| :math:`Gap(1) = 3 \times Gap(0) + 1 = 3 \times 0 + 1 = 1`
| :math:`Gap(2) = 3 \times Gap(1) + 1 = 3 \times 1 + 1 = 4`
| :math:`Gap(3) = 3 \times Gap(2) + 1 = 3 \times 4 + 1 = 13`
So we have two usable gaps: 1 and 4 (13 being larger than the number of
elements in the array). However, applying a gap of 1 amounts to a normal
insertion sort, so in this example we will only use the gap of 4.
We then compare the elements of the array that lie four positions apart:
| **5**, 8, 2, 9, **1**, 3 -> 5 is greater than 1, so we swap them.
| 1, **8**, 2, 9, 5, **3** -> 8 is greater than 3, so we swap them.
| 1, 3, 2, 9, 5, 8 -> no more swaps are possible with a gap of 4.
We repeat this operation as long as gaps remain; in our case, this is
the end of the first step of the sort. Our array is now reorganized and
nearly sorted, so we can apply an insertion sort to it.
Unfortunately, Shell sort's worst case is still not
:math:`O(N \log _2 N)` (with this gap sequence it is on the order of
:math:`N^{3/2}`), but it is a good improvement in general.
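Here is a compact sketch of the two steps in C, reusing the ``tableau`` and ``taille`` globals from the listing above (illustrative code; the final pass with a gap of 1 is the ordinary insertion sort):

.. code:: c

    void triShell(void)
    {
        /* Build the largest gap of the sequence 1, 4, 13, 40, ... that
           is smaller than the array size. */
        int gap = 0;
        while (3 * gap + 1 < taille)
            gap = 3 * gap + 1;

        /* Gapped insertion sorts with shrinking gaps; the gap == 1 pass
           is a plain insertion sort on an almost-sorted array. */
        for (; gap > 0; gap = (gap - 1) / 3) {
            int i;
            for (i = gap; i < taille; ++i) {
                int aInserer = tableau[i];
                int position = i;
                while (position >= gap && aInserer < tableau[position - gap]) {
                    tableau[position] = tableau[position - gap];
                    position -= gap;
                }
                tableau[position] = aInserer;
            }
        }
    }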
Binary search
~~~~~~~~~~~~~
Insertion sort relies on the array being split into two parts, one
sorted (the one we care about) and the other unsorted. We can improve
the search for the position where our element should be inserted by
using `binary search </algo/recherche/dichotomie.html>`__ (an efficient
search algorithm over an already-sorted collection, which is exactly our
situation).
This search uses the `divide and conquer
<https://en.wikipedia.org/wiki/Divide_and_conquer_algorithms>`__ method:
we look for the element's position using intervals. Our starting
interval is *start of sorted part* -> *end of sorted part*:
- We test whether the element located in the middle of our interval is
smaller than the element we want to insert.
- If it is, we repeat the operation, but this time with the interval
*middle of old interval* -> *end of old interval*.
- Otherwise, we repeat with the interval *start of old interval* ->
*middle of old interval*.
Once the interval contains only a single element, we have found the
position where the element should be inserted. Thanks to this
improvement, the number of comparisons performed by insertion sort drops
to :math:`O(N \log _2 N)`; note, however, that shifting the elements
still makes the overall worst case quadratic.
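A sketch of the position search in C (an illustrative helper reusing the same ``tableau`` global; choosing to insert after equal keys keeps the sort stable):

.. code:: c

    /* Return the index where `value` should be inserted so that
       tableau[0..len-1] stays sorted. */
    int positionInsertion(int value, int len)
    {
        int lo = 0;
        int hi = len; /* search interval is [lo, hi) */
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (tableau[mid] <= value)
                lo = mid + 1; /* insertion point is to the right of mid */
            else
                hi = mid;     /* insertion point is at mid or to its left */
        }
        return lo;
    }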
*I have only sketched the principle of binary search here; I cover it at
much greater length in my article on the subject, so if you did not
follow everything, I recommend reading that article to properly grasp
this fundamental algorithmic concept.*
Conclusion
----------
The insertion sort algorithm is simple and fairly intuitive, even though
it has quadratic time complexity. This sorting algorithm remains widely
used because it runs in near-linear time on already-sorted inputs, and
is very efficient on small inputs in general (in that case, it often
outperforms sorting algorithms in :math:`O(N \log _2 N)`).
| 32.13806 | 80 | 0.701498 |
5f32eb8d4828d727b1538026d3b66f68b039cd75 | 4,001 | rst | reStructuredText | docs/source/example_workflows/exploring_roof_pv_panels_potential.rst | ESDLMapEditorESSIM/ESDLMapEditorDocumentation | aefd8d557092dd5d83910abf425f272c315c055d | [
"Apache-2.0"
] | null | null | null | docs/source/example_workflows/exploring_roof_pv_panels_potential.rst | ESDLMapEditorESSIM/ESDLMapEditorDocumentation | aefd8d557092dd5d83910abf425f272c315c055d | [
"Apache-2.0"
] | null | null | null | docs/source/example_workflows/exploring_roof_pv_panels_potential.rst | ESDLMapEditorESSIM/ESDLMapEditorDocumentation | aefd8d557092dd5d83910abf425f272c315c055d | [
"Apache-2.0"
] | null | null | null | Exploring rooftop PV panels potential
=====================================
One of the applications provided by the PICO model developed by Geodan is to gain insight into the potential for rooftop
PV panels. Based on internal calculations, the model can be queried for the potential on roofs of five different orientations:
* Roofs facing north
* Roofs facing east
* Roofs facing south
* Roofs facing west
* Flat roofs
There are two possible ways to use this service:
* Via the 'External services' menu
* Via the context menu of an area on the map
External services menu
**********************
This workflow can be divided into the following steps:
1. Go to the 'Services' menu and select 'External ESDL Services'. Select 'Get PICO rooftop solar potential' from the list.
.. image:: images/esdl_services_pico_rooftop_solar_potential.png
:alt: Select 'external ESDL services', and 'get PICO rooftop solar potential'
2. The service can be queried for 3 different geographical scopes:
.. image:: images/pico_rooftop_solar_potential_scope.png
:alt: 3 different geographical scopes
3. As an example, select 'Municipality' and choose 'Hengelo'. Press the 'Run Service' button. The following result will load.
.. image:: images/pico_rooftop_solar_potential_municipality.png
:alt: Results for the municipality of Hengelo
4. Continue :ref:`here <Exploring potential values>`
Via the area context menu
*************************
1. If you have an already loaded energy system (and the area attributes are formatted in the right way), you can right-click an area and select 'Query Rooftop PV Potential'.
.. image:: images/context_menu_area_query_potential.png
:alt: Context menu for an area
2. A dialog opens where you can choose whether to query the potential only for the selected area or for all areas in the energy system.
.. image:: images/query_rooftop_pv_potential.png
:alt: Query rooftop pv potential for area
3. If you choose to query all areas, the result looks like this:
.. image:: images/rooftop_potential_neighbourhood.png
:alt: Rooftop PV potential for all areas
4. Continue :ref:`here <Exploring potential values>`
Exploring potential values
**************************
1. When you right-click a building on the map, a context menu appears.
.. image:: images/building_context_menu.png
:alt: Building context menu
2. Click 'Building ESDL contents' to open the building editor.
.. image:: images/five_types_of_rooftop_panels_potential.png
:alt: building editor with 5 different types of rooftop PV potential
3. Right-click one of the icons for the solar potential (SP).
.. image:: images/right_click_solar_potential.png
:alt:
4. Click on 'Edit' to open the ESDL browser and inspect the information.
.. image:: images/value_of_solar_potential.png
:alt: Inspect values of solar potential
Converting potential to actual installations
********************************************
The last functionality is to convert (part of) the potential to installed capacity of PV installations.
1. Right-click an area and select 'Use Rooftop PV Potential'.
.. image:: images/context_menu_area_use_potential.png
:alt: Context menu area use potential
2. At the moment you can fill in a single percentage for all orientations and apply it to this area or to all areas.
.. image:: images/use_rooftop_pv_potential.png
:alt: Use Rooftop PV Potential
3. If you open the building editor again, the result looks like this. By right-clicking an icon for a PV installation and selecting 'Edit' from the menu, its values can be inspected.
.. image:: images/building_editor_installed_pv_installations.png
:alt: Installed PV installations
.. note::
ESDL: The produced energy in kWh is connected as a SingleValue profile to the OutPort of the PVInstallation asset.
Other models, like for example the Energy Transition Model, know how to handle this information and are able to
take these values into account. | 35.096491 | 182 | 0.738565 |
b9bd0054666d3b597153a379ee9655a9475fee9c | 3,472 | rst | reStructuredText | docs/baremetal-m3/m3_add_makefile_and_lds.rst | thomas-coding/baremetal | 9e65c10ca56328a6aae6dc8e4ed18faa7cd7ef9a | [
"MIT"
] | null | null | null | docs/baremetal-m3/m3_add_makefile_and_lds.rst | thomas-coding/baremetal | 9e65c10ca56328a6aae6dc8e4ed18faa7cd7ef9a | [
"MIT"
] | null | null | null | docs/baremetal-m3/m3_add_makefile_and_lds.rst | thomas-coding/baremetal | 9e65c10ca56328a6aae6dc8e4ed18faa7cd7ef9a | [
"MIT"
] | null | null | null | 3. Makefile and Linker Script
==========================================
The code directory to be modified is shown below. We now build with a Makefile: build.sh selects the toolchain and invokes make, while runqemu.sh and rungdb.sh are used to run the ELF and to debug it.
::
root@iZj6ccyu2ndokc2ujnox0tZ:~/workspace/code/baremetal/baremetal-m3# tree
.
├── bm.lds
├── build.sh
├── Makefile
├── README.md
├── rungdb.sh
├── runqemu.sh
└── src
├── board
├── core
│ └── start.S
└── test
3.1 Makefile
-------------------------------------------
Reference:
https://makefile-study.readthedocs.io/zh_CN/latest/
.. code-block:: Makefile
# ------------------------
# Generic Makefile
# ------------------------
# Project name
Target = target
ELF = ${Target}.elf
CPU = cortex-m3
TARGET_ARCH = -mcpu=$(CPU)
# Compile command and flag
CC = arm-none-eabi-gcc
CFLAG = -Wall -mthumb
ASFLAGS =
# Linker command and flag
LINKER = arm-none-eabi-gcc
LDFLAGS = -nostdlib -e 0x0 -Ttext=0x0 -mcpu=cortex-m3
DUMP = arm-none-eabi-objdump
OBJCOPY = arm-none-eabi-objcopy
# Dir
SRC_DIR = src
OBJ_DIR = obj
BIN_DIR = output
# Source directorys, find all source directory ,like src/board src/common
SRC_DIRS = $(shell find src -maxdepth 3 -type d)
# OBJ_DIRS, change src to obj, match the source directorys, like obj/board obj/common
OBJ_DIRS := $(foreach dir,$(SRC_DIRS),$(subst src,obj,$(dir)))
# INCLUDES, add source directorys to include, like -Isrc/board -Isrc/common
INCLUDES = $(foreach dir, $(SRC_DIRS),-I$(dir))
# Source files, c srouce files and asmeble source files. like src/board/test.c src/board/test.S
C_SRC += $(foreach dir,$(SRC_DIRS),$(wildcard $(dir)/*.c))
S_SRC := $(foreach dir,$(SRC_DIRS),$(wildcard $(dir)/*.S))
# OBJ files, object files. like obj/board/test.o obj/board/test.o
OBJ_S_FILES := $(foreach file,$(S_SRC),$(patsubst %.S,%.o,$(subst src,obj,$(file))))
OBJ_C_FILES := $(foreach file,$(C_SRC),$(patsubst %.c,%.o,$(subst src,obj,$(file))))
OBJ_FILES := $(OBJ_S_FILES) $(OBJ_C_FILES)
# 1. Create obj directorys and bin directory
# 2. Comple all OBJ_FILES, from .c .S to .o
# 3. Link all .o to binary target
$(BIN_DIR)/$(ELF) : $(OBJ_DIRS) $(BIN_DIR) $(OBJ_FILES)
$(LINKER) $(LDFLAGS) -o $@ $(OBJ_FILES)
$(DUMP) -xD $@ > $(BIN_DIR)/$(Target).asm
$(OBJCOPY) -O binary $@ $(BIN_DIR)/$(Target).bin
xxd $(BIN_DIR)/$(Target).bin > $(BIN_DIR)/$(Target).hex
@echo "Linking complete!"
# Compile .c to .o
obj/%.o : src/%.c
@echo Compiling $< to $@
$(CC) $(CFLAG) $(INCLUDES) $(TARGET_ARCH) -c $< -o $@
# Compile .S to .o
obj/%.o : src/%.S
@echo $@ Compiling $< to $@
$(CC) $(ASFLAGS) -c $(TARGET_ARCH) $< -o $@
PHONY: clean
clean :
rm -rf $(BIN_DIR) $(OBJ_DIR)
@echo "Cleanup complete!"
$(OBJ_DIRS):
mkdir -p $@
$(BIN_DIR):
mkdir -p $@
.. note::
The basic idea of the Makefile is to find all the .c and .S files under the src directory, compile each of them into a .o file, and then link the .o files into the ELF.
3.2 Linker Script
-------------------------------------------
| Previously, at link time, we passed the options -e 0x0 -Ttext=0x0 directly to tell gcc the address of the code section and the entry point. Now we implement this part with a linker script.
| Create a new file named bm.lds.
::
__RAM_BASE = 0x0;
__RAM_SIZE = 0x10000;
MEMORY
{
RAM (rwx) : ORIGIN = __RAM_BASE, LENGTH = __RAM_SIZE
}
ENTRY(Reset_Handler)
SECTIONS
{
.text :
{
*(.text*)
} > RAM
}
.. code-block:: Makefile
#LDFLAGS = -nostdlib -e 0x0 -Ttext=0x0 -mcpu=cortex-m3
LDFLAGS = -nostdlib -mcpu=cortex-m3 -T bm.lds
.. note::
There is only one section: the code section is placed in RAM. Note that ENTRY defines the entry point, so Reset_Handler must be declared global in start.S (.globl Reset_Handler).
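A minimal sketch of the matching declaration in ``start.S`` (the handler body below is only a placeholder for illustration, not the project's actual startup code):

.. code-block:: asm

    .syntax unified
    .thumb

    .globl Reset_Handler            /* export so ENTRY(Reset_Handler) resolves */
    .type  Reset_Handler, %function
    Reset_Handler:
        b   .                       /* placeholder: loop forever */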
With the current code layout, building and debugging are both managed through shell commands, which is quite convenient:
::
./build.sh a
./runqemu.sh --gdb
./rungdb.sh
| 21.836478 | 98 | 0.612039 |
920a28376c6494afb97cd40a57ad18b66192ab82 | 303 | rst | reStructuredText | services/fb_log_fromsrv/README.rst | FirebirdSQL/saturnin-core | bf6c67acdf70a905da51651041d5ddf2fed0c59f | [
"MIT"
] | 1 | 2021-03-04T21:50:19.000Z | 2021-03-04T21:50:19.000Z | services/fb_log_fromsrv/README.rst | FirebirdSQL/saturnin-core | bf6c67acdf70a905da51651041d5ddf2fed0c59f | [
"MIT"
] | null | null | null | services/fb_log_fromsrv/README.rst | FirebirdSQL/saturnin-core | bf6c67acdf70a905da51651041d5ddf2fed0c59f | [
"MIT"
] | null | null | null | ==========================================
Saturnin firebird-log-fromsrv microservice
==========================================
Firebird-log-fromsrv is a Saturnin DATA_PROVIDER microservice that fetches the Firebird log
from the Firebird server via services and sends it as blocks of text to the output data pipe.
| 43.285714 | 87 | 0.60066 |
5fcc03d8eeb348e480a0d33cca426b450ab7dfb6 | 1,689 | rst | reStructuredText | docs/api/casalith.rst | yohei99/casadocs | 9ff53c08d042ac5e5f580cc049de48378b7bd404 | [
"Apache-2.0"
] | 6 | 2020-07-31T12:43:58.000Z | 2022-03-11T22:01:57.000Z | docs/api/casalith.rst | yohei99/casadocs | 9ff53c08d042ac5e5f580cc049de48378b7bd404 | [
"Apache-2.0"
] | 15 | 2021-01-20T03:54:05.000Z | 2022-03-21T19:15:33.000Z | docs/api/casalith.rst | yohei99/casadocs | 9ff53c08d042ac5e5f580cc049de48378b7bd404 | [
"Apache-2.0"
] | 8 | 2020-10-16T06:34:05.000Z | 2021-12-09T07:32:25.000Z | casalith
====================
CASA monolithic environment bundling Python and library dependencies into a single download package.
.. currentmodule:: casalith
tasks
^^^^^
A few remaining tasks are found only in the monolithic environment.
.. automodsumm:: casalith
:toctree: tt
:nosignatures:
:functions-only:
executables
^^^^^^^^^^^
The following executable applications are located in the <casa release>/bin directory of the expanded monolithic CASA tarball:
.. data:: python3(v3.6.7)
.. data:: pip3(v9.0.1)
.. data:: 2to3
.. data:: casa
.. data:: mpicasa
.. data:: casaviewer
.. data:: buildmytasks
python libraries
^^^^^^^^^^^^^^^^
The following third-party libraries are included in the Python distribution of monolithic CASA and are available as imports:
.. data:: libraries(attrs==19.3.0, backcall==0.1.0, certifi==2020.12.5, cycler==0.10.0, decorator==4.4.2, grpcio==1.29.0, importlib-metadata==1.6.0, ipython==7.15.0, ipython-genutils==0.2.0, jedi==0.17.0, kiwisolver==1.2.0, matplotlib==3.2.1, more-itertools==8.3.0, mpi4py==3.0.3, numpy==1.18.4, packaging==20.4, parso==0.7.0, pexpect==4.8.0, pickleshare==0.7.5, pluggy==0.13.1, prompt-toolkit==3.0.5, protobuf==3.12.2, ptyprocess==0.6.0, py==1.8.1, pyfits==3.5, Pygments==2.6.1, pyparsing==2.4.7, pytest==5.4.2, python-dateutil==2.8.1, pytz==2020.1, scipy==1.4.1, six==1.15.0, traitlets==4.3.3, wcwidth==0.2.2, zipp==3.1.0)
Note that each component in the modular CASA distribution uses a subset of these same dependencies.
The definition is provided here in pip-compatible format, so that one could save the preceding list to a list.txt file and
recreate the environment using:
::
pip install -r list.txt
| 28.15 | 624 | 0.692126 |
ade15536975689fcc2470a887a4160f710b67624 | 210 | rst | reStructuredText | docs/source/api/browse.services.util.external_refs_cits.rst | rstojnic/arxiv-browse | 8847fb8bcd79cc2b162fd3de16db8fba1a1df020 | [
"MIT"
] | 61 | 2019-01-10T21:15:00.000Z | 2022-02-21T13:22:22.000Z | docs/source/api/browse.services.util.external_refs_cits.rst | rstojnic/arxiv-browse | 8847fb8bcd79cc2b162fd3de16db8fba1a1df020 | [
"MIT"
] | 91 | 2019-01-14T21:12:06.000Z | 2022-03-09T19:52:59.000Z | docs/source/api/browse.services.util.external_refs_cits.rst | rstojnic/arxiv-browse | 8847fb8bcd79cc2b162fd3de16db8fba1a1df020 | [
"MIT"
] | 38 | 2019-01-10T22:01:30.000Z | 2022-03-10T23:07:00.000Z | browse.services.util.external_refs_cits module
================================================
.. automodule:: browse.services.util.external_refs_cits
:members:
:undoc-members:
:show-inheritance:
| 26.25 | 55 | 0.580952 |