.. dqfort/django-oauth-toolkit: docs/tutorial/tutorial_04.rst

Part 4 - Revoking an OAuth2 Token
=================================

Scenario
--------

You've granted a user an :term:`Access Token`, following :doc:`part 1 <tutorial_01>`, and now you would like to revoke that token, probably in response to a client request (to log out).

Revoking a Token
----------------

Be sure that you've granted a valid token. If you've hooked `oauth-toolkit` into your `urls.py` as specified in :doc:`part 1 <tutorial_01>`, you'll have a URL at `/o/revoke_token`. By submitting the appropriate request to that URL, you can revoke a user's :term:`Access Token`.

`Oauthlib <https://github.com/idan/oauthlib>`_ is compliant with https://tools.ietf.org/html/rfc7009, so as specified there, the revocation request requires:

- token: REQUIRED, this is the :term:`Access Token` you want to revoke
- token_type_hint: OPTIONAL, designating either 'access_token' or 'refresh_token'.

Note that these revocation-specific parameters are in addition to the authentication parameters already specified by your particular client type.

Setup a Request
---------------

Depending on the client type you're using, the token revocation request you submit to the authentication server may vary. A `Public` client, for example, will not have access to your `Client secret`. A revoke request from a public client would omit that secret, and take the form:

::

    POST /o/revoke_token/ HTTP/1.1
    Content-Type: application/x-www-form-urlencoded

    token=XXXX&client_id=XXXX

Where token is the :term:`Access Token` specified above, and client_id is the `Client id` obtained in :doc:`part 1 <tutorial_01>`. If your application type is `Confidential`, it requires a `Client secret`, so you will have to add it as one of the parameters:

::

    POST /o/revoke_token/ HTTP/1.1
    Content-Type: application/x-www-form-urlencoded

    token=XXXX&client_id=XXXX&client_secret=XXXX

The server will respond with a `200` status code on successful revocation. You can use `curl` to make a revoke request on your server. If you have access to a local installation of your authorization server, you can test revoking a token with a request like the one shown below, for a `Confidential` client.

::

    curl --data "token=XXXX&client_id=XXXX&client_secret=XXXX" http://localhost:8000/o/revoke_token/
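The same form-encoded body can also be built programmatically. Below is a minimal sketch using only Python's standard library; the token and client values are placeholders, and actually POSTing the body is left to your HTTP client of choice:

```python
from urllib.parse import urlencode

def build_revoke_body(token, client_id, client_secret=None):
    """Form-encode the RFC 7009 revocation parameters.

    client_secret should only be included for Confidential clients.
    """
    params = {"token": token, "client_id": client_id}
    if client_secret is not None:
        params["client_secret"] = client_secret
    return urlencode(params)

# POST this body to /o/revoke_token/ with
# Content-Type: application/x-www-form-urlencoded
body = build_revoke_body("XXXX", "XXXX", client_secret="XXXX")
print(body)  # token=XXXX&client_id=XXXX&client_secret=XXXX
```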
.. PythonicNinja/k-center-problem: docs/usage.rst

========
Usage
========

To use k_center in a project::

    import k_center
.. Badr-MOUFAD/SkillNER: docs/source/low_api_refs.rst

Low API references
==================

This section is meant for those who are interested in contributing to skillNer.
It comprises functions that are under the hood of skillNer.
Aside from that, we highly recommend those interested in contributing to skillNer
to check the `GitHub repository <https://github.com/AnasAito/SkillNER>`_ directly
and go through the code. There we added docstrings to provide as much code
explanation as possible.

.. note::

    For a friendly explanation of how skillNer was built,
    check the section `How it works <https://skillner.vercel.app/>`_ of our website.

.. autosummary::
    :toctree: generated

    skillNer.matcher_class.Matcher
    skillNer.matcher_class.SkillsGetter

.. autosummary::
    :toctree: generated

    skillNer.utils.Utils

.. autosummary::
    :toctree: generated

    skillNer.visualizer.html_elements
    skillNer.visualizer.phrase_class
.. null-none/kzcurrency: README.rst

=======
Install
=======

.. code-block:: bash

    pip install kz-currency

=======
Example
=======

.. code-block:: python

    from kzcurrency.list import KZCurrency

    currency = KZCurrency()
    print(currency.list())
    print(currency.rates())
    print(currency.get('USD'))

=======
License
=======

MIT
.. cenonn/resample: docs/source/tutorial/advanced.rst

.. _advanced:

***********************************
Advanced Tutorial
***********************************

**resample** provides other features when performing bootstraps:
handling multivariate data and validity checks.

Multivariate Data
===================================

**resample** allows many different kinds of multivariate data. The one
thing that they all have in common is that the dataset needs to be passed into
``boot`` as a ``pd.DataFrame``.

Matrix Statistics
-----------------------------------

The simplest case would be to calculate a statistic that looks at how each
variable is dependent on one another, such as the covariance or correlation ::

    multi_bootstrap = rs.boot(score[["mec", "vec"]], np.cov)

Performing the actual bootstrap is almost the same as in the univariate case.
The only difference is that the input data has multiple variables, so
the calculations handle matrices rather than atomic values. For example,
calculating a point estimate would now return a 2x2 matrix rather than a single
value ::

    multi_bootstrap.point_estimate(np.mean)

The only time this changes is when plotting the bootstrap distribution or when
calculating confidence intervals. In either of these two situations, the
``col`` and ``row`` arguments will need to be specified to denote which
specific value to analyze ::

    multi_bootstrap.plot(col=0, row=1)

.. plot::

    import pandas as pd
    import numpy as np
    import resample as rs

    data = pd.read_csv("score.csv")
    bootstrap = rs.boot(data[["mec", "vec"]], np.cov)
    bootstrap.plot(col=0, row=1)

::

    multi_bootstrap.ci(col=0, row=0)

Grouping Variables
-----------------------------------

**resample** also allows the user to calculate statistics that compare
different groups. For example, a user may want to look at the difference in
means between two groups ::

    def diff_mean(group1, group2):
        return np.mean(group1) - np.mean(group2)

The ``boot`` function handles this by specifying the ``group_cols`` argument; a
``list`` with the column names that specify the different groups should be
passed. In the example below, we want to get a bootstrap estimate of the
difference between the average algebra and statistics scores. This will also
require changing the structure of the score dataset to contain a column that
specifies an observation's group ::

    group_cols = ["alg", "sta"]
    data = score[group_cols].melt(value_vars=group_cols, var_name="test")
    boot_groups = rs.boot(data, diff_mean, group_cols=["test"])

Specifying ``group_cols`` will return the same type of object as before; the
functionality remains the same.

Output Variables
-----------------------------------

A user can also use **resample** to bootstrap situations that specify specific
dependent and independent variables, such as estimating regression
coefficients ::

    from sklearn.linear_model import LinearRegression

    def get_coefs(X, y):
        model = LinearRegression()
        model.fit(X, y)
        return model.coef_

    boot_reg = rs.boot(hormone, get_coefs, output_cols=["amount"])

Like all of the previous examples, this will return the same type of object as
before.

Validity Checks using Statistics Objects
========================================

Certain estimators will not be valid to bootstrap. This includes statistics
like the maximum and minimum. **resample** solves this problem by building up
estimators using Statistics objects.
These objects contain common statistics and hold information on whether
they are valid to bootstrap or not. More complicated estimators can be
created by adding, subtracting, etc. with other estimators and numeric values.
After being created, they need to be passed into the ``boot`` function
in place of a function.

If someone wanted to look at the average of the mean and median, they would
need to ::

    estimator = (rs.Mean() + rs.Median()) / 2
    bootstrap = rs.boot(data["mec"], estimator)

This particular case uses a valid estimator, so **resample** will not give a
warning.
Using the max, on the other hand, would cause **resample** to give a warning ::

    estimator = rs.Max()
    bootstrap = rs.boot(data["mec"], estimator)
    # would raise a python Warning: "results from bootstrap may not be valid"

This feature can be completely bypassed if a user wants to proceed with the
bootstrap anyway.
.. balrampariyarath/rucio: doc/source/REST_identity.rst

===========
identity.py
===========
.. http:put:: /identities/<account>/x509

    Create a new identity and map it to an account.

    **Example request**:

    .. sourcecode:: http

        PUT /identities/<account>/x509 HTTP/1.1

    **Example response**:

    .. sourcecode:: http

        HTTP/1.1 200 OK
        Vary: Accept
        Content-Type:

.. http:put:: /identities/<account>/gss

    Create a new identity and map it to an account.

    **Example request**:

    .. sourcecode:: http

        PUT /identities/<account>/gss HTTP/1.1

    **Example response**:

    .. sourcecode:: http

        HTTP/1.1 200 OK
        Vary: Accept
        Content-Type:
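Either of these mapping requests can also be issued from a script. A minimal sketch with Python's standard library follows; the host name and the account name ``jdoe`` are placeholders, and any authentication headers your Rucio deployment requires are omitted:

```python
from urllib import request

# Build (but do not send) a PUT request mapping an x509 identity
# to the hypothetical account "jdoe".
req = request.Request(
    "https://rucio.example.com/identities/jdoe/x509",
    method="PUT",
)
print(req.get_method(), req.full_url)
# Sending it would be: request.urlopen(req)
```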
.. emory-libraries-ecds/namedropper-py: CHANGELOG.rst

Change & Version Information
============================

The following is a summary of changes and improvements to
:mod:`namedropper`. New features in each version should be listed, with
any necessary information about installation or upgrade notes.

0.3.1
-----

* Corrected CSV output of **lookup-names** script, which was broken
  in 0.3.0.

0.3
---

* New script **count-nametags**

  * A user can run a script to get summary information about the number of
    tagged names in an EAD document, in order to do simple comparison of
    tagged and untagged documents.

* Updates to **lookup-names** script

  * When a user runs the lookup-names script to generate a CSV file, the resulting output
    includes resource type for person, place, or organization so that results can be
    filtered and organized by broad types.
  * When a user interrupts the lookup-names script while it is running, it stops
    processing gracefully and reports on what was done so that the user can get an idea
    of the output without waiting for the script to complete on a long document.
  * When a user runs the lookup-names script with options that generate no results,
    the script does not create a CSV file or an enhanced xml file (even if those options
    were specified) and prints a message explaining why, so that the user is not confused
    by empty or unchanged files.
  * When users run the lookup-names script to generate annotated XML, they can optionally
    add tags with Oxygen history tracking comments so that changes can be reviewed and
    accepted or rejected in Oxygen.
  * Bug fix: When a user runs the lookup-names script on an XML file that does not have
    all of its component parts, it should not crash.
  * Bug fix: When annotating XML, the script will no longer crash if --types is not restricted
    to Person,Place,Organisation (or some subset of those three), and will warn about
    recognized entities that cannot be inserted into the output XML.
  * Bug fix: When annotating XML, tags will not be inserted where they are not schema valid
    (schema validation currently only supported for EAD).
  * Bug fix: If output XML is requested but an HTTP proxy is not configured, the script will
    halt and print information about setting a proxy, instead of crashing when attempting to
    validate the output XML.

0.2.1
-----

* Normalize whitespace for text context when generating CSV output
  (primarily affects plain-text input).

0.2
---

* A command-line user running the lookup-names script can have the input
  document type auto-detected, so they don't have to specify an input type
  every time they use the script.
* A command line user can run a script to look up recognized person names from
  a TEI or EAD XML document in a name authority system so that recognized
  names can be linked to other data.
* A command line user can run a script to generate a new version of an EAD XML
  document with tagged named entities, in order to automatically link
  mentioned entities to other data sources.
* A command line user can run a script to generate a new version of a TEI XML
  document with tagged named entities, in order to automatically link
  mentioned entities to other data sources.
* A command line user can optionally export identified resources and
  associated data to a CSV file, so they can review the results in more
  detail.

0.1
---

* New script **lookup-names**

  * A command line user can run a script to output recognized names in an EAD
    XML document in order to evaluate automated name recognition and
    disambiguation.
  * A command line user can run a script to output recognized names in a TEI XML
    document in order to evaluate automated name recognition and disambiguation.
.. UCD4IDS/sage: build/pkgs/gdb/SPKG.rst

gdb: The GNU Project debugger
=============================

Description
-----------

GDB, the GNU Project debugger, allows you to see what is going on
"inside" another program while it executes -- or what another program
was doing at the moment it crashed.

License
-------

GPL v3+

Upstream Contact
----------------

http://www.gnu.org/software/gdb/

Special Update/Build Instructions
---------------------------------

Current version needs makeinfo installed to build successfully.
.. pyrrrat/moved-ironic: doc/source/drivers/snmp.rst

===========
SNMP driver
===========

The SNMP power driver enables control of power distribution units of the type
frequently found in data centre racks. PDUs frequently have a management
ethernet interface and SNMP support enabling control of the power outlets.

The SNMP power driver works with the PXE driver for network deployment and
network-configured boot.

List of supported devices
=========================

This is a non-exhaustive list of supported devices. Any device not listed in
this table could possibly work using a similar driver.

Please report any device status.

============== ========== ========== =====================
Manufacturer   Model      Supported? Driver name
============== ========== ========== =====================
APC            AP7920     Yes        apc_masterswitch
APC            AP9606     Yes        apc_masterswitch
APC            AP9225     Yes        apc_masterswitchplus
APC            AP7155     Yes        apc_rackpdu
APC            AP7900     Yes        apc_rackpdu
APC            AP7901     Yes        apc_rackpdu
APC            AP7902     Yes        apc_rackpdu
APC            AP7911a    Yes        apc_rackpdu
APC            AP7930     Yes        apc_rackpdu
APC            AP7931     Yes        apc_rackpdu
APC            AP7932     Yes        apc_rackpdu
APC            AP7940     Yes        apc_rackpdu
APC            AP7941     Yes        apc_rackpdu
APC            AP7951     Yes        apc_rackpdu
APC            AP7960     Yes        apc_rackpdu
APC            AP7990     Yes        apc_rackpdu
APC            AP7998     Yes        apc_rackpdu
APC            AP8941     Yes        apc_rackpdu
APC            AP8953     Yes        apc_rackpdu
APC            AP8959     Yes        apc_rackpdu
APC            AP8961     Yes        apc_rackpdu
APC            AP8965     Yes        apc_rackpdu
Aten           all?       Yes        aten
CyberPower     all?       Untested   cyberpower
EatonPower     all?       Untested   eatonpower
Teltronix      all?       Yes        teltronix
============== ========== ========== =====================

Software Requirements
=====================

- The PySNMP package must be installed, variously referred to as ``pysnmp``
  or ``python-pysnmp``

Enabling the SNMP Power Driver
==============================

- Add ``pxe_snmp`` to the list of ``enabled_drivers`` in
  ``/etc/ironic/ironic.conf``
- Ironic Conductor must be restarted for the new driver to be loaded.

Ironic Node Configuration
=========================

Nodes are configured for SNMP control by setting the Ironic node object's
``driver`` property to be ``pxe_snmp``. Further configuration values are
added to ``driver_info``:

- ``snmp_driver``: PDU manufacturer driver
- ``snmp_address``: the IPv4 address of the PDU controlling this node.
- ``snmp_port``: (optional) A non-standard UDP port to use for SNMP operations.
  If not specified, the default port (161) is used.
- ``snmp_outlet``: The power outlet on the PDU (1-based indexing).
- ``snmp_protocol``: (optional) SNMP protocol version
  (permitted values ``1``, ``2c`` or ``3``). If not specified, SNMPv1
  is chosen.
- ``snmp_community``: (Required for SNMPv1 and SNMPv2c) SNMP community
  parameter for reads and writes to the PDU.
- ``snmp_security``: (Required for SNMPv3) SNMP security string.
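Putting these properties together, a node's ``driver_info`` for an APC rack PDU might look like the following sketch (all values here are illustrative, not defaults):

```python
# Illustrative driver_info for a node powered via outlet 3 of an APC rack PDU.
driver_info = {
    "snmp_driver": "apc_rackpdu",  # PDU manufacturer driver
    "snmp_address": "10.0.0.10",   # IPv4 address of the controlling PDU
    "snmp_outlet": 3,              # power outlet, 1-based indexing
    "snmp_protocol": "2c",         # optional; SNMPv1 if omitted
    "snmp_community": "private",   # required for SNMPv1 and SNMPv2c
}
print(sorted(driver_info))
```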
PDU Configuration
=================

This version of the SNMP power driver does not support handling
PDU authentication credentials. When using SNMPv3, the PDU must be
configured for ``NoAuthentication`` and ``NoEncryption``. The
security name is used analogously to the SNMP community in early
SNMP versions.
.. gaybro8777/klio: docs/src/userguide/anatomy/graph.rst

Graph
=====

In **streaming** mode, Klio makes use of `Google Pub/Sub`_ and `GCS buckets`_ to create a directed acyclic graph (DAG) to string job dependencies together, allowing various modes of execution.

Klio supports two modes of execution: :ref:`top-down <top-down>` and :ref:`bottom-up <bottom-up>`.

.. _top-down:

Top-Down Execution
------------------

With top-down execution, every Klio job in the graph is run for every file submitted to it.

Here we have a graph of Klio jobs. A Pub/Sub message containing a :ref:`kliomessage` (which
contains a reference to a unique file) is published to the left-most job (the "apex" job). That
job runs the necessary logic on the referenced file. Once it's done, it publishes a ``KlioMessage``
to Pub/Sub for the child jobs to consume. Once those child jobs finish, they publish a
``KlioMessage`` to Pub/Sub for their child jobs to consume, and so on.

.. figure:: images/top_down.gif
    :alt: top-down execution flow

This continues until all jobs in a graph have been executed for a particular audio file.

.. note::

    Any job can be an "apex" node!

    While the above animation shows a message being published to the root of the overall graph,
    you may publish messages to any job directly. Depending on the execution mode of the published
    message (top-down or bottom-up), any job downstream of the originally-triggered job may
    (top-down) or may not (bottom-up) be triggered.

.. _bottom-up:

Bottom-Up Execution
-------------------

It's not always efficient or necessary to run every Klio job in the graph for a given file. Maybe
you just want to run a single job for a file, which sometimes means running the parent Klio jobs
to fill in missing dependencies.

In bottom-up execution mode, missing dependencies for a particular job are recursively created.
Here we have another graph of Klio jobs, and we publish a Pub/Sub message (a reference to a file)
to our Klio job, the right-most node here.

.. figure:: images/bottom_up.gif
    :alt: bottom-up execution flow

Klio will first check to see if the input file for that file is available to download. If it sees
that it's missing, Klio will submit a :ref:`kliomessage` with the same file reference to
the parent job that generates the input file. If the input to that job is *also* missing, Klio
will submit the same Pub/Sub message to *its* parent.
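This recursive walk up the graph can be pictured with a toy sketch (purely illustrative pseudologic, not Klio's actual API):

```python
# Toy job graph: each job names its parent; None marks the apex job.
parents = {"C": "B", "B": "A", "A": None}
# Jobs whose output for this file already exists in storage.
have_output = {"A"}

def jobs_to_run(job):
    """Walk upward until a job's output exists, then run back down."""
    if job is None or job in have_output:
        return []
    return jobs_to_run(parents[job]) + [job]

# A's output exists, so only B and then C need to run.
print(jobs_to_run("C"))  # ['B', 'C']
```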
When the parent job finishes, it essentially resubmits work to child jobs. But Klio is smart
enough not to let it trigger *all* child jobs: only the jobs that are in the direct path of the
originating job are triggered, so other child jobs won't do any unnecessary work.

Bottom-up execution is particularly useful whenever work needs to be re-run in just one part of
the graph.

.. _Google Pub/Sub: https://cloud.google.com/pubsub/docs
.. _GCS buckets: https://cloud.google.com/storage/docs
.. mail2nsrajesh/zaqar: install-guide/source/get_started.rst

==========================
Messaging service overview
==========================

The Message service is multi-tenant, fast, reliable, and scalable. It allows
developers to share data between distributed application components performing
different tasks, without losing messages or requiring each component to be
always available.

The service features a RESTful API and a Websocket API, which developers can
use to send messages between various components of their SaaS and mobile
applications, by using a variety of communication patterns.

Key features
~~~~~~~~~~~~

The Messaging service provides the following key features:

* Choice between two communication transports, both with Identity service
  support:

  * Firewall-friendly, **HTTP-based RESTful API**. Many of today's developers
    prefer a more web-friendly HTTP API. They value the simplicity and
    transparency of the protocol, its firewall-friendly nature, and its huge
    ecosystem of tools, load balancers and proxies. In addition, cloud
    operators appreciate the scalability aspects of the REST architectural
    style.
  * **Websocket-based API** for persistent connections. Websocket protocol
    provides communication over persistent connections. Unlike HTTP, where
    new connections are opened for each request/response pair, Websocket can
    transfer multiple requests/responses over a single TCP connection. It saves
    much network traffic and minimizes delays.

* Multi-tenant queues based on Identity service IDs.
* Support for several common patterns including event broadcasting, task
  distribution, and point-to-point messaging.
* Component-based architecture with support for custom back ends and message
  filters.
* Efficient reference implementation with an eye toward low latency and high
  throughput (dependent on back end).
* Highly-available and horizontally scalable.
* Support for subscriptions to queues. Several notification types are
  available:

  * Email notifications
  * Webhook notifications
  * Websocket notifications

Layers of the Messaging service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Messaging service has the following layers:

* The transport layer (Messaging application), which can provide these APIs:

  * HTTP RESTful API (via ``wsgi`` driver).
  * Websocket API (via ``websocket`` driver).

* The storage layer, which keeps all the data and metadata about queues and
  messages. It has two sub-layers:

  * The management store database (Catalog). Can be a ``MongoDB`` database (or
    ``MongoDB`` replica-set) or an SQL database.
  * The message store databases (Pools). Can be a ``MongoDB`` database (or
    ``MongoDB`` replica-set) or a ``Redis`` database.
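The Catalog/Pools split amounts to a small routing table: the management store records which message-store pool holds each queue. A toy sketch of that lookup follows (queue names and backend URIs are made up, not Zaqar's actual code):

```python
# Management store (Catalog): queue name -> pool name.
catalog = {"billing-events": "pool-a", "thumbnails": "pool-b"}

# Message store databases (Pools): pool name -> backend URI.
pools = {"pool-a": "mongodb://db1:27017", "pool-b": "redis://cache1:6379"}

def backend_for(queue):
    """Resolve which storage backend holds a queue's messages."""
    return pools[catalog[queue]]

print(backend_for("thumbnails"))  # redis://cache1:6379
```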
.. SALMON-TDDFT/SALMON-DOCS: source/acknowledgements.rst

.. _acknowledgements:

Acknowledgements
------------------

SALMON has been developed by the SALMON developers with support from the
Center for Computational Sciences, University of Tsukuba, and the
National Institute for Quantum and Radiological Science and Technology.

SALMON has been supported by Strategic Basic
Research Programs, CREST, Japan Science and Technology Agency, under
Grant Number JPMJCR16N5, in the research area of Advanced core
technology for creation and practical utilization of innovative
properties and functions based upon optics and photonics. SALMON was
also supported by the Ministry of Education, Culture, Sports, Science and
Technology of Japan as a social and scientific priority issue (Creation
of new functional devices and high-performance materials to support
next-generation industries: CDMSI) to be tackled by using the post-K
computer.
# ElsevierSoftwareX/SOFTX-D-22-00045: src/backend/api-test/info.rest

### Ping
POST https://127.0.0.1:8081/restful/info/ping HTTP/1.1
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJNRCI6dHJ1ZSwiYWRtaW4iOnRydWUsImV4cCI6MTYwODkzMjIxOSwiaW5zdGl0dXRpb25faWQiOiI3NTYwM2RkNi1lMGY0LTRlNTMtOTNjZS1hNjRkZTNmZTRhOWUiLCJpbnN0aXR1dGlvbl9pcF9hZGRyZXNzIjoiMTI3LjAuMC4xIiwiaW5zdGl0dXRpb25fcG9ydF9udW1iZXIiOjgwODEsIm9yaWdfaWF0IjoxNjA4OTMxOTE5LCJzdXBlcnVzZXIiOmZhbHNlLCJ1c2VyX2lkIjoiNWU0ZTdhYzUtNTQ3NS00ZDIwLTgzOTItMTc4ZDUwMThkOGZhIn0.Onw1wcSXMGhAymlOKSeQMSYDTnRsxaqpLPE4p3XJRGI
.. ypapanik/TDC: docs/tdc.single_pred.rst

tdc.single\_pred
========================

tdc.single\_pred.single\_pred\_dataset module
---------------------------------------------

.. automodule:: tdc.single_pred.single_pred_dataset
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.adme module
----------------------------

.. automodule:: tdc.single_pred.adme
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.crispr\_outcome module
---------------------------------------

.. automodule:: tdc.single_pred.crispr_outcome
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.develop module
-------------------------------

.. automodule:: tdc.single_pred.develop
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.epitope module
-------------------------------

.. automodule:: tdc.single_pred.epitope
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.hts module
---------------------------

.. automodule:: tdc.single_pred.hts
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.paratope module
--------------------------------

.. automodule:: tdc.single_pred.paratope
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.qm module
--------------------------

.. automodule:: tdc.single_pred.qm
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.test\_single\_pred module
------------------------------------------

.. automodule:: tdc.single_pred.test_single_pred
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.tox module
---------------------------

.. automodule:: tdc.single_pred.tox
    :members:
    :undoc-members:
    :show-inheritance:

tdc.single\_pred.yields module
------------------------------

.. automodule:: tdc.single_pred.yields
    :members:
    :undoc-members:
    :show-inheritance:
8eecb6fb3919f30fabcc80b7e9f99f62697d5e72 | 43 | rst | reStructuredText | source/advanced/index.rst | LilyGO/Pictoblox_EN | 6a011f273bc4bfb7b904a0d551d51a99a2d88312 | [
"MIT"
] | 1 | 2020-07-29T00:23:40.000Z | 2020-07-29T00:23:40.000Z | source/advanced/index.rst | LilyGO/Pictoblox_EN | 6a011f273bc4bfb7b904a0d551d51a99a2d88312 | [
"MIT"
] | null | null | null | source/advanced/index.rst | LilyGO/Pictoblox_EN | 6a011f273bc4bfb7b904a0d551d51a99a2d88312 | [
"MIT"
] | 1 | 2020-10-16T05:26:04.000Z | 2020-10-16T05:26:04.000Z | ****************
Advanced
****************
| 10.75 | 16 | 0.186047 |
0a48dd88858c0d497e7cc747de226c3988330c8f | 1,501 | rst | reStructuredText | docs/accesslab.rst | El-Coder/f5-big-iq-lab | 0827e1376ae702e81dae03a111bfbefadb6f719c | [
"Apache-2.0"
] | null | null | null | docs/accesslab.rst | El-Coder/f5-big-iq-lab | 0827e1376ae702e81dae03a111bfbefadb6f719c | [
"Apache-2.0"
] | null | null | null | docs/accesslab.rst | El-Coder/f5-big-iq-lab | 0827e1376ae702e81dae03a111bfbefadb6f719c | [
"Apache-2.0"
] | null | null | null | Lab environment access
^^^^^^^^^^^^^^^^^^^^^^
You will find two ways to access the different systems in this lab:
- From the Jump Host
From the lab environment, launch a remote desktop session to access the Jump Host (Ubuntu Desktop).
To do this, in your lab deployment, click on the *ACCESS* button of the **Ubuntu Lamp Server** system and click on
*noVNC*. The password is ``purple123``.
|
You can also use *XRDP* as an alternative, click on the resolution that works for your laptop.
When the RDP session launches showing *Session: Xorg*, simply click *OK*, no credentials are needed.
Modern laptops with higher resolutions you might want to use 1440x900 and once XRDP is launched Zoom to 200%.
|
|udf_ubuntu_rdp_vnc|
- Going directly to the BIG-IQ CM or BIG-IP TMUI or WEB SHELL/SSH
To access the BIG-IQ directly, click on the *ACCESS* button under **BIG-IQ CM**
and select *TMUI*. The credentials to access the BIG-IQ TMUI are ``david/david`` and ``paula/paula`` as directed in the labs.
|udf_bigiq_tmui|
To SSH into a system, you can click on *WEB SHELL* or *SSH* (you will need your SSH keys set up in the lab environment for SSH).
|
You can also click on *DETAILS* on each component to see the credentials (login/password).
.. |udf_ubuntu_rdp_vnc| image:: /pictures/udf_ubuntu_rdp_vnc.png
:scale: 60%
.. |udf_bigiq_tmui| image:: /pictures/udf_bigiq_tmui.png
:scale: 60%
| 40.567568 | 133 | 0.684211 |
cf4e17ed718b973dd2c9a37cabbd32e21f60a087 | 178 | rst | reStructuredText | docs/reduction/reduction_param.rst | kglidic/tshirt | 8080d32b154bc2b4da8410b1d53c5353a8f6b9dd | [
"MIT"
] | 1 | 2020-08-09T10:28:17.000Z | 2020-08-09T10:28:17.000Z | docs/reduction/reduction_param.rst | kglidic/tshirt | 8080d32b154bc2b4da8410b1d53c5353a8f6b9dd | [
"MIT"
] | 25 | 2020-07-01T17:25:59.000Z | 2022-03-23T03:45:56.000Z | docs/reduction/reduction_param.rst | kglidic/tshirt | 8080d32b154bc2b4da8410b1d53c5353a8f6b9dd | [
"MIT"
] | 1 | 2020-06-30T15:56:32.000Z | 2020-06-30T15:56:32.000Z | Reduction Parameters
=================================
.. literalinclude:: ../../tshirt/parameters/reduction_parameters/example_reduction_parameters.yaml
.. :language: yaml
| 22.25 | 98 | 0.634831 |
d23bc3a23747849e39aee8aae92a9f7918394f07 | 5,897 | rst | reStructuredText | docs/apache-airflow/logging-monitoring/check-health.rst | arezamoosavi/airflow | c3c81c3144386d1de535c1c5e777270e727bb69e | [
"Apache-2.0"
] | 1 | 2022-03-23T21:57:44.000Z | 2022-03-23T21:57:44.000Z | docs/apache-airflow/logging-monitoring/check-health.rst | arezamoosavi/airflow | c3c81c3144386d1de535c1c5e777270e727bb69e | [
"Apache-2.0"
] | 2 | 2019-02-16T19:00:53.000Z | 2019-05-09T23:29:14.000Z | docs/apache-airflow/logging-monitoring/check-health.rst | samhita-alla/airflow | 5b8c3819900793f6530a7313a05a181edf86f224 | [
"Apache-2.0"
] | 1 | 2022-03-03T18:47:49.000Z | 2022-03-03T18:47:49.000Z | .. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Checking Airflow Health Status
==============================
Airflow has two methods to check the health of components - HTTP checks and CLI checks. All available checks are
accessible through the CLI, but only some are accessible through HTTP due to the role of the component being checked
and the tools being used to monitor the deployment.
For example, when running on Kubernetes, use a `liveness probe <https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/>`__ (``livenessProbe`` property)
with :ref:`CLI checks <check-health/cli-checks-for-scheduler>` on the scheduler deployment to restart it when it fails.
For the webserver, you can configure the readiness probe (``readinessProbe`` property) using :ref:`check-health/http-endpoint`.
For an example of a Docker Compose environment, see the ``docker-compose.yaml`` file available in the :doc:`/start/docker`.
.. _check-health/http-endpoint:
Health Check Endpoint
---------------------
To check the health status of your Airflow instance, you can simply access the endpoint
``/health``. It will return a JSON object in which a high-level glance is provided.
.. code-block:: JSON
{
"metadatabase":{
"status":"healthy"
},
"scheduler":{
"status":"healthy",
"latest_scheduler_heartbeat":"2018-12-26 17:15:11+00:00"
}
}
* The ``status`` of each component can be either "healthy" or "unhealthy"
* The status of ``metadatabase`` depends on whether a valid connection can be initiated with the database
* The status of ``scheduler`` depends on when the latest scheduler heartbeat was received
* If the last heartbeat was received more than 30 seconds (default value) earlier than the current time, the scheduler is
considered unhealthy
* This threshold value can be specified using the option ``scheduler_health_check_threshold`` within the
``[scheduler]`` section in ``airflow.cfg``
* If you run more than one scheduler, only the state of one scheduler will be reported, i.e. only one working scheduler is enough
for the scheduler state to be considered healthy
Please keep in mind that the HTTP response code of ``/health`` endpoint **should not** be used to determine the health
status of the application. The return code is only indicative of the state of the rest call (200 for success).
.. note::
    For this check to work, at least one working web server is required. If you use this check for scheduler
    monitoring, then a web server failure will cost you the ability to monitor the scheduler, which means
    that it may be restarted even though it is in good condition. For greater confidence, consider using :ref:`CLI Check for Scheduler <check-health/cli-checks-for-scheduler>`.
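As an illustration of how a monitoring script might interpret the ``/health`` payload described above, here is a minimal probe sketch. It is not part of Airflow itself: the canned payload, the ``is_healthy`` helper, and the pinned probe time are assumptions for illustration; a real probe would fetch the JSON from your webserver (for example with ``urllib.request``) and use the current time.

```python
import json
from datetime import datetime, timedelta, timezone

# Canned payload matching the example response above; a real probe would
# fetch it from the webserver, e.g. urllib.request.urlopen(".../health").
payload = json.loads(
    '{"metadatabase": {"status": "healthy"},'
    ' "scheduler": {"status": "healthy",'
    ' "latest_scheduler_heartbeat": "2018-12-26 17:15:11+00:00"}}'
)

def is_healthy(health, threshold=timedelta(seconds=30), now=None):
    """Return True only if every component reports "healthy" and the
    scheduler heartbeat is younger than the given threshold (which
    mirrors the default scheduler_health_check_threshold of 30s)."""
    if any(comp.get("status") != "healthy" for comp in health.values()):
        return False
    heartbeat = datetime.fromisoformat(
        health["scheduler"]["latest_scheduler_heartbeat"]
    )
    now = now or datetime.now(timezone.utc)
    return now - heartbeat <= threshold

# With the probe time pinned 19 seconds after the heartbeat, the check passes:
probe_time = datetime(2018, 12, 26, 17, 15, 30, tzinfo=timezone.utc)
print(is_healthy(payload, now=probe_time))  # True
```

Note that, consistent with the caveat above, this only reflects what the webserver reports; it says nothing when the webserver itself is down.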
.. _check-health/cli-checks-for-scheduler:
CLI Check for Scheduler
-----------------------
The scheduler creates an entry in the table :class:`airflow.jobs.base_job.BaseJob` with information about the host and
timestamp (heartbeat) at startup, and then updates it regularly. You can use this to check if the scheduler is
working correctly. To do this, you can use the ``airflow jobs check`` command. On failure, the command will exit
with a non-zero error code.
To check if the local scheduler is still working properly, run:
.. code-block:: bash
airflow jobs check --job-type SchedulerJob --hostname "$(hostname)"
To check if any scheduler is running when you are using high availability, run:
.. code-block:: bash
airflow jobs check --job-type SchedulerJob --allow-multiple --limit 100
CLI Check for Database
----------------------
To verify that the database is working correctly, you can use the ``airflow db check`` command. On failure, the command will exit
with a non-zero error code.
HTTP monitoring for Celery Cluster
----------------------------------
You can use Flower to monitor the health of the Celery cluster. It also provides an HTTP API that you can use to build a health check for your environment.
For details about installation, see: :ref:`executor:CeleryExecutor`. For details about usage, see: `The Flower project documentation <https://flower.readthedocs.io/en/stable/>`__.
CLI Check for Celery Workers
----------------------------
To verify that the Celery workers are working correctly, you can use the ``celery inspect ping`` command. On failure, the command will exit
with a non-zero error code.
To check if the worker running on the local host is working correctly, run:
.. code-block:: bash
celery --app airflow.executors.celery_executor.app inspect ping -d celery@${HOSTNAME}
To check if all the workers in the cluster are working correctly, run:
.. code-block:: bash
celery --app airflow.executors.celery_executor.app inspect ping
For more information, see: `Management Command-line Utilities (inspect/control) <https://docs.celeryproject.org/en/stable/userguide/monitoring.html#monitoring-control>`__ and `Workers Guide <https://docs.celeryproject.org/en/stable/userguide/workers.html>`__ in the Celery documentation.
| 45.713178 | 287 | 0.738511 |
8ad48382fef8074154a2e6a8ab1e2c3d1d926909 | 165 | rst | reStructuredText | NEWS.rst | steinwurf/astyle | 922d84b42046a7d5fe26f7dbc1031fc7d852b5eb | [
"MIT"
] | 16 | 2016-10-18T17:39:01.000Z | 2021-08-19T09:10:10.000Z | NEWS.rst | steinwurf/astyle | 922d84b42046a7d5fe26f7dbc1031fc7d852b5eb | [
"MIT"
] | null | null | null | NEWS.rst | steinwurf/astyle | 922d84b42046a7d5fe26f7dbc1031fc7d852b5eb | [
"MIT"
] | 7 | 2017-12-07T14:34:03.000Z | 2021-07-16T13:25:32.000Z | News for astyle
===============
This file lists the major changes between versions. For a more detailed list
of every change, see the Git log.
Latest
------
* tbd
| 16.5 | 76 | 0.666667 |
f84f955a66d662061c3501db761134763b3e097a | 1,807 | rst | reStructuredText | docs/contributing/how-to/how-to-write-tests/index.rst | 501ZHY/Nashpy | cdbc85b592a272d8431648e435a21b7736058f4e | [
"MIT"
] | 212 | 2016-11-06T12:44:08.000Z | 2022-03-10T03:05:27.000Z | docs/contributing/how-to/how-to-write-tests/index.rst | 501ZHY/Nashpy | cdbc85b592a272d8431648e435a21b7736058f4e | [
"MIT"
] | 93 | 2016-11-06T12:34:14.000Z | 2022-03-25T10:57:17.000Z | docs/contributing/how-to/how-to-write-tests/index.rst | 501ZHY/Nashpy | cdbc85b592a272d8431648e435a21b7736058f4e | [
"MIT"
] | 51 | 2016-11-06T12:31:22.000Z | 2022-03-29T10:45:53.000Z | How to write tests
==================
The :ref:`pytest <pytest-discussion>` framework is used for writing and running
tests for Nashpy.
Tests should be written in one of the following locations:
- In a preexisting file in the :code:`test/` directory.
- In a new file in the :code:`test/` directory.
Thanks to :code:`pytest` the format for a test is::
def test_<functionality>():
"""
<short summary if necessary>
"""
<code logic>
assert <boolean>
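As a concrete, self-contained illustration of that format, consider the following example. Both the helper function and the test are invented for illustration and are not part of the Nashpy source code; in a real test you would instead import the functionality under test:

```python
def expected_payoff(row_strategy, column_strategy, matrix):
    """Expected payoff of a 2x2 game for two mixed strategies.

    Hypothetical helper, defined inline only so the example runs on
    its own.
    """
    return sum(
        row_strategy[i] * column_strategy[j] * matrix[i][j]
        for i in range(2)
        for j in range(2)
    )

def test_expected_payoff_for_uniform_strategies():
    """
    Uniform mixed strategies should give the average of all payoffs.
    """
    matrix = [[1, 2], [3, 4]]
    payoff = expected_payoff([0.5, 0.5], [0.5, 0.5], matrix)
    assert payoff == 2.5
```

Placed in a file under ``test/``, this function is discovered and run automatically by ``pytest``.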
For guidance on how to run tests see: :ref:`how-to-run-tests`.
When writing a new test it is good practice to ensure the test fails (either by
modifying the test or by modifying the source code): this ensures that
:ref:`pytest <pytest-discussion>` is running the test in question.
Note that when adding new functionality the coverage of the test suite will be
checked using :ref:`coverage <coverage-discussion>`. Thus, in practice multiple
tests will need to be written to test new functionality completely.
Hypothesis
----------
Property based tests are tests that use random sampling in an efficient manner
to test given properties as opposed to specific values. Nashpy uses
:ref:`hypothesis <hypothesis-discussion>` for this.
For example, the following tests that for any given :code:`M`, which is a 3 by
numpy integer array, the length of the output of
:code:`get_derivative_of_fitness` is as expected::
    import numpy as np

    from hypothesis import given
    from hypothesis.extra.numpy import arrays

    # get_derivative_of_fitness is implemented in the Nashpy source code;
    # import it from there when writing the real test file.

    @given(M=arrays(np.int8, (3, 3)))
    def test_property_get_derivative_of_fitness(M):
        t = 0
        x = np.zeros(M.shape[1])
        derivative_of_fitness = get_derivative_of_fitness(x, t, M)
        assert len(derivative_of_fitness) == len(x)
| 32.854545 | 79 | 0.717211 |
39bae86d328ea40868c3e4789d85f1a4ad600b67 | 3,916 | rst | reStructuredText | machine/qemu/sources/u-boot/doc/board/amlogic/libretech-ac.rst | muddessir/framework | 5b802b2dd7ec9778794b078e748dd1f989547265 | [
"MIT"
] | 1 | 2021-11-21T19:56:29.000Z | 2021-11-21T19:56:29.000Z | machine/qemu/sources/u-boot/doc/board/amlogic/libretech-ac.rst | muddessir/framework | 5b802b2dd7ec9778794b078e748dd1f989547265 | [
"MIT"
] | null | null | null | machine/qemu/sources/u-boot/doc/board/amlogic/libretech-ac.rst | muddessir/framework | 5b802b2dd7ec9778794b078e748dd1f989547265 | [
"MIT"
] | null | null | null | .. SPDX-License-Identifier: GPL-2.0+
U-Boot for LibreTech AC
=======================
LibreTech AC is a single board computer manufactured by Libre Technology
with the following specifications:
- Amlogic S805X ARM Cortex-A53 quad-core SoC @ 1.2GHz
- ARM Mali 450 GPU
- 512MiB DDR4 SDRAM
- 10/100 Ethernet
- HDMI 2.0 4K/60Hz display
- 40-pin GPIO header
- 4 x USB 2.0 Host
- eMMC, SPI NOR Flash
- Infrared receiver
Schematics are available on the manufacturer's website.
U-Boot compilation
------------------
.. code-block:: bash
$ export CROSS_COMPILE=aarch64-none-elf-
$ make libretech-ac_defconfig
$ make
Image creation
--------------
Amlogic doesn't provide sources for the firmware and for tools needed
to create the bootloader image, so it is necessary to obtain them from
the git tree published by the board vendor:
.. code-block:: bash
$ wget https://releases.linaro.org/archive/13.11/components/toolchain/binaries/gcc-linaro-aarch64-none-elf-4.8-2013.11_linux.tar.xz
$ wget https://releases.linaro.org/archive/13.11/components/toolchain/binaries/gcc-linaro-arm-none-eabi-4.8-2013.11_linux.tar.xz
$ tar xvfJ gcc-linaro-aarch64-none-elf-4.8-2013.11_linux.tar.xz
$ tar xvfJ gcc-linaro-arm-none-eabi-4.8-2013.11_linux.tar.xz
$ export PATH=$PWD/gcc-linaro-aarch64-none-elf-4.8-2013.11_linux/bin:$PWD/gcc-linaro-arm-none-eabi-4.8-2013.11_linux/bin:$PATH
$ git clone https://github.com/BayLibre/u-boot.git -b libretech-ac amlogic-u-boot
$ cd amlogic-u-boot
$ wget https://raw.githubusercontent.com/BayLibre/u-boot/libretech-cc/fip/blx_fix.sh
$ make libretech_ac_defconfig
$ make
$ export UBOOTDIR=$PWD
Download the latest Amlogic Buildroot package, and extract it :
.. code-block:: bash
$ wget http://openlinux2.amlogic.com:8000/ARM/filesystem/Linux_BSP/buildroot_openlinux_kernel_4.9_fbdev_20180418.tar.gz
$ tar xfz buildroot_openlinux_kernel_4.9_fbdev_20180418.tar.gz buildroot_openlinux_kernel_4.9_fbdev_20180418/bootloader
$ export BRDIR=$PWD/buildroot_openlinux_kernel_4.9_fbdev_20180418
Go back to the mainline U-Boot source tree, then:
.. code-block:: bash
$ mkdir fip
$ cp $UBOOTDIR/build/scp_task/bl301.bin fip/
$ cp $UBOOTDIR/build/board/amlogic/libretech_ac/firmware/bl21.bin fip/
$ cp $UBOOTDIR/build/board/amlogic/libretech_ac/firmware/acs.bin fip/
$ cp $BRDIR/bootloader/uboot-repo/bl2/bin/gxl/bl2.bin fip/
$ cp $BRDIR/bootloader/uboot-repo/bl30/bin/gxl/bl30.bin fip/
$ cp $BRDIR/bootloader/uboot-repo/bl31/bin/gxl/bl31.img fip/
$ cp u-boot.bin fip/bl33.bin
$ sh $UBOOTDIR/blx_fix.sh \
fip/bl30.bin \
fip/zero_tmp \
fip/bl30_zero.bin \
fip/bl301.bin \
fip/bl301_zero.bin \
fip/bl30_new.bin \
bl30
$ $BRDIR/bootloader/uboot-repo/fip/acs_tool.pyc fip/bl2.bin fip/bl2_acs.bin fip/acs.bin 0
$ sh $UBOOTDIR/blx_fix.sh \
fip/bl2_acs.bin \
fip/zero_tmp \
fip/bl2_zero.bin \
fip/bl21.bin \
fip/bl21_zero.bin \
fip/bl2_new.bin \
bl2
$ $BRDIR/bootloader/uboot-repo/fip/gxl/aml_encrypt_gxl --bl3enc --input fip/bl30_new.bin
$ $BRDIR/bootloader/uboot-repo/fip/gxl/aml_encrypt_gxl --bl3enc --input fip/bl31.img
$ $BRDIR/bootloader/uboot-repo/fip/gxl/aml_encrypt_gxl --bl3enc --input fip/bl33.bin
$ $BRDIR/bootloader/uboot-repo/fip/gxl/aml_encrypt_gxl --bl2sig --input fip/bl2_new.bin --output fip/bl2.n.bin.sig
$ $BRDIR/bootloader/uboot-repo/fip/gxl/aml_encrypt_gxl --bootmk \
--output fip/u-boot.bin \
--bl2 fip/bl2.n.bin.sig \
--bl30 fip/bl30_new.bin.enc \
--bl31 fip/bl31.img.enc \
--bl33 fip/bl33.bin.enc
and then write the image to SD with:
.. code-block:: bash
$ DEV=/dev/your_sd_device
$ dd if=fip/u-boot.bin.sd.bin of=$DEV conv=fsync,notrunc bs=512 skip=1 seek=1
$ dd if=fip/u-boot.bin.sd.bin of=$DEV conv=fsync,notrunc bs=1 count=444
| 35.279279 | 135 | 0.704545 |
0d512a68d923a988eb2744dd26a5647c4e35ce05 | 2,395 | rst | reStructuredText | docs/class1/module1/lab06.rst | jamesaffeld/f5-gsts-labs-ansible-cookbook | 703dc4d840767e08245d217e5d6ca50599043cdf | [
"MIT"
] | 2 | 2018-08-01T16:36:42.000Z | 2019-02-19T15:02:56.000Z | docs/class1/module1/lab06.rst | jamesaffeld/f5-gsts-labs-ansible-cookbook | 703dc4d840767e08245d217e5d6ca50599043cdf | [
"MIT"
] | 3 | 2018-02-23T17:35:01.000Z | 2019-09-10T23:24:53.000Z | docs/class1/module1/lab06.rst | jamesaffeld/f5-gsts-labs-ansible-cookbook | 703dc4d840767e08245d217e5d6ca50599043cdf | [
"MIT"
] | 4 | 2017-12-18T08:55:02.000Z | 2019-09-10T01:00:29.000Z | Using static inventory
======================
Problem
-------
You need to have Ansible communicate with a predefined list of hosts
Solution
--------
Use a static inventory file.
A static inventory file is a INI formatted file. Here is an example ::
server ansible_host=10.1.1.6
bigip ansible_host=10.1.1.4
client ansible_host=10.1.1.5
The above text you be put in a file named ``hosts`` in the ``inventory`` directory.
You would use the inventory like so, ::
ansible-playbook -i inventory/hosts playbooks/site.yaml
#. Create a ``lab1.6`` directory in the ``labs`` directory.
#. Setup the filesystem layout to mirror the one :doc:`described in lab 1.3</class1/module1/lab03>`.
#. Add a ``server`` host to the ansible inventory and give it an ``ansible_host``
fact with the value ``10.1.1.6``
#. Add a ``client`` host to the ansible inventory and give it an ``ansible_host``
fact with the value ``10.1.1.5``
#. Add a ``bigip`` host to the ansible inventory and give it an ``ansible_host``
fact with the value ``10.1.1.4``
Discussion
----------
Static hosts are the original means of specifying an inventory to Ansible.
The format mentioned in the solution above includes the following information,
#. A host named ``bigip``. This value will be put in Ansible’s ``inventory_hostname``
variable.
#. A host *fact* called ``ansible_host``. This is a reserved variable in Ansible.
It is used by Ansible to connect to the remote host. Its value is ``10.1.1.4``.
There are many more forms of inventory than static lists. Indeed, you can also
provide dynamic lists that take the form of small programs which output specially
formatted JSON.
Static lists work well for demos, ad-hoc play running, and cases when your
organizations systems practically never change. Otherwise, a dynamic source is
probably better.
Dynamic sources must be written by hand if you require a specific means of
getting the host informations (for example, from a local database at your company).
There are also a number of dynamic resources that you can get from Ansible.
You can find `Community contributions here`_, and you can find Contributions that `ship with Ansible, here`_.
.. _Community contributions here: https://github.com/ansible/ansible/tree/devel/contrib/inventory
.. _ship with Ansible, here: https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/inventory | 38.629032 | 109 | 0.73904 |
f8bbc7abb1bd96bd4f9a56ae2e1046e9654934b9 | 1,904 | rest | reStructuredText | api.rest | Kazuo-Tsubokawa/MusicAppLaravel | 236c199cbfa4bdea456654dfdb13078880e5a5bf | [
"MIT"
] | null | null | null | api.rest | Kazuo-Tsubokawa/MusicAppLaravel | 236c199cbfa4bdea456654dfdb13078880e5a5bf | [
"MIT"
] | null | null | null | api.rest | Kazuo-Tsubokawa/MusicAppLaravel | 236c199cbfa4bdea456654dfdb13078880e5a5bf | [
"MIT"
] | null | null | null | # register
POST /api/register
Host: localhost
Content-Type: application/json
{
"name": "kazuo",
"email": "aa@a",
"password": "11111111"
}
###
# login
POST /api/login
Host: localhost
Content-Type: application/json
{
"email": "1@1",
"password": "11111111"
}
###
# songshow
GET /api/songs/33
Authorization: Bearer 5|7obE2c3v6fBra7SFFSd0iBnWh2io4Ngv3sQJSMIW
Host: localhost
Content-Type: application/json
###
#artistshow
GET /api/artists/8
Authorization: Bearer 5|7obE2c3v6fBra7SFFSd0iBnWh2io4Ngv3sQJSMIW
Host: localhost
Content-Type: application/json
###
# # store
# POST /api/songs
# Host: localhost
# Content-Type: application/json
# Authorization: Bearer 2|PvO22eNPQLgG8oKZKhJYmJ58B7QJFhJiMx5rdCIO
# {
# "category_id": "2",
# "title": "title",
# "file_name": "2.mp3",
# "description": "aaaa",
# "image": "2.jpeg"
# }
# ###
# # update
# PUT /api/songs/48
# Host: localhost
# Content-Type: application/json
# Authorization: Bearer 2|PvO22eNPQLgG8oKZKhJYmJ58B7QJFhJiMx5rdCIO
# {
# "category_id": "2",
# "title": "titleeeen",
# "file_name": "2.mp3",
# "description": "bbbb",
# "image": "2.jpeg"
# }
###
# likestore
POST /api/songs/1/likes
Host: localhost
Content-Type: application/json
Authorization: Bearer 2|emhEeFh2oPaKocFcDUyxPuMA5J5O8py5CCLiJDD7
{
"user_id": "1",
"song_id": "1"
}
###
# likedestroy
DELETE /api/songs/11/likes/91
Host: localhost
Content-Type: application/json
Authorization: Bearer 8|usOuMgoNVFoVIDDPeneCNlCc3GLIUP4peNfnQaBE
###
# followstore
POST /api/artists/1/follows
Host: localhost
Content-Type: application/json
Authorization: Bearer 9|oruKsDrVdEhNmm2ZrrOV7vBBRBLmwhNMVFAAv94s
{
"user_id": "7",
"artist_id": "7"
}
###
# followdestroy
DELETE /api/artists/7/follows/29
Host: localhost
Content-Type: application/json
Authorization: Bearer 9|oruKsDrVdEhNmm2ZrrOV7vBBRBLmwhNMVFAAv94s
| 16.701754 | 66 | 0.702731 |
ced3e43bfddca278b4693c90785a340c05f246eb | 404 | rst | reStructuredText | docs/source/text/bleu_score.rst | Borda/torchmetrics | 6144eb3b248b7fa315bb9afeb96c690a5d747001 | [
"Apache-2.0"
] | null | null | null | docs/source/text/bleu_score.rst | Borda/torchmetrics | 6144eb3b248b7fa315bb9afeb96c690a5d747001 | [
"Apache-2.0"
] | null | null | null | docs/source/text/bleu_score.rst | Borda/torchmetrics | 6144eb3b248b7fa315bb9afeb96c690a5d747001 | [
"Apache-2.0"
] | null | null | null | .. customcarditem::
:header: BLEU Score
:image: https://pl-flash-data.s3.amazonaws.com/assets/thumbnails/summarization.svg
:tags: Text
.. include:: ../links.rst
##########
BLEU Score
##########
Module Interface
________________
.. autoclass:: torchmetrics.BLEUScore
:noindex:
Functional Interface
____________________
.. autofunction:: torchmetrics.functional.bleu_score
:noindex:
| 17.565217 | 85 | 0.707921 |
0679435bc2ae05c870097ea976920ce8990e1fed | 3,764 | rst | reStructuredText | source/howto/k3b_einrichten.txt.rst | rakor/wiki | 2acc9e401e12fdf45cd505b17d54a740bbcc9a6f | [
"CC-BY-3.0"
] | 4 | 2019-10-14T09:30:23.000Z | 2021-08-31T11:25:04.000Z | source/howto/k3b_einrichten.txt.rst | rakor/wiki | 2acc9e401e12fdf45cd505b17d54a740bbcc9a6f | [
"CC-BY-3.0"
] | null | null | null | source/howto/k3b_einrichten.txt.rst | rakor/wiki | 2acc9e401e12fdf45cd505b17d54a740bbcc9a6f | [
"CC-BY-3.0"
] | 9 | 2019-10-14T07:09:18.000Z | 2021-07-31T18:50:15.000Z | K3B einrichten
==============
.. |date| date::
.. sidebar:: Info
.. image:: ../images/logo-freebsd.png
The burning program K3B enjoys extraordinary popularity, even among those
who do not actually use KDE. However, burning CDs requires quite a few
access rights. This article describes how K3B can be set up so that a
normal user can use it without restrictions.
Installation
------------
- `sysutils/k3b <https://www.google.com/search?q=sysutils/k3b&btnI=lucky>`__
  in the FreeBSD Ports.
- `sysutils/k3b-kde4 <https://www.google.com/search?q=sysutils/k3b-kde4&btnI=lucky>`__
  in the FreeBSD Ports.
- `sysutils/k3b <https://www.google.com/search?q=sysutils/k3b&btnI=lucky>`__
in Pkgsrc.
FreeBSD
-------
On FreeBSD, K3B requires SCSI emulation, additional access
rights, and entries in the file ``/etc/fstab``.
Access rights
~~~~~~~~~~~~~
To burn discs, K3B needs access rights to the corresponding
CD devices. The following command provides a list:
::
$ ls /dev/cd*
In addition, access to ``/dev/xpt0`` and ``/dev/pass*`` is required.
Unfortunately, the numbering of the relevant **pass** devices can change
(for example when an external USB drive is present at boot time), so
access must be granted to all **pass** devices. To at least limit the
risk somewhat, access should be restricted to a trusted user group. The
group **operator** is commonly used.
How access rights are granted is described in the `devfs HowTo <devfs>`__.
Here is a working example for the file
``/etc/devfs.rules``.
::
[localrules=10]
add path 'cd*' mode 0660 group operator
add path 'pass*' mode 0660 group operator
add path 'xpt0' mode 0660 group operator
Mounting from K3B
~~~~~~~~~~~~~~~~~
Since K3B does not use HAL for mounting, it needs an entry in
the file ``/etc/fstab`` and a mount point in the home directory of the
current user. The entry in ``/etc/fstab`` can look like
this:
::
/dev/cd0 .mnt/cd0 cd9660 ro,noauto 0 0
/dev/cd1 .mnt/cd1 cd9660 ro,noauto 0 0
Note that there is no ``/`` in front of the mount point.
This makes the mount point relative to the current
directory. Programs are usually started from the home directory of the
current user. Therefore, every user in the group with the corresponding
rights, usually **operator**, who wants to use K3B should run the
following command:
::
$ mkdir -p ~/.mnt/cd0 ~/.mnt/cd1
Of course, directories and ``fstab`` entries should only be created for
drives that actually exist.
Last but not least, mounting must be enabled for normal users.
This is done with the following command:
::
# sysctl vfs.usermount=1
This change is made permanent with an entry in the file
``/etc/sysctl.conf``.
::
vfs.usermount=1
K3B 2.0/k3b-kde4
~~~~~~~~~~~~~~~~
Starting with K3B 2.0, the version that runs with KDE4, the HAL daemon
must be active for K3B to detect the available drives. HAL can be started
at runtime with the following command:
::
# service hald onestart
To start HALD automatically at boot, the following two
entries must be added to the file ``/etc/rc.conf``:
::
dbus_enable="YES"
hald_enable="YES"
References
----------
- `devfs </howto/devfs>`__ - the devfs HowTo
- `K3B </anwendungen/K3B>`__
- `sysutils/k3b <https://www.google.com/search?q=sysutils/k3b&btnI=lucky>`__
  in the FreeBSD Ports.
- `sysutils/k3b <https://www.google.com/search?q=sysutils/k3b&btnI=lucky>`__
in Pkgsrc.
- http://k3b.org, the K3B homepage.
* :ref:`genindex`
Zuletzt geändert: |date|
| 27.474453 | 87 | 0.725292 |
fd5d4a1672ad20ef990fca8c87acfe006bb123c2 | 2,726 | rst | reStructuredText | docs/basic_overview.rst | BarbzYHOOL/MySQL-AutoXtraBackup | 8ae7927e72c03cf4e685d26f2c2d0d2580eeac52 | [
"MIT"
] | 28 | 2017-05-19T09:28:17.000Z | 2021-11-15T10:05:52.000Z | docs/basic_overview.rst | BarbzYHOOL/MySQL-AutoXtraBackup | 8ae7927e72c03cf4e685d26f2c2d0d2580eeac52 | [
"MIT"
] | null | null | null | docs/basic_overview.rst | BarbzYHOOL/MySQL-AutoXtraBackup | 8ae7927e72c03cf4e685d26f2c2d0d2580eeac52 | [
"MIT"
] | 3 | 2017-12-20T09:52:29.000Z | 2022-03-28T09:45:23.000Z | Basic Overview
==============
Project Structure
-----------------
XtraBackup is a powerful open-source hot online backup tool for MySQL
from Percona. This script uses XtraBackup to take full and incremental
backups, to prepare backups, and to restore them. Here is the project tree:
::
* master_backup_script -- Full and Incremental backup taker script.
* backup_prepare -- Backup prepare and restore script.
* partial_recovery -- Partial table recovery script.
* general_conf -- All-in-one config file's and config reader class folder.
* prepare_env_test_mode -- The directory for --test_mode actions.
* test -- The directory for test things.
* setup.py -- Setuptools Setup file.
* autoxtrabackup.py -- Commandline Tool provider script.
    * VagrantFile -- The Vagrant file for getting started with this tool (useful for contributors).
* /etc/bck.conf -- Config file will be created from general_conf/bck.conf
Available Options
-----------------
.. code-block:: shell
$ sudo autoxtrabackup
Usage: autoxtrabackup [OPTIONS]
Options:
--dry_run Enable the dry run.
--prepare Prepare/recover backups.
--backup Take full and incremental backups.
--partial Recover specified table (partial recovery).
--version Version information.
--defaults_file TEXT Read options from the given file [default:
/etc/bck.conf]
--tag TEXT Pass the tag string for each backup
--show_tags Show backup tags and exit
-v, --verbose Be verbose (print to console)
-lf, --log_file TEXT Set log file [default:
/var/log/autoxtrabackup.log]
-l, --log [DEBUG|INFO|WARNING|ERROR|CRITICAL]
Set log level [default: WARNING]
--test_mode Enable test mode.Must be used with
--defaults_file and only for TESTs for
XtraBackup
--help Print help message and exit.
Usage
-----
::
1. Install it.
2. Edit /etc/bck.conf file to reflect your environment or create your own config.
3. Pass this config file to autoxtrabackup with --defaults_file and begin to backup/prepare/restore.
Logging
--------
The logging mechanism uses Python 3's ``logging`` module.
It can log directly to the console as well as to a file.
| 36.837838 | 110 | 0.569699 |
d6696ec1369b773dbd9e56cc72711b1c197f77ab | 1,296 | rst | reStructuredText | docs/source/multi_tenancy.rst | categulario/norm | 232d3e25dcce2a1f698b429ecdedf5f8ee33c340 | [
"MIT"
] | 1 | 2020-10-11T06:40:33.000Z | 2020-10-11T06:40:33.000Z | docs/source/multi_tenancy.rst | categulario/coralillo | 232d3e25dcce2a1f698b429ecdedf5f8ee33c340 | [
"MIT"
] | 17 | 2017-08-22T16:52:03.000Z | 2017-08-30T17:23:56.000Z | docs/source/multi_tenancy.rst | categulario/norm | 232d3e25dcce2a1f698b429ecdedf5f8ee33c340 | [
"MIT"
] | 4 | 2018-05-15T18:10:10.000Z | 2020-09-01T08:58:55.000Z | Multi-tenancy
=============
It is often useful to store objects of the same class within different namespaces, for example when running an application that serves different clients and you don't want their data kept in the same place.
For this case, Coralillo has a ``Model`` subclass called ``BoundedModel`` that lets you specify a prefix for your models:
.. testsetup::
from coralillo import Engine
eng = Engine()
eng.lua.drop(args=['*'])
.. testcode::
from coralillo import Engine, BoundedModel, fields
eng = Engine()
current_namespace = 'coral'
class User(BoundedModel):
name = fields.Text()
@classmethod
def prefix(cls):
# here you may have your own way of determining the __bound__
# depending on the context. We will just return a variable's
# value
return current_namespace
class Meta:
engine = eng
# models are saved in the namespace given by the context
juan = User(name='Juan').save()
assert eng.redis.exists('coral:user:members')
# changing the context changes how models are found
current_namespace = 'nauyaca'
assert User.get(juan.id) is None
pepe = User(name='Pepe').save()
assert eng.redis.exists('nauyaca:user:members')
| 28.173913 | 207 | 0.665895 |
e97fb2b32b8972b17bc10dc3dbc36167d624075c | 226 | rst | reStructuredText | docs/TitanicAttempt.rst | brookemosby/titanic | e0eb3537a83c7b9d0b7a01db5f23785ffc6f8f70 | [
"MIT"
] | null | null | null | docs/TitanicAttempt.rst | brookemosby/titanic | e0eb3537a83c7b9d0b7a01db5f23785ffc6f8f70 | [
"MIT"
] | null | null | null | docs/TitanicAttempt.rst | brookemosby/titanic | e0eb3537a83c7b9d0b7a01db5f23785ffc6f8f70 | [
"MIT"
] | null | null | null | TitanicAttempt module
=====================
Produces survival predictions for Titanic passengers with 78.5% accuracy.
.. automodule:: TitanicAttempt.TitanicAttempt
:members:
:undoc-members:
:show-inheritance:
| 25.111111 | 77 | 0.69469 |
871089c7d0dd7e69f96e5a8eae6091c5868459a2 | 234 | rst | reStructuredText | docs/source2/generated/generated/statsmodels.regression.linear_model.RegressionResults.HC1_se.rst | GreatWei/pythonStates | c4a9b326bfa312e2ae44a70f4dfaaf91f2d47a37 | [
"BSD-3-Clause"
] | 76 | 2019-12-28T08:37:10.000Z | 2022-03-29T02:19:41.000Z | docs/source2/generated/generated/statsmodels.regression.linear_model.RegressionResults.HC1_se.rst | cluterdidiw/statsmodels | 543037fa5768be773a3ba31fba06e16a9edea46a | [
"BSD-3-Clause"
] | null | null | null | docs/source2/generated/generated/statsmodels.regression.linear_model.RegressionResults.HC1_se.rst | cluterdidiw/statsmodels | 543037fa5768be773a3ba31fba06e16a9edea46a | [
"BSD-3-Clause"
] | 35 | 2020-02-04T14:46:25.000Z | 2022-03-24T03:56:17.000Z | :orphan:
statsmodels.regression.linear\_model.RegressionResults.HC1\_se
==============================================================
.. currentmodule:: statsmodels.regression.linear_model
.. automethod:: RegressionResults.HC1_se
| 26 | 62 | 0.602564 |
3b731b8681d1936f348586cdfe9bd19b5a5441eb | 1,219 | rst | reStructuredText | docs/homework/homework1.rst | lhuang-pvamu/Parallel-Computing-Code | 3a520d93c46a1ca20677c730436834fc1012cc26 | [
"Apache-2.0"
] | 1 | 2019-09-18T17:12:22.000Z | 2019-09-18T17:12:22.000Z | docs/homework/homework1.rst | lhuang-pvamu/Parallel-Computing-Code | 3a520d93c46a1ca20677c730436834fc1012cc26 | [
"Apache-2.0"
] | null | null | null | docs/homework/homework1.rst | lhuang-pvamu/Parallel-Computing-Code | 3a520d93c46a1ca20677c730436834fc1012cc26 | [
"Apache-2.0"
] | 9 | 2018-09-28T17:43:16.000Z | 2022-01-28T23:23:53.000Z | Parallel Computing Homework 1
=============================
A prime pair, or twin prime, is a pair of prime numbers with a prime gap of two; in other words, the difference between the two primes is two, for example the twin prime pair (41, 43). You need to write an OpenMP program to find the total number of prime pairs between 2 and 50,000,000. Your grade will be determined not only by the correctness of the total number, but also by your program's performance. Your program should print the number of prime pairs and the total execution time. Report the speedup of your program using 1, 2, 4, and 8 threads respectively.
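To make the definition concrete, the counting logic can be sketched sequentially (shown in Python for brevity; the assignment itself must be implemented in C++ in ``prime.cpp`` and parallelized with OpenMP in ``prime_omp.cpp``):

```python
def count_twin_primes(limit):
    """Count pairs (p, p + 2) where both members are prime and p + 2 <= limit."""
    # Sieve of Eratosthenes up to the limit.
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, limit + 1, i))
    # Count every p such that p and p + 2 are both prime.
    return sum(1 for p in range(2, limit - 1) if sieve[p] and sieve[p + 2])

print(count_twin_primes(100))  # prints 8: (3,5), (5,7), ..., (71,73)
```

In the OpenMP version, the outer counting loop is a natural place for a ``#pragma omp parallel for`` with a reduction on the pair count.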
Build your code::
cd homework/hw1
make
Run your sequential version::
./prime
Run your OpenMP parallel version::
./prime_omp
Run both of them::
make run
You need to revise the prime.cpp and prime_omp.cpp files to create the sequential program and the OpenMP parallel program, respectively, to complete the homework.
The following grading percentages will be used to evaluate your program::
Correctness: 60%
Performance: 40%
Please submit all of your program source codes and a short report for performance observation.
| 36.939394 | 588 | 0.73831 |
1c6217c0b9284ecb551191d36ced526a59d5c2a9 | 3,269 | rst | reStructuredText | classes/class_capsulemesh.rst | vortexofdoom/godot-docs | c267e81350ceca52a58bc5b3946ace5b768293e3 | [
"CC-BY-3.0"
] | null | null | null | classes/class_capsulemesh.rst | vortexofdoom/godot-docs | c267e81350ceca52a58bc5b3946ace5b768293e3 | [
"CC-BY-3.0"
] | null | null | null | classes/class_capsulemesh.rst | vortexofdoom/godot-docs | c267e81350ceca52a58bc5b3946ace5b768293e3 | [
"CC-BY-3.0"
] | null | null | null | .. Generated automatically by doc/tools/makerst.py in Godot's source tree.
.. DO NOT EDIT THIS FILE, but the CapsuleMesh.xml source instead.
.. The source is found in doc/classes or modules/<name>/doc_classes.
.. _class_CapsuleMesh:
CapsuleMesh
===========
**Inherits:** :ref:`PrimitiveMesh<class_PrimitiveMesh>` **<** :ref:`Mesh<class_Mesh>` **<** :ref:`Resource<class_Resource>` **<** :ref:`Reference<class_Reference>` **<** :ref:`Object<class_Object>`
**Category:** Core
Brief Description
-----------------
Class representing a capsule-shaped :ref:`PrimitiveMesh<class_PrimitiveMesh>`.
Properties
----------
+---------------------------+--------------------------------------------------------------------+-----+
| :ref:`float<class_float>` | :ref:`mid_height<class_CapsuleMesh_property_mid_height>` | 1.0 |
+---------------------------+--------------------------------------------------------------------+-----+
| :ref:`int<class_int>` | :ref:`radial_segments<class_CapsuleMesh_property_radial_segments>` | 64 |
+---------------------------+--------------------------------------------------------------------+-----+
| :ref:`float<class_float>` | :ref:`radius<class_CapsuleMesh_property_radius>` | 1.0 |
+---------------------------+--------------------------------------------------------------------+-----+
| :ref:`int<class_int>` | :ref:`rings<class_CapsuleMesh_property_rings>` | 8 |
+---------------------------+--------------------------------------------------------------------+-----+
Description
-----------
Class representing a capsule-shaped :ref:`PrimitiveMesh<class_PrimitiveMesh>`.
Property Descriptions
---------------------
.. _class_CapsuleMesh_property_mid_height:
- :ref:`float<class_float>` **mid_height**
+-----------+-----------------------+
| *Default* | 1.0 |
+-----------+-----------------------+
| *Setter* | set_mid_height(value) |
+-----------+-----------------------+
| *Getter* | get_mid_height() |
+-----------+-----------------------+
Height of the capsule mesh from the center point.
.. _class_CapsuleMesh_property_radial_segments:
- :ref:`int<class_int>` **radial_segments**
+-----------+----------------------------+
| *Default* | 64 |
+-----------+----------------------------+
| *Setter* | set_radial_segments(value) |
+-----------+----------------------------+
| *Getter* | get_radial_segments() |
+-----------+----------------------------+
Number of radial segments on the capsule mesh.
.. _class_CapsuleMesh_property_radius:
- :ref:`float<class_float>` **radius**
+-----------+-------------------+
| *Default* | 1.0 |
+-----------+-------------------+
| *Setter* | set_radius(value) |
+-----------+-------------------+
| *Getter* | get_radius() |
+-----------+-------------------+
Radius of the capsule mesh.
.. _class_CapsuleMesh_property_rings:
- :ref:`int<class_int>` **rings**
+-----------+------------------+
| *Default* | 8 |
+-----------+------------------+
| *Setter* | set_rings(value) |
+-----------+------------------+
| *Getter* | get_rings() |
+-----------+------------------+
Number of rings along the height of the capsule.
| 34.052083 | 197 | 0.424901 |
9b17e371652617d28fead6cf3cdc702c522de300 | 2,284 | rst | reStructuredText | docs/source/server/configuration-reference.rst | riotkit-org/backup-repository | 3376fe61c5b6bca1aec60c87311e5d55c8cdb66b | [
"Apache-2.0"
] | 8 | 2021-03-21T15:22:07.000Z | 2022-03-28T11:57:48.000Z | docs/source/server/configuration-reference.rst | riotkit-org/backup-repository | 3376fe61c5b6bca1aec60c87311e5d55c8cdb66b | [
"Apache-2.0"
] | 85 | 2021-02-11T07:04:38.000Z | 2022-03-30T20:17:40.000Z | docs/source/server/configuration-reference.rst | riotkit-org/file-repository | 3376fe61c5b6bca1aec60c87311e5d55c8cdb66b | [
"Apache-2.0"
] | 1 | 2019-11-03T19:46:05.000Z | 2019-11-03T19:46:05.000Z | Configuration reference
=======================
1. API documentation
--------------------
API documentation is accessible at the application's endpoint; take a look at http://localhost/api/stable/doc
2. Storage
------------------------
1) :class:`FS_RW_NAME` and :class:`FS_RO_NAME` define the NAME of the configuration to use; for example, in :class:`FS_LOCAL_DIRECTORY` the "LOCAL" part is the configuration name.
2) **For most cases it is enough to use the same adapter in both the RO and RW slots.**
3) The following default configuration uses local Min.io storage available at http://localhost:9000; you can run Min.io in Docker.
4) If you don't have any cloud storage and don't want to use Min.io, just switch :class:`FS_RW_NAME` and :class:`FS_RO_NAME` to :class:`"LOCAL"`. **If you are using Docker, remember to mount the path given by FS_LOCAL_DIRECTORY, otherwise all files will disappear after the container is restarted or recreated.**
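As a concrete illustration of the naming convention, a purely local setup could look like the following (hypothetical values; adjust the directory to your environment):

```shell
# "LOCAL" is the configuration name referenced by both slots
FS_RW_NAME=LOCAL
FS_RO_NAME=LOCAL
# Options prefixed with FS_LOCAL_ belong to the "LOCAL" configuration
FS_LOCAL_DIRECTORY=/var/lib/backup-repository/storage
```

With this in place, both the read-write and read-only slots resolve to the local filesystem adapter.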
.. literalinclude:: ../../../server/.env.dist
:start-after: <docs:storage>
:end-before: </docs:storage>
3. Hard limits
--------------
Global hard limits can be configured for the whole Backup Repository instance.
They also take effect for administrators.
.. literalinclude:: ../../../server/.env.dist
:start-after: <docs:backups>
:end-before: </docs:backups>
4. Security
-----------
**JWT - JSON Web Tokens** are used to grant access to the system for multiple users, defining the level of access for various resources.
Server-side keys are used to generate the JWTs. The keys need to be generated before launching the application for the first time, and **must be kept SECRET!**
The passphrase should be long and unique, so that nobody can guess it. Use a password generator to create a strong password. Avoid using "$", blank spaces, and quote characters.
.. literalinclude:: ../../../server/.env.dist
:start-after: lexik/jwt-authentication-bundle
:end-before: < lexik/jwt-authentication-bundle
**Generating JWT keys**
Please replace $JWT_PASSPHRASE with your actual passphrase.
.. code:: bash
openssl genpkey -out config/jwt/private.pem -aes256 -pass pass:$JWT_PASSPHRASE -algorithm rsa -pkeyopt rsa_keygen_bits:4096
openssl pkey -in config/jwt/private.pem -out config/jwt/public.pem -pubout -passin pass:$JWT_PASSPHRASE
| 44.784314 | 299 | 0.72373 |
1add231a137c8c6e5ec395b4a986525633a0ebd1 | 1,393 | rst | reStructuredText | doc/source/plugins/jmx.rst | alteryx/cosmic | 1507ef348a1ccc248334e976522ca3091315ee65 | [
"Apache-2.0"
] | 3 | 2015-04-23T12:19:14.000Z | 2017-08-24T06:25:39.000Z | doc/source/plugins/jmx.rst | ning/cosmic | 806188d4970c7e71c07693e773047df128a0f2a4 | [
"Apache-2.0"
] | null | null | null | doc/source/plugins/jmx.rst | ning/cosmic | 806188d4970c7e71c07693e773047df128a0f2a4 | [
"Apache-2.0"
] | 1 | 2022-02-19T10:58:21.000Z | 2022-02-19T10:58:21.000Z | .. _`JMX resources`: http://docs.oracle.com/javase/tutorial/jmx/index.html
.. _`jmx4r gem`: https://github.com/jmesnil/jmx4r
JMX
===
The JMX plugin allows Cosmic scripts to interact with `JMX resources`_ exposed by services running on the JVM.
This plugin requires JRuby and the `jmx4r gem`_::
gem install jmx4r
The only configuration for the plugin is for authentication in cases where the JMX resources require it::
jmx:
<authentication configuration as explained above>
The plugin supports reading and setting attributes as well as invoking operations. For instance::
require 'cosmic/galaxy'
require 'cosmic/jmx'
services = with galaxy do
select :type => /^echo$?/
end
with jmx do
mbeans = services.collect {|service| get_mbean :host => service.host, :port => 12345, :name => 'some.company:name=MyMBean'}
mbeans.each do |mbean|
old_value = get_attribute :mbean => mbean, :attribute => 'SomeAttribute'
set_attribute :mbean => mbean, :attribute => 'SomeAttribute', :value => old_value + 1
invoke :mbean => mbean, :operation => 'DoSomething', :args => [ 'test' ]
end
end
This collects ``some.company:name=MyMBean`` mbeans from all ``echo`` servers on galaxy, then increments the ``SomeAttribute`` attribute and finally invokes the ``DoSomething`` operation with a single string argument.
| 37.648649 | 216 | 0.701364 |
30b3720e7301e9749e041d90121359010b94e327 | 5,183 | rst | reStructuredText | README.rst | petercb/aggravator | dfdc4c2d76ec160b2a6026d5d44cfc602b5c6ad7 | [
"MIT"
] | null | null | null | README.rst | petercb/aggravator | dfdc4c2d76ec160b2a6026d5d44cfc602b5c6ad7 | [
"MIT"
] | null | null | null | README.rst | petercb/aggravator | dfdc4c2d76ec160b2a6026d5d44cfc602b5c6ad7 | [
"MIT"
] | null | null | null | ==========
Aggravator
==========
.. image:: https://travis-ci.org/petercb/aggravator.svg?branch=master
:target: https://travis-ci.org/petercb/aggravator
.. image:: https://coveralls.io/repos/github/petercb/aggravator/badge.svg?branch=master
:target: https://coveralls.io/github/petercb/aggravator?branch=master
Dynamic inventory script for Ansible that aggregates information from other sources
Installing
----------
.. code:: sh
virtualenv aggravator
source aggravator/bin/activate
pip install aggravator
Executing
---------
.. code:: sh
ansible-playbook -i aggravator/bin/inventory site.yml
How does it work
----------------
It will aggregate other config sources (YAML or JSON format) into a single
config stream.
The sources can be files or URLs (pointing either to files or to web services
that produce YAML or JSON), and the key path to merge each source under can be specified.
Why does it exist
-----------------
We wanted to maintain our Ansible inventory in Git as YAML files, and not in
the INI-like format that Ansible generally supports for flat-file inventory.
Additionally, we had some legacy config management systems that contained
information about our systems which we wanted exported to Ansible, so we
didn't have to maintain it in multiple places.
So a script that could take YAML files and render them in a JSON format that
Ansible would ingest was needed, as was one that could aggregate many files
and streams.
Config format
-------------
Example (etc/config.yaml):
.. code:: yaml
---
environments:
test:
include:
- path: inventory/test.yaml
- path: vars/global.yaml
key: all/vars
- path: secrets/test.yaml
key: all/vars
By default the inventory script will look for the root config file as follows:
- `../etc/config.yaml` (relative to the `inventory` file)
- `/etc/aggravator/config.yaml`
- `/usr/local/etc/aggravator/config.yaml`
If it can't find it in one of those locations, you will need to use the `--uri`
option to specify it (or set the `INVENTORY_URI` env var)
It will parse it for a list of environments (test, prod, qa, etc) and for a
list of includes. The `include` section should be a list of dictionaries with
the following keys:
path
The path to the data to be ingested, this can be one of:
- absolute file path
- relative file path (relative to the root config.yaml)
- url to a file or service that emits a supported format
key
The key to merge the data under; if none is specified, the data is merged
into the root of the data structure.
format
The data type of the stream to ingest (i.e. `yaml` or `json`); if not
specified, the script will attempt to guess it from the file extension.
*Order* is important as items lower in the list will take precedence over ones
specified earlier in the list.
Merging
-------
Dictionaries will be merged, and lists will be replaced. So if properties with
the same name at the same level in two source streams are both dictionaries,
their contents will be merged. If they are lists, the later one will replace
the earlier.
If the data types of two properties at the same level differ, the later one
will overwrite the earlier.
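The merge rules described above can be sketched as a small recursive function (an illustration of the semantics, not Aggravator's actual implementation):

```python
def merge(earlier, later):
    """Merge two config trees: dicts merge recursively, everything else is replaced."""
    if isinstance(earlier, dict) and isinstance(later, dict):
        result = dict(earlier)
        for key, value in later.items():
            # Keys present in both sides recurse; new keys are simply added.
            result[key] = merge(earlier[key], value) if key in earlier else value
        return result
    # Lists, scalars, and mismatched types: the later value wins.
    return later
```

Because later includes win, a secrets file listed last can override individual values from an earlier vars file while still deep-merging into the same key path.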
Environment Variables
---------------------
Setting the following environment variables can influence how the script
executes when it is called by Ansible.
`INVENTORY_ENV`
Specify the environment name to merge inventory for as defined under the
'environments' section in the root config.
The environment name can also be guessed from the executable name, so if you
create a symlink from `prod` to the `inventory` bin, it will assume the env
you want to execute for is called `prod`, unless you override that.
`INVENTORY_FORMAT`
Format to output in; defaults to YAML in versions >0.4. Previously the
output was JSON only.
`INVENTORY_URI`
Location to the root config, if not in one of the standard locations
`VAULT_PASSWORD_FILE`
Location of the vault password file if not in the default location of
`~/.vault_pass.txt`, can be set to `/dev/null` to disable decryption of
secrets.
Usage
-----
`inventory [OPTIONS]`
Ansible file based dynamic inventory script
Options:
--env TEXT specify the platform name to pull inventory for
--uri TEXT specify the URI to query for inventory config
file, supports file:// and http(s):// [default:
/home/peterb-l/git/petercb/aggravator/venv/etc/config.yaml]
--output-format [yaml|json] specify the output format [default: yaml]
--vault-password-file PATH vault password file, if set to /dev/null secret
decryption will be disabled [default: ~/.vault_pass.txt]
--list Print inventory information as a JSON object
--host TEXT Retrieve host variables (not implemented)
--createlinks DIRECTORY Create symlinks in DIRECTORY to the script for
each platform name retrieved
--show Output a list of upstream environments (or groups if environment was set)
--help Show this message and exit.
| 32.192547 | 101 | 0.707891 |
de4a3ce7c7e28135036d527888be920fdd1479e4 | 505 | rst | reStructuredText | docs/source/index.rst | pommevilla/rtd_practice | 362aeef4ee6760a378f041c5d76fe9973f2d0219 | [
"MIT"
] | null | null | null | docs/source/index.rst | pommevilla/rtd_practice | 362aeef4ee6760a378f041c5d76fe9973f2d0219 | [
"MIT"
] | null | null | null | docs/source/index.rst | pommevilla/rtd_practice | 362aeef4ee6760a378f041c5d76fe9973f2d0219 | [
"MIT"
] | null | null | null | Simple Documentation Tutorial: DocTut
=====================================
Another simple header
=====================
Here is some text explaining some very complicated stuff::
print "Hello"
>> Hello
System Reqs
===========
Testing lists:
1. The first item
2. The second item
How about this function?
**Term**
Did this work?
Guide
*****
.. toctree::
:maxdepth: 3
LICENSE
help
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 12.317073 | 59 | 0.550495 |
0792a8564f8eee7043a309ebe89aaaaa11e8d540 | 46 | rst | reStructuredText | docs/api/io/v3/was/plugins.rst | alpesh-te/pyTenable | 4b5381a7757561f7ac1e79c2e2679356dd533540 | [
"MIT"
] | null | null | null | docs/api/io/v3/was/plugins.rst | alpesh-te/pyTenable | 4b5381a7757561f7ac1e79c2e2679356dd533540 | [
"MIT"
] | 25 | 2021-11-16T18:41:36.000Z | 2022-03-25T05:43:31.000Z | docs/api/io/v3/was/plugins.rst | alpesh-te/pyTenable | 4b5381a7757561f7ac1e79c2e2679356dd533540 | [
"MIT"
] | 2 | 2022-03-02T12:24:40.000Z | 2022-03-29T05:12:04.000Z | .. automodule:: tenable.io.v3.was.plugins.api
| 23 | 45 | 0.73913 |
78365ca91674ab10ac486c32dce93cacafa6f6ce | 8,644 | rst | reStructuredText | docs/source/codemods_tutorial.rst | zhammer/LibCST | e0dd6016a54dc2bda8d6df49e10396637b943f06 | [
"Apache-2.0"
] | 880 | 2019-08-07T21:21:11.000Z | 2022-03-29T06:25:34.000Z | docs/source/codemods_tutorial.rst | zhammer/LibCST | e0dd6016a54dc2bda8d6df49e10396637b943f06 | [
"Apache-2.0"
] | 537 | 2019-08-08T18:34:30.000Z | 2022-03-30T16:46:14.000Z | docs/source/codemods_tutorial.rst | zhammer/LibCST | e0dd6016a54dc2bda8d6df49e10396637b943f06 | [
"Apache-2.0"
] | 108 | 2019-08-08T00:17:21.000Z | 2022-03-24T20:53:31.000Z | =====================
Working With Codemods
=====================
Codemods are an abstraction on top of LibCST for performing large-scale changes
to an entire codebase. See :doc:`Codemods <codemods>` for the complete
documentation.
-------------------------------
Setting up and Running Codemods
-------------------------------
Let's say you were interested in converting legacy ``.format()`` calls to shiny new
Python 3.6 f-strings. LibCST ships with a command-line interface known as
``libcst.tool``. This includes a few provisions for working with codemods at the
command-line. It also includes a library of pre-defined codemods, one of which is
a transform that can convert most ``.format()`` calls to f-strings. So, let's use this
to give Python 3.6 f-strings a try.
You might be lucky enough that the defaults for LibCST perfectly match your coding
style, but chances are you want to customize LibCST to your repository. Initialize
your repository by running the following command in the root of your repository and
then edit the produced ``.libcst.codemod.yaml`` file::
python3 -m libcst.tool initialize .
The file includes provisions for customizing any generated code marker, calling an
external code formatter such as `black <https://pypi.org/project/black/>`_, blacklisting
patterns of files you never wish to touch, and listing modules that contain valid
codemods that can be executed.
repository or organization, you can add an in-repo module location to the list of
modules and LibCST will discover codemods in all locations.
Now that your repository is initialized, let's have a quick look at what's currently
available for running. Run the following command from the root of your repository::
python3 -m libcst.tool list
You'll see several codemods available to you, one of which is
``convert_format_to_fstring.ConvertFormatStringCommand``. The description to the right
of this codemod indicates that it converts ``.format()`` calls to f-strings, so let's
give it a whirl! Execute the codemod from the root of your repository like so::
python3 -m libcst.tool codemod convert_format_to_fstring.ConvertFormatStringCommand .
If you want to try it out on only one file or a specific subdirectory, you can replace
the ``.`` in the above command with a relative directory, file, list of directories or
list of files. While LibCST is walking through your repository and codemodding files
you will see a progress indicator. If there's anything the codemod can't do or any
unexpected syntax errors, you will also see them on your console as it progresses.
If everything works out, you'll notice that your ``.format()`` calls have been
converted to f-strings!
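For illustration, here is the kind of rewrite the codemod performs (a hypothetical snippet, hand-converted to show the before and after forms):

```python
name = "agate"

# Before the codemod: a legacy .format() call.
greeting_before = "Hello, {}!".format(name)

# After the codemod: the equivalent Python 3.6 f-string.
greeting_after = f"Hello, {name}!"

# Both forms produce the same string.
assert greeting_before == greeting_after == "Hello, agate!"
```

As noted above, any calls the codemod cannot safely convert are reported on the console instead.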
-----------------
Writing a Codemod
-----------------
Codemods use the same principles as the rest of LibCST. They take LibCST's core,
metadata and matchers and package them up as a simple command-line interface. So,
anything you can do with LibCST in isolation you can also do with a codemod.
Let's say you need to clean up some legacy code which used magic values instead
of constants. You've already got a constants module called ``utils.constants``
and you want to assume that every reference to a raw string matching a particular
constant should be converted to that constant. For the simplest version of this
codemod, you'll need a command-line tool that takes as arguments the string to
replace and the constant to replace it with. You'll also need to ensure that
modified modules import the constant itself.
So, you can write something similar to the following::
import argparse
from ast import literal_eval
from typing import Union
import libcst as cst
from libcst.codemod import CodemodContext, VisitorBasedCodemodCommand
from libcst.codemod.visitors import AddImportsVisitor
class ConvertConstantCommand(VisitorBasedCodemodCommand):
# Add a description so that future codemodders can see what this does.
DESCRIPTION: str = "Converts raw strings to constant accesses."
@staticmethod
def add_args(arg_parser: argparse.ArgumentParser) -> None:
# Add command-line args that a user can specify for running this
# codemod.
arg_parser.add_argument(
"--string",
dest="string",
metavar="STRING",
help="String contents that we should look for.",
type=str,
required=True,
)
arg_parser.add_argument(
"--constant",
dest="constant",
metavar="CONSTANT",
help="Constant identifier we should replace strings with.",
type=str,
required=True,
)
def __init__(self, context: CodemodContext, string: str, constant: str) -> None:
# Initialize the base class with context, and save our args. Remember, the
# "dest" for each argument we added above must match a parameter name in
# this init.
super().__init__(context)
self.string = string
self.constant = constant
def leave_SimpleString(
self, original_node: cst.SimpleString, updated_node: cst.SimpleString
) -> Union[cst.SimpleString, cst.Name]:
if literal_eval(updated_node.value) == self.string:
# Check to see if the string matches what we want to replace. If so,
# then we do the replacement. We also know at this point that we need
# to import the constant itself.
AddImportsVisitor.add_needed_import(
self.context, "utils.constants", self.constant,
)
return cst.Name(self.constant)
# This isn't a string we're concerned with, so leave it unchanged.
return updated_node
This codemod is pretty simple. It defines a command-line description, sets up to parse
a few required command-line args, initializes its own member variables with the
command-line args that were parsed for it by ``libcst.tool codemod`` and finally
replaces any string which matches our string command-line argument with a constant.
It also takes care of adding the import required for the constant to be defined properly.
Cool! Let's look at the command-line help for this codemod. Let's assume you saved it
as ``constant_folding.py`` inside ``libcst.codemod.commands``. You can get help for the
codemod by running the following command::
python3 -m libcst.tool codemod constant_folding.ConvertConstantCommand --help
Notice that along with the default arguments, the ``--string`` and ``--constant``
arguments are present in the help, and the command-line description has been updated
with the codemod's description string. You'll notice that the codemod also shows up
on ``libcst.tool list``.
----------------
Testing Codemods
----------------
Instead of iterating on a codemod by running it repeatedly on a codebase and seeing
what happens, we can write a series of unit tests that assert on desired
transformations. Given the above constant folding codemod that we wrote, we can test
it with some code similar to the following::
from libcst.codemod import CodemodTest
from libcst.codemod.commands.constant_folding import ConvertConstantCommand
class TestConvertConstantCommand(CodemodTest):
# The codemod that will be instantiated for us in assertCodemod.
TRANSFORM = ConvertConstantCommand
def test_noop(self) -> None:
before = """
foo = "bar"
"""
after = """
foo = "bar"
"""
# Verify that if we don't have a valid string match, we don't make
# any substitutions.
self.assertCodemod(before, after, string="baz", constant="BAZ")
def test_substitution(self) -> None:
before = """
foo = "bar"
"""
after = """
from utils.constants import BAR
foo = BAR
"""
# Verify that if we do have a valid string match, we make a substitution
# as well as import the constant.
self.assertCodemod(before, after, string="bar", constant="BAR")
If we save this as ``test_constant_folding.py`` inside ``libcst.codemod.commands.tests``
then we can execute the tests with the following line::
python3 -m unittest libcst.codemod.commands.tests.test_constant_folding
That's all there is to it!
| 43.656566 | 89 | 0.687645 |
e895cc09d58e14cc84edf0e2b8cef2f97338f838 | 1,642 | rst | reStructuredText | grasp_tutorials/doc/fixed_position_pick.rst | RoboticsYY/ros2_grasp_library | bd556eeacbdc12bf94df027767c00ed0332e21c5 | [
"Apache-2.0"
] | 126 | 2019-03-13T18:35:47.000Z | 2022-03-30T14:41:24.000Z | grasp_tutorials/doc/fixed_position_pick.rst | RoboticsYY/ros2_grasp_library | bd556eeacbdc12bf94df027767c00ed0332e21c5 | [
"Apache-2.0"
] | 33 | 2019-09-12T03:23:49.000Z | 2021-07-07T02:10:23.000Z | grasp_tutorials/doc/fixed_position_pick.rst | RoboticsYY/ros2_grasp_library | bd556eeacbdc12bf94df027767c00ed0332e21c5 | [
"Apache-2.0"
] | 40 | 2019-01-06T08:11:56.000Z | 2022-03-21T19:13:14.000Z | Fixed Position Pick
====================
Overview
--------------
This demo shows how to use the robot interface to pick and place an
object at a predefined location with a UR5 robot arm.
Requirement
------------
Before running the code, make sure you have
followed the instructions below to set up the robot correctly.
- Hardware
- Host running ROS2
- `UR5`_
- `Robot Gripper`_
- Software
- `ROS2 Dashing`_ Desktop
- `robot_interface`_
.. _UR5: https://www.universal-robots.com/products/ur5-robot
.. _ROS2 Dashing: https://index.ros.org/doc/ros2/Installation/Dashing/Linux-Install-Debians/
.. _robot_interface: https://github.com/intel/ros2_grasp_library/tree/master/grasp_utils/robot_interface
.. _Robot Gripper: https://www.universal-robots.com/plus/end-effectors/hitbot-electric-gripper
Download and Build the Example Code
------------------------------------
Within your ROS2 workspace, download and compile the example code:
::
cd <path_of_your_ros2_workspace>/src
git clone https://github.com/intel/ros2_grasp_library.git
cd ..
colcon build --base-paths src/ros2_grasp_library/grasp_apps/fixed_position_pick
Launch the Application
----------------------
- Launch the application
::
ros2 launch fixed_position_pick fixed_position_pick.launch.py
.. note:: Please keep the emergency stop button on the teach pendant close at hand,
in case there is any accident.
- Expected Outputs:
1. The robot moves to the home pose
2. The robot picks up an object from the predefined location
3. The robot places the object to another location
4. The robot moves back to the home pose
| 23.126761 | 104 | 0.713764 |
6f0c8de0d58aefcbbad4f17207d601bcbb5cbc4d | 173 | rst | reStructuredText | rhel-stig/doc/metadata/rhel7/V-71863.rst | ztisolutions/ansible-hardening | 7270ed5b4ca453b37202ebd210fd0fc7d49d9375 | [
"Apache-2.0"
] | null | null | null | rhel-stig/doc/metadata/rhel7/V-71863.rst | ztisolutions/ansible-hardening | 7270ed5b4ca453b37202ebd210fd0fc7d49d9375 | [
"Apache-2.0"
] | null | null | null | rhel-stig/doc/metadata/rhel7/V-71863.rst | ztisolutions/ansible-hardening | 7270ed5b4ca453b37202ebd210fd0fc7d49d9375 | [
"Apache-2.0"
] | 1 | 2017-11-21T20:05:08.000Z | 2017-11-21T20:05:08.000Z | ---
id: V-71863
status: implemented
tag: misc
---
The security role already deploys a login banner for console logins with tasks
from another STIG:
* :ref:`stig-V-V-7225`
| 15.727273 | 78 | 0.728324 |
fc74dc6111c787a7e10d0077cef89f64bc0cf097 | 86 | rst | reStructuredText | newsfragments/935.misc.rst | renaynay/trinity | b85f37281b21c00dce91b7c61ba018788467c270 | [
"MIT"
] | 3 | 2019-06-17T13:59:20.000Z | 2021-05-02T22:09:13.000Z | newsfragments/935.misc.rst | renaynay/trinity | b85f37281b21c00dce91b7c61ba018788467c270 | [
"MIT"
] | null | null | null | newsfragments/935.misc.rst | renaynay/trinity | b85f37281b21c00dce91b7c61ba018788467c270 | [
"MIT"
] | 2 | 2019-12-14T02:52:32.000Z | 2021-02-18T23:04:44.000Z | ``LESHandshakeParams`` no longer takes a ``version`` parameter to ``as_payload_dict``
========
Tutorial
========
About this tutorial
===================
The best way to learn to use any tool is to actually use it. In this tutorial we will answer some basic questions about a dataset using agate.
The data we will be using is a copy of the `National Registry of Exonerations <http://www.law.umich.edu/special/exoneration/Pages/detaillist.aspx>`_ made on August 28th, 2015. This dataset lists individuals who are known to have been exonerated after having been wrongly convicted. At the time the data was exported there were 1,651 entries in the registry.
Installing agate
================
Installing agate is easy::
pip install agate
.. note::
You should be installing agate inside a `virtualenv <http://virtualenv.readthedocs.org/en/latest/>`_. If for some crazy reason you aren't using virtualenv you will need to add a ``sudo`` to the previous command.
Getting the data
================
Let's start by creating a clean workspace::
mkdir agate_tutorial
cd agate_tutorial
Now let's download the data::
curl -L -O https://github.com/onyxfish/agate/raw/master/examples/realdata/exonerations-20150828.csv
You will now have a file named ``exonerations-20150828.csv`` in your ``agate_tutorial`` directory.
Getting setup
=============
First launch the Python interpreter::
python
Now let's import our dependencies:
.. code-block:: python
import csv
import agate
.. note::
You should really be using `csvkit <http://csvkit.readthedocs.org/>`_ to load CSV files, but here we stick with the builtin `csv` module because it comes with Python so everyone already has it.
I also strongly suggest taking a look at `proof <http://proof.readthedocs.org/en/latest/>`_ for building data processing pipelines, but we won't use it in this tutorial to keep things simple.
Defining the columns
====================
agate requires us to give it some information about each column in our dataset. No effort is made to determine these types automatically, however, :class:`.TextType` is always a safe choice if you aren't sure what kind of data is in a column.
First we create instances of the column types we will be using:
.. code-block:: python
text_type = agate.TextType()
number_type = agate.NumberType()
boolean_type = agate.BooleanType()
Then we define the names and types of the columns that are in our dataset:
.. code-block:: python
COLUMNS = (
('last_name', text_type),
('first_name', text_type),
('age', number_type),
('race', text_type),
('state', text_type),
('tags', text_type),
('crime', text_type),
('sentence', text_type),
('convicted', number_type),
('exonerated', number_type),
('dna', boolean_type),
('dna_essential', text_type),
('mistaken_witness', boolean_type),
('false_confession', boolean_type),
('perjury', boolean_type),
('false_evidence', boolean_type),
('official_misconduct', boolean_type),
('inadequate_defense', boolean_type),
)
You'll notice here that we define the names and types as pairs (tuples), which is what the :class:`.Table` constructor will expect in the next step.
.. note::
The column names defined here do not necessarily need to match those found in your CSV file. I've kept them consistent in this example for clarity.
Loading data from a CSV
=======================
The :class:`.Table` is the basic class in agate. A time-saving method is included to load table data from CSV:
.. code-block:: python
exonerations = agate.Table.from_csv('exonerations-20150828.csv', COLUMNS)
.. note::
If you have data that you've generated in another way you can always pass it in the :class:`.Table` constructor directly.
Aggregating column data
=======================
Analysis begins with questions, so that's how we'll learn about agate.
Question: **How many exonerations involved a false confession?**
Answering this question involves counting the number of "True" values in the ``false_confession`` column. When we created the table we specified that the data in this column was :class:`.BooleanType`. Because of this, agate has taken care of coercing the original text data from the CSV into Python's ``True`` and ``False`` values.
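As a purely illustrative sketch (agate's real cast recognizes more spellings and handles errors differently), the coercion amounts to mapping a handful of recognized strings onto booleans, with empty cells becoming null:

```python
# Illustrative only: roughly what a boolean cast does while loading
# the CSV. This is NOT agate's actual implementation.
TRUE_VALUES = ('yes', 'y', 'true', 't')
FALSE_VALUES = ('no', 'n', 'false', 'f')

def cast_boolean(value):
    """Coerce a raw CSV string to True, False or None."""
    if value is None or value.strip() == '':
        return None  # empty cells become null
    lowered = value.strip().lower()
    if lowered in TRUE_VALUES:
        return True
    if lowered in FALSE_VALUES:
        return False
    raise ValueError('Can not convert value to boolean: %s' % value)

print(cast_boolean('TRUE'))  # True
print(cast_boolean(''))      # None
```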
We'll answer the question using :class:`.Count` which is a type of :class:`.Aggregation`. Aggregations in agate are used to perform "column-wise" calculations. That is, they derive a new single value from the contents of a column. In the case of :class:`.Count`, it will tell us how many times a particular value appears in the column.
An :class:`.Aggregation` is applied to a column of a table. You can access the columns of a table using the :attr:`.Table.columns` attribute.
Putting it together looks like this:
.. code-block:: python
num_false_confessions = exonerations.columns['false_confession'].aggregate(agate.Count(True))
print(num_false_confessions)
::
211
Let's look at another example, this time using a numerical aggregation.
Question: **What was the median age of exonerated individuals at time of arrest?**
.. code-block:: python
median_age = exonerations.columns['age'].aggregate(agate.Median())
print(median_age)
Answer:
::
agate.exceptions.NullComputationError
Apparently, not every exonerated individual in the data has a value for the ``age`` column. The :class:`.Median` statistical operation has no standard way of accounting for null values, so it caused an error.
Question: **How many individuals do not have an age specified in the data?**
.. code-block:: python
num_without_age = exonerations.columns['age'].aggregate(agate.Count(None))
print(num_without_age)
Answer:
::
9
Only nine rows in this dataset lack an age, so it's still useful to compute a median, but to do so we'll need to filter out those null values first.
Each column in :attr:`.Table.columns` is a subclass of :class:`.Column`, such as :class:`.NumberColumn` or :class:`.TextColumn`. As we've seen with :class:`.Median`, different aggregations can be applied depending on the column type and, in this case, its contents.
If none of the provided aggregations suit your needs you can also easily create your own by subclassing :class:`.Aggregation`. See the API documentation for :mod:`.aggregations` to see all of the implemented types.
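If it helps to see the idea stripped of agate's machinery: an aggregation is simply a function from a whole column of values to a single value. Here is a plain-Python sketch of that idea (conceptual only, not agate's actual code):

```python
# Conceptual sketch: an aggregation reduces a column (a sequence of
# values) to one value. agate's Aggregation classes wrap this idea
# with type checking and null handling.
ages = [26, 31, None, 19]

def count_value(column, value):
    """Count how many cells in the column equal `value`."""
    return sum(1 for v in column if v == value)

def minimum(column):
    """Smallest non-null value in the column."""
    return min(v for v in column if v is not None)

print(count_value(ages, None))  # 1
print(minimum(ages))            # 19
```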
Selecting and filtering data
============================
So how can we answer our question about median age? First, we need to filter the data to only those rows that don't contain nulls.
Agate's :class:`.Table` class provides a full suite of these "SQL-like" operations, including :meth:`.Table.select` for grabbing specific columns, :meth:`.Table.where` for selecting particular rows and :meth:`.Table.group_by` for grouping rows by common values.
Let's filter our exonerations table to only those individuals that have an age specified.
.. code-block:: python
with_age = exonerations.where(lambda row: row['age'] is not None)
You'll notice we provide a :keyword:`lambda` (anonymous) function to the :meth:`.Table.where`. This function is applied to each row and if it returns ``True``, the row is included in the output table.
A crucial thing to understand about these methods is that they return **new tables**. In our example above ``exonerations`` was a :class:`.Table` instance and we applied :meth:`.Table.where`, so ``with_age`` is a :class:`Table` too. The tables themselves are immutable. You can create new tables, but you can never modify them.
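The same behaviour can be seen with plain Python tuples, which are also immutable -- a filtered copy is a brand new object and the original is untouched:

```python
# Analogy with immutable tuples: "filtering" builds a new object
# rather than changing the original.
original = (1, 2, None, 4)
filtered = tuple(v for v in original if v is not None)

print(original)  # (1, 2, None, 4) -- unchanged
print(filtered)  # (1, 2, 4)
```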
We can verify this did what we expected by counting the rows in the original table and rows in the new table:
.. code-block:: python
old = len(exonerations.rows)
new = len(with_age.rows)
print(old - new)
::
9
Nine rows were removed, which is how many we knew had nulls for the age column.
So, what **is** the median age of these individuals?
.. code-block:: python
median_age = with_age.columns['age'].aggregate(agate.Median())
print(median_age)
::
26
Computing new columns
=====================
In addition to "column-wise" calculations there are also "row-wise" calculations. These calculations go through a :class:`.Table` row-by-row and derive a new column using the existing data. To perform row calculations in agate we use subclasses of :class:`.Computation`.
When one or more instances of :class:`.Computation` are applied to a :class:`.Table`, a new table is created with additional columns.
Question: **How long did individuals remain in prison before being exonerated?**
To answer this question we will apply the :class:`.Change` computation to the ``convicted`` and ``exonerated`` columns. All that :class:`.Change` does is compute the difference between two numbers. (In this case each of these columns contains an integer year, but agate does have features for working with dates too.)
.. code-block:: python
with_years_in_prison = exonerations.compute([
('years_in_prison', agate.Change('convicted', 'exonerated'))
])
median_years = with_years_in_prison.columns['years_in_prison'].aggregate(agate.Median())
print(median_years)
::
8
The median number of years an exonerated individual spent in prison was 8 years.
Sometimes, the built-in computations, such as :class:`.Change` won't suffice. In this case, you can use the generic :class:`.Formula` to compute a column based on an arbitrary function. This is somewhat analogous to Excel's cell formulas.
For instance, this example will create a ``full_name`` column from the ``first_name`` and ``last_name`` columns in the data:
.. code-block:: python
full_names = exonerations.compute([
('full_name', agate.Formula(text_type, lambda row: '%(first_name)s %(last_name)s' % row))
])
For efficiency's sake, agate allows you to perform several computations at once.
.. code-block:: python
with_computations = exonerations.compute([
('full_name', agate.Formula(text_type, lambda row: '%(first_name)s %(last_name)s' % row)),
('years_in_prison', agate.Change('convicted', 'exonerated'))
])
If :class:`.Formula` still is not flexible enough (for instance, if you need to compute a new row based on the distribution of data in a column) you can always implement your own subclass of :class:`.Computation`. See the API documentation for :mod:`.computations` to see all of the supported ways to compute new data.
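To see the row-wise idea without any agate machinery at all, here is a conceptual sketch using plain dictionaries (agate's internals differ, but the shape of the operation is the same):

```python
# Conceptual sketch of a row-wise computation: build new rows, each
# carrying an extra derived column. The input rows are left untouched.
rows = [
    {'name': 'Jim', 'convicted': 1990, 'exonerated': 1998},
    {'name': 'Ana', 'convicted': 2001, 'exonerated': 2005},
]

def compute(rows, new_column, fn):
    """Return new rows with `new_column` set to fn(row) for each row."""
    return [dict(row, **{new_column: fn(row)}) for row in rows]

with_years = compute(rows, 'years_in_prison',
                     lambda r: r['exonerated'] - r['convicted'])

print([r['years_in_prison'] for r in with_years])  # [8, 4]
```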
Sorting and slicing
===================
Question: **Who are the ten exonerated individuals who were youngest at the time they were arrested?**
Remembering that methods of tables return tables, we will use :meth:`.Table.order_by` to sort our table:
.. code-block:: python
sorted_by_age = exonerations.order_by('age')
We can then use :meth:`.Table.limit` to get only the first ten rows of the data.
.. code-block:: python
youngest_ten = sorted_by_age.limit(10)
Now let's use :meth:`.Table.format` to help us pretty-print the results in a way we can easily review:
.. code-block:: python
print(youngest_ten.format(max_columns=7))
::
|------------+------------+-----+-----------+-------+---------+---------+------|
| last_name | first_name | age | race | state | tags | crime | ... |
|------------+------------+-----+-----------+-------+---------+---------+------|
| Murray | Lacresha | 11 | Black | TX | CV, F | Murder | ... |
| Adams | Johnathan | 12 | Caucasian | GA | CV, P | Murder | ... |
| Harris | Anthony | 12 | Black | OH | CV | Murder | ... |
| Edmonds | Tyler | 13 | Caucasian | MS | | Murder | ... |
| Handley | Zachary | 13 | Caucasian | PA | A, CV | Arson | ... |
| Jimenez | Thaddeus | 13 | Hispanic | IL | | Murder | ... |
| Pacek | Jerry | 13 | Caucasian | PA | | Murder | ... |
| Barr | Jonathan | 14 | Black | IL | CDC, CV | Murder | ... |
| Brim | Dominique | 14 | Black | MI | F | Assault | ... |
| Brown | Timothy | 14 | Black | FL | | Murder | ... |
|------------+------------+-----+-----------+-------+---------+---------+------|
If you find it impossible to believe that an eleven-year-old was convicted of murder, I encourage you to read the Registry's `description of the case <http://www.law.umich.edu/special/exoneration/Pages/casedetail.aspx?caseid=3499>`_.
.. note::
In the previous example we could have omitted the :meth:`.Table.limit` and passed a ``max_rows=10`` to :meth:`.Table.format` instead.
Grouping and aggregating
========================
Question: **Which state has seen the most exonerations?**
This question can't be answered by operating on a single column. What we need is the equivalent of SQL's ``GROUP BY``. agate supports a full set of SQL-like operations on tables. Unlike SQL, agate breaks grouping and aggregation into two discrete steps.
First, we use :meth:`.Table.group_by` to group the data by state.
.. code-block:: python
by_state = exonerations.group_by('state')
This takes our original :class:`.Table` and groups it into a :class:`.TableSet`, which contains one table per state. Now we need to aggregate the total for each state. This works in a very similar way to how it did when we were aggregating columns of a single table, except that we'll use the :class:`.Length` aggregation to count the total number of values in the column.
.. code-block:: python
state_totals = by_state.aggregate([
('state', agate.Length(), 'count')
])
sorted_totals = state_totals.order_by('count', reverse=True)
print(sorted_totals.format(max_rows=5))
::
|--------+--------|
| state | count |
|--------+--------|
| TX | 212 |
| NY | 202 |
| CA | 154 |
| IL | 153 |
| MI | 60 |
| ... | ... |
|--------+--------|
You'll notice we pass a list of tuples to :meth:`.TableSet.aggregate`. Each one includes three elements. The first is the column name to aggregate. The second is an instance of some :class:`.Aggregation`. The third is the new column name. Unsurprisingly, in this case the results appear roughly proportional to population.
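If the two-step flow -- group first, then aggregate each group -- is unfamiliar, the same logic can be sketched in plain Python (a conceptual sketch, not agate's implementation):

```python
from collections import defaultdict

# Conceptual sketch of group_by + aggregate: bucket rows by a key
# column, then reduce each bucket to a single value.
rows = [{'state': 'TX'}, {'state': 'NY'}, {'state': 'TX'}]

groups = defaultdict(list)
for row in rows:
    groups[row['state']].append(row)   # the group_by step

# the aggregate step: Length() is just len() of each bucket
counts = {state: len(bucket) for state, bucket in groups.items()}

print(counts)  # {'TX': 2, 'NY': 1}
```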
Question: **What state has the longest median time in prison prior to exoneration?**
This is a much more complicated question that's going to pull together a lot of the features we've been using. We'll repeat the computations we applied before, but this time we're going to roll those computations up in our group and take the :class:`.Median` of each group. Then we'll sort the data and see where people have been stuck in prison the longest.
.. code-block:: python
with_years_in_prison = exonerations.compute([
('years_in_prison', agate.Change('convicted', 'exonerated'))
])
state_totals = with_years_in_prison.group_by('state')
medians = state_totals.aggregate([
        ('years_in_prison', agate.Length(), 'count'),
('years_in_prison', agate.Median(), 'median_years_in_prison')
])
sorted_medians = medians.order_by('median_years_in_prison', reverse=True)
print(sorted_medians.format(max_rows=5))
::
|--------+-------+-------------------------|
| state | count | median_years_in_prison |
|--------+-------+-------------------------|
| DC | 15 | 27 |
| NE | 9 | 20 |
| ID | 2 | 19 |
| VT | 1 | 18 |
| LA | 45 | 16 |
| ... | ... | ... |
|--------+-------+-------------------------|
DC? Nebraska? What accounts for these states having the longest times in prison before exoneration? I have no idea. Given that the group sizes are small, it would probably be wise to look for outliers.
As with :meth:`.Table.aggregate` and :meth:`.Table.compute`, the :meth:`.TableSet.aggregate` method takes a list of aggregations to perform. You can aggregate as many columns as you like in a single step and they will all appear in the output table.
Multi-dimensional aggregation
=============================
Before we wrap up, let's try one more thing. I've already shown you that you can use :class:`.TableSet` to group instances of :class:`.Table`. However, you can also use a :class:`.TableSet` to group other instances of :class:`.TableSet`. To put that another way, instances of :class:`.TableSet` can be *nested*.
The key to nesting data in this way is to use :meth:`.TableSet.group_by`. Before we used :meth:`.Table.group_by` to split data up into a group of tables. Now we'll use :meth:`.TableSet.group_by` to further subdivide that data. Let's look at a concrete example.
Question: **Is there a collective relationship between race, age and time spent in prison prior to exoneration?**
I'm not going to explain every stage of this analysis as most of it repeats patterns used previously. The key part to look for is the two separate calls to ``group_by``:
.. code-block:: python
    # Filter out rows without age data
    only_with_age = with_years_in_prison.where(
        lambda r: r['age'] is not None
    )
# Group by race
race_groups = only_with_age.group_by('race')
# Sub-group by age cohorts (20s, 30s, etc.)
race_and_age_groups = race_groups.group_by(
lambda r: '%i0s' % (r['age'] // 10),
key_name='age_group'
)
# Aggregate medians for each group
medians = race_and_age_groups.aggregate([
('years_in_prison', agate.Length(), 'count'),
('years_in_prison', agate.Median(), 'median_years_in_prison')
])
# Sort the results
sorted_groups = medians.order_by('median_years_in_prison', reverse=True)
# Print out the results
print(sorted_groups.format(max_rows=10))
::
|------------------+-----------+-------+-------------------------|
| race | age_group | count | median_years_in_prison |
|------------------+-----------+-------+-------------------------|
| Native American | 20s | 2 | 21.5 |
| | 20s | 1 | 19 |
| Native American | 10s | 2 | 15 |
| Native American | 30s | 2 | 14.5 |
| Black | 10s | 188 | 14 |
| Black | 20s | 358 | 13 |
| Asian | 20s | 4 | 12 |
| Black | 30s | 156 | 10 |
| Caucasian | 10s | 76 | 8 |
| Caucasian | 20s | 255 | 8 |
| ... | ... | ... | ... |
|------------------+-----------+-------+-------------------------|
Well, what are you waiting for? It's your turn!
Where to go next
================
This tutorial only scratches the surface of agate's features. For many more ideas on how to apply agate, check out the :doc:`cookbook`, which includes dozens of examples showing how to substitute agate for common patterns used in Excel, SQL, R and more.
Also, if you're going to be doing data processing in Python you really ought to check out `proof <http://proof.readthedocs.org/en/latest/>`_, a library for building data processing pipelines that are repeatable and self-documenting. It will make your code cleaner and save you tons of time.
.. _sitelle-utils:
*********************************
Utils (`sitelle.utils`)
*********************************
.. currentmodule:: sitelle.utils
Introduction
============
Miscellaneous functions; some of them are deprecated.
Reference/API
=============
.. automodapi:: sitelle.utils
HELP! Metapredict isn't working!
=================================
Python Version Issues
----------------------
I have received occasional feedback that metapredict is not working for a user. A common problem is that the user is using a different version of Python than metapredict was made on. metapredict was made using Python version 3.7, but works on 3.8 as well. I recommend using one of these versions to avoid problems (I haven't done extensive testing using other versions of Python, so if you're not using 3.7 or 3.8, do so at your own risk). A convenient workaround is to use a conda environment that has Python 3.7 set as the default version of Python. For more info on conda, please see https://docs.conda.io/projects/conda/en/latest/index.html
Once you have conda installed, simply use the command
.. code-block:: bash
conda create --name my_env python=3.7
conda activate my_env
and once activate install metapredict from PyPI
.. code-block:: bash
pip install metapredict
You can then use metapredict from within this conda environment. In all our testing, this setup leads to a working version of metapredict. However, in principle metapredict should work automatically when installed from pip.
Reporting Issues
-----------------
If you are having other problems, please report them to the issues section on the metapredict Github page at
https://github.com/idptools/metapredict/issues
====================
Labels and Selectors
====================
Labels are key/value pairs that are attached to Kubernetes objects, such as pods (this is usually done indirectly via deployments). Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users.
Labels can be used to organize and to select subsets of objects. See `Labels and Selectors in the Kubernetes documentation`_ for more information.
The following Kubernetes labels have a defined meaning in our Zalando context:
application
Application ID as defined in our Kio application registry. Example: "zmon-controller"
version
User-defined application version. This is used as input for the CI/CD pipeline and usually references a Docker image tag.
Example: "cd53"
release
Incrementing release counter. This is generated by the CI/CD pipeline and is used for traffic switching. Example: "4"
stage
Deployment stage to allow canary deployments. Allowed values are "canary" and "production".
owner
Owner of the Kubernetes resource. This needs to reference a valid organizational entity in the context of the cluster's business partner.
Example: "team/eagleeye"
Some labels are required for every deployment resource:
* application
* version
* release
* stage
Example deployment metadata:
.. code-block:: yaml
metadata:
labels:
application: my-app
version: "v31"
release: "r42"
stage: production
Kubernetes services will usually select only on ``application`` and ``stage``:
.. code-block:: yaml
kind: Service
apiVersion: v1
metadata:
name: my-app
spec:
selector:
application: my-app
stage: production
ports:
- port: 80
targetPort: 8080
protocol: TCP
You can always define additional custom labels as long as they don't conflict with the above label catalog.
.. _Labels and Selectors in the Kubernetes documentation: http://kubernetes.io/docs/user-guide/labels/
.. Pytorch_MultiGPU documentation master file, created by
sphinx-quickstart on Fri Mar 5 20:38:33 2021.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Pytorch_MultiGPU's documentation!
============================================
.. toctree::
:maxdepth: 2
   :caption: Installation:
Installization/howtoinst.md
.. toctree::
:maxdepth: 2
:caption: Run:
Run/howtorun.md
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
===============
Getting started
===============
Connecting
==========
Before executing any neomodel code, set the connection url::
from neomodel import config
config.DATABASE_URL = 'bolt://neo4j:neo4j@localhost:7687' # default
This must be called early on in your app, if you are using Django the `settings.py` file is ideal.
If you are using your neo4j server for the first time you will need to change the default password.
This can be achieved by visiting the neo4j admin panel (default: ``http://localhost:7474`` ).
You can also change the connection url at any time by calling ``set_connection``::
from neomodel import db
db.set_connection('bolt://neo4j:neo4j@localhost:7687')
The new connection url will be applied to the current thread or process.
In general however, it is better to `avoid setting database access credentials in plain sight <https://
www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_04B-3_Meli_paper.pdf>`_. Neo4J defines a number of
`environment variables <https://neo4j.com/developer/kb/how-do-i-authenticate-with-cypher-shell-without-specifying-the-
username-and-password-on-the-command-line/>`_ that are used in its tools and these can be re-used for other applications
too.
These are:
* ``NEO4J_USERNAME``
* ``NEO4J_PASSWORD``
* ``NEO4J_BOLT_URL``
By setting these with (for example): ::
$ export NEO4J_USERNAME=neo4j
$ export NEO4J_PASSWORD=neo4j
$ export NEO4J_BOLT_URL="bolt://$NEO4J_USERNAME:$NEO4J_PASSWORD@localhost:7687"
They can be accessed from a Python script via the ``environ`` dict of module ``os`` and be used to set the connection
with something like: ::
import os
from neomodel import config
config.DATABASE_URL = os.environ["NEO4J_BOLT_URL"]
Defining Node Entities and Relationships
========================================
Below is a definition of two related nodes `Person` and `Country`: ::
from neomodel import (config, StructuredNode, StringProperty, IntegerProperty,
UniqueIdProperty, RelationshipTo)
config.DATABASE_URL = 'bolt://neo4j:password@localhost:7687'
class Country(StructuredNode):
code = StringProperty(unique_index=True, required=True)
class Person(StructuredNode):
uid = UniqueIdProperty()
name = StringProperty(unique_index=True)
age = IntegerProperty(index=True, default=0)
# traverse outgoing IS_FROM relations, inflate to Country objects
country = RelationshipTo(Country, 'IS_FROM')
Nodes are defined in the same way classes are defined in Python with the only difference that data members of those
classes that are intended to be stored to the database must be defined as ``neomodel`` property objects. For more
detailed information on property objects please see the section on :ref:`property_types`.
**If** you have a need to attach "ad-hoc" properties to nodes that have not been specified at its definition, then
consider deriving from the :ref:`semistructurednode_doc` class.
Relationships are defined via ``Relationship, RelationshipTo, RelationshipFrom`` objects. ``RelationshipTo,
RelationshipFrom`` can also specify the direction that a relationship would be allowed to be traversed. In this
particular example, ``Country`` objects would be accessible by ``Person`` objects but not the other way around.
When the relationship can be bi-directional, please avoid establishing two complementary ``RelationshipTo,
RelationshipFrom`` relationships and use ``Relationship``, on one of the class definitions instead. In all of these
cases, navigability matters more to the model as defined in Python. A relationship will be established in Neo4J but
in the case of ``Relationship`` it will be possible to be queried in either direction.
Neomodel automatically creates a label for each ``StructuredNode`` class in the database with the corresponding indexes
and constraints.
Applying constraints and indexes
================================
After creating a model in Python, any constraints or indexes must be applied to Neo4j, and ``neomodel`` provides a
script to automate this: ::
$ neomodel_install_labels yourapp.py someapp.models --db bolt://neo4j:neo4j@localhost:7687
It is important to execute this after altering the schema and observe the number of classes it reports.
Remove existing constraints and indexes
=======================================
Similarly, ``neomodel`` provides a script to automate the removal of all existing constraints and indexes from
the database, when this is required: ::
$ neomodel_remove_labels --db bolt://neo4j:neo4j@localhost:7687
After executing, it will print all indexes and constraints it has removed.
Create, Update, Delete operations
=================================
Using convenience methods such as::
jim = Person(name='Jim', age=3).save() # Create
jim.age = 4
jim.save() # Update, (with validation)
jim.delete()
jim.refresh() # reload properties from the database
jim.id # neo4j internal id
Retrieving nodes
================
Using the ``.nodes`` class property::
# Return all nodes
all_nodes = Person.nodes.all()
# Returns Person by Person.name=='Jim' or raises neomodel.DoesNotExist if no match
jim = Person.nodes.get(name='Jim')
``.nodes.all()`` and ``.nodes.get()`` can also accept a ``lazy=True`` parameter which will result in those functions
simply returning the node IDs rather than every attribute associated with that Node. ::
# Will return None unless "bob" exists
someone = Person.nodes.get_or_none(name='bob')
# Will return the first Person node with the name bob. This raises neomodel.DoesNotExist if there's no match.
someone = Person.nodes.first(name='bob')
# Will return the first Person node with the name bob or None if there's no match
someone = Person.nodes.first_or_none(name='bob')
# Return set of nodes
people = Person.nodes.filter(age__gt=3)
Relationships
=============
Working with relationships::
germany = Country(code='DE').save()
jim.country.connect(germany)
if jim.country.is_connected(germany):
print("Jim's from Germany")
for p in germany.inhabitant.all():
print(p.name) # Jim
len(germany.inhabitant) # 1
# Find people called 'Jim' in germany
germany.inhabitant.search(name='Jim')
# Find all the people called in germany except 'Jim'
germany.inhabitant.exclude(name='Jim')
# Remove Jim's country relationship with Germany
jim.country.disconnect(germany)
usa = Country(code='US').save()
jim.country.connect(usa)
jim.country.connect(germany)
# Remove all of Jim's country relationships
jim.country.disconnect_all()
jim.country.connect(usa)
# Replace Jim's country relationship with a new one
jim.country.replace(germany)
=========
=============
.. include:: badges.txt
Summary
-------
.. include:: summary.txt
.. include:: feature.txt
=========
Touchdown
=========
.. image:: https://img.shields.io/travis/yaybu/touchdown/master.svg
:target: https://travis-ci.org/#!/yaybu/touchdown
.. image:: https://img.shields.io/appveyor/ci/yaybu/touchdown/master.svg
:target: https://ci.appveyor.com/project/yaybu/touchdown
.. image:: https://img.shields.io/codecov/c/github/yaybu/touchdown/master.svg
:target: https://codecov.io/github/yaybu/touchdown?ref=master
.. image:: https://img.shields.io/pypi/v/touchdown.svg
:target: https://pypi.python.org/pypi/touchdown/
.. image:: https://img.shields.io/badge/docs-latest-green.svg
:target: http://docs.yaybu.com/projects/touchdown/en/latest/
Touchdown is a service orchestration framework for Python. It provides a Python
"DSL" for declaring complicated cloud infrastructures and provisioning those
blueprints in an idempotent way.
You can find us in #yaybu on irc.oftc.net.
Here is an example ``Touchdownfile``::
aws = workspace.add_aws(
region='eu-west-1',
)
vpc = aws.add_virtual_private_cloud(name='example')
vpc.add_internet_gateway(name="internet")
example = vpc.add_subnet(
name='application',
cidr_block='192.168.0.0/24',
)
asg = aws.add_autoscaling_group(
name='example',
launch_configuration=aws.add_launch_configuration(
name="example",
ami='ami-62366',
subnets=[example],
),
)
You can then apply this configuration with::
touchdown apply
Changelog
=========
1.0.0
-----
- Drop support for Python 2
0.5.0
-----
- Updated dependencies and Python (Removed Python3.3 and Python3.4 support, added 3.6 and 3.7)
0.4.6
-----
- Avoid modifying tag when getting description
0.4.5
-----
- Close files after opening #38
0.4.4
-----
- Bug fix release: language tag 'aa' is detected as invalid #27
0.4.3
-----
- Upgrade to <https://github.com/mattcg/language-subtag-registry/releases/tag/v0.3.18>
0.4.2
-----
- Official python 3.5 compatibility
- Upgrade to <https://github.com/mattcg/language-subtag-registry/releases/tag/v0.3.15>
0.4.1
-----
- Included the data folder again in the project package.
- Added bash script (`update_data_files.sh`) to download the
`language-subtag-registry <https://github.com/mattcg/language-subtag-registry/>`_
and move the data into the project's data folder.
0.4.0
-----
- Allow parsing a redundant tag into subtags.
- Added package.json file for easy update of the language subtag registry data using `npm <https://docs.npmjs.com/>`_
(:code:`npm install` or :code:`npm update`)
- Improvement of the :code:`language-tags.tags.search` function: rank equal description at top.
See `mattcg/language-tags#4 <https://github.com/mattcg/language-tags/issues/4>`_
0.3.2
-----
- Upgrade to <https://github.com/mattcg/language-subtag-registry/releases/tag/v0.3.11>
- Added wheel config
- Fixed bug under windows: opening data files using utf-8 encoding.
0.3.1
-----
- Upgrade to <https://github.com/mattcg/language-subtag-registry/releases/tag/v0.3.8>
0.3.0
-----
- Upgrade to <https://github.com/mattcg/language-subtag-registry/releases/tag/v0.3.6>
- Simplify output of __str__ functions. The previous json dump is assigned to the repr function.
.. code-block:: python
nlbe = tags.tags('nl-Latn-BE')
> print(nlbe)
'nl-Latn-BE'
> print(nlbe.language)
'nl'
> print(nlbe.script)
'Latn'
0.2.0
-----
- Adjust language, region and script properties of Tag. The properties will return `language_tags.Subtag.Subtag`
instead of a list of string subtags
.. code-block:: python
> print(tags.tag('nl-BE').language)
'{"subtag": "nl", "record": {"Subtag": "nl", "Suppress-Script": "Latn", "Added": "2005-10-16", "Type": "language", "Description": ["Dutch", "Flemish"]}, "type": "language"}'
> print(tags.tag('nl-BE').region)
'{"subtag": "be", "record": {"Subtag": "BE", "Added": "2005-10-16", "Type": "region", "Description": ["Belgium"]}, "type": "region"}'
> print(tags.tag('en-mt-arab').script)
'{"subtag": "arab", "record": {"Subtag": "Arab", "Added": "2005-10-16", "Type": "script", "Description": ["Arabic"]}, "type": "script"}'
0.1.1
-----
- Added string and Unicode functions to make it easy to print Tags and Subtags.
.. code-block:: python
> print(tags.tag('nl-BE'))
'{"tag": "nl-be"}'
- Added functions to easily select either the language, region or script subtags strings of a Tag.
.. code-block:: python
> print(tags.tag('nl-BE').language)
['nl']
0.1.0
-----
- Initial version
.. _release_notes_2.2:
ACRN v2.2 (Sep 2020)
####################
We are pleased to announce the release of the Project ACRN
hypervisor version 2.2.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open source platform. Check out the
:ref:`introduction` introduction for more information. All project ACRN
source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can either download this source code as a zip or
tar.gz file (see the `ACRN v2.2 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v2.2>`_) or
use Git clone and checkout commands::
git clone https://github.com/projectacrn/acrn-hypervisor
cd acrn-hypervisor
git checkout v2.2
The project's online technical documentation is also tagged to
correspond with a specific release: generated v2.2 documents can be
found at https://projectacrn.github.io/2.2/. Documentation for the
latest under-development branch is found at
https://projectacrn.github.io/latest/.
ACRN v2.2 requires Ubuntu 18.04. Follow the instructions in the
:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
What's New in v2.2
******************
Elkhart Lake and Tiger Lake processor support.
At `Intel Industrial iSummit 2020
<https://newsroom.intel.com/press-kits/intel-industrial-summit-2020>`_,
Intel announced the latest additions to their
enhanced-for-IoT Edge portfolio: the Intel |reg| Atom |reg| x6000E Series, Intel |reg|
Pentium |reg| and Intel |reg| Celeron |reg| N and J Series (all codenamed Elkhart Lake),
and 11th Gen Intel |reg| Core |trade| processors (codenamed Tiger Lake-UP3). The ACRN
team is pleased to announce that this ACRN v2.2 release already supports
these processors.
* Support for time deterministic applications with new features, e.g.,
Time Coordinated Computing and Time Sensitive Networking
* Support for functional safety with new features, e.g., Intel Safety Island
On Elkhart Lake, ACRN can boot using Slim Bootloader
`Slim Bootloader <https://slimbootloader.github.io/>`_ is an
alternative bootloader to UEFI firmware.
Shared memory based inter-VM communication (ivshmem) is extended
ivshmem now supports all kinds of VMs including pre-launched VM, Service VM, and
other User VMs. (See :ref:`ivshmem-hld`)
**CPU sharing supports pre-launched VM.**
**RTLinux with preempt-RT Linux kernel 5.4 is validated both as a pre-launched and post-launched VM.**
**ACRN hypervisor can emulate MSI-X based on physical MSI with multiple vectors.**
Staged removal of deprivileged boot mode support.
ACRN has supported deprivileged boot mode to ease the integration of
Linux distributions such as Clear Linux. Unfortunately, deprivileged boot
mode limits ACRN's scalability and is unsuitable for ACRN's hybrid
hypervisor mode. In ACRN v2.2, deprivileged boot mode is no longer the default
and will be completely removed in ACRN v2.3. We're focusing instead
on using multiboot2 boot (via Grub). Multiboot2 is not supported in
Clear Linux though, so we have chosen Ubuntu (and Yocto Project) as the
preferred Service VM OSs moving forward.
Document updates
****************
New and updated reference documents are available, including:
.. rst-class:: rst-columns2
* :ref:`develop_acrn`
* :ref:`asm_coding_guidelines`
* :ref:`c_coding_guidelines`
* :ref:`contribute_guidelines`
* :ref:`hv-cpu-virt`
* :ref:`IOC_virtualization_hld`
* :ref:`hv-startup`
* :ref:`hv-vm-management`
* :ref:`ivshmem-hld`
* :ref:`virtio-i2c`
* :ref:`sw_design_guidelines`
* :ref:`faq`
* :ref:`getting-started-building`
* :ref:`introduction`
* :ref:`acrn_configuration_tool`
* :ref:`enable_ivshmem`
* :ref:`setup_openstack_libvirt`
* :ref:`using_grub`
* :ref:`using_partition_mode_on_nuc`
* :ref:`connect_serial_port`
* :ref:`using_yp`
* :ref:`acrn-dm_parameters`
* :ref:`hv-parameters`
* :ref:`acrnctl`
Because we're dropping deprivileged boot mode support in the next v2.3
release, we're also switching our Service VM of choice away from Clear
Linux. We've begun this transition in the v2.2 documentation and removed
some Clear Linux-specific tutorials. Deleted documents are still
available in the `version-specific v2.1 documentation
<https://projectacrn.github.io/v2.1/>`_.
Fixed Issues Details
********************
- :acrn-issue:`5008` - Slowdown in UOS (Zephyr)
- :acrn-issue:`5033` - SOS decode instruction failed in hybrid mode
- :acrn-issue:`5038` - [WHL][Yocto] SOS occasionally hangs/crashes with a kernel panic
- :acrn-issue:`5048` - iTCO_wdt issue: can't request region for resource
- :acrn-issue:`5102` - Can't access shared memory base address in ivshmem
- :acrn-issue:`5118` - GPT ERROR when write preempt img to SATA on NUC7i5BNB
- :acrn-issue:`5148` - dm: support to provide ACPI SSDT for UOS
- :acrn-issue:`5157` - [build from source] during build HV with XML, "TARGET_DIR=xxx" does not work
- :acrn-issue:`5165` - [WHL][Yocto][YaaG] No UI display when launch Yaag gvt-g with acrn kernel
- :acrn-issue:`5215` - [UPsquared N3350 board] Solution to Bootloader issue
- :acrn-issue:`5233` - Boot ACRN failed on Dell-OptiPlex 5040 with Intel i5-6500T
- :acrn-issue:`5238` - acrn-config: add hybrid_rt scenario XML config for ehl-crb-b
- :acrn-issue:`5240` - passthrough DHRD-ignored device
- :acrn-issue:`5242` - acrn-config: add pse-gpio to vmsix_on_msi devices list
- :acrn-issue:`4691` - hv: add vgpio device model support
- :acrn-issue:`5245` - hv: add INTx mapping for pre-launched VMs
- :acrn-issue:`5426` - hv: add vgpio device model support
- :acrn-issue:`5257` - hv: support PIO access to platform hidden devices
- :acrn-issue:`5278` - [EHL][acrn-configuration-tool]: create a new hybrid_rt based scenario for P2SB MMIO pass-thru use case
- :acrn-issue:`5304` - Cannot cross-compile - Build process assumes build system always hosts the ACRN hypervisor
Known Issues
************
- :acrn-issue:`5150` - [REG][WHL][[Yocto][Passthru] Launch RTVM fails with USB passthru
- :acrn-issue:`5151` - [WHL][VxWorks] Launch VxWorks fails due to no suitable video mode found
- :acrn-issue:`5154` - [TGL][Yocto][PM] 148213_PM_SystemS5 with life_mngr fail
- :acrn-issue:`5368` - [TGL][Yocto][Passthru] Audio does not work on TGL
- :acrn-issue:`5369` - [TGL][qemu] Cannot launch qemu on TGL
- :acrn-issue:`5370` - [TGL][RTVM][PTCM] Launch RTVM failed with mem size smaller than 2G and PTCM enabled
- :acrn-issue:`5371` - [TGL][Industry][Xenomai]Xenomai post launch fail
some punctuation is allowed around inline markup, e.g.
/*emphasis*/, -*emphasis*-, and :*emphasis*: (delimiters),
(*emphasis*), [*emphasis*], <*emphasis*>, {*emphasis*} (open/close pairs),
*emphasis*., *emphasis*,, *emphasis*!, and *emphasis*\ (closing delimiters).
| 53.2 | 76 | 0.672932 |
9699a00dc84fd980019f46c8b34ac51b4c2d07aa | 666 | rst | reStructuredText | source/containers/index.rst | qu0zl/rsyslog-doc | 50b67b95259edcbae4e81c1069e7bb56a89743d6 | [
"Apache-2.0"
] | 77 | 2015-02-04T11:56:46.000Z | 2022-03-11T18:07:07.000Z | source/containers/index.rst | qu0zl/rsyslog-doc | 50b67b95259edcbae4e81c1069e7bb56a89743d6 | [
"Apache-2.0"
] | 412 | 2015-01-11T13:18:16.000Z | 2022-03-30T22:23:20.000Z | source/containers/index.rst | qu0zl/rsyslog-doc | 50b67b95259edcbae4e81c1069e7bb56a89743d6 | [
"Apache-2.0"
] | 263 | 2015-01-13T11:44:50.000Z | 2022-03-07T11:13:34.000Z | rsyslog and containers
======================
In this chapter, we describe how rsyslog can be used together with
containers.
All versions of rsyslog work well in containers. Versions beginning with
8.32.0 have also been made explicitly container-aware and provide some
extra features that are useful inside containers.
Note: the sources for docker containers created by the rsyslog project
can be found at https://github.com/rsyslog/rsyslog-docker - these may
be useful as a starting point for similar efforts. Feedback, bug
reports and pull requests are also appreciated for this project.
.. toctree::
:maxdepth: 2
container_features
docker_specifics
| 31.714286 | 72 | 0.771772 |
158fdc89891b2c7a779fc82c7f74610e24bb4230 | 348 | rst | reStructuredText | libs/kedro/kedro-tutorial/docs/source/kedro_tutorial.io.rst | yobibytes/quant_trading | 6db6815f671431612030b266205a588c135c0856 | [
"Apache-2.0"
] | 2 | 2020-02-11T12:03:24.000Z | 2020-02-11T12:04:02.000Z | libs/kedro/kedro-tutorial/docs/source/kedro_tutorial.io.rst | yobibytes/quant_trading | 6db6815f671431612030b266205a588c135c0856 | [
"Apache-2.0"
] | 8 | 2020-11-13T18:54:26.000Z | 2022-02-10T02:17:31.000Z | libs/kedro/kedro-tutorial/docs/source/kedro_tutorial.io.rst | yobibytes/quant_trading | 6db6815f671431612030b266205a588c135c0856 | [
"Apache-2.0"
] | 1 | 2020-02-11T12:04:04.000Z | 2020-02-11T12:04:04.000Z | kedro\_tutorial.io package
==========================
.. automodule:: kedro_tutorial.io
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
kedro\_tutorial.io.xls\_local module
------------------------------------
.. automodule:: kedro_tutorial.io.xls_local
:members:
:undoc-members:
:show-inheritance:
``formate`` runs a series of user-selected hooks which reformat Python source files.
This can include :ref:`changing quote characters <dynamic_quotes>`,
:ref:`rewriting imports <collections-import-rewrite>`, and calling tools such as
`isort <https://pycqa.github.io/isort/>`__ and `yapf <https://github.com/google/yapf>`__.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package dubins_path
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.0.2 (2018-09-18)
------------------
* fix some bug
0.0.1 (2018-09-15)
------------------
* show result use matplot
0.0.0 (2018-09-14)
------------------
* finish first version, can't show result
Corpus internal format
======================
A single corpus is stored as a directory. The directory contains several files that define the corpus structure.
File ``config``
---------------
This file stores a YAML-formatted dict with the properties of the corpus. A typical ``config`` file has the following properties:
::
chunk_size: 52428800
current_chunk: 0
encoding: utf-8
name: 50k Internet Corpus
.. warning::
you should not modify the ``config`` file yourself unless you really know what you are doing.
``chunk_size``
max size (in bytes) of a single corpus chunk
.. note::
a single document must be stored within a single chunk, so you cannot store documents larger than ``chunk_size``.
``current_chunk``
number of the current chunk that will be used when appending a new document;
.. note::
chunks are numbered from 0.
``encoding``
internal chunk encoding; possibly always ``utf-8``.
``name``
an optional name for corpus
File ``chunkN``
---------------
Files like ``chunk0``, ``chunk1``, ``chunk2``, ... contain raw texts and headers. Each chunk has a maximum size of ``chunk_size`` bytes (the ``chunk_size`` property in ``config``).
A chunk file has a very simple internal format. Documents are stored sequentially, one after another. Each document is represented as a YAML-dumped header dict and raw document text encoded with the ``encoding`` defined in the ``config`` file.
Internal format of chunk is:
::
[yamled header1]\n
[raw document1 text encoded]\n
[yamled header2]\n
[raw document2 text encoded]\n
...
[yamled headern]\n
[raw documentn text encoded]\n
.. note::
chunks are numbered from 0.
.. note::
a single document must be stored within a single chunk, so you cannot store documents larger than ``chunk_size``.
An example of two documents long chunk:
::
id: 8
Prof. Wojciech Roszkowski jest oficjalnym kandydatem AWS na
prezesa Instytutu Pamięci Narodowej - zdecydowało prezydium
Klubu Parlamentarnego Akcji Wyborczej Solidarność.
Rzecznik klubu Piotr Żak przypomniał, że zgodnie z ustawą o IPN,
Sejm wybiera prezesa Instytutu większością 3/5. Do wyboru
Roszkowskiego konieczne jest zatem uzyskanie poparcia nie tylko
Unii Wolności, ale także Polskiego Stronnictwa Ludowego.
Politycy PSL, UW i SLD odmawiają deklaracji, czy ich ugrupowania
poprą kandydaturę prof. Roszkowskiego.
id: 20
Papieże Pius IX i Jan XXIII zostaną beatyfikowani 3 września -
ogłosił Watykan. Beatyfikacja obu papieży zbiegnie się z
uroczystościami Wielkiego Jubileuszu Roku 2000.
File ``idx``
------------
This file contains a list of document descriptors (indexes into the chunk files). Each entry is a tuple of:
* chunk number
* offset of document start in chunk file
* length of the header section (including the trailing ``\n``)
* length of the text section (including the trailing ``\n``)
This file is managed by a Berkeley DB Recno structure.
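The descriptors above are enough to slice a document back out of a chunk. A minimal sketch (the function name is hypothetical and the header is left as the raw YAML string described above):

```python
import os


def read_document(corpus_dir, chunk_no, offset, header_len, text_len,
                  encoding="utf-8"):
    # Slice one document out of a chunk file using an ``idx`` descriptor:
    # (chunk number, offset, header length, text length).
    path = os.path.join(corpus_dir, "chunk%d" % chunk_no)
    with open(path, "rb") as f:
        f.seek(offset)
        header = f.read(header_len).decode(encoding)
        text = f.read(text_len).decode(encoding)
    return header, text
```

The returned header string can then be parsed with any YAML loader to recover the header dict.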
File ``ridx``
-------------
This file stores a random-access index: a hashmap mapping a document ``id`` to its index in the ``idx`` list.
This file is managed by a Berkeley DB Hash structure.
net.es.oscars.resv.rest
=======================
.. java:package:: net.es.oscars.resv.rest
.. toctree::
:maxdepth: 1
SimpleResvController
| 13.363636 | 41 | 0.578231 |
6a5557af24e4eea62cd13a66924c866a665632f2 | 386 | rst | reStructuredText | src/python/doc/source/turicreate.aggregate.rst | cookingcodewithme/turicreate | a89e203d60529d2d72547c03ec9753ea979ee342 | [
"BSD-3-Clause"
] | 11,356 | 2017-12-08T19:42:32.000Z | 2022-03-31T16:55:25.000Z | src/python/doc/source/turicreate.aggregate.rst | cookingcodewithme/turicreate | a89e203d60529d2d72547c03ec9753ea979ee342 | [
"BSD-3-Clause"
] | 2,402 | 2017-12-08T22:31:01.000Z | 2022-03-28T19:25:52.000Z | src/python/doc/source/turicreate.aggregate.rst | cookingcodewithme/turicreate | a89e203d60529d2d72547c03ec9753ea979ee342 | [
"BSD-3-Clause"
] | 1,343 | 2017-12-08T19:47:19.000Z | 2022-03-26T11:31:36.000Z | :mod:`aggregate`
=========================
.. automodule:: turicreate.aggregate
.. currentmodule:: turicreate.aggregate
classifier metrics
----------------------
.. autosummary::
:toctree: generated/
:nosignatures:
ARGMAX
ARGMIN
AVG
CONCAT
COUNT
COUNT_DISTINCT
DISTINCT
FREQ_COUNT
MAX
MEAN
MIN
QUANTILE
SELECT_ONE
STD
STDV
SUM
VAR
VARIANCE
.. _api:
API reference
=============
.. toctree::
:hidden:
:maxdepth: 1
autoencoder
feedforward
Subpackages
-----------
* :doc:`deeppy.autoencoder <autoencoder>`
* :doc:`deeppy.feedforward <feedforward>`
Base classes
------------
.. automodule:: deeppy.base
:members:
:undoc-members:
:show-inheritance:
Fillers
-------
.. automodule:: deeppy.filler
:members:
:undoc-members:
:show-inheritance:
Inputs
------
.. automodule:: deeppy.input
:members:
:undoc-members:
:show-inheritance:
Parameters
----------
.. automodule:: deeppy.parameter
:members:
:undoc-members:
:show-inheritance:
| 13.346939 | 41 | 0.597859 |
5ada4b9996ac83d8923a2a381ef5c8195c522a49 | 65,876 | rst | reStructuredText | sphinx_docs/source/Godunov.rst | XinlongSBU/MAESTROeX | bda189af39390fc09bb0ebb8321971b9d7688fd7 | [
"BSD-3-Clause"
] | 37 | 2018-04-04T02:56:52.000Z | 2021-12-17T16:34:03.000Z | sphinx_docs/source/Godunov.rst | XinlongSBU/MAESTROeX | bda189af39390fc09bb0ebb8321971b9d7688fd7 | [
"BSD-3-Clause"
] | 172 | 2018-07-02T15:00:59.000Z | 2022-01-06T19:01:59.000Z | sphinx_docs/source/Godunov.rst | XinlongSBU/MAESTROeX | bda189af39390fc09bb0ebb8321971b9d7688fd7 | [
"BSD-3-Clause"
] | 32 | 2018-08-06T21:32:03.000Z | 2022-02-14T04:20:46.000Z | ************************
Godunov Interface States
************************
These are working notes for the Godunov step in MAESTROeX and VARDEN.
MAESTROeX Notation
==================
- For 2D, :math:`\Ub = (u,w)` and :math:`\Ubt = (\ut,\wt)`.
Note that :math:`u = \ut`. We will use the shorthand :math:`\ib = (x,r)`.
- For 3D plane parallel, :math:`\Ub = (u,v,w)`
and :math:`\Ubt = (\ut,\vt,\wt)`. Note that :math:`u = \ut` and :math:`v = \vt`.
We will use the shorthand :math:`\ib = (x,y,r)`.
- For 3D spherical, :math:`\Ub = (u,v,w)`
and :math:`\Ubt = (\ut,\vt,\wt)`. We will use the shorthand
:math:`\ib = (x,y,z)`.
Computing :math:`\Ub` From :math:`\Ubt`
---------------------------------------
For plane-parallel problems, in order to compute :math:`w` from
:math:`\wt`, we use simple averaging
.. math:: w_{\ib}^n = \wt_{\ib}^n + \frac{w_{0,r-\half} + w_{0,r+\half}}{2}.
For spherical problems, in order to compute :math:`\Ub` from :math:`\Ubt`,
we first map :math:`w_0` onto :math:`w_0^{\mac}` using put_w0_on_edges,
where :math:`w_0^{\mac}` only contains normal velocities at each face.
Then we construct :math:`\Ub` by using
.. math:: u_{\ib} = \ut_{\ib} + \frac{w_{0,\ib+\half\eb_x}^{\mac} + w_{0,\ib-\half\eb_x}^{\mac}}{2},
.. math:: v_{\ib} = \vt_{\ib} + \frac{w_{0,\ib+\half\eb_y}^{\mac} + w_{0,\ib-\half\eb_y}^{\mac}}{2},
.. math:: w_{\ib} = \wt_{\ib} + \frac{w_{0,\ib+\half\eb_z}^{\mac} + w_{0,\ib-\half\eb_z}^{\mac}}{2}.
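As an illustrative sketch of the face-to-center averaging above (one component shown; the array names and NumPy layout are assumptions, not MAESTROeX's actual data structures):

```python
import numpy as np


def full_velocity_1d(utilde, w0mac):
    # w0mac holds the face-normal base-state velocity on the n+1 faces of a
    # 1D row of n cells; average the two bounding faces onto each center:
    #   u_i = utilde_i + (w0mac_{i-1/2} + w0mac_{i+1/2}) / 2
    return utilde + 0.5 * (w0mac[:-1] + w0mac[1:])
```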
To compute full edge-state velocities, simply add :math:`w_0`
(for plane-parallel) or w0mac to the perturbational
velocity directly since only edge-based quantities are involved.
Computing :math:`\partial w_0/\partial r`
-----------------------------------------
For plane-parallel problems, the spatial derivatives of :math:`w_0`
are given by the two-point centered difference:
.. math:: \left(\frac{\partial w_0}{\partial r}\right)_{\ib} = \frac{w_{0,r+\half}-w_{0,r-\half}}{h}.
For spherical problems, we compute the radial bin centered gradient using
.. math:: \left(\frac{\partial w_0}{\partial r}\right)_{r} = \frac{w_{0,r+\half}-w_{0,r-\half}}{\Delta r}.
Then we put :math:`\partial w_0/\partial r` onto a Cartesian grid
using put_1d_array_on_cart_3d_sphr.
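A minimal sketch of this two-point centered difference, assuming ``w0`` is stored on the radial edges as a NumPy array (names are illustrative):

```python
import numpy as np


def dw0_dr(w0_edges, dr):
    # (dw0/dr)_r = (w0_{r+1/2} - w0_{r-1/2}) / dr, one value per radial bin
    return np.diff(w0_edges) / dr
```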
Computing :math:`\Ubt^{\trans}` in MAESTROeX
============================================
| In advance_premac, we call mkutrans to compute
:math:`\Ubt^{\trans}`. We will only compute the normal
component of velocity at each face.
These transverse velocities do not contain :math:`w_0`, so immediately
following the call to mkutrans, we call addw0 to compute
:math:`\Ub^{\trans}` from :math:`\Ubt^{\trans}`.
| The evolution equation for the perturbational velocity is:
.. math:: \frac{\partial\Ubt}{\partial t} = -\Ub\cdot\nabla\Ubt \underbrace{- (\Ubt\cdot\eb_r)\frac{\partial w_0}{\partial r}\eb_r - \frac{1}{\rho}\nabla\pi + \frac{1}{\rho_0}\frac{\partial\pi_0}{\partial r}\eb_r - \frac{(\rho-\rho_0)}{\rho}g\eb_r}_{\hbox{forcing terms}}.\label{Perturbational Velocity Equation}
We extrapolate each velocity component to edge-centered, time-centered locations. For example,
.. math::
\begin{aligned}
\ut_{R,\ib-\half\eb_x} &=& \ut_{\ib}^n - \frac{h}{2}\frac{\partial\ut_{\ib}^n}{\partial x} + \frac{\dt}{2}\frac{\partial\ut_{\ib}^n}{\partial t} \nonumber \\
&=& \ut_{\ib}^n - \frac{h}{2}\frac{\partial\ut_{\ib}^n}{\partial x} + \frac{\dt}{2}
\left(-u_{\ib}^n\frac{\partial\ut_{\ib}^n}{\partial x} - w_{\ib}^n\frac{\partial\ut_{\ib}^n}{\partial r} + \text{forcing terms}\right)\end{aligned}
We are going to use a 1D Taylor series extrapolation in space and time.
By 1D, we mean that we omit any spatial derivatives that are not in the
direction of the extrapolation. We also omit the underbraced forcing terms.
We also use characteristic tracing.
.. math:: \ut_{R,\ib-\half\eb_x} = \ut_{\ib}^n - \left[\frac{1}{2} + \frac{\dt}{2h}\min(0,u_{\ib}^n)\right]\Delta_x \ut_{\ib}^n
2D Cartesian Case
-----------------
We predict :math:`\ut` to x-faces using a 1D extrapolation:
.. math::
\begin{aligned}
\ut_{L,\ib-\half\eb_x} &=& \ut_{\ib-\eb_x}^n + \left[\half - \frac{\dt}{2h}\max(0,u_{\ib-\eb_x}^n)\right]\Delta_x \ut_{\ib-\eb_x}^n,\\
\ut_{R,\ib-\half\eb_x} &=& \ut_{\ib}^n - \left[\half + \frac{\dt}{2h}\min(0,u_{\ib}^n)\right]\Delta_x \ut_{\ib}^n.\end{aligned}
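The pair of extrapolation formulas above can be sketched as follows (hypothetical 1D array layout; ``slope`` stands in for the limited slope :math:`\Delta_x \ut`, and this is not MAESTROeX code):

```python
import numpy as np


def extrap_states_1d(q, u, slope, dt, h):
    # Characteristic-traced, time-centered left/right states at the interior
    # faces i-1/2: the left state comes from cell i-1 (traced with max(0,u)),
    # the right state from cell i (traced with min(0,u)).
    qL = q[:-1] + (0.5 - 0.5 * dt / h * np.maximum(0.0, u[:-1])) * slope[:-1]
    qR = q[1:] - (0.5 + 0.5 * dt / h * np.minimum(0.0, u[1:])) * slope[1:]
    return qL, qR
```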
We pick the final trans states using a Riemann solver:
.. math::
\ut^{\trans}_{\ib-\half\eb_x} =
\begin{cases}
0, & \left(\ut_{L,\ib-\half\eb_x} \le 0 ~~ {\rm AND} ~~ \ut_{R,\ib-\half\eb_x} \ge 0\right) ~~ {\rm OR} ~~ \left|\ut_{L,\ib-\half\eb_x} + \ut_{R,\ib-\half\eb_x}\right| < \epsilon, \\
\ut_{L,\ib-\half\eb_x}, & \ut_{L,\ib-\half\eb_x} + \ut_{R,\ib-\half\eb_x} > 0, \\
\ut_{R,\ib-\half\eb_x}, & \ut_{L,\ib-\half\eb_x} + \ut_{R,\ib-\half\eb_x} < 0, \\
\end{cases}
We predict :math:`\wt` to r-faces using a 1D extrapolation:
.. math::
\begin{aligned}
\wt_{L,\ib-\half\eb_r} &=& \wt_{\ib-\eb_r}^n + \left[\half - \frac{\dt}{2h}\max(0,w_{\ib-\eb_r}^n)\right]\Delta_r \wt_{\ib-\eb_r}^n,\\
\wt_{R,\ib-\half\eb_r} &=& \wt_{\ib}^n - \left[\half + \frac{\dt}{2h}\min(0,w_{\ib}^n)\right]\Delta_r \wt_{\ib}^n.\end{aligned}
We pick the final :math:`\trans` states using a Riemann solver, noting
that we upwind based on the full velocity.
.. math::
\wt^{\trans}_{\ib-\half\eb_r} =
\begin{cases}
0, & \left(w_{L,\ib-\half\eb_r} \le 0 ~~ {\rm AND} ~~ w_{R,\ib-\half\eb_r} \ge 0\right) ~~ {\rm OR} ~~ \left|w_{L,\ib-\half\eb_r} + w_{R,\ib-\half\eb_r}\right| < \epsilon, \\
\wt_{L,\ib-\half\eb_r}, & w_{L,\ib-\half\eb_r} + w_{R,\ib-\half\eb_r} > 0, \\
\wt_{R,\ib-\half\eb_r}, & w_{L,\ib-\half\eb_r} + w_{R,\ib-\half\eb_r} < 0, \\
\end{cases}
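The Riemann selection rule used for both trans states above can be sketched as follows (the function name and scalar interface are illustrative; ``q`` is the predicted perturbational quantity and ``v`` the corresponding full velocity used for upwinding):

```python
EPS = 1.0e-8


def upwind_trans(qL, qR, vL, vR, eps=EPS):
    # Zero out the stalled case (waves moving apart, or nearly zero mean
    # velocity); otherwise upwind on the sign of vL + vR.
    if (vL <= 0.0 and vR >= 0.0) or abs(vL + vR) < eps:
        return 0.0
    return qL if vL + vR > 0.0 else qR
```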
.. _d-cartesian-case-1:
3D Cartesian Case
-----------------
We use the exact same procedure in 2D and 3D to compute :math:`\ut^{\trans}` and
:math:`\wt^{\trans}`. The procedure for computing :math:`\vt^{\trans}` is analogous to
computing :math:`\ut^{\trans}`. We predict :math:`\vt` to y-faces using the
1D extrapolation:
.. math::
\begin{aligned}
\vt_{L,\ib-\half\eb_y} &=& \vt_{\ib-\eb_y}^n + \left[\half - \frac{\dt}{2h}\max(0,v_{\ib-\eb_y}^n)\right]\Delta_y \vt_{\ib-\eb_y}^n, \\
\vt_{R,\ib-\half\eb_y} &=& \vt_{\ib}^n - \left[\half + \frac{\dt}{2h}\min(0,v_{\ib}^n)\right]\Delta_y \vt_{\ib}^n,\end{aligned}
.. math::
\vt^{\trans}_{\ib-\half\eb_y} =
\begin{cases}
0, & \left(v_{L,\ib-\half\eb_y} \le 0 ~~ {\rm AND} ~~ v_{R,\ib-\half\eb_y} \ge 0\right) ~~ {\rm OR} ~~ \left|v_{L,\ib-\half\eb_y} + v_{R,\ib-\half\eb_y}\right| < \epsilon, \\
\vt_{L,\ib-\half\eb_y}, & v_{L,\ib-\half\eb_y} + v_{R,\ib-\half\eb_y} > 0, \\
\vt_{R,\ib-\half\eb_y}, & v_{L,\ib-\half\eb_y} + v_{R,\ib-\half\eb_y} < 0. \\
\end{cases}
3D Spherical Case
-----------------
We predict the normal components of velocity to the normal faces
using a 1D extrapolation. The equations for all three directions
are identical to those given in the 2D and 3D plane-parallel
sections. As in the plane-parallel case, make sure
that the advection velocities, as well as
the upwinding, use the full velocity, not the
perturbational velocity.
Computing :math:`\Ubt^{\mac,*}` in MAESTROeX
============================================
In ``advance_premac``, we call ``velpred`` to compute
:math:`\Ubt^{\mac,*}`. We compute only the normal component of
velocity at each face.

For reference, here is the perturbational velocity equation from before:
.. math:: \frac{\partial\Ubt}{\partial t} = -\Ub\cdot\nabla\Ubt \underbrace{- (\Ubt\cdot\eb_r)\frac{\partial w_0}{\partial r}\eb_r \underbrace{- \frac{1}{\rho}\nabla\pi + \frac{1}{\rho_0}\frac{\partial\pi_0}{\partial r}\eb_r - \frac{(\rho-\rho_0)}{\rho}g\eb_r}_{\hbox{terms included in $\fb_{\Ubt}$}}}_{\hbox{forcing terms}}.
Note that the :math:`\partial w_0/\partial r` term is treated like a forcing
term, but it is not actually part of :math:`\fb_{\Ubt}`. We make use of the 1D
extrapolations used to compute :math:`\Ubt^{\trans}`
(:math:`\ut_{L/R,\ib-\half\eb_x}`, :math:`\vt_{L/R,\ib-\half\eb_y}`,
and :math:`\wt_{L/R,\ib-\half\eb_r}`), as well as the “:math:`\trans`” states
(:math:`\ut_{\ib-\half\eb_x}^{\trans}`, :math:`\vt_{\ib-\half\eb_y}^{\trans}`,
and :math:`\wt_{\ib-\half\eb_r}^{\trans}`).
.. _d-cartesian-case-2:
2D Cartesian Case
-----------------
#. Predict :math:`\ut` to r-faces using a 1D extrapolation.
#. Predict :math:`\ut` to x-faces using a full-dimensional extrapolation.
#. Predict :math:`\wt` to x-faces using a 1D extrapolation.
#. Predict :math:`\wt` to r-faces using a full-dimensional extrapolation.
Predict :math:`\ut` to r-faces using a 1D extrapolation:
.. math::
\begin{aligned}
\ut_{L,\ib-\half\eb_r} &=& \ut_{\ib-\eb_r}^n + \left[\half - \frac{\dt}{2h}\max(0,w_{\ib-\eb_r}^n)\right]\Delta_r \ut_{\ib-\eb_r}^n, \\
   \ut_{R,\ib-\half\eb_r} &=& \ut_{\ib}^n - \left[\half + \frac{\dt}{2h}\min(0,w_{\ib}^n)\right]\Delta_r \ut_{\ib}^n.\end{aligned}
Upwind based on :math:`w^{\trans}`:
.. math::
\ut_{\ib-\half\eb_r} =
\begin{cases}
\half\left(\ut_{L,\ib-\half\eb_r} + \ut_{R,\ib-\half\eb_r}\right), & \left|w^{\trans}_{\ib-\half\eb_r}\right| < \epsilon \\
\ut_{L,\ib-\half\eb_r}, & w^{\trans}_{\ib-\half\eb_r} > 0, \\
\ut_{R,\ib-\half\eb_r}, & w^{\trans}_{\ib-\half\eb_r} < 0. \\
\end{cases}
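As a concrete sketch of this 1D predict-then-upwind step, the two pieces might look like the following. The names and the tolerance are purely illustrative assumptions, not MAESTROeX routines:

```python
def extrapolate_1d(q_lo, q_hi, dq_lo, dq_hi, w_lo, w_hi, dt, h):
    """1D Taylor extrapolation of q from the two neighboring cells to
    the face between them. q_*: cell averages, dq_*: limited slopes,
    w_*: cell-centered advection velocities."""
    q_L = q_lo + (0.5 - dt / (2.0 * h) * max(0.0, w_lo)) * dq_lo
    q_R = q_hi - (0.5 + dt / (2.0 * h) * min(0.0, w_hi)) * dq_hi
    return q_L, q_R

def upwind(q_L, q_R, w_face, eps=1.0e-8):
    """Three-branch upwinding: average the states when the face
    velocity is small, otherwise take the upwind side."""
    if abs(w_face) < eps:
        return 0.5 * (q_L + q_R)
    return q_L if w_face > 0.0 else q_R
```

The `max`/`min` selection appears because the advection velocity here is cell-centered; each state is limited by the velocity on its own side of the face.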
Predict :math:`\ut` to x-faces using a full-dimensional extrapolation,
.. math::
\begin{aligned}
\ut_{L,\ib-\half\eb_x}^{\mac,*} &=& \ut_{L,\ib-\half\eb_x} - \frac{\dt}{4h}\left(w_{\ib-\eb_x+\half\eb_r}^{\trans}+w_{\ib-\eb_x-\half\eb_r}^{\trans}\right)\left(\ut_{\ib-\eb_x+\half\eb_r} - \ut_{\ib-\eb_x-\half\eb_r}\right) + \frac{\dt}{2}f_{\ut,\ib-\eb_x}, \nonumber \\
&& \\
\ut_{R,\ib-\half\eb_x}^{\mac,*} &=& \ut_{R,\ib-\half\eb_x} - \frac{\dt}{4h}\left(w_{\ib+\half\eb_r}^{\trans}+w_{\ib-\half\eb_r}^{\trans}\right)\left(\ut_{\ib+\half\eb_r} - \ut_{\ib-\half\eb_r}\right) + \frac{\dt}{2}f_{\ut,\ib}.\end{aligned}
Solve a Riemann problem:
.. math::
\ut_{\ib-\half\eb_x}^{\mac,*} =
\begin{cases}
0, & \left(u_{L,\ib-\half\eb_x}^{\mac,*} \le 0 ~~ {\rm AND} ~~ u_{R,\ib-\half\eb_x}^{\mac,*} \ge 0\right) ~~ {\rm OR} ~~ \left|u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*}\right| < \epsilon, \\
\ut_{L,\ib-\half\eb_x}^{\mac,*}, & u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*} > 0, \\
\ut_{R,\ib-\half\eb_x}^{\mac,*}, & u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*} < 0.
\end{cases}
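The full-dimensional extrapolation above adds a transverse-derivative term and a forcing term to the 1D face state before the Riemann solve. A minimal sketch, with illustrative names only:

```python
def full_dim_extrapolate(q_1d, w_trans_hi, w_trans_lo, q_hi, q_lo, f, dt, h):
    """Correct a 1D-predicted face state with the transverse term
    -(dt/4h)(w_hi + w_lo)(q_hi - q_lo) and the forcing term (dt/2) f,
    mirroring the equations above."""
    return (q_1d
            - dt / (4.0 * h) * (w_trans_hi + w_trans_lo) * (q_hi - q_lo)
            + 0.5 * dt * f)
```

Here `w_trans_hi`/`w_trans_lo` are the trans velocities on the two transverse faces of the originating cell, and `q_hi`/`q_lo` the upwinded trans states there.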
Predict :math:`\wt` to x-faces using a 1D extrapolation:
.. math::
\begin{aligned}
\wt_{L,\ib-\half\eb_x} &=& \wt_{\ib-\eb_x}^n + \left[\half - \frac{\dt}{2h}\max(0,u_{\ib-\eb_x}^n)\right]\Delta_x \wt_{\ib-\eb_x}^n, \\
   \wt_{R,\ib-\half\eb_x} &=& \wt_{\ib}^n - \left[\half + \frac{\dt}{2h}\min(0,u_{\ib}^n)\right]\Delta_x \wt_{\ib}^n.\end{aligned}
Upwind based on :math:`u^{\trans}`:
.. math::
\wt_{\ib-\half\eb_x} =
\begin{cases}
\half\left(\wt_{L,\ib-\half\eb_x} + \wt_{R,\ib-\half\eb_x}\right), & \left|u^{\trans}_{\ib-\half\eb_x}\right| < \epsilon \\
\wt_{L,\ib-\half\eb_x}, & u^{\trans}_{\ib-\half\eb_x} > 0, \\
\wt_{R,\ib-\half\eb_x}, & u^{\trans}_{\ib-\half\eb_x} < 0. \\
\end{cases}
Predict :math:`\wt` to r-faces using a full-dimensional extrapolation:
.. math::
\begin{aligned}
\wt_{L,\ib-\half\eb_r}^{\mac,*} = \wt_{L,\ib-\half\eb_r} &-& \frac{\dt}{4h}\left(u_{\ib-\eb_r+\half\eb_x}^{\trans}+u_{\ib-\eb_r-\half\eb_x}^{\trans}\right)\left(\wt_{\ib-\eb_r+\half\eb_x} - \wt_{\ib-\eb_r-\half\eb_x}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(\wt_{\ib-\half\eb_r}^{\trans}+\wt_{\ib-\frac{3}{2}\eb_r}^{\trans}\right)\left(w_{0,\ib-\half\eb_r} - w_{0,\ib-\frac{3}{2}\eb_r}\right) + \frac{\dt}{2}f_{\wt,\ib-\eb_r}, \nonumber \\
&& \\
\wt_{R,\ib-\half\eb_r}^{\mac,*} = \wt_{R,\ib-\half\eb_r} &-& \frac{\dt}{4h}\left(u_{\ib+\half\eb_x}^{\trans}+u_{\ib-\half\eb_x}^{\trans}\right)\left(\wt_{\ib+\half\eb_x} - \wt_{\ib-\half\eb_x}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(\wt_{\ib+\half\eb_r}^{\trans}+\wt_{\ib-\half\eb_r}^{\trans}\right)\left(w_{0,\ib+\half\eb_r} - w_{0,\ib-\half\eb_r}\right) + \frac{\dt}{2}f_{\wt,\ib}.\end{aligned}
Solve a Riemann problem:
.. math::
\wt_{\ib-\half\eb_r}^{\mac,*} =
\begin{cases}
0, & \left(w_L^{\mac,*} \le 0 ~~ {\rm AND} ~~ w_R^{\mac,*} \ge 0\right) ~~ {\rm OR} ~~ \left|w_L^{\mac,*} + w_R^{\mac,*}\right| < \epsilon, \\
\wt_{L,\ib-\half\eb_r}^{\mac,*}, & w_L^{\mac,*} + w_R^{\mac,*} > 0, \\
\wt_{R,\ib-\half\eb_r}^{\mac,*}, & w_L^{\mac,*} + w_R^{\mac,*} < 0.
\end{cases}
.. _d-cartesian-case-3:
3D Cartesian Case
-----------------
This algorithm is more complicated than the 2D case since we include
the effects of corner coupling.
#. Predict :math:`\ut` to y-faces using a 1D extrapolation.
#. Predict :math:`\ut` to r-faces using a 1D extrapolation.
#. Predict :math:`\vt` to x-faces using a 1D extrapolation.
#. Predict :math:`\vt` to r-faces using a 1D extrapolation.
#. Predict :math:`\wt` to x-faces using a 1D extrapolation.
#. Predict :math:`\wt` to y-faces using a 1D extrapolation.
#. Update prediction of :math:`\ut` to y-faces by accounting for :math:`r`-derivatives.
#. Update prediction of :math:`\ut` to r-faces by accounting for :math:`y`-derivatives.
#. Update prediction of :math:`\vt` to x-faces by accounting for :math:`r`-derivatives.
#. Update prediction of :math:`\vt` to r-faces by accounting for :math:`x`-derivatives.
#. Update prediction of :math:`\wt` to x-faces by accounting for :math:`y`-derivatives.
#. Update prediction of :math:`\wt` to y-faces by accounting for :math:`x`-derivatives.
#. Predict :math:`\ut` to x-faces using a full-dimensional extrapolation.
#. Predict :math:`\vt` to y-faces using a full-dimensional extrapolation.
#. Predict :math:`\wt` to r-faces using a full-dimensional extrapolation.
* Predict :math:`\ut` to y-faces using a 1D extrapolation.
.. math::
\begin{aligned}
\ut_{L,\ib-\half\eb_y} &=& \ut_{\ib-\eb_y}^n + \left[\half - \frac{\dt}{2h}\max(0,v_{\ib-\eb_y}^n)\right]\Delta_y \ut_{\ib-\eb_y}^n, \\
   \ut_{R,\ib-\half\eb_y} &=& \ut_{\ib}^n - \left[\half + \frac{\dt}{2h}\min(0,v_{\ib}^n)\right]\Delta_y \ut_{\ib}^n.\end{aligned}
Upwind based on :math:`v^{\trans}`:
.. math::
\ut_{\ib-\half\eb_y} =
\begin{cases}
\half\left(\ut_{L,\ib-\half\eb_y} + \ut_{R,\ib-\half\eb_y}\right), & \left|v^{\trans}_{\ib-\half\eb_y}\right| < \epsilon \\
\ut_{L,\ib-\half\eb_y}, & v^{\trans}_{\ib-\half\eb_y} > 0, \\
\ut_{R,\ib-\half\eb_y}, & v^{\trans}_{\ib-\half\eb_y} < 0. \\
\end{cases}
* Predict :math:`\ut` to r-faces using a 1D extrapolation.
* Predict :math:`\vt` to x-faces using a 1D extrapolation.
* Predict :math:`\vt` to r-faces using a 1D extrapolation.
* Predict :math:`\wt` to x-faces using a 1D extrapolation.
* Predict :math:`\wt` to y-faces using a 1D extrapolation.
* Update prediction of :math:`\ut` to y-faces by accounting for :math:`r`-derivatives.
The notation :math:`\ut_{\ib-\half\eb_y}^{y|r}` means “state :math:`\ut_{\ib-\half\eb_y}` that has been updated to account for the transverse derivatives in the :math:`r`-direction”.
.. math::
\begin{aligned}
\ut_{L,\ib-\half\eb_y}^{y|r} &=& \ut_{L,\ib-\half\eb_y} - \frac{\dt}{6h}\left(w_{\ib-\eb_y+\half\eb_r}^{\trans}+w_{\ib-\eb_y-\half\eb_r}^{\trans}\right)\left(\ut_{\ib-\eb_y+\half\eb_r}-\ut_{\ib-\eb_y-\half\eb_r}\right), \\
\ut_{R,\ib-\half\eb_y}^{y|r} &=& \ut_{R,\ib-\half\eb_y} - \frac{\dt}{6h}\left(w_{\ib+\half\eb_r}^{\trans}+w_{\ib-\half\eb_r}^{\trans}\right)\left(\ut_{\ib+\half\eb_r}-\ut_{\ib-\half\eb_r}\right).\end{aligned}
Upwind based on :math:`v^{\trans}`:
.. math::
\ut_{\ib-\half\eb_y}^{y|r} =
\begin{cases}
\half\left(\ut_{L,\ib-\half\eb_y}^{y|r} + \ut_{R,\ib-\half\eb_y}^{y|r}\right), & \left|v_{\ib-\half\eb_y}^{\trans}\right| < \epsilon \\
\ut_{L,\ib-\half\eb_y}^{y|r}, & v_{\ib-\half\eb_y}^{\trans} > 0, \\
\ut_{R,\ib-\half\eb_y}^{y|r}, & v_{\ib-\half\eb_y}^{\trans} < 0.
\end{cases}
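Note the :math:`\dt/(6h)` weighting in this corner-coupling update, versus :math:`\dt/(4h)` in the final full-dimensional extrapolation. A hedged sketch of the intermediate update (names are illustrative, not MAESTROeX code):

```python
def corner_couple(q_face, w_trans_hi, w_trans_lo, q_hi, q_lo, dt, h):
    """Intermediate "y|r"-style update: fold transverse derivatives
    from one other direction into a 1D face state, using the dt/(6h)
    corner-coupling weight."""
    return q_face - dt / (6.0 * h) * (w_trans_hi + w_trans_lo) * (q_hi - q_lo)
```

Each of the six intermediate states in the step list above is produced by one such update followed by the usual three-branch upwinding.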
* Update prediction of :math:`\ut` to r-faces by accounting for :math:`y`-derivatives.
* Update prediction of :math:`\vt` to x-faces by accounting for :math:`r`-derivatives.
* Update prediction of :math:`\vt` to r-faces by accounting for :math:`x`-derivatives.
* Update prediction of :math:`\wt` to x-faces by accounting for :math:`y`-derivatives.
* Update prediction of :math:`\wt` to y-faces by accounting for :math:`x`-derivatives.
* Predict :math:`\ut` to x-faces using a full-dimensional extrapolation.
.. math::
\begin{aligned}
\ut_{L,\ib-\half\eb_x}^{\mac,*} = \ut_{L,\ib-\half\eb_x} &-& \frac{\dt}{4h}\left(v_{\ib-\eb_x+\half\eb_y}^{\trans}+v_{\ib-\eb_x-\half\eb_y}^{\trans}\right)\left(\ut_{\ib-\eb_x+\half\eb_y}^{y|r}-\ut_{\ib-\eb_x-\half\eb_y}^{y|r}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib-\eb_x+\half\eb_r}^{\trans}+w_{\ib-\eb_x-\half\eb_r}^{\trans}\right)\left(\ut_{\ib-\eb_x+\half\eb_r}^{r|y}-\ut_{\ib-\eb_x-\half\eb_r}^{r|y}\right) + \frac{\dt}{2}f_{u,\ib-\eb_x}, \nonumber \\
&& \\
\ut_{R,\ib-\half\eb_x}^{\mac,*} = \ut_{R,\ib-\half\eb_x} &-& \frac{\dt}{4h}\left(v_{\ib+\half\eb_y}^{\trans}+v_{\ib-\half\eb_y}^{\trans}\right)\left(\ut_{\ib+\half\eb_y}^{y|r}-\ut_{\ib-\half\eb_y}^{y|r}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib+\half\eb_r}^{\trans}+w_{\ib-\half\eb_r}^{\trans}\right)\left(\ut_{\ib+\half\eb_r}^{r|y}-\ut_{\ib-\half\eb_r}^{r|y}\right) + \frac{\dt}{2}f_{u,\ib}.\end{aligned}
Solve a Riemann problem:
.. math::
\ut_{\ib-\half\eb_x}^{\mac,*} =
\begin{cases}
0, & \left(u_{L,\ib-\half\eb_x}^{\mac,*} \le 0 ~~ {\rm AND} ~~ u_{R,\ib-\half\eb_x}^{\mac,*} \ge 0\right) ~~ {\rm OR} ~~ \left|u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*}\right| < \epsilon, \\
\ut_{L,\ib-\half\eb_x}^{\mac,*}, & u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*} > 0, \\
\ut_{R,\ib-\half\eb_x}^{\mac,*}, & u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*} < 0.
\end{cases}
* Predict :math:`\vt` to y-faces using a full-dimensional extrapolation.
* Predict :math:`\wt` to r-faces using a full-dimensional extrapolation.
In this step, make sure you account for the :math:`\partial w_0/\partial r`
term before solving the Riemann problem:
.. math::
   \begin{aligned}
   \wt_{L,\ib-\half\eb_r}^{\mac,*} &=& \wt_{L,\ib-\half\eb_r}^{\mac,*} -
   \frac{\dt}{4h}\left(\wt^{\trans}_{\ib-\half\eb_r} + \wt^{\trans}_{\ib-\frac{3}{2}\eb_r}\right)\left(w_{0,\ib-\half\eb_r}-w_{0,\ib-\frac{3}{2}\eb_r}\right), \\
   \wt_{R,\ib-\half\eb_r}^{\mac,*} &=& \wt_{R,\ib-\half\eb_r}^{\mac,*} -
   \frac{\dt}{4h}\left(\wt^{\trans}_{\ib+\half\eb_r} + \wt^{\trans}_{\ib-\half\eb_r}\right)\left(w_{0,\ib+\half\eb_r}-w_{0,\ib-\half\eb_r}\right).\end{aligned}
.. _d-spherical-case-1:
3D Spherical Case
-----------------
The spherical case is the same as the plane-parallel 3D Cartesian
case, except the :math:`\partial w_0/\partial r` term enters
in the full dimensional extrapolation for each direction.
As in the plane-parallel case, make sure to upwind using the
full velocity.
.. _Scalar Edge State Prediction in MAESTROeX:
Computing :math:`\rho^{'\edge}, X_k^{\edge},(\rho h)^{'\edge}`, and :math:`\Ubt^{\edge}` in MAESTROeX
=====================================================================================================
We call ``make_edge_scal`` to compute :math:`\rho^{'\edge}, X_k^{\edge},
(\rho h)^{'\edge}`, and :math:`\Ubt^{\edge}` at each edge.
The procedure is the same for each quantity, so we shall simply denote
the scalar as :math:`s`. We always need to compute :math:`\rho'` and :math:`X_k` to faces,
and the choice of energy prediction is as follows:
- For enthalpy_pred_type = 1, we predict :math:`(\rho h)'` to faces.
- For enthalpy_pred_type = 2, we predict :math:`h` to faces.
- For enthalpy_pred_type = 3 and 4, we predict :math:`T` to faces.
- For enthalpy_pred_type = 5, we predict :math:`h'` to faces.
- For enthalpy_pred_type = 6, we predict :math:`T'` to faces.
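The choices above can be tabulated for quick reference. The integers follow the list; the table itself is only an illustrative summary, not a structure in the code:

```python
# Quantity predicted to faces for each enthalpy_pred_type value listed
# above. The strings are documentation labels only.
ENTHALPY_PRED = {
    1: "(rho h)'",
    2: "h",
    3: "T",
    4: "T",
    5: "h'",
    6: "T'",
}
```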
We are using enthalpy_pred_type = 1 for now. The equations
of motion are:
.. math::
\begin{aligned}
\frac{\partial \rho'}{\partial t} &=& -\Ub\cdot\nabla\rho' \underbrace{- \rho'\nabla\cdot\Ub - \nabla\cdot\left(\rho_0\Ubt\right)}_{f_{\rho'}}, \\
\frac{\partial X_k}{\partial t} &=& -\Ub\cdot\nabla X_k ~~~ \text{(no forcing)}, \\
\frac{\partial(\rho h)'}{\partial t} &=& -\Ub\cdot\nabla(\rho h)' \underbrace{- (\rho h)'\nabla\cdot\Ub - \nabla\cdot\left[(\rho h)_0\Ubt\right] + \left(\Ubt\cdot\eb_r\right)\frac{\partial p_0}{\partial r} + \nabla\cdot\kth\nabla T}_{f_{(\rho h)'}}, \nonumber \\
&& \\
\frac{\partial\Ubt}{\partial t} &=& -\Ub\cdot\nabla\Ubt \underbrace{- \left(\Ubt\cdot\eb_r\right)\frac{\partial w_0}{\partial r}\eb_r \underbrace{- \frac{1}{\rho}\nabla\pi + \frac{1}{\rho_0}\frac{\partial\pi_0}{\partial r}\eb_r - \frac{(\rho-\rho_0)}{\rho}g\eb_r}_{\hbox{terms included in $\fb_{\Ubt}$}}}_{\hbox{forcing terms}}.\end{aligned}
.. _d-cartesian-case-4:
2D Cartesian Case
-----------------
#. Predict :math:`s` to r-faces using a 1D extrapolation.
#. Predict :math:`s` to x-faces using a full-dimensional extrapolation.
#. Predict :math:`s` to x-faces using a 1D extrapolation.
#. Predict :math:`s` to r-faces using a full-dimensional extrapolation.
* Predict :math:`s` to r-faces using a 1D extrapolation:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_r} &=& s_{\ib-\eb_r}^n + \left(\half - \frac{\dt}{2h}w_{\ib-\half\eb_r}^{\mac}\right)\Delta_r s_{\ib-\eb_r}^n, \\
   s_{R,\ib-\half\eb_r} &=& s_{\ib}^n - \left(\half + \frac{\dt}{2h}w_{\ib-\half\eb_r}^{\mac}\right)\Delta_r s_{\ib}^n.\end{aligned}
Upwind based on :math:`w^{\mac}`:
.. math::
s_{\ib-\half\eb_r} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_r} + s_{R,\ib-\half\eb_r}\right), & \left|w^{\mac}_{\ib-\half\eb_r}\right| < \epsilon \\
s_{L,\ib-\half\eb_r}, & w^{\mac}_{\ib-\half\eb_r} > 0, \\
s_{R,\ib-\half\eb_r}, & w^{\mac}_{\ib-\half\eb_r} < 0. \\
\end{cases}
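Because the MAC velocity already lives on the face, this scalar extrapolation needs no ``max``/``min`` selection. A sketch under the same illustrative naming as before:

```python
def extrapolate_1d_mac(s_lo, s_hi, ds_lo, ds_hi, w_mac, dt, h):
    """1D extrapolation of a scalar to a face using the face-centered
    MAC velocity w_mac; contrast with the cell-centered-velocity case,
    where max/min picks the velocity on each side of the face."""
    s_L = s_lo + (0.5 - dt / (2.0 * h) * w_mac) * ds_lo
    s_R = s_hi - (0.5 + dt / (2.0 * h) * w_mac) * ds_hi
    return s_L, s_R
```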
* Predict :math:`s` to x-faces using a full-dimensional extrapolation. First, the normal derivative and forcing terms:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} &=& s_{\ib-\eb_x}^n + \left(\half - \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib-\eb_x}^n + \frac{\dt}{2}f_{\ib-\eb_x}^n \\
s_{R,\ib-\half\eb_x}^{\edge} &=& s_{\ib}^n - \left(\half + \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib}^n + \frac{\dt}{2}f_{\ib}^n \end{aligned}
Account for the transverse terms:
**if** is_conservative **then**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} &=& s_{L,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{2h}\left[\left(w^{\mac}s\right)_{\ib-\eb_x+\half\eb_r} - \left(w^{\mac}s\right)_{\ib-\eb_x-\half\eb_r}\right] - \frac{\dt}{2h}s_{\ib-\eb_x}^{n}\left(u_{\ib-\half\eb_x}^{\mac}-u_{\ib-\frac{3}{2}\eb_x}^{\mac}\right)\nonumber \\
&&\\
s_{R,\ib-\half\eb_x}^{\edge} &=& s_{R,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{2h}\left[\left(w^{\mac}s\right)_{\ib+\half\eb_r} - \left(w^{\mac}s\right)_{\ib-\half\eb_r}\right] - \frac{\dt}{2h}s_{\ib}^{n}\left(u_{\ib+\half\eb_x}^{\mac}-u_{\ib-\half\eb_x}^{\mac}\right)\end{aligned}
**else**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} &=& s_{L,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{4h}\left(w^{\mac}_{\ib-\eb_x+\half\eb_r} + w^{\mac}_{\ib-\eb_x-\half\eb_r}\right)\left(s_{\ib-\eb_x+\half\eb_r} - s_{\ib-\eb_x-\half\eb_r}\right)\\
s_{R,\ib-\half\eb_x}^{\edge} &=& s_{R,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{4h}\left(w^{\mac}_{\ib+\half\eb_r} + w^{\mac}_{\ib-\half\eb_r}\right)\left(s_{\ib+\half\eb_r} - s_{\ib-\half\eb_r}\right)\end{aligned}
**end if**
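The two branches above differ in whether the transverse term is a flux difference (plus an :math:`s\,\nabla\cdot\Ub`-type correction in the normal direction) or a pure advective derivative. A minimal sketch, with illustrative names:

```python
def transverse_update(s_face, s_cell, s_hi, s_lo, w_hi, w_lo,
                      u_hi, u_lo, dt, h, is_conservative):
    """Apply the transverse terms to a 1D-predicted face state.
    s_hi/s_lo and w_hi/w_lo live on the transverse faces of the
    originating cell; u_hi/u_lo are the normal MAC velocities on that
    cell's normal faces."""
    if is_conservative:
        # Flux-difference form plus the s * (normal divergence) piece.
        return (s_face
                - dt / (2.0 * h) * (w_hi * s_hi - w_lo * s_lo)
                - dt / (2.0 * h) * s_cell * (u_hi - u_lo))
    # Convective (advective) form.
    return s_face - dt / (4.0 * h) * (w_hi + w_lo) * (s_hi - s_lo)
```

For a divergence-free MAC field and smooth data the two forms agree to the order of the scheme; the conservative form is used for quantities advected in conservation form.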
* Account for the :math:`\partial w_0/\partial r` term:
**if** is_vel **and** comp = 2 **then**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} &=& s_{L,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{4h}\left(\wt^{\mac}_{\ib-\eb_x+\half\eb_r} + \wt^{\mac}_{\ib-\eb_x-\half\eb_r}\right)\left(w_{0,\ib+\half\eb_r}-w_{0,\ib-\half\eb_r}\right) \\
s_{R,\ib-\half\eb_x}^{\edge} &=& s_{R,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{4h}\left(\wt^{\mac}_{\ib+\half\eb_r} + \wt^{\mac}_{\ib-\half\eb_r}\right)\left(w_{0,\ib+\half\eb_r}-w_{0,\ib-\half\eb_r}\right) \\\end{aligned}
**end if**
* Upwind based on :math:`u^{\mac}`.
.. math::
s_{\ib-\half\eb_x}^{\edge} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x}^{\edge} + s_{R,\ib-\half\eb_x}^{\edge}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}^{\edge}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}^{\edge}, & u^{\mac}_{\ib-\half\eb_x} < 0.
\end{cases}
* Predict :math:`s` to x-faces using a 1D extrapolation:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x} &=& s_{\ib-\eb_x}^n + \left(\half - \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib-\eb_x}^n, \\
   s_{R,\ib-\half\eb_x} &=& s_{\ib}^n - \left(\half + \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib}^n.\end{aligned}
Upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x} + s_{R,\ib-\half\eb_x}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}, & u^{\mac}_{\ib-\half\eb_x} < 0. \\
\end{cases}
* Predict :math:`s` to r-faces using a full-dimensional extrapolation. First, the normal derivative and forcing terms:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_r}^{\edge} &=& s_{\ib-\eb_r}^n + \left(\half - \frac{\dt}{2h}w_{\ib-\half\eb_r}^{\mac}\right)\Delta_r s_{\ib-\eb_r}^n + \frac{\dt}{2}f_{\ib-\eb_r}^n \\
s_{R,\ib-\half\eb_r}^{\edge} &=& s_{\ib}^n - \left(\half + \frac{\dt}{2h}w_{\ib-\half\eb_r}^{\mac}\right)\Delta_r s_{\ib}^n + \frac{\dt}{2}f_{\ib}^n \end{aligned}
Account for the transverse terms:
**if** is_conservative **then**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_r}^{\edge} &=& s_{L,\ib-\half\eb_r}^{\edge} -
\frac{\dt}{2h}\left[\left(u^{\mac}s\right)_{\ib-\eb_r+\half\eb_x} - \left(u^{\mac}s\right)_{\ib-\eb_r-\half\eb_x}\right] - \frac{\dt}{2h}s_{\ib-\eb_r}^{n}\left(w_{\ib-\half\eb_r}^{\mac}-w_{\ib-\frac{3}{2}\eb_r}^{\mac}\right)\nonumber\\
&& \\
s_{R,\ib-\half\eb_r}^{\edge} &=& s_{R,\ib-\half\eb_r}^{\edge} -
\frac{\dt}{2h}\left[\left(u^{\mac}s\right)_{\ib+\half\eb_x} - \left(u^{\mac}s\right)_{\ib-\half\eb_x}\right] - \frac{\dt}{2h}s_{\ib}^{n}\left(w_{\ib+\half\eb_r}^{\mac}-w_{\ib-\half\eb_r}^{\mac}\right)\end{aligned}
**else**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_r}^{\edge} &=& s_{L,\ib-\half\eb_r}^{\edge} -
\frac{\dt}{4h}\left(u^{\mac}_{\ib-\eb_r+\half\eb_x} + u^{\mac}_{\ib-\eb_r-\half\eb_x}\right)\left(s_{\ib-\eb_r+\half\eb_x} - s_{\ib-\eb_r-\half\eb_x}\right)\\
s_{R,\ib-\half\eb_r}^{\edge} &=& s_{R,\ib-\half\eb_r}^{\edge} -
\frac{\dt}{4h}\left(u^{\mac}_{\ib+\half\eb_x} + u^{\mac}_{\ib-\half\eb_x}\right)\left(s_{\ib+\half\eb_x} - s_{\ib-\half\eb_x}\right)\end{aligned}
**end if**
* Account for the :math:`\partial w_0/\partial r` term:
**if** is_vel **and** comp = 2 **then**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_r}^{\edge} &=& s_{L,\ib-\half\eb_r}^{\edge} -
\frac{\dt}{4h}\left(\wt^{\mac}_{\ib-\half\eb_r} + \wt^{\mac}_{\ib-\frac{3}{2}\eb_r}\right)\left(w_{0,\ib-\half\eb_r}-w_{0,\ib-\frac{3}{2}\eb_r}\right) \\
s_{R,\ib-\half\eb_r}^{\edge} &=& s_{R,\ib-\half\eb_r}^{\edge} -
\frac{\dt}{4h}\left(\wt^{\mac}_{\ib+\half\eb_r} + \wt^{\mac}_{\ib-\half\eb_r}\right)\left(w_{0,\ib+\half\eb_r}-w_{0,\ib-\half\eb_r}\right) \\\end{aligned}
**end if**
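When predicting the perturbational normal velocity, the :math:`\partial w_0/\partial r` contribution enters as one more forcing-like correction before the upwinding. A sketch (names are illustrative):

```python
def w0_correction(wt_face, wt_mac_hi, wt_mac_lo, w0_hi, w0_lo, dt, h):
    """Subtract the -(dt/4h)(wt_hi + wt_lo)(w0_hi - w0_lo) term, i.e.
    the discretized -(Ubt . e_r) dw0/dr forcing, from a predicted
    face state."""
    return wt_face - dt / (4.0 * h) * (wt_mac_hi + wt_mac_lo) * (w0_hi - w0_lo)
```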
* Upwind based on :math:`w^{\mac}`:
.. math::
   s_{\ib-\half\eb_r}^{\edge} =
   \begin{cases}
   \half\left(s_{L,\ib-\half\eb_r}^{\edge} + s_{R,\ib-\half\eb_r}^{\edge}\right), & \left|w^{\mac}_{\ib-\half\eb_r}\right| < \epsilon \\
   s_{L,\ib-\half\eb_r}^{\edge}, & w^{\mac}_{\ib-\half\eb_r} > 0, \\
   s_{R,\ib-\half\eb_r}^{\edge}, & w^{\mac}_{\ib-\half\eb_r} < 0. \\
   \end{cases}
.. _d-cartesian-case-5:
3D Cartesian Case
-----------------
This algorithm is more complicated than the 2D case since we include
the effects of corner coupling.
#. Predict :math:`s` to x-faces using a 1D extrapolation.
#. Predict :math:`s` to y-faces using a 1D extrapolation.
#. Predict :math:`s` to r-faces using a 1D extrapolation.
#. Update prediction of :math:`s` to x-faces by accounting for y-derivatives.
#. Update prediction of :math:`s` to x-faces by accounting for r-derivatives.
#. Update prediction of :math:`s` to y-faces by accounting for x-derivatives.
#. Update prediction of :math:`s` to y-faces by accounting for r-derivatives.
#. Update prediction of :math:`s` to r-faces by accounting for x-derivatives.
#. Update prediction of :math:`s` to r-faces by accounting for y-derivatives.
#. Predict :math:`s` to x-faces using a full-dimensional extrapolation.
#. Predict :math:`s` to y-faces using a full-dimensional extrapolation.
#. Predict :math:`s` to r-faces using a full-dimensional extrapolation.
* Predict :math:`s` to x-faces using a 1D extrapolation.
.. math::
   s_{L,\ib-\half\eb_x} = s_{\ib-\eb_x}^n + \left(\half - \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib-\eb_x}^n,
:label: 3D predict s to left
.. math::
   s_{R,\ib-\half\eb_x} = s_{\ib}^n - \left(\half + \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib}^n.
:label: 3D predict s to right
Upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x} + s_{R,\ib-\half\eb_x}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}, & u^{\mac}_{\ib-\half\eb_x} < 0. \\
\end{cases}
* Predict :math:`s` to y-faces using a 1D extrapolation.
* Predict :math:`s` to r-faces using a 1D extrapolation.
* Update prediction of :math:`s` to x-faces by accounting for y-derivatives.
The notation :math:`s_{\ib-\half\eb_x}^{x|y}` means “state :math:`s_{\ib-\half\eb_x}`
that has been updated to account for the transverse derivatives in
the :math:`y`-direction”.
**if** is_conservative **then**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{x|y} &=& s_{L,\ib-\half\eb_x} - \frac{\dt}{3h}\left[(sv^{\mac})_{\ib-\eb_x+\half\eb_y}-(sv^{\mac})_{\ib-\eb_x-\half\eb_y}\right], \\
s_{R,\ib-\half\eb_x}^{x|y} &=& s_{R,\ib-\half\eb_x} - \frac{\dt}{3h}\left[(sv^{\mac})_{\ib+\half\eb_y}-(sv^{\mac})_{\ib-\half\eb_y}\right].\end{aligned}
**else**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{x|y} &=& s_{L,\ib-\half\eb_x} - \frac{\dt}{6h}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac} + v_{\ib-\eb_x-\half\eb_y}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_y} - s_{\ib-\eb_x-\half\eb_y}\right), \\
s_{R,\ib-\half\eb_x}^{x|y} &=& s_{R,\ib-\half\eb_x} - \frac{\dt}{6h}\left(v_{\ib+\half\eb_y}^{\mac} + v_{\ib-\half\eb_y}^{\mac}\right)\left(s_{\ib+\half\eb_y} - s_{\ib-\half\eb_y}\right).\end{aligned}
**end if**
* Upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x}^{x|y} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x}^{x|y} + s_{R,\ib-\half\eb_x}^{x|y}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}^{x|y}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}^{x|y}, & u^{\mac}_{\ib-\half\eb_x} < 0.
\end{cases}
* Update prediction of :math:`s` to x-faces by accounting for r-derivatives.
* Update prediction of :math:`s` to y-faces by accounting for x-derivatives.
* Update prediction of :math:`s` to y-faces by accounting for r-derivatives.
* Update prediction of :math:`s` to r-faces by accounting for x-derivatives.
* Update prediction of :math:`s` to r-faces by accounting for y-derivatives.
* Predict :math:`s` to x-faces using a full-dimensional extrapolation.
**if** is_conservative **then**
.. math::
\begin{aligned}
   \begin{aligned}
   s_{L,\ib-\half\eb_x}^{\edge} = s_{L,\ib-\half\eb_x} &-& \frac{\dt}{2h}\left[(s^{y|r}v^{\mac})_{\ib-\eb_x+\half\eb_y}-(s^{y|r}v^{\mac})_{\ib-\eb_x-\half\eb_y}\right] \nonumber \\
   &-& \frac{\dt}{2h}\left[(s^{r|y}w^{\mac})_{\ib-\eb_x+\half\eb_r}-(s^{r|y}w^{\mac})_{\ib-\eb_x-\half\eb_r}\right] \nonumber \\
   &-& \frac{\dt}{2h}s_{\ib-\eb_x}\left(u_{\ib-\half\eb_x}^{\mac}-u_{\ib-\frac{3}{2}\eb_x}^{\mac}\right) + \frac{\dt}{2}f_{\ib-\eb_x}, \\
   s_{R,\ib-\half\eb_x}^{\edge} = s_{R,\ib-\half\eb_x} &-& \frac{\dt}{2h}\left[(s^{y|r}v^{\mac})_{\ib+\half\eb_y}-(s^{y|r}v^{\mac})_{\ib-\half\eb_y}\right] \nonumber \\
   &-& \frac{\dt}{2h}\left[(s^{r|y}w^{\mac})_{\ib+\half\eb_r}-(s^{r|y}w^{\mac})_{\ib-\half\eb_r}\right] \nonumber \\
   &-& \frac{\dt}{2h}s_{\ib}\left(u_{\ib+\half\eb_x}^{\mac}-u_{\ib-\half\eb_x}^{\mac}\right) + \frac{\dt}{2}f_{\ib}.\end{aligned}
**else**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} = s_{L,\ib-\half\eb_x} &-& \frac{\dt}{4h}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac}+v_{\ib-\eb_x-\half\eb_y}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_y}^{y|r}-s_{\ib-\eb_x-\half\eb_y}^{y|r}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib-\eb_x+\half\eb_r}^{\mac}+w_{\ib-\eb_x-\half\eb_r}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_r}^{r|y}-s_{\ib-\eb_x-\half\eb_r}^{r|y}\right) + \frac{\dt}{2}f_{\ib-\eb_x}, \nonumber \\
&& \\
s_{R,\ib-\half\eb_x}^{\edge} = s_{R,\ib-\half\eb_x} &-& \frac{\dt}{4h}\left(v_{\ib+\half\eb_y}^{\mac}+v_{\ib-\half\eb_y}^{\mac}\right)\left(s_{\ib+\half\eb_y}^{y|r}-s_{\ib-\half\eb_y}^{y|r}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib+\half\eb_r}^{\mac}+w_{\ib-\half\eb_r}^{\mac}\right)\left(s_{\ib+\half\eb_r}^{r|y}-s_{\ib-\half\eb_r}^{r|y}\right) + \frac{\dt}{2}f_{\ib}.\end{aligned}
**end if**
* Account for the :math:`\partial w_0/\partial r` term:
**if** is_vel **and** comp = 2 **then**
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} &=& s_{L,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{4h}\left(\wt^{\mac}_{\ib-\eb_x+\half\eb_r} + \wt^{\mac}_{\ib-\eb_x-\half\eb_r}\right)\left(w_{0,\ib+\half\eb_r}-w_{0,\ib-\half\eb_r}\right) \\
s_{R,\ib-\half\eb_x}^{\edge} &=& s_{R,\ib-\half\eb_x}^{\edge} -
\frac{\dt}{4h}\left(\wt^{\mac}_{\ib+\half\eb_r} + \wt^{\mac}_{\ib-\half\eb_r}\right)\left(w_{0,\ib+\half\eb_r}-w_{0,\ib-\half\eb_r}\right) \\\end{aligned}
**end if**
* Upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x}^{\edge} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x}^{\edge} + s_{R,\ib-\half\eb_x}^{\edge}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}^{\edge}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}^{\edge}, & u^{\mac}_{\ib-\half\eb_x} < 0.
\end{cases}
* Predict :math:`s` to y-faces using a full-dimensional extrapolation.
* Predict :math:`s` to r-faces using a full-dimensional extrapolation.
.. _d-spherical-case-2:
3D Spherical Case
-----------------
The spherical case is the same as the plane-parallel 3D Cartesian
case, except the :math:`\partial w_0/\partial r` term enters in the full
dimensional extrapolation for each direction when predicting velocity
to faces. As in the plane-parallel case, make sure to upwind based on
the full velocity.
Computing :math:`\Ub^{\mac,*}` in VARDEN
========================================
.. _d-cartesian-case-6:
2D Cartesian Case
-----------------
* We do a 1D Taylor series extrapolation to get both components of velocity at the x-face:
.. math::
u_{L,\ib-\half\eb_x}^{1D} = u_{\ib-\eb_x} + \left[\half - \frac{\dt}{2h}{\rm max}(0,u_{\ib-\eb_x})\right]\Delta_xu_{\ib-\eb_x},
:label: varden U_L^1D
.. math::
   u_{R,\ib-\half\eb_x}^{1D} = u_{\ib} - \left[\half + \frac{\dt}{2h}{\rm min}(0,u_{\ib})\right]\Delta_xu_{\ib} .
.. math::
   v_{L,\ib-\half\eb_x}^{1D} = v_{\ib-\eb_x} + \left[\half - \frac{\dt}{2h}{\rm max}(0,u_{\ib-\eb_x})\right]\Delta_xv_{\ib-\eb_x},
.. math::
   v_{R,\ib-\half\eb_x}^{1D} = v_{\ib} - \left[\half + \frac{\dt}{2h}{\rm min}(0,u_{\ib})\right]\Delta_xv_{\ib}.
We obtain the normal velocity using the Riemann problem:
.. math::
u_{\ib-\half\eb_x}^{1D} =
\begin{cases}
0, & \left(u_{L,\ib-\half\eb_x}^{1D} \le 0 ~~ {\rm AND} ~~ u_{R,\ib-\half\eb_x}^{1D} \ge 0\right) ~~ {\rm OR} ~~ \left|u_{L,\ib-\half\eb_x}^{1D} + u_{R,\ib-\half\eb_x}^{1D}\right| < \epsilon, \\
u_{L,\ib-\half\eb_x}^{1D}, & u_{L,\ib-\half\eb_x}^{1D} + u_{R,\ib-\half\eb_x}^{1D} > 0, \\
u_{R,\ib-\half\eb_x}^{1D}, & u_{L,\ib-\half\eb_x}^{1D} + u_{R,\ib-\half\eb_x}^{1D} < 0.
\end{cases}
We obtain the transverse velocity by upwinding based on
:math:`u_{\ib-\half\eb_x}^{1D}`:
.. math::
v_{\ib-\half\eb_x}^{1D} =
\begin{cases}
\half\left(v_{L,\ib-\half\eb_x}^{1D} + v_{R,\ib-\half\eb_x}^{1D}\right), & \left|u_{\ib-\half\eb_x}^{1D}\right| < \epsilon \\
v_{L,\ib-\half\eb_x}^{1D}, & u_{\ib-\half\eb_x}^{1D} > 0, \\
v_{R,\ib-\half\eb_x}^{1D}, & u_{\ib-\half\eb_x}^{1D} < 0.
\end{cases}
:label: Transverse Velocity Riemann Problem
* We perform analogous operations to compute both components of velocity
at the y-faces, :math:`\Ub_{\ib-\half\eb_y}^{1D}`.
* Now we do a full-dimensional extrapolation to get the MAC velocity at
the x-faces (note that we only compute the normal components):
.. math::
\begin{aligned}
u_{L,\ib-\half\eb_x}^{\mac,*} &=& u_{L,\ib-\half\eb_x}^{1D} - \frac{\dt}{4h}\left(v_{\ib-\eb_x+\half\eb_y}^{1D}+v_{\ib-\eb_x-\half\eb_y}^{1D}\right)\left(u_{\ib-\eb_x+\half\eb_y}^{1D} - u_{\ib-\eb_x-\half\eb_y}^{1D}\right) + \frac{\dt}{2}f_{u,\ib-\eb_x}, \\
u_{R,\ib-\half\eb_x}^{\mac,*} &=& u_{R,\ib-\half\eb_x}^{1D} - \frac{\dt}{4h}\left(v_{\ib+\half\eb_y}^{1D}+v_{\ib-\half\eb_y}^{1D}\right)\left(u_{\ib+\half\eb_y}^{1D} - u_{\ib-\half\eb_y}^{1D}\right) + \frac{\dt}{2}f_{u,\ib}.\end{aligned}
Then we solve a Riemann problem:
.. math::
u_{\ib-\half\eb_x}^{\mac,*} =
\begin{cases}
0, & \left(u_{L,\ib-\half\eb_x}^{\mac,*} \le 0 ~~ {\rm AND} ~~ u_{R,\ib-\half\eb_x}^{\mac,*} \ge 0\right) ~~ {\rm OR} ~~ \left|u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*}\right| < \epsilon, \\
u_{L,\ib-\half\eb_x}^{\mac,*}, & u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*} > 0, \\
u_{R,\ib-\half\eb_x}^{\mac,*}, & u_{L,\ib-\half\eb_x}^{\mac,*} + u_{R,\ib-\half\eb_x}^{\mac,*} < 0.
\end{cases}
:label: umac Riemann Problem
* We perform analogous operations to compute the normal velocity at the
y-faces, :math:`v^{\mac,*}_{\ib-\half\eb_y}`.
.. _d-cartesian-case-7:
3D Cartesian Case
-----------------
This is more complicated than the 2D case because we include corner
coupling. We compute :math:`\Ub_{\ib-\half\eb_x}^{1D},
\Ub_{\ib-\half\eb_y}^{1D}`, and :math:`\Ub_{\ib-\half\eb_z}^{1D}` in
a manner analogous to :eq:`varden U_L^1D`-:eq:`Transverse Velocity
Riemann Problem`. Then we compute an intermediate state,
:math:`u_{\ib-\half\eb_y}^{y|z}`, which is described as “state
:math:`u_{\ib-\half\eb_y}^{1D}` that has been updated to account for
the transverse derivatives in the z direction”, using:
.. math::
\begin{aligned}
u_{L,\ib-\half\eb_y}^{y|z} &=& u_{L,\ib-\half\eb_y}^{1D} - \frac{\dt}{6h}\left(w_{\ib-\eb_y+\half\eb_z}^{1D}+w_{\ib-\eb_y-\half\eb_z}^{1D}\right)\left(u_{\ib-\eb_y+\half\eb_z}^{1D}-u_{\ib-\eb_y-\half\eb_z}^{1D}\right), \\
u_{R,\ib-\half\eb_y}^{y|z} &=& u_{R,\ib-\half\eb_y}^{1D} - \frac{\dt}{6h}\left(w_{\ib+\half\eb_z}^{1D}+w_{\ib-\half\eb_z}^{1D}\right)\left(u_{\ib+\half\eb_z}^{1D}-u_{\ib-\half\eb_z}^{1D}\right).\end{aligned}
Then upwind based on :math:`v_{\ib-\half\eb_y}^{1D}`:
.. math::
u_{\ib-\half\eb_y}^{y|z} =
\begin{cases}
\half\left(u_{L,\ib-\half\eb_y}^{y|z} + u_{R,\ib-\half\eb_y}^{y|z}\right), & \left|v_{\ib-\half\eb_y}^{1D}\right| < \epsilon \\
u_{L,\ib-\half\eb_y}^{y|z}, & v_{\ib-\half\eb_y}^{1D} > 0, \\
u_{R,\ib-\half\eb_y}^{y|z}, & v_{\ib-\half\eb_y}^{1D} < 0.
\end{cases}
We use an analogous procedure to compute five more intermediate states,
:math:`u_{\ib-\half\eb_z}^{z|y}, v_{\ib-\half\eb_x}^{x|z},
v_{\ib-\half\eb_z}^{z|x}, w_{\ib-\half\eb_x}^{x|y}`, and
:math:`w_{\ib-\half\eb_y}^{y|x}`. Then we do a full-dimensional
extrapolation to get the MAC velocities at normal faces:
.. math::
\begin{aligned}
u_{L,\ib-\half\eb_x}^{\mac,*} = u_{L,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{4h}\left(v_{\ib-\eb_x+\half\eb_y}^{1D}+v_{\ib-\eb_x-\half\eb_y}^{1D}\right)\left(u_{\ib-\eb_x+\half\eb_y}^{y|z}-u_{\ib-\eb_x-\half\eb_y}^{y|z}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib-\eb_x+\half\eb_z}^{1D}+w_{\ib-\eb_x-\half\eb_z}^{1D}\right)\left(u_{\ib-\eb_x+\half\eb_z}^{z|y}-u_{\ib-\eb_x-\half\eb_z}^{z|y}\right) + \frac{\dt}{2}f_{u,\ib-\eb_x}, \\
u_{R,\ib-\half\eb_x}^{\mac,*} = u_{R,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{4h}\left(v_{\ib+\half\eb_y}^{1D}+v_{\ib-\half\eb_y}^{1D}\right)\left(u_{\ib+\half\eb_y}^{y|z}-u_{\ib-\half\eb_y}^{y|z}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib+\half\eb_z}^{1D}+w_{\ib-\half\eb_z}^{1D}\right)\left(u_{\ib+\half\eb_z}^{z|y}-u_{\ib-\half\eb_z}^{z|y}\right) + \frac{\dt}{2}f_{u,\ib}.\end{aligned}
Then we use the Riemann solver given above for the 2D case (:eq:`umac Riemann Problem`) to compute
:math:`u_{\ib-\half\eb_x}^{\mac,*}`. We use an analogous procedure to
obtain :math:`v_{\ib-\half\eb_y}^{\mac,*}` and
:math:`w_{\ib-\half\eb_z}^{\mac,*}`.
Computing :math:`\Ub^{\edge}` and :math:`\rho^{\edge}` in VARDEN
================================================================
To compute :math:`\Ub^{\edge}`, VARDEN uses the exact same algorithm
as the :math:`s^{\edge}` case in MAESTROeX. The algorithm for
:math:`\rho^{\edge}` in VARDEN is slightly different than the
:math:`s^{\edge}` case in MAESTROeX since it uses a “conservative”
formulation. Here, :math:`s` is used in place of either :math:`\rho, u, v`, or
:math:`w` (in 3D).
.. _d-cartesian-case-8:
2D Cartesian Case
-----------------
The 1D extrapolation is:
.. math::
s_{L,\ib-\half\eb_x}^{1D} = s_{\ib-\eb_x}^n + \left(\half - \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib-\eb_x}^n,
:label: varden s_L^1D
.. math::
s_{R,\ib-\half\eb_x}^{1D} = s_{\ib} - \left(\half + \frac{\dt}{2h}u_{\ib-\half\eb_x}^{\mac}\right)\Delta_x s_{\ib}^n.
:label: varden s_R^1D
Then we upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x}^{1D} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x}^{1D} + s_{R,\ib-\half\eb_x}^{1D}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}^{1D}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}^{1D}, & u^{\mac}_{\ib-\half\eb_x} < 0. \\
\end{cases}
We use an analogous procedure to obtain :math:`s_{\ib-\half\eb_y}^{1D}`.
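The 1D extrapolate-and-upwind pattern above is simple enough to sketch in code. The following Python sketch implements :eq:`varden s_L^1D`, :eq:`varden s_R^1D`, and the upwind selection for a single row of cells. It is only a sketch: the slopes here are plain central differences, whereas the production code uses limited slopes.

```python
import numpy as np

def edge_states_1d(s, umac, dt, h, eps=1.0e-8):
    """Sketch of the 1D extrapolation and upwind choice at x-faces.

    s[i]    : cell-centered values, i = 0..n-1
    umac[j] : MAC velocity at face j, which sits between cells j-1 and j
    """
    n = len(s)
    # placeholder slopes: plain central differences (VARDEN uses limited slopes)
    ds = np.zeros(n)
    ds[1:-1] = 0.5 * (s[2:] - s[:-2])

    s_edge = np.zeros(n + 1)
    for j in range(1, n):                      # interior faces only
        u = umac[j]
        sL = s[j-1] + (0.5 - dt * u / (2.0 * h)) * ds[j-1]
        sR = s[j]   - (0.5 + dt * u / (2.0 * h)) * ds[j]
        if abs(u) < eps:
            s_edge[j] = 0.5 * (sL + sR)        # average when |u| is tiny
        else:
            s_edge[j] = sL if u > 0.0 else sR  # otherwise take the upwind state
    return s_edge
```

The same function applies unchanged in the y direction by passing a y-row of data and :math:`v^{\mac}`.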
Now we do a full-dimensional extrapolation of :math:`s` to each face. The
extrapolation of a “non-conserved” :math:`s` to x-faces is:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} &=& s_{L,\ib-\half\eb_x}^{1D} - \frac{\dt}{4h}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac}+v_{\ib-\eb_x-\half\eb_y}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_y}^{1D} - s_{\ib-\eb_x-\half\eb_y}^{1D}\right) + \frac{\dt}{2}f_{s,\ib-\eb_x}, \\
s_{R,\ib-\half\eb_x}^{\edge} &=& s_{R,\ib-\half\eb_x}^{1D} - \frac{\dt}{4h}\left(v_{\ib+\half\eb_y}^{\mac}+v_{\ib-\half\eb_y}^{\mac}\right)\left(s_{\ib+\half\eb_y}^{1D} - s_{\ib-\half\eb_y}^{1D}\right) + \frac{\dt}{2}f_{s,\ib}.\end{aligned}
The extrapolation of a “conserved” :math:`s` to x-faces is:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} = s_{L,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{2h}\left[(s^{1D} v^{\mac})_{\ib-\eb_x+\half\eb_y} - (s^{1D} v^{\mac})_{\ib-\eb_x-\half\eb_y}\right] \nonumber \\
&-& \frac{\dt}{2}s_{\ib-\eb_x}(\nabla\cdot\Ub^{\mac})_{\ib-\eb_x} + \frac{\dt}{2h}s_{\ib-\eb_x}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac} - v_{\ib-\eb_x-\half\eb_y}^{\mac}\right) + \frac{\dt}{2}f_{s,\ib-\eb_x}, \\
s_{R,\ib-\half\eb_x}^{\edge} = s_{R,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{2h}\left[(s^{1D} v^{\mac})_{\ib+\half\eb_y} - (s^{1D} v^{\mac})_{\ib-\half\eb_y}\right] \nonumber \\
&-& \frac{\dt}{2}s_{\ib}(\nabla\cdot\Ub^{\mac})_{\ib} + \frac{\dt}{2h}s_{\ib}\left(v_{\ib+\half\eb_y}^{\mac} - v_{\ib-\half\eb_y}^{\mac}\right) + \frac{\dt}{2}f_{s,\ib}.\end{aligned}
Then we upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x}^{\edge} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x}^{\edge} + s_{R,\ib-\half\eb_x}^{\edge}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}^{\edge}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}^{\edge}, & u^{\mac}_{\ib-\half\eb_x} < 0.
\end{cases}
:label: varden s^edge upwind
We use an analogous procedure to compute :math:`s_{\ib-\half\eb_y}^{\edge}`.
.. _d-cartesian-case-9:
3D Cartesian Case
-----------------
This is more complicated than the 2D case because we include corner
coupling. We first compute :math:`s_{\ib-\half\eb_x}^{1D}`,
:math:`s_{\ib-\half\eb_y}^{1D}`, and :math:`s_{\ib-\half\eb_z}^{1D}` in an
analogous manner to :eq:`varden s_L^1D` and :eq:`varden s_R^1D`. Then we compute six intermediate states,
:math:`s_{\ib-\half\eb_x}^{x|y}, s_{\ib-\half\eb_x}^{x|z},
s_{\ib-\half\eb_y}^{y|x}, s_{\ib-\half\eb_y}^{y|z},
s_{\ib-\half\eb_z}^{z|x}`, and :math:`s_{\ib-\half\eb_z}^{z|y}`. For the
“non-conservative case”, we use, for example:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{x|y} &=& s_{L,\ib-\half\eb_x}^{1D} - \frac{\dt}{6h}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac} + v_{\ib-\eb_x-\half\eb_y}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_y}^{1D} - s_{\ib-\eb_x-\half\eb_y}^{1D}\right), \\
s_{R,\ib-\half\eb_x}^{x|y} &=& s_{R,\ib-\half\eb_x}^{1D} - \frac{\dt}{6h}\left(v_{\ib+\half\eb_y}^{\mac} + v_{\ib-\half\eb_y}^{\mac}\right)\left(s_{\ib+\half\eb_y}^{1D} - s_{\ib-\half\eb_y}^{1D}\right).\end{aligned}
For the “conservative” case, we use, for example:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{x|y} &=& s_{L,\ib-\half\eb_x}^{1D} - \frac{\dt}{3h}\left[(sv^{\mac})_{\ib-\eb_x+\half\eb_y}-(sv^{\mac})_{\ib-\eb_x-\half\eb_y}\right], \\
s_{R,\ib-\half\eb_x}^{x|y} &=& s_{R,\ib-\half\eb_x}^{1D} - \frac{\dt}{3h}\left[(sv^{\mac})_{\ib+\half\eb_y}-(sv^{\mac})_{\ib-\half\eb_y}\right].\end{aligned}
Then we upwind based on :math:`u^{\mac}`:
.. math::
s_{\ib-\half\eb_x}^{x|y} =
\begin{cases}
\half\left(s_{L,\ib-\half\eb_x}^{x|y} + s_{R,\ib-\half\eb_x}^{x|y}\right), & \left|u^{\mac}_{\ib-\half\eb_x}\right| < \epsilon \\
s_{L,\ib-\half\eb_x}^{x|y}, & u^{\mac}_{\ib-\half\eb_x} > 0, \\
s_{R,\ib-\half\eb_x}^{x|y}, & u^{\mac}_{\ib-\half\eb_x} < 0.
\end{cases}
We use an analogous procedure to compute the other five intermediate
states. Now we do a full-dimensional extrapolation of :math:`s` to each
face. The extrapolation of a “non-conserved” :math:`s` to x-faces is:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} = s_{L,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{4h}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac}+v_{\ib-\eb_x-\half\eb_y}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_y}^{y|z}-s_{\ib-\eb_x-\half\eb_y}^{y|z}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib-\eb_x+\half\eb_z}^{\mac}+w_{\ib-\eb_x-\half\eb_z}^{\mac}\right)\left(s_{\ib-\eb_x+\half\eb_z}^{z|y}-s_{\ib-\eb_x-\half\eb_z}^{z|y}\right) \nonumber \\
&+& \frac{\dt}{2}f_{s,\ib-\eb_x}, \\
s_{R,\ib-\half\eb_x}^{\edge} = s_{R,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{4h}\left(v_{\ib+\half\eb_y}^{\mac}+v_{\ib-\half\eb_y}^{\mac}\right)\left(s_{\ib+\half\eb_y}^{y|z}-s_{\ib-\half\eb_y}^{y|z}\right) \nonumber \\
&-& \frac{\dt}{4h}\left(w_{\ib+\half\eb_z}^{\mac}+w_{\ib-\half\eb_z}^{\mac}\right)\left(s_{\ib+\half\eb_z}^{z|y}-s_{\ib-\half\eb_z}^{z|y}\right) \nonumber \\
&+& \frac{\dt}{2}f_{s,\ib}.\end{aligned}
The extrapolation of a “conserved” :math:`s` to x-faces is:
.. math::
\begin{aligned}
s_{L,\ib-\half\eb_x}^{\edge} = s_{L,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{2h}\left[(s^{y|z}v^{\mac})_{\ib-\eb_x+\half\eb_y}-(s^{y|z}v^{\mac})_{\ib-\eb_x-\half\eb_y}\right] \nonumber \\
&-& \frac{\dt}{2h}\left[(s^{z|y}w^{\mac})_{\ib-\eb_x+\half\eb_z}-(s^{z|y}w^{\mac})_{\ib-\eb_x-\half\eb_z}\right] \nonumber \\
&-& \frac{\dt}{2}s_{\ib-\eb_x}(\nabla\cdot\Ub^{\mac})_{\ib-\eb_x} \nonumber \\
&+& \frac{\dt}{2h}s_{\ib-\eb_x}\left(v_{\ib-\eb_x+\half\eb_y}^{\mac}-v_{\ib-\eb_x-\half\eb_y}^{\mac}+w_{\ib-\eb_x+\half\eb_z}^{\mac}-w_{\ib-\eb_x-\half\eb_z}^{\mac}\right) \nonumber \\
&+& \frac{\dt}{2}f_{s,\ib-\eb_x}, \\
s_{R,\ib-\half\eb_x}^{\edge} = s_{R,\ib-\half\eb_x}^{1D} &-& \frac{\dt}{2h}\left[(s^{y|z}v^{\mac})_{\ib+\half\eb_y}-(s^{y|z}v^{\mac})_{\ib-\half\eb_y}\right] \nonumber \\
&-& \frac{\dt}{2h}\left[(s^{z|y}w^{\mac})_{\ib+\half\eb_z}-(s^{z|y}w^{\mac})_{\ib-\half\eb_z}\right] \nonumber \\
&-& \frac{\dt}{2}s_{\ib}(\nabla\cdot\Ub^{\mac})_{\ib} \nonumber \\
&+& \frac{\dt}{2h}s_{\ib}\left(v_{\ib+\half\eb_y}^{\mac}-v_{\ib-\half\eb_y}^{\mac}+w_{\ib+\half\eb_z}^{\mac}-w_{\ib-\half\eb_z}^{\mac}\right) \nonumber \\
&+& \frac{\dt}{2}f_{s,\ib}.\end{aligned}
Then we upwind based on :math:`u^{\mac}`, as in :eq:`varden s^edge upwind`.
We use an analogous procedure to compute both
:math:`s_{\ib-\half\eb_y}^{\edge}` and :math:`s_{\ib-\half\eb_z}^{\edge}`.
ESTATE_FPU in GODUNOV_2D/3D.f
=============================
* First, the normal predictor.
.. math::
\begin{aligned}
s_L^x &=& s_{\ib-\eb_x} + \left(\half - \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib-\eb_x} + \underbrace{\frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x}}_{\text{IF USE\_MINION}} \\
s_R^x &=& s_{\ib} - \left(\half + \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib} + \underbrace{\frac{\dt}{2}\text{TFORCES}_{\ib}}_{\text{IF USE\_MINION}}\end{aligned}
**If** USE_MINION **and** ICONSERVE **then:**
.. math::
\begin{aligned}
s_L^x &=& s_L^x - \frac{\dt}{2}s_{\ib-\eb_x}\text{DIVU}_{\ib-\eb_x} \\
s_R^x &=& s_R^x - \frac{\dt}{2}s_{\ib}\text{DIVU}_{\ib}\end{aligned}
Apply boundary conditions on :math:`s_L^x` and :math:`s_R^x`. Then,
.. math::
\text{s}_{\ib-\half\eb_x}^x =
\begin{cases}
s_L^x, & \text{UEDGE}_{\ib-\half\eb_x} > 0, \\
s_R^x, & \text{else}. \\
\end{cases}
:label: ESTATE_FPU Upwind
* Then, if :math:`|\text{UEDGE}_{\ib-\half\eb_x}| \le \epsilon`, we set :math:`s_{\ib-\half\eb_x}^x = (s_L^x+s_R^x)/2`. The procedure to obtain :math:`s_{\ib-\half\eb_y}^y` is analogous.
* Now, the transverse terms.
**If** ICONSERVE **then:**
.. math::
\begin{aligned}
\text{sedge}_L^x &=& s_{\ib-\eb_x} + \left(\half - \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib-\eb_x} + \frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x} \nonumber\\
&& - \frac{\dt}{2}\left[\frac{\text{VEDGE}_{\ib-\eb_x+\half\eb_y}s_{\ib-\eb_x+\half\eb_y}^y - \text{VEDGE}_{\ib-\eb_x-\half\eb_y}s_{\ib-\eb_x-\half\eb_y}^y}{h_y}\right.\nonumber\\
&& ~~~~~~~~~~ \left. - \frac{s_{\ib-\eb_x}(\text{VEDGE}_{\ib-\eb_x+\half\eb_y}-\text{VEDGE}_{\ib-\eb_x-\half\eb_y})}{h_y}+s_{\ib-\eb_x}\text{DIVU}_{\ib-\eb_x}\right]\\
\text{sedge}_R^x &=& s_{\ib} - \left(\half + \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib} + \frac{\dt}{2}\text{TFORCES}_{\ib} \nonumber\\
&& - \frac{\dt}{2}\left[\frac{\text{VEDGE}_{\ib+\half\eb_y}s_{\ib+\half\eb_y}^y - \text{VEDGE}_{\ib-\half\eb_y}s_{\ib-\half\eb_y}^y}{h_y}\right.\nonumber\\
&& ~~~~~~~~~~ \left. - \frac{s_{\ib}(\text{VEDGE}_{\ib+\half\eb_y}-\text{VEDGE}_{\ib-\half\eb_y})}{h_y}+s_{\ib}\text{DIVU}_{\ib}\right]\end{aligned}
* Now, define :math:`\text{VBAR}_{\ib} = (\text{VEDGE}_{\ib+\half\eb_y}+\text{VEDGE}_{\ib-\half\eb_y})/2`.
**If** NOT ICONSERVE **and** :math:`\text{VEDGE}_{\ib+\half\eb_y}\cdot\text{VEDGE}_{\ib-\half\eb_y} < 0` **and** :math:`\text{VBAR}_{\ib} < 0` **then:**
.. math::
\begin{align}
\text{sedge}_L^x = s_{\ib-\eb_x} &+ \left(\half - \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib-\eb_x} + \frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x} \nonumber\\
& - \frac{\dt}{2}\left[\frac{\text{VBAR}_{\ib-\eb_x}(s_{\ib-\eb_x+\eb_y}-s_{\ib-\eb_x})}{h_y}\right]
\end{align}
:label: transverse upwinding 1
.. math::
\begin{align}
\text{sedge}_R^x = s_{\ib} &- \left(\half + \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib} + \frac{\dt}{2}\text{TFORCES}_{\ib} \nonumber\\
& - \frac{\dt}{2}\left[\frac{\text{VBAR}_{\ib}(s_{\ib+\eb_y}-s_{\ib})}{h_y}\right]
\end{align}
**Else If** NOT ICONSERVE **and** :math:`\text{VEDGE}_{\ib+\half\eb_y}\cdot\text{VEDGE}_{\ib-\half\eb_y} < 0` **and** :math:`\text{VBAR}_{\ib} \ge 0` **then:**
.. math::
\begin{aligned}
\text{sedge}_L^x = s_{\ib-\eb_x} &+& \left(\half - \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib-\eb_x} + \frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x} \nonumber\\
&& - \frac{\dt}{2}\left[\frac{\text{VBAR}_{\ib-\eb_x}(s_{\ib-\eb_x}-s_{\ib-\eb_x-\eb_y})}{h_y}\right] \\
\text{sedge}_R^x = s_{\ib} &-& \left(\half + \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib} + \frac{\dt}{2}\text{TFORCES}_{\ib} \nonumber\\
&& - \frac{\dt}{2}\left[\frac{\text{VBAR}_{\ib}(s_{\ib}-s_{\ib-\eb_y})}{h_y}\right]\end{aligned}
**Else If** NOT ICONSERVE **and** :math:`\text{VEDGE}_{\ib+\half\eb_y}\cdot\text{VEDGE}_{\ib-\half\eb_y} \ge 0` **then:**
.. math::
\begin{align}
\text{sedge}_L^x &= s_{\ib-\eb_x} + \left(\half - \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib-\eb_x} + \frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x} \nonumber\\
& - \frac{\dt}{2}\left[\frac{(\text{VEDGE}_{\ib-\eb_x+\half\eb_y}+\text{VEDGE}_{\ib-\eb_x-\half\eb_y})(s_{\ib-\eb_x+\half\eb_y}-s_{\ib-\eb_x-\half\eb_y})}{2h_y}\right] \\
\text{sedge}_R^x &= s_{\ib} - \left(\half + \frac{\dt}{h_x}\text{UEDGE}_{\ib-\half\eb_x}\right)\Delta^x s_{\ib} + \frac{\dt}{2}\text{TFORCES}_{\ib} \nonumber\\
& - \frac{\dt}{2}\left[\frac{(\text{VEDGE}_{\ib+\half\eb_y}+\text{VEDGE}_{\ib-\half\eb_y})(s_{\ib+\half\eb_y}-s_{\ib-\half\eb_y})}{2h_y}\right]
\end{align}
:label: transverse upwinding 6
* Finally, upwind analogous to :eq:`ESTATE_FPU Upwind` to get :math:`\text{sedge}_{\ib-\half\eb_x}`.
ESTATE in GODUNOV_2D/3D.f
=========================
First, the normal predictor.
.. math::
\begin{aligned}
s_L^x &=& s_{\ib-\eb_x} + \left(\half - \frac{\dt}{h_x}u_{\ib-\eb_x}\right)\Delta^x s_{\ib-\eb_x} \\
s_R^x &=& s_{\ib} - \left(\half + \frac{\dt}{h_x}u_{\ib}\right)\Delta^x s_{\ib}\end{aligned}
**If** USE_MINION **then:**
.. math::
\begin{aligned}
s_L^x &=& s_L^x + \frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x} \\
s_R^x &=& s_R^x + \frac{\dt}{2}\text{TFORCES}_{\ib}\end{aligned}
Apply boundary conditions on :math:`s_L^x` and :math:`s_R^x`. Then,
.. math::
\text{s}_{\ib-\half\eb_x}^x =
\begin{cases}
s_L^x, & \text{UAD}_{\ib-\half\eb_x} > 0, \\
s_R^x, & \text{else}. \\
\end{cases}
:label: ESTATE Upwind
Then, if :math:`|\text{UAD}_{\ib-\half\eb_x}| \le \epsilon`, we set :math:`s_{\ib-\half\eb_x}^x = (s_L^x+s_R^x)/2`.
.. math::
\begin{aligned}
\text{sedge}_L^x = s_{\ib-\eb_x} &+& \left(\half - \frac{\dt}{h_x}u_{\ib-\eb_x}\right)\Delta^x s_{\ib-\eb_x} + \frac{\dt}{2}\text{TFORCES}_{\ib-\eb_x} \nonumber\\
&& - \frac{\dt}{2}\left[\frac{(\text{VAD}_{\ib-\eb_x+\half\eb_y}+\text{VAD}_{\ib-\eb_x-\half\eb_y})(s_{\ib-\eb_x+\half\eb_y}-s_{\ib-\eb_x-\half\eb_y})}{2h_y}\right] \\
\text{sedge}_R^x = s_{\ib} &-& \left(\half + \frac{\dt}{h_x}u_{\ib}\right)\Delta^x s_{\ib} + \frac{\dt}{2}\text{TFORCES}_{\ib} \nonumber\\
&& - \frac{\dt}{2}\left[\frac{(\text{VAD}_{\ib+\half\eb_y}+\text{VAD}_{\ib-\half\eb_y})(s_{\ib+\half\eb_y}-s_{\ib-\half\eb_y})}{2h_y}\right]\end{aligned}
Note that the 2D and 3D algorithms differ: in 3D the transverse
terms use upwinding analogous to :eq:`transverse upwinding 1`-:eq:`transverse upwinding 6`, using UAD
instead of UEDGE. Finally, upwind analogous to :eq:`ESTATE Upwind`
to get :math:`\text{sedge}_{\ib-\half\eb_x}`, but use UEDGE instead of UAD.
Piecewise Parabolic Method (PPM)
================================
Consider a scalar, :math:`s`, which we wish to predict to
time-centered edges. The PPM method is an improvement over the
piecewise-linear method. Using our notation, we modify equations
(:eq:`3D predict s to left` and :eq:`3D predict s to right`) in Section `4 <#Scalar Edge
State Prediction in MAESTROeX>`__ to obtain better estimates for the
time-centered 1D edge states, :math:`s_{L/R,\ib-\myhalf\eb_x}`,
etc. Once these states are obtained, we continue with the
full-dimensional extrapolations as described before.
The PPM method is described in a series of papers:
- Colella and Woodward 1984 - describes the basic method.
- Miller and Colella 2002 - describes how to apply PPM to a multidimensional
unsplit Godunov method and generalizes the characteristic tracing for more complicated
systems. Note that we only need to upwind based on the fluid velocity, so we don’t
need to use fancy characteristic tracing.
- Colella and Sekora 2008 - describes new fancy quadratic limiters. There are
several errors in the printed text, so we have implemented a corrected version from
Phil Colella.
Here are the steps for the :math:`x`-direction. For simplicity, we replace the vector index notation with a simple scalar notation (:math:`\ib+\eb_x \rightarrow i+1`, etc.).
- **Step 1:** Compute :math:`s_{i,+}` and :math:`s_{i,-}`, which are spatial interpolations of
:math:`s` to the hi and lo faces of cell :math:`i`, respectively. See Sections
`9.1 <#Sec:ColellaWoodward>`__ and `9.2 <#Sec:ColellaSekora>`__ for the two options.
- **Step 2:** Construct a quadratic profile within each cell.
.. math:: s_i^I(x) = s_{i,-} + \xi\left[s_{i,+} - s_{i,-} + s_{6,i}(1-\xi)\right],
:label: Quadratic Interp
.. math:: s_{6,i}= 6s_{i} - 3\left(s_{i,-}+s_{i,+}\right),
.. math:: \xi = \frac{x - (i-\myhalf)h}{h}, ~ 0 \le \xi \le 1.
- **Step 3:** Integrate quadratic profiles to get the average value swept over the face
over time.
Define the following integrals, where :math:`\sigma = |u|\Delta t/h`:
.. math::
\begin{aligned}
\mathcal{I}_{i,+}(\sigma) &=& \frac{1}{\sigma h}\int_{(i+\myhalf)h-\sigma h}^{(i+\myhalf)h}s_i^I(x)dx \\
\mathcal{I}_{i,-}(\sigma) &=& \frac{1}{\sigma h}\int_{(i-\myhalf)h}^{(i-\myhalf)h+\sigma h}s_i^I(x)dx\end{aligned}
Plugging in :eq:`Quadratic Interp` gives:
.. math::
\begin{aligned}
\mathcal{I}_{i,+}(\sigma) &=& s_{i,+} - \frac{\sigma}{2}\left[s_{i,+}-s_{i,-}-\left(1-\frac{2}{3}\sigma\right)s_{6,i}\right], \\
\mathcal{I}_{i,-}(\sigma) &=& s_{i,-} + \frac{\sigma}{2}\left[s_{i,+}-s_{i,-}+\left(1-\frac{2}{3}\sigma\right)s_{6,i}\right].\end{aligned}
- **Step 4:** Obtain 1D edge states.
Perform a 1D extrapolation, without source terms, to get
left and right edge states. Add the source terms later if desired/necessary.
.. math::
\begin{aligned}
s_{L,i-\myhalf} &=&
\begin{cases}
\mathcal{I}_{i-1,+}(\sigma), & u_{i-1} ~ \text{or} ~ u_{i-\myhalf}^{\mac} > 0 \\
s_{i-1}, & \text{else}.
\end{cases}\\
s_{R,i-\myhalf} &=&
\begin{cases}
\mathcal{I}_{i,-}(\sigma), & u_{i} ~ \text{or} ~ u_{i-\myhalf}^{\mac} < 0 \\
s_{i}, & \text{else}.
\end{cases}\end{aligned}
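As an illustration, Steps 3 and 4 can be written compactly in Python. This is a hedged sketch: source terms are omitted (as in the text), the face values :math:`s_{i,\pm}` are assumed to have been computed and limited already, and the cell-centered velocity is used for the upwind test.

```python
import numpy as np

def ppm_edge_states(s_minus, s_plus, s, u, dt, h):
    """Integrate the quadratic profiles and pick the 1D edge states."""
    s6 = 6.0 * s - 3.0 * (s_minus + s_plus)
    sigma = np.abs(u) * dt / h
    # average of s_i^I(x) swept over the hi/lo face of each cell over dt
    I_plus  = s_plus  - 0.5 * sigma * (s_plus - s_minus - (1.0 - 2.0/3.0 * sigma) * s6)
    I_minus = s_minus + 0.5 * sigma * (s_plus - s_minus + (1.0 - 2.0/3.0 * sigma) * s6)

    n = len(s)
    sL = np.zeros(n)   # sL[i], sR[i] live at face i-1/2
    sR = np.zeros(n)
    for i in range(1, n):
        sL[i] = I_plus[i-1] if u[i-1] > 0.0 else s[i-1]
        sR[i] = I_minus[i]  if u[i]   < 0.0 else s[i]
    return sL, sR
```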
.. _Sec:ColellaWoodward:
Colella and Woodward Based Approach
-----------------------------------
Spatially interpolate :math:`s` to edges.
Use a 4th-order interpolation in space with van Leer limiting to obtain edge values:
.. math::
s_{i+\myhalf} = \frac{1}{2}\left(s_{i} + s_{i+1}\right) - \frac{1}{6}\left(\delta s_{i+1}^{vL} - \delta s_{i}^{vL}\right),
:label: eq:CW Edge
.. math::
\delta s_i =
\frac{1}{2}\left(s_{i+1}-s_{i-1}\right),
.. math::
\delta s_i^{vL} =
\begin{cases}
\text{sign}(\delta s_i)\min\left(|\delta s_i|, ~ 2|s_{i+1}-s_{i}|, ~ 2|s_i-s_{i-1}|\right), & {\rm if} ~ (s_{i+1}-s_i)(s_i-s_{i-1}) > 0,\\
0, & {\rm otherwise}.
\end{cases}
A more compact way of writing this is
.. math:: s = \text{sign}(\delta s_i),
.. math:: \delta s_i^{vL} = s\max\left\{\min\left[s\delta s_i, 2s(s_{i+1}-s_i),2s(s_i-s_{i-1})\right],0\right\}
Without the limiters, :eq:`eq:CW Edge` is the familiar 4th-order spatial interpolation formula:
.. math:: s_{i+\myhalf} = \frac{7}{12}\left(s_{i+1}+s_i\right) - \frac{1}{12}\left(s_{i+2}+s_{i-1}\right).
Next, we must ensure that :math:`s_{i+\myhalf}` lies between the adjacent
cell-centered values:
.. math:: \min\left(s_{i},s_{i+1}\right) \le s_{i+\myhalf} \le \max\left(s_{i},s_{i+1}\right).
In anticipation of further limiting, we set double-valued face-centered values:
.. math:: s_{i,+} = s_{i+1,-} = s_{i+\myhalf}.
Modify :math:`s_{i,\pm}` using a quadratic limiter.
First, we test whether
:math:`s_i` is a local extremum with the condition:
.. math:: \left(s_{i,+}-s_i\right)\left(s_i-s_{i,-}\right) \le 0,
If this condition is true, we constrain :math:`s_{i,\pm}` by setting
:math:`s_{i,+} = s_{i,-} = s_i`. If not, we then apply a second test to determine
whether :math:`s_i` is sufficiently close to :math:`s_{i,\pm}` so that a quadratic
interpolant would contain a local extremum. We define
:math:`\alpha_{i,\pm} = s_{i,\pm} - s_i`. If one of :math:`|\alpha_{i,\pm}| \ge 2|\alpha_{i,\mp}|`
holds, then for that choice of :math:`\pm = +,-` we set:
.. math:: s_{i,\pm} = 3s_i - 2s_{i,\mp}.
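In code, the limited-slope edge interpolation of this section looks as follows. This is a sketch for a single face; array bounds and boundary handling are left out.

```python
import numpy as np

def vanleer_slope(sm, s0, sp):
    """Compact form of the van Leer limited slope, delta s_i^vL."""
    ds = 0.5 * (sp - sm)
    sgn = np.sign(ds)
    return sgn * max(min(sgn * ds, 2.0 * sgn * (sp - s0), 2.0 * sgn * (s0 - sm)), 0.0)

def cw_edge(sm, s0, sp, spp):
    """s_{i+1/2} via Eq. (CW Edge): 4th-order interpolation with limited slopes."""
    # delta s_{i+1}^vL uses (s_i, s_{i+1}, s_{i+2}); delta s_i^vL uses (s_{i-1}, s_i, s_{i+1})
    return 0.5 * (s0 + sp) - (vanleer_slope(s0, sp, spp) - vanleer_slope(sm, s0, sp)) / 6.0
```

For smooth monotone data the limiter is inactive and ``cw_edge`` reduces to the unlimited 4th-order formula; at a local extremum the limited slope collapses to zero.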
.. _Sec:ColellaSekora:
Colella and Sekora Based Approach
---------------------------------
* Spatially interpolate :math:`s` to edges.
Use a 4th-order interpolation in space to obtain edge values:
.. math:: s_{i+\myhalf} = \frac{7}{12}\left(s_{i+1}+s_i\right) - \frac{1}{12}\left(s_{i+2}+s_{i-1}\right).
Then, if :math:`(s_{i+\myhalf}-s_i)(s_{i+1}-s_{i+\myhalf}) < 0`, we limit :math:`s_{i+\myhalf}` using
a nonlinear combination of approximations to the second derivative.
First, define:
.. math::
\begin{aligned}
(D^2s)_{i+\myhalf} &=& 3\left(s_{i}-2s_{i+\myhalf}+s_{i+1}\right) \\
(D^2s)_{i+\myhalf,L} &=& s_{i-1}-2s_{i}+s_{i+1} \\
(D^2s)_{i+\myhalf,R} &=& s_{i}-2s_{i+1}+s_{i+2}\end{aligned}
Then, define
.. math:: s = \text{sign}\left[(D^2s)_{i+\myhalf}\right],
.. math:: (D^2s)_{i+\myhalf,\text{lim}} = s\max\left\{\min\left[Cs(D^2s)_{i+\myhalf,L},Cs(D^2s)_{i+\myhalf,R},s(D^2s)_{i+\myhalf}\right],0\right\},
where :math:`C=1.25` was used in Colella and Sekora. Then,
.. math:: s_{i+\myhalf} = \frac{1}{2}\left(s_{i}+s_{i+1}\right) - \frac{1}{6}(D^2s)_{i+\myhalf,\text{lim}}.
Now we implement Phil’s new version of the algorithm to eliminate sensitivity to roundoff.
First we need to detect whether a particular cell corresponds to an “extremum”. There
are two tests. For the first test, define
.. math:: \alpha_{i,\pm} = s_{i\pm\myhalf} - s_i.
If :math:`\alpha_{i,+}\alpha_{i,-} \ge 0`, then we are at an extremum. We apply the second
test if either :math:`|\alpha_{i,\pm}| > 2|\alpha_{i,\mp}|` holds. Then, we define:
.. math::
\begin{aligned}
(Ds)_{i,{\rm face},-} &=& s_{i-\myhalf} - s_{i-\sfrac{3}{2}} \\
(Ds)_{i,{\rm face},+} &=& s_{i+\sfrac{3}{2}} - s_{i+\myhalf}\end{aligned}
.. math:: (Ds)_{i,{\rm face,min}} = \min\left[\left|(Ds)_{i,{\rm face},-}\right|,\left|(Ds)_{i,{\rm face},+}\right|\right].
.. math::
\begin{aligned}
(Ds)_{i,{\rm cc},-} &=& s_{i} - s_{i-1} \\
(Ds)_{i,{\rm cc},+} &=& s_{i+1} - s_{i}\end{aligned}
.. math:: (Ds)_{i,{\rm cc,min}} = \min\left[\left|(Ds)_{i,{\rm cc},-}\right|,\left|(Ds)_{i,{\rm cc},+}\right|\right].
If :math:`(Ds)_{i,{\rm face,min}} \ge (Ds)_{i,{\rm cc,min}}`, set
:math:`(Ds)_{i,\pm} = (Ds)_{i,{\rm face},\pm}`. Otherwise, set
:math:`(Ds)_{i,\pm} = (Ds)_{i,{\rm cc},\pm}`. Finally, we are at an extremum if
:math:`(Ds)_{i,+}(Ds)_{i,-} \le 0`.
* Now that we have finished the extremum tests, if we are at an extremum,
we scale :math:`\alpha_{i,\pm}`. First, we define
.. math::
\begin{aligned}
(D^2s)_{i} &=& 6(\alpha_{i,+}+\alpha_{i,-}) \\
(D^2s)_{i,L} &=& s_{i-2}-2s_{i-1}+s_{i} \\
(D^2s)_{i,R} &=& s_{i}-2s_{i+1}+s_{i+2} \\
(D^2s)_{i,C} &=& s_{i-1}-2s_{i}+s_{i+1}\end{aligned}
Then, define
.. math:: s = \text{sign}\left[(D^2s)_{i}\right],
.. math:: (D^2s)_{i,\text{lim}} = \max\left\{\min\left[s(D^2s)_{i},Cs(D^2s)_{i,L},Cs(D^2s)_{i,R},Cs(D^2s)_{i,C}\right],0\right\}.
Then,
.. math:: \alpha_{i,\pm} = \frac{\alpha_{i,\pm}(D^2s)_{i,\text{lim}}}{\max\left[\left|(D^2s)_{i}\right|,1\times 10^{-10}\right]}
Otherwise, if we are not at an extremum and :math:`|\alpha_{i,\pm}| > 2|\alpha_{i,\mp}|`,
then define
.. math:: s = \text{sign}(\alpha_{i,\mp})
.. math:: \delta\mathcal{I}_{\text{ext}} = \frac{-\alpha_{i,\pm}^2}{4\left(\alpha_{i,+}+\alpha_{i,-}\right)},
.. math:: \delta s = s_{i\mp 1} - s_i,
If :math:`s\delta\mathcal{I}_{\text{ext}} \ge s\delta s`, then we perform the following test.
If :math:`s\delta s - \alpha_{i,\mp} \ge 1\times 10^{-10}`, then
.. math:: \alpha_{i,\pm} = -2\delta s - 2s\left[(\delta s)^2 - \delta s \alpha_{i,\mp}\right]^{\myhalf}
otherwise,
.. math:: \alpha_{i,\pm} = -2\alpha_{i,\mp}
Finally, :math:`s_{i,\pm} = s_i + \alpha_{i,\pm}`.
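To summarise, the :math:`\alpha` scaling applied at an extremum can be sketched as follows. Only the "at an extremum" branch is shown, and the floor of :math:`1\times 10^{-10}` guards against dividing by a roundoff-level :math:`(D^2s)_i`.

```python
import numpy as np

def limit_alphas(sm2, sm, s0, sp, sp2, a_minus, a_plus, C=1.25):
    """Scale alpha_{i,-}, alpha_{i,+} at an extremum using D^2 s ratios.

    sm2..sp2 are s_{i-2}..s_{i+2}; a_minus, a_plus are alpha_{i,-}, alpha_{i,+}.
    """
    d2  = 6.0 * (a_plus + a_minus)           # (D^2 s)_i
    d2L = sm2 - 2.0 * sm + s0                # (D^2 s)_{i,L}
    d2R = s0 - 2.0 * sp + sp2                # (D^2 s)_{i,R}
    d2C = sm - 2.0 * s0 + sp                 # (D^2 s)_{i,C}
    sgn = np.sign(d2)
    d2lim = max(min(sgn * d2, C * sgn * d2L, C * sgn * d2R, C * sgn * d2C), 0.0)
    scale = d2lim / max(abs(d2), 1.0e-10)
    return a_minus * scale, a_plus * scale
```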
Tutorial: building a minimal voice
===================================
To give some idea of how Ossian works we will demonstrate the building of the
most simple 'voice' possible.
This 'voice' is very basic and won't in fact speak, but should give a general
idea of how the tools work. The following discussion assumes that an environment
variable ``$OSSIAN`` is set to point to the top directory of the tools as before.
.. code-block:: bash
cd $OSSIAN
python ./scripts/train.py -s rss_toy_demo -l rm demo01
This instructs Ossian to use the 'rss_toy_demo' corpus in Romanian (``rm``) to build a voice using the
recipe we've called ``demo01``. Ossian expects the corpus to be stored at:
.. code-block:: bash
$OSSIAN/corpus/rm/speakers/rss_toy_demo/
and intermediate data/models for the voice are kept at:
.. code-block:: bash
$OSSIAN/train/rm/speakers/rss_toy_demo/demo01
and components of the final trained voice will be output at:
.. code-block:: bash
$OSSIAN/voices/rm/rss_toy_demo/demo01
The key work of the ``train.py`` script is to:
- create a corpus object from the data specified by the language and corpus name
- create a voice object initialised to use the specified recipe
- train the voice by calling ``voice.train(corpus)`` using the objects which have been
constructed in the previous two steps.
These three steps are now described in greater depth.
Corpus loading
--------------
.. todo:: Add some notes on corpus here -- holds collections of utterances and text.
.. comment:: include description of XML utterance struct
.. comment:: Corpus
.. comment:: represent relations between utterances (if utterances are sentences, then this extra structure might represent paragraphs or chapters).
Utterance
+++++++++
.. todo:: tidy this up a bit
Utterances have hierarchical structure, and for TTS it is useful to represent that structure explicitly: a word dominates syllables, a syllable dominates phone segments, and a phone carries attributes such as its start time. Storing this data as a tree whose nodes have attributes allows other useful representations to be inferred (e.g. the sequence of syllables grouped into words, the start time of a word, etc.).
For Ossian, we decided to use an XML representation of utterance structure (utterance structure is represented by XML document structure). The advantages are that we have used it before for this purpose, that existing XML libraries can be reused, and that there are general standards for querying trees (XPATH). The disadvantages are speed, and that a tree (where each node has only one parent) is more restrictive than e.g. Festival's utterance structure (syllables are not always aligned with morphs; a word appears in both the phrase and syntax relations).
The Utt class loads and stores utterance structure from/to XML. Data associated with an utterance is stored in the tree as attributes (e.g. the type of a Token). In some cases, an attribute can be a filepath to data stored external to the utterance. This is useful for large and/or non-text data (waveforms, MFCCs, etc.), and also for files required in specific formats by other tools (e.g. label files).
Text data
+++++++++
Text data is currently treated separately -- full consistency would mean putting text data into utterance structures as well. We have experimented with this, but (predictably) speed is an issue with even
modestly-sized corpora.
Voice initialisation
--------------------
An untrained voice is initialised using a config file (.ini format) corresponding to a *recipe* (in ``$OSSIAN/recipes/*.cfg``). The recipe config file for a recipe called 'x' is therefore expected at ``$OSSIAN/recipes/x.cfg``. Take a look at ``$OSSIAN/recipes/demo01.cfg`` to get an idea of the required structure.
The basic structure of an Ossian voice is a sequence of utterance processors. When an utterance in a corpus (either the training corpus during training, or a ‘test corpus’ at run time, which in the simplest case will contain only a single utterance) is processed, it is passed along the sequence of utterance processors, each of which adds to or modifies the XML of the utterance, sometimes creating data external to the utterance structure.
A recipe can specify different sequences of processors (called stages) for different purposes. The first 2 sections in ./recipe/demo01.cfg (called [train] and [runtime]) each contain a list of processors, which happens to be the same in both cases. These are the most important stages of a voice (and the only ones which are required) -- train specifies the processors to be applied during training, and runtime at synthesis.
.. comment:: Already-trained voices are loaded from their stored config file at e.g.
Voice training
--------------
The main lines of the train method, slightly simplified, are these:
.. code-block:: python
def train(corpus):
for processor in self.processors:
if not processor.trained:
processor.train(corpus)
for utterance in corpus:
processor.apply_to_utt(utterance)
utterance.save()
self.save()
Each processor is trained if it is not yet trained, and then applied to the training corpus as it would be at synthesis time. The key idea here is that we want the utterances to be processed in a way that is consistent between training and run time. The synthesis-type processing provides possible new training data for downstream use.
In the last line, the voice's ``save`` method is called. The role of this method is to
copy the minimal files necessary for synthesis with the built voice to the
``$OSSIAN/voices/`` directory, including a copy of the voice config file.
This means the config can be tweaked after training
without altering the recipe for voices built in the future.
The recipe config can likewise be modified for building future voices without breaking
already-trained ones.
In the ``demo01`` example, no training is done for either processor -- both are applied in series, resulting in utterance structures for training like:
.. code-block:: xml
<utt text="Nu este treaba lor ce constituţie avem." status="OK" waveform="/Users/owatts/repos/simple4all/Ossian/branches/cleaner_june_2013/corpus/romanian/speakers/toy/wav/adr_diph1_001.wav" utterance_name="adr_diph1_001" processors_used=",tokeniser,letter_adder">
<token text="Nu">
<letter text="N"/>
<letter text="u"/>
</token>
<token text=" ">
<letter text=" "/>
</token>
<token text="este">
<letter text="e"/>
<letter text="s"/>
<letter text="t"/>
<letter text="e"/>
</token>
[...]
Similar processing happens for testing. The recipe is language-naive: we will add examples for other languages too.
When loading, a voice looks at the list of processors for whatever stage it has been activated in (e.g. train, runtime), and tries to find a section of the recipe with the same name as each one. In the ``demo01`` example, it will look for (and find) a config section entitled [tokeniser]. Each processor will be a Python object whose class is specified with the ‘class’ key in the config. [tokeniser] says it is to be an object of class BasicTokenisers.RegexTokeniser. Given this class name, the voice uses dynamic loading and tries to instantiate an object of the required class using the config.
When writing subclasses of UtteranceProcessor, users are expected to provide the methods load() and process_utterance(). load() is meant to do class-specific things after instantiation, including setting default values of required instance attributes, reading user- or recipe-specified values for them from config, and converting types as necessary. The definition of ``process_utterance`` specifies what work is to be performed on an utterance which is being synthesised.
Optionally, for processors which really require training, ``do_training()`` can be provided (add more here).
A class hierarchy has been developed. There are some abstract subclasses of UtteranceProcessor such as NodeSplitter which provide some functions useful for TTS.
``NodeSplitter`` is configured with an XPath pattern matching its ``target_nodes``, a ``split_attribute``, and a ``child_node_type``. When an utterance is processed (via ``process_utterance``), the processor pulls out the nodes of the utterance that match the ``target_nodes`` XPath, extracts the value of the attribute ``split_attribute`` from each of those nodes, splits that value, and adds child nodes of type ``child_node_type``. This is useful for tasks such as breaking sentences into words, words into syllables, and many other TTS tasks. A user can easily write code to tokenise text just by making a subclass providing a method called ``splitting_function``. The details of reading XPaths and node attribute names from the config are all taken care of, as is the tedious detail of manipulating XML (especially important for more elaborate transformations such as restructuring, where it is very easy to get the document/chronological order of nodes wrong). Existing code for e.g. syllabification can be easily integrated by wrapping it in the ``splitting_function`` of such a newly defined subclass.
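As a rough illustration of this pattern (a sketch using only the standard library -- the class and attribute names here are assumptions for the example, not Ossian's actual code):

```python
import re
import xml.etree.ElementTree as ET


class ToyNodeSplitter:
    """Minimal sketch of a NodeSplitter-style processor.

    Finds nodes matching ``target_nodes``, splits the value of
    ``split_attribute`` on each, and appends one child of type
    ``child_node_type`` per resulting chunk.
    """

    def __init__(self, target_nodes, split_attribute, child_node_type):
        self.target_nodes = target_nodes        # XPath to the nodes to split
        self.split_attribute = split_attribute  # attribute whose value is split
        self.child_node_type = child_node_type  # tag of the children to add

    def splitting_function(self, value):
        # Subclasses would override this; here we split on spaces and keep
        # the spaces themselves, thanks to the capturing group.
        return [chunk for chunk in re.split(r"( )", value) if chunk != ""]

    def process_utterance(self, utt):
        for node in utt.findall(self.target_nodes):
            for chunk in self.splitting_function(node.get(self.split_attribute)):
                ET.SubElement(node, self.child_node_type, {"text": chunk})


utt = ET.fromstring('<utt><sentence text="Nu este"/></utt>')
ToyNodeSplitter("sentence", "text", "token").process_utterance(utt)
print([t.get("text") for t in utt.find("sentence")])  # ['Nu', ' ', 'este']
```

A real syllabifier or tokeniser would only need to swap in a different ``splitting_function``; the XML bookkeeping stays the same.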
In the current example, both processors are loaded as two differently configured instances of the same class, ``RegexTokeniser``. This class specialises ``NodeSplitter`` by supplying a ``splitting_function`` that uses a regular expression read from the config. (Add more detail here)
Words are split into letters with the simple pattern ``(.)``. Note that the capturing parentheses ``( )`` mean that the spaces themselves are included in the resulting chunks (and thus as children).
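The effect of the capturing parentheses can be seen directly with Python's ``re.split`` (a sketch of the idea, not Ossian's exact code):

```python
import re

# A capturing group in the pattern makes re.split keep the separators,
# so tokenising on "( )" keeps the spaces as chunks of their own:
tokens = re.split(r"( )", "Nu este")
print(tokens)  # ['Nu', ' ', 'este']

# The letter-adder case: the pattern (.) captures every single character;
# the empty strings re.split yields between adjacent captures are dropped:
letters = [c for c in re.split(r"(.)", "Nu") if c != ""]
print(letters)  # ['N', 'u']
```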
Simple test
------------
Ensure your device works with this simple test.

.. literalinclude:: ../examples/scd30_simpletest.py
    :caption: examples/scd30_simpletest.py
    :linenos:

Tuning Knobs
------------

Experiment with different tuning parameters and settings.

.. literalinclude:: ../examples/scd30_tuning_knobs.py
    :caption: examples/scd30_tuning_knobs.py
    :linenos:
=====================
Platform architecture
=====================
The platform provides the following components:
* Running on nmpi.hbpneuromorphic.eu:
* Job queue REST service
* Job manager Collaboratory app
* Dashboard Collaboratory app
* Running on quotas.hbpneuromorphic.eu:
* Quotas REST service
* Resource manager Collaboratory app
* Resource manager coordination Collaboratory app
* Running on benchmarks.hbpneuromorphic.eu:
* Benchmarks REST service
* Benchmarks website
* Running on www.hbpneuromorphic.eu:
* Collaboratory home ("splash") page
* Development and Operations Guidebook (this document)
* Monitoring service (commercial service)
* Python client
* User Guidebook
In addition to the three web servers listed above, there is a staging server *nmpi-staging.hbpneuromorphic.eu*
(a staging server for quotas is planned) and a database server.
The REST services are implemented with Django. The Collaboratory apps are implemented with AngularJS.
Both services and apps are served using nginx, running in Docker containers on cloud servers
from Digital Ocean.
A migration from the commercial cloud provider (Digital Ocean) to servers provided by ICEI is planned for 2019.
.. Coming later
.. benchmark runner (webhook)
.. nest server (for benchmarks): nest.hbpneuromorphic.eu
.. nest data store: tmp-data.hbpneuromorphic.eu
| 33 | 111 | 0.751804 |
GET http://localhost:3060/api/zips HTTP/1.1
###
GET http://localhost:3060/api/zips/5f495668927ad23edc4bca0a HTTP/1.1
###
POST http://localhost:3060/api/zips HTTP/1.1
Content-Type: application/json
{
"loc": {
"y": 33.331165,
"x": 86.208934
},
"city": "ALPINE2",
"zip": "35014",
"pop": 3062,
"state": "AL"
}
###
PUT http://localhost:3060/api/zips/5f495668927ad23edc4bca0a HTTP/1.1
Content-Type: application/json
{
"_id": "5f495668927ad23edc4bca0a",
"loc": {
"y": 33.331165,
"x": 86.208934
},
"city": "ALPINE3",
"zip": "35014",
"pop": 3062,
"state": "AL"
}
###
DELETE http://localhost:3060/api/zips/5f495668927ad23edc4bca0a HTTP/1.1
.. AWS Data Wrangler documentation master file, created by
sphinx-quickstart on Sun Aug 18 12:05:01 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. figure:: _static/logo.png
    :align: center
    :alt: alternate text
    :figclass: align-center

    *DataFrames on AWS*
`Read the Tutorials <https://github.com/awslabs/aws-data-wrangler/tree/master/tutorials>`_: `Catalog & Metadata <https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/catalog_and_metadata.ipynb>`_ | `Athena Nested <https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/athena_nested.ipynb>`_ | `S3 Write Modes <https://github.com/awslabs/aws-data-wrangler/blob/master/tutorials/s3_write_modes.ipynb>`_
Use Cases
---------
Pandas
``````
* Pandas -> Parquet (S3) (Parallel)
* Pandas -> CSV (S3) (Parallel)
* Pandas -> Glue Catalog Table
* Pandas -> Athena (Parallel)
* Pandas -> Redshift (Append/Overwrite/Upsert) (Parallel)
* Pandas -> Aurora (MySQL/PostgreSQL) (Append/Overwrite) (Via S3) (NEW)
* Parquet (S3) -> Pandas (Parallel)
* CSV (S3) -> Pandas (One shot or Batching)
* Glue Catalog Table -> Pandas (Parallel)
* Athena -> Pandas (One shot, Batching or Parallel)
* Redshift -> Pandas (Parallel)
* CloudWatch Logs Insights -> Pandas
* Aurora -> Pandas (MySQL) (Via S3) (NEW)
* Encrypt Pandas Dataframes on S3 with KMS keys
* Glue Databases Metadata -> Pandas (Jupyter output compatible)
* Glue Table Metadata -> Pandas (Jupyter output compatible)
PySpark
```````
* PySpark -> Redshift (Parallel)
* Register Glue table from Dataframe stored on S3
* Flatten nested DataFrames (NEW)
General
```````
* List S3 objects (Parallel)
* Delete S3 objects (Parallel)
* Delete listed S3 objects (Parallel)
* Delete NOT listed S3 objects (Parallel)
* Copy listed S3 objects (Parallel)
* Get the size of S3 objects (Parallel)
* Get CloudWatch Logs Insights query results
* Load partitions on Athena/Glue table (repair table)
* Create EMR cluster (For humans)
* Terminate EMR cluster
* Get EMR cluster state
* Submit EMR step(s) (For humans)
* Get EMR step state
* Athena query to receive the result as python primitives (*Iterable[Dict[str, Any]*)
* Load and Unzip SageMaker jobs outputs
* Load and Unzip SageMaker models
* Redshift -> Parquet (S3)
* Aurora -> CSV (S3) (MySQL) (NEW :star:)
* Get Glue Metadata

Table Of Contents
-----------------

.. toctree::
    :maxdepth: 4

    installation
    examples
    divingdeep
    stepbystep
    contributing
    api/modules
    license
.. change::
    :tags: bug, asyncio
    :tickets: 6592

    Added ``asyncio.exceptions.TimeoutError``,
    ``asyncio.exceptions.CancelledError`` as so-called "exit exceptions", a
    class of exceptions that include things like ``GreenletExit`` and
    ``KeyboardInterrupt``, which are considered to be events that warrant
    considering a DBAPI connection to be in an unusable state where it
    should be recycled.
csv.DictWriter
==============
.. currentmodule:: csv
.. autoclass:: DictWriter
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~DictWriter.__init__
~DictWriter.writeheader
~DictWriter.writerow
~DictWriter.writerows
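For reference, a minimal usage sketch of the methods listed above:

```python
import csv
import io

# Write dictionaries as CSV rows; fieldnames fixes the column order
# and supplies the header written by writeheader().
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "count"])
writer.writeheader()
writer.writerow({"name": "alpha", "count": 1})
writer.writerows([{"name": "beta", "count": 2}])

# Rows are: name,count / alpha,1 / beta,2 (CRLF line endings by default)
print(buf.getvalue())
```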
.. _coding_guidelines:

Coding Guidelines
#################
The project TSC and the Safety Committee of the project agreed to implement
a staged and incremental approach for complying with a set of coding rules (AKA
Coding Guidelines) to improve the quality and consistency of the code base. Below
are the agreed-upon stages and approximate timelines:
Stage I
Coding guideline rules are available to be followed and referenced,
but not enforced. Rules are not yet enforced in CI and pull-requests cannot be
blocked by reviewers/approvers due to violations.
Stage II
Begin enforcement on a limited scope of the code base. Initially, this would be
the safety certification scope. For rules easily applied across codebase, we
should not limit compliance to initial scope. This step requires tooling and
CI setup and will start sometime after LTS2.
Stage III
Revisit the coding guideline rules and based on experience from previous
stages, refine/iterate on selected rules.
Stage IV
Expand enforcement to the wider codebase. Exceptions may be granted on some
areas of the codebase with a proper justification. Exception would require
TSC approval.
.. note::
Coding guideline rules may be removed/changed at any time by filing a
GH issue/RFC.
Main rules
**********
The coding guideline rules are based on MISRA-C 2012 and are a subset of MISRA-C.
The subset is listed in the table below with a summary of the rules, its
severity and the equivalent rules from other standards for reference.
.. note::
For existing Zephyr maintainers and collaborators, if you are unable to
obtain a copy through your employer, a limited number of copies will be made
available through the project. If you need a copy of MISRA-C 2012, please
send email to safety@lists.zephyrproject.org and provide details on reason
why you can't obtain one through other options and expected contributions
once you have one. The safety committee will review all requests.
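To give a flavour of what several of the rules below ask for in practice, here is a small illustrative C sketch (our own example, not normative MISRA guidance):

```c
#include <stdint.h>

/* Dir 4.6: use typedefs that indicate size and signedness (uint32_t)
 * rather than basic numerical types such as 'unsigned long'. */
uint32_t shift_left_u32(uint32_t value, uint32_t amount)
{
    uint32_t result = 0u;  /* Rule 9.1: set before any read */

    /* Rule 12.2: the right hand operand of a shift must lie in the
     * range 0..31 for an essentially 32-bit left hand operand. */
    if (amount < 32u) {    /* Rule 7.2: 'u' suffix on unsigned constants */
        result = value << amount;
    }
    return result;
}
```

Here an out-of-range shift amount is rejected up front rather than being allowed to invoke undefined behaviour.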

.. list-table:: Main rules
   :header-rows: 1
   :widths: 17 14 43 12 14

* - MISRA C 2012
- Severity
- Description
- CERT C
- Example
* - Dir 1.1
- Required
- Any implementation-defined behaviour on which the output of the program depends shall be documented and understood
- `MSC09-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC09-C.+Character+encoding%3A+Use+subset+of+ASCII+for+safety>`_
- `Dir 1.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_01_01.c>`_
* - Dir 2.1
- Required
- All source files shall compile without any compilation errors
- N/A
- `Dir 2.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_02_01.c>`_
* - Dir 3.1
- Required
- All code shall be traceable to documented requirements
- N/A
- `Dir 3.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_03_01.c>`_
* - Dir 4.1
- Required
- Run-time failures shall be minimized
- N/A
- `Dir 4.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_01.c>`_
* - Dir 4.2
- Advisory
- All usage of assembly language should be documented
- N/A
- `Dir 4.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_02.c>`_
* - Dir 4.4
- Advisory
- Sections of code should not be “commented out”
- `MSC04-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC04-C.+Use+comments+consistently+and+in+a+readable+fashion>`_
- `Dir 4.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_04.c>`_
* - Dir 4.5
- Advisory
- Identifiers in the same name space with overlapping visibility should be typographically unambiguous
- `DCL02-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL02-C.+Use+visually+distinct+identifiers>`_
- `Dir 4.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_05.c>`_
* - Dir 4.6
- Advisory
- typedefs that indicate size and signedness should be used in place of the basic numerical types
- N/A
- `Dir 4.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_06.c>`_
* - Dir 4.7
- Required
- If a function returns error information, then that error information shall be tested
- N/A
- `Dir 4.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_07.c>`_
* - Dir 4.8
- Advisory
- If a pointer to a structure or union is never dereferenced within a translation unit, then the implementation of the object should be hidden
- `DCL12-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL12-C.+Implement+abstract+data+types+using+opaque+types>`_
- | `Dir 4.8 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_08_1.c>`_
| `Dir 4.8 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_08_2.c>`_
* - Dir 4.9
- Advisory
- A function should be used in preference to a function-like macro where they are interchangeable
- `PRE00-C <https://wiki.sei.cmu.edu/confluence/display/c/PRE00-C.+Prefer+inline+or+static+functions+to+function-like+macros>`_
- `Dir 4.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_09.c>`_
* - Dir 4.10
- Required
- Precautions shall be taken in order to prevent the contents of a header file being included more than once
- `PRE06-C <https://wiki.sei.cmu.edu/confluence/display/c/PRE06-C.+Enclose+header+files+in+an+include+guard>`_
- `Dir 4.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_10.c>`_
* - Dir 4.11
- Required
- The validity of values passed to library functions shall be checked
- N/A
- `Dir 4.11 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_11.c>`_
* - Dir 4.12
- Required
- Dynamic memory allocation shall not be used
- `STR01-C <https://wiki.sei.cmu.edu/confluence/display/c/STR01-C.+Adopt+and+implement+a+consistent+plan+for+managing+strings>`_
- `Dir 4.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_12.c>`_
* - Dir 4.13
- Advisory
- Functions which are designed to provide operations on a resource should be called in an appropriate sequence
- N/A
- `Dir 4.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_13.c>`_
* - Dir 4.14
- Required
- The validity of values received from external sources shall be checked
- N/A
- `Dir 4.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/D_04_14.c>`_
* - Rule 1.2
- Advisory
- Language extensions should not be used
- `MSC04-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC04-C.+Use+comments+consistently+and+in+a+readable+fashion>`_
- `Rule 1.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_01_02.c>`_
* - Rule 1.3
- Required
- There shall be no occurrence of undefined or critical unspecified behaviour
- N/A
- `Rule 1.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_01_03.c>`_
* - Rule 2.1
- Required
- A project shall not contain unreachable code
- `MSC07-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC07-C.+Detect+and+remove+dead+code>`_
- | `Rule 2.1 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_02_01_1.c>`_
| `Rule 2.1 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_02_01_2.c>`_
* - Rule 2.2
- Required
- There shall be no dead code
- `MSC12-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC12-C.+Detect+and+remove+code+that+has+no+effect+or+is+never+executed>`_
- `Rule 2.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_02_02.c>`_
* - Rule 2.3
- Advisory
- A project should not contain unused type declarations
- N/A
- `Rule 2.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_02_03.c>`_
* - Rule 2.6
- Advisory
- A function should not contain unused label declarations
- N/A
- `Rule 2.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_02_06.c>`_
* - Rule 2.7
- Advisory
- There should be no unused parameters in functions
- N/A
- `Rule 2.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_02_07.c>`_
* - Rule 3.1
- Required
- The character sequences /* and // shall not be used within a comment
- `MSC04-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC04-C.+Use+comments+consistently+and+in+a+readable+fashion>`_
- `Rule 3.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_03_01.c>`_
* - Rule 3.2
- Required
- Line-splicing shall not be used in // comments
- N/A
- `Rule 3.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_03_02.c>`_
* - Rule 4.1
- Required
- Octal and hexadecimal escape sequences shall be terminated
- `MSC09-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC09-C.+Character+encoding%3A+Use+subset+of+ASCII+for+safety>`_
- `Rule 4.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_04_01.c>`_
* - Rule 4.2
- Advisory
- Trigraphs should not be used
- `PRE07-C <https://wiki.sei.cmu.edu/confluence/display/c/PRE07-C.+Avoid+using+repeated+question+marks>`_
- `Rule 4.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_04_02.c>`_
* - Rule 5.1
- Required
- External identifiers shall be distinct
- `DCL23-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL23-C.+Guarantee+that+mutually+visible+identifiers+are+unique>`_
- | `Rule 5.1 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_01_1.c>`_
| `Rule 5.1 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_01_2.c>`_
* - Rule 5.2
- Required
- Identifiers declared in the same scope and name space shall be distinct
- `DCL23-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL23-C.+Guarantee+that+mutually+visible+identifiers+are+unique>`_
- `Rule 5.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_02.c>`_
* - Rule 5.3
- Required
- An identifier declared in an inner scope shall not hide an identifier declared in an outer scope
- `DCL23-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL23-C.+Guarantee+that+mutually+visible+identifiers+are+unique>`_
- `Rule 5.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_03.c>`_
* - Rule 5.4
- Required
- Macro identifiers shall be distinct
- `DCL23-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL23-C.+Guarantee+that+mutually+visible+identifiers+are+unique>`_
- `Rule 5.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_04.c>`_
* - Rule 5.5
- Required
- Identifiers shall be distinct from macro names
- `DCL23-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL23-C.+Guarantee+that+mutually+visible+identifiers+are+unique>`_
- `Rule 5.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_05.c>`_
* - Rule 5.6
- Required
- A typedef name shall be a unique identifier
- N/A
- `Rule 5.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_06.c>`_
* - Rule 5.7
- Required
- A tag name shall be a unique identifier
- N/A
- `Rule 5.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_07.c>`_
* - Rule 5.8
- Required
- Identifiers that define objects or functions with external linkage shall be unique
- N/A
- | `Rule 5.8 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_08_1.c>`_
| `Rule 5.8 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_08_2.c>`_
* - Rule 5.9
- Advisory
- Identifiers that define objects or functions with internal linkage should be unique
- N/A
- | `Rule 5.9 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_09_1.c>`_
| `Rule 5.9 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_05_09_2.c>`_
* - Rule 6.1
- Required
- Bit-fields shall only be declared with an appropriate type
- `INT14-C <https://wiki.sei.cmu.edu/confluence/display/c/INT14-C.+Avoid+performing+bitwise+and+arithmetic+operations+on+the+same+data>`_
- `Rule 6.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_01.c>`_
* - Rule 6.2
- Required
- Single-bit named bit fields shall not be of a signed type
- `INT14-C <https://wiki.sei.cmu.edu/confluence/display/c/INT14-C.+Avoid+performing+bitwise+and+arithmetic+operations+on+the+same+data>`_
- `Rule 6.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_06_02.c>`_
* - Rule 7.1
- Required
- Octal constants shall not be used
- `DCL18-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL18-C.+Do+not+begin+integer+constants+with+0+when+specifying+a+decimal+value>`_
- `Rule 7.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_01.c>`_
* - Rule 7.2
- Required
- A u or U suffix shall be applied to all integer constants that are represented in an unsigned type
- N/A
- `Rule 7.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_02.c>`_
* - Rule 7.3
- Required
- The lowercase character l shall not be used in a literal suffix
- `DCL16-C <https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152241>`_
- `Rule 7.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_03.c>`_
* - Rule 7.4
- Required
     - A string literal shall not be assigned to an object unless the object's type is pointer to const-qualified char
- N/A
- `Rule 7.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_07_04.c>`_
* - Rule 8.1
- Required
- Types shall be explicitly specified
- N/A
- `Rule 8.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_01.c>`_
* - Rule 8.2
- Required
- Function types shall be in prototype form with named parameters
- `DCL20-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL20-C.+Explicitly+specify+void+when+a+function+accepts+no+arguments>`_
- `Rule 8.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_02.c>`_
* - Rule 8.3
- Required
- All declarations of an object or function shall use the same names and type qualifiers
- N/A
- `Rule 8.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_03.c>`_
* - Rule 8.4
- Required
- A compatible declaration shall be visible when an object or function with external linkage is defined
- N/A
- `Rule 8.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_04.c>`_
* - Rule 8.5
- Required
- An external object or function shall be declared once in one and only one file
- N/A
- | `Rule 8.5 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_05_1.c>`_
| `Rule 8.5 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_05_2.c>`_
* - Rule 8.6
- Required
- An identifier with external linkage shall have exactly one external definition
- N/A
- | `Rule 8.6 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_06_1.c>`_
| `Rule 8.6 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_06_2.c>`_
* - Rule 8.8
- Required
- The static storage class specifier shall be used in all declarations of objects and functions that have internal linkage
- `DCL15-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL15-C.+Declare+file-scope+objects+or+functions+that+do+not+need+external+linkage+as+static>`_
- `Rule 8.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_08.c>`_
* - Rule 8.9
- Advisory
- An object should be defined at block scope if its identifier only appears in a single function
- `DCL19-C <https://wiki.sei.cmu.edu/confluence/display/c/DCL19-C.+Minimize+the+scope+of+variables+and+functions>`_
- `Rule 8.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_09.c>`_
* - Rule 8.10
- Required
- An inline function shall be declared with the static storage class
- N/A
- `Rule 8.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_10.c>`_
* - Rule 8.12
- Required
- Within an enumerator list, the value of an implicitly-specified enumeration constant shall be unique
- `INT09-C <https://wiki.sei.cmu.edu/confluence/display/c/INT09-C.+Ensure+enumeration+constants+map+to+unique+values>`_
- `Rule 8.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_12.c>`_
* - Rule 8.14
- Required
- The restrict type qualifier shall not be used
- N/A
- `Rule 8.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_08_14.c>`_
* - Rule 9.1
- Mandatory
- The value of an object with automatic storage duration shall not be read before it has been set
- N/A
- `Rule 9.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_01.c>`_
* - Rule 9.2
- Required
- The initializer for an aggregate or union shall be enclosed in braces
- N/A
- `Rule 9.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_02.c>`_
* - Rule 9.3
- Required
- Arrays shall not be partially initialized
- N/A
- `Rule 9.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_03.c>`_
* - Rule 9.4
- Required
- An element of an object shall not be initialized more than once
- N/A
- `Rule 9.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_04.c>`_
* - Rule 9.5
- Required
- Where designated initializers are used to initialize an array object the size of the array shall be specified explicitly
- N/A
- `Rule 9.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_09_05.c>`_
* - Rule 10.1
- Required
- Operands shall not be of an inappropriate essential type
- `STR04-C <https://wiki.sei.cmu.edu/confluence/display/c/STR04-C.+Use+plain+char+for+characters+in+the+basic+character+set>`_
- `Rule 10.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_01.c>`_
* - Rule 10.2
- Required
- Expressions of essentially character type shall not be used inappropriately in addition and subtraction operations
- `STR04-C <https://wiki.sei.cmu.edu/confluence/display/c/STR04-C.+Use+plain+char+for+characters+in+the+basic+character+set>`_
- `Rule 10.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_02.c>`_
* - Rule 10.3
- Required
- The value of an expression shall not be assigned to an object with a narrower essential type or of a different essential type category
- `STR04-C <https://wiki.sei.cmu.edu/confluence/display/c/STR04-C.+Use+plain+char+for+characters+in+the+basic+character+set>`_
- `Rule 10.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_03.c>`_
* - Rule 10.4
- Required
- Both operands of an operator in which the usual arithmetic conversions are performed shall have the same essential type category
- `STR04-C <https://wiki.sei.cmu.edu/confluence/display/c/STR04-C.+Use+plain+char+for+characters+in+the+basic+character+set>`_
- `Rule 10.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_04.c>`_
* - Rule 10.5
- Advisory
- The value of an expression should not be cast to an inappropriate essential type
- N/A
- `Rule 10.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_05.c>`_
* - Rule 10.6
- Required
- The value of a composite expression shall not be assigned to an object with wider essential type
- `INT02-C <https://wiki.sei.cmu.edu/confluence/display/c/INT02-C.+Understand+integer+conversion+rules>`_
- `Rule 10.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_06.c>`_
* - Rule 10.7
- Required
- If a composite expression is used as one operand of an operator in which the usual arithmetic conversions are performed then the other operand shall not have wider essential type
- `INT02-C <https://wiki.sei.cmu.edu/confluence/display/c/INT02-C.+Understand+integer+conversion+rules>`_
- `Rule 10.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_07.c>`_
* - Rule 10.8
- Required
- The value of a composite expression shall not be cast to a different essential type category or a wider essential type
- `INT02-C <https://wiki.sei.cmu.edu/confluence/display/c/INT02-C.+Understand+integer+conversion+rules>`_
- `Rule 10.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_10_08.c>`_
* - Rule 11.2
- Required
- Conversions shall not be performed between a pointer to an incomplete type and any other type
- N/A
- `Rule 11.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_11_02.c>`_
* - Rule 11.6
- Required
- A cast shall not be performed between pointer to void and an arithmetic type
- N/A
- `Rule 11.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_11_06.c>`_
* - Rule 11.7
- Required
- A cast shall not be performed between pointer to object and a non-integer arithmetic type
- N/A
- `Rule 11.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_11_07.c>`_
* - Rule 11.8
- Required
- A cast shall not remove any const or volatile qualification from the type pointed to by a pointer
- `EXP05-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP05-C.+Do+not+cast+away+a+const+qualification>`_
- `Rule 11.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_11_08.c>`_
* - Rule 11.9
- Required
- The macro NULL shall be the only permitted form of integer null pointer constant
- N/A
- `Rule 11.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_11_09.c>`_
* - Rule 12.1
- Advisory
- The precedence of operators within expressions should be made explicit
- `EXP00-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP00-C.+Use+parentheses+for+precedence+of+operation>`_
- `Rule 12.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_12_01.c>`_
* - Rule 12.2
- Required
- The right hand operand of a shift operator shall lie in the range zero to one less than the width in bits of the essential type of the left hand operand
- N/A
- `Rule 12.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_12_02.c>`_
* - Rule 12.4
- Advisory
- Evaluation of constant expressions should not lead to unsigned integer wrap-around
- N/A
- `Rule 12.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_12_04.c>`_
* - Rule 12.5
- Mandatory
- The sizeof operator shall not have an operand which is a function parameter declared as “array of type”
- N/A
- `Rule 12.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_12_05.c>`_
* - Rule 13.1
- Required
- Initializer lists shall not contain persistent side effects
- N/A
- | `Rule 13.1 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_01_1.c>`_
| `Rule 13.1 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_01_2.c>`_
* - Rule 13.2
- Required
- The value of an expression and its persistent side effects shall be the same under all permitted evaluation orders
- N/A
- `Rule 13.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_02.c>`_
* - Rule 13.3
- Advisory
- A full expression containing an increment (++) or decrement (--) operator should have no other potential side effects other than that caused by the increment or decrement operator
- N/A
- `Rule 13.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_03.c>`_
* - Rule 13.4
- Advisory
- The result of an assignment operator should not be used
- N/A
- `Rule 13.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_04.c>`_
* - Rule 13.5
- Required
- The right hand operand of a logical && or || operator shall not contain persistent side effects
- `EXP10-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP10-C.+Do+not+depend+on+the+order+of+evaluation+of+subexpressions+or+the+order+in+which+side+effects+take+place>`_
- | `Rule 13.5 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_05_1.c>`_
| `Rule 13.5 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_05_2.c>`_
* - Rule 13.6
- Mandatory
- The operand of the sizeof operator shall not contain any expression which has potential side effects
- N/A
- `Rule 13.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_06.c>`_
* - Rule 14.1
- Required
- A loop counter shall not have essentially floating type
- N/A
- `Rule 14.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_01.c>`_
* - Rule 14.2
- Required
- A for loop shall be well-formed
- N/A
- `Rule 14.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_02.c>`_
* - Rule 14.3
- Required
- Controlling expressions shall not be invariant
- N/A
- `Rule 14.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_03.c>`_
* - Rule 14.4
- Required
- The controlling expression of an if statement and the controlling expression of an iteration-statement shall have essentially Boolean type
- N/A
- `Rule 14.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_14_04.c>`_
* - Rule 15.2
- Required
- The goto statement shall jump to a label declared later in the same function
- N/A
- `Rule 15.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_15_02.c>`_
* - Rule 15.3
- Required
- Any label referenced by a goto statement shall be declared in the same block, or in any block enclosing the goto statement
- N/A
- `Rule 15.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_15_03.c>`_
* - Rule 15.6
- Required
- The body of an iteration-statement or a selection-statement shall be a compound-statement
- `EXP19-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP19-C.+Use+braces+for+the+body+of+an+if%2C+for%2C+or+while+statement>`_
- `Rule 15.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_15_06.c>`_
* - Rule 15.7
- Required
- All if else if constructs shall be terminated with an else statement
- N/A
- `Rule 15.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_15_07.c>`_
* - Rule 16.1
- Required
- All switch statements shall be well-formed
- N/A
- `Rule 16.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_01.c>`_
* - Rule 16.2
- Required
- A switch label shall only be used when the most closely-enclosing compound statement is the body of a switch statement
- `MSC20-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC20-C.+Do+not+use+a+switch+statement+to+transfer+control+into+a+complex+block>`_
- `Rule 16.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_02.c>`_
* - Rule 16.3
- Required
- An unconditional break statement shall terminate every switch-clause
- N/A
- `Rule 16.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_03.c>`_
* - Rule 16.4
- Required
- Every switch statement shall have a default label
- N/A
- `Rule 16.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_04.c>`_
* - Rule 16.5
- Required
- A default label shall appear as either the first or the last switch label of a switch statement
- N/A
- `Rule 16.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_05.c>`_
* - Rule 16.6
- Required
- Every switch statement shall have at least two switch-clauses
- N/A
- `Rule 16.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_06.c>`_
* - Rule 16.7
- Required
- A switch-expression shall not have essentially Boolean type
- N/A
- `Rule 16.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_16_07.c>`_
* - Rule 17.1
- Required
- The features of <stdarg.h> shall not be used
- `ERR00-C <https://wiki.sei.cmu.edu/confluence/display/c/ERR00-C.+Adopt+and+implement+a+consistent+and+comprehensive+error-handling+policy>`_
- `Rule 17.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_01.c>`_
* - Rule 17.2
- Required
- Functions shall not call themselves, either directly or indirectly
- `MEM05-C <https://wiki.sei.cmu.edu/confluence/display/c/MEM05-C.+Avoid+large+stack+allocations>`_
- `Rule 17.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_02.c>`_
* - Rule 17.3
- Mandatory
- A function shall not be declared implicitly
- N/A
- `Rule 17.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_03.c>`_
* - Rule 17.4
- Mandatory
- All exit paths from a function with non-void return type shall have an explicit return statement with an expression
- N/A
- `Rule 17.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_04.c>`_
* - Rule 17.5
- Advisory
- The function argument corresponding to a parameter declared to have an array type shall have an appropriate number of elements
- N/A
- `Rule 17.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_05.c>`_
* - Rule 17.6
- Mandatory
- The declaration of an array parameter shall not contain the static keyword between the [ ]
- N/A
- `Rule 17.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_06.c>`_
* - Rule 17.7
- Required
- The value returned by a function having non-void return type shall be used
- N/A
- `Rule 17.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_17_07.c>`_
* - Rule 18.1
- Required
- A pointer resulting from arithmetic on a pointer operand shall address an element of the same array as that pointer operand
- `EXP08-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP08-C.+Ensure+pointer+arithmetic+is+used+correctly>`_
- `Rule 18.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_01.c>`_
* - Rule 18.2
- Required
- Subtraction between pointers shall only be applied to pointers that address elements of the same array
- `EXP08-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP08-C.+Ensure+pointer+arithmetic+is+used+correctly>`_
- `Rule 18.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_02.c>`_
* - Rule 18.3
- Required
- The relational operators >, >=, < and <= shall not be applied to objects of pointer type except where they point into the same object
- `EXP08-C <https://wiki.sei.cmu.edu/confluence/display/c/EXP08-C.+Ensure+pointer+arithmetic+is+used+correctly>`_
- `Rule 18.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_03.c>`_
* - Rule 18.5
- Advisory
- Declarations should contain no more than two levels of pointer nesting
- N/A
- `Rule 18.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_05.c>`_
* - Rule 18.6
- Required
- The address of an object with automatic storage shall not be copied to another object that persists after the first object has ceased to exist
- N/A
- | `Rule 18.6 example 1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_06_1.c>`_
| `Rule 18.6 example 2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_06_2.c>`_
* - Rule 18.8
- Required
- Variable-length array types shall not be used
- N/A
- `Rule 18.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_08.c>`_
* - Rule 19.1
- Mandatory
- An object shall not be assigned or copied to an overlapping object
- N/A
- `Rule 19.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_19_01.c>`_
* - Rule 20.2
- Required
- The ', " or \ characters and the /* or // character sequences shall not occur in a header file name
- N/A
- `Rule 20.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_02.c>`_
* - Rule 20.3
- Required
- The #include directive shall be followed by either a <filename> or "filename" sequence
- N/A
- `Rule 20.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_03.c>`_
* - Rule 20.4
- Required
- A macro shall not be defined with the same name as a keyword
- N/A
- `Rule 20.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_04.c>`_
* - Rule 20.7
- Required
- Expressions resulting from the expansion of macro parameters shall be enclosed in parentheses
- `PRE01-C <https://wiki.sei.cmu.edu/confluence/display/c/PRE01-C.+Use+parentheses+within+macros+around+parameter+names>`_
- `Rule 20.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_07.c>`_
* - Rule 20.8
- Required
- The controlling expression of a #if or #elif preprocessing directive shall evaluate to 0 or 1
- N/A
- `Rule 20.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_08.c>`_
* - Rule 20.9
- Required
- All identifiers used in the controlling expression of #if or #elif preprocessing directives shall be #defined before evaluation
- N/A
- `Rule 20.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_09.c>`_
* - Rule 20.11
- Required
- A macro parameter immediately following a # operator shall not immediately be followed by a ## operator
- N/A
- `Rule 20.11 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_11.c>`_
* - Rule 20.12
- Required
- A macro parameter used as an operand to the # or ## operators, which is itself subject to further macro replacement, shall only be used as an operand to these operators
- N/A
- `Rule 20.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_12.c>`_
* - Rule 20.13
- Required
- A line whose first token is # shall be a valid preprocessing directive
- N/A
- `Rule 20.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_13.c>`_
* - Rule 20.14
- Required
- All #else, #elif and #endif preprocessor directives shall reside in the same file as the #if, #ifdef or #ifndef directive to which they are related
- N/A
- `Rule 20.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_14.c>`_
* - Rule 21.1
- Required
- #define and #undef shall not be used on a reserved identifier or reserved macro name
- N/A
- `Rule 21.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_01.c>`_
* - Rule 21.2
- Required
- A reserved identifier or macro name shall not be declared
- N/A
- `Rule 21.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_02.c>`_
* - Rule 21.3
- Required
- The memory allocation and deallocation functions of <stdlib.h> shall not be used
- `MSC24-C <https://wiki.sei.cmu.edu/confluence/display/c/MSC24-C.+Do+not+use+deprecated+or+obsolescent+functions>`_
- `Rule 21.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_03.c>`_
* - Rule 21.4
- Required
- The standard header file <setjmp.h> shall not be used
- N/A
- `Rule 21.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_04.c>`_
* - Rule 21.6
- Required
- The Standard Library input/output functions shall not be used
- N/A
- `Rule 21.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_06.c>`_
* - Rule 21.7
- Required
- The atof, atoi, atol and atoll functions of <stdlib.h> shall not be used
- N/A
- `Rule 21.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_07.c>`_
* - Rule 21.9
- Required
- The library functions bsearch and qsort of <stdlib.h> shall not be used
- N/A
- `Rule 21.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_09.c>`_
* - Rule 21.11
- Required
- The standard header file <tgmath.h> shall not be used
- N/A
- `Rule 21.11 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_11.c>`_
* - Rule 21.12
- Advisory
- The exception handling features of <fenv.h> should not be used
- N/A
- `Rule 21.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_12.c>`_
* - Rule 21.13
- Mandatory
- Any value passed to a function in <ctype.h> shall be representable as an unsigned char or be the value EOF
- N/A
- `Rule 21.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_13.c>`_
* - Rule 21.14
- Required
- The Standard Library function memcmp shall not be used to compare null terminated strings
- N/A
- `Rule 21.14 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_14.c>`_
* - Rule 21.15
- Required
- The pointer arguments to the Standard Library functions memcpy, memmove and memcmp shall be pointers to qualified or unqualified versions of compatible types
- N/A
- `Rule 21.15 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_15.c>`_
* - Rule 21.16
- Required
- The pointer arguments to the Standard Library function memcmp shall point to either a pointer type, an essentially signed type, an essentially unsigned type, an essentially Boolean type or an essentially enum type
- N/A
- `Rule 21.16 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_16.c>`_
* - Rule 21.17
- Mandatory
- Use of the string handling functions from <string.h> shall not result in accesses beyond the bounds of the objects referenced by their pointer parameters
- N/A
- `Rule 21.17 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_17.c>`_
* - Rule 21.18
- Mandatory
- The size_t argument passed to any function in <string.h> shall have an appropriate value
- N/A
- `Rule 21.18 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_18.c>`_
* - Rule 21.19
- Mandatory
- The pointers returned by the Standard Library functions localeconv, getenv, setlocale or strerror shall only be used as if they have pointer to const-qualified type
- N/A
- `Rule 21.19 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_19.c>`_
* - Rule 21.20
- Mandatory
- The pointer returned by the Standard Library functions asctime, ctime, gmtime, localtime, localeconv, getenv, setlocale or strerror shall not be used following a subsequent call to the same function
- N/A
- `Rule 21.20 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_20.c>`_
* - Rule 22.1
- Required
- All resources obtained dynamically by means of Standard Library functions shall be explicitly released
- N/A
- `Rule 22.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_01.c>`_
* - Rule 22.2
- Mandatory
- A block of memory shall only be freed if it was allocated by means of a Standard Library function
- N/A
- `Rule 22.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_02.c>`_
* - Rule 22.3
- Required
- The same file shall not be open for read and write access at the same time on different streams
- N/A
- `Rule 22.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_03.c>`_
* - Rule 22.4
- Mandatory
- There shall be no attempt to write to a stream which has been opened as read-only
- N/A
- `Rule 22.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_04.c>`_
* - Rule 22.5
- Mandatory
- A pointer to a FILE object shall not be dereferenced
- N/A
- `Rule 22.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_05.c>`_
* - Rule 22.6
- Mandatory
- The value of a pointer to a FILE shall not be used after the associated stream has been closed
- N/A
- `Rule 22.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_06.c>`_
* - Rule 22.7
- Required
- The macro EOF shall only be compared with the unmodified return value from any Standard Library function capable of returning EOF
- N/A
- `Rule 22.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_07.c>`_
* - Rule 22.8
- Required
- The value of errno shall be set to zero prior to a call to an errno-setting-function
- N/A
- `Rule 22.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_08.c>`_
* - Rule 22.9
- Required
- The value of errno shall be tested against zero after calling an errno-setting-function
- N/A
- `Rule 22.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_09.c>`_
* - Rule 22.10
- Required
- The value of errno shall only be tested when the last function to be called was an errno-setting-function
- N/A
- `Rule 22.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_10.c>`_
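To give a concrete feel for the constructs these rules target, here is a small hand-written C fragment (an illustration made up for this document, not taken from the linked MISRA example suite) that complies with Rule 15.6 by writing the body of the ``if`` statement as a compound statement:

```c
/* Compliant with Rule 15.6: the body of the if statement is a
 * compound statement. Writing `if (x < 0) x = 0;` without the
 * braces would violate the rule. */
static int clamp_positive(int x)
{
    if (x < 0) {
        x = 0;
    }
    return x;
}
```

Bracing even single-statement bodies avoids the classic maintenance error of a second statement being added later outside the intended body.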
Additional rules
****************
Rule A.1: Conditional Compilation
=================================
Severity
--------
Required
Description
-----------
Do not conditionally compile function declarations in header files. Do not
conditionally compile structure declarations in header files. You may
conditionally exclude fields within structure definitions to avoid wasting
memory when the feature they support is not enabled.
Rationale
---------
Excluding declarations from the header based on compile-time options may prevent
their documentation from being generated. Their absence also prevents use of
``if (IS_ENABLED(CONFIG_FOO)) {}`` as an alternative to preprocessor
conditionals when the code path should change based on the selected options.
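The alternative can be sketched as follows. This is an illustrative fragment written for this guide: ``CONFIG_FOO`` and the one-line ``IS_ENABLED`` stub are stand-ins so the snippet compiles on its own, whereas in Zephyr the real macro comes from the kernel headers and expands Kconfig symbols:

```c
/* Hypothetical Kconfig option, hard-wired here for illustration. */
#define CONFIG_FOO 1

/* Simplified stand-in for Zephyr's IS_ENABLED() helper. */
#define IS_ENABLED(config) (config)

/* Declared unconditionally (Rule A.1): no #ifdef CONFIG_FOO around
 * the declaration, so documentation generators always see it. */
int foo_process(int value);

int foo_process(int value)
{
    /* Both branches are compiled and type-checked; the optimizer
     * drops the dead branch once CONFIG_FOO is fixed at build time. */
    if (IS_ENABLED(CONFIG_FOO)) {
        return 2 * value;
    }
    return value;
}
```

Compared with wrapping the code in ``#ifdef CONFIG_FOO``, both paths remain visible to the compiler and to documentation tooling even when the option is disabled.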
.. _coding_guideline_inclusive_language:
Rule A.2: Inclusive Language
============================
Severity
--------
Required
Description
-----------
Do not introduce new usage of offensive terms listed below. This rule applies
but is not limited to source code, comments, documentation, and branch names.
Replacement terms may vary by area or subsystem, but should aim to follow
updated industry standards when possible.
Exceptions are allowed for maintaining existing implementations or adding new
implementations of industry standard specifications governed externally to the
Zephyr Project.
It is recommended that existing usage be changed as soon as updated industry
standard specifications become available or new terms are publicly announced by
the governing body, or immediately if no specifications apply.
.. list-table::
:header-rows: 1
* - Offensive Terms
- Recommended Replacements
* - ``{master,leader} / slave``
- - ``{primary,main} / {secondary,replica}``
- ``{initiator,requester} / {target,responder}``
- ``{controller,host} / {device,worker,proxy,target}``
- ``director / performer``
- ``central / peripheral``
* - ``blacklist / whitelist``
- * ``denylist / allowlist``
* ``blocklist / allowlist``
* ``rejectlist / acceptlist``
* - ``grandfather policy``
- * ``legacy``
* - ``sanity``
- * ``coherence``
* ``confidence``
Rationale
---------
Offensive terms do not create an inclusive community environment and therefore
violate the Zephyr Project `Code of Conduct`_. This coding rule was inspired by
a similar rule in `Linux`_.
.. _Code of Conduct: https://github.com/zephyrproject-rtos/zephyr/blob/main/CODE_OF_CONDUCT.md
.. _Linux: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=49decddd39e5f6132ccd7d9fdc3d7c470b0061bb
Status
------
Related GitHub Issues and Pull Requests are tagged with the `Inclusive Language Label`_.
.. list-table::
:header-rows: 1
* - Area
- Selected Replacements
- Status
* - :ref:`bluetooth_api`
- See `Bluetooth Appropriate Language Mapping Tables`_
-
* - CAN
- This `CAN in Automation Inclusive Language news post`_ has a list of general
recommendations. See `CAN in Automation Inclusive Language`_ for terms to
be used in specification document updates.
-
* - eSPI
- * ``master / slave`` => TBD
-
* - gPTP
- * ``master / slave`` => TBD
-
* - :ref:`i2c_api`
- * ``master / slave`` => TBD
- NXP publishes the `I2C Specification`_ and has selected ``controller /
target`` as replacement terms, but the timing to publish an announcement
or new specification is TBD. Zephyr will update I2C when replacement
terminology is confirmed by a public announcement or updated
specification.
See :github:`Zephyr issue 27033 <27033>`.
* - :ref:`i2s_api`
- * ``master / slave`` => TBD
-
* - SMP/AMP
- * ``master / slave`` => TBD
-
* - :ref:`spi_api`
- * ``master / slave`` => ``controller / peripheral``
* ``MOSI / MISO / SS`` => ``SDO / SDI / CS``
- The Open Source Hardware Association has selected these replacement
terms. See `OSHWA Resolution to Redefine SPI Signal Names`_
* - :ref:`twister_script`
- * ``platform_whitelist`` => ``platform_allow``
* ``sanitycheck`` => ``twister``
-
.. _Inclusive Language Label: https://github.com/zephyrproject-rtos/zephyr/issues?q=label%3A%22Inclusive+Language%22
.. _I2C Specification: https://www.nxp.com/docs/en/user-guide/UM10204.pdf
.. _Bluetooth Appropriate Language Mapping Tables: https://btprodspecificationrefs.blob.core.windows.net/language-mapping/Appropriate_Language_Mapping_Table.pdf
.. _OSHWA Resolution to Redefine SPI Signal Names: https://www.oshwa.org/a-resolution-to-redefine-spi-signal-names/
.. _CAN in Automation Inclusive Language news post: https://www.can-cia.org/news/archive/view/?tx_news_pi1%5Bnews%5D=699&tx_news_pi1%5Bday%5D=6&tx_news_pi1%5Bmonth%5D=12&tx_news_pi1%5Byear%5D=2020&cHash=784e79eb438141179386cf7c29ed9438
.. _CAN in Automation Inclusive Language: https://can-newsletter.org/canopen/categories/
Parasoft Codescan Tool
**********************
Parasoft Codescan is an official static code analysis tool used by the Zephyr
project. It is used to automate compliance with a range of coding and security
standards.
The tool is currently set to the MISRA-C:2012 Coding Standard because the Zephyr
:ref:`coding_guidelines` are based on that standard.
It is used together with the Coverity Scan tool to achieve the best code health
and precision in bug findings.
Violations fixing process
=========================
Step 1
Any Zephyr Project member, company, or developer can request access
to the Parasoft reporting centre if they wish to get involved in fixing
violations by submitting issues.
Step 2
A developer starts to review violations.
Step 3
A developer submits a GitHub PR with the fix. Commit messages should follow
the same guidelines as other PRs in the Zephyr project. Please add a comment
mentioning that your fix was found by a static coding scanning tool.
Developers should follow and refer to the Zephyr :ref:`coding_guidelines`
as basic rules for coding. These rules are based on the MISRA-C standard.
Below you can find an example of a recommended commit message::
lib: os: add braces to 'if' statements
An 'if' (expression) construct shall be followed by a compound statement.
Add braces to improve readability and maintainability.
Found as a coding guideline violation (Rule 15.6) by static
coding scanning tool.
Signed-off-by: Johnny Developer <johnny.developer@company.com>
Step 4
If a violation is a false positive, the developer should mark it for the Codescan
tool just like they would do for the Coverity tool.
The developer should also add a comment to the code explaining that
the violation raised by the static code analysis tool should be considered a
false positive.
Step 5
If the developer has found a real violation that the community has decided to
ignore, the developer must submit a PR with a suppression tag and a comment
explaining why the deviation from the standard is justified.
The template structure of the comment and tag in the code should be::
/* Explain why that part of the code doesn't follow the standard,
* explain why it is a deliberate deviation from the standard.
* Don't refer to the Parasoft tool here, just mention that static code
* analysis tool raised a violation in the line below.
*/
code_line_with_a_violation /* parasoft-suppress Rule ID */
Below you can find an example of a recommended commit message::
testsuite: suppress usage of setjmp in a testcode (rule 21.4)
According to the Rule 21.4 the standard header file <setjmp.h> shall not
be used. We will suppress this violation because it is in
test code. Tag suppresses reporting of the violation for the
line where the violation is located.
This is a deliberate deviation.
Found as a coding guideline violation (Rule 21.4) by static coding
scanning tool.
Signed-off-by: Johnny Developer <johnny.developer@company.com>
The example below demonstrates how deviations can be suppressed in the code::
/* Static code analysis tool can raise a violation that the standard
* header <setjmp.h> shall not be used.
* Since this violation is in test code, we will suppress it.
* Deliberate deviation.
*/
#include <setjmp.h> /* parasoft-suppress MISRAC2012-RULE_21_4-a MISRAC2012-RULE_21_4-b */
The variant above suppresses the items ``MISRAC2012-RULE_21_4-a`` and ``MISRAC2012-RULE_21_4-b``
on the line with the ``setjmp.h`` header include. You can add as many rules to suppress as you
want - just make sure to keep the Parasoft tag on one line and separate the rules with a space.
To read more about suppressing findings in the Parasoft tool, refer to the
official Parasoft `documentation`_.
.. _documentation: https://docs.parasoft.com/display/CPPTEST1031/Suppressing+Findings
Step 6
After a PR is submitted, the developer should add the ``Coding guidelines``
and ``MISRA-C`` GitHub labels so their PR can be easily tracked by maintainers.
If you have any concerns about what your PR should look like, you can search
on GitHub using those tags and refer to similar PRs that have already been merged.
.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
.. _howto/operator:Cross-DAG Dependencies:
Cross-DAG Dependencies
======================
When two DAGs have dependency relationships, it is worth considering combining them into a single
DAG, which is usually simpler to understand. Airflow also offers better visual representation of
dependencies for tasks on the same DAG. However, it is sometimes not practical to put all related
tasks on the same DAG. For example:
- Two DAGs may have different schedules. E.g. a weekly DAG may have tasks that depend on other tasks
on a daily DAG.
- Different teams are responsible for different DAGs, but these DAGs have some cross-DAG
dependencies.
- A task may depend on another task on the same DAG, but for a different ``execution_date``.
``ExternalTaskSensor`` can be used to establish such dependencies across different DAGs. When it is
used together with ``ExternalTaskMarker``, clearing dependent tasks can also happen across different
DAGs.
ExternalTaskSensor
^^^^^^^^^^^^^^^^^^
Use the :class:`~airflow.sensors.external_task_sensor.ExternalTaskSensor` to make tasks on a DAG
wait for another task on a different DAG for a specific ``execution_date``.
``ExternalTaskSensor`` also provides options to check whether the task on the remote DAG
succeeded or failed via the ``allowed_states`` and ``failed_states`` parameters.
.. exampleinclude:: /../airflow/example_dags/example_external_task_marker_dag.py
:language: python
:start-after: [START howto_operator_external_task_sensor]
:end-before: [END howto_operator_external_task_sensor]
ExternalTaskMarker
^^^^^^^^^^^^^^^^^^
If it is desirable that whenever ``parent_task`` on ``parent_dag`` is cleared, ``child_task1``
on ``child_dag`` for a specific ``execution_date`` should also be cleared, ``ExternalTaskMarker``
should be used. Note that ``child_task1`` will only be cleared if "Recursive" is selected when the
user clears ``parent_task``.
.. exampleinclude:: /../airflow/example_dags/example_external_task_marker_dag.py
:language: python
:start-after: [START howto_operator_external_task_marker]
:end-before: [END howto_operator_external_task_marker]
Getting started with the OTCExtensions SDK
==========================================
Please note that OTCExtensions provides an extension to the OpenStackSDK.
Please refer to the `OpenStackSDK documentation <https://docs.openstack.org/openstacksdk/latest/>`_ for the details.
Installation
------------
The OTCExtensions SDK is available on
`GitHub <https://github.com/OpenTelekomCloud/python-otcextensions.git>`_.
To install it, use ``pip``::
$ pip install otcextensions
.. _user_guides:
User Guides
-----------
These guides walk you through how to make use of the libraries we provide
to work with each OpenStack service. If you're looking for a cookbook
approach, this is where you'll want to begin.
.. toctree::
:maxdepth: 1
Plain-simple connect to OTC <guides/connect_otc>
Configuration <config/index>
Connect to an OpenStack Cloud Using a Config File <https://docs.openstack.org/openstacksdk/latest/user/guides/connect_from_config>
Using the Cloud Abstraction Layer <https://docs.openstack.org/openstacksdk/latest/user/usage>
Logging <guides/logging>
Microversions <https://docs.openstack.org/openstacksdk/latest/user/microversions>
Block Storage <https://docs.openstack.org/openstacksdk/latest/user/guides/block_storage>
Compute <https://docs.openstack.org/openstacksdk/latest/user/guides/compute>
Identity <https://docs.openstack.org/openstacksdk/latest/user/guides/identity>
Image <https://docs.openstack.org/openstacksdk/latest/user/guides/image>
Key Manager <https://docs.openstack.org/openstacksdk/latest/user/guides/key_manager>
Message <https://docs.openstack.org/openstacksdk/latest/user/guides/message>
Network <https://docs.openstack.org/openstacksdk/latest/user/guides/network>
Object Store <https://docs.openstack.org/openstacksdk/latest/user/guides/object_store>
Orchestration <https://docs.openstack.org/openstacksdk/latest/user/guides/orchestration>
RDS <guides/rds>
OBS <guides/obs>
AutoScaling <guides/auto_scaling>
Volume Backup <guides/volume_backup>
Dedicated Host <guides/deh>
API Documentation
-----------------
OpenStackSDK API documentation is available at <https://docs.openstack.org/openstacksdk/latest/user/index.html#api-documentation>
Service APIs are exposed through a two-layered approach. The classes
exposed through our `Connection Interface`_ are
the place to start if you're an application developer consuming an OpenStack
cloud. The `Resource Interface`_ is the layer upon which the
`Connection Interface`_ is built, with methods on `Service Proxies`_ accepting
and returning :class:`~openstack.resource.Resource` objects.
The Cloud Abstraction layer has a data model.
.. toctree::
:maxdepth: 1
model
Connection Interface
~~~~~~~~~~~~~~~~~~~~
A :class:`~openstack.connection.Connection` instance maintains your cloud
config, session and authentication information providing you with a set of
higher-level interfaces to work with OpenStack services.
.. toctree::
:maxdepth: 1
connection
Once you have a :class:`~openstack.connection.Connection` instance, services
are accessed through instances of :class:`~openstack.proxy.Proxy` or
subclasses of it that exist as attributes on the
:class:`~openstack.connection.Connection`.
.. autoclass:: openstack.proxy.Proxy
:members:
.. _service-proxies:
Service Proxies
~~~~~~~~~~~~~~~
The following service proxies exist on the
:class:`~openstack.connection.Connection`. The service proxies are all always
present on the :class:`~openstack.connection.Connection` object, but the
combination of your ``CloudRegion`` and the catalog of the cloud in question
control which services can be used.
.. toctree::
:maxdepth: 1
Block Storage <https://docs.openstack.org/openstacksdk/latest/user/proxies/block_storage>
Compute <https://docs.openstack.org/openstacksdk/latest/user/proxies/compute>
Database <https://docs.openstack.org/openstacksdk/latest/user/proxies/database>
Identity v2 <https://docs.openstack.org/openstacksdk/latest/user/proxies/identity_v2>
Identity v3 <https://docs.openstack.org/openstacksdk/latest/user/proxies/identity_v3>
Image v1 <https://docs.openstack.org/openstacksdk/latest/user/proxies/image_v1>
Image v2 <https://docs.openstack.org/openstacksdk/latest/user/proxies/image_v2>
Key Manager <https://docs.openstack.org/openstacksdk/latest/user/proxies/key_manager>
Load Balancer <https://docs.openstack.org/openstacksdk/latest/user/proxies/load_balancer_v2>
Message v2 <https://docs.openstack.org/openstacksdk/latest/user/proxies/message_v2>
Network <https://docs.openstack.org/openstacksdk/latest/user/proxies/network>
Object Store <https://docs.openstack.org/openstacksdk/latest/user/proxies/object_store>
Orchestration <https://docs.openstack.org/openstacksdk/latest/user/proxies/orchestration>
Workflow <https://docs.openstack.org/openstacksdk/latest/user/proxies/workflow>
Anti DDoS Service <proxies/anti_ddos>
AutoScaling Service <proxies/auto_scaling>
Cloud Container Engine v1 <proxies/cce_v1>
Cloud Container Engine v2 <proxies/cce_v3>
Cloud Trace Service <proxies/cts>
Distributed Cache Service <proxies/dcs>
Dedicated Host Service <proxies/deh>
Distributed Message Service <proxies/dms>
DNS Service <proxies/dns>
Key Management Service <proxies/kms>
Object Storage Service <proxies/obs>
Volume Backup Service <proxies/volume_backup>
RDS <proxies/rds>
Resource Interface
~~~~~~~~~~~~~~~~~~
The *Resource* layer is a lower-level interface to
communicate with OpenStack services. While the classes exposed by the
`Service Proxies`_ build a convenience layer on top of
this, :class:`~openstack.resource.Resource` objects can be
used directly. However, the most common usage of this layer is in receiving
an object from a class in the `Connection Interface`_, modifying it, and
sending it back to the `Service Proxies`_ layer, such as to update a resource
on the server.
The following services have exposed :class:`~openstack.resource.Resource`
classes.
.. toctree::
:maxdepth: 1
Baremetal <https://docs.openstack.org/openstacksdk/latest/user/resources/baremetal/index>
Block Storage <https://docs.openstack.org/openstacksdk/latest/user/resources/block_storage/index>
Clustering <https://docs.openstack.org/openstacksdk/latest/user/resources/clustering/index>
Compute <https://docs.openstack.org/openstacksdk/latest/user/resources/compute/index>
Database <https://docs.openstack.org/openstacksdk/latest/user/resources/database/index>
Identity <https://docs.openstack.org/openstacksdk/latest/user/resources/identity/index>
Image <https://docs.openstack.org/openstacksdk/latest/user/resources/image/index>
Key Management <https://docs.openstack.org/openstacksdk/latest/user/resources/key_manager/index>
Load Balancer <https://docs.openstack.org/openstacksdk/latest/user/resources/load_balancer/index>
Network <https://docs.openstack.org/openstacksdk/latest/user/resources/network/index>
Orchestration <https://docs.openstack.org/openstacksdk/latest/user/resources/orchestration/index>
Object Store <https://docs.openstack.org/openstacksdk/latest/user/resources/object_store/index>
Workflow <https://docs.openstack.org/openstacksdk/latest/user/resources/workflow/index>
Anti DDoS Service <resources/anti_ddos/index>
AutoScaling Service <resources/auto_scaling/index>
DNS Service <resources/dns/index>
Cloud Container Engine <resources/cce/index>
Cloud Trace Service <resources/cts/index>
Distributed Cache Service <resources/dcs/index>
Dedicated Host Service <resources/deh/index>
Distributed Message Service <resources/dms/index>
Key Management Service <resources/kms/index>
Object Storage Service <resources/obs/index>
RDS <resources/rds/index>
Low-Level Classes
~~~~~~~~~~~~~~~~~
The following classes are not commonly used by application developers,
but are used to construct applications to talk to OpenStack APIs. Typically
these parts are managed through the `Connection Interface`_, but their use
can be customized.
.. toctree::
:maxdepth: 1
resource
utils
Presentations
=============
.. toctree::
:maxdepth: 1
multi-cloud-demo
.. _background_label:
Clinical Background
===================
In the MIALab, we are segmenting structures of the human brain. We are thus focusing on the most prominent medical imaging analysis (MIA) task, segmentation, and doing it in the most prominent area in MIA, the human brain, on magnetic resonance (MR) images.
Segmenting brain structures from MR images is important, e.g., for the tracking of progression in neurodegenerative diseases by the atrophy of brain tissue [1]_. Performing the segmentation task manually is very time-consuming, user-dependent, and costly [2]_. Think about being a neuroradiologist who needs to segment the brain of every scanned patient.
This is why we aim for an automated approach based on machine learning (ML).
The aim of the pipeline is to classify each voxel of a brain MR image in one of the following classes:
- 0: Background (or any structure other than the ones listed below)
- 1: Cortical and cerebellar white matter
- 2: Cerebral and cerebellar cortex / grey matter
- 3: Hippocampus
- 4: Amygdala
- 5: Thalamus
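As a toy illustration (plain Python, not part of the MIALab pipeline), the class list above can be captured as a simple lookup table and used to count voxels per class; the shortened structure names are for readability only:

```python
# Mapping from voxel label values to the brain structures listed above.
BRAIN_LABELS = {
    0: "Background",
    1: "White matter",
    2: "Grey matter",
    3: "Hippocampus",
    4: "Amygdala",
    5: "Thalamus",
}

def class_frequencies(label_image):
    """Count how often each class occurs in a flat iterable of voxel labels."""
    counts = {name: 0 for name in BRAIN_LABELS.values()}
    for voxel in label_image:
        counts[BRAIN_LABELS[voxel]] += 1
    return counts

# Toy 1-D "label image": mostly background plus a few structure voxels.
toy_labels = [0, 0, 1, 2, 2, 3, 0, 5]
print(class_frequencies(toy_labels))
```

Such per-class frequencies are also what heavily imbalanced brain label images look like in practice: background dominates the voxel count.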
An example sagittal image slice is shown in the figure below, where the label image (reference segmentation referred to as ground truth or simply labels) is shown next to the two available MR images (T1-weighted and T2-weighted).
.. image:: pics/background.png
References
----------
.. [1] Pereira, S., Pinto, A., Oliveira, J., Mendrik, A. M., Correia, J. H., Silva, C. A.: Automatic brain tissue segmentation in MR images using Random Forests and Conditional Random Fields. Journal of Neuroscience Methods 270, 111-123, (2016). https://doi.org/10.1016/j.jneumeth.2016.06.017
.. [2] Porz, N., Bauer, S., Pica, A., Schucht, P., Beck, J., Verma, R.K., Slotboom, J., Reyes, M., Wiest, R.: Multi-Modal Glioblastoma Segmentation: Man versus Machine. PLoS ONE 9(5), (2014). https://doi.org/10.1371/journal.pone.0096873
Introduction
!!!!!!!!!!!!
This page contains instructions for generating GCD files for use in simulation
production. Below is a list of MJDs for the various seasons.
* 2015 - MJD = 57161
* 2014 - MJD = 56784
* 2013 - MJD = 56429
* 2012 - MJD = 56063
* 2011 - MJD = 55697
* IC79 - MJD = 55380
* IC59 - MJD = 55000
* IC40 - MJD = 54649
Generation
@@@@@@@@@@
* Run the script **sim-services/resources/gcd_validation/generate_gcd.py --season=2014**.
This script pulls from the DB, makes all the corrections needed, and tests and validates
the resulting GCD file as well, all in one go.
The output:
* GeoCalibDetectorStatus_<SEASON>.<MJD>_V<VERSION>.i3.gz - This is **the** GCD file.
* GeoCalibDetectorStatus_<SEASON>.<MJD>_V<VERSION>.log - The logfile.
Details
@@@@@@@
The generation script runs several scripts, all of which can be found in **sim-services/resources/gcd_validation/details/**.
* **generate_gcd_snapshot.py** - This pulls the data from the I3OmDb database.
* **correct_GCD.py**
- PMT Thresholds - Correct negative thresholds.
- RDE (Relative DOM Efficiencies) - Set NaN RDEs to either 1.35(1.0) for High(Low) QE DOMs.
- Low Noise DOMs - For DOMs with noise rates below 400Hz add 1kHz.
- Clean out any triggers that aren't InIce or IceTop or of the following types
+ SIMPLE_MULTIPLICITY
+ STRING
+ VOLUME
+ SLOW_PARTICLE
- Remove any AMANDA OMs.
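The per-DOM corrections above amount to a few simple rules. The following is an illustrative stand-in in plain Python (not the actual **correct_GCD.py** script; in particular, clamping negative thresholds to zero is an assumption made here for the sketch):

```python
import math

HIGH_QE_RDE = 1.35  # nominal relative efficiency assigned to high-QE DOMs
LOW_QE_RDE = 1.0    # nominal relative efficiency assigned to low-QE DOMs

def correct_rde(rde, is_high_qe):
    """Replace a NaN relative DOM efficiency (RDE) with the nominal value."""
    if math.isnan(rde):
        return HIGH_QE_RDE if is_high_qe else LOW_QE_RDE
    return rde

def correct_noise_rate(rate_hz):
    """Add 1 kHz to implausibly low noise rates (below 400 Hz)."""
    return rate_hz + 1000.0 if rate_hz < 400.0 else rate_hz

def correct_threshold(threshold):
    """Negative PMT thresholds are unphysical; clamp them to zero here."""
    return max(threshold, 0.0)

print(correct_rde(float("nan"), is_high_qe=True))  # 1.35
print(correct_noise_rate(250.0))                   # 1250.0
```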
* **test_GCD.py** - Makes a quick pass over the GCD file and performs some simple checks. It runs the following tests (found in their respective projects).
- photonics_hit_maker_test
- vuvuzela_test
- pmt_response_sim_test
- dom_launcher_test
* **generate_IC86_stress_test_samples.py** - Injects 20 PEs in each DOM spread uniformly over a 2 microsecond window.
* **validate_stress_test_samples.py** - Checks the file that was generated by the previous script.
- Verifies that the number of PEs is about what's expected (~20). Currently the range is pretty large and only flags an error message if the charge is less than 20% of expected or greater than a factor of 2. We should be able to do better.
- Checks that there are no LC bits set for SLC-only DOMs.
- Checks that good DOMs do have hits, as expected.
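The charge check described above reduces to a window test around the injected 20 PE; a minimal sketch of that acceptance rule (illustrative only, not the actual validation script):

```python
def charge_in_expected_range(charge_pe, expected_pe=20.0):
    """Accept a DOM's charge if it lies within [20%, 200%] of the expected PE count."""
    return 0.2 * expected_pe <= charge_pe <= 2.0 * expected_pe

print(charge_in_expected_range(20.0))  # True: exactly the injected 20 PE
print(charge_in_expected_range(3.0))   # False: below 20% of expected
print(charge_in_expected_range(45.0))  # False: above a factor of 2
```

As the text notes, this window is deliberately loose and could be tightened.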
Often, many of the DOMs above that are flagged as "good" but produce no hits
really are bad DOMs and should be added to the bad DOM list. To be sure, check
out http://wiki.icecube.wisc.edu/index.php/Problem_DOMs and verify that they are in
fact bad. Please report this to the simulation group as well.
Changelog
============
1.6 (unreleased)
----------------
- Completely switched to tox to release as a packaged app.
- Lifted youtube_dl dependency to latest version.
1.5 (2016-11-20)
----------------
- Make it actually work. Various fixes to make the code behave like before.
1.4 (2016-11-20)
----------------
- Switch to PPIC as underlying base.
* Stefan Rijnhart
* Leonardo Donelli <donelli@webmonks.it>
* Jay Vora <jay.vora@serpentcs.com>
* Meet Dholakia <m.dholakia.serpentcs@gmail.com>
* Nikul Chaudhary <nikul.chaudhary.serpentcs@gmail.com>
* Phuc Tran Thanh <phuc@trobz.com>
* Tharathip Chaweewongphan <tharathipc@ecosoft.co.th>
This data is from https://www.kaggle.com/
"House Prices: Advanced Regression Techniques":
https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data
Add TA65 to ThingsBoard
==========================
Add devices (TA65 thermostat) to ThingsBoard.
.. tip::
Two devices are added in this section, TA65-FC-TB and TA65-FH-TB. You may add only one device, such as TA65-FC-TB.
Step 1. Login
-------------
- Open your ThingsBoard website in your browser.
- Tenant Administrator login ThingsBoard: tenant@thingsboard.org / tenant.
.. image:: ../_static/intro/add_ta65_to_thingsboard/tenant_login.png
:width: 50 %
The default user name and password are shown in the following table:
.. table::
:widths: auto
========== ===========
Field Value
========== ===========
Username tenant@thingsboard.org
Password tenant
========== ===========
Step 2. Add device
------------------
**Devices** --> **+** --> **Add new device** --> **Popup Dialog** --> **Input** --> **Add**.
.. image:: ../_static/intro/add_ta65_to_thingsboard/add_devices_a.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/add_devices_b.png
.. table::
:widths: auto
============ ========================= ==========
Field Device A Device B
============ ========================= ==========
Name* TA65-FC-TB TA65-FH-TB
Device type* TA65-FC-TB TA65-FH-TB
Label AVANTEC Headquarters Avantec Manufacturing Plant
Description A Thermostat for fan-coil A Thermostat for floor-heating
============ ========================= ==========
.. note::
The field with * must be filled in.
.. _copy-credentials-of-new-device:
Step 3. Copy credentials of new device
--------------------------------------
**Devices** --> **Manage credentials (icon)** --> **Popup Dialog** --> **Copy Access Token** --> **Select Access Token** --> Ctrl + C.
.. image:: ../_static/intro/add_ta65_to_thingsboard/copy_credentials.png
.. tip::
This is the credential (Access Token) that you will need when configuring your hardware, for example j9JiCkID9E7uE1WhKxnc or lMTQLZ7VSRQSD7ls.
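With ThingsBoard's MQTT device API, the device presents this Access Token as the MQTT user name (empty password) and publishes telemetry to the fixed topic ``v1/devices/me/telemetry`` as a JSON document. The sketch below only builds the connection parameters and payload (standard-library Python; host, token, and field names are placeholders, not values from your installation):

```python
import json

THINGSBOARD_HOST = "192.168.21.222"    # placeholder: your ThingsBoard server
ACCESS_TOKEN = "j9JiCkID9E7uE1WhKxnc"  # placeholder: the token copied in this step

# ThingsBoard's MQTT device API: the access token is used as the MQTT
# user name (empty password); telemetry goes to this fixed topic.
TELEMETRY_TOPIC = "v1/devices/me/telemetry"

def telemetry_payload(temperature_c, humidity_pct):
    """Encode one telemetry sample the way a TA65-style device might."""
    return json.dumps({"temperature": temperature_c, "humidity": humidity_pct})

payload = telemetry_payload(21.5, 48)
print(TELEMETRY_TOPIC, payload)
```

An actual device (or an MQTT client library) would connect to ``THINGSBOARD_HOST`` on the MQTT port and publish ``payload`` to ``TELEMETRY_TOPIC``.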
Step 4. Add shared attributes of new device
-------------------------------------------
**Devices** --> **New device (TA65-FC-TB or TA65-FH-TB)** --> **Attributes** --> **Shared attributes** --> **+** --> **Popup Dialog** --> **Input Key, Value type & value** --> **Add**.
.. image:: ../_static/intro/add_ta65_to_thingsboard/add_shared_attributes_of_device.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/shared_attributes_list.png
The following Shared attributes of the two devices, TA65-FC-TB and TA65-FH-TB, are identical.
.. _add-shared-attributes-of-new-device-cloudhost:
.. table:: Add shared attributes of new device
:widths: 15, 10, 15, 50
============= =========== ================ =========================================
Key* Value Type* Value* Memo
============= =========== ================ =========================================
cloudHost String | mqtt://\ | **Please replace THINGSBOARD_IP**
| THINGSBOARD_IP | **with your value**.
| This ThingsBoard Server's MQTT URL,
| It must begin with "mqtt://", such as
| mqtt://192.168.21.222
uploadFreq Integer 120 Telemetry per uploadFreq seconds
syncTimeFreq Integer 1800 Sync time per syncTimeFreq seconds
timezone Integer 480 | **Please replace with your value**.
| The time offset from UTC, minutes.
| For example Hong Kong is UTC+8:00 time
| zone, this offset is 480 minutes (8*60)
timeNTPServer String pool.ntp.org | SNTP Server URL, eg: pool.ntp.org,
| 0.pool.ntp.org, 1.pool.ntp.org,
| time.nist.gov, …
============= =========== ================ =========================================
.. note::
The field with * must be filled in.
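The ``timezone`` attribute is simply the UTC offset expressed in minutes, as the table's memo column describes. A quick sketch of that conversion (standard-library Python; the non-Hong-Kong offsets are just extra examples):

```python
from datetime import timedelta

def utc_offset_minutes(hours, minutes=0):
    """Convert a UTC offset such as +8:00 into the minutes value used above."""
    return int(timedelta(hours=hours, minutes=minutes).total_seconds() // 60)

print(utc_offset_minutes(8))      # Hong Kong, UTC+8:00 -> 480
print(utc_offset_minutes(5, 30))  # UTC+5:30 -> 330
print(utc_offset_minutes(-5))     # UTC-5:00 -> -300
```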
Step 5. Add asset
-----------------
**Note**: You can skip this step if your asset is already in ThingsBoard.
**Assets** --> **+** --> **Add new asset** --> **Popup dialog** --> **Input name & asset type** --> **Add**.
.. image:: ../_static/intro/add_ta65_to_thingsboard/add_asset.png
.. table::
:widths: auto
============ ============
Type Assets
============ ============
Name* Building X
Asset type* building
Label
Description
============ ============
.. note::
The field with * must be filled in.
Step 6. Add device to asset
---------------------------
Add two devices to the Building X: **Assets** --> **Building X** --> **Relations** --> **Direction: From** --> **+** --> **Popup dialog** --> **Input relation type, to entity type & entity list** --> **Add**.
.. image:: ../_static/intro/add_ta65_to_thingsboard/add_device_to_asset_a.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/add_device_to_asset_b.png
.. table::
:widths: auto
========== ============== ============== ========
Direction* Relation Type* To entityType* Device*
========== ============== ============== ========
From Contains Device TA65-FC-TB
From Contains Device TA65-FH-TB
========== ============== ============== ========
.. note::
The field with * must be filled in.
Step 7. Import Avantec Widgets
------------------------------
.. tip::
avantec_widgets.json can only be imported once; if you have already imported it, you cannot repeat the import and can skip this step.
**Widgets Library** --> **+** --> **Popup dialog** --> **Select File: avantec_widgets.json** --> **Import**.
See :download:`avantec_widgets.json <../_static/intro/thingsboard_extension/avantec_widgets.json>`.
.. image:: ../_static/intro/add_ta65_to_thingsboard/import_widgets_bundle.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/avantec_widgets.png
Step 8. Avantec Dashboard
-------------------------
Step 8.1. Import Avantec Dashboard (Option)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. tip::
avantec_dashboard.json can only be imported once; if you have already imported it, you cannot repeat the import and can skip this step.
**Dashboards** --> **+** --> **Popup dialog: Import dashboard** --> **Select File: avantec_dashboard.json** --> **Import** --> **Popup dialog: Configure aliases used by imported dashboard** --> **Edit alias (icon)** --> **Popup dialog: Edit alias** --> **Input Fields: ...** --> **Save**.
See :download:`avantec_dashboard.json <../_static/intro/thingsboard_extension/avantec_dashboard.json>`.
.. image:: ../_static/intro/add_ta65_to_thingsboard/import_dashboard_a.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/import_dashboard_b.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/import_dashboard_c.png
.. table::
:widths: auto
============================== =====================
Field Value
============================== =====================
Alias name* Thermostats
Resolve as multiple entities* TRUE
Filter type* Device search query
Type* Asset
Asset* Building X
Relation type* Contains
Device types* TA65-FC-TB, TA65-FH-TB
============================== =====================
Step 8.2. Edit Avantec Dashboard
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. tip::
avantec_dashboard.json can only be imported once; if you have already imported it, you may skip Step 8.1.
Instead, you can edit the imported dashboard, for example by modifying the alias to add a new device.
**Dashboards** --> **Open dashboard (icon)** --> **New Dashboard: Avantec Dashboard** --> **Edit (red icon on the bottom right)** --> **Edit Dashboard Mode** --> **Entity aliases (icon on the top right)** --> **Popup dialog: Entity aliases** --> **Edit alias (icon)** --> **Popup dialog: Edit alias** --> **Modify Fields: ...** --> **Save**.
.. image:: ../_static/intro/add_ta65_to_thingsboard/edit_dashboard_a.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/edit_dashboard_b.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/edit_dashboard_c.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/edit_dashboard_d.png
Step 9. Open Avantec Dashboard
------------------------------
**Dashboards** --> **Open dashboard(icon) in the line of Avantec Dashboard** --> **New Dashboard: Avantec Dashboard** --> **Click this line of TA65-FC-TB**.
.. image:: ../_static/intro/add_ta65_to_thingsboard/open_dashboard_a.png
.. image:: ../_static/intro/add_ta65_to_thingsboard/open_dashboard_b.png
.. _ar4_port_ParameterComSpec:
ParameterComSpec
=================
TBD
.. _cmake_find_package_multi_generator_reference:
`cmake_find_package_multi`
==========================
.. warning::
This is an **experimental** feature subject to breaking changes in future releases.
.. container:: out_reference_box
This is the reference page for ``cmake_find_package_multi`` generator.
Go to :ref:`Integrations/CMake<cmake>` if you want to learn how to integrate your project or recipes with CMake.
Generated files
---------------
For each Conan package in your graph, it will generate two files, plus one more per ``build_type``,
where {name} is the package name:
+------------------------------------+--------------------------------------------------------------------------------------+
| NAME | CONTENTS |
+====================================+======================================================================================+
| {name}Config.cmake | It includes the {name}Targets.cmake and call find_dependency for each dep |
+------------------------------------+--------------------------------------------------------------------------------------+
| {name}Targets.cmake | It includes the following files |
+------------------------------------+--------------------------------------------------------------------------------------+
| {name}Targets-debug.cmake | Specific information for the Debug configuration |
+------------------------------------+--------------------------------------------------------------------------------------+
| {name}Targets-release.cmake | Specific information for the Release configuration |
+------------------------------------+--------------------------------------------------------------------------------------+
| {name}Targets-relwithdebinfo.cmake | Specific information for the RelWithDebInfo configuration |
+------------------------------------+--------------------------------------------------------------------------------------+
| {name}Targets-minsizerel.cmake | Specific information for the MinSizeRel configuration |
+------------------------------------+--------------------------------------------------------------------------------------+
Targets
-------
A target named ``{name}::{name}`` is generated with the following properties adjusted:
- ``INTERFACE_INCLUDE_DIRECTORIES``: Containing all the include directories of the package.
- ``INTERFACE_LINK_LIBRARIES``: Library paths to link.
- ``INTERFACE_COMPILE_DEFINITIONS``: Definitions of the library.
The targets contain multi-configuration properties; for example, the compile options
property is declared like this:
.. code-block:: cmake
set_property(TARGET {name}::{name}
PROPERTY INTERFACE_COMPILE_OPTIONS
$<$<CONFIG:Release>:${{{name}_COMPILE_OPTIONS_RELEASE_LIST}}>
$<$<CONFIG:RelWithDebInfo>:${{{name}_COMPILE_OPTIONS_RELWITHDEBINFO_LIST}}>
$<$<CONFIG:MinSizeRel>:${{{name}_COMPILE_OPTIONS_MINSIZEREL_LIST}}>
$<$<CONFIG:Debug>:${{{name}_COMPILE_OPTIONS_DEBUG_LIST}}>)
The targets are also transitive. So, if your project depends on packages ``A`` and ``B``, and at the same time
``A`` depends on ``C``, the ``A`` target will automatically contain the properties of the ``C`` dependency, so
in your `CMakeLists.txt` file you only need to call ``find_package(A)`` and ``find_package(B)``.
The `{name}Config.cmake` file will be found by the cmake ``find_package(XXX)`` function if the directory where the file is generated
is listed in the `CMAKE_PREFIX_PATH <https://cmake.org/cmake/help/v3.0/variable/CMAKE_PREFIX_PATH.html>`_.
Changelog
=========
1.11
----
#. Django 1.11 compatibility.
#. Deprecated support for versions below Django 1.9.
1.9
---
#. Django 1.9 compatibility.
1.0.3
-----
#. Fixed Django 1.4 templatetag issues.
1.0.2
-----
#. Django 1.7 compatibility.
1.0.1
-----
#. Fixed compatibility issues with Django 1.5+ url templatetags.
1.0.0
-----
#. Fixed compatibility issues with newer versions of Django. This however may not be
backward compatible with versions of Django earlier than 1.4.
0.0.7
-----
#. Pass context to object_tools tag. Thanks `slafs <https://github.com/slafs>`_
0.0.6
-----
#. Corrected 'str' object has no attribute 'has_perm' bug `#7 <https://github.com/praekelt/django-export/issues/7>`_.
0.0.5
-----
#. Remove usage of 'ADMIN_STATIC_PREFIX' in templates for move to Django 1.4.
0.0.4
-----
#. Better packaging.
0.0.3 (2011-09-15)
------------------
#. Correctly resolve title.
0.0.1 (2011-07-22)
------------------
#. Initial release.
PennyLane Cirq Plugin
#########################
.. image:: https://img.shields.io/travis/com/XanaduAI/pennylane-cirq/master.svg
:alt: Travis
:target: https://travis-ci.com/XanaduAI/pennylane-cirq
.. image:: https://img.shields.io/codecov/c/github/xanaduai/pennylane-cirq/master.svg
:alt: Codecov coverage
:target: https://codecov.io/gh/XanaduAI/pennylane-cirq
.. image:: https://img.shields.io/codacy/grade/33d12f7d2d0644968087e33966ed904e.svg
:alt: Codacy grade
:target: https://app.codacy.com/app/XanaduAI/pennylane-cirq
.. image:: https://img.shields.io/readthedocs/pennylane-cirq.svg
:alt: Read the Docs
:target: https://pennylane-cirq.readthedocs.io
.. image:: https://img.shields.io/pypi/v/pennylane-cirq.svg
:alt: PyPI
:target: https://pypi.org/project/pennylane-cirq
`PennyLane <https://pennylane.readthedocs.io>`_ is a cross-platform Python library for quantum machine
learning, automatic differentiation, and optimization of hybrid quantum-classical computations.
`Cirq <https://github.com/quantumlib/Cirq>`_ is a Python library for writing, manipulating, and optimizing quantum circuits and running them against quantum computers and simulators.
This PennyLane plugin allows both the software and hardware backends of Cirq to be used as devices for PennyLane.
Features
========
* Access to Cirq's simulator backend via the ``cirq.simulator`` device
* Support for all PennyLane core functionality
Installation
============
PennyLane-Cirq requires both PennyLane and Cirq. It can be installed via ``pip``:
.. code-block:: bash
$ python -m pip install pennylane-cirq
Getting started
===============
Once PennyLane-Cirq is installed, the provided Cirq devices can be accessed straight
away in PennyLane.
You can instantiate these devices for PennyLane as follows:
.. code-block:: python
import pennylane as qml
dev = qml.device('cirq.simulator', wires=2, shots=100, analytic=True)
These devices can then be used just like other devices for the definition and evaluation of
QNodes within PennyLane. For more details, see the
`plugin usage guide <https://pennylane-cirq.readthedocs.io/en/latest/usage.html>`_ and refer
to the PennyLane documentation.
Contributing
============
We welcome contributions - simply fork the PennyLane-Cirq repository, and then make a
`pull request <https://help.github.com/articles/about-pull-requests/>`_ containing your contribution.
All contributors to PennyLane-Cirq will be listed as authors on the releases.
We also encourage bug reports, suggestions for new features and enhancements, and even links to cool
projects or applications built on PennyLane and Cirq.
Authors
=======
Johannes Jakob Meyer
If you are doing research using PennyLane, please cite our papers:
Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, and Nathan Killoran.
*PennyLane: Automatic differentiation of hybrid quantum-classical computations.* 2018.
`arXiv:1811.04968 <https://arxiv.org/abs/1811.04968>`_
Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran.
*Evaluating analytic gradients on quantum hardware.* 2018.
`Phys. Rev. A 99, 032331 <https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.032331>`_
Support
=======
- **Source Code:** https://github.com/XanaduAI/pennylane-cirq
- **Issue Tracker:** https://github.com/XanaduAI/pennylane-cirq/issues
If you are having issues, please let us know by posting the issue on our GitHub issue tracker.
License
=======
PennyLane-Cirq is **free** and **open source**, released under the Apache License, Version 2.0.

ToggleToolButton
================
A ToggleToolButton is similar to a :doc:`togglebutton` in functionality; however, it is meant to be used in a :doc:`toolbar`.
===========
Constructor
===========
The ToggleToolButton can be constructed using the following::
toggletoolbutton = Gtk.ToggleToolButton(label)
=======
Methods
=======
.. note::
The methods listed below only apply to this widget and those that inherit from it. For more methods, see the :doc:`toolbutton` page. For more information on widget hierarchy, see :doc:`hierarchytheory`.
To flip the active or inactive state, use the method::
toggletoolbutton.set_active(active)
When *active* is set to ``True``, the ToggleToolButton will appear depressed.
To retrieve the current state of the ToggleToolButton call::
toggletoolbutton.get_active()
=======
Signals
=======
The commonly used signals of a ToggleToolButton are::
"toggled" (toggletoolbutton)
When the user clicks on the ToggleToolButton and the state is changed to active or inactive, the ``"toggled"`` signal is emitted.
=======
Example
=======
To view an example for this widget, see the :doc:`toolbar` example.

.. _zend.validator.set.hostname:

Hostname
========

``Zend\Validate\Hostname`` allows you to validate hostnames against a set of
known specifications. It is possible to validate three different types of
hostnames: a *DNS* hostname (e.g. ``domain.com``), an IP address (e.g.
1.2.3.4), and a local hostname (e.g. localhost). By default, only *DNS*
hostnames are validated.

.. _zend.validator.set.hostname.options:

Supported options for Zend\Validate\Hostname
--------------------------------------------

The following options are supported for ``Zend\Validate\Hostname``:

- **allow**: Defines the type of hostname that may be used. See
  :ref:`Hostname types <zend.validator.set.hostname.types>` for details.

- **idn**: Defines whether *IDN* domains are allowed or not. This option
  defaults to ``TRUE``.

- **ip**: Allows you to define your own IP validator. This option defaults to
  a new instance of ``Zend\Validate\Ip``.

- **tld**: Defines whether *TLD*\ s are validated. This option defaults to
  ``TRUE``.

.. _zend.validator.set.hostname.basic:

Basic usage
-----------

**Basic usage**

.. code-block:: php
   :linenos:

   $validator = new Zend\Validate\Hostname();
   if ($validator->isValid($hostname)) {
       // hostname appears to be valid
   } else {
       // hostname is invalid; print the reasons
       foreach ($validator->getMessages() as $message) {
           echo "$message\n";
       }
   }

This validates the hostname ``$hostname`` and, on failure, will populate
``getMessages()`` with useful error messages.

.. _zend.validator.set.hostname.types:

Validating different types of hostnames
---------------------------------------

You may also want to validate IP addresses, local hostnames, or a combination
of all three allowed types. This can be done by passing a parameter to
``Zend\Validate\Hostname`` when you instantiate it. The parameter should be an
integer which determines which types of hostnames are allowed. The
``Zend\Validate\Hostname`` constants can be used for this.

The ``Zend\Validate\Hostname`` constants are: ``ALLOW_DNS`` to allow only
*DNS* hostnames, ``ALLOW_IP`` to allow IP addresses, ``ALLOW_LOCAL`` to allow
local hostnames, and ``ALLOW_ALL`` to allow all three types. To validate IP
addresses only, you can use the following example:

.. code-block:: php
   :linenos:

   $validator = new Zend\Validate\Hostname(Zend\Validate\Hostname::ALLOW_IP);
   if ($validator->isValid($hostname)) {
       // hostname appears to be valid
   } else {
       // hostname is invalid; print the reasons
       foreach ($validator->getMessages() as $message) {
           echo "$message\n";
       }
   }

Just as ``ALLOW_ALL`` accepts all types of hostnames, these types can be
combined to allow particular combinations. For example, to accept *DNS* and
local hostnames, instantiate your ``Zend\Validate\Hostname`` object as
follows:

.. code-block:: php
   :linenos:

   $validator = new Zend\Validate\Hostname(Zend\Validate\Hostname::ALLOW_DNS |
                                           Zend\Validate\Hostname::ALLOW_LOCAL);

.. _zend.validator.set.hostname.idn:

Validating International Domain Names
-------------------------------------

Some country code top-level domains (ccTLDs), such as 'de' (Germany), support
international characters in domain names. These are known as International
Domain Names (*IDN*). These domains can be matched by
``Zend\Validate\Hostname`` via the extended character sets that are used
during validation.

.. note::

   **IDN domains**

   To date, more than 50 ccTLDs support *IDN* domains.

Validating an *IDN* domain is as simple as using the standard Hostname
validator, since *IDN* validation is enabled by default. If you wish to
disable *IDN* validation, this can be done either by passing a parameter to
the ``Zend\Validate\Hostname`` constructor or via the ``setValidateIdn()``
method.

You can disable *IDN* validation by passing a second parameter to the
``Zend\Validate\Hostname`` constructor in the following way:

.. code-block:: php
   :linenos:

   $validator =
       new Zend\Validate\Hostname(
           array(
               'allow' => Zend\Validate\Hostname::ALLOW_DNS,
               'idn'   => false
           )
       );

Alternatively, you can pass either ``TRUE`` or ``FALSE`` to
``setValidateIdn()`` to enable or disable *IDN* validation. If you attempt to
validate an *IDN* hostname that is not currently supported, it is likely that
validation will fail if it contains any international characters. Where a
ccTLD file specifying the additional characters does not exist in
``Zend/Validate/Hostname``, a normal hostname validation is performed.

.. note::

   **IDN validation**

   Please note that *IDN*\ s are only validated if you allow *DNS* hostnames
   to be validated.

.. _zend.validator.set.hostname.tld:

Validating Top Level Domains
----------------------------

By default, a hostname is checked against a list of known *TLD*\ s. If this
functionality is not required, it can be disabled in much the same way as the
*IDN* support. You can disable *TLD* validation by passing a third parameter
to the ``Zend\Validate\Hostname`` constructor. In the following example, *IDN*
validation is supported via the second parameter.

.. code-block:: php
   :linenos:

   $validator =
       new Zend\Validate\Hostname(
           array(
               'allow' => Zend\Validate\Hostname::ALLOW_DNS,
               'idn'   => true,
               'tld'   => false
           )
       );

Alternatively, you can pass either ``TRUE`` or ``FALSE`` to
``setValidateTld()`` to enable or disable *TLD* validation.

.. note::

   **TLD validation**

   Please note that *TLD*\ s are only validated if you allow *DNS* hostnames
   to be validated.

A docutils backend for pybtex.
* Download: http://pypi.python.org/pypi/pybtex-docutils/#downloads
* Documentation: http://pybtex-docutils.readthedocs.org/
* Development: http://github.com/mcmtroffaes/pybtex-docutils/ |imagetravis| |imagecodecov|
.. |imagetravis| image:: https://travis-ci.org/mcmtroffaes/pybtex-docutils.png?branch=develop
:target: https://travis-ci.org/mcmtroffaes/pybtex-docutils
:alt: travis-ci
.. |imagecodecov| image:: https://codecov.io/gh/mcmtroffaes/pybtex-docutils/branch/develop/graph/badge.svg
:target: https://codecov.io/gh/mcmtroffaes/pybtex-docutils
:alt: codecov

======================
VanillaKD using KD_Lib
======================
To implement the most basic version of knowledge distillation from `Distilling the Knowledge in a Neural Network <https://arxiv.org/abs/1503.02531>`_
and plot the training losses:
.. code-block:: python
import torch
import torch.optim as optim
from torchvision import datasets, transforms
from KD_Lib.KD import VanillaKD
# Define datasets, dataloaders, models and optimizers
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"mnist_data",
train=True,
download=True,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
),
batch_size=32,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(
"mnist_data",
train=False,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
),
batch_size=32,
shuffle=True,
)
teacher_model = <your model>
student_model = <your model>
teacher_optimizer = optim.SGD(teacher_model.parameters(), 0.01)
student_optimizer = optim.SGD(student_model.parameters(), 0.01)
# Now, this is where KD_Lib comes into the picture
distiller = VanillaKD(teacher_model, student_model, train_loader, test_loader,
teacher_optimizer, student_optimizer)
distiller.train_teacher(epochs=5, plot_losses=True, save_model=True) # Train the teacher network
distiller.train_student(epochs=5, plot_losses=True, save_model=True) # Train the student network
distiller.evaluate(teacher=False) # Evaluate the student network
distiller.get_parameters() # A utility function to get the number of parameters in the teacher and the student network

Install Python GIS environment using YML configuration file
===========================================================
Installing various GIS packages in Python can sometimes be a bit tricky because there may be complex dependencies
that require specific versions of different packages, and even a specific version of Python itself.
The easiest way to get the installation working smoothly is to build a dedicated `Python environment <https://conda.io/docs/user-guide/tasks/manage-environments.html>`__
for GIS using conda, preferably installing packages from the same `conda channel <https://conda.io/docs/glossary.html#channels>`__.
Using a dedicated environment has the advantage that you can load the environment only when needed.
This way, it won't break any existing installations that you might have.
There are basically three steps required to install GIS packages and start using them in your operating system:
1. Download suitable environment file (.yml) to your operating system
2. Install packages and create a dedicated conda environment for ``gis``
3. Activate the environment and start using the packages
1. Download environment file for your operating system
------------------------------------------------------
A dedicated repository contains a list of *.yml* environment files created for different operating systems
(*work in progress*). Go to `<https://github.com/Automating-GIS-processes/install>`__ repository.
You should download the version that suits your operating system and then follow the instructions below.
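These environment files follow the standard conda environment specification. As a rough sketch, such a file has this shape; the ``gis`` name matches the environment used below, but the channel and package list here are purely illustrative, not the exact contents of the repository's files:

```yaml
name: gis            # environment name used with "activate gis"
channels:
  - conda-forge      # prefer a single channel for all packages
dependencies:
  - python=3.7
  - geopandas        # illustrative GIS package choices
  - rasterio
  - matplotlib
```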
2. Install GIS packages into dedicated environment
--------------------------------------------------
Once you have downloaded the yml file that fits your operating system, you can install the packages
using the following command:
.. code:: bash
$ conda env create -f gis-win-10.yml
.. note::
Solving the environment and installing all the packages might take a surprisingly long time, so be patient.
3. Activate the GIS environment and start doing GIS
---------------------------------------------------
Once the installations have been done, you are ready to start using the GIS packages by activating the environment.
This can be done by running the following command from the command prompt / terminal:
.. code:: bash
$ source activate gis

=========
ChangeLog
=========
v0.1.22
=======
* Added workaround to "string:" sources to be able to load external
resources by forcing them to use the "pkg:" driver
v0.1.21
=======
* Removed `distribute` dependency
* First tagged release

.. swee'pea documentation master file, created by
sphinx-quickstart on Mon Oct 12 21:42:14 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
swee'pea
========
Fast, lightweight Python database toolkit for SQLite, built with Cython.
Like its cousin `peewee <http://docs.peewee-orm.com/>`_, ``swee'pea`` is
composed of a database connection abstraction and query-building / execution
APIs. This project is a pet project of mine, so tailor expectations
accordingly.
Features:
* Implemented in Cython for performance and to expose advanced features of the
SQLite database library.
* Composable and consistent APIs for building queries using Python.
* Layered APIs allow you to work as close to the database as you want.
* No magic.
* No bullshit.
Issue tracker and code are hosted on GitHub: https://github.com/coleifer/sweepea.
Documentation hosted on RT**F**D: https://sweepea.readthedocs.io/
.. image:: http://media.charlesleifer.com/blog/photos/sweepea-fast.png
Contents:
.. toctree::
:maxdepth: 2
:glob:
installation
api
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

fastapi\_contrib.tracing package
================================
Submodules
----------
fastapi\_contrib.tracing.middlewares module
-------------------------------------------
.. automodule:: fastapi_contrib.tracing.middlewares
:members:
:undoc-members:
:show-inheritance:
fastapi\_contrib.tracing.utils module
-------------------------------------
.. automodule:: fastapi_contrib.tracing.utils
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: fastapi_contrib.tracing
:members:
:undoc-members:
:show-inheritance:

********************
Access control lists
********************
Besides the permissions system explained in :doc:`../chapters/permissions`,
Mayan EDMS provides per object permission granting. This feature is used to
grant a permission to a role, but this permission can only be executed for a
limited number of objects (documents, folders, tags) instead of being
effective system-wide.
.. blockdiag::
blockdiag {
default_shape = roundedbox
document [ label = 'Document' ];
role [ label = 'Role' ];
permission [ label = 'Permission' ];
role -> permission -> document;
}
Example:
.. blockdiag::
blockdiag {
default_shape = roundedbox
document [ label = '2015 Payroll report.txt', width=200 ];
role [ label = 'Accountants' ];
permission [ label = 'View document' ];
role -> permission -> document;
}
In this scenario only users in groups belonging to the ``Accountants`` role
would be able to view the ``2015 Payroll report.txt`` document.
Inherited access control
========================
It is also possible to grant a permission to a role for a specific document
type (:doc:`../chapters/document_types`). Under this scheme all users in
groups belonging to that role will inherit that permission for all documents
of that type.
.. blockdiag::
blockdiag {
default_shape = roundedbox
document_type [ label = 'Document type' ];
role [ label = 'Role' ];
permission [ label = 'Permission' ];
documents [shape = "note", stacked];
role -> permission -> document_type ;
document_type -> documents [folded, label = "inherit" ];
}
Example:
.. blockdiag::
blockdiag {
default_shape = roundedbox
document_type [ label = 'Payroll reports', width=200 ];
role [ label = 'Accountants' ];
permission [ label = 'View document' ];
documents [shape = "note", stacked, label="payroll_report*.pdf" ];
role -> permission -> document_type ;
document_type -> documents [folded, label = "inherit" ];
}
The role ``Accountants`` is given the permission ``document view`` for the
document type ``Payroll reports``. Now all users in groups belonging to the
``Accountants`` role can view all documents of the type ``Payroll reports``
without needing to have that permission granted for each individual
``Payroll reports`` document.
If access control for ``Payroll reports`` documents needs to be updated, it
only needs to be done for the document type and not for each individual
document of the type ``Payroll reports``.

SmartNoise
==========
**Contents:**
.. toctree::
overview
api-reference/index

=======
Credits
=======
Development Leads
-----------------
- `Greg Tucker <https://github.com/gregtucker>`_
- `Nicole Gasparini <https://github.com/nicgaspar>`_
- `Erkan Istanbulluoglu <https://github.com/erkanistan>`_
- `Daniel Hobley <https://github.com/SiccarPoint>`_
- `Sai S. Nudurupati <https://github.com/saisiddu>`_
- `Jordan Adams <https://github.com/jadams15>`_
- `Eric Hutton <https://github.com/mcflugen>`_
- `Jenny Knuth <https://github.com/jennyknuth>`_
- `Katy Barnhart <https://github.com/kbarnhart>`_
- `Margaux Mouchene <https://github.com/margauxmouchene>`_
- `Christina Bandaragoda <https://github.com/ChristinaB>`_
- `Nathan Lyons <https://github.com/nathanlyons>`_
Contributors
------------
- `Charlie Shobe <https://github.com/cmshobe>`_
- `Ronda Strauch <https://github.com/RondaStrauch>`_
- `David Litwin <https://github.com/DavidLitwin>`_
- `Rachel Glade <https://github.com/Glader011235>`_
- `Giuseppecipolla95 <https://github.com/Giuseppecipolla95>`_
- `Amanda Manaster <https://github.com/amanaster2>`_
- `elbeejay <https://github.com/elbeejay>`_
- `Allison Pfeiffer <https://github.com/pfeiffea>`_
- `alangston <https://github.com/alangston>`_
- `Kristen Thyng <https://github.com/kthyng>`_
- `Dylan Ward <https://github.com/ddoubleprime>`_
- `Benjamin Campforts <https://github.com/BCampforts>`_
| 37.027778 | 61 | 0.706677 |
dbb574f240e3a2edf0290fc45932a843cfad6b30 | 169 | rst | reStructuredText | user/bsps/bsps-microblaze.rst | alastairtree/rtems-docs | 04bf6a768489b499d547a5a7e52d56baa173f0aa | [
"BSD-2-Clause"
] | 6 | 2019-06-09T10:39:54.000Z | 2021-11-01T13:39:25.000Z | user/bsps/bsps-microblaze.rst | alastairtree/rtems-docs | 04bf6a768489b499d547a5a7e52d56baa173f0aa | [
"BSD-2-Clause"
] | 2 | 2018-11-13T17:47:38.000Z | 2018-11-18T16:04:45.000Z | user/bsps/bsps-microblaze.rst | alastairtree/rtems-docs | 04bf6a768489b499d547a5a7e52d56baa173f0aa | [
"BSD-2-Clause"
] | 18 | 2018-11-13T17:14:36.000Z | 2021-11-30T16:09:23.000Z | .. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2018 embedded brains GmbH
microblaze (Microblaze)
***********************
There are no Microblaze BSPs yet.

========
Overview
========
.. start-badges
.. list-table::
:stub-columns: 1
* - docs
- |docs|
* - tests
- | |travis| |appveyor| |requires|
| |codecov|
| |landscape| |scrutinizer| |codacy| |codeclimate|
* - package
- |version| |downloads| |wheel| |supported-versions| |supported-implementations|
.. |docs| image:: https://readthedocs.org/projects/tam/badge/?style=flat
:target: https://readthedocs.org/projects/tam
:alt: Documentation Status
.. |travis| image:: https://travis-ci.org/luizirber/tam.svg?branch=master
:alt: Travis-CI Build Status
:target: https://travis-ci.org/luizirber/tam
.. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/luizirber/tam?branch=master&svg=true
:alt: AppVeyor Build Status
:target: https://ci.appveyor.com/project/luizirber/tam
.. |requires| image:: https://requires.io/github/luizirber/tam/requirements.svg?branch=master
:alt: Requirements Status
:target: https://requires.io/github/luizirber/tam/requirements/?branch=master
.. |codecov| image:: https://codecov.io/github/luizirber/tam/coverage.svg?branch=master
:alt: Coverage Status
:target: https://codecov.io/github/luizirber/tam
.. |landscape| image:: https://landscape.io/github/luizirber/tam/master/landscape.svg?style=flat
:target: https://landscape.io/github/luizirber/tam/master
:alt: Code Quality Status
.. |codacy| image:: https://img.shields.io/codacy/7c5f7a5118874cf089833ae08ef15d23.svg?style=flat
:target: https://www.codacy.com/app/luizirber/tam
:alt: Codacy Code Quality Status
.. |codeclimate| image:: https://codeclimate.com/github/luizirber/tam/badges/gpa.svg
:target: https://codeclimate.com/github/luizirber/tam
:alt: CodeClimate Quality Status
.. |version| image:: https://img.shields.io/pypi/v/tam.svg?style=flat
:alt: PyPI Package latest release
:target: https://pypi.python.org/pypi/tam
.. |downloads| image:: https://img.shields.io/pypi/dm/tam.svg?style=flat
:alt: PyPI Package monthly downloads
:target: https://pypi.python.org/pypi/tam
.. |wheel| image:: https://img.shields.io/pypi/wheel/tam.svg?style=flat
:alt: PyPI Wheel
:target: https://pypi.python.org/pypi/tam
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/tam.svg?style=flat
:alt: Supported versions
:target: https://pypi.python.org/pypi/tam
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/tam.svg?style=flat
:alt: Supported implementations
:target: https://pypi.python.org/pypi/tam
.. |scrutinizer| image:: https://img.shields.io/scrutinizer/g/luizirber/tam/master.svg?style=flat
:alt: Scrutinizer Status
:target: https://scrutinizer-ci.com/g/luizirber/tam/
.. end-badges
Tile Assembly Model modules
* Free software: BSD license
Installation
============
::
pip install tam
Documentation
=============
https://tam.readthedocs.org/
Development
===========
To run all the tests run::
tox
Note: to combine the coverage data from all the tox environments, run:
.. list-table::
:widths: 10 90
:stub-columns: 1
- - Windows
- ::
set PYTEST_ADDOPTS=--cov-append
tox
- - Other
- ::
PYTEST_ADDOPTS=--cov-append tox

SWEET_CURRENT_LIST
==================
| 12.666667 | 18 | 0.473684 |
78ad56a416e1e84a30b91bc402d0a65efa4597a5 | 9,170 | rst | reStructuredText | docs/administration.rst | gillesfabio/django-flickrsets | 953481fde4029d4d613a5994bdbe987f731fe033 | [
"BSD-3-Clause"
] | 1 | 2015-06-24T01:46:02.000Z | 2015-06-24T01:46:02.000Z | docs/administration.rst | gillesfabio/django-flickrsets | 953481fde4029d4d613a5994bdbe987f731fe033 | [
"BSD-3-Clause"
] | null | null | null | docs/administration.rst | gillesfabio/django-flickrsets | 953481fde4029d4d613a5994bdbe987f731fe033 | [
"BSD-3-Clause"
] | null | null | null | ==============
Administration
==============
django.contrib.admin
====================
Go into the admin and add the Flickr sets you want to synchronize by adding new
*registered set* objects.
For each set, there are three fields (the first two are required):
* ``flickr_id``: The Flickr set ID
* ``title``: The Flickr set Title (just to identify the set, not displayed)
* ``enabled``: enable or disable the set for synchronization
Never ever add/delete/change *Person*, *Photoset*, *Photo* and *Tag* objects.
They are managed by the application.
Application's CLI
=================
`Django Flickrsets`_ adds an ``fsets`` command to Django's management command list.
Command syntax::
python manage.py fsets [command]
This command provides subcommands to help you manage your Flickr sets:
* ``add``: registers a new Flickr set for synchronization
* ``remove``: removes (deletes) a registered Flickr set
* ``list``: lists all registered Flickr sets
* ``disable``: disables an enabled Flickr set
* ``enable``: enables a disabled Flickr set
* ``sync``: synchronizes registered Flickr sets with Flickr
* ``flush``: flushes existing tables
``fsets add``
-------------
The ``fsets add`` command lists all public sets of the Flickr User previously
defined in ``settings.FLICKRSETS_FLICKR_USER_ID``.
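The user ID that ``fsets add`` reads could be configured in the project's ``settings.py``; the setting name comes from this document, but the ID value below is a made-up placeholder:

```python
# settings.py -- the Flickr NSID value below is a hypothetical placeholder
FLICKRSETS_FLICKR_USER_ID = '12345678@N00'
```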
Example::
python manage.py fsets add
+----+--------------------------------------+-------------------+--------+
| ID | Title | Flickr ID | Status |
+----+--------------------------------------+-------------------+--------+
| 0 | Misc | 72157623007721343 | REMOTE |
| 1 | Flowers | 72157622950744561 | REMOTE |
| 2 | Neige 2009 | 72157622911766549 | REMOTE |
| 3 | Monaco (2009-08) | 72157622035597150 | REMOTE |
| 4 | Ladybug | 72157621969558560 | REMOTE |
| 5 | Piscine Saint-Paul (2009-07) | 72157621969224974 | REMOTE |
| 6 | Parc Phoenix (2009-05) | 72157621965519776 | REMOTE |
| 7 | Huguette's Garden | 72157621840015339 | REMOTE |
| 8 | Jardin Exotique de Monaco (2009-06) | 72157621961569334 | REMOTE |
| 9 | Fête des mères 2009 | 72157621961447878 | REMOTE |
| 10 | Sainte-Pétronille 2009 | 72157621961311960 | REMOTE |
| 11 | Titouille | 72157617166890176 | REMOTE |
| 12 | Colors | 72157617075048293 | REMOTE |
| 13 | Black and White | 72157617166578442 | REMOTE |
| 14 | Clouds | 72157617074848637 | REMOTE |
| 15 | Saint-Laurent-du-Var | 72157617166199280 | REMOTE |
| 16 | Nice #4 (2009-02) | 72157614259114644 | REMOTE |
| 17 | Nice #3 (2009-02) | 72157614181081167 | REMOTE |
| 18 | Nice #2 (2009-02) | 72157613842687348 | REMOTE |
| 19 | Noël 2008 | 72157611579724263 | REMOTE |
| 20 | Sushi #1 (2008-07) | 72157606328118682 | REMOTE |
| 21 | 2008-07-14 | 72157606331530077 | REMOTE |
| 22 | Airbus A320, Nice | 72157606176606116 | REMOTE |
| 23 | Sainte-Pétronille 2008 | 72157606180078995 | REMOTE |
| 24 | Cannes #1 (2008-07) | 72157606328349386 | REMOTE |
| 25 | Gourdon (2008-03) | 72157606171366781 | REMOTE |
| 26 | Grotte de Baume Obscure (2008-07) | 72157606167343310 | REMOTE |
| 27 | Screenshots | 72157604610567630 | REMOTE |
| 28 | Saint-Jean-Cap-Ferrat (2008-03) | 72157604162678938 | REMOTE |
| 29 | Marineland (2008-02) | 72157604164536775 | REMOTE |
| 30 | Nice #1 (2008-03) | 72157604159570014 | REMOTE |
| 31 | Zoo, Saint-Jean-Cap-Ferrat (2008-02) | 72157604162849705 | REMOTE |
| 32 | Crèche 2007 | 72157604151543645 | REMOTE |
| 33 | Juan-les-Pins (2007-11) | 72157603084586221 | REMOTE |
| 34 | 25 ans | 72157602826143640 | REMOTE |
| 35 | Eze (2007-10) | 72157602821813034 | REMOTE |
| 36 | Parc Phoenix (2007-10) | 72157602454948487 | REMOTE |
| 37 | Boréon (2007-09) | 72157602453309541 | REMOTE |
| 38 | Animals | 72157600572928124 | REMOTE |
| 39 | Arrière-pays (2007-05) | 72157600572655571 | REMOTE |
| 40 | La Garoupe (2007-01) | 72157600572355520 | REMOTE |
+----+--------------------------------------+-------------------+--------+
Which Flickr set(s) you want to add? 1 12 38
Added set "Flowers" (72157622950744561).
Added set "Colors" (72157617075048293).
Added set "Animals" (72157600572928124).
``fsets remove``
----------------
The ``fsets remove`` command removes the given registered Flickr sets.
Example::
python manage.py fsets remove
+----+---------+-------------------+---------+
| ID | Title | Flickr ID | Status |
+----+---------+-------------------+---------+
| 0 | Misc | 72157623007721343 | ENABLED |
| 1 | Clouds | 72157617074848637 | ENABLED |
| 2 | Flowers | 72157622950744561 | ENABLED |
| 3 | Colors | 72157617075048293 | ENABLED |
| 4 | Animals | 72157600572928124 | ENABLED |
+----+---------+-------------------+---------+
Which Flickr set(s) you want to remove? 3
Removed set Colors (72157617075048293).
``fsets list``
--------------
The ``fsets list`` command lists all registered Flickr sets.
Example::
python manage.py fsets list
+----+---------+-------------------+---------+
| ID | Title | Flickr ID | Status |
+----+---------+-------------------+---------+
| 0 | Misc | 72157623007721343 | ENABLED |
| 1 | Clouds | 72157617074848637 | ENABLED |
| 2 | Flowers | 72157622950744561 | ENABLED |
| 3 | Animals | 72157600572928124 | ENABLED |
+----+---------+-------------------+---------+
``fsets disable``
-----------------
The ``fsets disable`` command disables synchronization for the given Flickr sets.
Example::
python manage.py fsets disable
+----+---------+-------------------+---------+
| ID | Title | Flickr ID | Status |
+----+---------+-------------------+---------+
| 0 | Misc | 72157623007721343 | ENABLED |
| 1 | Clouds | 72157617074848637 | ENABLED |
| 2 | Flowers | 72157622950744561 | ENABLED |
| 3 | Animals | 72157600572928124 | ENABLED |
+----+---------+-------------------+---------+
Which Flickr set(s) you want to disable? 3
Set Animals (72157600572928124) is disabled.
python manage.py fsets list
+----+---------+-------------------+----------+
| ID | Title | Flickr ID | Status |
+----+---------+-------------------+----------+
| 0 | Misc | 72157623007721343 | ENABLED |
| 1 | Clouds | 72157617074848637 | ENABLED |
| 2 | Flowers | 72157622950744561 | ENABLED |
| 3 | Animals | 72157600572928124 | DISABLED |
+----+---------+-------------------+----------+
``fsets enable``
----------------
The ``fsets enable`` command enables synchronization for the given Flickr sets.
Example::
python manage.py fsets enable
+----+---------+-------------------+----------+
| ID | Title | Flickr ID | Status |
+----+---------+-------------------+----------+
| 0 | Animals | 72157600572928124 | DISABLED |
+----+---------+-------------------+----------+
Which Flickr set(s) you want to enable? 0
Set Animals (72157600572928124) is enabled.
python manage.py fsets list
+----+---------+-------------------+---------+
| ID | Title | Flickr ID | Status |
+----+---------+-------------------+---------+
| 0 | Misc | 72157623007721343 | ENABLED |
| 1 | Clouds | 72157617074848637 | ENABLED |
| 2 | Flowers | 72157622950744561 | ENABLED |
| 3 | Animals | 72157600572928124 | ENABLED |
+----+---------+-------------------+---------+
``fsets sync``
--------------
The ``fsets sync`` command runs synchronization for enabled Flickr sets.
Example::
python manage.py fsets sync
``fsets flush``
---------------
The ``fsets flush`` command flushes (resets) the existing tables (but does not
touch the registered sets).
Example::
python manage.py fsets flush
2010-06-23 09:15:25,195 [INFO] -- Django Flickrsets -- Flushed table: flickrsets_person
2010-06-23 09:15:25,197 [INFO] -- Django Flickrsets -- Flushed table: flickrsets_photo
2010-06-23 09:15:25,198 [INFO] -- Django Flickrsets -- Flushed table: flickrsets_photoset
2010-06-23 09:15:25,198 [INFO] -- Django Flickrsets -- Flushed table: flickrsets_photo_tag
.. _Django Flickrsets: http://github.com/gillesfabio/django-flickrsets
| 41.872146 | 94 | 0.501963 |
51eb534bc97996747dd4fac7417e747a8a81d6c4 | 693 | rst | reStructuredText | docs/started/install.rst | dg46/pgmpy | caea6ef7c914464736818fb185a1d395937ed52f | [
"MIT"
] | 2,144 | 2015-01-05T21:25:04.000Z | 2022-03-31T08:24:15.000Z | docs/started/install.rst | vishalbelsare/pgmpy | 24279929a28082ea994c52f3d165ca63fc56b02b | [
"MIT"
] | 1,181 | 2015-01-04T18:19:44.000Z | 2022-03-30T17:21:19.000Z | docs/started/install.rst | vishalbelsare/pgmpy | 24279929a28082ea994c52f3d165ca63fc56b02b | [
"MIT"
] | 777 | 2015-01-01T11:13:27.000Z | 2022-03-28T12:31:57.000Z | Installation
============
pgmpy requires Python 3.7+. pgmpy is hosted on both pypi and anconda. For installation through pypi, use the command:
.. code-block:: bash
pip install pgmpy
For installation through anaconda, use the command:
.. code-block:: bash
conda install -c ankurankan pgmpy
For installing the latest `dev` branch from github, use the command:
.. code-block:: bash
pip install git+https://github.com/pgmpy/pgmpy.git@dev
Requirements
------------
If installing manually, the following non-optional dependencies needs to be installed:
* Python 3.7+
* numpy
* scipy
* scikit-learn
* pandas
* pyparsing
* pytorch
* statsmodels
* tqdm
* joblib
| 17.769231 | 117 | 0.698413 |
ed242601f32c2c8aaea09d459601de72f5ff2e5d | 503 | rst | reStructuredText | src/sphinx/Detailed-Topics/index.rst | sullivan-/sbt | 5dc671c7f814c68ef89ff538d09be32fd63198ab | [
"MIT",
"Apache-2.0",
"BSD-3-Clause"
] | 19 | 2015-01-14T06:00:27.000Z | 2021-02-02T13:02:17.000Z | src/sphinx/Detailed-Topics/index.rst | trafficland/sbt | 68c69df2ab376ef9da321727d50f014c3727c455 | [
"MIT",
"Apache-2.0",
"BSD-3-Clause"
] | 2 | 2015-04-17T17:12:16.000Z | 2015-09-12T23:43:52.000Z | src/sphinx/Detailed-Topics/index.rst | trafficland/sbt | 68c69df2ab376ef9da321727d50f014c3727c455 | [
"MIT",
"Apache-2.0",
"BSD-3-Clause"
] | 14 | 2015-03-31T15:16:31.000Z | 2021-01-28T08:13:34.000Z | ===============
Detailed Topics
===============
This part of the documentation has pages documenting particular sbt topics in detail.
Before reading anything in here, you will need the information in the
:doc:`Getting Started Guide </Getting-Started/Welcome>` as a foundation.
Other resources include the :doc:`Examples </Examples/index>` and
:doc:`extending sbt </Extending/index>` areas on the wiki, and the
`API Documentation <../../api/index.html>`_
.. toctree::
:maxdepth: 2
:glob:
*
| 27.944444 | 85 | 0.691849 |
9a72cf4bcb477dd16dcef62dde802946f6e7ac52 | 3,336 | rst | reStructuredText | src/ko/variables/arguments1b.rst | hadi-f90/reeborg-docs | 3446e122616c5baea13ea678f368bc307e64720d | [
"CC0-1.0"
] | 2 | 2016-01-08T15:45:44.000Z | 2018-05-30T08:16:53.000Z | src/ko/variables/arguments1b.rst | hadi-f90/reeborg-docs | 3446e122616c5baea13ea678f368bc307e64720d | [
"CC0-1.0"
] | 8 | 2016-01-02T20:38:10.000Z | 2021-08-05T04:37:28.000Z | src/ko/variables/arguments1b.rst | hadi-f90/reeborg-docs | 3446e122616c5baea13ea678f368bc307e64720d | [
"CC0-1.0"
] | 11 | 2016-01-03T09:04:04.000Z | 2021-08-05T02:11:39.000Z | Argument par défaut
===================
Vous savez que, lorsqu'il n'y a qu'un seul type d'objets dans un monde
donné, il n'est pas nécessaire de spécifier le type d'objet pour les
fonctions ``prend()`` et ``depose()``. Ceci est un exemple de
comportement par défaut d'une fonction. Dans le cas de ces deux fonctions
spécifiques au Monde de Reeborg, le code requis pour avoir un tel
comportement par défaut dépend d'une façon assez compliquée de l'état
particulier du monde. Pour les cas plus habituels, Python offre une
façon standard de spécifier un comportement par défaut.
Par exemple, supposons que nous voulions définir une fonction ``tourne()``
qui fera en sorte que Reeborg fasse un simple virage à gauche si aucun
argument n'est spécifié, mais peut faire plusieurs virages si on spécifie un
argument, comme par exemple ``tourne(3)``. On peut définir une telle
fonction de la façon suivante::
def tourne(nombre=1):
for _ in range(nombre):
tourne_a_gauche()
.. topic:: Vérifiez ceci!
Écrivez un programme qui utilise une telle définition. Vérifiez que
l'invocation ``tourne()`` donne le même résultat que
``tourne(1)`` ainsi que ``tourne(n=1)``.
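The check suggested above can also be run outside Reeborg's World by swapping in a counting stub for ``tourne_a_gauche()``; the stub below is purely illustrative and is not part of Reeborg's API:

```python
# Hypothetical stand-in for Reeborg's tourne_a_gauche():
# it simply counts how many quarter turns were requested.
turns = 0

def tourne_a_gauche():
    global turns
    turns += 1

def tourne(nombre=1):
    for _ in range(nombre):
        tourne_a_gauche()

tourne()          # no argument: one turn, thanks to the default value
tourne(3)         # three turns
tourne(nombre=1)  # same effect as tourne() and tourne(1)
print(turns)      # 5
```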
For more advanced readers
-------------------------
There are two kinds of function arguments:
**positional** arguments, which have no default value assigned, and
**keyword** arguments, which do have a default value assigned.
Positional arguments must appear first and are required when a
function is called. Keyword arguments are not necessarily required
when a function is called.
Once the positional arguments of a function call have been matched,
the remaining arguments, if they are supplied simply as values without
a keyword, are also matched by position. If the keywords are
specified, the arguments can be given in any order.
Here are a few examples::
    def ma_fonction(pos_1, pos_2, mot_1='a', mot_2=3, mot_3='bonjour'):
        pass  # code block here

    # calls:
    ma_fonction(2)  # TypeError: 2 positional arguments are required
    ma_fonction(2, 3)
    # pos_1 gets the value 2; pos_2 gets the value 3;
    # the keyword arguments get their default values
    ma_fonction(2, 3, 4)
    # pos_1 gets the value 2; pos_2 gets the value 3;
    # mot_1 gets the value 4;
    # the other keyword arguments get their default values
    ma_fonction(2, 3, mot_2=4)
    # pos_1 gets the value 2; pos_2 gets the value 3;
    # mot_2 gets the value 4;
    # the other keyword arguments get their default values
    ma_fonction(2, 3, mot_2=4, mot_1=5)
    # pos_1 gets the value 2; pos_2 gets the value 3;
    # mot_1 gets the value 5; mot_2 gets the value 4;
    # mot_3 keeps its default value ('bonjour')
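These calls can be turned into a runnable check by having ``ma_fonction`` return its arguments instead of doing nothing; this sketch is added for illustration and is not part of the original tutorial:

```python
def ma_fonction(pos_1, pos_2, mot_1='a', mot_2=3, mot_3='bonjour'):
    # Return the received values so each call can be inspected.
    return (pos_1, pos_2, mot_1, mot_2, mot_3)

print(ma_fonction(2, 3))                    # (2, 3, 'a', 3, 'bonjour')
print(ma_fonction(2, 3, 4))                 # (2, 3, 4, 3, 'bonjour')
print(ma_fonction(2, 3, mot_2=4))           # (2, 3, 'a', 4, 'bonjour')
print(ma_fonction(2, 3, mot_2=4, mot_1=5))  # (2, 3, 5, 4, 'bonjour')

try:
    ma_fonction(2)  # missing pos_2
except TypeError as exc:
    print('TypeError:', exc)  # a runtime error, not a syntax error
```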
For very advanced programmers
-----------------------------
It is possible to require keyword arguments, without giving them a
default value, by using the ``*`` symbol as the argument that precedes
the keyword arguments **in the function definition**.
.. code-block:: py3

    def ma_fonction(*, a):
        print(a)

    ma_fonction(a=3)  # will print 3
    ma_fonction(3)    # will raise a TypeError
| 37.483146 | 79 | 0.714329 |
ff29f9d7f7ab84fe5a74521466db1ef2a2ad6c79 | 342 | rst | reStructuredText | 2020-09-15/CRC020199.1_README.rst | Data-to-Knowledge/water-use-advice | b0541f47ee6bf2aff4774a6c9fdcb034aa74586e | [
"Apache-2.0"
] | null | null | null | 2020-09-15/CRC020199.1_README.rst | Data-to-Knowledge/water-use-advice | b0541f47ee6bf2aff4774a6c9fdcb034aa74586e | [
"Apache-2.0"
] | null | null | null | 2020-09-15/CRC020199.1_README.rst | Data-to-Knowledge/water-use-advice | b0541f47ee6bf2aff4774a6c9fdcb034aa74586e | [
"Apache-2.0"
] | null | null | null | CRC020199.1
==================================
The negative values seem to be simple calibration issues with the meter and have been set to zero. All negative values are only slightly negative; the minimum value is -0.7.
There are three sets of days with missing data: 2020-06-07 to 2020-06-09, 2020-05-03 to 2020-05-05, and 2020-10-18 to 2020-10-20.
| 57 | 164 | 0.687135 |
287bed1f3e1dd897eef2486788d2b1ffdc32d294 | 7,557 | rst | reStructuredText | README.rst | s1n4/django-categories | 6af6d815e214bddbaac572c19e9c738ef1f752d6 | [
"Apache-2.0"
] | 1 | 2019-02-06T14:23:55.000Z | 2019-02-06T14:23:55.000Z | README.rst | s1n4/django-categories | 6af6d815e214bddbaac572c19e9c738ef1f752d6 | [
"Apache-2.0"
] | null | null | null | README.rst | s1n4/django-categories | 6af6d815e214bddbaac572c19e9c738ef1f752d6 | [
"Apache-2.0"
] | null | null | null | =================
Django Categories
=================
|BUILD|_
.. |BUILD| image::
https://secure.travis-ci.org/callowayproject/django-categories.png?branch=master
.. _BUILD: http://travis-ci.org/#!/callowayproject/django-categories
Django Categories grew out of our need to provide a basic hierarchical taxonomy management system that multiple applications could use independently or in concert.
As a news site, our stories, photos, and other content get divided into "sections" and we wanted all the apps to use the same set of sections. As our needs grew, the Django Categories grew in the functionality it gave to category handling within web pages.
New in 1.2
==========
* Support for Django 1.5
* Dropped support for Django 1.2
* Dropped caching within the app
* Removed the old settings compatibility layer. *Must use new dictionary-based settings!*
New in 1.1
==========
* Fixed a cosmetic bug in the Django 1.4 admin. Action checkboxes now only appear once.
* Template tags are refactored to allow easy use of any model derived from ``CategoryBase``.
* Improved test suite.
* Improved some of the documentation.
Upgrade path from 1.0.2 to 1.0.3
================================
Due to some data corruption with 1.0.2 migrations, a partially new set of migrations has been written in 1.0.3, and this will cause issues for users on version 1.0.2. There is also an issue with South version 0.7.4. South version 0.7.3, or 0.7.5 or greater, works fine.
For a clean upgrade from 1.0.2 to 1.0.3 you have to delete the previous version of the 0010 migration (named ``0010_changed_category_relation.py``) and fake the new 0010, 0011 and 0012 migrations.
Therefore, after installing the new version of django-categories, for each project to upgrade you should execute the following commands in order::
python manage.py migrate categories 0010_add_field_categoryrelation_category --fake --delete-ghost-migrations
python manage.py migrate categories 0011_move_category_fks --fake
python manage.py migrate categories 0012_remove_story_field --fake
python manage.py migrate categories 0013_null_category_id
This way both the exact database layout and migration history is restored between the two installation paths (new installation from 1.0.3 and upgrade from 1.0.2 to 1.0.3).
The last migration is needed to set the correct null value for the ``category_id`` field when upgrading from 1.0.2, while it is a no-op for 1.0.3.
New in 1.0
==========
**Abstract Base Class for generic hierarchical category models**
When you want a multiple types of categories and don't want them all part of the same model, you can now easily create new models by subclassing ``CategoryBase``. You can also add additional metadata as necessary.
Your model's admin can subclass ``CategoryBaseAdminForm`` and ``CategoryBaseAdmin`` to get the hierarchical management in the admin.
See the docs for more information.
**Increased the default caching time on views**
The default setting for ``CACHE_VIEW_LENGTH`` was ``0``, which means it would tell the browser to *never* cache the page. It is now ``600``, which is the default for `CACHE_MIDDLEWARE_SECONDS <https://docs.djangoproject.com/en/1.3/ref/settings/#cache-middleware-seconds>`_
**Updated for use with Django-MPTT 0.5**
Just a few tweaks.
**Initial compatibility with Django 1.4**
More is coming, but at least it works.
**Slug transliteration for non-ASCII characters**
A new setting, ``SLUG_TRANSLITERATOR``, allows you to specify a function for converting non-ASCII characters to ASCII before slugification. Works great with `Unidecode <http://pypi.python.org/pypi/Unidecode>`_.
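A sketch of what that might look like in ``settings.py``; whether ``SLUG_TRANSLITERATOR`` belongs inside the ``CATEGORIES_SETTINGS`` dictionary is an assumption to verify against the docs for your version, and the stand-in callable is illustrative (Unidecode's ``unidecode`` function would normally go here):

```python
# settings.py -- sketch; setting placement is an assumption, and the
# lambda is a trivial stand-in for unidecode.unidecode
CATEGORIES_SETTINGS = {
    'SLUG_TRANSLITERATOR': lambda s: s.encode('ascii', 'ignore').decode('ascii'),
}
```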
Updated in 0.8.8
================
The `editor` app was placed inside the categories app, `categories.editor`, to avoid any name clashes.
Upgrading
---------
A setting change is all that is needed::
INSTALLED_APPS = (
'categories',
'categories.editor',
)
New in 0.8
==========
**Added an active field**
As an alternative to deleting categories, you can make them inactive.
Also added a manager method ``active()`` to query only the active categories and added Admin Actions to activate or deactivate an item.
**Improved import**
Previously the import saved items in the reverse order to the imported file. Now them import in order.
New in 0.7
==========
**Added South migrations**
All the previous SQL scripts have been converted to South migrations.
**Can add category fields via management command (and South)**
The new ability to set up category relationships in ``settings.py`` works fine if you are starting from scratch, but not if you want to add it after you have set up the database. Now there is a management command to make sure all the correct fields and tables are created.
**Added an alternate_url field**
This allows the specification of a URL that is not derived from the category hierarchy.
**New JAVASCRIPT_URL setting**
This allows some customization of the ``genericcollections.js`` file.
**New get_latest_objects_by_category template tag**
This will do pretty much what it says.
New in 0.6
==========
**Class-based views**
Works great with Django 1.3 or `django-cbv <http://pypi.python.org/pypi/django-cbv>`_
**New Settings infrastructure**
To be more like the Django project, we are migrating from individual CATEGORIES_* settings to a dictionary named ``CATEGORIES_SETTINGS``\ . Use of the previous settings will still work but will generate a ``DeprecationError``\ .
**The tree's initially expanded state is now configurable**
``EDITOR_TREE_INITIAL_STATE`` allows a ``collapsed`` or ``expanded`` value. The default is ``collapsed``\ .
**Optional Thumbnail field**
Have a thumbnail for each category!
**"Categorize" models in settings**
Now you don't have to modify the model to add a ``Category`` relationship. Use the new settings to "wire" categories to different models.
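A hedged sketch of such wiring; the ``FK_REGISTRY`` key and the ``blog.Post`` model label are assumptions used for illustration only and should be checked against the settings documentation:

```python
# settings.py -- sketch only; the key name and model label are assumptions
CATEGORIES_SETTINGS = {
    'FK_REGISTRY': {
        'blog.Post': 'category',  # add a "category" ForeignKey to blog.Post
    },
}
```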
Features of the project
=======================
**Multiple trees, or a single tree**
You can treat all the records as a single tree, shared by all the applications. You can also treat each of the top level records as individual trees, for different apps or uses.
**Easy handling of hierarchical data**
We use `Django MPTT <http://pypi.python.org/pypi/django-mptt>`_ to manage the data efficiently and provide the extra access functions.
**Easy importation of data**
Import a tree or trees of space- or tab-indented data with a Django management command.
**Metadata for better SEO on web pages**
Include all the metadata you want for easy inclusion on web pages.
**Link uncategorized objects to a category**
Attach any number of objects to a category, even if the objects themselves aren't categorized.
**Hierarchical Admin**
Shows the data in typical tree form with disclosure triangles
**Template Helpers**
Easy ways for displaying the tree data in templates:
**Show one level of a tree**
All root categories or just children of a specified category
**Show multiple levels**
Ancestors of category, category and all children of category or a category and its children
Contributors
============
* Corey Oordt http://github.com/coordt
* Erik Simmler http://github.com/tgecho
* Martin Ogden http://github.com/martinogden
* Ramiro Morales http://github.com/ramiro
* Evan Culver http://github.com/eculver
* Andrzej Herok http://github.com/aherok
* Jonathan Hensley
* Justin Quick http://github.com/justquick
* Josh Ourisman http://github.com/joshourisman
* Jose Soares http://github.com/josesoa
| 41.070652 | 275 | 0.744343 |