.. role:: hidden
:class: hidden-section
===========
Get started
===========
This section focuses on explaining the general functionalities of the package,
and how its components interact with each other. Then two tutorials are
provided:
- :ref:`The first one going through the main features of the package <Basic Tutorial>`
- :ref:`The second for GPU users, highlighting advanced features. <Advanced Tutorial>`
For an installation guide, please check :ref:`here <Installation>`
.. toctree::
:hidden:
tutorial_1
tutorial_2
Package Description
===================
General package architecture
----------------------------
The Causal Discovery Toolbox is a package for causal discovery in the
observational setting. Therefore support for data with interventions is not
available at the moment, but is considered for later versions.
The package is structured in 5 modules:
1. Causality: ``cdt.causality`` implements algorithms for causal discovery, either
in the pairwise setting or the graph setting.
2. Independence: ``cdt.independence`` includes methods to recover the dependence
graph of the data.
3. Data: ``cdt.data`` provides the user with tools to generate data, and load
benchmark data.
4. Utils: ``cdt.utils`` provides tools to the users for model
construction, graph utilities and settings.
5. Metrics: ``cdt.metrics`` includes scoring metrics for graphs, taking as input
``networkx.DiGraph``
All computation methods adopt a scikit-learn-like interface: ``.predict()``
launches the algorithm on the data given to the toolbox, and ``.fit()``
trains learning algorithms. Most of the algorithms are classes, and their
parameters can be customized in the ``.__init__()`` function of the class.
.. note::
The ``.predict()`` function is often implemented in the base class
(``cdt.causality.graph.GraphModel`` for causal graph algorithms).
``.predict()`` is often a wrapper calling sub-functions depending on the
arguments fed to the functions. The sub-functions, such as
``.orient_directed_graph()`` for ``cdt.causality.graph`` models (which is
called when a directed graph is fed as a second argument), are
implemented and documented in the various algorithms.
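The dispatch logic described in the note can be sketched as follows. This is a minimal stand-in illustration, not CDT's actual code: the bare ``Graph``/``DiGraph`` classes replace the ``networkx`` ones, and ``MyAlgorithm`` is hypothetical.

```python
class Graph:
    """Stand-in for networkx.Graph (undirected)."""

class DiGraph:
    """Stand-in for networkx.DiGraph (directed)."""

class GraphModel:
    """Sketch of a base class whose .predict() dispatches on its arguments."""

    def predict(self, data, graph=None):
        # dispatch on the type of the optional second argument
        if graph is None:
            return self.create_graph_from_data(data)
        if isinstance(graph, DiGraph):
            return self.orient_directed_graph(data, graph)
        if isinstance(graph, Graph):
            return self.orient_undirected_graph(data, graph)
        raise TypeError("graph must be a Graph, a DiGraph or None")

    # sub-functions implemented by the concrete algorithms
    def create_graph_from_data(self, data):
        raise NotImplementedError

    def orient_directed_graph(self, data, graph):
        raise NotImplementedError

    def orient_undirected_graph(self, data, graph):
        raise NotImplementedError

class MyAlgorithm(GraphModel):
    """Hypothetical algorithm overriding the three sub-functions."""

    def create_graph_from_data(self, data):
        return "graph-from-data"

    def orient_directed_graph(self, data, graph):
        return "oriented-digraph"

    def orient_undirected_graph(self, data, graph):
        return "oriented-graph"
```

Calling ``MyAlgorithm().predict(data)`` skeleton-builds a graph, while passing a directed or undirected graph as the second argument routes to the corresponding orientation routine.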
Hardware and algorithm settings
-------------------------------
The toolbox has a SETTINGS class that defines the hardware settings. Those
settings are unique and their default parameters are defined in
``cdt/utils/Settings``.
These parameters are accessible and can be overridden by accessing the class:
.. code-block:: python
>>> import cdt
>>> cdt.SETTINGS
Moreover, the hardware parameters are detected and defined automatically
(including number of GPUs, CPUs, available optional packages) at the ``import``
of the package using the ``cdt.utils.Settings.autoset_settings`` method, run at
startup.
These settings can be overridden in two ways:
1. By changing ``cdt.SETTINGS`` attributes, thus changing the SETTINGS for the
whole python session.
2. By changing the parameters of the functions/classes used. When their default
value in the class definition is ``None``, the ``cdt.SETTINGS`` value is taken, by
using the ``cdt.SETTINGS.get_default`` function. This allows for quick and
temporary parameter change.
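The fallback mechanism can be sketched generically as below. The real class lives in ``cdt/utils/Settings``, and the attribute names (``NJOBS``) and the function ``some_algorithm`` here are illustrative assumptions.

```python
class Settings:
    """Minimal sketch of a session-wide settings object with per-call overrides."""

    def __init__(self, njobs=2, verbose=True):
        self.NJOBS = njobs
        self.verbose = verbose

    def get_default(self, value, default_attr):
        # if the caller passed None, fall back to the session-wide setting
        return getattr(self, default_attr) if value is None else value

SETTINGS = Settings()

def some_algorithm(data, njobs=None):
    """Hypothetical function using the settings fallback pattern."""
    njobs = SETTINGS.get_default(njobs, "NJOBS")
    return njobs
```

Passing an explicit value overrides the setting for that call only, while mutating ``SETTINGS.NJOBS`` changes the default for the rest of the session.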
The graph class
---------------
The whole package revolves around using the graph classes of the ``networkx``
package.
Most of the methods have the option of predicting a directed graph
(`networkx.DiGraph`) or an undirected graph (`networkx.Graph`).
The ``networkx`` library might not be intuitive to use at first, but it comes
with many useful tools for graphs. Here is a list of handy functions for
interpreting the toolbox's outputs:
.. code-block:: python
>>> import networkx as nx
>>> g = nx.DiGraph() # initialize a directed graph
>>> l = list(g.nodes()) # list of nodes in the graph
>>> a = nx.adj_matrix(g).todense() # Output the adjacency matrix of the graph
>>> e = list(g.edges()) # list of edges in the graph
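If needed, the same adjacency information can be assembled without ``networkx``; a plain-Python sketch of the dense matrix that ``nx.adj_matrix`` produces for a directed graph:

```python
def adjacency_matrix(nodes, edges):
    """Dense adjacency matrix (list of lists) for a directed graph.

    nodes: ordered list of node labels
    edges: list of (source, target) pairs
    """
    index = {node: i for i, node in enumerate(nodes)}
    matrix = [[0] * len(nodes) for _ in nodes]
    for source, target in edges:
        # a 1 in row i, column j means an edge from nodes[i] to nodes[j]
        matrix[index[source]][index[target]] = 1
    return matrix
```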
Please refer to `networkx` 's documentation for more detailed information:
https://networkx.github.io/documentation/stable/
====================
The Grader Interface
====================
How should the grader interface work?
.. Copyright 2019 CNES
.. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
in compliance with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing, software distributed under the License
is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the License for the specific language governing permissions and
limitations under the License.
.. resto_client documentation master file
Documentation of the resto_client package
=========================================
**resto_client** is a package of Python modules blabla.....
It offers two kinds of services:
- **modules** for the developer blabla...
- **CLI interface** which can be activated from the command line to perform some task
resto_client adheres to several standards to provide code that can be reused with confidence:
- `PEP8 <https://www.python.org/dev/peps/pep-0008/>`_ compliance
- Auto-documentation
- git based configuration management implemented on a `dedicated forge <https://gitlab.kalimsat.eu/studies/resto_client/>`_
- intensive unit testing
All the details for using resto_client in the development of your application are provided in the
sections below.
resto_client API
----------------
.. toctree::
:maxdepth: 4
api_doc/modules.rst
servers_db_design.rst
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
KGE Models for SG4MR
====================
This directory contains KGE models trained on SG4MR [1], a KG for scholarly data containing four entity types:
* *Author*
* *Paper*
* *Department*
* *Event*
and four relationship types:
* *isAuthorOf*: Denotes the relationship between an author and its papers
* *isCoAuthorOf*: Denotes that two authors have at least one joint paper
* *isAffiliatedIn*: Denotes the department to which an author is affiliated
* *isPublished*: Denotes the event in which a paper is published
Overall, SG4MR contains 45952 triples (39952 training triples and 6000 test
triples), 2870 paper entities, 4495 author entities, 5064 department entities, and
15 event entities.
.. image:: figures/sg4mr_kg.png
References
==========
.. [1] Henk, Veronika, et al. "Metaresearch Recommendations using Knowledge Graph Embeddings." (2019).
.. The contents of this file are included in multiple topics.
.. This file should not be changed in a way that hinders its ability to appear in multiple documentation sets.
The |lwrp powershell| lightweight resource is used to execute a script using the |windows powershell| interpreter (similar to script resources for |bash|, |csh|, |perl|, |python|, or |ruby|). A temporary file is created and executed like other script resources, rather than run in-line. A |windows powershell| lightweight resource is not idempotent. Use the ``not_if`` and ``only_if`` meta parameters to guard the use of this resource for idempotence.

.. _standards_consistency:
-------------------------
Standards and Consistency
-------------------------
- When creating folders and associated .RST files:
- Avoid redundancy - XYZ_LAB, ABC_LAB, etc.
- Always use lowercase letters
- Never use spaces, or dashes. Instead use underscores
- Avoid abbreviations. These names only get typically typed four times (folder, .rst, twice in index.rst), so not much is being gained, and could avoid confusion in future.
- Always add .. `_section_reference:` to the top of every .RST, and additionally anywhere else you'd like to provide an easy way to refer the attendee to.
- Always add the standard appendix section which includes a glossary. Review this to ensure it contains any terms or concepts that may need further clarification.
- UserXX (TODO: add proper formatting)
Standard Index, which includes most up-to-date HPOC info, VPN, Frame, etc.
Example formatting:
------------
Main Heading
------------
Subsection
++++++++++
Sub sub-section
...............
TODO: Add screenshot illustrating
For items where the attendee has to enter an IP address, use formatting: `https://<NUTANIX-CLUSTER-IP>:9440` (Caps and dashes, enclosed in <>, the whole thing enclosed in `)
Misc Notes
++++++++++
Drop-down, left-hand, right-hand.
Does it make sense to link terms right to the glossary, maybe for the first X times we use it?
Always have the staging in mind when creating a bootcamp. Below are just some of the options available for staging, as this is continually being expanded upon.
- Windows Tools VM - Typically `WinTools-UserXX`
- Linux Tools VM (?) - ?
- AutoAD
- Nutanix products/services - Calm, Flow, Leap, etc.
Every RST should have:
- How long the entire .RST, and/or each section, should take. Typical advice is to use the length of your trial run as a starting point, then double it. Ex. 15-30 minutes.
- Overview: What should they expect to learn?
-
Bold use cases:
Italics use cases:
Red HTML use cases:
Note use cases:
========
Examples
========
Sector 32-ID-C
==============
A template TXM script is shown below. It doesn't actually collect any
data, but it does set up the TXM, open the shutters, close them again,
and tear down the TXM. The ``variableDict`` describes the parameters
that are presented to the user in the GUI when running this script. In
the example below, several actions take place within a
:py:meth:`~aps_32id.txm.NanoTXM.run_scan` context manager. This
ensures that the current configuration is restored after the scan.
.. literalinclude:: examples/my_aps32_script.py
.. _exhale_function_namespaceh5pp_1a1916ac19846807673af32b055bb98af4:
Template Function h5pp::format(const std::string&, ] Args...)
=============================================================
- Defined in :ref:`file__home_david_GitProjects_h5pp_h5pp_include_h5pp_details_h5ppFormat.h`
Function Documentation
----------------------
.. doxygenfunction:: h5pp::format(const std::string&, ] Args...)
.. _plugin-auth:
****
Auth
****
Description
===========
.. automodule:: spockbot.plugins.core.auth
Events
======
Undocumented
Methods and Attributes
======================
.. autoclass:: AuthCore
:members:
:undoc-members:
Utils
#####
.. automodule:: gym_electric_motor.utils
:members:
rptools.rpscore.rpScore.load\_training\_data
============================================
.. currentmodule:: rptools.rpscore.rpScore
.. autofunction:: load_training_data

.. ---------------------------------------------------------------------------
.. Copyright 2016-2018 Intel Corporation
..
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
.. ---------------------------------------------------------------------------
NLP Architect Server Deployment Tutorial
########################################
Overview
--------
This tutorial walks you through the steps for deploying the NLP Architect server locally.
This deployment allows the server to scale well with user requests.
SW Stack
--------
The various layers of the software stack are as follows:
.. image :: assets/service_deploy.png
Prerequisites
-------------
1. Ubuntu 16.04
2. Must have root privileges
3. Virtualization must be enabled in your computer's BIOS
Kubectl Installation
--------------------
kubectl is a command line interface for running commands against Kubernetes clusters.
Following are the installation instructions
.. code::
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo touch /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
Minikube Installation
---------------------
Minikube provides a simple way of running Kubernetes on your local machine.
Following are the installation instructions
.. code::
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
sudo minikube start --vm-driver=none
Follow instructions posted by minikube
Docker Installation
-------------------
.. code::
sudo apt-get install ca-certificates curl gnupg2 software-properties-common
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7EA0A9C3F273FCD8
sudo apt-get update && sudo apt-get install docker-ce
sudo usermod -aG docker $USER
# if the following command doesn't work, re-login again
exec su -l $USER
Launch Docker Registry
----------------------
.. code::
# Follow https://docs.docker.com/config/daemon/systemd/#httphttps-proxy to fix proxy settings if need be
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Build DockerFile
-----------------
Create a Dockerfile with the following content and save it in your deployment directory.
.. code::
FROM python:3.6 AS builder
RUN apt-get update
RUN apt-get install -y git
ARG GITHUB_ACCESS_TOKEN
# check out project at current location, hopefully this is a tag eventually
# right now this is latest commit from https://github.com/NervanaSystems/nlp-architect/pull/243/commits/ at 12:21pm 8/6/18
RUN git clone https://x-access-token:"${GITHUB_ACCESS_TOKEN}"@github.com/NervanaSystems/nlp-architect.git
# prevent keeping token in final image
FROM python:3.6
COPY --from=builder /nlp-architect /src/nlp-architect
ARG NLP_ARCH_VERSION=v0.3
WORKDIR /src/nlp-architect
RUN git fetch
RUN git checkout ${NLP_ARCH_VERSION}
# install nlp-architect project itself
RUN pip3 install .
# run NLP Architect server
CMD [ "nlp_architect", "server", "-p", "8080"]
Run the following commands to build the docker file
.. code::
docker build --build-arg GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN} --build-arg HTTP_PROXY=${HTTP_PROXY} --build-arg HTTPS_PROXY=${HTTPS_PROXY} --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${https_proxy} -t nlp_architect .
docker tag nlp_architect localhost:5000/nlp_architect
docker push localhost:5000/nlp_architect
docker run --rm -it -p 8080:8080 localhost:5000/nlp_architect
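As a quick sanity check from Python before moving on to Kubernetes, you can probe the published port; a generic stdlib sketch using the host/port values from the commands above:

```python
import socket
from urllib.parse import urlunsplit

def service_url(host, port, path=""):
    """Build the base URL for the containerized service."""
    return urlunsplit(("http", "{}:{}".format(host, port), path, "", ""))

def port_is_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

With the container running, ``port_is_open("localhost", 8080)`` should return ``True``.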
Deploy Kubernetes
-----------------
Create a ``deployment.yaml`` file in the same directory as your deployment, with the following contents:
.. code::
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nlp-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nlp-server
        id: "0"
        app: nlp-server
    spec:
      containers:
      - name: nlp-server
        image: localhost:5000/nlp_architect
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1300m
            memory: 1600Mi
          requests:
            cpu: 1100m
            memory: 1300Mi
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nlp-server
spec:
  type: NodePort
  selector:
    app: nlp-server
  ports:
  - name: http
    port: 8080
    targetPort: 8080
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nlp-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nlp-server
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
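With ``targetAverageUtilization: 50``, the autoscaler follows the standard HPA rule ``desired = ceil(current * currentUtilization / targetUtilization)``, clamped to the ``minReplicas``/``maxReplicas`` bounds above. A sketch of that arithmetic:

```python
import math

def desired_replicas(current, current_utilization, target_utilization,
                     min_replicas=3, max_replicas=10):
    """Replica count the HPA would request, clamped to its configured bounds."""
    desired = math.ceil(current * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# e.g. 3 pods averaging 100% CPU against a 50% target scale out to 6 pods
```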
Run the following commands to create a deployment on the kubernetes cluster
.. code::
kubectl create -f deployment.yaml
# run the following command to see your pods spin up; there will be 3 of them if your machine has enough resources
watch -n1 kubectl get pods
# this next command gives you the {nodeportvalue} below, it'll be in the format `8080:{nodeportvalue}`
kubectl get svc
# this next command will show you the hpa created with this deployment
kubectl get hpa
# if you ever want to see everything at once, run this:
kubectl get all
# if there is a problem, run this:
kubectl logs {podname}
# if there is a problem with the deployment itself, run this:
kubectl describe pod {podname}
# to redeploy, run this, and then rerun the `kubectl create -f deployment.yaml` command
kubectl delete -f deployment.yaml
To test the server
.. code::
curl --noproxy "*" $(sudo minikube ip):{nodeportvalue}
where ``{nodeportvalue}`` comes from ``kubectl get svc``.
Now you can browse NLP Architect at the following URL: http://{operating_system_ip}:8080
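From a script, a request to the service can be assembled as below. Note that the ``/inference`` path and the payload shape are assumptions for illustration only; check the NLP Architect server documentation for the actual API.

```python
import json
from urllib.request import Request

def build_inference_request(host, port, model, docs):
    """Assemble a JSON POST request for the service (hypothetical endpoint)."""
    url = "http://{}:{}/inference".format(host, port)
    payload = json.dumps({"model_name": model, "docs": docs}).encode("utf-8")
    # Request objects with a data= body default to the POST method
    return Request(url, data=payload,
                   headers={"Content-Type": "application/json"})
```

The resulting object can be sent with ``urllib.request.urlopen`` once the pods are up.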
.. _datasets:
Altair Datasets
===============
Altair includes a loader for a number of built-in datasets, available in the
`vega-datasets`_ GitHub repository.
These datasets are used throughout the documentation, and particularly in
the :ref:`example-gallery`.
The list of available datasets can be found using the
:func:`~altair.datasets.list_datasets` function:
>>> from altair.datasets import list_datasets
>>> len(list_datasets())
40
>>> list_datasets()[:5]
['airports', 'anscombe', 'barley', 'birdstrikes', 'budget']
If you would like to load a dataset and use it within a plot, use the
:func:`altair.load_dataset` function:
>>> from altair import load_dataset
>>> data = load_dataset('movies')
>>> print(data.columns)
Index(['Creative_Type', 'Director', 'Distributor', 'IMDB_Rating', 'IMDB_Votes',
'MPAA_Rating', 'Major_Genre', 'Production_Budget', 'Release_Date',
'Rotten_Tomatoes_Rating', 'Running_Time_min', 'Source', 'Title',
'US_DVD_Sales', 'US_Gross', 'Worldwide_Gross'],
dtype='object')
The data is returned as a Pandas dataframe, which can then be used directly
within Altair:
.. altair-setup::
import altair as alt
data = alt.load_dataset('movies')
.. altair-plot::
import altair as alt
alt.Chart(data).mark_tick().encode(
    x='Production_Budget',
    y='MPAA_Rating'
).configure_cell(
    width=400
)
Note that you can also "load" the dataset by url:
>>> url = load_dataset('movies', url_only=True)
>>> url
'https://vega.github.io/vega-datasets/data/movies.json'
Passing data by URL can be more efficient, as the plot specification is not
required to encode the entire dataset within the JSON structure. Keep in mind,
though, that Altair cannot do type inference on data specified by URL, so
you must specify the :ref:`data-types` explicitly:
.. altair-setup::
url = alt.load_dataset('movies', url_only=True)
.. altair-plot::
import altair as alt
alt.Chart(url).mark_tick().encode(
    x='Production_Budget:Q',
    y='MPAA_Rating:N'
).configure_cell(
    width=400
)
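The single-letter suffixes in the shorthand map to Vega-Lite encoding types: ``Q`` (quantitative), ``N`` (nominal), ``O`` (ordinal), and ``T`` (temporal). A small sketch of the convention:

```python
TYPE_CODES = {"Q": "quantitative", "N": "nominal", "O": "ordinal", "T": "temporal"}

def parse_shorthand(shorthand):
    """Split 'field:T' shorthand into the field name and the full type name.

    Returns (field, None) when no type code is given, as with in-memory
    dataframes where Altair can infer the type itself.
    """
    field, _, code = shorthand.partition(":")
    return field, TYPE_CODES.get(code)
```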
For more examples of visualization using the available datasets, see the
:ref:`example-gallery`.
.. _vega-datasets: https://github.com/vega/vega-datasets
mxnet.ndarray.sparse.concat
===========================
.. currentmodule:: mxnet.ndarray.sparse
.. autofunction:: concat

.. _rlberry: https://github.com/rlberry-py/rlberry
.. _compare_agents:
Compare different agents
========================
Two or more agents can be compared using the classes
:class:`~rlberry.manager.agent_manager.AgentManager` and
:class:`~rlberry.manager.multiple_managers.MultipleManagers`, as in the example below.
.. code-block:: python
import numpy as np
from rlberry.envs.classic_control import MountainCar
from rlberry.agents.torch.reinforce import REINFORCEAgent
from rlberry.agents.kernel_based.rs_kernel_ucbvi import RSKernelUCBVIAgent
from rlberry.manager import AgentManager, MultipleManagers, plot_writer_data
# Environment constructor and kwargs
env = (MountainCar, {})
# Parameters
params = {}
params['reinforce'] = dict(
    gamma=0.99,
    horizon=160,
)
params['kernel'] = dict(
    gamma=0.99,
    horizon=160,
)
eval_kwargs = dict(eval_horizon=200)
# Create AgentManager for REINFORCE and RSKernelUCBVI
multimanagers = MultipleManagers()
multimanagers.append(
    AgentManager(
        REINFORCEAgent,
        env,
        init_kwargs=params['reinforce'],
        fit_budget=100,
        n_fit=4,
        parallelization='thread')
)
multimanagers.append(
    AgentManager(
        RSKernelUCBVIAgent,
        env,
        init_kwargs=params['kernel'],
        fit_budget=100,
        n_fit=4,
        parallelization='thread')
)
# Fit and plot
multimanagers.run()
plot_writer_data(
    multimanagers.managers,
    tag='episode_rewards',
    preprocess_func=np.cumsum,
    title="Cumulative Rewards")
==============
Notebook Entry
==============
:subtitle: July 1, 2014
:category: notebook
:date: 2014-07-01 17:15:38
:slug: notebook-2014-07-01
:tags: notebook, ipopt, python
Trying to use Ipopt in Python.
Ipopt installation
==================
I tried to install from the Ubuntu repos, but had trouble linking to it or
running the example::
$ sudo aptitude install coinor-libipopt1 coinor-libipopt-dev coinor-libipopt-doc
So then I removed it::
$ sudo aptitude remove coinor-libipopt1 coinor-libipopt-dev coinor-libipopt-doc
The Ipopt documentation_ gives very nice instructions for installing it from
source, which basically goes like this::
$ cd ~/src
$ svn co https://projects.coin-or.org/svn/Ipopt/stable/3.11 CoinIpopt
$ cd CoinIpopt/ThirdParty/Blas
$ ./get.Blas
$ cd ../Lapack
$ ./get.Lapack
$ cd ../ASL
$ ./get.ASL
$ cd ../Mumps
$ ./get.Mumps
$ cd ../Metis
$ ./get.Metis
This downloads the BLAS and LAPACK reference implementations, but I think the
compilation process looks for system-installed implementations and uses them
if you have them.
It is a good idea to get the HSL code (but not necessarily required because
Mumps can be used) and drop it in the ThirdParty/HSL directory. You have to get
the free academic license 2011 code and sign a form to use it. It also takes a
day to get the download link by email. (Note that HSL can be loaded as a
shared lib after compiling Ipopt, so recompiling is not strictly required,
but it may be easier to just recompile).
I compiled without HSL because I'm waiting for the download link.
First change into the build directory::
$ cd ../../build
And run configure::
$ ./configure
These are some potential options I may want to set in the future::
--prefix /usr/local # for system install
--with-blas="-L$HOME/lib -lmyblas" # link to better blas implementations
Now compile, test, and install::
$ make -j5
$ make test
$ make install # sudo if system install
This ended up putting everything in ``~/src/CoinIpopt/`` like
``~/src/CoinIpopt/include``, ``~/src/CoinIpopt/lib``, etc. So I did something
wrong, as I thought it should have ended up in ``~/src/CoinIpopt/build/``.
.. _documentation: https://projects.coin-or.org/Ipopt/browser/stable/3.11/Ipopt/doc/documentation.pdf
cyipopt
=======
I'm hoping to use Ipopt through a Python wrapper. There seem to be two
existing standalone wrappers, pyipopt_ and cyipopt_. I like Cython, and
cyipopt was newer, so I gave it a shot.
I first created a conda Python 2.7 environment::
$ conda create -n cyipopt numpy scipy cython matplotlib sphinx
$ source activate cyipopt
Then got the source code from bitbucket::
$ cd ~/src
$ hg clone https://bitbucket.org/amitibo/cyipopt
$ cd cyipopt
I installed on Ubuntu 14.04 so I had to do some pruning in the setup.py file.
Basically just remove some odd stuff from ``main_unix()`` so it looks like
this::
def main_unix():
setup(
name=PACKAGE_NAME,
version=VERSION,
description=DESCRIPTION,
author=AUTHOR,
author_email=EMAIL,
url=URL,
packages=[PACKAGE_NAME],
cmdclass={'build_ext': build_ext},
ext_modules=[
Extension(
PACKAGE_NAME + '.' + 'cyipopt',
['src/cyipopt.pyx'],
**pkgconfig('ipopt')
)
],
)
Since I didn't install IPopt system wide I needed to export these two
environment variables to get things to work::
$ export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:~/src/CoinIpopt/lib/pkgconfig
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/src/CoinIpopt/lib
And finally::
$ python setup.py install
$ python test/examplehs071.py
$ python test/lasso.py
And it worked.
.. _pyipopt: https://github.com/xuy/pyipopt
.. _cyipopt: https://bitbucket.org/amitibo/cyipopt
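As a rough cross-check of the HS071 example shipped with cyipopt
(``test/examplehs071.py``), the same classic test problem can also be solved
with SciPy's SLSQP solver. This is only a sketch for sanity-checking small
problems, not a substitute for Ipopt on large ones::

```python
import numpy as np
from scipy.optimize import minimize

# HS071: minimize x0*x3*(x0 + x1 + x2) + x2
def objective(x):
    return x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2]

constraints = (
    # product constraint: x0*x1*x2*x3 >= 25
    {'type': 'ineq', 'fun': lambda x: np.prod(x) - 25.0},
    # sum-of-squares constraint: x0^2 + x1^2 + x2^2 + x3^2 == 40
    {'type': 'eq', 'fun': lambda x: np.dot(x, x) - 40.0},
)

result = minimize(objective, x0=[1.0, 5.0, 5.0, 1.0], method='SLSQP',
                  bounds=[(1.0, 5.0)] * 4, constraints=constraints)
print(result.x)    # roughly [1.0, 4.743, 3.821, 1.379]
print(result.fun)  # roughly 17.014
```

The known optimum of HS071 is about 17.014, which makes it a handy smoke test
for any NLP solver setup.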
Other Things
============
Here are a bunch of other notes about things I found today:
- A list of Python optimization tools: https://software.sandia.gov/trac/coopr/wiki/Documentation/RelatedProjects
- Casadi is a symbolic framework for numerical optimization with automatic
differentiation, has python bindings and includes Ipopt: https://github.com/casadi/casadi/
- NLOPT: has a bunch of optimizers plus python bindings: http://ab-initio.mit.edu/wiki/index.php/NLopt
- Most of the large-scale NLP optimizers seem to be guarded behind
  closed-source licenses.
- pagmo: parallel optimization includes python bindings and has SciPy, SNOPT,
and IPOPT connections https://github.com/esa/pagmo
- nlpy: large scale optimization with python https://github.com/dpo/nlpy
- Nice SO question on NLP in Python: http://scicomp.stackexchange.com/questions/83/is-there-a-high-quality-nonlinear-programming-solver-for-python
- The paper that explains SNOPT's algorithm: http://www-leland.stanford.edu/group/SOL/reports/snopt.pdf
- JModelica seems to have some Python interfaces to things.
- PyOpt interfaces lots of code, but mostly commercial code.
- OpenMDAO uses PyOpt.
- My old labmate Gilbert has a Python Optimal Control package
https://github.com/gilbertgede/PyOCP and an interior point optimizer
https://github.com/gilbertgede/PyIntropt
Running Oecophylla locally
==========================
For this tutorial, we'll analyze a pair of shallowly sequenced microbiome
samples distributed with the repository. They are located in
``oecophylla/test_data/test_reads``.
This tutorial essentially duplicates the automatic execution of test data
above, walking you through each step with the test data, but *as if you were
running on your own data*.
Gather inputs
-------------
We need three inputs to execute:
1. **Input reads directory**: ``test_data/test_reads``
2. **Parameters file**: ``test_data/test_config/test_params.yml``
3. **Environment file**: ``test_data/test_config/test_envs.yml``
The *input reads* should be the gzipped raw demultiplexed Illumina reads, with
filenames conforming to the Illumina bcl2fastq standard (e.g.
sample_S102_L001_R1_001.fastq.gz).
The *environment file* simply lists the commands necessary to invoke the
correct environment for each module. The included example file specifies the
commands necessary for the Oecophylla-installed Conda environments, but these
can be changed if, for example, you have appropriate environments specified as
modules on a cluster.
The *parameters file* specifies the parameters for each tool being executed,
including paths to the relevant databases.
Information from these three files is combined by Oecophylla into a single
``config.yaml`` configuration file and placed in the output directory of a run.
This serves as complete record of the settings chosen for execution of a run,
as well as instructions for restarting or extending the processing on a
dataset for subsequent invokations of Oecophylla.
Run QC with Oecophylla
----------------------
To run a simple workflow, executing read trimming and QC summaries, run the
following *from the Oecophylla directory*:
.. code-block:: bash
:caption: note that the backslash here is just escaping the return
oecophylla workflow \
--input-dir test_data/test_reads \
--params test_data/test_config/test_params.yml \
--envs test_data/test_config/test_envs.yml \
--output-dir test_output qc
Then go get a cup of coffee.
\...
When you come back, you will find a directory called ``test_output``.
Inside of ``test_output`` will be the configuration file ``config.yaml``.
Also inside of ``test_output`` will be a folder called ``results``. This
contains... well, you know.
Take a look at the file
``test_output/results/qc/multiQC_per_sample/multiqc_report.html``. This is a
portable HTML-based summary of the FastQC quality information from each of
your samples.
Run additional modules
----------------------
Now that you have initiated an Oecophylla run, you can call subsequent modules
without providing paths for the three above inputs. Simply providing the
output directory and module to execute will be enough: Oecophylla will find
the ``config.yaml`` file in the output directory, and pick up where it left
off.
To continue with the ``taxonomy`` and ``function`` modules on the previous
outputs, you can now simply run:
.. code-block:: bash

    oecophylla workflow --output-dir test_output taxonomy function
Oecophylla will find the outputs from the ``qc`` module, using these cleaned
reads as inputs to the next steps.
Results
-------
The ``results`` directory will contain a separate folder for each module in
the analysis workflow.
These modules each will list per-sample outputs---for example, trimmed reads
or assembled contigs---in per-sample directories within the module directory.
Combined outputs---for example, the MultiQC summaries or combined biom table
taxonomic summaries---will be found in their own directories within the primary
module directory.
=============
Hilbert Space
=============
.. automodule:: sympy.physics.quantum.hilbert
:members:
Visualizing Astronomical Dendrograms
====================================
.. note:: Dendrogram visualization is experimental in Glue v0.3
You can use Glue to visualize dendrograms created by the
`astrodendro <http://dendrograms.org/>`_ package.
Enabling Dendrogram Visualization
---------------------------------
Because dendrogram visualization is still experimental, you
must enable it before using it. To do so, add the following
line to your :ref:`~/.glue/config.py <configuration>` file::
from glue.qt.widgets import enable_dendrograms
enable_dendrograms()
Building a dendrogram
---------------------
The details of constructing dendrograms for astronomical images
is beyond the scope of this document -- see `<http://dendrograms.org/>`_
for more information. The following snippet builds a dendrogram
from the W5 image used in the :ref:`tutorial <getting_started>`::
from astropy.io import fits
from astrodendro import Dendrogram
data = fits.getdata('W5.fits')
dg = Dendrogram.compute(data, min_value=500, min_npix=50)
dg.save_to('w5_dendro.fits')
Next, load this file into Glue, choosing "Dendrogram" as a file type.
You can now visualize the W5 dendrogram alongside its image:
.. figure:: dendro.png
:align: center
:width: 400px
Linking to Catalog Properties
-----------------------------
If you have used astrodendro to compute a catalog of structure properties,
you can visualize that in Glue as well. The best way to do this is to
save the catalog as a table, load it into Glue, and
:ref:`merge it <merging>` with the dendrogram dataset. This will
supplement the dendrogram with the additional catalog-derived properties.
Preliminary support for `windows/arm64` target
----------------------------------------------
``numpy`` added support for the windows/arm64 target. Please note that
``OpenBLAS`` support is not yet available for the windows/arm64 target.
===================================
Karbor: Application Data Protection
===================================
Introduction
============
Karbor is an OpenStack project that provides a pluggable framework for
protecting and restoring Data and Metadata that comprises an OpenStack-deployed
application - Application Data Protection as a Service.
Mission Statement
~~~~~~~~~~~~~~~~~
To protect the Data and Metadata that comprises an OpenStack-deployed
Application against loss/damage (e.g. backup, replication) by providing a
standard framework of APIs and services that allows vendors to provide plugins
through a unified interface
.. toctree::
:maxdepth: 2
Using Karbor
============
.. toctree::
:maxdepth: 2
readme
install/index
configuration/index
admin/index
Available Plugins
=================
.. toctree::
:maxdepth: 2
bank_plugins
protectable_plugins
protection_plugins
Contributor Docs
================
.. toctree::
:maxdepth: 2
contributor/index
Release Notes
=============
.. toctree::
:maxdepth: 1
releasenotes
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _erlotinib: https://github.com/DavAug/erlotinib
***************
Parameter Plots
***************
.. currentmodule:: erlotinib
Parameter plots in erlotinib_ are intended to illustrate parameter
inference results derived with a :class:`InferenceController`.
.. currentmodule:: erlotinib.plots
Functional classes
------------------
- :class:`MarginalPosteriorPlot`
- :class:`ParameterEstimatePlot`
Detailed API
^^^^^^^^^^^^
.. autoclass:: MarginalPosteriorPlot
:members:
:inherited-members:
.. autoclass:: ParameterEstimatePlot
:members:
:inherited-members:
.. _nusshkey:
nusshkey
===========================================
.. class:: nusshkey.NUSSHKey(bambou.nurest_object.NUMetaRESTObject,):
None
Attributes
----------
- ``name`` (**Mandatory**): Name of the SSH Key.
- ``description``: A description of the SSH Key.
- ``key_type``: Type of SSH Key defined. Only RSA supported for now.
- ``public_key``: Public Key of a SSH Key Pair.
Working with an ANSYS Full File (full)
======================================
The ANSYS full file is a FORTRAN formatted binary file containing the
mass and stiffness from an ANSYS analysis. Using pyansys it can be
loaded into memory as either a sparse or full matrix.
Reading a Full File
-------------------
This example reads in the mass and stiffness matrices associated with
the above example. ``load_km`` sorts degrees of freedom such that the
nodes are ordered from minimum to maximum, and each degree of freedom
(i.e. X, Y, Z), are sorted within each node. The matrices ``k`` and
``m`` are sparse by default, but if ``scipy`` is not installed, or if
the optional parameter ``as_sparse=False`` then they will be full
numpy arrays.
By default ``load_km`` outputs the upper triangle of both matrices.
The constrained nodes of the analysis can be identified by accessing
``fobj.const`` where the constrained degrees of freedom are True and
all others are False. This corresponds to the degrees of reference in
``dof_ref``.
By default dof_ref is unsorted. To sort these values, set
``sort==True``. It is enabled for this example to allow for plotting
of the values later on.
.. code:: python
# Load pyansys
import pyansys
from pyansys import examples
# Create result reader object and read in full file
full = pyansys.read_binary(examples.fullfile)
dof_ref, k, m = full.load_km(sort=True)
ANSYS only stores the upper triangular matrix in the full file. To
make the full matrix:
.. code:: python

    from scipy import sparse

    k += sparse.triu(k, 1).T
    m += sparse.triu(m, 1).T
If you have ``scipy`` installed, you can solve for the natural
frequencies and mode shapes of a system.
.. code:: python

    import numpy as np
    from scipy import sparse
    from scipy.sparse import linalg

    # condition the k matrix
    # to avoid getting the "Factor is exactly singular" error
    k += sparse.diags(np.random.random(k.shape[0])/1E20, shape=k.shape)

    # Solve
    w, v = linalg.eigsh(k, k=20, M=m, sigma=10000)

    # System natural frequencies
    f = (np.real(w))**0.5/(2*np.pi)
.. code::
print('First four natural frequencies')
for i in range(4):
print('{:.3f} Hz'.format(f[i]))
First four natural frequencies
1283.200 Hz
1283.200 Hz
5781.975 Hz
6919.399 Hz
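The numbers above come from the ANSYS-generated matrices, but the same recipe
can be sanity-checked on a toy system. The sketch below uses a hypothetical
two-degree-of-freedom spring-mass chain (unit masses and stiffnesses, not
taken from any full file) with SciPy's dense symmetric solver:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF spring-mass chain with k = m = 1; these small dense
# matrices stand in for the sparse K and M read from a full file.
K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
M = np.eye(2)

# Solve the generalized symmetric eigenvalue problem  K v = w M v
evals, evecs = eigh(K, M)

# Natural frequencies in Hz, same formula as above
f_toy = np.sqrt(evals) / (2.0 * np.pi)
print(f_toy)  # analytic values are sqrt((3 -/+ sqrt(5))/2) / (2*pi)
```

For this system the eigenvalues are (3 ± √5)/2, so the computed frequencies
can be checked against the closed-form answer.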
Plotting a Mode Shape
---------------------
You can also plot the mode shape of this finite element model. Since the constrained degrees of freedom have been removed from the solution, you have to account for these when displaying the displacement.
.. code:: python
import pyvista as pv
# Get the 4th mode shape
full_mode_shape = v[:, 3] # x, y, z displacement for each node
# reshape and compute the normalized displacement
disp = full_mode_shape.reshape((-1, 3))
n = (disp*disp).sum(1)**0.5
n /= n.max() # normalize
# load an archive file and create a vtk unstructured grid
archive = pyansys.Archive(pyansys.examples.hexarchivefile)
grid = archive.parse_vtk()
# plot the normalized displacement
# grid.plot(scalars=n)
# Fancy plot the displacement
plobj = pv.Plotter()
# add the nominal mesh
plobj.add_mesh(grid, style='wireframe')
# copy the mesh and displace it
new_grid = grid.copy()
new_grid.points += disp/80
plobj.add_mesh(new_grid, scalars=n, stitle='Normalized\nDisplacement',
flipscalars=True)
    plobj.add_text('Cantilever Beam 4th Mode Shape at {:.4f}'.format(f[3]),
fontsize=30)
plobj.plot()
.. image:: ./images/solved_km.png
This example is built into ``pyansys`` and can be run from ``examples.solve_km()``.
FullFile Object Methods
-----------------------
.. autoclass:: pyansys.full.FullFile
:members:
==========================================================
Timer - ``kombu.asynchronous.timer``
==========================================================
.. contents::
:local:
.. currentmodule:: kombu.asynchronous.timer
.. automodule:: kombu.asynchronous.timer
:members:
:undoc-members:
===========
Text Search
===========
Since version 2.4, MongoDB supports searching documents with text indexes.
Defining a Document with text index
===================================
Use the *$* prefix to set a text index; see the declaration::
class News(Document):
title = StringField()
content = StringField()
is_active = BooleanField()
meta = {'indexes': [
{'fields': ['$title', "$content"],
'default_language': 'english',
'weights': {'title': 10, 'content': 2}
}
]}
Querying
========
Saving a document::
News(title="Using mongodb text search",
content="Testing text search").save()
News(title="MongoEngine 0.9 released",
content="Various improvements").save()
Next, start a text search using :attr:`QuerySet.search_text` method::
document = News.objects.search_text('testing').first()
document.title # may be: "Using mongodb text search"
document = News.objects.search_text('released').first()
document.title # may be: "MongoEngine 0.9 released"
Ordering by text score
======================
::
objects = News.objects.search_text('mongo').order_by('$text_score')
.. index:: h5py
.. _Example-H5py:
==============================
Python Examples using ``h5py``
==============================
One way to gain a quick familiarity with NeXus is to start working with some data. For at least the
first few examples in this section, we have a simple two-column set of 1-D data, collected as part of a
series of alignment scans by the APS USAXS instrument during the time it was stationed at
beam line 32ID. We will show how to write this
data using the Python language and the ``h5py`` package [#]_
(:index:`using <h5py>` ``h5py`` calls directly rather than using the NeXus NAPI). The
actual data to be written was extracted (elsewhere) from a ``spec`` [#]_ data file
and read as a text block from a file by the Python source code.
Our examples will start with the simplest case and add only mild complexity with each new case
since these examples are meant for those who are unfamiliar with NeXus.
.. [#] *h5py*: https://www.h5py.org/
.. [#] *SPEC*: http://certif.com/spec.html
The data shown plotted in the next figure will be written to the NeXus HDF5 file
using only two NeXus base classes, ``NXentry`` and ``NXdata``, in the first example
and then minor variations on this structure in the next two examples. The
data model is identical to the one in the :ref:`Introduction <fig.simple-example>`
chapter except that the names will be different, as shown below:
.. compound::
.. figure:: ../../img/Simple.png
:width: 60%
:alt: simple data structure
data structure, (from Introduction)
.. rubric:: our h5py example
.. literalinclude:: data-model.txt
:tab-width: 4
:linenos:
:language: text
.. _Example-H5py-Plot:
.. figure:: s00008.png
:alt: Example-H5py-Plot
:width: 80%
plot of our *mr_scan*
.. rubric:: two-column data for our *mr_scan*
.. literalinclude:: input.dat
:tab-width: 4
:linenos:
:language: text
Writing the simplest data using ``h5py``
########################################
These two examples show how to write the simplest data (above).
One example writes the data directly to the :ref:`NXdata` group
while the other example writes the data to ``NXinstrument/NXdetector/data``
and then creates a soft link to that data in ``NXdata``.
.. toctree::
:maxdepth: 1
writer_1_3
writer_2_1
.. _Example-H5py-complete:
Complete ``h5py`` example writing and reading a NeXus data file
###############################################################
.. _Example-H5py-Writing:
Writing the HDF5 file using **h5py**
====================================
In the main code section of :ref:`BasicWriter.py <Example-H5py-BasicWriter>`,
a current time stamp
is written in the format of *ISO 8601* (``yyyy-mm-ddTHH:MM:SS``).
For simplicity of this code example, we use a text string for the time, rather than
computing it directly from Python support library calls. It is easier this way to
see the exact type of string formatting for the time. When using the Python
``datetime`` package, one way to write the time stamp is:
.. code-block:: python
:linenos:
timestamp = "T".join( str( datetime.datetime.now() ).split() )
.. 2016-02-16,PRJ:
ISO8601 now allows the "T" to be replaced by " " which is more readable
We won't change now. Shows a pedantic case, for sure.
The data (``mr`` is similar to "two_theta" and
``I00`` is similar to "counts") is collated into two Python lists. We use the
**numpy** package to read the file and parse the two-column format.
The new HDF5 file is opened (and created if not already existing) for writing,
setting common NeXus attributes in the same command from our support library.
Proper HDF5+NeXus groups are created for ``/entry:NXentry/mr_scan:NXdata``.
Since we are not using the NAPI, our
support library must create and set the ``NX_class`` attribute on each group.
.. note:: We want to create the desired structure of
``/entry:NXentry/mr_scan:NXdata/``.
#. First, our support library calls
``f = h5py.File()``
to create the file and root level NeXus structure.
#. Then, it calls
``nxentry = f.create_group("entry")``
to create the ``NXentry`` group called
``entry`` at the root level.
#. Then, it calls
``nxdata = nxentry.create_group("mr_scan")``
to create the ``NXentry`` group called
``entry`` as a child of the ``NXentry`` group.
Next, we create a dataset called ``title`` to hold a title string that can
appear on the default plot.
Next, we create datasets for ``mr`` and ``I00`` using our support library.
The data type of each, as represented in ``numpy``, will be recognized by
``h5py`` and automatically converted to the proper HDF5 type in the file.
A Python dictionary of attributes is given, specifying the engineering units and other
values needed by NeXus to provide a default plot of this data. By setting ``signal="I00"``
as an attribute on the group, NeXus recognizes ``I00`` as the default
*y* axis for the plot. The ``axes="mr"`` attribute on the :ref:`NXdata`
group connects the dataset to be used as the *x* axis.
Finally, we *must* remember to call ``f.close()`` or we might
corrupt the file when the program quits.
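Condensed to its essentials, the writing sequence described above looks
roughly like this (the file name and the short placeholder arrays are invented
for illustration; the full program below reads the real two-column data):

```python
import h5py
import numpy as np

mr = np.array([17.92608, 17.92591, 17.92575])   # placeholder angles (degrees)
i00 = np.array([1037, 1318, 1704])              # placeholder counts

with h5py.File("prj_test.nxs", "w") as f:       # h5py closes the file for us
    f.attrs["default"] = "entry"                # points at default NXentry
    nxentry = f.create_group("entry")
    nxentry.attrs["NX_class"] = "NXentry"
    nxentry.attrs["default"] = "mr_scan"        # points at default NXdata
    nxentry.create_dataset("title", data="1-D scan of I00 v. mr")
    nxdata = nxentry.create_group("mr_scan")
    nxdata.attrs["NX_class"] = "NXdata"
    nxdata.attrs["signal"] = "I00"              # default y axis
    nxdata.attrs["axes"] = "mr"                 # default x axis
    ds = nxdata.create_dataset("mr", data=mr)
    ds.attrs["units"] = "degrees"
    ds = nxdata.create_dataset("I00", data=i00)
    ds.attrs["units"] = "counts"
```

Using ``h5py.File`` as a context manager handles the close-on-exit concern
mentioned above automatically.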
.. compound::
.. rubric:: *BasicWriter.py*: Write a NeXus HDF5 file using Python with h5py
.. _Example-H5py-BasicWriter:
.. literalinclude:: BasicWriter.py
:tab-width: 4
:linenos:
:language: python
.. _Example-H5py-Reading:
Reading the HDF5 file using **h5py**
====================================
The file reader, :ref:`BasicReader.py <Example-H5py-Reader>`,
is very simple since the bulk of the work is done by ``h5py``.
Our code opens the HDF5 we wrote above,
prints the HDF5 attributes from the file, reads the two datasets,
and then prints them out as columns. As simple as that.
Of course, real code might add some error-handling and
extracting other useful stuff from the file.
.. note:: See that we identified each of the two datasets using HDF5 absolute path references
(just using the group and dataset names). Also, while coding this example, we were reminded
that HDF5 is sensitive to upper or lowercase. That is, ``I00`` is not the same is
``i00``.
.. compound::
.. rubric:: *BasicReader.py*: Read a NeXus HDF5 file using Python with h5py
.. _Example-H5py-Reader:
.. literalinclude:: BasicReader.py
:tab-width: 4
:linenos:
:language: python
Output from ``BasicReader.py`` is shown next.
.. compound::
.. rubric:: Output from ``BasicReader.py``
.. literalinclude:: output.txt
:tab-width: 4
:linenos:
:language: text
.. _finding.default.data.python:
Finding the default plottable data
----------------------------------
Let's make a new reader that follows the chain of
attributes (``@default``, ``@signal``, and ``@axes``)
to find the default plottable data. We'll use the
same data file as the previous example.
Our demo here assumes one-dimensional data.
(For higher dimensionality data,
we'll need more complexity when handling the
``@axes`` attribute and we'll to check the
field sizes. See section :ref:`Find-Plottable-Data`,
subsection :ref:`Find-Plottable-Data-v3`, for the details.)
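The chain is short enough to sketch directly. The snippet below first writes a
minimal file carrying the three attributes (all names here are illustrative),
then walks ``@default`` --> ``@default`` --> ``@signal``/``@axes`` much as
``reader_attributes_trail.py`` does for the 1-D case:

```python
import h5py
import numpy as np

# Write a minimal file carrying the default-plot attribute chain
with h5py.File("attr_demo.nxs", "w") as f:
    f.attrs["default"] = "entry"
    entry = f.create_group("entry")
    entry.attrs["NX_class"] = "NXentry"
    entry.attrs["default"] = "mr_scan"
    data = entry.create_group("mr_scan")
    data.attrs["NX_class"] = "NXdata"
    data.attrs["signal"] = "I00"
    data.attrs["axes"] = "mr"
    data.create_dataset("mr", data=np.linspace(17.926, 17.921, 31))
    data.create_dataset("I00", data=np.arange(31))

# Follow @default -> @default -> @signal / @axes (1-D data assumed)
with h5py.File("attr_demo.nxs", "r") as f:
    nxentry = f[f.attrs["default"]]
    nxdata = nxentry[nxentry.attrs["default"]]
    y = nxdata[nxdata.attrs["signal"]][()]
    x = nxdata[nxdata.attrs["axes"]][()]
print(len(x), len(y))  # 31 31
```

For multi-dimensional data the ``@axes`` attribute becomes a list and the
field shapes must be checked, as described in the referenced section.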
.. compound::
.. rubric:: *reader_attributes_trail.py*: Read a NeXus HDF5 file using Python with h5py
.. _Example-H5py-Reader_attributes_trail:
.. literalinclude:: reader_attributes_trail.py
:tab-width: 4
:linenos:
:language: python
Output from ``reader_attributes_trail.py`` is shown next.
.. compound::
.. rubric:: Output from ``reader_attributes_trail.py``
.. literalinclude:: reader_attributes_trail.txt
:tab-width: 4
:linenos:
:language: text
.. _Example-H5py-Plotting:
Plotting the HDF5 file
======================
.. index::
NeXpy
Now that we are certain our file conforms to the NeXus
standard, let's plot it using the ``NeXpy`` [#]_
client tool. To help label the plot, we added the
``long_name`` attributes to each of our datasets.
We also added metadata to the root level of our HDF5 file
similar to that written by the NAPI. It seemed to be a useful addition.
Compare this with :ref:`Example-H5py-Plot`
and note that the horizontal axis of this plot is mirrored from that above.
This is because the data is stored in the file in descending
``mr`` order and ``NeXpy`` has plotted
it that way (in order of appearance) by default.
.. [#] *NeXpy*: http://nexpy.github.io/nexpy/
.. compound::
.. _fig-Example-H5py-nexpy-plot:
.. figure:: nexpy.png
:alt: fig-Example-H5py-nexpy-plot
:width: 80%
plot of our *mr_scan* using NeXpy
.. _h5py-example-external-links:
Links to Data in External HDF5 Files
####################################
HDF5 files may contain links to data (or groups) in other files.
This can be used to refer to data in existing HDF5 files
and thereby create NeXus-compliant data files without copying the data.
Here, we show such an example,
using the same ``counts`` vs. ``two_theta`` data from the examples above.
We use *HDF5 external file links* with NeXus data files.
::
f[local_addr] = h5py.ExternalLink(external_file_name, external_addr)
where ``f`` is an open ``h5py.File()`` object in which we will create the new link,
``local_addr`` is an HDF5 path address, ``external_file_name`` is the name
(relative or absolute) of an existing HDF5 file, and ``external_addr`` is the
HDF5 path address of the existing data in the ``external_file_name`` to be linked.
file: external_angles.hdf5
==========================
Take, for example, the structure of :download:`external_angles.hdf5`,
a simple HDF5 data file that contains just the ``two_theta``
angles in an HDF5 dataset at the root level of the file.
Although this is a valid HDF5 data file, it is not a valid NeXus data file:
.. code-block:: text
:linenos:
angles:float64[31] = [17.926079999999999, '...', 17.92108]
@units = degrees
file: external_counts.hdf5
==========================
The data in the file ``external_angles.hdf5`` might be referenced from
another HDF5 file (such as :download:`external_counts.hdf5`)
by an HDF5 external link. [#]_
Here is an example of the structure:
.. code-block:: text
:linenos:
entry:NXentry
instrument:NXinstrument
detector:NXdetector
counts:NX_INT32[31] = [1037, '...', 1321]
@units = counts
two_theta --> file="external_angles.hdf5", path="/angles"
.. [#] see these URLs for further guidance on HDF5 external links:
https://portal.hdfgroup.org/display/HDF5/H5L_CREATE_EXTERNAL,
http://docs.h5py.org/en/stable/high/group.html#external-links
file: external_master.hdf5
==========================
A valid NeXus data file could be created that refers to the data in these files
without making a copy of the data files themselves.
.. note::
It is necessary for all
these files to be located together in the same directory for the HDF5 external file
links to work properly.
To be a valid NeXus file, it must contain a :ref:`NXentry` group.
For the files above, it is simple to make a master file that links to
the data we desire, from structure that we create. We then add the
group attributes that describe the default plottable data:
.. code-block:: text
data:NXdata
@signal = counts
@axes = two_theta
@two_theta_indices = 0
Here is (the basic structure of) :download:`external_master.hdf5`, an example:
.. code-block:: text
:linenos:
entry:NXentry
@default = data
instrument --> file="external_counts.hdf5", path="/entry/instrument"
data:NXdata
@signal = counts
@axes = two_theta
@two_theta_indices = 0
counts --> file="external_counts.hdf5", path="/entry/instrument/detector/counts"
two_theta --> file="external_angles.hdf5", path="/angles"
source code: externalExample.py
===============================
Here is the complete code of a Python program, using ``h5py``
to write a NeXus-compliant HDF5 file with links to data in other HDF5 files.
.. compound::
.. rubric:: *externalExample.py*: Write using HDF5 external links
.. _Example-H5py-externalExample:
.. literalinclude:: externalExample.py
:tab-width: 4
:linenos:
:language: python
downloads
=========
The Python code and files related to this section may be downloaded from the following table.
=========================================== =============================================
file description
=========================================== =============================================
:download:`input.dat` 2-column ASCII data used in this section
:download:`BasicReader.py` python code to read example *prj_test.nexus.hdf5*
:download:`BasicWriter.py` python code to write example *prj_test.nexus.hdf5*
:download:`external_angles_h5dump.txt` *h5dump* analysis of *external_angles.hdf5*
:download:`external_angles.hdf5` HDF5 file written by *externalExample*
:download:`external_angles_structure.txt` *punx tree* analysis of *external_angles.hdf5*
:download:`external_counts_h5dump.txt` *h5dump* analysis of *external_counts.hdf5*
:download:`external_counts.hdf5` HDF5 file written by *externalExample*
:download:`external_counts_structure.txt` *punx tree* analysis of *external_counts.hdf5*
:download:`externalExample.py` python code to write external linking examples
:download:`external_master_h5dump.txt` *h5dump* analysis of *external_master.hdf5*
:download:`external_master.hdf5` NeXus file written by *externalExample*
:download:`external_master_structure.txt` *punx tree* analysis of *external_master.hdf5*
:download:`prj_test.nexus_h5dump.txt` *h5dump* analysis of the NeXus file
:download:`prj_test.nexus.hdf5` NeXus file written by *BasicWriter*
:download:`prj_test.nexus_structure.txt` *punx tree* analysis of the NeXus file
=========================================== =============================================
OWIN bootstrapper
==================
In OWIN-based web application frameworks, such as `ASP.NET MVC <http://www.asp.net/mvc>`_, `FubuMVC <http://fubu-project.org>`_, `Nancy <http://nancyfx.org>`_, `ServiceStack <https://servicestack.net>`_ and many others, you can use the :doc:`OWIN bootstrapper <owin-bootstrapper>` methods to simplify the configuration task.
Adding OWIN Startup class
--------------------------
.. note::
If your project already has an OWIN Startup class (for example, if you have SignalR installed), skip to the next section.
`OWIN Startup class <http://www.asp.net/aspnet/overview/owin-and-katana/owin-startup-class-detection>`_ is intended to keep web application bootstrap logic in a single place. In Visual Studio 2013 you can add it by right clicking on the project and choosing the *Add / OWIN Startup Class* menu item.
If you have Visual Studio 2012 or earlier, just create a regular class in the root folder of your application, name it ``Startup`` and place the following contents:
.. code-block:: c#
using HangFire;
using HangFire.SqlServer;
using Microsoft.Owin;
using Owin;
[assembly: OwinStartup(typeof(MyWebApplication.Startup))]
namespace MyWebApplication
{
public class Startup
{
public void Configuration(IAppBuilder app)
{
/* configuration goes here */
}
}
}
Configuring Hangfire
---------------------
Hangfire provides an extension method for the ``IAppBuilder`` interface called ``UseHangfire``, which is the entry point to the configuration. :doc:`Storage <../storage-configuration/index>`, :doc:`Job activator <../background-methods/using-ioc-containers>`, :doc:`Authorization filters <../deployment-to-production/configuring-authorization>`, :doc:`Job filters <../extensibility/using-job-filters>` can be configured here; check the available methods through IntelliSense. Job storage is the only required configuration option; all others are optional.
.. note::
Prefer to use the ``UseServer`` method over manual ``BackgroundJobServer`` instantiation to process background jobs inside a web application. The method registers a handler of the application's shutdown event to perform the :doc:`graceful shutdown <../background-methods/using-cancellation-tokens>` for your jobs.
.. code-block:: c#
public void Configuration(IAppBuilder app)
{
app.UseHangfire(config =>
{
// Basic setup required to process background jobs.
config.UseSqlServerStorage("<your connection string or its name>");
config.UseServer();
});
}
The order of ``Use*`` methods inside the configuration action is not important; all configuration logic is performed after all calls to these methods. The ``UseHangfire`` method also registers the *Hangfire Dashboard* middleware at the default URL ``http://<your-app>/hangfire`` (but you can change it).
When does x^log(y) = y^log(x)?
##############################
:date: 2013-03-03 06:49
:author: asmeurer
:category: Uncategorized
:slug: when-does-xlogy-ylogx
*In this blog post, when I write $latex \\log(x)$, I mean the natural
logarithm, or log base $latex e$, i.e., $latex \\ln(x)$.*
A discussion on a \ `pull request`_ got me thinking about this question:
what are the solutions to the complex equation $latex x^{\\log{(y)}} =
y^{\\log(x)}$? At the outset, they look like different expressions.
But clearly there are some solutions. For example, if $latex x = y$, then
obviously the two expressions will be the same. We probably should
exclude $latex x = y = 0$, though note that even if $latex 0^{\\log(0)}$
is well-defined (probably if it is it is either 0 or complex $latex
\\infty$), it will be the same well-defined value. But for the remainder
of this blog post, I'll assume that $latex x$ and $latex y$ are nonzero.
Now, observe that if we apply $latex \\log$ to both sides of the
equation, we get $latex \\log{\\left(x^{\\log(y)}\\right )} = \\log
{\\left (y^{\\log(x)}\\right )}$. Now, supposing that we can apply the
famous logarithm exponent rule, we would get $latex \\log(x)\\log(y) =
\\log(y)\\log(x)$, which means that if additionally $latex \\log$ is
one-to-one, we would have that the original expressions must be equal.
The second question, that of `injectivity`_, is easier to answer than
the first, so I'll address it first. Note that the complex exponential
is not one-to-one, because for example $latex e^0 = e^{2\\pi i} = 1$.
But we still define the complex logarithm as the "inverse" of the
complex exponential. What this really means is that the complex
logarithm is strictly speaking not a function, because it is not
well-defined. Recall that the definition of one-to-one means that $latex
f(x) = f(y)$ implies $latex x = y$, and that the definition of
well-defined is that $latex x = y$ implies $latex f(x) = f(y)$. It is
clear to see here that $latex f$ being one-to-one is the same as $latex
f^{-1}$ being well-defined and visa-versa ($latex f^{-1}$ here is the
same loose definition of an inverse as saying that the complex logarithm
is the inverse of the complex exponential).
So note that the complex logarithm is not well-defined exactly because
the complex exponential is not one-to-one. We of course fix this
problem by making it well-defined, i.e., it normally is multivalued, but
we pick a single value consistently (i.e., we pick a `branch`_), so that
it is well-defined. For the remainder of this blog post, I will assume
the standard choice of branch cut for the complex logarithm, i.e., the
branch cut is along the negative axis, and we choose the branch where,
for $latex x > 0$, $latex \\log(x)$ is real and $latex \\log(-x) =
\\log(x) + i\\pi$.
My point here is that we automatically know that the complex logarithm
is one-to-one because we know that the complex exponential is
well-defined.
So our question boils down to, when does the identity $latex
\\log{\\left (z^a\\right)} = a \\log(z)$ hold? In SymPy, this identity
is only applied by ``expand_log()`` or ``logcombine()`` when $latex a$
is real and $latex z$ is positive, so let us assume that we know that it
holds under those conditions. Note that it also holds for some other
values too. For example, by our definition $latex \\log{\\left
(e^{i\\pi}\\right)} = \\log(-1) = \\log(1) + i\\pi = i\\pi =
i\\pi\\log(e)$. For our example, this means that $latex x = e$, $latex
y = -1$ is a non-trivial solution (non-trivial meaning $latex x \\neq
y$). Actually, the way that the complex logarithm being the "inverse"
of the complex exponential works is that $latex e^{\\log(x)} = x$ for
all $latex x$ (on the other hand $latex \\log{\\left(e^x\\right)} \\neq
x$ in general), so that if $latex x = e$, then $latex x^{\\log(y)} =
e^{\\log(y)} = y$ and $latex y^{\\log(x)} = y^{\\log(e)} = y^1 = y$. In
other words, $latex x = e$ is always a solution, for any $latex y\\,
(\\neq 0)$ (and similarly $latex y = e$ for all $latex x$). In terms of
our question of when $latex \\log{\\left(z^a\\right)} = a\\log(z)$, this
just says that this always true for $latex a = \\log(e) = 1$, regardless
of $latex z$, which is obvious. We can also notice that this identity
always holds for $latex a = 0$, regardless of $latex z$. In terms of our
original equation, this means that $latex x = e^0 = 1$ is a solution for
all $latex y$ (and as before, $latex y = 1$ for all $latex x$).
Note that $latex z > 0$ and $latex a$ real corresponds to $latex x, y >
0$ and $latex \\log(x), \\log(y)$ real, respectively, (which are the
same condition). So we have so far that the following are solutions to
$latex x^{\\log(y)} = y^{\\log(x)}$:
- $latex x, y > 0$
- $latex x = y$
- $latex x = e$, $latex y$ arbitrary
- $latex y = e$, $latex x$ arbitrary
- $latex x = 1$, $latex y$ arbitrary
- $latex y = 1$, $latex x$ arbitrary
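These solution families are easy to sanity-check numerically. Here is a quick script using Python's ``cmath`` module, which follows the same principal-branch conventions described above (this check covers only the families just listed):

```python
import cmath

def sides(x, y):
    """Return (x**log(y), y**log(x)), evaluated with the principal branch."""
    x, y = complex(x), complex(y)
    return x ** cmath.log(y), y ** cmath.log(x)

# One sample point from each solution family listed above.
cases = [
    (2.0, 5.0),          # x, y > 0
    (-3 + 4j, -3 + 4j),  # x = y
    (cmath.e, -1.0),     # x = e, y arbitrary
    (1.0, -2 + 1j),      # x = 1, y arbitrary
]
for x, y in cases:
    lhs, rhs = sides(x, y)
    print(x, y, abs(lhs - rhs) < 1e-9)
```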
Now let's look at some cases where $latex \\log{\\left (z^a\\right)}
\\neq a\\log(z)$. If $latex z < 0$ and $latex a$ is a nonzero even
integer, then $latex z^a > 0$ so $latex \\log{\\left (z^a \\right )} =
\\log{\\left (\\left (-z\\right )^a \\right )} = a\\log(-z)$, whereas
$latex a\\log(z) = a(\\log(-z) + i\\pi)$, which are different by our
assumption that $latex a \\neq 0$. If $latex a$ is an odd integer not
equal to 1, then $latex z^a < 0$, so $latex \\log{\\left (z^a \\right )}
= \\log{\\left (-z^a \\right )} + i\\pi
= \\log{\\left (\\left (-z\\right )^{a} \\right )} + i\\pi
= a\\log(-z) + i\\pi$, whereas
$latex a\\log(z) = a(\\log(-z) + i\\pi)$ again, which is not the same
because $latex a \\neq 1$. This means that if we let $latex x < 0$ and
$latex y = e^a$, where $latex a \\neq 0, 1$, we get a non-solution (and
the same if we swap $latex x$ and $latex y$).
This is as far as I got tonight. Wordpress is arbitrarily not rendering
that LaTeX for no good reason. That and the very ugly LaTeX images is
pissing me off (why wordpress.com hasn't switched to MathJaX yet is
beyond me). The next time I get some free time, I am going to seriously
consider switching my blog to something hosted on GitHub, probably using
the IPython notebook. I welcome any hints people can give me on that,
especially concerning migrating pages from this blog.
Here is some work on finding the rest of the solutions: the general
definition of $latex \\log(x)$ is $latex \\log(\|x\|) + i\\arg(x)$,
where $latex \\arg(x)$ is chosen in $latex (-\\pi, \\pi]$. Therefore, if
$latex \\log{\\left(z^a\\right )} = a\\log(z)$, we must have $latex
\\arg(z^a) = a\\arg(z)$. I believe a description of all such complex
$latex z$ and $latex a$ will give all solutions $latex x = z$, $latex y
= e^a$ (and $latex y = z$, $latex x = e^a$) to $latex x^{\\log(y)} =
y^{\\log(x)}$. I need to verify that, though, and I also need to think
about how to describe such $latex z$ and $latex a$. I will (hopefully)
continue this post later, either by editing this one or writing a new
one (depending on how much more I come up with).
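(One standard fact that should help pin this down, assuming the
principal-branch definition $latex z^a = e^{a\\log(z)}$: since $latex
\\log{\\left (e^w\\right )} = w$ exactly when $latex \\mathrm{Im}(w) \\in
(-\\pi, \\pi]$, we get $latex \\log{\\left (z^a\\right )} = a\\log(z)$ if
and only if $latex \\mathrm{Im}{\\left (a\\log(z)\\right )} \\in (-\\pi,
\\pi]$. The cases worked out above, $latex a = 0$, $latex a = 1$, $latex
a$ real with $latex z > 0$, and the integer counterexamples with $latex
z < 0$, are all consistent with this criterion.)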
Any comments to this post are welcome. I know you can't preview
comments, but if you want to use math, just write it as ``$latex math$``
(like ``$latex \log(x)$`` for $latex \\log(x)$). If you mess something
up, I'll edit your comment and fix it.
.. _pull request: https://github.com/sympy/sympy/pull/1845
.. _injectivity: http://en.wikipedia.org/wiki/Injective_function
.. _branch: http://en.wikipedia.org/wiki/Branch_point#Complex_logarithm
| 56.5 | 72 | 0.680095 |
d2315fdecdee6500c467d3e7c013d90ae4b177e0 | 552 | rst | reStructuredText | doc/source/admin/index.rst | pwickham/kolla | 8e0b754361e4cc80a4c570fe3184bcd8c15f120a | [
"Apache-2.0"
] | 957 | 2015-09-17T09:07:56.000Z | 2022-03-26T14:58:57.000Z | doc/source/admin/index.rst | pwickham/kolla | 8e0b754361e4cc80a4c570fe3184bcd8c15f120a | [
"Apache-2.0"
] | 33 | 2015-12-06T01:14:04.000Z | 2020-02-13T05:37:47.000Z | doc/source/admin/index.rst | pwickham/kolla | 8e0b754361e4cc80a4c570fe3184bcd8c15f120a | [
"Apache-2.0"
] | 648 | 2015-09-14T08:09:51.000Z | 2022-03-21T23:19:30.000Z | ===================
Administrator Guide
===================
Building Container Images
-------------------------
If you are a system administrator running Kolla, this section contains
information that should help you understand how to build container images
or customize image builds using ``--template-override``.
.. toctree::
:maxdepth: 2
:glob:
image-building
template-override/*
Kolla Images API
----------------
Take advantage of the Kolla API to configure containers at runtime.
.. toctree::
:maxdepth: 2
:glob:
kolla_api
========================================
Algorithms
========================================
.. contents:: Contents
Courses
========================================
* `6.851: Advanced Data Structures <http://courses.csail.mit.edu/6.851/>`_
* `CS1113 - Foundations of Computer Science II <http://osullivan.ucc.ie/teaching/cs1113/lectures.html>`_
Books
========================================
* `Introduction to Algorithms <https://www.amazon.com/Introduction-Algorithms-3rd-MIT-Press/dp/0262033844>`_
* `Algorithms Unlocked <https://www.amazon.com/Algorithms-Unlocked-Press-Thomas-Cormen/dp/0262518805>`_
* `Algorithms <https://www.amazon.com/Algorithms-4th-Robert-Sedgewick/dp/032157351X>`_
Websites
========================================
* `Programmer Competency Matrix <http://www.starling-software.com/employment/programmer-competency-matrix.html>`_
* `GeeksforGeeks <http://www.geeksforgeeks.org/>`_
* `演算法筆記 <http://www.csie.ntnu.edu.tw/~u91029/index.html>`_
* `Notes of EECS 281: Data Structures and Algorithms <https://www.gitbook.com/book/stevenschmatz/data-structures-and-algorithms/details>`_
Install developing version
--------------------------
If you want to use an **unofficial version** of the ``pibooth`` application, you need to work from
a clone of this ``git`` repository. Replace step 7 of the `Install <https://github.com/werdeil/pibooth/blob/master/README.rst#Install>`_ procedure with the
following actions:
1. Clone from GitHub ::
$ git clone https://github.com/werdeil/pibooth.git
2. Go into the cloned directory ::
$ cd pibooth
3. Install ``pibooth`` in editable mode ::
$ sudo pip3 install -e .[dslr,printer]
4. Start the application exactly as when installed from PyPI. All modifications performed
in the cloned repository are taken into account when the application starts.
Developing rules
----------------
Here is a small user guide and rules applied to develop ``pibooth``. They
will be updated as we go along.
Naming
^^^^^^
1. **Conventions**
The ``PEP8`` naming rules are applied.
2. **Capture / Picture / Image**
In the code and the configuration file:
- ``capture`` is used for variables related to a raw image from the camera.
- ``picture`` is used for variables related to the final image, which is
a concatenation of capture(s) and text(s).
- ``image`` shall be used for pictograms displayed in Pygame view or
intermediate PIL/OpenCv objects.
==========================
Implementing an Arbitrator
==========================
.. warning::
Smart contracts in this tutorial are intended for educational purposes, not for production. Beware of using them on the main network.
When developing arbitrator contracts we need to:
* Implement the functions ``createDispute`` and ``appeal``. Don't forget to store the arbitrated contract and the disputeID (which should be unique).
* Implement the functions for cost display (``arbitrationCost`` and ``appealCost``).
* Allow enforcing rulings. For this, a function must execute ``arbitrable.rule(disputeID, ruling)``.
To demonstrate how to use the standard, we will implement a very simple arbitrator where a single address gives rulings and there aren't any appeals.
Let's start by implementing cost functions:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:lines: 1-5,17-25
We set the arbitration fee to ``0.1 ether`` and the appeal fee to an astronomical amount which can't be afforded.
So, in practice, we have disabled appeals for simplicity. We also made the costs constant, again for the sake of keeping this tutorial simple.
Next, we need a data structure to keep track of disputes:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:lines: 1-5,8-25
:emphasize-lines: 7-14
Each dispute belongs to an ``Arbitrable`` contract, so we have an ``arbitrated`` field for it.
Each dispute has a ruling stored in the ``ruling`` field: for example, Party A wins (represented by ``ruling = 1``) or Party B wins (represented by ``ruling = 2``); recall that ``ruling = 0`` is reserved for "refused to arbitrate".
We also store the number of ruling options in ``choices``, to be able to avoid undefined rulings in the proxy function that executes ``arbitrable.rule(disputeID, ruling)``.
Finally, each dispute has a status, which we store in the ``status`` field.
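Putting the description above together, the bookkeeping can be sketched roughly as follows (an outline consistent with the emphasized lines of ``SimpleCentralizedArbitrator.sol``; field names follow the prose above, so treat the included listing as the authoritative version):

.. code-block:: javascript

    enum DisputeStatus {Waiting, Appealable, Solved}

    struct Dispute {
        Arbitrable arbitrated;  // the Arbitrable contract that raised the dispute
        uint choices;           // number of ruling options
        uint ruling;            // 0 is reserved for "refused to arbitrate"
        DisputeStatus status;   // Waiting, Appealable or Solved
    }

    Dispute[] public disputes;  // a disputeID is an index into this array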
Next, we can implement the function for creating disputes:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:lines: 1-5,8-37
:emphasize-lines: 24-35
Note that ``createDispute`` function should be called by an *arbitrable*.
We require the caller to pay at least ``arbitrationCost(_extraData)``. We could send back the excess payment, but we omitted it for the sake of simplicity.
Then, we create the dispute by pushing a new element to the array: ``disputes.push( ... )``.
The ``push`` function returns the resulting size of the array, so we can use ``disputes.push( ... ) - 1`` as the ``disputeID``, starting from zero.
Finally, we emit ``DisputeCreation`` as required in the standard.
We also need to implement getters for ``status`` and ``ruling``:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:lines: 1-5,8-50
:emphasize-lines: 36-48
Finally, we need a proxy function to call the ``rule`` function of the ``Arbitrable`` contract. In this simple ``Arbitrator`` we will let one address, the creator of the contract, give rulings. So let's start by storing the contract creator's address:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:lines: 1-45
:emphasize-lines: 7
Then the proxy function:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:lines: 1-60
:emphasize-lines: 46-
First, we check the caller address; only the ``owner`` should be allowed to execute this. Then we do sanity checks: the ruling given by the arbitrator should be chosen from among the ``choices``, and it should not be possible to ``rule`` on an already solved dispute.
Afterwards, we update ``ruling`` and ``status`` values of the dispute. Then we pay arbitration fee to the arbitrator (``owner``). Finally, we call ``rule`` function of the ``arbitrated`` to enforce the ruling.
Lastly, appeal functions:
.. literalinclude:: ../contracts/examples/SimpleCentralizedArbitrator.sol
:language: javascript
:emphasize-lines: 61-
Just a dummy implementation to conform to the interface, as we don't actually implement appeal functionality.
That's it, we have a working, very simple centralized arbitrator!
dockcross image for ARMv7-A-LTS
===============================
Toolchain configured for ARMv7-A used in Beaglebone Black single board PC with TI SoC AM3358 on board, Cortex-A8. Code compiled with dockcross armv7 image crashes on Beaglebone, see https://github.com/dockcross/dockcross/issues/290
This is the LTS version, i.e. with glibc version 2.28.
Difference with dockcross armv7 toolchain: ARCH_CPU="cortex-a8", ARCH_FPU="neon".
Only NEON is enabled, though TI docs says it is possible to use both VFPv3 and NEON http://processors.wiki.ti.com/index.php/Using_NEON_and_VFPv3_on_Cortex-A8
I do not know how to configure CrossTool-NG for VFPv3+NEON. Feel you free to submit a fix)
| 48.928571 | 231 | 0.759124 |
2e003a35a7e8556a8306873a957b345c5a5cb60e | 8,031 | rst | reStructuredText | docs/source/capability_requirements.rst | a20351766/udo1.3 | 503266796a042497d4fd91590aa6a7b27dc4be43 | [
"Apache-2.0"
] | null | null | null | docs/source/capability_requirements.rst | a20351766/udo1.3 | 503266796a042497d4fd91590aa6a7b27dc4be43 | [
"Apache-2.0"
] | null | null | null | docs/source/capability_requirements.rst | a20351766/udo1.3 | 503266796a042497d4fd91590aa6a7b27dc4be43 | [
"Apache-2.0"
] | null | null | null | Capability Requirements
-----------------------
Because Udo is a distributed system that will usually involve multiple
organizations (sometimes in different countries or even continents), it is
possible (and typical) that many different versions of Udo code will exist in
the network. Nevertheless, it’s vital that networks process transactions in the
same way so that everyone has the same view of the current network state.
This means that every network -- and every channel within that network – must
define a set of what we call “capabilities” to be able to participate in
processing transactions. For example, Udo v1.1 introduces new MSP role types
of “Peer” and “Client”. However, if a v1.0 peer does not understand these new
role types, it will not be able to appropriately evaluate an endorsement policy
that references them. This means that before the new role types may be used, the
network must agree to enable the v1.1 ``channel`` capability requirement,
ensuring that all peers come to the same decision.
Only binaries which support the required capabilities will be able to participate in the
channel, and newer binary versions will not enable new validation logic until the
corresponding capability is enabled. In this way, capability requirements ensure that
even with disparate builds and versions, it is not possible for the network to suffer a
state fork.
Defining Capability Requirements
================================
Capability requirements are defined per channel in the channel configuration (found
in the channel’s most recent configuration block). The channel configuration contains
three locations, each of which defines a capability of a different type.
+------------------+-----------------------------------+----------------------------------------------------+
| Capability Type | Canonical Path | JSON Path |
+==================+===================================+====================================================+
| Channel | /Channel/Capabilities | .channel_group.values.Capabilities |
+------------------+-----------------------------------+----------------------------------------------------+
| Orderer | /Channel/Orderer/Capabilities | .channel_group.groups.Orderer.values.Capabilities |
+------------------+-----------------------------------+----------------------------------------------------+
| Application | /Channel/Application/Capabilities | .channel_group.groups.Application.values. |
| | | Capabilities |
+------------------+-----------------------------------+----------------------------------------------------+
* **Channel:** these capabilities apply to both peers and orderers and are located in
the root ``Channel`` group.
* **Orderer:** apply to orderers only and are located in the ``Orderer`` group.
* **Application:** apply to peers only and are located in the ``Application`` group.
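To make the JSON paths in the table concrete, here is a toy illustration in Python. The configuration skeleton below is fabricated for demonstration (a real channel configuration contains much more), but the three paths match the table above and the ``capabilities`` value shape shown later in this section:

```python
import json

# Fabricated skeleton of a channel configuration rendered as JSON.
config = json.loads("""
{
  "channel_group": {
    "values": {"Capabilities": {"capabilities": {"V1_3": {}}}},
    "groups": {
      "Orderer": {"values": {"Capabilities": {"capabilities": {"V1_1": {}}}}},
      "Application": {"values": {"Capabilities": {"capabilities": {"V1_3": {}}}}}
    }
  }
}
""")

# The three locations from the table: Channel, Orderer, and Application.
channel_caps = config["channel_group"]["values"]["Capabilities"]
orderer_caps = config["channel_group"]["groups"]["Orderer"]["values"]["Capabilities"]
app_caps = config["channel_group"]["groups"]["Application"]["values"]["Capabilities"]

print(sorted(channel_caps["capabilities"]), sorted(orderer_caps["capabilities"]))
```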
The capabilities are broken into these groups in order to align with the existing
administrative structure. Updating orderer capabilities is something the ordering orgs
would manage independently of the application orgs. Similarly, updating application
capabilities is something only the application admins would manage. By splitting the
capabilities between "Orderer" and "Application", a hypothetical network could run a
v1.6 ordering service while supporting a v1.3 peer application network.
However, some capabilities cross both the ‘Application’ and ‘Orderer’ groups. As we
saw earlier, adding a new MSP role type is something both the orderer and application
admins agree to and need to recognize. The orderer must understand the meaning
of MSP roles in order to allow the transactions to pass through ordering, while
the peers must understand the roles in order to validate the transaction. These
kinds of capabilities -- which span both the application and orderer components
-- are defined in the top level "Channel" group.
.. note:: It is possible that the channel capabilities are defined to be at version
v1.3 while the orderer and application capabilities are defined to be at
v1.1 and v1.4, respectively. Enabling a capability at the "Channel"
group level does not imply that this same capability is available at the
more specific "Orderer" and "Application" group levels.
Setting Capabilities
====================
Capabilities are set as part of the channel configuration (either as part of the
initial configuration -- which we'll talk about in a moment -- or as part of a
reconfiguration).
.. note:: We have two documents that talk through different aspects of channel
reconfigurations. First, we have a tutorial that will take you through
the process of :doc:`channel_update_tutorial`. And we also have a document that
talks through :doc:`config_update` which gives an overview of the
different kinds of updates that are possible as well as a fuller look
at the signature process.
Because new channels copy the configuration of the Orderer System Channel by
default, new channels will automatically be configured to work with the orderer
and channel capabilities of the Orderer System Channel and the application
capabilities specified by the channel creation transaction. Channels that already
exist, however, must be reconfigured.
The schema for the Capabilities value is defined in the protobuf as:
.. code:: protobuf
message Capabilities {
map<string, Capability> capabilities = 1;
}
message Capability { }
As an example, rendered in JSON:
.. code:: json
{
"capabilities": {
"V1_1": {}
}
}
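To sanity-check a decoded channel configuration, the enabled capability names can be read straight out of this map. A minimal Python sketch — the sample dict mirrors the JSON above, and the config path mentioned in the comment is only illustrative of where such a fragment sits in a full decoded config:

```python
# Sketch: list which capabilities are turned on in a decoded
# Capabilities value. The sample dict mirrors the JSON above; in a
# full decoded config this fragment sits under paths such as
# .channel_group.values.Capabilities.value (illustrative only).

capabilities_value = {
    "capabilities": {
        "V1_1": {}
    }
}

def enabled_capabilities(value):
    """Return the sorted names of the enabled capabilities."""
    return sorted(value.get("capabilities", {}))

print(enabled_capabilities(capabilities_value))  # ['V1_1']
```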
Capabilities in an Initial Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the ``configtx.yaml`` file distributed in the ``config`` directory of the release
artifacts, there is a ``Capabilities`` section which enumerates the possible capabilities
for each capability type (Channel, Orderer, and Application).
The simplest way to enable capabilities is to pick a v1.1 sample profile and customize
it for your network. For example:
.. code:: yaml
SampleSingleMSPSoloV1_1:
Capabilities:
<<: *GlobalCapabilities
Orderer:
<<: *OrdererDefaults
Organizations:
- *SampleOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
SampleConsortium:
Organizations:
- *SampleOrg
Note that there is a ``Capabilities`` section defined at the root level (for the channel
capabilities), and at the Orderer level (for orderer capabilities). The sample above uses
a YAML reference to include the capabilities as defined at the bottom of the YAML.
When defining the orderer system channel there is no Application section, as those
capabilities are defined during the creation of an application channel. To define a new
channel's application capabilities at channel creation time, the application admins should
model their channel creation transaction after the ``SampleSingleMSPChannelV1_1`` profile.
.. code:: yaml
SampleSingleMSPChannelV1_1:
Consortium: SampleConsortium
Application:
Organizations:
- *SampleOrg
Capabilities:
<<: *ApplicationCapabilities
Here, the Application section has a new element ``Capabilities`` which references the
``ApplicationCapabilities`` section defined at the end of the YAML.
.. note:: The capabilities for the Channel and Orderer sections are inherited from
the definition in the ordering system channel and are automatically included
by the orderer during the process of channel creation.
.. Licensed under Creative Commons Attribution 4.0 International License
https://creativecommons.org/licenses/by/4.0/
.. _install_config:
==============================
Installation and configuration
==============================
System requirements
###################
The MNE software runs on Mac OSX and LINUX operating systems.
The hardware and software requirements are:
- Mac OSX version 10.5 (Leopard) or later.
- LINUX kernel 2.6.9 or later
- On both LINUX and Mac OSX 32-bit and 64-bit Intel platforms
are supported. PowerPC version on Mac OSX can be provided upon request.
- At least 2 GB of memory, 4 GB or more recommended.
- Disk space required for the MNE software: 80 MB
- Additional open source software on Mac OSX, see :ref:`BABDBCJE`.
Installation
############
The MNE software is distributed as a compressed tar archive
(Mac OSX and LINUX) or a Mac OSX disk image (dmg).
Download the software
=====================
Download the software package of interest. The file names
follow the convention:
``MNE-<version>-<rev>-<Operating system>-<Processor>.<ext>``
The present version number is 2.7.0. The <*rev*> field
is the SVN revision number at the time this package was created.
The <*Operating system*> field
is either Linux or MacOSX. The <*processor*> field
is either i386 or x86_64. The <*ext*> field
is 'gz' for compressed tar archive files and 'dmg' for
Mac OSX disk images.
Installing from a compressed tar archive
========================================
Go to the directory where you want the software to be installed:
``cd`` <*dir*>
Unpack the tar archive:
``tar zxvf`` <*software package*>
The name of the software directory under <*dir*> will
be the same as the package file without the .gz extension.
Installing from a Mac OSX disk image
=====================================
- Double click on the disk image file.
A window opens with the installer package ( <*name*> .pkg)
inside.
- Double click the package file. The installer starts.
- Follow the instructions in the installer.
.. note:: The software will be installed to /Applications/ <*name*> by default. If you want another location, select Choose Folder... on the Select a Destination screen in the installer.
.. note:: To provide centralized support in an environment with
.. _BABDBCJE:
Additional software
===================
MNE uses the 'Netpbm' package (http://netpbm.sourceforge.net/)
to create image files in formats other than tif and rgb from mne_analyze and mne_browse_raw.
This package is usually present on LINUX systems. On Mac OSX, you
need to install the netpbm package. The recommended way to do this
is to use the MacPorts Project tools, see http://www.macports.org/:
- If you have not installed the MacPorts
software, goto http://www.macports.org/install.php and follow the
instructions to install MacPorts.
- Install the netpbm package by saying: ``sudo port install netpbm``
MacPorts requires that you have the XCode developer tools
and X11 windowing environment installed. X11 is also needed by MNE.
For Mac OSX Leopard, we recommend using XQuartz (http://xquartz.macosforge.org/).
As of this writing, XQuartz does not yet exist for Snow Leopard;
the X11 included with the operating system is sufficient.
.. _CIHIIBDA:
Testing the performance of your OpenGL graphics
===============================================
The graphics performance of mne_analyze depends
on your graphics software and hardware configuration. You get the
best performance if you are using mne_analyze locally
on a computer and the hardware acceleration capabilities are in
use. You can check the *On GLX...* item in the help menu of
mne_analyze to see whether hardware acceleration is in effect. If the
dialog popping up says *Direct rendering context*, you are using
hardware acceleration. If it indicates *Nondirect rendering context*,
you are either using software emulation locally, rendering to a remote
display, or employing a VNC connection. If you are rendering to a
local display and get an indication of *Nondirect rendering context*,
software emulation is in effect and you should contact your local
computer support to enable hardware acceleration for GLX. In some
cases, this may require acquiring a new graphics display card. Fortunately,
relatively high-performance OpenGL-capable graphics cards are very inexpensive.
There is also a utility, mne_opengl_test, to
assess the graphics performance more quantitatively. This utility
renders an inflated brain surface repeatedly, rotating it by 5 degrees
around the *z* axis between redraws. At each
revolution, the time spent for the full revolution is reported on
the terminal window where mne_opengl_test was
started from. The program renders the surface until the interrupt
key (usually control-c) is pressed on the terminal window.
mne_opengl_test is located
in the ``bin`` directory and is thus started as:
``$MNE_ROOT/bin/mne_opengl_test``
On the fastest graphics cards, the time per revolution is
well below 1 second. If this time is longer than 10 seconds, either
the graphics hardware acceleration is not in effect or you need
a faster graphics adapter.
Obtain FreeSurfer
#################
The MNE software relies on the FreeSurfer software for cortical
surface reconstruction and other MRI-related tasks. Please consult
the FreeSurfer home page at ``http://surfer.nmr.mgh.harvard.edu/``.
How to get started
##################
After you have installed the software, a good place to start
is to look at the manual:
- Source the correct setup script, see :ref:`user_environment`,
and
- Say: ``mne_view_manual``.
Chapters of interest for a novice user include:
- :ref:`CHDDEFAB` and :ref:`CHDBAFGJ` contain introduction
to the software and setup instructions.
- :ref:`ch_cookbook` is an overview of the necessary steps to
compute the cortically constrained minimum-norm solutions.
- :ref:`ch_sample_data` is a hands-on exercise demonstrating analysis
of the sample data set.
- :ref:`ch_reading` contains a list of useful references for
understanding the methods implemented in the MNE software.
Quickstart
==========
Log in as your user, navigate to the API Keys page in the User menu, and
generate a new API key. Make a note of the API key, and then pull up a
terminal. Now we'll use the display.py script in your ``galaxy/scripts/api``
directory for a short example, which assumes your local galaxy server is
running on the default port 8080::
% ./display.py my_key http://localhost:8080/api/histories
Collection Members
------------------
#1: /api/histories/8c49be448cfe29bc
name: Unnamed history
id: 8c49be448cfe29bc
#2: /api/histories/33b43b4e7093c91f
name: output test
id: 33b43b4e7093c91f
The result is a Collection of the histories of the user specified by the API
key (you). To look at the details of a particular history, say #1 above, do
the following::
% ./display.py my_key http://localhost:8080/api/histories/8c49be448cfe29bc
Member Information
------------------
state_details: {'ok': 1, 'failed_metadata': 0, 'upload': 0, 'discarded': 0, 'running': 0, 'setting_metadata': 0, 'error': 0, 'new': 0, 'queued': 0, 'empty': 0}
state: ok
contents_url: /api/histories/8c49be448cfe29bc/contents
id: 8c49be448cfe29bc
name: Unnamed history
This gives detailed information about the specific member in question, in this
case the History. To view history contents, do the following::
% ./display.py my_key http://localhost:8080/api/histories/8c49be448cfe29bc/contents
Collection Members
------------------
#1: /api/histories/8c49be448cfe29bc/contents/6f91353f3eb0fa4a
name: Pasted Entry
type: file
id: 6f91353f3eb0fa4a
What we have here is another Collection of items containing all of the datasets
in this particular history. Finally, to view details of a particular dataset
in this collection, execute the following::
% ./display.py my_key http://localhost:8080/api/histories/8c49be448cfe29bc/contents/6f91353f3eb0fa4a
Member Information
------------------
misc_blurb: 1 line
name: Pasted Entry
data_type: txt
deleted: False
file_name: /Users/yoplait/work/galaxy-stock/database/files/000/dataset_82.dat
state: ok
download_url: /datasets/6f91353f3eb0fa4a/display?to_ext=txt
visible: True
genome_build: ?
model_class: HistoryDatasetAssociation
file_size: 17
metadata_data_lines: 1
id: 6f91353f3eb0fa4a
misc_info: uploaded txt file
metadata_dbkey: ?
And now you've successfully used the API to request and select a history,
browse the contents of that history, and then look at detailed information
about a particular dataset.
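The same walkthrough can be scripted without display.py. A minimal sketch — the ``api_url`` helper is purely illustrative (not part of Galaxy), and the commented request, which needs the third-party ``requests`` package, a live server, and a valid key, shows how it would be used:

```python
# Sketch: build Galaxy API endpoint URLs like the ones used above.
# api_url() is a hypothetical helper, not part of Galaxy itself.

def api_url(base, *parts):
    """Join a Galaxy base URL and path components into an API endpoint."""
    return "/".join([base.rstrip("/"), "api"] + list(parts))

url = api_url("http://localhost:8080", "histories", "8c49be448cfe29bc")
print(url)  # http://localhost:8080/api/histories/8c49be448cfe29bc

# Against a live server, the API key goes in the query string:
# import requests
# history = requests.get(url, params={"key": "my_key"}).json()
```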
For a more comprehensive Data Library example, set the following options in your
galaxy.yml as well, and restart galaxy again::
admin_users: you@example.org
library_import_dir: /path/to/some/directory
In the directory you specified for 'library_import_dir', create some
subdirectories, and put (or symlink) files to import into Galaxy into those
subdirectories.
In Galaxy, create an account that matches the address you put in 'admin_users',
then browse to that user's preferences and generate a new API Key. Copy the
key to your clipboard and then use these scripts::
% ./display.py my_key http://localhost:8080/api/libraries
Collection Members
------------------
0 elements in collection
% ./library_create_library.py my_key http://localhost:8080/api/libraries api_test 'API Test Library'
Response
--------
/api/libraries/f3f73e481f432006
name: api_test
id: f3f73e481f432006
% ./display.py my_key http://localhost:8080/api/libraries
Collection Members
------------------
/api/libraries/f3f73e481f432006
name: api_test
id: f3f73e481f432006
% ./display.py my_key http://localhost:8080/api/libraries/f3f73e481f432006
Member Information
------------------
synopsis: None
contents_url: /api/libraries/f3f73e481f432006/contents
description: API Test Library
name: api_test
% ./display.py my_key http://localhost:8080/api/libraries/f3f73e481f432006/contents
Collection Members
------------------
/api/libraries/f3f73e481f432006/contents/28202595c0d2591f61ddda595d2c3670
name: /
type: folder
id: 28202595c0d2591f61ddda595d2c3670
% ./library_create_folder.py my_key http://localhost:8080/api/libraries/f3f73e481f432006/contents 28202595c0d2591f61ddda595d2c3670 api_test_folder1 'API Test Folder 1'
Response
--------
/api/libraries/f3f73e481f432006/contents/28202595c0d2591fa4f9089d2303fd89
name: api_test_folder1
id: 28202595c0d2591fa4f9089d2303fd89
% ./library_upload_from_import_dir.py my_key http://localhost:8080/api/libraries/f3f73e481f432006/contents 28202595c0d2591fa4f9089d2303fd89 bed bed hg19
Response
--------
/api/libraries/f3f73e481f432006/contents/e9ef7fdb2db87d7b
name: 2.bed
id: e9ef7fdb2db87d7b
/api/libraries/f3f73e481f432006/contents/3b7f6a31f80a5018
name: 3.bed
id: 3b7f6a31f80a5018
% ./display.py my_key http://localhost:8080/api/libraries/f3f73e481f432006/contents
Collection Members
------------------
/api/libraries/f3f73e481f432006/contents/28202595c0d2591f61ddda595d2c3670
name: /
type: folder
id: 28202595c0d2591f61ddda595d2c3670
/api/libraries/f3f73e481f432006/contents/28202595c0d2591fa4f9089d2303fd89
name: /api_test_folder1
type: folder
id: 28202595c0d2591fa4f9089d2303fd89
/api/libraries/f3f73e481f432006/contents/e9ef7fdb2db87d7b
name: /api_test_folder1/2.bed
type: file
id: e9ef7fdb2db87d7b
/api/libraries/f3f73e481f432006/contents/3b7f6a31f80a5018
name: /api_test_folder1/3.bed
type: file
id: 3b7f6a31f80a5018
% ./display.py my_key http://localhost:8080/api/libraries/f3f73e481f432006/contents/e9ef7fdb2db87d7b
Member Information
------------------
misc_blurb: 68 regions
metadata_endCol: 3
data_type: bed
metadata_columns: 6
metadata_nameCol: 4
uploaded_by: nate@...
metadata_strandCol: 6
name: 2.bed
genome_build: hg19
metadata_comment_lines: None
metadata_startCol: 2
metadata_chromCol: 1
file_size: 4272
metadata_data_lines: 68
message:
metadata_dbkey: hg19
misc_info: uploaded bed file
date_uploaded: 2010-06-22T17:01:51.266119
metadata_column_types: str, int, int, str, int, str
Other parameters are valid when uploading; they are the same parameters as are
used in the web form, such as 'link_data_only'.
The request and response formats should be considered alpha and are subject to change.
Contributing
============
A list of issues and ongoing work is available on the PySTAC Client `issues page
<https://github.com/stac-utils/pystac-client/issues>`_. If you want to contribute code, the best
way is to coordinate with the core developers via an issue or pull request conversation.
Development installation
^^^^^^^^^^^^^^^^^^^^^^^^
Fork PySTAC-Client into your GitHub account. Then, clone the repo and install it locally with
pip as follows:
.. code-block:: bash
$ git clone git@github.com:your_user_name/pystac-client.git
$ cd pystac
$ pip install -e .
Testing
^^^^^^^
PySTAC-Client runs tests using ``unittest``. You can find unit tests in the ``tests/``
directory.
Run a single test with:
.. code-block:: bash
python -m unittest tests/test_catalog.py
or an entire folder using:
.. code-block:: bash
python -m unittest discover -v -s tests/
or the entire project using:
.. code-block:: bash
./scripts/test
The last command will also check test coverage. To view the coverage report, you can run
`coverage report` (to view the report in the terminal) or `coverage html` (to generate
an HTML report that can be opened in a browser).
More details on using ``unittest`` are `here
<https://docs.python.org/3/library/unittest.html>`_.
Code quality checks
^^^^^^^^^^^^^^^^^^^
tl;dr: Run ``pre-commit install --overwrite`` to perform checks when committing, and
``./scripts/test`` to run the tests.
PySTAC-Client uses
- `black <https://github.com/psf/black>`_ for Python code formatting
- `codespell <https://github.com/codespell-project/codespell/>`_ to check code for common misspellings
- `doc8 <https://github.com/pycqa/doc8>`__ for style checking on RST files in the docs
- `flake8 <https://flake8.pycqa.org/en/latest/>`_ for Python style checks
- `mypy <http://www.mypy-lang.org/>`_ for Python type annotation checks
Run all of these with ``pre-commit run --all-files`` or a single one using
``pre-commit run --all-files ID``, where ``ID`` is one of the command names above. For
example, to format all the Python code, run ``pre-commit run --all-files black``.
You can also install a Git pre-commit hook which will run the relevant linters and
formatters on any staged code when committing. This will be much faster than running on
all files, which is usually [#]_ only required when changing the pre-commit version or
configuration. Once installed you can bypass this check by adding the ``--no-verify``
flag to Git commit commands, as in ``git commit --no-verify``.
.. [#] In rare cases changes to one file might invalidate an unchanged file, such as
when modifying the return type of a function used in another file.
CHANGELOG
^^^^^^^^^
PySTAC-Client maintains a
`changelog <https://github.com/stac-utils/pystac-client/blob/main/CHANGELOG.md>`__
to track changes between releases. All PRs should make a changelog entry unless
the change is trivial (e.g. fixing typos) or is entirely invisible to users who may
be upgrading versions (e.g. an improvement to the CI system).
For changelog entries, please link to the PR of that change. This needs to happen in a
few steps:
- Make a PR to PySTAC with your changes
- Record the link to the PR
- Push an additional commit to your branch with the changelog entry with the link to the
PR.
For more information on changelogs and how to write a good entry, see `keep a changelog
<https://keepachangelog.com/en/1.0.0/>`_.
|CyVerse_logo|_
|Home_Icon|_
`Learning Center Home <http://learning.cyverse.org/>`_
Importing Data from the NCBI Sequence Read Archive (SRA) using the DE
=====================================================================
..
Use short, imperative titles e.g. Upload and share data, uploading and
sharing data
Goal
----
**Import data from the NCBI Sequence Read Archive into your data store (SRA) via the
Discovery Environment**
The `NCBI Sequence Read Archive (SRA) <https://www.ncbi.nlm.nih.gov/sra>`_ is a repository
for high-throughput sequencing data. You can import data from the SRA into your Data Store
using the Discovery Environment SRA-Import App.
.. tip::
According to the SRA homepage: "Sequence Read Archive (SRA) makes biological sequence
data available to the research community to enhance reproducibility and allow for new
discoveries by comparing data sets. The SRA stores raw sequencing data and alignment
information from high-throughput sequencing platforms, including Roche 454 GS System®,
Illumina Genome Analyzer®, Applied Biosystems SOLiD System®, Helicos Heliscope®,
Complete Genomics®, and Pacific Biosciences SMRT®."
----
Prerequisites
-------------
Downloads, access, and services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*In order to complete this tutorial you will need access to the following services/software*
..
Modify the table below as needed
.. list-table::
:header-rows: 1
* - Prerequisite
- Preparation/Notes
- Link/Download
* - CyVerse account
- You will need a CyVerse account to complete this exercise
- `Register <https://user.cyverse.org/>`_
Platform(s)
~~~~~~~~~~~
*We will use the following CyVerse platform(s):*
..
Modify the table below as needed
.. list-table::
:header-rows: 1
* - Platform
- Interface
- Link
- Platform Documentation
- Quick Start
* - Discovery Environment
- Web/Point-and-click
- `Discovery Environment <https://de.iplantcollaborative.org>`_
- `DE Manual <https://wiki.cyverse.org/wiki/display/DEmanual/Table+of+Contents>`_
- `Guide <https://learning.cyverse.org/projects/discovery-environment-guide/en/latest/>`_
Input and example data
~~~~~~~~~~~~~~~~~~~~~~
*In order to complete this quickstart you will need to have the following inputs prepared*
.. list-table::
:header-rows: 1
* - Input File(s)
- Format
- Preparation/Notes
- Example Data
* - SRA Accession number
- N/A
- We will cover how to search for an accession
- In this example, we will download accession `SRR1761506 <https://www.ncbi.nlm.nih.gov/sra/?term=SRR1761506>`_
----
Get started
-----------
.. Note::
**Searching the SRA:**
Searching the SRA can be complicated. Often a paper or reference will specify the
accession number(s) connected to a dataset. You can search flexibly using a number of
terms (such as the organism name) or the filters (e.g. DNA vs. RNA). The `SRA Help Manual <https://www.ncbi.nlm.nih.gov/books/NBK56913/>`_
provides several useful explanations. It is important to know is that projects are
organized and related at several levels, and some important terms include:
- **Bioproject**: A BioProject is a collection of biological data related to a single initiative, originating from a single organization or from a consortium of coordinating organizations; see for example `Bio Project 272719 <https://www.ncbi.nlm.nih.gov/bioproject/272719>`_
- **Bio Sample**: A description of the source materials for a project
- **Run**: These are the actual sequencing runs (usually starting with SRR); see for example `SRR1761506 <https://www.ncbi.nlm.nih.gov/sra/?term=SRR1761506>`__
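These accession families follow predictable prefixes, which makes a quick sanity check easy to script. A hedged Python sketch — the ERR/DRR-style families used by the EBI and DDBJ mirrors are an assumption beyond the NCBI examples above:

```python
import re

# Sketch: classify a sequence-archive accession by its three-letter
# prefix. SRR/SRP/SRX/SRS are the NCBI forms; the ERR/DRR families
# (EBI/DDBJ mirrors) are included as an assumption.

ACCESSION_TYPES = {
    ("SRR", "ERR", "DRR"): "run",
    ("SRP", "ERP", "DRP"): "study",
    ("SRX", "ERX", "DRX"): "experiment",
    ("SRS", "ERS", "DRS"): "sample",
}

def accession_type(accession):
    """Return the accession kind, or None if it does not look like one."""
    match = re.fullmatch(r"([A-Z]{3})\d+", accession)
    if match:
        for prefixes, kind in ACCESSION_TYPES.items():
            if match.group(1) in prefixes:
                return kind
    return None

print(accession_type("SRR1761506"))  # run
```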
#. Obtain an SRA accession number (starting SRR***); If you do not have an accession, you can go to the `SRA homepage <https://www.ncbi.nlm.nih.gov/sra>`_ and search using a variety of search terms and filters (e.g. DNA vs. RNA, exome vs. genome, etc.)
.. Note::
On the SRA homepage for each accession, you may wish to record some useful information about the run, including the sequencing format and the file size.
2. Log in to the `Discovery Environment`_ and click on `NCBI-SRA-Fastq-dump-2.8.1 <https://de.cyverse.org/de/?type=apps&app-id=3d7b6976-b7bf-11e9-b999-008cfa5ae621&system-id=de>`_
   App, or click on **Apps** to search for and launch this App.
3. Name your analysis and enter any desired comments
4. Under "Inputs" enter the SRA accession run number (if you have already downloaded an SRA file you can use this App to decompress it into a fastq file - search for the file using the 'Browse' button)
.. tip::
Depending on the file size, this will take several minutes
5. (optional) Under "optional parameters" check 'Split files' if your data are paired-end
.. tip::
The SRA page for your run should indicate 'SINGLE' or 'PAIRED' under Library Layout; https://www.ncbi.nlm.nih.gov/sra/?term=SRR1761506
6. (optional) Under "Output" enter a custom name for 'Sra output folder name' or leave the default
7. Click **Launch Analysis**
8. To view the status of the import and obtain results click on the **Analysis** icon
9. When the job status is marked 'Completed' in the Analysis window (you may have to refresh), click on the job name (e.g. 'SRA-Import-0.1.0_analysis1') to view the result in your data store
----
Summary
~~~~~~~
In addition to a folder of logs you should have the following files:
- A compressed file (including sequence data and metadata) in the NCBI ".sra" format
- An output folder (default:'sra_out') containing your fastq file (sequence data). If paired-end, and the 'Split files' option was checked, you will have two .fastq files (_1 for left-reads, _2 for right reads).
**Next Steps:**
Some common next steps include
1. Using `FastQC <https://www.bioinformatics.babraham.ac.uk/projects/fastqc/>`_ to check the quality of the sequence reads
2. Using `Trimmomatic <http://www.usadellab.org/cms/?page=trimmomatic>`_ to filter and trim reads for quality control
Both of these applications are available for use in the Discovery Environment. See `DE Apps catalog <https://wiki.cyverse.org/wiki/display/DEapps/List+of+Applications>`_
----
Additional information, help
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
..
Short description and links to any reading materials
----
**Fix or improve this documentation**
- Search for an answer:
|CyVerse Learning Center|
- Ask us for help:
click |Intercom| on the lower right-hand side of the page
- Report an issue or submit a change:
|Github Repo Link|
- Send feedback: `Tutorials@CyVerse.org <Tutorials@CyVerse.org>`_
|Home_Icon|_
`Learning Center Home`_
.. |CyVerse_logo| image:: ./img/cyverse_rgb.png
:width: 500
:height: 100
.. _CyVerse_logo: http://learning.cyverse.org/
.. |Home_Icon| image:: ./img/homeicon.png
:width: 25
:height: 25
.. _Home_Icon: http://learning.cyverse.org/
dlk.core.layers.decoders package
================================
Submodules
----------
dlk.core.layers.decoders.identity module
----------------------------------------
.. automodule:: dlk.core.layers.decoders.identity
:members:
:undoc-members:
:show-inheritance:
dlk.core.layers.decoders.linear module
--------------------------------------
.. automodule:: dlk.core.layers.decoders.linear
:members:
:undoc-members:
:show-inheritance:
dlk.core.layers.decoders.linear\_crf module
-------------------------------------------
.. automodule:: dlk.core.layers.decoders.linear_crf
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: dlk.core.layers.decoders
:members:
:undoc-members:
:show-inheritance:
Installation
============
Using pip
---------
Use your Python 3 installation's instance of `pip`_:
.. code-block:: console
$ pip3 install git+https://github.com/streamsets/adls-gen2-python
.. _pip: https://pip.pypa.io
Auth
----
Provide credentials using environment parameters.
The following environment variables are required:
* azure_client_id
* azure_client_secret
* azure_tenant_id
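A small sketch of how client code might validate these variables up front — ``load_azure_credentials`` is a hypothetical helper for illustration, not part of this package's API:

```python
import os

# Sketch: fail fast with a clear message if any required Azure
# credential variable is missing. load_azure_credentials() is a
# hypothetical helper, not part of this package.

REQUIRED_VARS = ("azure_client_id", "azure_client_secret", "azure_tenant_id")

def load_azure_credentials(environ=None):
    """Return the credential variables, raising if any are unset."""
    environ = os.environ if environ is None else environ
    missing = [name for name in REQUIRED_VARS if not environ.get(name)]
    if missing:
        raise EnvironmentError(
            "Missing required environment variables: " + ", ".join(missing)
        )
    return {name: environ[name] for name in REQUIRED_VARS}
```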
OpenBCI-Stream
==============
High level Python module for EEG acquisition and streaming with OpenBCI
hardware.
About this project
------------------
What is it?
~~~~~~~~~~~
A Python module for high-performance interfaces development with
`OpenBCI boards <https://openbci.com/>`__. Currently, we have support
for Cyton+Daisy and their WiFi module; additionally, we provide a
real-time data streaming feature using
`Kafka <https://kafka.apache.org/>`__.
What do we want?
~~~~~~~~~~~~~~~~
We want a stable, high-level, easy-to-use and extensible Python module
focused on the hardware provided by OpenBCI, a library that can be
used by students, hobbyists and researchers. We are developing a set of
tools for preprocessing, real-time data handling and streaming of EEG
signals.
About us
~~~~~~~~
We are a research group focused on digital processing of signals and
machine learning from the National University of Colombia at Manizales
(`GCPDS <http://www.hermes.unal.edu.co/pages/Consultas/Grupo.xhtml;jsessionid=8701CFAD84FB5D540090846EA8912D48.tomcat6?idGrupo=615&opcion=1%3E>`__).
Main features
-------------
- **Asynchronous acquisition:** After the board initialization, the
  data acquisition can be executed asynchronously; this makes it
  possible to perform background operations without interrupting or
  affecting the data sampling `read
more… <../html/_notebooks/04-data_acquisition.html#initialize-stream>`__
- **Streaming data:** The EEG data is streamed with `Apache
Kafka <https://kafka.apache.org/>`__, this means that the data can be
consumed from any other interface or language `read
more… <../html/_notebooks/04-data_acquisition.html#access-to-stream>`__
- **Remote host:** Is possible to get OpenBCI running in one computing
system and manage it from other `read
more… <../html/_notebooks/A4-configure_remote_host.html>`__
- **Command line interface:** A simple interface is available for
handle the start, stop and access to data stream directly from the
command line `read
more… <../html/_notebooks/A3-command_line_interface.html>`__
- **Markers/Events handler:** `read
more… <../html/_notebooks/07-stream_markers.html>`__
- **Distributed platforms:**
Examples
--------
Read 5 seconds EEG from serial:
.. code:: ipython3
from openbci_stream.acquisition import CytonRFDuino
import time
openbci = CytonRFDuino(capture_stream=True, daisy=False)
openbci.start_stream()
time.sleep(5)
openbci.stop_stream()
print(openbci.eeg_time_series.shape)
Stream markers through Kafka
.. code:: ipython3
import time
from datetime import datetime
import pickle
from kafka import KafkaProducer
producer_eeg = KafkaProducer(bootstrap_servers=['localhost:9092'],
value_serializer=lambda x: pickle.dumps(x))
def stream_marker(marker):
producer_eeg.send('marker', {'timestamp': datetime.now().timestamp(),
'marker': marker})
stream_marker('RIGHT')
time.sleep(1)
stream_marker('LEFT')
time.sleep(1)
stream_marker('RIGHT')
time.sleep(1)
stream_marker('LEFT')
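On the consuming side, a matching client can be sketched with the same ``kafka-python`` package. This is a minimal sketch, assuming the ``marker`` topic, the ``localhost:9092`` broker and the pickle serialisation used by the producer above; the connection code is kept inside a function so it only runs against a live broker.

```python
import pickle


def deserialize(raw):
    """Invert the pickle-based value_serializer used by the producer."""
    return pickle.loads(raw)


def consume_markers(servers=('localhost:9092',)):
    # Requires a running Kafka broker and the kafka-python package.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer('marker',
                             bootstrap_servers=list(servers),
                             value_deserializer=deserialize)
    for message in consumer:
        print(message.value['timestamp'], message.value['marker'])
```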
Starting streaming from the command line and storing as CSV
.. code:: ipython3
$ python openbci_cli.py serial --start --output 'eeg_out.csv'
Writing data in
Ctrl+C for stop it.
[EEG] 2020-03-04 22:57:57.117478 0.0146s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:57:58.138276 0.0153s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:57:59.158153 0.0161s ago 302 samples, 8 channels
[EEG] 2020-03-04 22:58:00.179612 0.0155s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:58:01.199204 0.0164s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:58:02.219734 0.0154s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:58:03.239956 0.0159s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:58:04.259876 0.0134s ago 254 samples, 8 channels
[EEG] 2020-03-04 22:58:05.281410 0.0170s ago 256 samples, 8 channels
[EEG] 2020-03-04 22:58:06.301453 0.0199s ago 256 samples, 8 channels
[EEG] 2020-03-04 22:58:07.322150 0.0141s ago 254 samples, 8 channels
.. module:: rake_nltk
This part of the documentation covers all the interfaces of rake_nltk.
Metric Object
-------------
.. autoclass:: Metric
:undoc-members:
:members:
Rake Object
-----------
.. autoclass:: Rake
   :members:

..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _nova-hyperv:
.. list-table:: Description of HyperV configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
* - **[hyperv]**
-
* - ``dynamic_memory_ratio`` = ``1.0``
- (Floating point) Dynamic memory ratio Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup. Possible values: * 1.0: Disables dynamic memory allocation (Default). * Float values greater than 1.0: Enables allocation of total implied RAM divided by this value for startup. Services which consume this: * nova-compute Related options: * None
* - ``enable_instance_metrics_collection`` = ``False``
- (Boolean) Enable instance metrics collection Enables metrics collections for an instance by using Hyper-V's metric APIs. Collected data can by retrieved by other apps and services, e.g.: Ceilometer. Possible values: * True: Enables metrics collection. * False: Disables metric collection (Default). Services which consume this: * nova-compute Related options: * None
* - ``instances_path_share`` =
- (String) Instances path share The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally. Possible values: * "": An administrative share will be used (Default). * Name of a Windows share. Services which consume this: * nova-compute Related options: * "instances_path": The directory which will be used if this option here is left blank.
* - ``limit_cpu_features`` = ``False``
- (Boolean) Limit CPU features This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance. Possible values: * True: Limit processor-specific features. * False: Do not limit processor-specific features (Default). Services which consume this: * nova-compute Related options: * None
* - ``mounted_disk_query_retry_count`` = ``10``
- (Integer) The number of times to retry checking for a disk mounted via iSCSI.
* - ``mounted_disk_query_retry_interval`` = ``5``
- (Integer) Interval between checks for a mounted iSCSI disk, in seconds.
* - ``power_state_check_timeframe`` = ``60``
- (Integer) The timeframe to be checked for instance power state changes.
* - ``power_state_event_polling_interval`` = ``2``
- (Integer) Instance power state change event polling frequency.
* - ``qemu_img_cmd`` = ``qemu-img.exe``
- (String) Path of qemu-img command which is used to convert between different image types
* - ``vswitch_name`` = ``None``
- (String) External virtual switch Name, if not provided, the first external virtual switch is used
* - ``wait_soft_reboot_seconds`` = ``60``
- (Integer) Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window.
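For reference, a handful of the options above would be set in the ``[hyperv]`` section of ``nova.conf``. The values below are purely illustrative, not recommendations (``external_vswitch`` is an assumed switch name):

```ini
[hyperv]
dynamic_memory_ratio = 2.0
enable_instance_metrics_collection = True
mounted_disk_query_retry_count = 10
mounted_disk_query_retry_interval = 5
vswitch_name = external_vswitch
```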
============
server image
============
A server image is a disk image created from a running server instance. The
image is created in the Image store.
Compute v2
.. autoprogram-cliff:: openstack.compute.v2
:command: server image create
.. _trackstream-installation:
============
Installation
============
|Code Size|
************
With ``pip``
************
.. container::
|PyPI| |PyPI Format|
The easiest way to get *trackstream* is to install with `pip <https://pypi.org/project/trackstream/>`_. To install with pip::
pip install trackstream
See the `installation instructions <https://readthedocs.org/projects/trackstream/>`_ in the `documentation <https://readthedocs.org/projects/trackstream/>`_ for more information.
.. Substitutions
.. |PyPI| image:: https://badge.fury.io/py/trackstream.svg
:target: https://badge.fury.io/py/trackstream
.. |PyPI Format| image:: https://img.shields.io/pypi/format/trackstream?style=flat
:alt: PyPI - Format
.. |Code Size| image:: https://img.shields.io/github/languages/code-size/Nathaniel Starkman/trackstream?style=flat
:alt: GitHub code size in bytes
User Defined Functions
=========================
Users can define their own operations to add to taco. They must be written in a C header file and can be called using
the following functions.
.. currentmodule:: pytaco
.. autosummary::
:toctree: functions
apply
set_udf_dir
===================================
BigQuery on Google Cloud Platform
===================================
In order to use BigQuery, you must have access to a Google Cloud Platform (GCP) project. Your GCP project must be associated with a billing account in order to gain full access to all of the products and services that make up the Google Cloud. Contact us at **request-gcp@isb-cgc.org** for more information on how to request cloud credits.
Additionally, you will need a Google account identity (freely available with a new account or by linking to an existing email account).
When first logging into the `Google Cloud Platform <http://cloud.google.com>`_, you will be presented with this page (click on the images to enlarge them):
.. image:: NewSignIntoGCP.png
:align: center
You will be presented with a sign-in page, prompting you to enter a Google account login and password:
.. image:: SignInPage.png
:align: center
Once you sign in, click on Console at the top of the screen (see arrow in image below) to access a full range of Google cloud products and services including BigQuery.
.. image:: AfterSignInPage.png
:align: center
At the home button, scroll down to open BigQuery.
.. image:: AccessingBigQuery.png
:align: center
apTvAngleListToImage
~~~~~~~~~~~~~~~~~~~~
Objective
*********
Validation of the "AngleListToImage" application.
Description
***********
The "AngleListToImage" module computes an image from a list of angles. In particular, this makes it possible to create an image of the angles from metadata (S2).
List of input data
******************
- The angles in text format, following the interface nomenclature
- The DTM on which the angles must be computed
List of output products
***********************
An image containing the incidence angles for each pixel.
Prerequisites
*************
There are no prerequisites.
Expected duration
*****************
The execution time of the test is not an assessed criterion.
Epsilon used for non-regression
*******************************
0
Checks to perform
*****************
The test produces as output an image containing the angles.
Test execution
**************
This test is executed by running the command::

    ctest -R apTvAngleListToImage
Acceptance test log
*******************
Notes on the test run
---------------------
Nothing in particular was noted during the test run.
Conclusion of the test run
--------------------------
Nothing to report.
Test validation
---------------
=================== =========
Validation date     Result
26/11/2010          OK
=================== =========
Requirements
************
This test covers the following requirements:
None
Welcome to Jupyterlab Code Formatter's documentation!
=====================================================
This is a small Jupyterlab plugin to support using various code formatters on the server side and format code cells/files in Jupyterlab.
.. image:: _static/demo.gif
.. toctree::
:maxdepth: 2
:caption: Contents:
Prerequisites and Installation Steps <installation>
How To Use This Plugin <how-to-use>
FAQ <faq>
Development <dev>
.. toctree::
:maxdepth: 1
Changelog <changelog>
Indices and tables
==================
* :ref:`genindex`
.. _pages/widget/groupbox#groupbox:
GroupBox
********
A Groupbox is a widget to group a set of form elements in a visual way.
.. _pages/widget/groupbox#preview_image:
Preview Image
-------------
|widget/groupbox_complete.png|
.. |widget/groupbox_complete.png| image:: /pages/widget/groupbox_complete.png
.. _pages/widget/groupbox#features:
Features
--------
* Different legend types
* icon and text
* additional check boxes
* additional radio buttons
.. _pages/widget/groupbox#description:
Description
-----------
The GroupBox offers the possibility to visually group several form elements together. With the use of a legend, which supports both text and icon, it is easy to label the group boxes and give the user a short description of the form elements.
Additionally it is possible to use checkboxes or radio buttons within the legend to enable or disable the connected GroupBox (and its child elements) completely. This feature is most important for complex forms with multiple choices.
.. _pages/widget/groupbox#demos:
Demos
-----
Here are some links that demonstrate the usage of the widget:
* `Demo showing all groupBox types <http://demo.qooxdoo.org/%{version}/demobrowser/#widget~GroupBox.html>`_
.. _pages/widget/groupbox#api:
API
---
| Here is a link to the API of the Widget:
| `qx.ui.groupbox.GroupBox <http://demo.qooxdoo.org/%{version}/apiviewer/#qx.ui.groupbox>`_
qupulse\._program package
=========================
Submodules
----------
qupulse\._program\._loop module
-------------------------------
.. automodule:: qupulse._program._loop
:members:
:undoc-members:
:show-inheritance:
qupulse\._program\.instructions module
--------------------------------------
.. automodule:: qupulse._program.instructions
:members:
:undoc-members:
:show-inheritance:
qupulse\._program\.transformation module
----------------------------------------
.. automodule:: qupulse._program.transformation
:members:
:undoc-members:
:show-inheritance:
qupulse\._program\.waveforms module
-----------------------------------
.. automodule:: qupulse._program.waveforms
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: qupulse._program
:members:
:undoc-members:
:show-inheritance:
Working With a Design
=====================
A `design` forms the heart of your system implementation. It provides an
interactive graphical representation of your system, helping you build and
manage:
* `Blocks <block>` representing hardware components, logic gates, etc.
* The `connectivity <link>` between blocks, in terms of input (sink) ports and
output (source) ports.
* The `attributes <attribute>` associated with blocks and links.
* The `methods <method>` available to influence behaviour within blocks.
A Design is created in the user interface `Layout View`.
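The relationship between blocks, ports and links can be pictured with a small data-structure sketch. This is illustrative only; the block names (``PGEN1``, ``GATE1``) and field layout are assumptions for the example, not the actual Malcolm data model.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Block:
    name: str                 # block name, relative to its parent
    label: str = ''           # optional descriptive label
    sink_ports: Dict[str, str] = field(default_factory=dict)    # name -> port type
    source_ports: Dict[str, str] = field(default_factory=dict)  # name -> port type
    attributes: Dict[str, object] = field(default_factory=dict)


@dataclass
class Link:
    source: str  # "BlockName.sourcePort"
    sink: str    # "BlockName.sinkPort"


# A minimal two-block design: a pulse generator driving a gate.
pulse = Block('PGEN1', source_ports={'out': 'bool'})
gate = Block('GATE1', sink_ports={'enable': 'bool'})
design = {
    'blocks': [pulse, gate],
    'links': [Link('PGEN1.out', 'GATE1.enable')],
}
```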
Adding a Block to a Design
-----------------------------
A `block` is added to a `design` by dragging and dropping it from the 'Block
Palette' into the `Layout View` as follows:
#. Select the **'Palette'** icon at the bottom of the Layout Panel. The
Block Palette opens containing the set of blocks currently available
to you.
#. Identify the Block you wish to add. By hovering over it the mouse
pointer changes from an arrow to a hand.
#. Click the left mouse button to select the Block and while holding down
the mouse button drag the Block into the Layout Panel.
#. When you reach your desired location for the Block within the Layout
Panel release the mouse button.
The Block Palette icon is replaced by a full representation of the selected
Block, showing:
* The Block name (shown relative to its `Parent Block`).
* An optional, configurable descriptive label (initially containing default
text).
* `Source Ports <Source Port>` responsible for transmitting output from the
Block, including their type.
* `Sink Ports <Sink Port>` responsible for receiving input to the Block,
including their type.
After adding a Block to the Layout Panel it can be selected by hovering over it
and clicking the left mouse button. Upon selection the
`Block Information Panel` presenting each `attribute` and `method` available
to that Block is displayed in the right-hand panel of the web interface.
.. NOTE::
On initially adding a new Block to your Design, it is configured according to
its pre-defined default settings retrieved from the underlying Design
Specification of that Block.
Removing a Block from a Design
---------------------------------
If a `block` has been added to a `design` erroneously, or is no longer required
within the current Design it can be removed in one of two ways:
#. *By dragging it to the Bin:*
#. Select the Block to be removed by hovering over it and clicking the left mouse button. Upon selection a **'Bin'** icon is displayed at the bottom of the Layout Panel.
#. While holding down the left mouse button drag the Block over the **'Bin'** icon. The icon is highlighted.
#. Release the left mouse button.
#. *Hitting the Delete or Backspace Key:*
#. Select the Block to be removed by hovering over it and clicking the left mouse button. The selected Block is highlighted.
#. Hit the *Delete* key or *backspace* key on your keyboard.
.. NOTE::
Upon removing a Block from your Design all `Source Port` and `Sink Port`
links associated with it are automatically removed.
Working with the Block Palette
------------------------------
The Block Palette contains a list of each `block` available to a `design` based
on pre-defined constraints imposed by the underlying hardware infrastructure
associated with the system.
When a Block is selected from the Block Palette for inclusion in a Design it is
removed from the Block Palette to ensure it is not included more than once. If
all Blocks of a particular type have been added to a Design it is not possible
to add any more as the underlying hardware implementation will not be able to
represent them.
If a Block is `removed <Removing a Block from a Design>` from a Design it is
immediately available again for selection in the Block Palette.
Specifying Block Attributes
---------------------------
The behaviour of a `block` is defined via its `attributes <attribute>`.
Attributes are pre-defined based on the function of the Block and may include
default values providing a starting point for later implementation-time
customisation. A full list of the attributes associated with each Block
available from the Block Palette can be found in the documentation associated
with that Block.
Types of Attributes
~~~~~~~~~~~~~~~~~~~
Four types of `attribute` are available, and a `block` may support zero or more
of these depending on its purpose. These are summarised as follows:
.. list-table::
:widths: 30, 70
:align: center
:header-rows: 1
* - Type
- Description
* - `Input Attribute`
- An Attribute identifying the source of data that will be received into a
`block` via a `Sink Port` with the same name.
* - `Output Attribute`
- An Attribute identifying the value (or stream of values) that will be
transmitted out of a `block` via a `Source Port` with the same name.
* - `Parameter Attribute`
- An Attribute that can be set by a user while configuring a Block,
ultimately influencing the behavior of that Block.
* - `Readback Attribute`
- An Attribute whose value is set automatically by a process within the
execution environment. Readback attributes cannot be set manually via
the user interface.
Attributes whose value can be set at design-time are denoted by a highlight
below the attribute value field.
Obtaining Information About an Attribute
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Information about an individual Attribute can be obtained by selecting the
`information <Normal State>` icon associated with it from the
`Block Information Panel`. This opens the Attribute Information Panel within
the right-hand panel of the user interface.
For each Attribute the following information is displayed:
* The fully qualified path to the Attribute allowing it to be uniquely
identified within the Design.
* Basic meta-data about the Attribute including it's type, a brief
description of its purpose and whether it is a writeable Attribute.
* Details of the `Attribute state <Understanding Attribute State>`
associated with the Attribute, including severity of any issues and any
corresponding message.
* Timestamp details showing when the Attribute was last updated.
Attribute meta-data and alarm state information is derived from pre-configured
content provided within the underlying Block specification.
Setting a Block Attribute
~~~~~~~~~~~~~~~~~~~~~~~~~
Parameter, Input and Output Block attributes are set via the
`Block Information Panel` associated with the Block you wish to configure.
The way in which an Attribute is set within the user interface reflects the
nature of that Attribute based on its definition in the underlying Block
specification. This can also provide clues on whether the Attribute is editable
or not. The user interface provides the following approaches:
View/Edit Button
~~~~~~~~~~~~~~~~
Provides the ability to modify a `complex Attribute <Complex Attributes>`.
Selecting the button opens configurable content in the central panel. Upon
completion of changes the overall complex Attribute must be saved. If the
Attribute is modifiable the text reads 'Edit', otherwise it reads 'View'.
*GET SCREENSHOT*
Dropdown List
~~~~~~~~~~~~~
Provides the ability to select a value from a list of pre-defined values
appropriate to the Attribute within its current Block context. Upon selection
the Attribute value field updates to reflect the selected value.
*GET SCREENSHOT*
Text Input
~~~~~~~~~~
Provides a 'free text' field accepting any alphanumeric string. Attributes that
have been edited but not yet submitted are shown in the `Locally Edited State`. Press the *Enter* key
to submit the value. Upon successful submission the `edit
<Locally Edited State>` icon is replaced by the default `information
<Normal State>` icon.
*GET SCREENSHOT*
Checkbox
~~~~~~~~
Provides the option to switch on or switch off the action performed by the
Attribute. If the checkbox is empty the Attribute is *off*.
*GET SCREENSHOT*
.. TIP::
An Attribute may contain a value but, within the context of the current
Design, this cannot be modified. Such instances are represented by the
approach for setting that Attribute being greyed out.
To configure an Attribute:
#. Select the Block you wish to configure by clicking on it within the Layout Panel. The selected Block will be highlighted and the `Block Information Panel` associated with it displayed on the right-hand panel of the user interface.
#. Find the Attribute you wish to configure in the list of available Attributes.
#. Edit the Attribute value field as necessary, following the relevant editing approach described above.
.. NOTE::
No data type validation is performed on manually entered values within the
user interface. Validation is performed upon receipt by the backend
server. If an invalid format is detected a `Warning <Warning State>` icon
is presented in the user interface.
During the process of submitting a new Attribute value a `spinning
<Processing State>` icon is displayed to the left of the modified Attribute.
For more information on the process this represents see
`Attribute Change Lifecycle`.
Upon successful submission the icon associated with the modified Attribute
reverts to the `information <Normal State>` icon.
In case of submission failure an `attribute update error <Update Error State>`
icon is displayed next to the modified Attribute.
Exporting Attributes
~~~~~~~~~~~~~~~~~~~~
The user interface presents a hierarchical view of the overall system, with one
or more `Parent blocks <Parent Block>` encapsulating increasingly deeper levels
of your Design. By default, at the top level of your `design`, you will only see
attributes associated with Parent Blocks, but it might be an underlying attribute
within a Child Block that influences the behaviour of its parent. To mitigate
this scenario, every Parent Block provides the option to **Export** one or more
Attributes from its children so they are displayed within the Parent Block.
In doing so it becomes possible to monitor, and potentially utilise, crucial
Attributes implemented deep within a Design at increasingly abstracted levels of
detail.
To specify an Attribute for export:
#. Identify the Attribute you wish to monitor outside the current layout level within the overall Design. Note its source (in the format ``BlockName.Attribute``).
#. Within the Parent Block describing the Layout select the **'View'** option associated with the 'Exports' Attribute.
#. When the Export Table is displayed select the first available blank row. If no blank rows are available select the option to add a new row.
#. In the 'Source' column select the drop-down menu option and find the Attribute you wish to export in the list of Attributes available.
#. In the 'Export' column enter the name of the Attribute as you would like it to appear when exported to its Parent Block. Leave the 'Export' field blank to display the default name of the Attribute. User specified display names must be specified in ``camelCase`` format, for example *myAttribute*.
.. NOTE::
The ``camelCase`` naming convention is required to ensure an appropriate
Attribute label can be generated in the Parent `Block Information Panel`.
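As an illustration of why ``camelCase`` matters here, a display label could be derived from a camelCase export name roughly as follows; this is a sketch of the idea, not the actual user-interface code.

```python
import re


def camel_to_label(name):
    """Split a camelCase name into capitalised words for display."""
    words = re.findall(r'[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])', name)
    return ' '.join(w[0].upper() + w[1:] for w in words)


print(camel_to_label('myAttribute'))  # My Attribute
```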
Once successfully exported the Attribute appears within the 'Exported
Attributes' section of the Parent `Block Information Panel` in the left-hand
panel of the user interface.
Previously specified Attributes can be edited at any time within the Export
Table following a similar process.
Any number of Attributes can be exported from Child Blocks to their overall
Parent Block.
The order in which exported Attributes appear within their Parent Block mirrors
the order in which they were added to the export specification. If you require
a specific order to be displayed in the user interface:
#. With the Export Table displayed select the Edit icon associated with an existing Attribute or `Information <Normal State>` icon associated with a new Attribute. The information panel associated with the Attribute is displayed on the right-hand side.
#. To insert a new Attribute *above* the current one select the **'Insert row above'** option.
#. To insert a new Attribute *below* the current one select the **'Insert row below'** option.
#. On selecting the appropriate insert option a new row is added to the Export Table.
#. An existing Attribute can also be re-ordered by moving it up and down the list of attributes via the **'Move Up'** or **'Move Down'** option associated with it.
Attributes that have previously been exported can be removed from the Parent Block by deleting them from the Parent Block's export table. To remove an exported Attribute:
#. Identify the attribute to be removed.
#. Within the Parent Block containing the Attribute select the **'View'** option associated with the 'Export' Attribute.
#. Identify the line in the export table representing the Attribute to be removed.
#. Select the information icon associated with the Attribute. Its information panel is displayed on the right-hand side.
#. Select the **'Delete'** option associated with the **'Delete row'** field.
To complete the export process the export specification defined within the Export Table must be submitted for processing and recording within the overall system Design. To submit your export specification:
#. Select the **'Submit'** option at the bottom of the Export Table.
#. Refresh the Parent Block in the left-hand panel and confirm that the exported Attribute(s) have been promoted to the Parent Block or removed attributes are no longer visible.
Changes to the export specification can be discarded at any time throughout the modification process without impacting the currently recorded specification. To discard changes:
#. Select the **'Discard Changes'** option at the bottom of the Export Table.
Local vs. Server Parameter Attribute State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The underlying physical hardware infrastructure described by your virtual
representation is defined and configured based on the content of the Design
specification saved behind the graphical representation you interact with on
screen. Only when modified content is submitted and recorded to the Design
specification is the change effected in physical hardware. It is therefore
crucial to understand the difference between 'local' attribute state and
'server' attribute state, particularly for `Parameter Attributes
<Parameter Attribute>` that can be modified directly within the user interface.
Local Attribute state represents the status of a Parameter Attribute that has
been modified within the user interface but not yet submitted for inclusion in
the underlying Design specification. As such the modified value has no effect
on the currently implemented hardware solution. Locally modified attributes are
denoted by the 'edit' status icon next to the Attribute name within their
`Block Information Panel`. A Parameter Attribute enters the 'local' state as
soon as its incumbent value is changed in any way (including adding content to a
previously empty Attribute value field) and will remain so until the 'Enter' key
is pressed, triggering submission of content to the server. If the server
detects an error in the variable content or format it will return an error and
the variable will remain in 'local' state until the issue is resolved. Details
of the mechanism for submitting modified content are described in the `Attribute
Change Lifecycle` section below.
Once a Parameter Attribute has been successfully recorded it is said to be in
the 'server' attribute state, denoting that it has been saved to an underlying
information server used to host the Design specification. Attributes in
'server' state are reflected in the underlying hardware implementation and will
be utilised by the system during execution of the hardware design. 'Server'
state attributes are denoted by the 'information' status icon.
The following diagram shows the process involved in modifying a Parameter
Attribute, mapping 'local' and 'server' states to the activities within it.
Note also the inclusion of Attribute state icons as displayed in the user
interface to denote the state of the Parameter Attribute as activities are
completed.
.. figure:: images/attribute_lifecycle.svg
:align: center
Attribute change lifecycle workflow
.. TIP::
Do not confuse 'local' and 'server' Attribute state with a 'saved' Design.
`Saving a Design` via a Parent Block 'Save' method does not result in all
locally modified Attribute fields being saved to that Design. Only
Attributes already in the 'server' state will be included when the overall
Design is saved. Similarly, modified Attributes now in the 'server' state
will not be stored permanently until the overall Design has been saved.
Attribute Change Lifecycle
~~~~~~~~~~~~~~~~~~~~~~~~~~
Attributes values modified via a `Block Information Panel` are recorded as part
of the overall `design`. We refer to the combined submission and recording
processes as a *'put'* action (as in 'we are putting the value in the
attribute').
Once the 'put' is complete the Attribute value takes immediate effect,
influencing any executing processes as appropriate from that point forward. If
an error is detected during the 'put' process, it is immediately abandoned and
the nature of the error reflected back to the user interface.
The round-trip from submission of a value via the user interface to its
utilisation in the execution environment takes a small but non-deterministic
period of time while data is transferred, validated and ultimately recorded in
the Design. Attribute modification cannot therefore be considered an atomic
process.
Within the user interface the duration of this round-trip is represented by a
`spinning <Processing State>` icon in place of the default information icon
upon submission of the Attribute value. Once the change process is complete the
spinning icon reverts to the default `information <Normal State>` icon. This
reversion is the only reliable indication that a value has been recorded and is
now being utilised.
.. TIP::
Remember the three rules of Attribute change:
* Changing an Attribute value in the user interface has no impact on the
underlying physical system until it has been 'put'.
* Once the 'put' process is complete the change takes immediate effect.
* Changes to an Attribute will not be stored permanently unless the
overall Design has been `saved <Saving a Design>`. Only those
Attribute values that have been 'put' on the server will be recorded
in the saved Design.
Complex Attributes
------------------
An Attribute associated with a Block may itself represent a collection of values
which, when taken together, define the overall Attribute. For example, the
Sequencer Block type contains a single Attribute defining the sequence of steps
performed by underlying hardware when controlling motion of a motor.
The collection of values required by the Attribute are presented in the user
interface as an Attribute Table. The template for the table is generated
dynamically based on the specification of the Attribute within its Block. For
details of utilising the table associated with a specific Attribute refer to the
technical documentation of its Block.
An example of an Attribute Table for the 'Sequencer' Block associated with a
'PANDA' Parent Block is shown below:
.. figure:: screenshots/attribute_table.png
:align: center
Example Attribute Table associated with a complex Attribute
Identifying Table Attributes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A Table Attribute can be identified by the `View/Edit button` associated with it.
Selecting the button opens the Attribute Table within the central panel of the
user interface.
Specifying Attribute Table Content
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Upon opening an Attribute Table you are presented with details of the content of
that Attribute, and the ability to define values. Like Attributes themselves
these values may be selected from a list of pre-defined options, selectable
enable/disable options, or text/numerical inputs.
After adding values the content of the table must be submitted for processing
and recording within the overall system Design. To submit an Attribute Table:
#. Select the **'Submit'** option at the bottom of the Attribute Table.
Updates and changes within the table can be discarded at any time throughout the modification process without impacting the currently recorded specification. To discard changes:
#. Select the **'Discard Changes'** option at the bottom of the Attribute Table.
Static vs. Dynamic Attribute Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Depending on the specification of a table-based Attribute in its underlying
Block the Attribute Table presented may be static or dynamic in nature.
*Static* Attribute Tables contain a pre-defined number of columns and rows
describing the information required for that Attribute. All fields must be
completed in order to fully define the Attribute.
*Dynamic* Attribute Tables contain a pre-defined number of columns but allow for
a varying number of rows. At least one row must be present to define the
Attribute but typically more will be required to fully describe its behaviour.
New rows are added to the table in one of two ways:
* To add a new row to the end of the table select the **'Add'** option below
the current last row entry. A new row is created.
* If the order in which table entries are specified is important (for
example in the case of describing a sequence of activities), rows can be
added before or after previously defined rows as follows:
#. With the Attribute Table displayed select the 'edit' icon associated with an existing row entry or `information <Normal State>` icon associated with a new row. The information panel associated with the row is displayed on the right-hand side.
#. To insert a new row *above* the current one select the **'Insert row above'** option.
#. To insert a new row *below* the current one select the **'Insert row below'** option.
#. An existing row can also be re-ordered by moving it up and down the list of attributes via the **'Move Up'** or **'Move Down'** option associated with it.
Rows that have been previously specified can be removed by deleting them from the Attribute Table. To remove a row:
#. Identify the row to be removed.
#. Select the `information <Normal State>` icon associated with the row. Its information panel is displayed on the right-hand side.
#. Select the **'Delete'** option associated with the 'Delete row' field.
Working with Block Methods
--------------------------
While Block `attributes <attribute>` define the *behaviour* of a Block, `Methods
<method>` define the *actions* it can perform.
A Method is represented in the user interface as a button, labelled with the
name of the action that will be performed. The Method will only be executed if
the button is pressed on the user interface.
A Method may require input parameters defining how the action is to be enacted.
For example, the 'Save' Method associated with the Design within a
`Parent Block` requires a single input parameter - the name of the file to
which Design information is stored. Method parameters:
* Can be edited directly via the `Block Information Panel`.
* Exist in 'local' state until the button associated with the Method is
pressed.
* Should be considered as properties of the Method they are associated with
rather than entities in their own right. Method parameters are never
recorded on the server or saved within the persistent Design
specification.
A full list of the Methods available within each Block and details of their
Method parameters can be found in the documentation defining that Block.
Obtaining information about Method execution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Selecting the 'Information' icon associated with a Block Method displays two
sources of information relating to the Method:
* The right-hand panel displays details about the Method including a
description of its purpose and the parameters it requires to execute
successfully.
* The central panel shows a log recording each instance of Method execution
within your current session. This includes the time of submission and
completion, the status of that completion (e.g. success or failure) and
any alarms associated with that status. Selecting the Method parameter
name from the table header opens further information about that parameter
in the 'Right-hand panel'.
Block Ports
-----------
If their purpose demands it Blocks are capable of *receiving* input information
via one or more `Sink Ports <sink port>` and *transmitting* information via one
or more `Source Ports <source port>`.
A list of the Source ports and Sink ports associated with a Block can be found
in the documentation for that Block.
To aid the design process ports are colour coded to denote the type of
information they transmit (`Source Ports <source port>`) or receive (`Sink Port
<sink port>`). These are summarised below:
.. list-table::
:widths: auto
:align: center
:header-rows: 1
* - Port Type
- Key
* - Boolean
- Blue
* - Int32
- Orange
* - Motor
- Pink
* - NDArray
- Brown
Transmission of information between a Source Port on one Block to a Sink Port on
a second Block is achieved via a `link`. For further information about working
with links see `linking blocks` below.
Linking Blocks
--------------
Blocks are connected to one another via `Links <link>`. A Link joins a
`Source Port` from one Block to a `Sink Port` on another. Both ports must be
of the same type. The ports available to a Block and their specification are
defined in the documentation for that Block.
Creating a Block Link
~~~~~~~~~~~~~~~~~~~~~
To create a Link between two blocks:
#. Select the `Source Port` or `Sink Port` representing one terminus of the link you wish to make by hovering over the Port on the Block. The Port will be temporarily highlighted.
#. Click the left mouse button and while holding it down drag the Link to the Port representing the other terminus of the link you wish to make. The target port will be temporarily highlighted.
#. Release the mouse button. If the `Link constraints <Constraints when Using Links>` defined below have been respected the Link is displayed within the Design Layout.
.. NOTE::
If an error occurs during the creation process details are displayed at
the bottom of the Layout panel.
.. TIP::
To confirm the Link has been created correctly, select it by clicking on
it. The Link is highlighted to denote selection and the Link information
panel opens in the right-hand panel, displaying the names of the
`Source Port` and `Sink Port` associated with the Link.
Interrogating Link Attributes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A `link` does not possess attributes of its own, but selecting it within a
`layout` displays information about its `source port` origin and `sink port`
target in the right-hand panel of the user interface.
To interrogate the attributes associated with the Link you have created:
#. Hover over the Link of interest. The Link changes colour to denote that it may be selected.
#. Click the left mouse button to select the Link. A Link Information Panel opens in the right-hand panel of the user interface.
.. CAUTION::
It is possible to modify the Source and Sink associated with the Link from
the Link Information Panel. Do so cautiously as this will change how blocks
are connected in the overall Design without any acknowledgement that a
change has occurred.
Removing a Link
~~~~~~~~~~~~~~~
If a `link` has been added to a `design` erroneously, or is no longer required within the current Design, it can be removed in one of two ways:
#. *Hitting the 'Delete' or backspace key:*
#. Hover over the Link of interest. The Link changes colour to denote that it may be selected.
#. Click the left mouse button to select the Link. The Link is highlighted.
#. Hit the *Delete* or *backspace* key on your keyboard. The Link is removed from the Design Layout.
#. *Via the Link Information Panel:*
#. Hover over the Link of interest. The Link changes colour to denote that it may be selected.
#. Click the left mouse button to select the Link. A Link Information Panel opens in the right-hand panel of the user interface.
#. Select the **'Delete'** button in the Link Information Panel. The Link is removed from the Design Layout.
Constraints When Using Links
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Links are subject to the following constraints:
* A `sink port` can only accept a single Link.
* Multiple links can originate from a `source port`, connecting multiple
Blocks to that Source Port.
* Links can only be used to connect a `source port` and a `sink port` of the
same logical type (e.g. boolean, int32). Port types are specified in the
documentation associated with the Block of interest, and colour coded
within the Design Layout to aid identification of similarly typed ports.
Saving a Design
---------------
You can save your Design at any time during the creation or modification
process, and we recommend you do so regularly.
To save a Design:
#. Navigate to the `Root Block` representing the highest level of the Design you wish to save.
#. Navigate to the 'Save' Attribute Group at the bottom of the left-hand panel. Expand it if necessary.
#. Enter a descriptive name for the Design in the 'Design' field. Note this will be used later to identify existing Designs available for use.
.. TIP::
To save your Design with the same name as the currently open Design
leave the 'Design' field blank.
#. Select the **'Save'** button. The information icon to the left of the button will spin to denote the save is in progress, returning to the information icon when the Design is saved.
.. NOTE::
If an error is detected during the save process a red warning icon is displayed next to the button.
Opening an Existing Design
--------------------------
A `parent block` may facilitate multiple `designs <design>`, each reflecting
operation of that Block within different scenarios. Only a single Design can be
utilised at any given time. By default this is the Design that is open at the
time of system execution.
When a `parent block` is opened a list of all `Designs <design>` within it is
available via the 'Design' Attribute displayed in the left-hand panel.
Selecting a pre-existing Design results in the Design being presented in the
central Layout panel.
To open an existing Design:
#. Navigate to the `parent block` representing the highest level of the system you wish to use.
#. Navigate to the 'Design' Attribute and select the dropdown arrow to display the list of available Designs.
#. Select the Design you wish to use.
#. Select the `View/Edit Button` associated with the 'Layout' Attribute.
.. TIP::
If no previously saved designs exist the 'Design' Attribute list will be
empty.
aergo.herapy.obj package
========================
Submodules
----------
aergo.herapy.obj.abi module
---------------------------
.. automodule:: aergo.herapy.obj.abi
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.address module
-------------------------------
.. automodule:: aergo.herapy.obj.address
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.aer module
---------------------------
.. automodule:: aergo.herapy.obj.aer
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.aergo\_conf module
-----------------------------------
.. automodule:: aergo.herapy.obj.aergo_conf
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.block module
-----------------------------
.. automodule:: aergo.herapy.obj.block
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.block\_hash module
-----------------------------------
.. automodule:: aergo.herapy.obj.block_hash
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.block\_meta\_stream module
-------------------------------------------
.. automodule:: aergo.herapy.obj.block_meta_stream
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.block\_stream module
-------------------------------------
.. automodule:: aergo.herapy.obj.block_stream
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.blockchain\_info module
----------------------------------------
.. automodule:: aergo.herapy.obj.blockchain_info
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.blockchain\_status module
------------------------------------------
.. automodule:: aergo.herapy.obj.blockchain_status
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.call\_info module
----------------------------------
.. automodule:: aergo.herapy.obj.call_info
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.chain\_id module
---------------------------------
.. automodule:: aergo.herapy.obj.chain_id
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.change\_conf\_info module
------------------------------------------
.. automodule:: aergo.herapy.obj.change_conf_info
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.consensus\_info module
---------------------------------------
.. automodule:: aergo.herapy.obj.consensus_info
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.event module
-----------------------------
.. automodule:: aergo.herapy.obj.event
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.event\_stream module
-------------------------------------
.. automodule:: aergo.herapy.obj.event_stream
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.name\_info module
----------------------------------
.. automodule:: aergo.herapy.obj.name_info
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.node\_info module
----------------------------------
.. automodule:: aergo.herapy.obj.node_info
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.peer module
----------------------------
.. automodule:: aergo.herapy.obj.peer
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.private\_key module
------------------------------------
.. automodule:: aergo.herapy.obj.private_key
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.sc\_state module
---------------------------------
.. automodule:: aergo.herapy.obj.sc_state
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.stream module
------------------------------
.. automodule:: aergo.herapy.obj.stream
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.transaction module
-----------------------------------
.. automodule:: aergo.herapy.obj.transaction
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.tx\_hash module
--------------------------------
.. automodule:: aergo.herapy.obj.tx_hash
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.tx\_result module
----------------------------------
.. automodule:: aergo.herapy.obj.tx_result
:members:
:undoc-members:
:show-inheritance:
aergo.herapy.obj.var\_proof module
----------------------------------
.. automodule:: aergo.herapy.obj.var_proof
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: aergo.herapy.obj
:members:
:undoc-members:
:show-inheritance:
trump.extensions package
========================
Submodules
----------
trump.extensions.arghack module
-------------------------------
.. automodule:: trump.extensions.arghack
:members:
:undoc-members:
:show-inheritance:
trump.extensions.feed_munging module
------------------------------------
.. automodule:: trump.extensions.feed_munging
:members:
:undoc-members:
:show-inheritance:
trump.extensions.symbol_aggs module
-----------------------------------
.. automodule:: trump.extensions.symbol_aggs
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: trump.extensions
:members:
:undoc-members:
:show-inheritance:
.. change::
:tags: bug, mysql
:tickets: 4804
The MySQL dialects will emit "SET NAMES" at the start of a connection when
charset is given to the MySQL driver, to appease an apparent behavior
observed in MySQL 8.0 that raises a collation error when a UNION includes
string columns unioned against columns of the form CAST(NULL AS CHAR(..)),
which is what SQLAlchemy's polymorphic_union function does. The issue
seems to have affected PyMySQL for at least a year, however has recently
appeared as of mysqlclient 1.4.4 based on changes in how this DBAPI creates
a connection. As the presence of this directive impacts three separate
MySQL charset settings which each have intricate effects based on their
presense, SQLAlchemy will now emit the directive on new connections to
ensure correct behavior.
.. automatically generated by TFL.sphinx_autogen, don't change manually
Module `M_Auto_Combine_Dicts`
-------------------------------------------------------------------------------
.. automodule:: _TFL._Meta.M_Auto_Combine_Dicts
:members:
:special-members:
.. _plot_2d_boxes:
plotting example: plot_2d_boxes
===============================
This example shows how to use nmrglue and
`matplotlib <http://matplotlib.sourceforge.net/index.html>`_ to create figures
for examining data or publication. In this example the box limits used in
:ref:`integrate_2d` are graphically examined. A contour plot of each peak is
plotted with the box limits indicated by the dark dashed line. To check peak
assignments see :ref:`plot_2d_assignments`.
[`source code <el/plotting/2d_boxes/plot_boxes.py>`_]
.. literalinclude:: el/plotting/2d_boxes/plot_boxes.py
[`input file <el/plotting/2d_boxes/limits.in>`_]
.. literalinclude:: el/plotting/2d_boxes/limits.in
Sample Figure
[`T11.png <el/plotting/2d_boxes/T11.png>`_]
.. image:: el/plotting/2d_boxes/T11.png
Introduction to Scala
======================
GeoScript.scala is implemented in Scala, a functional/object hybrid language
with strong facilities for Java interoperability. In order to use it
effectively, you will need to understand some basics of Scala syntax. This
guide explains some of the Scala constructs most frequently used in
GeoScript.scala code.
Variables
---------
Variables in Scala must be declared before they are used. This is accomplished
using the ``val`` or ``var`` statement::
val a = "12"
var b = List(1, 2, 3)
Variables declared with ``val`` can only be assigned once. Variables declared
with ``var`` may be overwritten.
Type Annotations
----------------
All values in Scala have a type associated with them. In the examples above,
we allowed Scala to "infer" the types based on the values on the other side of
the expression. However, sometimes Scala is unable to infer types and it is
necessary to provide them explicitly. We can also specify types where Scala
would normally infer them in order to use a more generic type, or to add an
extra sanity check to code::
val a: Int = "12" // Fails, since "12" is a String, not an Int
var b // Fails since we are not providing an initial value from which to infer the type
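For contrast, annotations that agree with the value compile fine, and an explicit annotation can deliberately widen the type Scala would otherwise infer (the names below are illustrative):

```scala
// Annotation matches the literal's type exactly.
val count: Int = 12

// Deliberately widening: Scala would infer List[Int],
// but we can declare the more general Seq[Int].
val items: Seq[Int] = List(1, 2, 3)

// Any is the top type; it can hold any value.
val anything: Any = "12"
```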
Blocks
------
In Scala, we can set aside a sub-section of code as a "block" by wrapping it in
curly braces. This is useful to limit scope. Blocks automatically return the
result of their final statement::
val total: Int = {
val a = 1
val b = 2
val c = 3
a + b + c
}
// total is now 6
// a, b, and c are undeclared outside of the {} pair
Functions
---------
Function definitions in Scala use the ``def`` keyword::
def makeBetter(s: String): String = {
"better " + s
}
makeBetter("mousetrap") // returns "better mousetrap"
For single-statement function definitions we can also omit the curly-braces
(similar to how statements and blocks are interchangeable)::
def oneLiner(s: String): Int = s.length
The Scala compiler doesn't infer the types of function arguments. Function
return types *can* be inferred, but it is generally wise to declare them
anyway. Scala will not compile code where functions that differ only in their
argument lists have implicit return types.
Objects and Classes
-------------------
Since Scala is an object-oriented language, it deals with objects, or
"instances" of "classes". For example, the string literal "abc" is an instance
of the String class. GeoScript provides several classes for you. In general,
you won't need to define your own classes when using GeoScript. Aside from
types that can be expressed as literals, you can create objects using the
``new`` keyword. Many classes also (or instead) provide factories called
"companion objects" that can be used to create instances of those objects
without the ``new`` keyword::
class Message(msg: String) {
def greet() { println(msg) }
}
val message = new Message("hello")
message.greet()
Operators
---------
In Scala, methods on objects can also be called in the style of an infix operator::
val message = "123"
message substring 1 // returns "23", the same as message.substring(1)
The reverse is also true::
val message = "abc"
val expandedMessage = "expanded ".+(message)
Function Objects
----------------
In Scala, functions are first-class objects, and can be passed around like any other value::
def increase(x: Int) = x + 1
List(1, 2, 3).map(increase) // returns List(2, 3, 4)
Additionally, there is lightweight syntax for creating anonymous functions::
List(1, 2, 3).map({ x => x + 1 })
This can even be used with operator syntax to create methods that look like native language constructs::
List(1, 2, 3) map { x => x + 1 }
Pattern Matching
----------------
Scala also provides a feature called pattern matching. This is like the ``switch/case`` construct seen in many procedural and object-oriented languages in that it allows comparing a value against many conditions with a distinct response to each::
val guess = 0
guess match {
case 0 => "Known value: zero"
case _ => "I don't know how to handle that"
}
// produces "Known value: zero"
Here we've matched against the literal value ``0``, and the catch-all value ``_``. We can also use patterns to express type requirements::
val items = List(1, "a string", false)
items map { item =>
item match {
case i: Int => "Integer!"
case s: String => "String!"
case b: Boolean => "Boolean!"
}
}
// produces List("Integer!", "String!", "Boolean!")
An interesting aspect of this syntax is that it *names a variable*. For example, in the line::

    case i: Int => "Integer!"

a variable named ``i`` is defined within that case block, providing access to the Integer methods on the item. (Since the List contains items of different types, the ``item`` variable is inferred to have the most specific type that can contain any of them. In this case, that means a type that can contain an Int, a String, or a Boolean. This type is ``Any``, a Scala type that can hold any value. ``Any`` doesn't have many useful methods, however.)
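A minimal sketch of that binding:

```scala
val item: Any = 42

val described = item match {
  // Inside this case, 'i' has type Int, so Int methods such as * are usable.
  case i: Int    => "twice is " + (i * 2)
  case s: String => "length is " + s.length
  case _         => "something else"
}

println(described)  // prints twice is 84
```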
A third form of pattern uses a Scala feature called "extractors." Patterns using extractors also define variables that can be used inside of the cases they define. For an example, let's look at the find() method on the Scala List class. List.find() accepts a function that inspects list elements, and returns an Option. An Option is a Scala standard class that represents a possible (but not guaranteed) result of an operation. More specifically, List.find() gives a None back if the list doesn't contain any element that satisfies the search condition, or Some(item) otherwise::
val opt = List(1, 2, 3) find { x => x % 2 == 0 }
opt match {
      case Some(item) => println("List contained the even number: " + item)
      case None => println("List did not contain an even number.")
}
Learning More
-------------
These are the basics. If you are still confused, you can check out some further Scala introductions at these web sites:
* http://scala-lang.org/node/25
* http://www.artima.com/scalazine/articles/steps.html
* http://www.simplyscala.com/
***************
Armadillo Vault
***************
.. figure:: /_images/armadillo.png
:figclass: figure
:class: figure-img img-fluid
.. rst-class:: lead
    Construct an assembly data structure for the Armadillo Vault that can be used for equilibrium calculations, generation of fabrication data, or assembly simulations.

We construct the assembly from a list of meshes representing the blocks.
The JSON file containing the meshes is available here: :download:`armadillo.json <armadillo.json>`.
.. literalinclude:: armadillo.py
:language: python
Configuring email
=================

This page provides a detailed guide to configuring how the game works with email. This is not essential for development, but it is critical for publishing the game, since many activities (registration, password changes, notifications) require sending mail.

The game's email configuration can be split into two parts: configuring how messages are sent, and configuring the connection to the mail server.

Since mail delivery is a complex topic, we welcome any additions to this guide.

Sending messages
----------------

A separate background worker, ``post_service``, is responsible for sending messages. Some of its settings can be found in ``the_tale.post_service.conf`` and overridden in ``settings_local.py`` by prefixing their names with ``POST_SERVICE_``. Those settings come with their own comments, which are not repeated here.

To enable message sending, set

``POST_SERVICE_ENABLE_MESSAGE_SENDER = True``

and set the value ``allowed`` in the ``settings`` table (via the Django admin) under the key equal to the value of ``SETTINGS_ALLOWED_KEY``.

In addition, the following parameters must be set in ``settings_local.py``:

- ``SERVER_EMAIL``: the default sender address for outgoing mail.
- ``EMAIL_NOREPLY``: the address shown on messages players should not reply to; a long value such as ``u'«Сказка» <no-reply@the-tale.org>'`` may be used here.
- ``EMAIL_SUPPORT``: the support service address; a long value such as ``u'«Сказка» <support@the-tale.org>'`` may be used here.
- ``EMAIL_SUPPORT_SHORT``: the short support address (just the email address itself, without a display name).
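Put together, a minimal ``settings_local.py`` fragment might look like this sketch (all addresses are placeholders, not the project's real values):

```python
# settings_local.py -- illustrative values only; substitute your own addresses.

# Enable the post_service background mail sender.
POST_SERVICE_ENABLE_MESSAGE_SENDER = True

# Default sender address for outgoing mail.
SERVER_EMAIL = 'game@example.com'

# Address used on messages players should not reply to.
EMAIL_NOREPLY = 'The Game <no-reply@example.com>'

# Support contact: long form with a display name, and the bare address.
EMAIL_SUPPORT = 'The Game <support@example.com>'
EMAIL_SUPPORT_SHORT = 'support@example.com'
```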
Configuring the connection to the mail server
---------------------------------------------

Mail can be sent either through your own mail service (for example, by setting up `Postfix <http://www.postfix.org/>`_) or through an existing one (for example, `GMail <http://gmail.com/>`_).

Each option has its own subtleties regarding bulk mailing and spam filters, but at first it is simpler to use an existing service (at launch, The Tale used GMail and later switched to Postfix).

The connection to the mail service is configured in the `standard Django way <https://docs.djangoproject.com/en/1.8/topics/email/>`_.

For your own service it is enough to specify the following settings:

- ``EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'``
- ``EMAIL_HOST_USER``: a service user with permission to send mail.
- ``EMAIL_HOST_PASSWORD``: that user's password.

To use third-party services you may need to add a few other parameters (see the Django configuration and this post).

Sending mail during development
-------------------------------

For game development there is usually no need to configure real mail delivery, since in most cases it is not required.

Mail output can therefore be directed to a directory on the file system, for example:

- ``EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'``
- ``EMAIL_FILE_PATH = '/tmp/emails'``
``yokome.features.tree``
========================
.. automodule:: yokome.features.tree
:members:
statsmodels.tsa.statespace.representation.Representation.prefix
===============================================================
.. currentmodule:: statsmodels.tsa.statespace.representation
.. autoproperty:: Representation.prefix
.. include:: /_includes/all.rst
.. _features:
********
Features
********
This section gives you a brief overview about the available features.
**Table of Contents**
.. contents:: :local:
Projects
========
Unlimited projects
^^^^^^^^^^^^^^^^^^
The number of projects you can add is, so to speak, unlimited. Simply add new project directories
and they become automatically available in no time.
Automated virtual hosts
^^^^^^^^^^^^^^^^^^^^^^^
Creating a new project is literally done by creating a new directory on the file system.
Everything else is automatically taken care of in the background. Virtual hosts are added
instantly without having to restart any services.
Automated SSL certificates
^^^^^^^^^^^^^^^^^^^^^^^^^^
Whenever a new project is created, SSL certificates are generated as well and assigned to that
virtual host. Those certificates are signed by the Devilbox certificate authority which can be
imported into your local browser to make all certificates valid and trusted.
Automated DNS records
^^^^^^^^^^^^^^^^^^^^^

The built-in DNS server will automatically make a DNS record for every project available to
your host system by using a wild-card DNS record. This removes the need to create developer
DNS records in ``/etc/hosts``.

Email catch-all
^^^^^^^^^^^^^^^

All outgoing emails originating from your projects are intercepted, stored locally and
can be viewed within the bundled intranet.
Log files
^^^^^^^^^
Log files are available for every service, either as Docker logs or as actual log files
mounted into the Devilbox git directory. The web and PHP servers offer log files for each
project separately.
Virtual host domains
^^^^^^^^^^^^^^^^^^^^
Each of your virtual hosts will have its own domain. The TLD can be freely chosen, such as ``*.loc``,
``*.local``, ``*.com``, ``*.org`` or whatever you require.
Service and version choice
==========================
Selective start
^^^^^^^^^^^^^^^
Run only the Docker containers you actually need, but be able to load others on the fly once
they are needed. For example, you could first start up only PHP and MySQL and, should you later
require a Redis server, attach it to the Devilbox stack without having to restart anything.
Version choice
^^^^^^^^^^^^^^
Each provided service (such as PHP, MySQL, PostgreSQL, etc) comes in many different versions.
You can enable any combination that matches your perfect development stack.
LAMP and MEAN stack
^^^^^^^^^^^^^^^^^^^
Run a full LAMP stack with Apache or Nginx and even attach MEAN stack services such as MongoDB.
Configuration
=============
Global configuration
^^^^^^^^^^^^^^^^^^^^
All services can be configured globally by including your very own customized
``php.ini``, ``php-fpm.conf``, ``my.cnf``, ``nginx.conf``, ``apache.conf`` and other
configuration files.
Version specific configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each version of PHP can have its own ``php.ini`` and ``php-fpm.conf`` files,
each version of MySQL, MariaDB or PerconaDB can have its own ``my.cnf`` files,
each Apache..., each Nginx... you get the idea.
Project specific configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Even down to projects, the Devilbox allows for full customization when it comes to virtual host
settings via |ext_lnk_project_vhost_gen|.
Intranet
========
Command & Control Center
^^^^^^^^^^^^^^^^^^^^^^^^
The intranet is your Command & Control Center showing you all applied settings, mount points,
port exposures, hostnames and any errors including how they can be resolved.
Third-party tools
^^^^^^^^^^^^^^^^^
Mandatory web projects are also shipped: |ext_lnk_tool_phpmyadmin|, |ext_lnk_tool_adminer| and
|ext_lnk_tool_opcachegui| as well as a web GUI to view all sent emails.
Dockerized
==========
Portable
^^^^^^^^
Docker containers run on Linux, Windows and macOS, and so does the Devilbox. This ensures that no
matter what operating system you are currently on, you can always run your development stack.
Built nightly
^^^^^^^^^^^^^
Docker images (at least the official Devilbox Docker images) are built nightly and pushed to
Docker Hub to ensure you always have the latest versions installed and stay up to date with
any security patches that are available.
Ships popular development tools
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Devilbox is also designed to be a development environment offering many tools used for
everyday web development, no matter if frontend or backend.
Work inside the container
^^^^^^^^^^^^^^^^^^^^^^^^^
Instead of working on your host operating system, you can do everything inside the container.
This allows you to have all tools pre-installed and a working unix environment ready.
Work inside and outside the container interchangeably
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
No matter whether you work on your host operating system or inside the Docker container, special
mount points and port-forwards are already in place to make both look the same to you.
Others
======
Work offline
^^^^^^^^^^^^
The Devilbox only requires internet initially to pull the required Docker images, once this is done
you can work completely offline. No need for an active internet connection.
Hacking
^^^^^^^
Last but not least, the Devilbox is basically just a ``docker-compose.yml`` file and you can
easily add any Docker images you are currently missing in the Devilbox setup.
.. currentmodule:: PyQt5.QtPrintSupport
QPrintDialog
------------
.. class:: QPrintDialog
`C++ documentation <http://doc.qt.io/qt-5/qprintdialog.html>`_
Telemetry Linux - Message Parsing APIs
---------------------------------------
.. note::
These APIs are used internally in the Linux implementation
of Telemetry for message parsing
.. doxygengroup:: Telemetry-Message
:project: telemetry-linux
   :content-only:
==================
AutoML Interpreter
==================
autox_interpreter provides AutoML solutions for machine learning interpretation.

AutoX covers the following interpretable machine learning methods:
Model-based interpretation
* nn model interpretation, see `nn_interpret <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/nn_interpret.ipynb>`_
* lightgbm model interpretation, see `lgb_interpret <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/lgb_interpret.ipynb>`_
* lr model interpretation, see `lr_interpret <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/lr_interpret.ipynb>`_
Global interpretation
* tree-based model, see `global_surrogate_tree_demo <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/global_interpretation/global_surrogate_tree_demo.ipynb>`_
Local interpretation
* LIME, see `lime_demo <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/local_interpretation/lime_demo.ipynb>`_
* SHAP, see `shap_demo <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/local_interpretation/shap_demo.ipynb>`_
Influential interpretation
* nn, see `influential_interpretation_nn <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/influential_instances/influential_interpretation_nn.ipynb>`_
* nn_sgd, see `influential_interpretation_nn_sgd <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/influential_instances/influential_interpretation_nn_sgd.ipynb>`_
Prototypes and Criticisms
* MMD-critic, see `MMD_demo <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/prototypes_and_criticisms/MMD_demo.ipynb>`_
* ProtoDash algorithm, see `ProtodashExplainer <https://github.com/4paradigm/AutoX/blob/master/autox/autox_interpreter/interpreter_demo/prototypes_and_criticisms/ProtodashExplainer.ipynb>`_
Allow SPICE FITS reader to handle dumbbell windows.
Please read
`UPGRADE-v2.0.md <https://github.com/graphql-python/graphene/blob/master/UPGRADE-v2.0.md>`__
to learn how to upgrade to Graphene ``2.0``.
--------------
|Graphene Logo| Graphene-Django |Build Status| |PyPI version| |Coverage Status|
===============================================================================
A `Django <https://www.djangoproject.com/>`__ integration for
`Graphene <http://graphene-python.org/>`__.
Documentation
-------------
`Visit the documentation to get started! <https://docs.graphene-python.org/projects/django/en/latest/>`__
Quickstart
----------
For installing graphene, just run this command in your shell
.. code:: bash
pip install "graphene-django>=3"
Settings
~~~~~~~~
.. code:: python
INSTALLED_APPS = (
# ...
'graphene_django',
)
GRAPHENE = {
'SCHEMA': 'app.schema.schema' # Where your Graphene schema lives
}
Urls
~~~~
We need to set up a ``GraphQL`` endpoint in our Django app, so we can
serve the queries.
.. code:: python
from django.conf.urls import url
from graphene_django.views import GraphQLView
urlpatterns = [
# ...
url(r'^graphql$', GraphQLView.as_view(graphiql=True)),
]
Examples
--------
Here is a simple Django model:
.. code:: python
from django.db import models
class UserModel(models.Model):
name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
To create a GraphQL schema for it you simply have to write the
following:
.. code:: python
from graphene_django import DjangoObjectType
import graphene
class User(DjangoObjectType):
class Meta:
model = UserModel
class Query(graphene.ObjectType):
users = graphene.List(User)
        def resolve_users(root, info):
            return UserModel.objects.all()
schema = graphene.Schema(query=Query)
Then you can simply query the schema:
.. code:: python
query = '''
query {
users {
name,
lastName
}
}
'''
result = schema.execute(query)
To learn more check out the following `examples <examples/>`__:
- **Schema with Filtering**: `Cookbook example <examples/cookbook>`__
- **Relay Schema**: `Starwars Relay example <examples/starwars>`__
Contributing
------------
See `CONTRIBUTING.md <CONTRIBUTING.md>`__.
.. |Graphene Logo| image:: http://graphene-python.org/favicon.png
.. |Build Status| image:: https://github.com/graphql-python/graphene-django/workflows/Tests/badge.svg
:target: https://github.com/graphql-python/graphene-django/actions
.. |PyPI version| image:: https://badge.fury.io/py/graphene-django.svg
:target: https://badge.fury.io/py/graphene-django
.. |Coverage Status| image:: https://coveralls.io/repos/graphql-python/graphene-django/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/graphql-python/graphene-django?branch=master
Logging
=======
.. autosummary::
:toctree: logging/client
:nosignatures:
:template: autosummary/service_client.rst
oci.logging.LoggingManagementClient
oci.logging.LoggingManagementClientCompositeOperations
--------
Models
--------
.. autosummary::
:toctree: logging/models
:nosignatures:
:template: autosummary/model_class.rst
oci.logging.models.Archiving
oci.logging.models.Category
oci.logging.models.ChangeLogGroupCompartmentDetails
oci.logging.models.ChangeLogLogGroupDetails
oci.logging.models.ChangeLogSavedSearchCompartmentDetails
oci.logging.models.ChangeUnifiedAgentConfigurationCompartmentDetails
oci.logging.models.Configuration
oci.logging.models.CreateLogDetails
oci.logging.models.CreateLogGroupDetails
oci.logging.models.CreateLogSavedSearchDetails
oci.logging.models.CreateUnifiedAgentConfigurationDetails
oci.logging.models.GrokPattern
oci.logging.models.GroupAssociationDetails
oci.logging.models.Log
oci.logging.models.LogGroup
oci.logging.models.LogGroupSummary
oci.logging.models.LogIncludedSearch
oci.logging.models.LogIncludedSearchSummary
oci.logging.models.LogIncludedSearchSummaryCollection
oci.logging.models.LogSavedSearch
oci.logging.models.LogSavedSearchSummary
oci.logging.models.LogSavedSearchSummaryCollection
oci.logging.models.LogSummary
oci.logging.models.OciService
oci.logging.models.Parameter
oci.logging.models.ResourceType
oci.logging.models.ServiceSummary
oci.logging.models.Source
oci.logging.models.SourceUpdateDetails
oci.logging.models.UnifiedAgentApache2Parser
oci.logging.models.UnifiedAgentApacheErrorParser
oci.logging.models.UnifiedAgentAuditdParser
oci.logging.models.UnifiedAgentConfiguration
oci.logging.models.UnifiedAgentConfigurationCollection
oci.logging.models.UnifiedAgentConfigurationSummary
oci.logging.models.UnifiedAgentCsvParser
oci.logging.models.UnifiedAgentGrokParser
oci.logging.models.UnifiedAgentLoggingConfiguration
oci.logging.models.UnifiedAgentLoggingDestination
oci.logging.models.UnifiedAgentLoggingSource
oci.logging.models.UnifiedAgentMsgpackParser
oci.logging.models.UnifiedAgentMultilineGrokParser
oci.logging.models.UnifiedAgentMultilineParser
oci.logging.models.UnifiedAgentNoneParser
oci.logging.models.UnifiedAgentParser
oci.logging.models.UnifiedAgentRegexParser
oci.logging.models.UnifiedAgentServiceConfigurationDetails
oci.logging.models.UnifiedAgentSyslogParser
oci.logging.models.UnifiedAgentTailLogSource
oci.logging.models.UnifiedAgentTsvParser
oci.logging.models.UnifiedAgentWindowsEventSource
oci.logging.models.UnifiedJSONParser
oci.logging.models.UpdateConfigurationDetails
oci.logging.models.UpdateLogDetails
oci.logging.models.UpdateLogGroupDetails
oci.logging.models.UpdateLogSavedSearchDetails
oci.logging.models.UpdateUnifiedAgentConfigurationDetails
oci.logging.models.WorkRequest
oci.logging.models.WorkRequestError
oci.logging.models.WorkRequestLog
oci.logging.models.WorkRequestResource
oci.logging.models.WorkRequestSummary
Selectables, Tables, FROM objects
=================================
The term "selectable" refers to any object that rows can be selected from;
in SQLAlchemy, these objects descend from :class:`_expression.FromClause` and their
distinguishing feature is their :attr:`_expression.FromClause.c` attribute, which is
a namespace of all the columns contained within the FROM clause (these
elements are themselves :class:`_expression.ColumnElement` subclasses).
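As a quick illustration (assuming SQLAlchemy 1.4+ is installed; the table and column names here are invented):

```python
from sqlalchemy import column, select, table

# table() builds a TableClause, a minimal FromClause; its .c attribute
# is the ColumnElement namespace described above.
user = table("user", column("id"), column("name"))

stmt = select(user.c.name).where(user.c.id == 5)
print(stmt)
# Renders roughly as:
#   SELECT "user".name
#   FROM "user"
#   WHERE "user".id = :id_1
```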
.. currentmodule:: sqlalchemy.sql.expression
.. _selectable_foundational_constructors:
Selectable Foundational Constructors
--------------------------------------
Top level "FROM clause" and "SELECT" constructors.
.. autofunction:: except_
.. autofunction:: except_all
.. autofunction:: exists
.. autofunction:: intersect
.. autofunction:: intersect_all
.. autofunction:: select
.. autofunction:: table
.. autofunction:: union
.. autofunction:: union_all
.. autofunction:: values
.. _fromclause_modifier_constructors:
Selectable Modifier Constructors
---------------------------------
Functions listed here are more commonly available as methods from
:class:`_sql.FromClause` and :class:`_sql.Selectable` elements, for example,
the :func:`_sql.alias` function is usually invoked via the
:meth:`_sql.FromClause.alias` method.
.. autofunction:: alias
.. autofunction:: cte
.. autofunction:: join
.. autofunction:: lateral
.. autofunction:: outerjoin
.. autofunction:: tablesample
Selectable Class Documentation
--------------------------------
The classes here are generated using the constructors listed at
:ref:`selectable_foundational_constructors` and
:ref:`fromclause_modifier_constructors`.
.. autoclass:: Alias
:members:
.. autoclass:: AliasedReturnsRows
:members:
.. autoclass:: CompoundSelect
:inherited-members: ClauseElement
:members:
.. autoclass:: CTE
:members:
.. autoclass:: Executable
:members:
.. autoclass:: Exists
:members:
.. autoclass:: FromClause
:members:
.. autoclass:: GenerativeSelect
:members:
.. autoclass:: HasCTE
:members:
.. autoclass:: HasPrefixes
:members:
.. autoclass:: HasSuffixes
:members:
.. autoclass:: Join
:members:
.. autoclass:: Lateral
:members:
.. autoclass:: ReturnsRows
:members:
:inherited-members: ClauseElement
.. autoclass:: ScalarSelect
:members:
.. autoclass:: Select
:members:
:inherited-members: ClauseElement
:exclude-members: memoized_attribute, memoized_instancemethod, append_correlation, append_column, append_prefix, append_whereclause, append_having, append_from, append_order_by, append_group_by
.. autoclass:: Selectable
:members:
:inherited-members: ClauseElement
.. autoclass:: SelectBase
:members:
:inherited-members: ClauseElement
:exclude-members: memoized_attribute, memoized_instancemethod
.. autoclass:: Subquery
:members:
.. autoclass:: TableClause
:members:
:inherited-members:
.. autoclass:: TableSample
:members:
.. autoclass:: TableValuedAlias
:members:
.. autoclass:: TextualSelect
:members:
:inherited-members:
.. autoclass:: Values
:members:
Label Style Constants
---------------------
Constants used with the :meth:`_sql.GenerativeSelect.set_label_style`
method.
.. autodata:: LABEL_STYLE_DISAMBIGUATE_ONLY
.. autodata:: LABEL_STYLE_NONE
.. autodata:: LABEL_STYLE_TABLENAME_PLUS_COL
.. data:: LABEL_STYLE_DEFAULT
The default label style, refers to :data:`_sql.LABEL_STYLE_DISAMBIGUATE_ONLY`.
.. versionadded:: 1.4
.. seealso::
:meth:`_sql.Select.set_label_style`
:meth:`_sql.Select.get_label_style`
| 20.446328 | 196 | 0.71318 |
] | null | null | null | .. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
Yardstick Restful API
======================
Abstract
--------
Yardstick supports a RESTful API as of the Danube release.
Available API
-------------
/yardstick/env/action
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to perform work related to the environment. For now, the following actions are supported:
1. Prepare the Yardstick environment (including fetching the openrc file, getting the external network, and loading images).
2. Start an InfluxDB Docker container and configure Yardstick to output to InfluxDB.
3. Start a Grafana Docker container and configure it with the InfluxDB container.
Which action is performed depends on the parameters.
Method: POST
Prepare Yardstick Environment
Example::
{
'action': 'prepareYardstickEnv'
}
This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
Start and configure an InfluxDB Docker container
Example::
{
'action': 'createInfluxDBContainer'
}
This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
Start and configure a Grafana Docker container
Example::
{
'action': 'createGrafanaContainer'
}
This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
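As a sketch, any of these actions can be triggered with a small helper using only the Python standard library; the endpoint URL below assumes the default local deployment shown in the examples in this guide:

```python
import json
from urllib import request

# Assumed default endpoint of a locally running Yardstick API server
ENV_ACTION_URL = "http://localhost:8888/yardstick/env/action"

def build_action_payload(action):
    """Serialize the JSON body expected by the action APIs."""
    return json.dumps({"action": action}).encode("utf-8")

def post_action(action, url=ENV_ACTION_URL):
    """POST an action; the response is expected to carry a task id to poll."""
    req = request.Request(
        url,
        data=build_action_payload(action),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# post_action('createInfluxDBContainer')  # requires a running Yardstick API server
```

The returned task id can then be passed to the /yardstick/asynctask API described below.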
/yardstick/asynctask
^^^^^^^^^^^^^^^^^^^^
Description: This API is used to get the status of an asynchronous task
Method: GET
Get the status of an asynchronous task
Example::
http://localhost:8888/yardstick/asynctask?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
The returned status will be 0 (running), 1 (finished), or 2 (failed).
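Because the action APIs are asynchronous, a client typically polls this endpoint until the status leaves the running state. Here is a minimal polling sketch; the status-fetching callable is left abstract so it can wrap any HTTP client:

```python
import time

# Status values documented for /yardstick/asynctask
RUNNING, FINISHED, FAILED = 0, 1, 2

def wait_for_task(fetch_status, interval=1.0, max_polls=60):
    """Poll fetch_status() until the task is finished or failed."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in (FINISHED, FAILED):
            return status
        time.sleep(interval)
    raise TimeoutError("task did not complete within max_polls polls")

# Example with a stubbed status source instead of a live HTTP call:
statuses = iter([RUNNING, RUNNING, FINISHED])
print(wait_for_task(lambda: next(statuses), interval=0.0))  # 1
```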
/yardstick/testcases
^^^^^^^^^^^^^^^^^^^^
Description: This API is used to list all release test cases currently available in Yardstick.
Method: GET
Get a list of release test cases
Example::
http://localhost:8888/yardstick/testcases
/yardstick/testcases/release/action
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a yardstick release test case.
Method: POST
Run a release test case
Example::
{
'action': 'runTestCase',
'args': {
'opts': {},
'testcase': 'tc002'
}
}
This is an asynchronous API. You need to call /yardstick/results to get the result.
/yardstick/testcases/samples/action
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a yardstick sample test case.
Method: POST
Run a sample test case
Example::
{
'action': 'runTestCase',
'args': {
'opts': {},
'testcase': 'ping'
}
}
This is an asynchronous API. You need to call /yardstick/results to get the result.
/yardstick/testsuites/action
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a yardstick test suite.
Method: POST
Run a test suite
Example::
{
'action': 'runTestSuite',
'args': {
'opts': {},
'testcase': 'smoke'
}
}
This is an asynchronous API. You need to call /yardstick/results to get the result.
/yardstick/results
^^^^^^^^^^^^^^^^^^
Description: This API is used to get the test results of a certain task. When you call the /yardstick/testcases/samples/action API, it returns a task id. You can use the returned task id to get the results with this API.
Get test results of one task
Example::
http://localhost:8888/yardstick/results?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
This API will return a list of test case results.

search\_engine\_parser.core package
===================================
Submodules
----------
search\_engine\_parser.core.base module
---------------------------------------
.. automodule:: search_engine_parser.core.base
:members:
:undoc-members:
:show-inheritance:
search\_engine\_parser.core.cli module
--------------------------------------
.. automodule:: search_engine_parser.core.cli
:members:
:undoc-members:
:show-inheritance:
search\_engine\_parser.core.consts module
-----------------------------------------
.. automodule:: search_engine_parser.core.consts
:members:
:undoc-members:
:show-inheritance:
search\_engine\_parser.core.engines module
------------------------------------------
.. automodule:: search_engine_parser.core.engines
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: search_engine_parser.core
:members:
:undoc-members:
:show-inheritance:

Getting started
===============
The AWS static website package deploys a bucket with its CloudFront distribution and its domain name.
You can then use this package to deploy a static website to the bucket it creates.
It is part of the `educational repositories <https://github.com/pandle/materials>`_ to learn how to write standard code and common uses of TDD.
Prerequisites
#############
You have to install the `AWS Cloud Development Kit <https://docs.aws.amazon.com/cdk/latest/guide/>`_ (AWS CDK) for deploying the AWS resources:
.. code-block:: bash
npm install -g aws-cdk # for installing AWS CDK
cdk --help # for printing its commands
And you need an AWS account, in this repository called **your-account**.
Installation
############
The package is not self-contained, so you have to download it from GitHub and install the requirements before deploying on AWS:
.. code-block:: bash
git clone https://github.com/bilardi/aws-static-website
cd aws-static-website/
pip3 install --upgrade -r requirements.txt
export AWS_PROFILE=your-account
cdk deploy
Or, if you want to use this package in your own code, you can install it with python3-pip:
.. code-block:: bash
pip3 install aws_static_website
python3
>>> import aws_static_website
>>> help(aws_static_website)
Read the documentation on `readthedocs <https://aws-static-website.readthedocs.io/en/latest/>`_ for
* Usage
* Development
Change Log
##########
See `CHANGELOG.md <https://github.com/bilardi/aws-static-website/blob/master/CHANGELOG.md>`_ for details.
License
#######
This package is released under the MIT license. See `LICENSE <https://github.com/bilardi/aws-static-website/blob/master/LICENSE>`_ for details.

.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2020, 2021 embedded brains GmbH (http://www.embedded-brains.de)
.. Copyright (C) 1988, 2008 On-Line Applications Research Corporation (OAR)
.. This file is part of the RTEMS quality process and was automatically
.. generated. If you find something that needs to be fixed or
.. worded better please post a report or patch to an RTEMS mailing list
.. or raise a bug report:
..
.. https://www.rtems.org/bugs.html
..
.. For information on updating and regenerating please refer to the How-To
.. section in the Software Requirements Engineering chapter of the
.. RTEMS Software Engineering manual. The manual is provided as a part of
.. a release. For development sources please refer to the online
.. documentation at:
..
.. https://docs.rtems.org
.. _TimerManagerDirectives:
Directives
==========
This section details the directives of the Timer Manager. A subsection is
dedicated to each of this manager's directives and lists the calling sequence,
parameters, description, return values, and notes of the directive.
.. Generated from spec:/rtems/timer/if/create
.. raw:: latex
\clearpage
.. index:: rtems_timer_create()
.. index:: create a timer
.. _InterfaceRtemsTimerCreate:
rtems_timer_create()
--------------------
Creates a timer.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_create( rtems_name name, rtems_id *id );
.. rubric:: PARAMETERS:
``name``
This parameter is the object name of the timer.
``id``
This parameter is the pointer to an object identifier variable. When the
directive call is successful, the identifier of the created timer will be
stored in this variable.
.. rubric:: DESCRIPTION:
This directive creates a timer which resides on the local node. The timer has
the user-defined object name specified in ``name``. The assigned object
identifier is returned in ``id``. This identifier is used to access the timer
with other timer related directives.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_NAME`
The ``name`` parameter was invalid.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``id`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_TOO_MANY`
There was no inactive object available to create a timer. The number of
timers available to the application is configured through the
:ref:`CONFIGURE_MAXIMUM_TIMERS` application configuration option.
.. rubric:: NOTES:
The processor used to maintain the timer is the processor of the calling task
at some point during the timer creation.
For control and maintenance of the timer, RTEMS allocates a :term:`TMCB` from
the local TMCB free pool and initializes it.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive may obtain and release the object allocator mutex. This may
cause the calling task to be preempted.
* The number of timers available to the application is configured through the
:ref:`CONFIGURE_MAXIMUM_TIMERS` application configuration option.
* Where the object class corresponding to the directive is configured to use
unlimited objects, the directive may allocate memory from the RTEMS
Workspace.
.. Generated from spec:/rtems/timer/if/ident
.. raw:: latex
\clearpage
.. index:: rtems_timer_ident()
.. index:: obtain the ID of a timer
.. _InterfaceRtemsTimerIdent:
rtems_timer_ident()
-------------------
Identifies a timer by the object name.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_ident( rtems_name name, rtems_id *id );
.. rubric:: PARAMETERS:
``name``
This parameter is the object name to look up.
``id``
This parameter is the pointer to an object identifier variable. When the
directive call is successful, the object identifier of an object with the
specified name will be stored in this variable.
.. rubric:: DESCRIPTION:
This directive obtains a timer identifier associated with the timer name
specified in ``name``.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``id`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_NAME`
The ``name`` parameter was 0.
:c:macro:`RTEMS_INVALID_NAME`
There was no object with the specified name on the local node.
.. rubric:: NOTES:
If the timer name is not unique, then the timer identifier will match the first
timer with that name in the search order. However, this timer identifier is
not guaranteed to correspond to the desired timer.
The objects are searched from lowest to the highest index. Only the local node
is searched.
The timer identifier is used with other timer related directives to access the
timer.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within any runtime context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/cancel
.. raw:: latex
\clearpage
.. index:: rtems_timer_cancel()
.. index:: cancel a timer
.. _InterfaceRtemsTimerCancel:
rtems_timer_cancel()
--------------------
Cancels the timer.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_cancel( rtems_id id );
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
.. rubric:: DESCRIPTION:
This directive cancels the timer specified by ``id``. This timer will be
reinitiated by the next invocation of :ref:`InterfaceRtemsTimerReset`,
:ref:`InterfaceRtemsTimerFireAfter`, or :ref:`InterfaceRtemsTimerFireWhen` with
the same timer identifier.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/delete
.. raw:: latex
\clearpage
.. index:: rtems_timer_delete()
.. index:: delete a timer
.. _InterfaceRtemsTimerDelete:
rtems_timer_delete()
--------------------
Deletes the timer.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_delete( rtems_id id );
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
.. rubric:: DESCRIPTION:
This directive deletes the timer specified by ``id``. If the timer is running,
it is automatically canceled.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: NOTES:
The :term:`TMCB` for the deleted timer is reclaimed by RTEMS.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive may obtain and release the object allocator mutex. This may
cause the calling task to be preempted.
* The calling task does not have to be the task that created the object. Any
local task that knows the object identifier can delete the object.
* Where the object class corresponding to the directive is configured to use
unlimited objects, the directive may free memory to the RTEMS Workspace.
.. Generated from spec:/rtems/timer/if/fire-after
.. raw:: latex
\clearpage
.. index:: rtems_timer_fire_after()
.. index:: fire a timer after an interval
.. _InterfaceRtemsTimerFireAfter:
rtems_timer_fire_after()
------------------------
Fires the timer after the interval.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_fire_after(
rtems_id id,
rtems_interval ticks,
rtems_timer_service_routine_entry routine,
void *user_data
);
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
``ticks``
This parameter is the interval until the routine is fired in clock ticks.
``routine``
This parameter is the routine to schedule.
``user_data``
This parameter is the argument passed to the routine when it is fired.
.. rubric:: DESCRIPTION:
This directive initiates the timer specified by ``id``. If the timer is
running, it is automatically canceled before being initiated. The timer is
scheduled to fire after an interval of clock ticks has passed specified by
``ticks``. When the timer fires, the timer service routine ``routine`` will be
invoked with the argument ``user_data`` in the context of the clock tick
:term:`ISR`.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_NUMBER`
The ``ticks`` parameter was 0.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``routine`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/fire-when
.. raw:: latex
\clearpage
.. index:: rtems_timer_fire_when()
.. index:: fire a timer at time of day
.. _InterfaceRtemsTimerFireWhen:
rtems_timer_fire_when()
-----------------------
Fires the timer at the time of day.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_fire_when(
rtems_id id,
rtems_time_of_day *wall_time,
rtems_timer_service_routine_entry routine,
void *user_data
);
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
``wall_time``
This parameter is the time of day when the routine is fired.
``routine``
This parameter is the routine to schedule.
``user_data``
This parameter is the argument passed to the routine when it is fired.
.. rubric:: DESCRIPTION:
This directive initiates the timer specified by ``id``. If the timer is
running, it is automatically canceled before being initiated. The timer is
scheduled to fire at the time of day specified by ``wall_time``. When the
timer fires, the timer service routine ``routine`` will be invoked with the
argument ``user_data`` in the context of the clock tick :term:`ISR`.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_NOT_DEFINED`
The system date and time was not set.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``routine`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``wall_time`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_CLOCK`
The time of day was invalid.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/initiate-server
.. raw:: latex
\clearpage
.. index:: rtems_timer_initiate_server()
.. index:: initiate the Timer Server
.. _InterfaceRtemsTimerInitiateServer:
rtems_timer_initiate_server()
-----------------------------
Initiates the Timer Server.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_initiate_server(
rtems_task_priority priority,
size_t stack_size,
rtems_attribute attribute_set
);
.. rubric:: PARAMETERS:
``priority``
This parameter is the task priority.
``stack_size``
This parameter is the task stack size in bytes.
``attribute_set``
This parameter is the task attribute set.
.. rubric:: DESCRIPTION:
This directive initiates the Timer Server task. This task is responsible for
executing all timers initiated via the
:ref:`InterfaceRtemsTimerServerFireAfter` or
:ref:`InterfaceRtemsTimerServerFireWhen` directives.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INCORRECT_STATE`
The Timer Server was already initiated.
:c:macro:`RTEMS_INVALID_PRIORITY`
The task priority was invalid.
:c:macro:`RTEMS_TOO_MANY`
There was no inactive task object available to create the Timer Server
task.
:c:macro:`RTEMS_UNSATISFIED`
There was not enough memory to allocate the task storage area. The task
storage area contains the task stack, the thread-local storage, and the
floating point context.
:c:macro:`RTEMS_UNSATISFIED`
One of the task create extensions failed to create the Timer Server task.
.. rubric:: NOTES:
The Timer Server task is created using the :ref:`InterfaceRtemsTaskCreate`
directive and must be accounted for when configuring the system.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may obtain and release the object allocator mutex. This may
cause the calling task to be preempted.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The number of timers available to the application is configured through the
:ref:`CONFIGURE_MAXIMUM_TIMERS` application configuration option.
* Where the object class corresponding to the directive is configured to use
unlimited objects, the directive may allocate memory from the RTEMS
Workspace.
.. Generated from spec:/rtems/timer/if/server-fire-after
.. raw:: latex
\clearpage
.. index:: rtems_timer_server_fire_after()
.. index:: fire task-based a timer after an interval
.. _InterfaceRtemsTimerServerFireAfter:
rtems_timer_server_fire_after()
-------------------------------
Fires the timer after the interval using the Timer Server.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_server_fire_after(
rtems_id id,
rtems_interval ticks,
rtems_timer_service_routine_entry routine,
void *user_data
);
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
``ticks``
This parameter is the interval until the routine is fired in clock ticks.
``routine``
This parameter is the routine to schedule.
``user_data``
This parameter is the argument passed to the routine when it is fired.
.. rubric:: DESCRIPTION:
This directive initiates the timer specified by ``id``. If the timer is
running, it is automatically canceled before being initiated. The timer is
scheduled to fire after an interval of clock ticks has passed specified by
``ticks``. When the timer fires, the timer service routine ``routine`` will be
invoked with the argument ``user_data`` in the context of the Timer Server
task.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INCORRECT_STATE`
The Timer Server was not initiated.
:c:macro:`RTEMS_INVALID_NUMBER`
The ``ticks`` parameter was 0.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``routine`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/server-fire-when
.. raw:: latex
\clearpage
.. index:: rtems_timer_server_fire_when()
.. index:: fire a task-based timer at time of day
.. _InterfaceRtemsTimerServerFireWhen:
rtems_timer_server_fire_when()
------------------------------
Fires the timer at the time of day using the Timer Server.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_server_fire_when(
rtems_id id,
rtems_time_of_day *wall_time,
rtems_timer_service_routine_entry routine,
void *user_data
);
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
``wall_time``
This parameter is the time of day when the routine is fired.
``routine``
This parameter is the routine to schedule.
``user_data``
This parameter is the argument passed to the routine when it is fired.
.. rubric:: DESCRIPTION:
This directive initiates the timer specified by ``id``. If the timer is
running, it is automatically canceled before being initiated. The timer is
scheduled to fire at the time of day specified by ``wall_time``. When the
timer fires, the timer service routine ``routine`` will be invoked with the
argument ``user_data`` in the context of the Timer Server task.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INCORRECT_STATE`
The Timer Server was not initiated.
:c:macro:`RTEMS_NOT_DEFINED`
The system date and time was not set.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``routine`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``wall_time`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_CLOCK`
The time of day was invalid.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/reset
.. raw:: latex
\clearpage
.. index:: rtems_timer_reset()
.. index:: reset a timer
.. _InterfaceRtemsTimerReset:
rtems_timer_reset()
-------------------
Resets the timer.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_reset( rtems_id id );
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
.. rubric:: DESCRIPTION:
This directive resets the timer specified by ``id``. This timer must have been
previously initiated with either the :ref:`InterfaceRtemsTimerFireAfter` or
:ref:`InterfaceRtemsTimerServerFireAfter` directive. If active the timer is
canceled, after which the timer is reinitiated using the same interval and
timer service routine which the original :ref:`InterfaceRtemsTimerFireAfter` or
:ref:`InterfaceRtemsTimerServerFireAfter` directive used.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
:c:macro:`RTEMS_NOT_DEFINED`
The timer was not of the interval class.
.. rubric:: NOTES:
If the timer has not been used or the last usage of this timer was by a
:ref:`InterfaceRtemsTimerFireWhen` or :ref:`InterfaceRtemsTimerServerFireWhen`
directive, then the :c:macro:`RTEMS_NOT_DEFINED` error is returned.
Restarting a cancelled interval-based ("fire after") timer results in the timer
being reinitiated with its previous timer service routine and interval.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.
.. Generated from spec:/rtems/timer/if/get-information
.. raw:: latex
\clearpage
.. index:: rtems_timer_get_information()
.. _InterfaceRtemsTimerGetInformation:
rtems_timer_get_information()
-----------------------------
Gets information about the timer.
.. rubric:: CALLING SEQUENCE:
.. code-block:: c
rtems_status_code rtems_timer_get_information(
rtems_id id,
rtems_timer_information *the_info
);
.. rubric:: PARAMETERS:
``id``
This parameter is the timer identifier.
``the_info``
This parameter is the pointer to a timer information variable. When the
directive call is successful, the information about the timer will be
stored in this variable.
.. rubric:: DESCRIPTION:
This directive returns information about the timer.
.. rubric:: RETURN VALUES:
:c:macro:`RTEMS_SUCCESSFUL`
The requested operation was successful.
:c:macro:`RTEMS_INVALID_ADDRESS`
The ``the_info`` parameter was `NULL
<https://en.cppreference.com/w/c/types/NULL>`_.
:c:macro:`RTEMS_INVALID_ID`
There was no timer associated with the identifier specified by ``id``.
.. rubric:: CONSTRAINTS:
The following constraints apply to this directive:
* The directive may be called from within interrupt context.
* The directive may be called from within device driver initialization context.
* The directive may be called from within task context.
* The directive will not cause the calling task to be preempted.

Infer/Optimize a Graphical Model
--------------------------------
Belief propagation (Bp)
+++++++++++++++++++++++
.. literalinclude:: ../../examples/inference_bp.py
ICM
+++++++++++++++++++
.. literalinclude:: ../../examples/inference_icm.py
GraphCut
+++++++++++++++++++
.. literalinclude:: ../../examples/inference_graphcut.py

Advanced Visualization of Plotfiles With yt (for developers)
============================================================
This section contains yt commands for advanced users. The Particle-In-Cell method uses a
staggered grid (see :ref:`particle-in-cell theory <theory-pic>`), so that the x, y, and z components of the
electric and magnetic fields are all defined at different locations in space. Regular output
(see the :doc:`yt` page, or the notebook at ``WarpX/Tools/PostProcessing/Visualization.ipynb`` for an example)
returns cell-centered data for convenience, which involves an additional operation. It is sometimes
useful to access the raw data directly. Furthermore,
the WarpX implementation for mesh refinement contains a number of grids for each level (coarse,
fine and auxiliary, see :ref:`the theory <theory>` for more details), and it is sometimes useful to access each of
them (regular output return the auxiliary grid only). This page provides information to read
raw data of all grids.
Write Raw Data
--------------
For a given diagnostic the user has the option to write the raw data by setting ``<diag_name>.plot_raw_fields = 1``.
Moreover, the user has the option to also write the values of the fields in the guard cells by setting ``<diag_name>.plot_raw_fields_guards = 1``.
Please refer to :ref:`Input Parameters <running-cpp-parameters>` for more information.
Read Raw Data
-------------
Metadata relevant to this topic (for example, the number and locations of grids in the simulation) is accessed with
.. code-block:: python

    import yt

    # get yt dataset
    ds = yt.load('./plotfiles/plt00004')

    # Index of data in the plotfile
    ds_index = ds.index

    # Print the number of grids in the simulation
    ds_index.grids.shape

    # Left and right physical boundary of each grid
    ds_index.grid_left_edge
    ds_index.grid_right_edge

    # List available fields
    ds.field_list
When ``<diag_name>.plot_raw_fields = 1``, here are some useful commands to access properties of a grid and the Ex field on the fine patch:
.. code-block:: python

    # store grid number 2 into my_grid
    my_grid = ds.index.grids[2]

    # Get left and right edges of my_grid
    my_grid.LeftEdge
    my_grid.RightEdge

    # Get level of my_grid
    my_grid.Level

    # Left edge of the grid, in number of points
    my_grid.start_index
Return the ``Ex`` field on the fine patch of grid ``my_grid``:
.. code-block:: python

    my_field = my_grid['raw', 'Ex_fp'].squeeze().v
For a 2D plotfile, ``my_field`` has shape ``(nx,nz,2)``. The last component stands for the
two values on the edges of each cell for the electric field, due to field staggering. The numpy
function ``squeeze`` removes dimensions of size one. While ``yt`` arrays are unit-aware, it is
sometimes useful to extract the data into unitless numpy arrays. This is achieved with ``.v``.
In the case of ``Ex_fp``, the staggering is along direction ``z``, so that
``my_field[:,:-1,1] == my_field[:,1:,0]``.
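The duplicated edge values can be checked on a toy array (hypothetical sizes, plain numpy; in a real analysis the array comes from ``my_grid['raw', ...]``):

```python
import numpy as np

# Build a toy staggered field: for every cell, store its lower and upper
# edge value along the staggered axis, so neighbouring cells share one value.
nx, nz = 4, 5
edge_values = np.arange(nz + 1, dtype=float)  # nz + 1 nodal values

my_field = np.empty((nx, nz, 2))
my_field[:, :, 0] = edge_values[:-1]  # lower edge of each cell
my_field[:, :, 1] = edge_values[1:]   # upper edge of each cell

# The equality quoted above holds: the upper edge of cell i equals
# the lower edge of cell i + 1.
print((my_field[:, :-1, 1] == my_field[:, 1:, 0]).all())  # True
```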
All combinations of the fields (``E`` or ``B``), the component (``x``, ``y`` or ``z``) and the
grid (``_fp`` for fine, ``_cp`` for coarse and ``_aux`` for auxiliary) can be accessed in this
way, i.e., ``my_grid['raw', 'Ey_aux']`` or ``my_grid['raw', 'Bz_cp']`` are valid queries.
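These naming rules can be enumerated exhaustively with a short, yt-independent sketch:

```python
# Enumerate every raw-field key described above: E or B, direction x/y/z,
# and the grid suffix _fp (fine), _cp (coarse) or _aux (auxiliary).
field_keys = [
    f"{field}{direction}_{grid}"
    for field in ("E", "B")
    for direction in "xyz"
    for grid in ("fp", "cp", "aux")
]

print(len(field_keys))         # 18 combinations
print("Ey_aux" in field_keys)  # True
print("Bz_cp" in field_keys)   # True
```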
Read Raw Data With Guard Cells
------------------------------
When the output includes the data in the guard cells, the user can read such data using the post-processing tool ``read_raw_data.py``, available in ``Tools/PostProcessing/``, as illustrated in the following example:
.. code-block:: python

    from read_raw_data import read_data

    # Load all data saved in a given path
    path = './diags/diag00200/'
    data = read_data(path)

    # Load Ex_fp on mesh refinement level 0
    level = 0
    field = 'Ex_fp'

    # data[level] is a dictionary, data[level][field] is a numpy array
    my_field = data[level][field]
Note that ``data[level].keys()`` returns a list of all available raw fields written to output, that is, all valid strings that the variable ``field`` in the example above can be assigned to.
In order to plot a 2D slice of the data with methods like ``matplotlib.axes.Axes.imshow``, one might want to pass the correct ``extent`` (the bounding box in data coordinates that the image will fill), including the guard cells. One way to set the correct ``extent`` is illustrated in the following example (case of a 2D slice in the ``(x,z)`` plane):
.. code-block:: python

    import yt
    import numpy as np
    from read_raw_data import read_data

    # Load all data saved in a given path
    path = './diags/diag00200/'
    data = read_data(path)

    # Load Ex_fp on mesh refinement level 0
    level = 0
    field = 'Ex_fp'

    # data[level] is a dictionary, data[level][field] is a numpy array
    my_field = data[level][field]

    # Set the number of cells in the valid domain
    # by loading the standard output data with yt
    ncells = yt.load(path).domain_dimensions

    # Set the number of dimensions automatically (2D or 3D)
    dim = 2 if (ncells[2] == 1) else 3
    xdir = 0
    zdir = 1 if (dim == 2) else 2

    # Set the extent (bounding box in data coordinates, including guard cells)
    # to be passed to matplotlib.axes.Axes.imshow
    left_edge_x = 0 - (my_field.shape[xdir] - ncells[xdir]) // 2
    right_edge_x = ncells[xdir] + (my_field.shape[xdir] - ncells[xdir]) // 2
    left_edge_z = 0 - (my_field.shape[zdir] - ncells[zdir]) // 2
    right_edge_z = ncells[zdir] + (my_field.shape[zdir] - ncells[zdir]) // 2
    extent = np.array([left_edge_z, right_edge_z, left_edge_x, right_edge_x])
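The extent arithmetic can be exercised on toy sizes (hypothetical values; in a real analysis ``ncells`` comes from yt and the field shape from the raw array):

```python
# Toy check of the guard-cell extent computation described above.
def guarded_extent(field_shape, ncells, xdir=0, zdir=1):
    """Return [left_z, right_z, left_x, right_x] including guard cells."""
    gx = (field_shape[xdir] - ncells[xdir]) // 2  # guard cells per side in x
    gz = (field_shape[zdir] - ncells[zdir]) // 2  # guard cells per side in z
    return [0 - gz, ncells[zdir] + gz, 0 - gx, ncells[xdir] + gx]

# 64x64 valid cells surrounded by 3 guard cells on each side:
print(guarded_extent((70, 70), (64, 64, 1)))  # [-3, 67, -3, 67]
```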
| 43.646154 | 351 | 0.694572 |
d2ec53017f3730cc37bdeea5c38f9501c27a86ed | 2,371 | rst | reStructuredText | docs/source/processors/heading.rst | uccser/verto | d36aa88b208f1700fafc033679bd1e9775496d25 | [
"MIT"
] | 4 | 2017-04-10T06:09:54.000Z | 2019-05-04T02:07:40.000Z | docs/source/processors/heading.rst | uccser/verto | d36aa88b208f1700fafc033679bd1e9775496d25 | [
"MIT"
] | 268 | 2017-04-03T20:40:46.000Z | 2022-02-04T20:10:08.000Z | docs/source/processors/heading.rst | uccser/kordac | d36aa88b208f1700fafc033679bd1e9775496d25 | [
"MIT"
] | 1 | 2019-01-07T15:46:31.000Z | 2019-01-07T15:46:31.000Z | Heading
#######################################
**Processor name:** ``heading``
This replaces the output of Markdown headings that begin with the ``#`` character (atx-style headings).
This processor adds an ID to each heading, which allows linking to a specific heading, and adds the heading number before the heading text.
.. note::

    This processor replaces the output of the standard markdown block processor for atx-style headings.
You may create a heading by using the following format:
.. literalinclude:: ../../../verto/tests/assets/heading/doc_example_basic_usage.md
    :language: none
The default HTML for headings is:
.. literalinclude:: ../../../verto/html-templates/heading.html
    :language: css+jinja
**Example**
Using the following tag:
.. literalinclude:: ../../../verto/tests/assets/heading/doc_example_basic_usage.md
    :language: none
The resulting HTML would be:
.. literalinclude:: ../../../verto/tests/assets/heading/doc_example_basic_usage_expected.html
    :language: html
Overriding HTML for Heading
***************************************
When overriding the HTML for heading, the following Jinja2 placeholders are available:
- ``{{ heading_level }}`` - A number representing the heading level.
- ``{{ heading_type }}`` - The string of the heading tag i.e. *h1* etc.
- ``{{ title }}`` - The title of the heading.
- ``{{ title_slug }}`` - A slug of the heading, useful for ids.
- ``{{ level_1 }}`` - The current first level heading number.
- ``{{ level_2 }}`` - The current second level heading number.
- ``{{ level_3 }}`` - The current third level heading number.
- ``{{ level_4 }}`` - The current fourth level heading number.
- ``{{ level_5 }}`` - The current fifth level heading number.
- ``{{ level_6 }}`` - The current sixth level heading number.
The ``level`` parameters are useful for generating level trails, so that users know exactly where they are within the document.
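The numbering logic behind such a trail can be sketched in plain Python (illustrative only, independent of Verto's template engine):

```python
# Sketch of building a dotted trail like "2.3.1" from the counters that
# Verto exposes as {{ level_1 }} .. {{ level_6 }} (hypothetical values).
def heading_trail(heading_level, levels):
    """Join the level counters down to the current heading level."""
    return ".".join(str(levels[i]) for i in range(heading_level))

print(heading_trail(3, [2, 3, 1, 0, 0, 0]))  # 2.3.1
print(heading_trail(1, [2, 3, 1, 0, 0, 0]))  # 2
```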
**Example**
For example, providing the following HTML:
.. literalinclude:: ../../../verto/tests/assets/heading/doc_example_override_html_template.html
    :language: css+jinja
with the following markdown:
.. literalinclude:: ../../../verto/tests/assets/heading/doc_example_override_html.md
    :language: none
would result in:
.. literalinclude:: ../../../verto/tests/assets/heading/doc_example_override_html_expected.html
    :language: html
| 33.871429 | 128 | 0.692957 |
351bfadcc0b51f74b7e00e12f791e04bbe95aa18 | 997 | rst | reStructuredText | docs/source/scripts.rst | asabyr/LensTools | e155d6d39361e550906cec00dbbc57686a4bca5c | [
"MIT"
] | 1 | 2021-04-27T02:03:11.000Z | 2021-04-27T02:03:11.000Z | docs/source/scripts.rst | asabyr/LensTools | e155d6d39361e550906cec00dbbc57686a4bca5c | [
"MIT"
] | null | null | null | docs/source/scripts.rst | asabyr/LensTools | e155d6d39361e550906cec00dbbc57686a4bca5c | [
"MIT"
] | null | null | null | LensTools command line scripts
******************************
General purpose scripts
=======================
nbodyheader
-----------
Displays the header information of a Nbody simulation snapshot. Usage:
::

    nbodyheader <file_1> <file_2> ...
gadgetstrip
-----------
Strips the sometimes unnecessary velocity information from a Gadget2 snapshot, roughly reducing its disk usage by a factor of 2. Usage:
::

    gadgetstrip <file_1> <file_2> ...
The script can also read from standard input: to strip all the files in a directory:
::

    ls | gadgetstrip
npyinfo
-------
lenstools.confidencecontour
---------------------------
lenstools.view
--------------
LensTools pipeline scripts
==========================
lenstools.submission
--------------------
lenstools.cutplanes
-------------------
lenstools.cutplanes-mpi
-----------------------
lenstools.raytracing
--------------------
lenstools.raytracing-mpi
------------------------
lenstools.execute-mpi
--------------------- | 15.825397 | 135 | 0.55667 |
7fa2b2688083018a67ccd7c89f664322e8845683 | 73 | rst | reStructuredText | sample_projects/pypackage/docs/usage.rst | jayvdb/tox-constraints | e8c4faa455e5adcba59cec978c88290600cd40b1 | [
"MIT"
] | 1 | 2021-06-04T15:59:24.000Z | 2021-06-04T15:59:24.000Z | sample_projects/pypackage/docs/usage.rst | jayvdb/tox-constraints | e8c4faa455e5adcba59cec978c88290600cd40b1 | [
"MIT"
] | 2 | 2019-04-04T16:06:47.000Z | 2020-12-09T02:04:45.000Z | sample_projects/pypackage/docs/usage.rst | jayvdb/tox-constraints | e8c4faa455e5adcba59cec978c88290600cd40b1 | [
"MIT"
] | 1 | 2020-12-08T11:18:27.000Z | 2020-12-08T11:18:27.000Z | =====
Usage
=====
To use pypackage in a project::

    import pypackage
| 9.125 | 31 | 0.60274 |
8ab6b09df73b2b787cab1cfdc5e11c8d002afa92 | 204 | rst | reStructuredText | README.rst | MainRo/python-flock | e1faa78d6aba374493336651848daadad82387a8 | [
"MIT"
] | null | null | null | README.rst | MainRo/python-flock | e1faa78d6aba374493336651848daadad82387a8 | [
"MIT"
] | null | null | null | README.rst | MainRo/python-flock | e1faa78d6aba374493336651848daadad82387a8 | [
"MIT"
] | null | null | null |
Flock
=====
Flock is a library and daemon for managing smart home controllers and devices.
Flock is based on twisted and provides protocol and transport classes for each
supported smart home controller.
| 25.5 | 78 | 0.803922 |
48ba34e97378bef444e4b6200ff0b9f5204556cf | 985 | rst | reStructuredText | REDSI_1160929_1161573/boost_1_67_0/libs/iterator/doc/iterator_adaptor_abstract.rst | Wultyc/ISEP_1718_2A2S_REDSI_TrabalhoGrupo | eb0f7ef64e188fe871f47c2ef9cdef36d8a66bc8 | [
"MIT"
] | 198 | 2015-01-13T05:47:18.000Z | 2022-03-09T04:46:46.000Z | libs/boost/libs/iterator/doc/iterator_adaptor_abstract.rst | flingone/frameworks_base_cmds_remoted | 4509d9f0468137ed7fd8d100179160d167e7d943 | [
"Apache-2.0"
] | 9 | 2015-01-28T16:33:19.000Z | 2020-04-12T23:03:28.000Z | libs/boost/libs/iterator/doc/iterator_adaptor_abstract.rst | flingone/frameworks_base_cmds_remoted | 4509d9f0468137ed7fd8d100179160d167e7d943 | [
"Apache-2.0"
] | 139 | 2015-01-15T20:09:31.000Z | 2022-01-31T15:21:16.000Z | .. Distributed under the Boost
.. Software License, Version 1.0. (See accompanying
.. file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
.. Version 1.1 of this ReStructuredText document corresponds to
   n1530_, the paper accepted by the LWG.
.. Copyright David Abrahams, Jeremy Siek, and Thomas Witt 2003.
Each specialization of the ``iterator_adaptor`` class template is derived from
a specialization of ``iterator_facade``. The core interface functions
expected by ``iterator_facade`` are implemented in terms of the
``iterator_adaptor``\ 's ``Base`` template parameter. A class derived
from ``iterator_adaptor`` typically redefines some of the core
interface functions to adapt the behavior of the ``Base`` type.
Whether the derived class models any of the standard iterator concepts
depends on the operations supported by the ``Base`` type and which
core interface functions of ``iterator_facade`` are redefined in the
``Derived`` class.
| 49.25 | 79 | 0.764467 |
bca35c82a311a48cab8c8032684cb258f19caa5c | 208 | rst | reStructuredText | google/devtools/testing/v1/devtools-testing-v1-py/docs/testing_v1/test_environment_discovery_service.rst | googleapis/googleapis-gen | d84824c78563d59b0e58d5664bfaa430e9ad7e7a | [
"Apache-2.0"
] | 7 | 2021-02-21T10:39:41.000Z | 2021-12-07T07:31:28.000Z | google/devtools/testing/v1/devtools-testing-v1-py/docs/testing_v1/test_environment_discovery_service.rst | googleapis/googleapis-gen | d84824c78563d59b0e58d5664bfaa430e9ad7e7a | [
"Apache-2.0"
] | 6 | 2021-02-02T23:46:11.000Z | 2021-11-15T01:46:02.000Z | google/devtools/testing/v1/devtools-testing-v1-py/docs/testing_v1/test_environment_discovery_service.rst | googleapis/googleapis-gen | d84824c78563d59b0e58d5664bfaa430e9ad7e7a | [
"Apache-2.0"
] | 4 | 2021-01-28T23:25:45.000Z | 2021-08-30T01:55:16.000Z | TestEnvironmentDiscoveryService
-------------------------------------------------
.. automodule:: google.devtools.testing_v1.services.test_environment_discovery_service
    :members:
    :inherited-members:
| 29.714286 | 86 | 0.625 |
04b712e808a051fa67f317aadb5d1ea5ea96e00e | 2,333 | rst | reStructuredText | includes_data_bag/includes_data_bag_encryption_example.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | 1 | 2020-02-02T21:57:47.000Z | 2020-02-02T21:57:47.000Z | includes_data_bag/includes_data_bag_encryption_example.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | null | null | null | includes_data_bag/includes_data_bag_encryption_example.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | null | null | null | .. The contents of this file are included in multiple topics.
.. This file should not be changed in a way that hinders its ability to appear in multiple documentation sets.
To demonstrate the use of encrypted data bags on a node, we'll start by copying the ``secret_key`` file to an example node using ``scp`` and moving it to ``/etc/chef/encrypted_data_bag_secret``:
.. code-block:: bash

    scp ./secret_key $MY_NODE_IP:~/
    ssh $MY_NODE_IP
    sudo mv ./secret_key /etc/chef/encrypted_data_bag_secret
The ``knife bootstrap`` sub-command supports the ``encrypted_data_bag_secret`` setting in |knife rb|. You will want to add this line:
.. code-block:: ruby

    encrypted_data_bag_secret '/path/to/your/data_bag_key'
And change ``/path/to/your/data_bag_key`` to the location where the data bag key is located. When you run ``knife bootstrap`` afterwards, it automatically adds this line to the |client rb| for the node you are bootstrapping and copies the key over.
Next, we'll create a recipe that will log the decrypted values for demonstration purposes (if these were real secrets, you would want to avoid logging them). Use |knife| and run the following:
.. code-block:: bash

    $ knife cookbook create edb_demo
Then, edit ``cookbooks/edb_demo/recipes/default.rb`` so that it contains the following:
.. code-block:: ruby

    # cookbooks/edb_demo/recipes/default.rb
    passwords = Chef::EncryptedDataBagItem.load("prod", "passwords")
    mysql = passwords["mysql"]
    Chef::Log.info("The mysql password is: '#{mysql}'")
Finally, upload the cookbook and run |chef client| on the node. You should see something like this:
.. code-block:: bash

    $ knife cookbook upload edb_demo
    # output clipped
    $ knife ssh name:i-8a436fe5 -a ec2.public_hostname 'sudo chef-client'
    INFO: *** Chef 0.10.0 ***
    INFO: Run List is [recipe[edb_demo]]
    INFO: Run List expands to [edb_demo]
    INFO: Starting Chef Run for i-8a436fe5
    INFO: Loading cookbooks [edb_demo]
    INFO: The mysql password is: 'open-sesame-123'
    INFO: Chef Run complete in 3.122228 seconds
    INFO: Running report handlers
    INFO: Report handlers complete
As you can see, the recipe was able to decrypt the values in the encrypted data bag. It did so by using the shared secret located in the default location of ``/etc/chef/encrypted_data_bag_secret``. | 44.865385 | 246 | 0.744535 |
c00b9f305d580bcf7f425328f714aa42cb237229 | 6,616 | rst | reStructuredText | oscar/lib/python2.7/site-packages/Unidecode-0.04.21.dist-info/DESCRIPTION.rst | sainjusajan/django-oscar | 466e8edc807be689b0a28c9e525c8323cc48b8e1 | [
"BSD-3-Clause"
] | null | null | null | oscar/lib/python2.7/site-packages/Unidecode-0.04.21.dist-info/DESCRIPTION.rst | sainjusajan/django-oscar | 466e8edc807be689b0a28c9e525c8323cc48b8e1 | [
"BSD-3-Clause"
] | null | null | null | oscar/lib/python2.7/site-packages/Unidecode-0.04.21.dist-info/DESCRIPTION.rst | sainjusajan/django-oscar | 466e8edc807be689b0a28c9e525c8323cc48b8e1 | [
"BSD-3-Clause"
] | null | null | null | Unidecode, lossy ASCII transliterations of Unicode text
=======================================================
It often happens that you have text data in Unicode, but you need to
represent it in ASCII. For example when integrating with legacy code that
doesn't support Unicode, or for ease of entry of non-Roman names on a US
keyboard, or when constructing ASCII machine identifiers from
human-readable Unicode strings that should still be somewhat intelligible
(a popular example of this is when making an URL slug from an article
title).
In most of these examples you could represent Unicode characters as
`???` or `\\15BA\\15A0\\1610`, to mention two extreme cases. But that's
nearly useless to someone who actually wants to read what the text says.
What Unidecode provides is a middle road: function `unidecode()` takes
Unicode data and tries to represent it in ASCII characters (i.e., the
universally displayable characters between 0x00 and 0x7F), where the
compromises taken when mapping between two character sets are chosen to be
near what a human with a US keyboard would choose.
The quality of resulting ASCII representation varies. For languages of
western origin it should be between perfect and good. On the other hand
transliteration (i.e., conveying, in Roman letters, the pronunciation
expressed by the text in some other writing system) of languages like
Chinese, Japanese or Korean is a very complex issue and this library does
not even attempt to address it. It draws the line at context-free
character-by-character mapping. So a good rule of thumb is that the further
the script you are transliterating is from Latin alphabet, the worse the
transliteration will be.
Note that this module generally produces better results than simply
stripping accents from characters (which can be done in Python with
built-in functions). It is based on hand-tuned character mappings that for
example also contain ASCII approximations for symbols and non-Latin
alphabets.
This is a Python port of `Text::Unidecode` Perl module by
Sean M. Burke <sburke@cpan.org>.
Module content
--------------
The module exports a function that takes an Unicode object (Python 2.x) or
string (Python 3.x) and returns a string (that can be encoded to ASCII bytes in
Python 3.x)::
    >>> from unidecode import unidecode
    >>> unidecode(u'ko\u017eu\u0161\u010dek')
    'kozuscek'
    >>> unidecode(u'30 \U0001d5c4\U0001d5c6/\U0001d5c1')
    '30 km/h'
    >>> unidecode(u"\u5317\u4EB0")
    'Bei Jing '
A utility is also included that allows you to transliterate text from the
command line in several ways. Reading from standard input::
    $ echo hello | unidecode
    hello
from a command line argument::
    $ unidecode -c hello
    hello
or from a file::
    $ unidecode hello.txt
    hello
The default encoding used by the utility depends on your system locale. You can specify another encoding with the `-e` argument. See `unidecode --help` for a full list of available options.
Requirements
------------
Nothing except Python itself.
You need a Python build with "wide" Unicode characters (also called "UCS-4
build") in order for unidecode to work correctly with characters outside of
Basic Multilingual Plane (BMP). Common characters outside BMP are bold, italic,
script, etc. variants of the Latin alphabet intended for mathematical notation.
Surrogate pair encoding of "narrow" builds is not supported in unidecode.
If your Python build supports "wide" Unicode the following expression will
return True::
    >>> import sys
    >>> sys.maxunicode > 0xffff
    True
See PEP 261 for details regarding support for "wide" Unicode characters in
Python.
Installation
------------
To install the latest version of Unidecode from the Python package index, use
these commands::
    $ pip install unidecode
To install Unidecode from the source distribution and run unit tests, use::
    $ python setup.py install
    $ python setup.py test
Performance notes
-----------------
By default, `unidecode` optimizes for the use case where most of the strings
passed to it are already ASCII-only and no transliteration is necessary (this
default might change in future versions).
For performance critical applications, two additional functions are exposed:
`unidecode_expect_ascii` is optimized for ASCII-only inputs (approximately 5
times faster than `unidecode_expect_nonascii` on 10 character strings, more on
longer strings), but slightly slower for non-ASCII inputs.
`unidecode_expect_nonascii` takes approximately the same amount of time on
ASCII and non-ASCII inputs, but is slightly faster for non-ASCII inputs than
`unidecode_expect_ascii`.
Apart from differences in run time, both functions produce identical results.
For most users of Unidecode, the difference in performance should be
negligible.
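A rough way to compare the two variants yourself (an illustrative sketch assuming ``unidecode`` is installed; the input strings and iteration count are arbitrary):

```python
import timeit

from unidecode import unidecode_expect_ascii, unidecode_expect_nonascii

ascii_text = "plain ascii input"
nonascii_text = "ko\u017eu\u0161\u010dek"

# Both functions always produce identical results; only run time differs.
assert unidecode_expect_ascii(nonascii_text) == unidecode_expect_nonascii(nonascii_text)

for func in (unidecode_expect_ascii, unidecode_expect_nonascii):
    seconds = timeit.timeit(lambda: func(ascii_text), number=100_000)
    print(f"{func.__name__}: {seconds:.3f}s")
```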
Source
------
You can get the latest development version of Unidecode with::
    $ git clone https://www.tablix.org/~avian/git/unidecode.git
There is also an official mirror of this repository on GitHub at
https://github.com/avian2/unidecode
Contact
-------
Please send bug reports, patches and suggestions for Unidecode to
tomaz.solc@tablix.org.
Alternatively, you can also open a ticket or pull request at
https://github.com/avian2/unidecode
Copyright
---------
Original character transliteration tables:
Copyright 2001, Sean M. Burke <sburke@cpan.org>, all rights reserved.
Python code and later additions:
Copyright 2017, Tomaz Solc <tomaz.solc@tablix.org>
This program is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc., 51
Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. The programs and
documentation in this dist are distributed in the hope that they will be
useful, but without any warranty; without even the implied warranty of
merchantability or fitness for a particular purpose.
..
vim: set filetype=rst:
| 35.569892 | 190 | 0.74471 |
a20242f946e6be8d62559d027f5d83e8411e03df | 573 | rst | reStructuredText | libs/full/async_distributed/docs/index.rst | Andrea-MariaDB-2/hpx | e230dddce4a8783817f38e07f5a77e624f64f826 | [
"BSL-1.0"
] | 1,822 | 2015-01-03T11:22:37.000Z | 2022-03-31T14:49:59.000Z | libs/full/async_distributed/docs/index.rst | Deepak-suresh14/hpx | 5ecf3cad298678021c77c825a9f54d7c119d9dd1 | [
"BSL-1.0"
] | 3,288 | 2015-01-05T17:00:23.000Z | 2022-03-31T18:49:41.000Z | libs/full/async_distributed/docs/index.rst | Deepak-suresh14/hpx | 5ecf3cad298678021c77c825a9f54d7c119d9dd1 | [
"BSL-1.0"
] | 431 | 2015-01-07T06:22:14.000Z | 2022-03-31T14:50:04.000Z | ..
    Copyright (c) 2019 The STE||AR-Group

    SPDX-License-Identifier: BSL-1.0
    Distributed under the Boost Software License, Version 1.0. (See accompanying
    file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
.. _modules_async:
=====
async
=====
This module contains functionality for asynchronously launching work on remote
localities: :cpp:func:`hpx::async`, :cpp:func:`hpx::apply`. This module extends
the local-only functions in :ref:`libs_async_local`.
See the :ref:`API reference <modules_async_api>` of this module for more
details.
| 27.285714 | 80 | 0.736475 |
98ac7ea4ab5a42e1c875d6e4aa47e10904f4e63b | 1,575 | rst | reStructuredText | CHANGELOG.rst | Sensirion/python-shdlc-driver | 31e9683c27004ee05edf89996d656bc50f5bdb3a | [
"BSD-3-Clause"
] | 3 | 2019-01-26T13:56:10.000Z | 2019-01-28T09:38:12.000Z | CHANGELOG.rst | Sensirion/python-shdlc-driver | 31e9683c27004ee05edf89996d656bc50f5bdb3a | [
"BSD-3-Clause"
] | 2 | 2020-09-17T13:40:41.000Z | 2021-08-20T15:12:38.000Z | CHANGELOG.rst | Sensirion/python-shdlc-driver | 31e9683c27004ee05edf89996d656bc50f5bdb3a | [
"BSD-3-Clause"
] | 1 | 2020-12-20T09:35:09.000Z | 2020-12-20T09:35:09.000Z | CHANGELOG
---------
0.1.5
:::::
- Move parts of ``ShdlcDevice`` into new base class ``ShdlcDeviceBase``
0.1.4
:::::
- Make signature and version offset configurable for ``ShdlcFirmwareImage``
0.1.3
:::::
- Add property ``is_open`` to ``ShdlcPort``, ``ShdlcSerialPort`` and
``ShdlcTcpPort``
- Improve/extend documentation
0.1.2
:::::
- Add ``ShdlcTcpPort`` class to communicate with SHDLC devices through TCP/IP
- Add property ``start_received`` to ``ShdlcSerialMisoFrameBuilder``
- Add methods ``open()`` and ``close()`` to the ``ShdlcPort`` interface
- Add parameter ``do_open`` to constructor of ``ShdlcSerialPort`` to allow
creating ``ShdlcSerialPort`` instances without opening the port yet
- Add property ``additional_response_time`` to ``ShdlcSerialPort``
- Improve timeout calculation of ``ShdlcSerialPort`` to fix possible response
timeout errors
- Make ``FirmwareUpdate`` failing early if the bitrate cannot be changed
0.1.1
:::::
- Add optional dependency ``intelhex`` for performing firmware updates
- Add bootloader commands: ``ShdlcCmdBootloaderBase``,
``ShdlcCmdEnterBootloader``, ``ShdlcCmdFirmwareUpdateStart``,
``ShdlcCmdFirmwareUpdateData``, ``ShdlcCmdFirmwareUpdateStop``
- Add exceptions for the firmware updater:
``ShdlcFirmwareImageSignatureError``,
``ShdlcFirmwareImageIncompatibilityError``
- Add classes to perform firmware updates over SHDLC: ``ShdlcFirmwareImage``,
``ShdlcFirmwareUpdate``
- Add property ``lock`` to the ``ShdlcPort`` interface to allow locking the
port from outside the class
0.1.0
:::::
- Initial release
| 33.510638 | 77 | 0.737778 |
7627375d2b944c85dee7dd9ddb73a33c20500c68 | 3,330 | rst | reStructuredText | doc/historic/2weeks/plexnames.rst | trojanspike/ampify | 4b2978403f642cf5203d7a903d94fd34d566e33c | [
"CC0-1.0"
] | 1 | 2015-11-06T03:10:01.000Z | 2015-11-06T03:10:01.000Z | doc/historic/2weeks/plexnames.rst | trojanspike/ampify | 4b2978403f642cf5203d7a903d94fd34d566e33c | [
"CC0-1.0"
] | null | null | null | doc/historic/2weeks/plexnames.rst | trojanspike/ampify | 4b2978403f642cf5203d7a903d94fd34d566e33c | [
"CC0-1.0"
] | null | null | null | Plexnames
=========
A plexname is for the Plex what a URI is for the Web. They are used in
indexes, events and plexlinks (see below).
Constructs
----------
There are some basic forms:
plex:movies/alexander
This is the name for the alexander resource in the movies topic. If
the resource can not be found, use the resolving mechanism.
plex:~some_entity/movies/alexander
The same resource, but now as part of some_entity. If the alexander
resource now directly originates from the entity or not is not defined.
plex:~/movies/alexander
The same resource, but now within the same entity where that name is
used. No resolving takes place.
If an entity is asked for such an resource it can either return the
resource itself, e.g the picture, or a pointer to somewhere else, e.g.
the storage layer.
plex:storage/A23SF4333
This points directly to a file in the storage layer.
Different Resources
-------------------
It would be possible to have a movie alexander, a picture alexander, a
soundtrack alexander, a textbook alexander and so forth. The question
then is which resource to return when asked for alexander: the client
has to describe what type it prefers. I would compare it to the
'Accept' header in HTTP.
XXX Can I directly ask for alexander.jpg?
Resolving
---------
If a resource is not found, it is searched for using the resolver
hierarchy. An entity defines a hierarchy of resolvers which it is going
to use for certain topics: for music ask foo, then bar; for science
better ask bar first, then foo. So it's a bit like what DNS is for the
net, but this time for resources, not machines.
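The per-topic ordering can be sketched as a simple lookup (hypothetical resolver names, not the real Plex implementation):

```python
# Toy sketch of per-topic resolver ordering: each topic maps to the
# ordered list of resolvers that should be asked for it.
resolver_hierarchy = {
    "music": ["foo", "bar"],
    "science": ["bar", "foo"],
}

def resolvers_for(topic, default=("foo", "bar")):
    """Return the ordered list of resolvers to ask for a given topic."""
    return resolver_hierarchy.get(topic, list(default))

print(resolvers_for("science"))  # ['bar', 'foo']
print(resolvers_for("movies"))   # ['foo', 'bar']
```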
Uniqueness
----------
XXX Up to now it's undecided whether the same name can be used for more
than one resource, or whether it has to be unique. And if so, what
exactly the scope of uniqueness is: within the entity, within a topic,
or something else? tav has to think; I would recommend having no
requirement for uniqueness, which would mean that an entity always
returns an ordered list of resources when asked for something.
Plexlinks
=========
Plexlinks can for example be used in documents, comparable to hrefs
in HTML or WikiLinks in wiki pages.
The standard format for writing links to other resources is
[plex:movies/alexander]. Square brackets.
Functions
---------
Resources can also be services, maybe comparable to a URI that points
to an XML-RPC resource. The question now arises what is meant when
pointing to such a resource: the resource itself, or the output of
calling it. The problem is a bit comparable to URLs in HTML: do you
want to link to the picture, or display it in a page?
The solution with plexlinks is that you can either point to a resource
`[plex:joerg/add]` or call the function `[#plex:joerg/add]`.

Now, to make it a bit more fancy, you can also combine resources /
functions::

    [#convert from:usd to:gbp ~amazon/alexander.price | #print-localized-currency]
This will call the local convert resource, passing from and to as
parameters and telling it to work on the price of alexander at amazon
(USD 8 -> GBP 6). The price is then passed to the local
print-localized-currency (6,00 GBP in Germany, 6.00 GBP on that
island).
Notice the '#', which triggers the calling, and the fancy pipe
'|', which behaves very much like a pipe in a shell.
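The pipe semantics can be illustrated with toy stand-ins for the two resources (hypothetical conversion rate and function names, not the real Plex resolver):

```python
# Each '#'-call feeds its output to the next stage, like a shell pipe.
def convert(amount, rate):
    """Toy stand-in for the convert resource."""
    return amount * rate

def print_localized_currency(amount, symbol="GBP"):
    """Toy stand-in for print-localized-currency."""
    return f"{amount:.2f} {symbol}"

# [#convert from:usd to:gbp ~amazon/alexander.price | #print-localized-currency]
usd_price = 8
gbp_price = convert(usd_price, rate=0.75)   # USD 8 -> GBP 6
print(print_localized_currency(gbp_price))  # 6.00 GBP
```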
| 34.6875 | 79 | 0.751952 |
b11645d99dd48aa51ce4002412bbab84faf75524 | 10,517 | rst | reStructuredText | README.rst | feinheit/fh-fablib | 5a8850d12d5e4178be19b3c80a27b52b6a7e4c92 | [
"BSD-3-Clause"
] | 4 | 2019-05-22T15:25:18.000Z | 2021-05-06T00:15:00.000Z | README.rst | feinheit/fh-fablib | 5a8850d12d5e4178be19b3c80a27b52b6a7e4c92 | [
"BSD-3-Clause"
] | 12 | 2018-07-13T11:53:40.000Z | 2021-12-01T14:48:59.000Z | README.rst | feinheit/fh-fablib | 5a8850d12d5e4178be19b3c80a27b52b6a7e4c92 | [
"BSD-3-Clause"
] | 5 | 2018-01-31T13:37:31.000Z | 2021-01-26T07:54:48.000Z | =========
fh-fablib
=========
Usage
=====
1. Install `pipx <https://pipxproject.github.io/pipx/>`__
2. Install fh-fablib

   a. ``pipx install fh_fablib`` if you're happy with the packaged version
   b. ``pipx install ~/Projects/fh-fablib`` if you have a local git checkout
      you want to install from
3. Add a ``fabfile.py`` to your project. A minimal example follows:
.. code-block:: python

    import fh_fablib as fl

    fl.require("1.0.20210928")
    fl.config.update(base=fl.Path(__file__).parent, host="www-data@feinheit06.nine.ch")

    environments = [
        fl.environment(
            "production",
            {
                "domain": "example.com",
                "branch": "main",
                "remote": "production",
            },
            aliases=["p"],
        ),
    ]

    ns = fl.Collection(*fl.GENERAL, *fl.NINE, *environments)
4. Run ``fl --list`` to get a list of commands.
Loading the ``fh_fablib`` module automatically creates
``.git/hooks/pre-commit`` which runs ``fl check`` before each commit.
Configuration values
====================
- ``app = "app"``: Name of primary Django app containing settings, assets etc.
- ``base``: ``pathlib.Path`` object pointing to the base dir of the project.
- ``branch``: Branch containing code to be deployed.
- ``domain``: Primary domain of website. The database name and cache key
prefix are derived from this value.
- ``environments``: A dictionary of environments, see below.
- ``environment``: The name of the active environment or ``"default"``.
- ``host``: SSH connection string (``username@server``)
- ``remote``: git remote name for the server. Only used for the
``fetch`` task.
Adding or overriding bundled tasks
==================================
For the sake of an example, suppose that additional processes should be
restarted after deployment. A custom ``deploy`` task follows:
.. code-block:: python

    # ... continuing the fabfile above

    @fl.task
    def deploy(ctx):
        """Deploy once 🔥"""
        fl.deploy(ctx)  # Reuse

        with fl.Connection(fl.config.host) as conn:
            fl.run(conn, "systemctl --user restart other.service")

    ns.add_task(deploy)
.. note::
   Instead of making existing tasks more flexible or configurable, it is
   preferable to contribute better building blocks, or to improve the
   existing building blocks, so that it becomes easier to build customized
   tasks inside projects. For example, if you want to ``fmt`` additional
   paths it is better to build your own ``fmt`` task than to add
   configuration variables to the ``config`` dictionary.
Multiple environments
=====================
If you need multiple environments, add environment tasks as follows:
.. code-block:: python

    import fh_fablib as fl

    fl.require("1.0.20210928")
    fl.config.update(base=fl.Path(__file__).parent, host="www-data@feinheit06.nine.ch")

    environments = [
        fl.environment(
            "production",
            {
                "domain": "example.com",
                "branch": "main",
                "remote": "production",
            },
            aliases=["p"],
        ),
        fl.environment(
            "next",
            {
                "domain": "next.example.com",
                "branch": "next",
                "remote": "next",
            },
            aliases=["n"],
        ),
    ]

    ns = fl.Collection(*fl.GENERAL, *fl.NINE, *environments)
Now, ``fl production pull-db``, ``fl next deploy`` and friends should
work as expected.
Available tasks
===============
``fh_fablib.GENERAL``
~~~~~~~~~~~~~~~~~~~~~
- ``bitbucket``: Create a repository on Bitbucket and push the code
- ``check``: Check the coding style
- ``cm``: Compile the translation catalogs
- ``deploy``: Deploy once 🔥
- ``dev``: Run the development server for the frontend and backend
- ``fetch``: Ensure a remote exists for the server and fetch
- ``fmt``: Format the code
- ``freeze``: Freeze the virtualenv's state
- ``github``: Create a repository on GitHub and push the code
- ``hook``: Install the pre-commit hook
- ``local``: Local environment setup
- ``mm``: Update the translation catalogs
- ``pull-db``: Pull a local copy of the remote DB and reset all passwords
- ``pull-media``: Rsync a folder from the remote to the local environment
- ``reset-pw``: Set all user passwords to ``"password"``
- ``update``: Update virtualenv and node_modules to match the lockfiles
- ``upgrade``: Re-create the virtualenv with newest versions of all libraries
``fh_fablib.NINE``
~~~~~~~~~~~~~~~~~~
- ``nine``: Run all nine🌟 setup tasks in order
- ``nine-alias-add``: Add aliases to a nine-manage-vhost virtual host
- ``nine-alias-remove``: Remove aliases from a nine-manage-vhost virtual host
- ``nine-checkout``: Checkout the repository on the server
- ``nine-db-dotenv``: Create a database and initialize the .env.
  Currently assumes that the shell user has superuser rights (either
  through ``PGUSER`` and ``PGPASSWORD`` environment variables or through
  peer authentication)
- ``nine-disable``: Disable a virtual host, dump and remove the DB and
  stop the gunicorn@ unit
- ``nine-reinit-from``: Reinitialize an environment from a different environment
- ``nine-restart``: Restart the application server
- ``nine-ssl``: Activate SSL
- ``nine-unit``: Start and enable a gunicorn@ unit
- ``nine-venv``: Create a venv and install packages from requirements.txt
- ``nine-vhost``: Create a virtual host using nine-manage-vhosts
Building blocks
===============
The following functions may be used to build your own tasks. They cannot
be executed directly from the command line.
Running commands
~~~~~~~~~~~~~~~~~
- ``run(c, ...)``: Wrapper around ``Context.run`` or ``Connection.run``
  which always sets a few useful arguments (``echo=True``, ``pty=True``
  and ``replace_env=False`` at the time of writing)
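The defaulting behaviour can be sketched in plain Python. The stand-in context below is purely illustrative; the real helper delegates to Fabric's ``Context.run`` or ``Connection.run``:

.. code-block:: python

    def run(c, *args, **kwargs):
        # Fill in useful defaults, but let explicit keyword arguments win.
        kwargs.setdefault("echo", True)
        kwargs.setdefault("pty", True)
        kwargs.setdefault("replace_env", False)
        return c.run(*args, **kwargs)


    class FakeContext:
        """Illustrative stand-in for a Fabric Context; just echoes its kwargs."""

        def run(self, command, **kwargs):
            return kwargs


    print(run(FakeContext(), "ls -l"))
    # {'echo': True, 'pty': True, 'replace_env': False}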
Checks
~~~~~~
- ``_check_flake8(ctx)``: Run ``venv/bin/flake8``
- ``_check_django(ctx)``: Run Django's checks
- ``_check_prettier(ctx)``: Check whether the frontend code conforms to
prettier's formatting
- ``_check_eslint(ctx)``: Run ESLint
- ``_check_large_files(ctx)``: Check whether the commit would add large
  files.
- ``_check_branch(ctx)``: Terminates if the checked-out branch does not
  match the configuration.
- ``_check_no_uncommitted_changes(ctx)``: Terminates if there are
  uncommitted changes on the server.
Formatters
~~~~~~~~~~
- ``_fmt_pyupgrade(ctx)``: Run ``pyupgrade``
- ``_fmt_black(ctx)``: Run ``black``
- ``_fmt_isort(ctx)``: Run ``isort``
- ``_fmt_djlint(ctx)``: Run ``djLint``
- ``_fmt_prettier(ctx)``: Run ``prettier``
- ``_fmt_tox_style(ctx)``: Run ``tox -e style``
Helpers
~~~~~~~
- ``_local_env(path=".env")``: ``speckenv.env`` for a local env file
- ``_srv_env(conn, path)``: ``speckenv.env`` for a remote env file
- ``_python3()``: Return the path of a Python 3 executable. Prefers
  newer Python versions.
- ``_local_dotenv_if_not_exists()``: Ensure a local ``.env`` with a few
  default values exists. Does nothing if ``.env`` exists already.
- ``_local_dbname()``: Ensure a local ``.env`` exists and return the
  database name.
- ``_dbname_from_dsn(dsn)``: Extract the database name from a DSN.
- ``_dbname_from_domain(domain)``: Mangle the domain to produce a string
  suitable as a database name, database user and cache key prefix.
- ``_concurrently(ctx, jobs)``: Run a list of shell commands
  concurrently and wait for all of them to terminate (or Ctrl-C).
- ``_random_string(length, chars=None)``: Return a random string of
  length, suitable for generating secret keys etc.
- ``require(version)``: Terminate if fh_fablib is older.
- ``terminate(msg)``: Terminate processing with an error message.
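For illustration, a helper like ``_random_string`` could look as follows. This is a sketch, not the bundled implementation; in particular the default character set is an assumption:

.. code-block:: python

    import secrets
    import string


    def random_string(length, chars=None):
        # The default character set is an assumption; the bundled helper
        # may use a different one.
        chars = chars or (string.ascii_letters + string.digits)
        return "".join(secrets.choice(chars) for _ in range(length))

Using ``secrets`` rather than ``random`` matters here because the result is meant to be usable as a secret key.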
Deployment
~~~~~~~~~~
- ``_deploy_django``: Update the Git checkout, update the virtualenv.
- ``_deploy_staticfiles``: Collect staticfiles.
- ``_rsync_static``: rsync the local ``static/`` folder to the remote,
  optionally deleting everything which doesn't exist locally.
- ``_nine_restart``: Restart the systemd control unit.
Recommended configuration files
===============================
``.editorconfig``
~~~~~~~~~~~~~~~~~
::

    # top-most EditorConfig file
    root = true

    [*]
    end_of_line = lf
    insert_final_newline = true
    charset = utf-8
    trim_trailing_whitespace = true
    indent_style = space
    indent_size = 4

    [*.{html,js,scss}]
    indent_size = 2
``.eslintrc.js``
~~~~~~~~~~~~~~~~
::

    module.exports = {
      env: {
        browser: true,
        es2020: true,
        node: true,
      },
      extends: [
        "eslint:recommended",
        "prettier",
        "preact",
        // "prettier/react",
        // "plugin:react/recommended",
      ],
      parser: "babel-eslint",
      parserOptions: {
        ecmaFeatures: {
          experimentalObjectRestSpread: true,
          jsx: true,
        },
        sourceType: "module",
      },
      plugins: [
        // "react",
        // "react-hooks",
      ],
      rules: {
        "no-unused-vars": [
          "error",
          {
            argsIgnorePattern: "^_",
            varsIgnorePattern: "React|Fragment|h|^_",
          },
        ],
        // "react/prop-types": "off",
        // "react/display-name": "off",
        // "react-hooks/rules-of-hooks": "warn", // Checks rules of Hooks
        // "react-hooks/exhaustive-deps": "warn", // Checks effect dependencies
      },
      settings: {
        react: {
          version: "detect",
        },
      },
    }
``setup.cfg``
~~~~~~~~~~~~~
::

    [flake8]
    exclude=venv,build,docs,.tox,migrate,migrations,node_modules
    ignore=E203,W503
    max-line-length=88
    max-complexity=10
``package.json``
~~~~~~~~~~~~~~~~
::

    {
      "name": "feinheit.ch",
      "description": "feinheit",
      "version": "0.0.1",
      "private": true,
      "dependencies": {
        "babel-eslint": "^10.0.3",
        "eslint": "^7.7.0",
        "eslint-config-prettier": "^6.11.0",
        "fh-webpack-config": "^1.0.7",
        "prettier": "^2.1.1"
      },
      "eslintIgnore": [
        "app/static/app/lib/",
        "app/static/app/plugin_buttons.js"
      ]
    }
``webpack.config.js``
~~~~~~~~~~~~~~~~~~~~~
::

    const merge = require("webpack-merge")
    const config = require("fh-webpack-config")

    module.exports = merge.smart(
      config.commonConfig,
      // config.preactConfig,
      // config.reactConfig,
      config.chunkSplittingConfig
    )
.. SPDX-License-Identifier: GPL-2.0
==========================================
Notes on register bank usage in the kernel
==========================================
Introduction
------------
The SH-3 and SH-4 CPU families traditionally include a single partial register
bank (selected by SR.RB, only r0 ... r7 are banked), whereas other families
may have more full-featured banking or simply no such capabilities at all.
SR.RB banking
-------------
In the case of this type of banking, banked registers are mapped directly to
r0 ... r7 if SR.RB is set to the bank we are interested in, otherwise ldc/stc
can still be used to reference the banked registers (as r0_bank ... r7_bank)
when in the context of another bank. The developer must keep the SR.RB value
in mind when writing code that utilizes these banked registers, for obvious
reasons. Userspace is also not able to poke at the bank1 values, so these can
be used rather effectively as scratch registers by the kernel.
Presently the kernel uses several of these registers.
- r0_bank, r1_bank (referenced as k0 and k1, used for scratch
  registers when doing exception handling).
- r2_bank (used to track the EXPEVT/INTEVT code)

  - Used by do_IRQ() and friends for doing irq mapping based off
    of the interrupt exception vector jump table offset

- r6_bank (global interrupt mask)

  - The SR.IMASK interrupt handler makes use of this to set the
    interrupt priority level (used by local_irq_enable())

- r7_bank (current)
=====
char'
=====
This library provides a vocabulary for simple operations on ASCII characters.
-------
Loading
-------
The following should suffice:
::

    needs char'
--------
Examples
--------
::

    97 ^char'isChar?
    'a dup ^char'isUpper? [ ^char'toLower ] ifTrue
---------
Functions
---------
+---------------+---------+------------------------------------------------+
| Name | Stack | Usage |
+===============+=========+================================================+
| isChar? | c-f | Return true if a given value is an alphabetic |
| | | character (A-Z or a-z). If not, return false |
+---------------+---------+------------------------------------------------+
| isUpper? | c-f | Return true if character is uppercase, false |
| | | otherwise |
+---------------+---------+------------------------------------------------+
| isLower? | c-f | Return true if character is lowercase, false |
| | | otherwise |
+---------------+---------+------------------------------------------------+
| isNumeric? | c-f | Return true if character is between 0 - 9 |
| | | inclusive, or false otherwise |
+---------------+---------+------------------------------------------------+
| isWhitespace? | c-f | Return true if character is a space, tab, or |
| | | end of line. Returns false otherwise |
+---------------+---------+------------------------------------------------+
| isVisible? | c-f | Return true if character is visible, or false |
| | | if it is a control-type character |
+---------------+---------+------------------------------------------------+
| toUpper | c-c | Convert a lowercase character to uppercase. |
| | | This will only work on a lowercase character. |
+---------------+---------+------------------------------------------------+
| toLower       | c-c     | Convert an uppercase character to lowercase.   |
| | | This will only work on an uppercase character. |
+---------------+---------+------------------------------------------------+
| toString | c-$ | Convert a character into a string |
+---------------+---------+------------------------------------------------+
.. column::
   :offset: 6

   Column offset should be applied when using default size and width.
Save markers
============
The ``save-markers`` tool outputs the images of all the fiducial markers of a given type.

Each marker is surrounded by a white border, which is not considered part of the marker (it's not counted when working out the marker's size).

When ``--raw`` is passed, markers are output as PNG files at their smallest possible size. They can then be resized as necessary without losing quality.

Without ``--raw``, images are saved at 500px, plus a border with text identifying which marker is being used.
###############
pyPodcastParser
###############
|pypi| |pip_monthly| |testing| |coverall| |codacy| |license|
pyPodcastParser is a podcast parser. It should parse any RSS file, but it specializes in parsing podcast rss feeds. pyPodcastParser is agnostic about the method you use to get a podcast RSS feed. Most user will be comfortable with the Requests_ library.
.. _Requests: http://docs.python-requests.org/en/latest/
************
Installation
************
::

    pip install pyPodcastParser
*****
Usage
*****
::

    from pyPodcastParser.Podcast import Podcast
    import requests

    response = requests.get('https://some_rss_feed')
    podcast = Podcast(response.content)
===================================
Objects and their Useful Attributes
===================================
**Notes:**
* All attributes with an empty or nonexistent element will have a value of None.
* Attributes are generally strings or lists of strings, because we want to record the literal value of elements.
* The cloud element aka RSS Cloud is not supported as it has been superseded by the superior PubSubHubbub protocol
-------
Podcast
-------
* categories (list) A list for strings representing the feed categories
* copyright (string): The feed's copyright
* creative_commons (string): The feed's creative commons license
* items (list): A list of Item objects
* description (string): The feed's description
* generator (string): The feed's generator
* image_title (string): Feed image title
* image_url (string): Feed image url
* image_link (string): Feed image link to homepage
* image_width (string): Feed image width
* image_height (string): Feed image height
* itunes_author_name (string): The podcast's author name for iTunes
* itunes_block (boolean): Does the podcast block itunes
* itunes_categories (list): List of strings of itunes categories
* itunes_complete (string): Is this podcast done and complete
* itunes_explicit (string): Is this item explicit. Should only be "yes" or "clean."
* itune_image (string): URL to itunes image
* itunes_keywords (list): List of strings of itunes keywords
* itunes_new_feed_url (string): The new url of this podcast
* language (string): Language of feed
* last_build_date (string): Last build date of this feed
* link (string): URL to homepage
* managing_editor (string): managing editor of feed
* published_date (string): Date feed was published
* pubsubhubbub (string): The URL of the pubsubhubbub service for this feed
* owner_name (string): Name of feed owner
* owner_email (string): Email of feed owner
* subtitle (string): The feed subtitle
* title (string): The feed title
* ttl (string): The time to live or number of minutes to cache feed
* web_master (string): The feed's webmaster
* date_time (datetime): When published
----
Item
----
* author (string): The author of the item
* comments (string): URL of comments
* creative_commons (string): creative commons license for this item
* description (string): Description of the item.
* enclosure_url (string): URL of enclosure
* enclosure_type (string): File MIME type
* enclosure_length (integer): File size in bytes
* guid (string): globally unique identifier
* itunes_author_name (string): Author name given to iTunes
* itunes_block (boolean): Is this Item blocked from itunes
* itunes_closed_captioned (string): Does this item have closed captions
* itunes_duration (string): Duration of enclosure
* itunes_explicit (string): Is this item explicit. Should only be "yes" or "clean."
* itune_image (string): URL of item cover art
* itunes_order (string): Override published_date order
* itunes_subtitle (string): The item subtitle
* itunes_summary (string): The summary of the item
* link (string): The URL of item.
* published_date (string): Date item was published
* title (string): The title of item.
* date_time (datetime): When published
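As a small worked example around these attributes: feeds publish ``itunes_duration`` as a string that may be plain seconds (``"90"``) or colon-separated (``"MM:SS"`` or ``"HH:MM:SS"``). The helper below is our own sketch for normalizing it to seconds; it is not part of pyPodcastParser:

::

    def duration_in_seconds(itunes_duration):
        """Convert an itunes_duration value to an integer number of seconds."""
        if itunes_duration is None:
            return None
        seconds = 0
        # Works for "SS", "MM:SS" and "HH:MM:SS" alike.
        for part in str(itunes_duration).split(":"):
            seconds = seconds * 60 + int(part)
        return seconds


    print(duration_in_seconds("01:02:03"))  # 3723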
***********************
Bugs & Feature Requests
***********************
https://github.com/jrigden/pyPodcastParser/issues/new
*******
Credits
*******
============
Jason Rigden
============
**Email:** jasonrigden@gmail.com
**Linkedin:** https://www.linkedin.com/in/jasonrigden
**Twitter:** |twitter|
*******
History
*******
**Version 2.0.0**
* Removed most time attributes and replaced them with a more concise and versatile datetime object
**Version 1.1.1**
* Fixed missed named attribute in items
**Version 1.1.0**
* Added Validation for RSS and podcasts
* Added several useful time attributes
***********
Development
***********
https://github.com/jrigden/pyPodcastParser
****
Docs
****
http://pypodcastparser.readthedocs.org/en/latest/
*******
License
*******
**The MIT License** (MIT) Copyright (c) 2016 **Jason Rigden**
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
.. |coverall| image:: https://coveralls.io/repos/github/zosis/pyPodcastParser/badge.svg?branch=master
   :alt: Test Status
   :scale: 100%
   :target: https://coveralls.io/github/zosis/pyPodcastParser?branch=master

.. |codacy| image:: https://app.codacy.com/project/badge/Grade/dd6ec457bb6b4d709d69b34da14511fa
   :alt: Codacy Grade
   :scale: 100%
   :target: https://www.codacy.com/gh/zosis/pyPodcastParser

.. |docs| image:: https://readthedocs.org/projects/docs/badge/?version=latest
   :alt: Documentation Status
   :scale: 100%
   :target: https://pypodcastparser.readthedocs.org/en/latest/?badge=latest

.. |license| image:: https://img.shields.io/pypi/l/pyloudness.svg
   :alt: License
   :scale: 100%
   :target: https://opensource.org/licenses/MIT

.. |pypi| image:: https://badge.fury.io/py/pyPodcastParser.svg
   :alt: pypi
   :scale: 100%
   :target: https://pypi.python.org/pypi/pyPodcastParser

.. |pip_monthly| image:: https://img.shields.io/pypi/dm/pyPodcastParser.svg
   :alt: Pip Monthly Downloads
   :scale: 100%
   :target: https://pypi.python.org/pypi/pyPodcastParser

.. |testing| image:: https://travis-ci.org/zosis/pyPodcastParser.svg?branch=master
   :alt: Test Status
   :scale: 100%
   :target: https://travis-ci.org/zosis/pyPodcastParser

.. |twitter| image:: https://img.shields.io/twitter/follow/mr_rigden.svg?style=social
   :alt: @mr_rigden
   :scale: 100%
   :target: https://twitter.com/mr_rigden
:target: https://twitter.com/mr_rigden
| 34.478469 | 460 | 0.721898 |
977c9e50bd2403cf7bc878f2f0f05fe71e831f79 | 217 | rst | reStructuredText | docs/source/_summaries/grape.general_graph.GeneralGraph.compute_local_efficiency.rst | ndem0/GRAPE | 6d82183b6040de1af1b1524b8553925c88bd7054 | [
"MIT"
] | 8 | 2019-04-03T08:48:06.000Z | 2021-04-22T07:41:41.000Z | docs/source/_summaries/grape.general_graph.GeneralGraph.compute_local_efficiency.rst | ndem0/GRAPE | 6d82183b6040de1af1b1524b8553925c88bd7054 | [
"MIT"
] | 6 | 2020-11-24T10:31:37.000Z | 2022-01-13T16:02:05.000Z | docs/source/_summaries/grape.general_graph.GeneralGraph.compute_local_efficiency.rst | ndem0/GRAPE | 6d82183b6040de1af1b1524b8553925c88bd7054 | [
"MIT"
] | 11 | 2019-04-02T16:18:21.000Z | 2021-11-07T14:18:51.000Z | grape.general\_graph.GeneralGraph.compute\_local\_efficiency
============================================================
.. currentmodule:: grape.general_graph
.. automethod:: GeneralGraph.compute_local_efficiency

.. _module:
*********************
Module
*********************
:mod:`transfer`
===============
Part of the sphinx build process is to generate an index file: :ref:`genindex`.
.. automodule:: transfer
   :members:
   :undoc-members:
   :platform: Unix, Windows
   :synopsis: A module to transfer alpha and colour information from one image to another.
.. moduleauthor:: Arne Rantzen <arne@rantzen.net>
Release procedure
==================
- Merge pyup PR updating the dependencies
- Update personal fork & local master to Axfoundation fork
- Edit and commit HISTORY.md
- bumpversion minor
- Push to personal and AxFoundation fork, with --tags
- Fast-forward and push AxFoundation/stable
- Add release info on release page of github website

.. _api.core:
====================
Core Landlab Classes
====================
landlab.core.messages module
----------------------------
.. automodule:: landlab.core.messages
   :members:
   :undoc-members:
   :show-inheritance:

landlab.core.model\_component module
------------------------------------

.. automodule:: landlab.core.model_component
   :members:
   :undoc-members:
   :show-inheritance:

landlab.core.model\_parameter\_loader module
--------------------------------------------

.. automodule:: landlab.core.model_parameter_loader
   :members:
   :undoc-members:
   :show-inheritance:

landlab.core.utils module
-------------------------

.. automodule:: landlab.core.utils
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: landlab.core
   :members:
   :undoc-members:
   :show-inheritance:
.. figure:: ../pyraysum/examples/picture/PyRaysum_logo.png
:align: center
Documentation
=============
``PyRaysum`` is a Python wrapper around the Fortran software ``Raysum``, originally developed by `Andrew Frederiksen <https://umanitoba.ca/faculties/environment/departments/geo_sciences/research_facilities/AndrewFrederiksen.html>`_ in collaboration with `Michael Bostock <https://www.eoas.ubc.ca/people/michaelbostock>`_. This program generates sets of ray-theoretical seismograms for an incident plane wave (teleseismic approximation) for models consisting of a stack of layers with planar (but possibly non-parallel and dipping) interfaces, allowing the flexibility of adding seismic anisotropy in the layers. Incident P and S waves are supported. The code include methods to process the synthetic seismograms for receiver function calculation and for plotting.
``PyRaysum`` is bundled with a trimmed version of the original Fortran software and provides a script to compile and install Raysum, as well as functions to interact with the software and generate plots of models and seismograms. Common computational workflows are covered in the ``Jupyter`` notebooks bundled with this package.
.. image:: https://github.com/paudetseis/PyRaysum/workflows/Build/badge.svg
   :target: https://github.com/paudetseis/PyRaysum/actions

.. image:: https://codecov.io/gh/paudetseis/PyRaysum/branch/main/graph/badge.svg
   :target: https://codecov.io/gh/paudetseis/PyRaysum
.. toctree::
   :maxdepth: 1
   :caption: Quick Links

   links

.. toctree::
   :maxdepth: 1
   :caption: Getting Started

   init

.. toctree::
   :maxdepth: 1
   :caption: API

   api

.. toctree::
   :maxdepth: 1
   :caption: Jupyter Notebooks

   Example 1: Reproducing Figure 2 in Porter et al. (2011) <https://nbviewer.jupyter.org/github/paudetseis/PyRaysum/blob/main/pyraysum/examples/notebooks/sim_Porter2011.ipynb>
| 46.414634 | 764 | 0.768786 |
2e913dbe5bc30c18d51e8f69e6ac7d92a74cb03c | 345 | rst | reStructuredText | source/api_doc/utils/lock_helper.rst | timothijoe/DI-engine-docs | e8607933e0e7ea0056aa9c95ac27bd731333310e | [
"Apache-2.0"
] | 56 | 2021-07-10T11:16:33.000Z | 2022-03-31T03:09:27.000Z | source/api_doc/utils/lock_helper.rst | timothijoe/DI-engine-docs | e8607933e0e7ea0056aa9c95ac27bd731333310e | [
"Apache-2.0"
] | 17 | 2021-07-19T09:46:50.000Z | 2022-03-31T02:22:47.000Z | source/api_doc/utils/lock_helper.rst | timothijoe/DI-engine-docs | e8607933e0e7ea0056aa9c95ac27bd731333310e | [
"Apache-2.0"
] | 34 | 2021-07-13T02:37:18.000Z | 2022-03-22T07:55:04.000Z | utils.lock_helper
===================
lock_helper
-----------------
Please Reference ding/ding/utils/lock_helper.py for usage.
LockContext
~~~~~~~~~~~~~~~~~
.. autoclass:: ding.utils.lock_helper.LockContext
   :members: __init__, __enter__, __exit__
get_rw_file_lock
~~~~~~~~~~~~~~~~~
.. automodule:: ding.utils.lock_helper.get_rw_file_lock
| 21.5625 | 58 | 0.64058 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.