.. .. coding=utf-8
ModelScript
===========
ModelScript is a lightweight modeling environment based on:
* a dozen small textual languages called model scripts
* an underlying (yet truly optional) modeling methodology.
The image below shows the dependency graph between the
different ModelScript languages:
.. image:: media/language-graph.png
:align: center
Objectives
----------
ModelScript is intended to be used in classrooms.
ModelScript is currently developed to support courses
at the `University of Grenoble Alpes`_. Course topics include:
* software engineering,
* database design,
* user interface design,
* model driven engineering,
* information systems.
ModelScript1, first prototype
-----------------------------
The current version of ModelScript, referred to as ModelScript1, is
a very first prototype. **ModelScript1 is limited to syntax checking,
apart from three languages** that are based on the `USE OCL`_ tool.
The next version, under development, will:
* implement the full language semantics of all languages,
* replace `USE OCL`_ languages by custom ones,
* add additional features such as automated document generation.
The languages *ClassScript*, *ObjectScript* and *ScenarioScript*
are currently implemented using `USE OCL`_. These languages are named
`ClassScript1`, `ObjectScript1` and `ScenarioScript1`. Each of these
languages is based on a few annotations embedded in `USE OCL`_ comments.
Note also that TaskScript is just a convenient alias here for the
language underlying the Kmade_ environment from the University of Nancy.
TaskScript is the only language with no textual syntax.
ModelScript ecosystem
---------------------
ModelScript languages are listed below. All languages exist in the form
of a textual syntax (some of them having a graphical syntax as well), except
for task diagrams, which have only a graphical syntax.
+--------------------------------+--------------------------------------------------------------------+
| **language** | **main concepts** |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`GlossaryScript` | *entries, packages, synonyms, translations...* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`ClassScript1` [#u]_ | *classes, attributes, inheritance, associations, invariants...* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`ObjectScript1` [#u]_ | *objects, slots, links, link objects* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`RelationScript` | *relations, columns, keys, constraints, dependencies...* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`ParticipantScript` | *actors, personas, persons, roles, stakeholder* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`UsecaseScript` | *actors, usecases, interactions* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`TaskScript` [#k]_ | *tasks, task decomposition, task decorations...* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`AUIScript` | *spaces, links, concept references* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`PermissionScript` | *subjects, resources, permissions, actions* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`ScenarioScript1` | *scenarios, contexts, usecase instances, persona, step...* |
+--------------------------------+--------------------------------------------------------------------+
.. ..
| :ref:`QAScript` | QA=Quality Assurance ; *rules, enforcements, packages* |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`QCScript` | QC=Qualirt Control ; checks, audits |
+--------------------------------+--------------------------------------------------------------------+
| :ref:`ProjectScript` | *processes, stakeholders, tasks, tasks dependency...* |
+--------------------------------+--------------------------------------------------------------------+
.. [#u] :ref:`ClassScript1`, :ref:`ObjectScript1` and
:ref:`ScenarioScript1` are currently annotated versions of the
`USE OCL`_ language.
.. [#k] The Kmade_ modeling environment is used for task models.
There is no textual syntax. "TaskScript" is just used here for the
sake of consistency.
Language graph
--------------
**TODO**
.. _`USE OCL`: http://sourceforge.net/projects/useocl/
.. _Kmade: https://forge.lias-lab.fr/projects/kmade
.. _`University of Grenoble Alpes`: https://www.univ-grenoble-alpes.fr/
.. _`ScribesTools/UseOCL`:
http://scribestools.readthedocs.org/en/latest/useocl/index.html
=====================
Communication
=====================
Within the teams
================
To contact all organizers, email scipy-organizers [AT] scipy [DOT]
org.
Weekly phone calls are scheduled with both co-chairs. Anyone is
welcome to join.
Internally, task management is done using an instance of Redmine
hosted by Enthought for now. Its goal is to allow management of all future
Scipy conferences. The project is called scipy-conf.
The current managers of the Redmine instance include jrocher
[AT] enthought.com, bmurphy [AT] enthought.com, mtranby [AT]
enthought.com, and andy.terrel [AT] gmail.com.
Outreach
========
To inform the outside world about the conference,
* a Scipy conference twitter account has been created. Andy Terrel (andy.terrel
[AT] gmail.com) is the owner and should be contacted about password.
username: SciPyConf
* a google+ page ScipyConference has been created and can be reused
every year:
https://plus.google.com/u/0/b/100948873231627513165/
The current owner of the page is jonathanrocher [AT] gmail.com. Others are
currently managers: bmurphy [AT] enthought.com, mtranby [AT]
enthought.com, and andy.terrel [AT] gmail.com.
* a mailing list with all participants will be created: ?? scipy2013
[AT] scipy [DOT] org ??
* To increase diversity at the conference, there are women coder
groups that we want to make sure to contact including PyLadies,
LadyCoders and CodeChix.
Advertising
===========
Online websites that people read are the best places to advertise the
conference:
* scipy.org
* numfocus.org
* python.org
* Enthought.com
Magazine-type sites are even more effective:
* Slashdot
* Hacker News (don't think they do ads)
* Reddit.com (r/programming or r/python)
* Stack Overflow (especially http://scicomp.stackexchange.com/ )
* Ars Technica
* Wired
* NA-digest (http://www.netlib.org/na-digest-html/)
You might also try to organize joint advertising with other related
conferences:
* PyCon (in all its state and international flavors)
* PyData
* OSCON
* SuperComputing
Not very many developers read journals or magazines, though
occasionally people follow:
* Communications of the ACM
* SIAM News (http://www.siam.org/news/)
Mailing lists (unsent):
* astropy: http://mail.scipy.org/mailman/listinfo/astropy
* sunpy: https://groups.google.com/forum/#!forum/sunpy
* spacepy: spacepy-announce@lanl.gov
* cosmolopy: https://groups.google.com/forum/?fromgroups#!forum/cosmolopy-devel
Mailing lists (sent, by):
* numfocus: https://groups.google.com/forum/?fromgroups#!forum/numfocus (AMS)
* scipy-user: http://mail.scipy.org/mailman/listinfo/scipy-user (AMS)
* numpy: http://mail.scipy.org/mailman/listinfo/numpy-discussion (AMS)
* pydata: https://groups.google.com/forum/?fromgroups#!forum/pydata (AMS)
* statsmodels: https://groups.google.com/forum/?fromgroups#!forum/pystatsmodels (AMS)
* matplotlib: https://lists.sourceforge.net/lists/listinfo/matplotlib-users (AMS)
* enthought-dev: https://mail.enthought.com/mailman/listinfo/enthought-dev (AMS)
* yt: http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org (AMS)
* IPython developers list: ipython-dev@scipy.org (AMS)
* PyTables Users: pytables-users@lists.sourceforge.net (AMS)
* PyTables Dev: pytables-dev@googlegroups.com (AMS)
* Python Conferences: conferences@python.org (AMS)
* Cython Users cython-users@googlegroups.com (AMS)
* PyNE Dev: pyne-dev@googlegroups.com (AMS)
* PyNE Users: pyne-users@googlegroups.com (AMS)
* SymPy: sympy@googlegroups.com (AMS)
* VisIt software users community: visit-users@elist.ornl.gov (AMS)
* Numba Users numba-users@continuum.io (AMS)
* Theano Users: theano-users@googlegroups.com (AMS)
* scikits-image: https://groups.google.com/forum/?fromgroups#!forum/scikit-image (AMS)
* scikit learn general: scikit-learn-general@lists.sourceforge.net (AMS)
* Software Carpentry Discussion: discuss@lists.software-carpentry.org (AMS)
* Austin Python: austin-python@googlegroups.com (AMS)
* APUG Mailing List: python-188@meetup.com (AMS)
* nipy: http://mail.scipy.org/mailman/listinfo/nipy-devel (MMM)
* itk: http://www.itk.org/mailman/listinfo/insight-users (MMM)
* vtk: vtkusers@vtk.org (MMM)
* debianmed: http://lists.debian.org/debian-med/ (MMM)
* nitrc: http://www.nitrc.org/incf/event_list.php (MMM)
* slicer: http://massmail.bwh.harvard.edu/mailman/listinfo/slicer-devel (MMM)
* pyaos: http://lists.johnny-lin.com/listinfo.cgi/pyaos-johnny-lin.com (JR)
* galaxy: http://user.list.galaxyproject.org/ (MMM)
* biopython: http://lists.open-bio.org/mailman/listinfo/biopython/ (MMM)
Spdylay - SPDY C Library
========================
This is an experimental implementation of Google's SPDY protocol in C.
This library provides the SPDY version 2, 3 and 3.1 framing layer
implementation. It does not perform any I/O operations. When the
library needs them, it calls the callback functions provided by the
application. It also does not include any event polling mechanism, so
the application can freely choose the way of handling events. This
library code does not depend on any particular SSL library (except for
the example programs, which depend on OpenSSL 1.0.1 or later).
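To make this callback-driven, I/O-free design concrete, here is a
self-contained toy model of it in C. All names below (``toy_session``,
``toy_send_cb`` and so on) are invented for illustration only and are
*not* the real Spdylay API; the actual types and callbacks are declared
in *spdylay/spdylay.h*.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Toy model of a callback-driven, I/O-free framing library.  All names
 * here are invented for illustration -- this is NOT the real Spdylay
 * API (see spdylay/spdylay.h for that). */
typedef struct toy_session toy_session;

/* The application supplies this callback; the "library" calls it
 * whenever it has serialized bytes to write, instead of doing I/O. */
typedef ssize_t (*toy_send_cb)(toy_session *s, const uint8_t *buf,
                               size_t len, void *user_data);

struct toy_session {
  toy_send_cb send_cb;
  void *user_data;   /* opaque pointer handed back to the callback */
};

/* "Serialize" a frame and hand the bytes to the application. */
static int toy_session_send(toy_session *s, const char *frame) {
  size_t len = strlen(frame);
  ssize_t n = s->send_cb(s, (const uint8_t *)frame, len, s->user_data);
  return n == (ssize_t)len ? 0 : -1;
}

/* An application callback that appends bytes to a buffer; a real
 * application would instead write to a socket from its own event loop. */
static ssize_t collect_cb(toy_session *s, const uint8_t *buf, size_t len,
                          void *user_data) {
  (void)s;
  strncat((char *)user_data, (const char *)buf, len);
  return (ssize_t)len;
}
```

The point of the pattern is that the library only serializes frames and
hands the bytes to the application's callback; the application decides
when and how to perform the actual ``write(2)`` from its own event loop.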
This project also develops a SPDY client, server and proxy on top of
the Spdylay library. See the `SPDY Client and Server Programs`_ section.
Development Status
------------------
Most of the SPDY/2, SPDY/3 and SPDY/3.1 functionality has been
implemented. In all versions, direct support for server push is not
yet available, though the application can achieve server push using
primitive APIs.
As described below, we can create SPDY client and server with the
current Spdylay API.
Requirements
------------
The following packages are needed to build the library:
* pkg-config >= 0.20
* zlib >= 1.2.3
To build and run the unit test programs, the following packages are
needed:
* cunit >= 2.1
To build and run the example programs, the following packages are
needed:
* OpenSSL >= 1.0.1
To enable the ``-a`` option (getting linked assets from the downloaded
resource) in ``spdycat`` (one of the example programs), the following
packages are needed:
* libxml2 >= 2.7.7
To build the SPDY/HTTPS to HTTP reverse proxy ``shrpx`` (one of the
example programs), the following packages are needed:
* libevent-openssl >= 2.0.8
If you are using Ubuntu 12.04, you need the following packages
installed::
$ apt-get install autoconf automake autotools-dev libtool pkg-config zlib1g-dev libcunit1-dev libssl-dev libxml2-dev libevent-dev
Build from git
--------------
Building from git is easy, but please be sure that at least autoconf 2.68 is
used::
$ autoreconf -i
$ automake
$ autoconf
$ ./configure
$ make
Building documentation
----------------------
To build documentation, run::
$ make html
The documents will be generated under ``doc/manual/html/``.
The generated documents will not be installed with ``make install``.
Building Android binary
------------------------
In this section, we briefly describe how to build Android binary using
`Android NDK <http://developer.android.com/tools/sdk/ndk/index.html>`_
cross-compiler on Debian Linux.
We offer ``android-config`` and ``android-make`` scripts to make the
build easier. To make these scripts work, the NDK toolchain must be
installed in the following way. First, let us introduce the
``ANDROID_HOME`` environment variable. A user can freely choose the
path for ``ANDROID_HOME``. For example, to install the toolchain under
``$ANDROID_HOME/toolchain``, run this in the directory where the NDK is
unpacked::
$ build/tools/make-standalone-toolchain.sh --platform=android-9 --install-dir=$ANDROID_HOME/toolchain
The platform level is not important here because we don't use any
Android-specific C/C++ APIs.
The dependent libraries, such as OpenSSL and libevent, should be built
with the toolchain and installed under ``$ANDROID_HOME/usr/local``.
We recommend building these libraries as static libraries to make
deployment easier. libxml2 support is currently disabled.
We use the zlib which comes with the Android NDK, so we don't have to
build it ourselves.
Before running ``android-config`` and ``android-make``, the
``ANDROID_HOME`` environment variable must be set to point to the
correct path.

After ``android-config``, run ``android-make`` to compile the sources.
``android-make`` merely puts the path to the cross compiler into
``PATH`` and runs make. So if you add the path to the cross compiler
to ``PATH`` yourself, you can just run make to build spdylay and the
tools as usual.
API
---
The public API reference is available online. Visit
http://tatsuhiro-t.github.io/spdylay/. All public APIs are in
*spdylay/spdylay.h*. All public API functions as well as the callback
function typedefs are documented.
SPDY Client and Server Programs
-------------------------------
The *src* directory contains SPDY client and server implementations
using the Spdylay library. These programs are intended to make sure that
the Spdylay API is actually usable for real implementations and also for
debugging purposes. Please note that OpenSSL with `NPN
<http://technotes.googlecode.com/git/nextprotoneg.html>`_ support is
required in order to build and run these programs. At the time of
this writing, OpenSSL 1.0.1 supports NPN.
Spdycat - SPDY client
+++++++++++++++++++++
The SPDY client is called ``spdycat``. It is a dead-simple downloader,
like wget/curl. It connects to a SPDY server and gets the resources
given on the command line::
$ src/spdycat -h
Usage: spdycat [-Oansv23] [-t <SECONDS>] [-w <WINDOW_BITS>] [--cert=<CERT>]
[--key=<KEY>] [--no-tls] [-d <FILE>] [-m <N>] [-p <PROXY_HOST>]
[-P <PROXY_PORT>] <URI>...
OPTIONS:
-v, --verbose Print debug information such as reception/
transmission of frames and name/value pairs.
-n, --null-out Discard downloaded data.
-O, --remote-name Save download data in the current directory.
The filename is dereived from URI. If URI
ends with '/', 'index.html' is used as a
filename. Not implemented yet.
-2, --spdy2 Only use SPDY/2.
-3, --spdy3 Only use SPDY/3.
--spdy3-1 Only use SPDY/3.1.
-t, --timeout=<N> Timeout each request after <N> seconds.
-w, --window-bits=<N>
Sets the initial window size to 2**<N>.
-a, --get-assets Download assets such as stylesheets, images
and script files linked from the downloaded
resource. Only links whose origins are the
same with the linking resource will be
downloaded.
-s, --stat Print statistics.
-H, --header Add a header to the requests.
--cert=<CERT> Use the specified client certificate file.
The file must be in PEM format.
--key=<KEY> Use the client private key file. The file
must be in PEM format.
--no-tls Disable SSL/TLS. Use -2, -3 or --spdy3-1 to
specify SPDY protocol version to use.
-d, --data=<FILE> Post FILE to server. If - is given, data
will be read from stdin.
-m, --multiply=<N> Request each URI <N> times. By default, same
URI is not requested twice. This option
disables it too.
-p, --proxy=<HOST> Use this host as a SPDY proxy
-P, --proxy-port=<PORT>
Use this as the port of the SPDY proxy if
one is set
--color Force colored log output.
$ src/spdycat -nv https://www.google.com/
[ 0.021] NPN select next protocol: the remote server offers:
* spdy/4a4
* spdy/3.1
* spdy/3
* http/1.1
NPN selected the protocol: spdy/3.1
[ 0.029] Handshake complete
[ 0.029] recv SETTINGS frame <version=3, flags=0, length=20>
(niv=2)
[4(1):100]
[7(0):1048576]
[ 0.029] recv WINDOW_UPDATE frame <version=3, flags=0, length=8>
(stream_id=0, delta_window_size=983040)
[ 0.029] send SYN_STREAM frame <version=3, flags=1, length=221>
(stream_id=1, assoc_stream_id=0, pri=3)
:host: www.google.com
:method: GET
:path: /
:scheme: https
:version: HTTP/1.1
accept: */*
accept-encoding: gzip, deflate
user-agent: spdylay/1.2.0-DEV
[ 0.080] recv SYN_REPLY frame <version=3, flags=0, length=619>
(stream_id=1)
:status: 302 Found
:version: HTTP/1.1
alternate-protocol: 443:quic
cache-control: private
content-length: 262
content-type: text/html; charset=UTF-8
date: Tue, 19 Nov 2013 13:47:18 GMT
location: https://www.google.co.jp/
server: gws
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
[ 0.080] recv DATA frame (stream_id=1, flags=1, length=262)
[ 0.080] send GOAWAY frame <version=3, flags=0, length=8>
(last_good_stream_id=0)
Spdyd - SPDY server
+++++++++++++++++++
The SPDY server is called ``spdyd`` and serves static files. It is single
threaded and multiplexes connections using non-blocking sockets. The
static files are read using the blocking I/O system call ``read(2)``. It
speaks SPDY/2, SPDY/3 and SPDY/3.1::
$ src/spdyd --htdocs=/your/htdocs/ -v 3000 server.key server.crt
IPv4: listen on port 3000
IPv6: listen on port 3000
The negotiated next protocol: spdy/3.1
[id=1] [ 1.296] send SETTINGS frame <version=3, flags=0, length=12>
(niv=1)
[4(0):100]
[id=1] [ 1.297] recv SYN_STREAM frame <version=3, flags=1, length=228>
(stream_id=1, assoc_stream_id=0, pri=3)
:host: localhost:3000
:method: GET
:path: /README
:scheme: https
:version: HTTP/1.1
accept: */*
accept-encoding: gzip, deflate
user-agent: spdylay/1.2.0-DEV
[id=1] [ 1.297] send SYN_REPLY frame <version=3, flags=0, length=116>
(stream_id=1)
:status: 200 OK
:version: HTTP/1.1
cache-control: max-age=3600
content-length: 66
date: Tue, 19 Nov 2013 14:35:24 GMT
last-modified: Tue, 17 Jan 2012 15:39:01 GMT
server: spdyd spdylay/1.2.0-DEV
[id=1] [ 1.297] send DATA frame (stream_id=1, flags=0, length=66)
[id=1] [ 1.297] send DATA frame (stream_id=1, flags=1, length=0)
[id=1] [ 1.297] stream_id=1 closed
[id=1] [ 1.297] recv GOAWAY frame <version=3, flags=0, length=8>
(last_good_stream_id=0)
[id=1] [ 1.297] closed
Currently, ``spdyd`` needs ``epoll`` or ``kqueue``.
Shrpx - A reverse proxy for SPDY/HTTPS
++++++++++++++++++++++++++++++++++++++
For shrpx users who use shrpx as a SPDY proxy: please consider
migrating to nghttpx, developed at the `nghttp2 project
<https://nghttp2.org>`_. nghttpx supports SPDY proxy too.
``shrpx`` is a multi-threaded reverse proxy for SPDY/HTTPS. It
converts SPDY/HTTPS traffic to plain HTTP. It was initially developed
as a reverse proxy, but it now has other operation modes, such as a
frontend forward proxy. For example, with the ``--spdy-proxy`` (``-s``
in shorthand) option, it can be used as a secure SPDY proxy with a
proxy (e.g., Squid) in the backend. With the ``--client-proxy``
(``-p``) option, it acts like an ordinary forward proxy but expects a
secure SPDY proxy in the backend. Thus it becomes an adapter to a
secure SPDY proxy for clients which do not support one. Another
notable operation mode is ``--spdy-bridge``, which just relays
SPDY/HTTPS traffic to the backend in SPDY. The following table
summarizes the operation modes.
================== ========== ======= =============
Mode option        Frontend   Backend Note
================== ========== ======= =============
default            SPDY/HTTPS HTTP    Reverse proxy
``--spdy-proxy``   SPDY/HTTPS HTTP    SPDY proxy
``--spdy-bridge``  SPDY/HTTPS SPDY
``--client``       HTTP       SPDY
``--client-proxy`` HTTP       SPDY    Forward proxy
================== ========== ======= =============
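As a rough illustration, the operation modes map to invocations like
the following. The hostnames, ports and file names are placeholders;
only the flags come from the option reference below.

```shell
# Default mode: reverse proxy; SPDY/HTTPS frontend, plain HTTP backend
$ shrpx -f 0.0.0.0,3000 -b 127.0.0.1,80 server.key server.crt

# Secure SPDY proxy (-s) with an HTTP proxy such as Squid in the backend
$ shrpx -s -b 127.0.0.1,3128 server.key server.crt

# Forward-proxy adapter (-p) for a secure SPDY proxy in the backend;
# no key/cert arguments are required in this mode
$ shrpx -p -b spdyproxy.example.org,443
```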
``shrpx`` supports a configuration file. See the ``--conf`` option and
the sample configuration file ``shrpx.conf.sample``.
We briefly describe the architecture of ``shrpx`` here. It has a
dedicated thread which listens on server sockets. When it accepts an
incoming connection, it passes the file descriptor of the incoming
connection to one of the worker threads. Each worker thread has its
own event loop and can handle many connections using non-blocking I/O.
The number of worker threads can be specified using a command-line
option. The `libevent <http://libevent.org/>`_ library is used to
handle low-level network I/O.
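The accept-and-dispatch structure described above can be sketched as a
self-contained toy in C with pthreads. This is a model of the pattern,
not shrpx's actual code: plain integers stand in for accepted file
descriptors, and a small mutex-protected queue stands in for the real
handoff mechanism.

```c
#include <pthread.h>
#include <stddef.h>

#define QUEUE_CAP 128

/* A tiny blocking FIFO: the acceptor thread pushes client fds and
 * worker threads pop them.  This models the handoff in spirit only;
 * shrpx's real implementation differs. */
typedef struct {
  int items[QUEUE_CAP];
  size_t head, tail, count;
  pthread_mutex_t mu;
  pthread_cond_t nonempty;
} fd_queue;

static void queue_init(fd_queue *q) {
  q->head = q->tail = q->count = 0;
  pthread_mutex_init(&q->mu, NULL);
  pthread_cond_init(&q->nonempty, NULL);
}

/* Called from the acceptor thread (assumes the queue never fills). */
static void queue_push(fd_queue *q, int fd) {
  pthread_mutex_lock(&q->mu);
  q->items[q->tail] = fd;
  q->tail = (q->tail + 1) % QUEUE_CAP;
  q->count++;
  pthread_cond_signal(&q->nonempty);
  pthread_mutex_unlock(&q->mu);
}

/* Called from worker threads; blocks until an fd is available. */
static int queue_pop(fd_queue *q) {
  pthread_mutex_lock(&q->mu);
  while (q->count == 0)
    pthread_cond_wait(&q->nonempty, &q->mu);
  int fd = q->items[q->head];
  q->head = (q->head + 1) % QUEUE_CAP;
  q->count--;
  pthread_mutex_unlock(&q->mu);
  return fd;
}

/* Worker loop: pop "connections" until the -1 sentinel; summing the
 * fake fds lets a caller check every connection was handled once. */
static void *worker(void *arg) {
  fd_queue *q = arg;
  long handled = 0;
  for (;;) {
    int fd = queue_pop(q);
    if (fd < 0)
      break;            /* sentinel: shut this worker down */
    handled += fd;      /* stand-in for serving the connection */
  }
  return (void *)handled;
}
```

In the real proxy each worker runs a libevent loop and multiplexes many
connections; here a worker just "handles" each fake fd by accumulating
it so the handoff can be checked.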
Here is the command-line options::
$ src/shrpx -h
Usage: shrpx [-Dh] [-s|--client|-p] [-b <HOST,PORT>]
[-f <HOST,PORT>] [-n <CORES>] [-c <NUM>] [-L <LEVEL>]
[OPTIONS...] [<PRIVATE_KEY> <CERT>]
A reverse proxy for SPDY/HTTPS.
Positional arguments:
<PRIVATE_KEY> Set path to server's private key. Required
unless either -p or --client is specified.
<CERT> Set path to server's certificate. Required
unless either -p or --client is specified.
OPTIONS:
Connections:
-b, --backend=<HOST,PORT>
Set backend host and port.
Default: '127.0.0.1,80'
-f, --frontend=<HOST,PORT>
Set frontend host and port.
Default: '0.0.0.0,3000'
--backlog=<NUM> Set listen backlog size.
Default: 256
--backend-ipv4 Resolve backend hostname to IPv4 address
only.
--backend-ipv6 Resolve backend hostname to IPv6 address
only.
Performance:
-n, --workers=<CORES>
Set the number of worker threads.
Default: 1
--read-rate=<RATE> Set maximum average read rate on frontend
connection. Setting 0 to this option means
read rate is unlimited.
Default: 1048576
--read-burst=<SIZE>
Set maximum read burst size on frontend
connection. Setting 0 to this option means
read burst size is unlimited.
Default: 4194304
--write-rate=<RATE>
Set maximum average write rate on frontend
connection. Setting 0 to this option means
write rate is unlimited.
Default: 0
--write-burst=<SIZE>
Set maximum write burst size on frontend
connection. Setting 0 to this option means
write burst size is unlimited.
Default: 0
Timeout:
--frontend-spdy-read-timeout=<SEC>
Specify read timeout for SPDY frontend
connection. Default: 180
--frontend-read-timeout=<SEC>
Specify read timeout for non-SPDY frontend
connection. Default: 180
--frontend-write-timeout=<SEC>
Specify write timeout for both SPDY and
non-SPDY frontends.
connection. Default: 60
--backend-read-timeout=<SEC>
Specify read timeout for backend connection.
Default: 900
--backend-write-timeout=<SEC>
Specify write timeout for backend
connection. Default: 60
--backend-keep-alive-timeout=<SEC>
Specify keep-alive timeout for backend
connection. Default: 60
--backend-http-proxy-uri=<URI>
Specify proxy URI in the form
http://[<USER>:<PASS>@]<PROXY>:<PORT>. If
a proxy requires authentication, specify
<USER> and <PASS>. Note that they must be
properly percent-encoded. This proxy is used
when the backend connection is SPDY. First,
make a CONNECT request to the proxy and
it connects to the backend on behalf of
shrpx. This forms tunnel. After that, shrpx
performs SSL/TLS handshake with the
downstream through the tunnel. The timeouts
when connecting and making CONNECT request
can be specified by --backend-read-timeout
and --backend-write-timeout options.
SSL/TLS:
--ciphers=<SUITE> Set allowed cipher list. The format of the
string is described in OpenSSL ciphers(1).
If this option is used, --honor-cipher-order
is implicitly enabled.
--honor-cipher-order
Honor server cipher order, giving the
ability to mitigate BEAST attacks.
-k, --insecure When used with -p or --client, don't verify
backend server's certificate.
--cacert=<PATH> When used with -p or --client, set path to
trusted CA certificate file.
The file must be in PEM format. It can
contain multiple certificates. If the
linked OpenSSL is configured to load system
wide certificates, they are loaded
at startup regardless of this option.
--private-key-passwd-file=<FILEPATH>
Path to file that contains password for the
server's private key. If none is given and
the private key is password protected it'll
be requested interactively.
--subcert=<KEYPATH>:<CERTPATH>
Specify additional certificate and private
key file. Shrpx will choose certificates
based on the hostname indicated by client
using TLS SNI extension. This option can be
used multiple times.
--backend-tls-sni-field=<HOST>
Explicitly set the content of the TLS SNI
extension. This will default to the backend
HOST name.
--dh-param-file=<PATH>
Path to file that contains DH parameters in
PEM format. Without this option, DHE cipher
suites are not available.
--verify-client Require and verify client certificate.
--verify-client-cacert=<PATH>
Path to file that contains CA certificates
to verify client certificate.
The file must be in PEM format. It can
contain multiple certificates.
--client-private-key-file=<PATH>
Path to file that contains client private
key used in backend client authentication.
--client-cert-file=<PATH>
Path to file that contains client
certificate used in backend client
authentication.
--tls-proto-list=<LIST>
Comma delimited list of SSL/TLS protocol to
be enabled.
The following protocols are available:
TLSv1.2, TLSv1.1, TLSv1.0, SSLv3
The name matching is done in case-insensitive
manner.
The parameter must be delimited by a single
comma only and any white spaces are treated
as a part of protocol string.
Default: TLSv1.2,TLSv1.1,TLSv1.0
SPDY:
-c, --spdy-max-concurrent-streams=<NUM>
Set the maximum number of the concurrent
streams in one SPDY session.
Default: 100
--frontend-spdy-window-bits=<N>
Sets the per-stream initial window size of
SPDY frontend connection to 2**<N>.
Default: 16
--frontend-spdy-connection-window-bits=<N>
Sets the per-connection window size of SPDY
frontend connection to 2**<N>.
Default: 16
--frontend-spdy-no-tls
Disable SSL/TLS on frontend SPDY
connections. SPDY protocol must be specified
using --frontend-spdy-proto. This option
also disables frontend HTTP/1.1.
--frontend-spdy-proto
Specify SPDY protocol used in frontend
connection if --frontend-spdy-no-tls is
used. Default: spdy/3.1
--backend-spdy-window-bits=<N>
Sets the per-stream initial window size of
SPDY backend connection to 2**<N>.
Default: 16
--backend-spdy-connection-window-bits=<N>
Sets the per-connection window size of SPDY
backend connection to 2**<N>.
Default: 16
--backend-spdy-no-tls
Disable SSL/TLS on backend SPDY connections.
SPDY protocol must be specified using
--backend-spdy-proto
--backend-spdy-proto
Specify SPDY protocol used in backend
connection if --backend-spdy-no-tls is used.
Default: spdy/3.1
Mode:
-s, --spdy-proxy Enable secure SPDY proxy mode.
--spdy-bridge Communicate with the backend in SPDY. Thus
the incoming SPDY/HTTPS connections are
converted to SPDY connection and relayed to
the backend. See --backend-http-proxy-uri
option if you are behind the proxy and want
to connect to the outside SPDY proxy.
--client Instead of accepting SPDY/HTTPS connection,
accept HTTP connection and communicate with
backend server in SPDY. To use shrpx as
a forward proxy, use -p option instead.
-p, --client-proxy Like --client option, but it also requires
the request path from frontend must be
an absolute URI, suitable for use as a
forward proxy.
Logging:
-L, --log-level=<LEVEL>
Set the severity level of log output.
INFO, WARNING, ERROR and FATAL.
Default: WARNING
--accesslog Print simple accesslog to stderr.
--syslog Send log messages to syslog.
--syslog-facility=<FACILITY>
Set syslog facility.
Default: daemon
Misc:
--add-x-forwarded-for
Append X-Forwarded-For header field to the
downstream request.
--no-via Don't append to Via header field. If Via
header field is received, it is left
unaltered.
-D, --daemon Run in a background. If -D is used, the
current working directory is changed to '/'.
--pid-file=<PATH> Set path to save PID of this program.
--user=<USER> Run this program as USER. This option is
intended to be used to drop root privileges.
--conf=<PATH> Load configuration from PATH.
Default: /etc/shrpx/shrpx.conf
-v, --version Print version and exit.
-h, --help Print this help and exit.
For those of you who are curious, ``shrpx`` is an abbreviation of
"Spdy/https to Http Reverse ProXy".
Without any of the ``-s``, ``--spdy-bridge``, ``-p`` and ``--client``
options, ``shrpx`` works as a reverse proxy to the backend server::
Client <-- (SPDY, HTTPS) --> Shrpx <-- (HTTP) --> Web Server
[reverse proxy]
With the ``-s`` option, it works as a secure SPDY proxy::
Client <-- (SPDY, HTTPS) --> Shrpx <-- (HTTP) --> Proxy
[SPDY proxy] (e.g., Squid)
The ``Client`` in the above needs to be configured to use shrpx as a
secure SPDY proxy.
At the time of this writing, Chrome is the only browser which supports
secure SPDY proxy. One way to configure Chrome to use a secure SPDY
proxy is to create a proxy.pac script like this::
function FindProxyForURL(url, host) {
return "HTTPS SERVERADDR:PORT";
}
``SERVERADDR`` and ``PORT`` are the hostname/address and port of the
machine shrpx is running on. Please note that Chrome requires a valid
certificate for the secure SPDY proxy.
Then run chrome with the following arguments::
$ google-chrome --proxy-pac-url=file:///path/to/proxy.pac --use-npn
.. note::
At the time of this writing, Chrome 24 limits the maximum
concurrent connections to the proxy to 32. And due to the
limitation of socket pool handling in Chrome, it is quickly filled
up if SPDY proxy is used and many SPDY sessions are established. If
it reaches the limit, the new connections are simply blocked until
existing connections are timed out. (See `Chrome Issue 92244
<https://code.google.com/p/chromium/issues/detail?id=92244>`_). The
  workaround is to raise the maximum number of connections to a high
  value, say 99, which is the highest allowed. To do this, you need to
  change the so-called
Policy setup. See `Policy Templates
<http://dev.chromium.org/administrators/policy-templates>`_ for
  details on how to change the Policy setup on the platform you use. The
  Policy name we are looking for is `MaxConnectionsPerProxy
  <http://dev.chromium.org/administrators/policy-list-3#MaxConnectionsPerProxy>`_.
  For example, if you are using Linux, follow the instructions
described in `Linux Quick Start
<http://dev.chromium.org/administrators/linux-quick-start>`_ and
create ``/etc/opt/chrome/policies/managed/test_policy.json`` file
with the following content and restart Chrome::
{
"MaxConnectionsPerProxy" :99
}
With ``--spdy-bridge``, it accepts SPDY/HTTPS connections and
communicates with the backend in SPDY::
Client <-- (SPDY, HTTPS) --> Shrpx <-- (SPDY) --> Web or SPDY Proxy etc
[SPDY bridge] (e.g., shrpx -s)
With the ``-p`` option, it works as a forward proxy and expects that the
backend is a secure SPDY proxy::
Client <-- (HTTP) --> Shrpx <-- (SPDY) --> Secure SPDY Proxy
[forward proxy] (e.g., shrpx -s or node-spdyproxy)
The ``Client`` needs to be configured to use shrpx as a forward proxy.
In this configuration, clients which do not support secure SPDY proxies
can still use one through ``shrpx``. By putting ``shrpx`` on the same box
or in the same network as the clients, this configuration brings the
benefits of a secure SPDY proxy to those clients. Since the maximum
number of connections per server still applies to the proxy connection,
the performance gain is not obvious. For example, if the maximum number
of connections per server is 6, after sending 6 requests to the proxy
the client blocks further requests, which defeats the performance gain a
SPDY connection could provide. For clients which can tweak these values
(e.g., ``network.http.max-connections-per-server`` in Firefox),
increasing them may improve performance.
With the ``--client`` option, it works as a reverse proxy and expects that
the backend is a SPDY-enabled Web server::
Client <-- (HTTP) --> Shrpx <-- (SPDY) --> Web Server
[reverse proxy]
For the operation modes which talk to the backend in SPDY, the backend
connections can be tunneled through an HTTP proxy. The proxy is specified
using the ``--backend-http-proxy-uri`` option. The following figure
illustrates the example of ``--spdy-bridge`` and
``--backend-http-proxy-uri`` option to talk to the outside SPDY proxy
through HTTP proxy::
Client <-- (SPDY, HTTPS) --> Shrpx <-- (SPDY) --
[SPDY bridge]
--===================---> SPDY Proxy
(HTTP proxy tunnel) (e.g., shrpx -s)
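For instance, assuming an HTTP proxy listening at ``proxy.example.org:8080``
(a placeholder address), the SPDY bridge could be told to tunnel its backend
connection through that proxy like so::

    $ shrpx --spdy-bridge --backend-http-proxy-uri=http://proxy.example.org:8080/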
Examples
--------
The *examples* directory contains a simple SPDY client implementation
in C.
Python-Spdylay - Python Wrapper
-------------------------------
The library comes with Python wrapper ``python-spdylay``. See
``python`` directory.
Running on Different Clusters
=============================
PaddlePaddle can build distributed training jobs on a variety of distributed computing platforms, including:

- `Kubernetes <http://kubernetes.io>`_ A container-cluster scheduling framework open-sourced by Google, offering a complete cluster solution for large-scale production environments.
- `OpenMPI <https://www.open-mpi.org>`_ A mature high-performance parallel computing framework.
- `Fabric <http://www.fabfile.org>`_ A cluster management tool. ``Fabric`` can be used to write scripts that submit and manage cluster jobs.

For each of these cluster platforms, the methods for starting and stopping cluster jobs are described separately. All of the examples can be found in `cluster_train_v2 <https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/scripts/cluster_train_v2>`_.

When training on a distributed computing platform, once the job has been scheduled onto the cluster, the platform usually provides the parameters needed to run the job, such as the node ID, IP address, and number of job nodes, through APIs or environment variables.
.. toctree::
:maxdepth: 1
fabric_cn.md
openmpi_cn.md
k8s_cn.md
k8s_distributed_cn.md
k8s_aws_cn.md
.. SPDX-License-Identifier: CC-BY-SA-4.0
.. Copyright (C) 2019 Vijay Kumar Banerjee
beagle
======
This BSP supports four variants, `beagleboardorig`, `beagleboardxm`,
`beaglebonewhite` and `beagleboneblack`. The basic hardware initialization is
not performed by the BSP. A boot loader with device tree support must be used
to start the BSP, e.g., U-Boot.
TODO(These drivers are present but not documented yet):
* Clock driver.
* Network Interface Driver.
* SDcard driver.
* GPIO Driver.
* Console driver.
* PWM Driver.
* RTC driver.
Boot via U-Boot
---------------
To boot via U-Boot, the ELF file must be converted to a U-Boot image as shown below:
.. code-block:: none
arm-rtems5-objcopy hello.exe -O binary app.bin
gzip -9 app.bin
mkimage -A arm -O linux -T kernel -a 0x80000000 -e 0x80000000 -n RTEMS -d app.bin.gz rtems-app.img
Getting the Device Tree Blob
----------------------------
The Device Tree Blob (DTB) is needed to load the device tree while starting up
the kernel. We build the DTB from the FreeBSD source tree matching the commit
hash of the libbsd HEAD of freebsd-org. For example, if the HEAD is at
``19a6ceb89dbacf74697d493e48c388767126d418``,
then the right Device Tree Source (DTS) file is:
https://github.com/freebsd/freebsd/blob/19a6ceb89dbacf74697d493e48c388767126d418/sys/gnu/dts/arm/am335x-boneblack.dts
Please refer to the :ref:`DeviceTree` to know more about building and applying
the Device Trees.
Writing the uEnv.txt file
-------------------------
The uEnv.txt file is needed to set environment variables before the kernel is
loaded. Each line is a U-Boot command that U-Boot executes during start-up.
Add the following to a file named uEnv.txt:
.. code-block:: none
setenv bootdelay 5
uenvcmd=run boot
boot=fatload mmc 0 0x80800000 rtems-app.img ; fatload mmc 0 0x88000000 am335x-boneblack.dtb ; bootm 0x80800000 - 0x88000000
I2C Driver
----------
For registering the ``/dev/i2c-0`` device, a wrapper function
``bbb_register_i2c_0()`` is provided; similarly, ``bbb_register_i2c_1()`` and
``bbb_register_i2c_2()`` register ``i2c-1`` and ``i2c-2`` respectively.
For registering an I2C device with a custom path (say ``/dev/i2c-3``) the
function ``am335x_i2c_bus_register()`` has to be used.
The function prototype is given below:
.. code-block:: C
int am335x_i2c_bus_register(
const char *bus_path,
uintptr_t register_base,
uint32_t input_clock,
rtems_vector_number irq
);
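As a usage sketch, registering a third bus might look like the following
(the register base, input clock, and interrupt vector are placeholders for
illustration only; consult the AM335x reference manual for the real values):

.. code-block:: C

    int rv = am335x_i2c_bus_register(
        "/dev/i2c-3",
        0x4819C000, /* placeholder register base */
        100000,     /* placeholder input clock */
        30          /* placeholder interrupt vector */
    );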
SPI Driver
----------
The SPI device `/dev/spi-0` can be registered with ``bbb_register_spi_0()``
For registering with a custom path, the ``bsp_register_spi()`` can be used.
The function prototype is given below:
.. code-block:: C
rtems_status_code bsp_register_spi(
const char *bus_path,
uintptr_t register_base,
rtems_vector_number irq
);
Security Policy
===============
Supported Versions
------------------
This project has a rolling release, and we only support the latest state of the ``master`` branch.
It is ultimately meant to be run as a centralized service by Emissions API.
Reporting a Vulnerability
-------------------------
If you find a security vulnerability, please report it by sending a mail to security@emissions-api.org.
We will discuss the problem internally and, if necessary, release a patched version as soon as possible.
=======
IOOS QC
=======
.. image:: https://anaconda.org/conda-forge/ioos_qc/badges/version.svg
:target: https://anaconda.org/conda-forge/ioos_qc
:alt: conda_forge_package
This is a backport of the ioos_qc package to work with Python 2.7.
The original package description follows:
Collection of utilities, scripts and tests to assist in automated
quality assurance and quality control for oceanographic datasets and
observing systems.
`Code <https://github.com/oceanobservatories/ioos_qc>`_ | `Issues <https://github.com/oceanobservatories/ioos_qc/issues>`_ | `Documentation <https://ioos.github.io/ioos_qc/>`_
Internal API
============
.. automodule:: halogen.types
:members:
.. automodule:: halogen.schema
:members:
.. automodule:: halogen.exceptions
:members:
.. automodule:: halogen.validators
:members:
.. include:: ../references.txt
.. _maps:
***************
maps - Sky maps
***************
.. currentmodule:: gammapy.maps
Introduction
============
`gammapy.maps` contains classes for representing pixelized data structures with
at least two spatial dimensions representing coordinates on a sphere (e.g. an
image in celestial coordinates). These classes support an arbitrary number of
non-spatial dimensions and can represent images (2D), cubes (3D), or hypercubes
(4+D). Two pixelization schemes are supported:
* WCS : Projection onto a 2D cartesian grid following the conventions
of the World Coordinate System (WCS). Pixels are square in projected
coordinates and as such are not equal area in spherical coordinates.
* HEALPix : Hierarchical Equal Area Iso Latitude pixelation of the
sphere. Pixels are equal area but have irregular shapes.
`gammapy.maps` is organized around two data structures: *geometry* classes
inheriting from `~MapGeom` and *map* classes inheriting from `~Map`. A geometry
defines the map boundaries, pixelization scheme, and provides methods for
converting to/from map and pixel coordinates. A map owns a `~MapGeom` instance
as well as a data array containing map values. Where possible it is recommended
to use the abstract `~Map` interface for accessing or updating the contents of a
map as this allows algorithms to be used interchangeably with different map
representations. The following reviews methods of the abstract map interface.
Documentation specific to WCS- and HEALPix-based maps is provided in
:doc:`hpxmap` and :doc:`wcsmap`.
Getting Started
===============
All map objects have an abstract interface provided through the methods of the
`~Map`. These methods can be used for accessing and manipulating the contents of
a map without reference to the underlying data representation (e.g. whether a
map uses WCS or HEALPix pixelization). For applications which do depend on the
specific representation one can also work directly with the classes derived from
`~Map`. In the following we review some of the basic methods for working with
map objects.
Constructing with Factory Methods
---------------------------------
The `~Map` class provides a `~Map.create` factory method to facilitate creating
an empty map object from scratch. The ``map_type`` argument can be used to
control the pixelization scheme (WCS or HPX) and whether the map internally uses
a sparse representation of the data.
.. code:: python
from gammapy.maps import Map
from astropy.coordinates import SkyCoord
position = SkyCoord(0.0, 5.0, frame='galactic', unit='deg')
# Create a WCS Map
m_wcs = Map.create(binsz=0.1, map_type='wcs', skydir=position, width=10.0)
# Create a HPX Map
m_hpx = Map.create(binsz=0.1, map_type='hpx', skydir=position, width=10.0)
Higher dimensional map objects (cubes and hypercubes) can be constructed by
passing a list of `~MapAxis` objects for non-spatial dimensions with the
``axes`` parameter:
.. code:: python
from gammapy.maps import Map, MapAxis
from astropy.coordinates import SkyCoord
position = SkyCoord(0.0, 5.0, frame='galactic', unit='deg')
energy_axis = MapAxis.from_bounds(100., 1E5, 12, interp='log', name='energy', unit='GeV')
# Create a WCS Map
m_wcs = Map.create(binsz=0.1, map_type='wcs', skydir=position, width=10.0,
axes=[energy_axis])
# Create a HPX Map
m_hpx = Map.create(binsz=0.1, map_type='hpx', skydir=position, width=10.0,
axes=[energy_axis])
Multi-resolution maps (maps with a different pixel size or geometry in each
image plane) can be constructed by passing a vector argument for any of the
geometry parameters. This vector must have the same shape as the non-spatial
dimensions of the map. The following example demonstrates creating an energy
cube with a pixel size proportional to the Fermi-LAT PSF:
.. code:: python
import numpy as np
from gammapy.maps import Map, MapAxis
from astropy.coordinates import SkyCoord
position = SkyCoord(0.0, 5.0, frame='galactic', unit='deg')
energy_axis = MapAxis.from_bounds(100., 1E5, 12, interp='log', name='energy', unit='GeV')
binsz = np.sqrt((3.0*(energy_axis.center/100.)**-0.8)**2 + 0.1**2)
# Create a WCS Map
m_wcs = Map.create(binsz=binsz, map_type='wcs', skydir=position, width=10.0,
axes=[energy_axis])
# Create a HPX Map
m_hpx = Map.create(binsz=binsz, map_type='hpx', skydir=position, width=10.0,
axes=[energy_axis])
.. _mapslicing:
Indexing and Slicing
--------------------
All map objects feature a `~Map.slice_by_idx()` method, which can be used to
slice and index non-spatial axes of the map to create arbitrary sub-maps. The
method accepts a `dict` specifying the axis name and corresponding integer index
or `slice` objects. When indexing an axis with an integer the corresponding axis
is dropped from the returned sub-map. To keep the axis (with length 1) in the
returned sub-map use a `slice` object of length one. This behaviour is
equivalent to regular numpy array indexing. The following example demonstrates
the use of `~Map.slice_by_idx()` on a map with a time and energy axes:
.. code:: python
import numpy as np
from gammapy.maps import Map, MapAxis
from astropy.coordinates import SkyCoord
position = SkyCoord(0.0, 5.0, frame='galactic', unit='deg')
energy_axis = MapAxis.from_bounds(100., 1E5, 12, interp='log', unit='GeV', name='energy')
time_axis = MapAxis.from_bounds(0., 12, 12, interp='lin', unit='h', name='time')
# Create a WCS Map
m_wcs = Map.create(binsz=0.02, map_type='wcs', skydir=position, width=10.0,
axes=[energy_axis, time_axis])
# index first image plane of the energy axes and third from the time axis
m_wcs.slice_by_idx({'energy': 0, 'time': 2})
# index first image plane of the energy axes and keep time axis unchanged
m_wcs.slice_by_idx({'energy': 0})
# slice first three images of the energy axis at a fixed time
m_wcs.slice_by_idx({'energy': slice(0, 3), 'time': 0})
# slice first three images of the energy axis as well as time axis
m_wcs.slice_by_idx({'energy': slice(0, 3), 'time': slice(0, 3)})
Accessor Methods
----------------
All map objects have a set of accessor methods provided through the abstract
`~Map` class. These methods can be used to access or update the contents of the
map irrespective of its underlying representation. Four types of accessor
methods are provided:
* ``get`` : Return the value of the map at the pixel containing the
given coordinate (`~Map.get_by_idx`, `~Map.get_by_pix`, `~Map.get_by_coord`).
* ``interp`` : Interpolate or extrapolate the value of the map at an arbitrary
coordinate (see also `Interpolation`_).
* ``set`` : Set the value of the map at the pixel containing the
given coordinate (`~Map.set_by_idx`, `~Map.set_by_pix`, `~Map.set_by_coord`).
* ``fill`` : Increment the value of the map at the pixel containing
the given coordinate with a unit weight or the value in the optional
``weights`` argument (`~Map.fill_by_idx`, `~Map.fill_by_pix`,
`~Map.fill_by_coord`).
Accessor methods accept as their first argument a coordinate tuple containing
scalars, lists, or numpy arrays with one tuple element for each dimension of the
map. ``coord`` methods optionally support a `dict` or `~MapCoord` argument.
When using tuple input the first two elements in the tuple should be longitude
and latitude followed by one element for each non-spatial dimension. Map
coordinates can be expressed in one of three coordinate systems:
* ``idx`` : Pixel indices. These are explicit (integer) pixel indices into the
map.
* ``pix`` : Coordinates in pixel space. Pixel coordinates are continuous values
  defined on the interval [0, N-1] where N is the number of pixels along a given
  map dimension, with pixel centers at integer values. For methods that reference
  a discrete pixel, pixel coordinates will be rounded to the nearest pixel index
  and passed to the corresponding ``idx`` method.
* ``coord`` : The true map coordinates including angles on the sky (longitude
and latitude). This coordinate system supports three coordinate
representations: `tuple`, `dict`, and `~MapCoord`. The tuple representation
should contain longitude and latitude in degrees followed by one coordinate
array for each non-spatial dimension.
The coordinate system accepted by a given accessor method can be inferred from
the suffix of the method name (e.g. `~Map.get_by_idx`). The following
demonstrates how one can access the same pixels of a WCS map using each of the
three coordinate systems:
.. code:: python
from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='wcs', width=10.0)
vals = m.get_by_idx( ([49,50],[49,50]) )
vals = m.get_by_pix( ([49.0,50.0],[49.0,50.0]) )
vals = m.get_by_coord( ([-0.05,-0.05],[0.05,0.05]) )
Coordinate arguments obey normal numpy broadcasting rules. The coordinate tuple
may contain any combination of scalars, lists or numpy arrays as long as they
have compatible shapes. For instance a combination of scalar and vector
arguments can be used to perform an operation along a slice of the map at a
fixed value along that dimension. Multi-dimensional arguments can be used to
broadcast a given operation across a grid of coordinate values.
.. code:: python
    import numpy as np
    from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='wcs', width=10.0)
coords = np.linspace(-4.0, 4.0, 9)
# Equivalent calls for accessing value at pixel (49,49)
vals = m.get_by_idx( (49,49) )
vals = m.get_by_idx( ([49],[49]) )
vals = m.get_by_idx( (np.array([49]), np.array([49])) )
# Retrieve map values along latitude at fixed longitude=0.0
vals = m.get_by_coord( (0.0, coords) )
# Retrieve map values on a 2D grid of latitude/longitude points
vals = m.get_by_coord( (coords[None,:], coords[:,None]) )
# Set map values along slice at longitude=0.0 to twice their existing value
m.set_by_coord((0.0, coords), 2.0*m.get_by_coord((0.0, coords)))
The ``set`` and ``fill`` methods can both be used to set pixel values. The
following demonstrates how one can set pixel values:
.. code:: python
from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='wcs', width=10.0)
m.set_by_coord(([-0.05, -0.05], [0.05, 0.05]), [0.5, 1.5])
m.fill_by_coord( ([-0.05, -0.05], [0.05, 0.05]), weights=[0.5, 1.5])
Interface with `~MapCoord` and `~astropy.coordinates.SkyCoord`
--------------------------------------------------------------
The ``coord`` accessor methods accept `dict`, `~MapCoord`, and
`~astropy.coordinates.SkyCoord` arguments in addition to the standard `tuple` of
`~numpy.ndarray` argument. When using a `tuple` argument a
`~astropy.coordinates.SkyCoord` can be used instead of longitude and latitude
arrays. The coordinate frame of the `~astropy.coordinates.SkyCoord` will be
transformed to match the coordinate system of the map.
.. code:: python
import numpy as np
from astropy.coordinates import SkyCoord
from gammapy.maps import Map, MapCoord, MapAxis
lon = [0, 1]
lat = [1, 2]
energy = [100, 1000]
energy_axis = MapAxis.from_bounds(100, 1E5, 12, interp='log', name='energy')
skycoord = SkyCoord(lon, lat, unit='deg', frame='galactic')
m = Map.create(binsz=0.1, map_type='wcs', width=10.0,
coordsys='GAL', axes=[energy_axis])
m.set_by_coord((skycoord, energy), [0.5, 1.5])
m.get_by_coord((skycoord, energy))
A `~MapCoord` or `dict` argument can be used to interact with a map object
without reference to the axis ordering of the map geometry:
.. code:: python
coord = MapCoord.create(dict(lon=lon, lat=lat, energy=energy))
m.set_by_coord(coord, [0.5, 1.5])
m.get_by_coord(coord)
m.set_by_coord(dict(lon=lon, lat=lat, energy=energy), [0.5, 1.5])
m.get_by_coord(dict(lon=lon, lat=lat, energy=energy))
However, when using the named axis interface, the axis name string (e.g. as given
by `MapAxis.name`) must match the name given in the method argument. The two
spatial axes must always be named ``lon`` and ``lat``.
.. _mapcoord:
MapCoord
--------
`MapCoord` is an N-dimensional coordinate object that stores both spatial and
non-spatial coordinates and is accepted by all ``coord`` methods. A `~MapCoord`
can be created with or without explicitly named axes with `MapCoord.create`.
Axes of a `MapCoord` can be accessed by index, name, or attribute. A `MapCoord`
without explicit axis names can be created by calling `MapCoord.create` with a
`tuple` argument:
.. code:: python
import numpy as np
from astropy.coordinates import SkyCoord
from gammapy.maps import MapCoord
lon = [0.0, 1.0]
lat = [1.0, 2.0]
energy = [100, 1000]
skycoord = SkyCoord(lon, lat, unit='deg', frame='galactic')
# Create a MapCoord from a tuple (no explicit axis names)
c = MapCoord.create((lon, lat, energy))
print(c[0], c['lon'], c.lon)
print(c[1], c['lat'], c.lat)
print(c[2], c['axis0'])
# Create a MapCoord from a tuple + SkyCoord (no explicit axis names)
c = MapCoord.create((skycoord, energy))
print(c[0], c['lon'], c.lon)
print(c[1], c['lat'], c.lat)
print(c[2], c['axis0'])
The first two elements of the tuple argument must contain longitude and
latitude. Non-spatial axes are assigned a default name ``axis{I}`` where
``{I}`` is the index of the non-spatial dimension. `MapCoord` objects created
without named axes must have the same axis ordering as the map geometry.
A `MapCoord` with named axes can be created by calling `MapCoord.create` with a
`dict` or `~collections.OrderedDict`:
.. code:: python
# Create a MapCoord from a dict
c = MapCoord.create(dict(lon=lon, lat=lat, energy=energy))
print(c[0], c['lon'], c.lon)
print(c[1], c['lat'], c.lat)
print(c[2], c['energy'])
# Create a MapCoord from an OrderedDict
from collections import OrderedDict
c = MapCoord.create(OrderedDict([('energy',energy), ('lon',lon), ('lat', lat)]))
print(c[0], c['energy'])
print(c[1], c['lon'], c.lon)
print(c[2], c['lat'], c.lat)
# Create a MapCoord from a dict + SkyCoord
c = MapCoord.create(dict(skycoord=skycoord, energy=energy))
print(c[0], c['lon'], c.lon)
print(c[1], c['lat'], c.lat)
print(c[2], c['energy'])
Spatial axes must be named ``lon`` and ``lat``. `MapCoord` objects created with
named axes do not need to have the same axis ordering as the map geometry.
However the name of the axis must match the name of the corresponding map
geometry axis.
Interpolation
-------------
Maps support interpolation via the `~Map.interp_by_coord` and
`~Map.interp_by_pix` methods. Currently the following interpolation methods are
supported:
* ``nearest`` : Return value of nearest pixel (no interpolation).
* ``linear`` : Interpolation with first order polynomial. This is the
only interpolation method that is supported for all map types.
* ``quadratic`` : Interpolation with second order polynomial.
* ``cubic`` : Interpolation with third order polynomial.
Note that ``quadratic`` and ``cubic`` interpolation are currently only supported
for WCS-based maps with regular geometry (e.g. 2D or ND with the same geometry
in every image plane). ``linear`` and higher order interpolation by pixel
coordinates is only supported for WCS-based maps.
.. code:: python
from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='wcs', width=10.0)
m.interp_by_coord(([-0.05, -0.05], [0.05, 0.05]), interp='linear')
m.interp_by_coord(([-0.05, -0.05], [0.05, 0.05]), interp='cubic')
Projection
----------
The `~Map.reproject` method can be used to project a map onto a different
geometry. This can be used to convert between different WCS projections,
extract a cut-out of a map, or to convert between WCS and HPX map types. If the
projection geometry lacks non-spatial dimensions then the non-spatial dimensions
of the original map will be copied over to the projected map.
.. code:: python
from gammapy.maps import WcsNDMap, HpxGeom
m = WcsNDMap.read('$GAMMAPY_DATA/fermi_3fhl/gll_iem_v06_cutout.fits')
geom = HpxGeom.create(nside=8, coordsys='GAL')
# Convert LAT standard IEM to HPX (nside=8)
m_proj = m.reproject(geom)
m_proj.write('gll_iem_v06_hpx_nside8.fits')
.. _mapiter:
Iterating on a Map
------------------
Iterating over a map can be performed with the `~Map.iter_by_coord` and
`~Map.iter_by_pix` methods. These return an iterator that traverses the map
returning (value, coordinate) pairs with map and pixel coordinates,
respectively. The optional ``buffersize`` argument can be used to split the
iteration into chunks of a given size. The following example illustrates how
one can use this method to fill a map with a 2D Gaussian:
.. code:: python
import numpy as np
from astropy.coordinates import SkyCoord
from gammapy.maps import Map
m = Map.create(binsz=0.05, map_type='wcs', width=10.0)
for val, coord in m.iter_by_coord(buffersize=10000):
skydir = SkyCoord(coord[0],coord[1], unit='deg')
sep = skydir.separation(m.geom.center_skydir).deg
new_val = np.exp(-sep**2/2.0)
m.set_by_coord(coord, new_val)
For maps with non-spatial dimensions the `~Map.iter_by_image` method can be used
to loop over image slices. The image plane index ``idx`` is returned in data order,
so that the data array can be indexed directly. Here is an example of an in-place
convolution of an image using `astropy.convolution.convolve` to interpolate NaN
values:
.. code:: python
    import numpy as np
    from astropy.convolution import convolve
    from gammapy.maps import Map, MapAxis
axis1 = MapAxis([1, 10, 100], interp='log', name='energy')
axis2 = MapAxis([1, 2, 3], interp='lin', name='time')
m = Map.create(width=(5, 3), axes=[axis1, axis2], binsz=0.1)
m.data[:, :, 15:18, 20:25] = np.nan
for img, idx in m.iter_by_image():
kernel = np.ones((5, 5))
m.data[idx] = convolve(img, kernel)
assert not np.isnan(m.data).any()
FITS I/O
--------
Maps can be written to and read from a FITS file with the `~Map.write` and
`~Map.read` methods:
.. code:: python
from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='wcs', width=10.0)
m.write('file.fits', hdu='IMAGE')
m = Map.read('file.fits', hdu='IMAGE')
If the ``map_type`` argument is not given when calling `~Map.read`, a non-sparse
map object will be instantiated with the pixelization of the input HDU.
Maps can be serialized to a sparse data format by calling `~Map.write` with
``sparse=True``. This will write all non-zero pixels in the map to a data table
appropriate to the pixelization scheme.
.. code:: python
from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='wcs', width=10.0)
m.write('file.fits', hdu='IMAGE', sparse=True)
m = Map.read('file.fits', hdu='IMAGE', map_type='wcs')
Sparse maps have the same ``read`` and ``write`` methods with the exception that
they will be written to a sparse format by default:
.. code:: python
from gammapy.maps import Map
m = Map.create(binsz=0.1, map_type='hpx-sparse', width=10.0)
m.write('file.fits', hdu='IMAGE')
m = Map.read('file.fits', hdu='IMAGE', map_type='hpx-sparse')
By default files will be written to the *gamma-astro-data-format* specification
for sky maps (see `here
<http://gamma-astro-data-formats.readthedocs.io/en/latest/skymaps/index.html>`_).
The GADF format offers a number of enhancements over existing map formats such
as support for writing multi-resolution maps, sparse maps, and cubes with
different geometries to the same file. For backward compatibility with software
using other formats, the ``conv`` keyword option is provided to write a file
using a format other than the GADF format:
.. code:: python
from gammapy.maps import Map, MapAxis
energy_axis = MapAxis.from_bounds(100., 1E5, 12, interp='log')
m = Map.create(binsz=0.1, map_type='wcs', width=10.0,
axes=[energy_axis])
# Write a counts cube in a format compatible with the Fermi Science Tools
m.write('ccube.fits', conv='fgst-ccube')
Visualization
-------------
All map objects provide a ``plot`` method for generating a visualization of a
map. This method returns figure, axes, and image objects that can be used to
further tweak/customize the image.
.. code:: python
import matplotlib.pyplot as plt
from gammapy.maps import Map
m = Map.read("$GAMMAPY_DATA/fermi_2fhl/fermi_2fhl_gc.fits.gz")
m.plot(cmap='magma', add_cbar=True)
plt.show()
Examples
========
Creating a Counts Cube from an FT1 File
---------------------------------------
This example shows how to fill a counts cube from an FT1 file:
.. code:: python
from gammapy.data import EventList
from gammapy.maps import WcsGeom, WcsNDMap, MapAxis
energy_axis = MapAxis.from_bounds(10., 2E3, 12, interp='log', name='energy', unit='GeV')
m = WcsNDMap.create(binsz=0.1, width=10.0, skydir=(45.0,30.0),
coordsys='CEL', axes=[energy_axis])
events = EventList.read('$GAMMAPY_DATA/fermi_2fhl/2fhl_events.fits.gz')
m.fill_by_coord({'skycoord': events.radec, 'energy': events.energy})
m.write('ccube.fits', conv='fgst-ccube')
Generating a Cutout of a Model Cube
-----------------------------------
This example shows how to extract a cut-out of LAT galactic diffuse model cube
using the `~Map.cutout` method:
.. code:: python
from gammapy.maps import WcsGeom, WcsNDMap
from astropy import units as u
from astropy.coordinates import SkyCoord
m = WcsNDMap.read('$GAMMAPY_DATA/fermi_3fhl/gll_iem_v06_cutout.fits')
position = SkyCoord(0, 0, frame="galactic", unit="deg")
m_cutout = m.cutout(position=position, width=(5 * u.deg, 2 * u.deg))
m_cutout.write('cutout.fits', conv='fgst-template')
Using `gammapy.maps`
====================
The following :ref:`tutorials` show examples using ``gammapy.maps``:
* :gp-extra-notebook:`intro_maps`
* :gp-extra-notebook:`analysis_3d`
* :gp-extra-notebook:`simulate_3d`
* :gp-extra-notebook:`fermi_lat`
More detailed documentation on the WCS and HPX classes in `gammapy.maps` can be
found in the following sub-pages:
.. toctree::
:maxdepth: 1
hpxmap
wcsmap
Reference/API
=============
.. automodapi:: gammapy.maps
:include-all-objects:
===================
Jenkins Job Builder
===================
Jenkins Job Builder takes simple descriptions of Jenkins jobs in YAML format,
and uses them to configure Jenkins. You can keep your job descriptions in human
readable text format in a version control system to make changes and auditing
easier. It also has a flexible template system, so creating many similarly
configured jobs is easy.
To install::
$ sudo python setup.py install
Online documentation:
* http://ci.openstack.org/jenkins-job-builder/
Developers
==========
Bug report:
* https://storyboard.openstack.org/#!/project/723
Cloning:
* https://git.openstack.org/openstack-infra/jenkins-job-builder
Patches are submitted via Gerrit at:
* https://review.openstack.org/
Please do not submit GitHub pull requests, they will be automatically closed.
More details on how you can contribute are available on our wiki at:
* http://docs.openstack.org/infra/manual/developers.html
Writing a patch
===============
We ask that all code submissions be pep8_ and pyflakes_ clean. The
easiest way to do that is to run tox_ before submitting code for
review in Gerrit. It will run ``pep8`` and ``pyflakes`` in the same
manner as the automated test suite that will run on proposed
patchsets.
When creating new YAML components, please observe the following style
conventions:
* All YAML identifiers (including component names and arguments)
should be lower-case and multiple word identifiers should use
hyphens. E.g., "build-trigger".
* The Python functions that implement components should have the same
name as the YAML keyword, but should use underscores instead of
hyphens. E.g., "build_trigger".
This consistency will help users avoid simple mistakes when writing
YAML, as well as developers when matching YAML components to Python
implementation.
Installing without setup.py
===========================
For YAML support, you will need libyaml_ installed.
Mac OS X::
$ brew install libyaml
Then install the required python packages using pip_::
$ sudo pip install PyYAML python-jenkins
.. _pep8: https://pypi.python.org/pypi/pep8
.. _pyflakes: https://pypi.python.org/pypi/pyflakes
.. _tox: https://testrun.org/tox
.. _libyaml: http://pyyaml.org/wiki/LibYAML
.. _pip: https://pypi.python.org/pypi/pip
Middleware
==========
.. class:: StripWhitespace
A middleware that strips whitespace from html responses to make them smaller.
Doesn't strip newlines in order to not break any embeded javascript.
Just add ``handy.middleware.StripWhitespace`` to your ``MIDDLEWARE_CLASSES``.
| 24.25 | 81 | 0.728522 |
70de6cfaed4ef42f5d61dda023f77aa2f8ece7e5 | 158 | rst | reStructuredText | doc/intro.rst | jhermann/xmlschema | 4251afc21b9ce254edcb75d0b11fe936b206a7d0 | [
"MIT"
] | null | null | null | doc/intro.rst | jhermann/xmlschema | 4251afc21b9ce254edcb75d0b11fe936b206a7d0 | [
"MIT"
] | null | null | null | doc/intro.rst | jhermann/xmlschema | 4251afc21b9ce254edcb75d0b11fe936b206a7d0 | [
"MIT"
] | null | null | null | ************
Introduction
************
.. include:: ../README.rst
:start-after: xmlschema-introduction-start
:end-before: xmlschema-introduction-end
| 19.75 | 46 | 0.613924 |
ce9bc7d8c0c11aa15fd203cd48ad9d9cdee28d0c | 3,301 | rst | reStructuredText | docs/relay/nodes.rst | spacether/graphene | 37d6eaea465c8dca981efd173b7c74db9a01830e | [
"MIT"
] | null | null | null | docs/relay/nodes.rst | spacether/graphene | 37d6eaea465c8dca981efd173b7c74db9a01830e | [
"MIT"
] | null | null | null | docs/relay/nodes.rst | spacether/graphene | 37d6eaea465c8dca981efd173b7c74db9a01830e | [
"MIT"
] | null | null | null | Nodes
=====
A ``Node`` is an Interface provided by ``graphene.relay`` that contains
a single field ``id`` (which is a ``ID!``). Any object that inherits
from it has to implement a ``get_node`` method for retrieving a
``Node`` by an *id*.
Quick example
-------------
Example usage (taken from the `Starwars Relay example`_):
.. code:: python
class Ship(graphene.ObjectType):
'''A ship in the Star Wars saga'''
class Meta:
interfaces = (relay.Node, )
name = graphene.String(description='The name of the ship.')
@classmethod
def get_node(cls, info, id):
return get_ship(id)
The ``id`` returned by the ``Ship`` type when you query it will be a
scalar which contains enough info for the server to know its type and
its id.
For example, the instance ``Ship(id=1)`` will return ``U2hpcDox`` as the
id when you query it (which is the base64 encoding of ``Ship:1``), and
which could be useful later if we want to query a node by its id.
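The default encoding is simply base64 of ``"Type:id"``. A minimal stand-alone sketch (these helper names are illustrative; graphene itself relies on the ``graphql-relay`` package for this):

```python
import base64

def to_global_id(type_name, local_id):
    """Relay-style global id: base64 of "Type:id"."""
    return base64.b64encode(f"{type_name}:{local_id}".encode()).decode()

def from_global_id(global_id):
    """Invert to_global_id, returning (type_name, local_id) as strings."""
    type_name, local_id = base64.b64decode(global_id).decode().split(":", 1)
    return type_name, local_id

print(to_global_id("Ship", 1))      # "U2hpcDox", as quoted in the text
print(from_global_id("U2hpcDox"))   # ("Ship", "1")
```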
Custom Nodes
------------
You can use the predefined ``relay.Node`` or you can subclass it, defining
custom ways of encoding a node id (using the ``to_global_id`` method on the class)
or of retrieving a Node given an encoded id (with the ``get_node_from_global_id`` method).
Example of a custom node:
.. code:: python
class CustomNode(Node):
class Meta:
name = 'Node'
@staticmethod
def to_global_id(type, id):
return f"{type}:{id}"
@staticmethod
def get_node_from_global_id(info, global_id, only_type=None):
type, id = global_id.split(':')
if only_type:
# We assure that the node type that we want to retrieve
# is the same that was indicated in the field type
assert type == only_type._meta.name, 'Received not compatible node.'
if type == 'User':
return get_user(id)
elif type == 'Photo':
return get_photo(id)
The ``get_node_from_global_id`` method will be called when ``CustomNode.Field`` is resolved.
Accessing node types
--------------------
If we want to retrieve node instances from a ``global_id`` (a scalar that identifies an instance by its type name and id),
we can simply do ``Node.get_node_from_global_id(info, global_id)``.
If we want to restrict instance retrieval to a specific type, we can do:
``Node.get_node_from_global_id(info, global_id, only_type=Ship)``. This will raise an error
if the ``global_id`` doesn't correspond to a Ship type.
Node Root field
---------------
As is required in the `Relay specification`_, the server must implement
a root field called ``node`` that returns a ``Node`` Interface.
For this reason, ``graphene`` provides the field ``relay.Node.Field``,
which links to any type in the Schema which implements ``Node``.
Example usage:
.. code:: python
class Query(graphene.ObjectType):
# Should be CustomNode.Field() if we want to use our custom Node
node = relay.Node.Field()
.. _Relay specification: https://facebook.github.io/relay/docs/graphql-relay-specification.html
.. _Starwars Relay example: https://github.com/graphql-python/graphene/blob/master/examples/starwars_relay/schema.py
| 32.048544 | 121 | 0.663435 |
89a7a71d6311196ea13bd1f7cf12c0dba3566e2b | 2,032 | rst | reStructuredText | capitolo-3---principi-generali/interazioni.rst | italia/lg-tourism-digital-hub-interoperabilita-docs | 131cac1f52d550232702b3636a044dca99c760cc | [
"CC-BY-4.0"
] | 1 | 2022-02-14T20:23:54.000Z | 2022-02-14T20:23:54.000Z | capitolo-3---principi-generali/interazioni.rst | italia/lg-tourism-digital-hub-interoperabilita-docs | 131cac1f52d550232702b3636a044dca99c760cc | [
"CC-BY-4.0"
] | 1 | 2022-01-14T09:15:30.000Z | 2022-01-14T09:15:30.000Z | capitolo-3---principi-generali/interazioni.rst | italia/lg-tourism-digital-hub-interoperabilita-docs | 131cac1f52d550232702b3636a044dca99c760cc | [
"CC-BY-4.0"
] | null | null | null | **3.1 Interazioni**\ [1]_
==========================
L’ambito di applicazione delle presenti Linee Guida, in coerenza con il
Modello di Interoperabilità delle Pubbliche Amministrazioni Italiane
*(ModI: Modello realizzato da AgID che rende possibile la collaborazione
tra Pubbliche amministrazioni e tra queste e soggetti terzi per mezzo di
soluzioni tecnologiche che assicurano l’interazione e lo scambio di
informazioni senza vincoli sulle implementazioni, evitando integrazioni
ad hoc),* comprende i tre tipi di interazioni previste nell’EIF che
vedono coinvolte Pubbliche Amministrazioni, Cittadini e Imprese.
Le modalità di interazione con l’Ecosistema TDH permetteranno sia di
fruire di servizi digitali (open) disponibili al suo interno sia di
svilupparne/erogarne di nuovi (open o closed) mettendoli a disposizione
dei soggetti aderenti all’Ecosistema. In tal senso, le interazioni
prevedono che gli operatori (pubblici e privati) aderenti al TDH022
possano svolgere la funzione di **erogatore di servizi** e **fruitore
dei servizi,** come precedentemente descritto. In tal senso è possibile
da parte di un operatore turistico svolgere alternativamente o entrambe
le funzioni.
I soggetti fruitori possono utilizzare i servizi digitali messi a
disposizione dagli erogatori di servizi, che hanno evidenza del loro
utilizzo tramite appositi meccanismi di controllo quali:
- una soluzione software che preveda un meccanismo di controllo
azionato da specifico operatore nel back-end della Piattaforma (user
agent/human);
- un sistema applicativo automatico (server/machine), anche allo scopo
di definire nuovi servizi a valore aggiunto (connessione via API).
.. [1]
Il contenuto di questo paragrafo è in linea con quanto prescritto
dalle “Linee Guida sull’interoperabilità tecnica delle Pubbliche
Amministrazioni” emanate da AgID, di cui al paragrafo 3.1 del
documento citato (si rimanda alla sezione “Bibliografia e Sitografia
di riferimento” a fine documento per link con redirect al documento)
| 50.8 | 72 | 0.800197 |
4e536bf402004627161debeb4eed162a8098f5cd | 279 | rst | reStructuredText | docs/installation.rst | cfc603/dj-tasks | a28a528844351e7cfc6e89d4b5d7cbf8d0df9dc8 | [
"MIT"
] | null | null | null | docs/installation.rst | cfc603/dj-tasks | a28a528844351e7cfc6e89d4b5d7cbf8d0df9dc8 | [
"MIT"
] | 3 | 2020-10-08T18:37:38.000Z | 2020-12-29T17:31:43.000Z | docs/installation.rst | cfc603/dj-tasks | a28a528844351e7cfc6e89d4b5d7cbf8d0df9dc8 | [
"MIT"
] | null | null | null | ============
Installation
============
At the command line::
$ pip install dj-tasks
Add ``dj_tasks`` to your ``INSTALLED_APPS``:
.. code-block:: python
INSTALLED_APPS = [
...
"dj_tasks",
...
]
DJTASKS_LOCK_ID = "your_django_project"
| 13.95 | 44 | 0.526882 |
ca6275915708e437926aa8cec7e0b2eb84f93ed2 | 529 | rst | reStructuredText | docs/_writing_conda_fleeqtk.rst | pvanheus/planemo | 12c4256325bb1b274dcd40d64b91c1f832cf49b1 | [
"CC-BY-3.0"
] | 73 | 2015-01-03T15:09:26.000Z | 2022-03-30T23:52:55.000Z | docs/_writing_conda_fleeqtk.rst | pvanheus/planemo | 12c4256325bb1b274dcd40d64b91c1f832cf49b1 | [
"CC-BY-3.0"
] | 958 | 2015-01-02T08:27:45.000Z | 2022-03-23T14:51:51.000Z | docs/_writing_conda_fleeqtk.rst | jmchilton/planemo | d352a085fe10cb6b7c1384663b114201da42d97b | [
"CC-BY-3.0"
] | 84 | 2015-01-06T18:27:28.000Z | 2021-11-18T01:58:17.000Z | This is the skeleton of a tool wrapping the parody bioinformatics software package fleeqtk_.
fleeqtk is a fork of the project seqtk_ that many Planemo tutorials are built around and the
example tool should hopefully be fairly familiar. fleeqtk version 1.3 can be downloaded
from `here <https://github.com/jmchilton/fleeqtk/archive/v1.3.tar.gz>`__ and built using
``make``. The result of ``make`` includes a single executable ``fleeqtk``.
.. _fleeqtk: https://github.com/jmchilton/fleeqtk
.. _seqtk: https://github.com/lh3/seqtk
| 58.777778 | 92 | 0.778828 |
c8732f9ca2b996da96df4f40524c3e33025b67e9 | 863 | rst | reStructuredText | doc/source/community/software-contrib.rst | openstack/charm-guide | 50c4ae805ca4dff6e913dce3c76724d720139a7d | [
"Apache-2.0"
] | 18 | 2016-07-05T09:33:01.000Z | 2019-11-15T07:58:40.000Z | doc/source/community/software-contrib.rst | openstack/charm-guide | 50c4ae805ca4dff6e913dce3c76724d720139a7d | [
"Apache-2.0"
] | 4 | 2017-01-24T23:08:25.000Z | 2018-12-10T12:55:45.000Z | doc/source/community/software-contrib.rst | openstack/charm-guide | 50c4ae805ca4dff6e913dce3c76724d720139a7d | [
"Apache-2.0"
] | 5 | 2016-09-20T07:48:45.000Z | 2021-07-21T15:44:33.000Z | ======================
Software contributions
======================
The OpenStack charms are part of the OpenStack project and follow the
same development process as other projects.
For details on how to submit changes to an OpenStack project please refer
to the OpenStack `development documentation`_ and then take a read through
the rest of this section on how to contribute to the OpenStack Charms.
.. toctree::
:maxdepth: 1
../reference/feature-specification
../reference/coding-guidelines
../reference/charm-anatomy
../reference/testing
../reference/making-a-change
../reference/reactive-handlers-optimization
../reference/backport-policy
../reference/new-api-charm
../reference/new-sdn-charm
../reference/new-manila-charm
.. LINKS
.. _development documentation: https://docs.openstack.org/infra/manual/developers.html
| 30.821429 | 86 | 0.723059 |
825f1ce9cc2f3162665f21ea477bff8b2d58d30f | 681 | rst | reStructuredText | docs/panels/render/general.rst | fedormatantsev/blenderseed | 99a29c99a61f7b92a5db4378b3a558a18d54654e | [
"MIT"
] | 2 | 2018-04-12T23:45:57.000Z | 2018-04-12T23:46:04.000Z | docs/panels/render/general.rst | fedormatantsev/blenderseed | 99a29c99a61f7b92a5db4378b3a558a18d54654e | [
"MIT"
] | null | null | null | docs/panels/render/general.rst | fedormatantsev/blenderseed | 99a29c99a61f7b92a5db4378b3a558a18d54654e | [
"MIT"
] | null | null | null | General
=======
.. image:: /_static/screenshots/render_panels/general.JPG
|
- Auto Threads
- Select this option to have appleseed automatically determine the number of rendering threads to use. Unselect it to choose manually.
- Export Geometry
- All: Always export all geometry.
- Partial: Export geometry only if it isn't already present in the output directory.
- Selected: Only export the item(s) that are selected in the viewport.
- Delete External Cache After rendering
- This will delete the files from the output directory after the render has completed.
- Tile Ordering
  - This defines how the image tiles are rendered out (or the order in which they will appear).
Tools to assist in creating cross-platform builds
Setting up Shopyo
=================
🔧 Install instructions
-----------------------
- download python3.7
- clone and cd into project
- run
.. code-block:: python
:caption: Install requirements.
:name: install_requirements
python -m pip install -r requirements.txt
👟 Run instructions
-------------------
``cd`` into shopyo/shopyo if not already.
Initialise and set up the app.
.. code-block:: python
:caption: initialise Shopyo.
:name: initialise
python manage.py initialise
run the app.
.. code-block:: python
:caption: runserver.
:name: runserver
python manage.py runserver
Go to the indicated URL.
**Super User password**
-----------------------
.. code-block:: python
:caption: super user password.
:name: user_pass
User ID: user
password: pass
| 16.367347 | 45 | 0.623441 |
f79874d79bfcc4c2701d7d43261e951562a29ede | 461 | rst | reStructuredText | notes.rst | alexa-infra/pil-lite | 1e72f6b46f757636c84f5bed0c8eab12fd8a17a9 | [
"MIT"
] | 6 | 2015-06-22T18:41:27.000Z | 2021-12-25T10:21:31.000Z | notes.rst | alexa-infra/pil-lite | 1e72f6b46f757636c84f5bed0c8eab12fd8a17a9 | [
"MIT"
] | 3 | 2019-10-08T21:54:23.000Z | 2020-08-09T22:05:25.000Z | notes.rst | alexa-infra/pil-lite | 1e72f6b46f757636c84f5bed0c8eab12fd8a17a9 | [
"MIT"
] | null | null | null | Build locally
-------------
.. code:: bash
python src/py_ext_build.py
PYTHONPATH=. pytest
Another way to build locally
----------------------------
.. code:: bash
pip install -e .
pytest
Release
-------
.. code:: bash
vim PilLite/__about__.py
git commit -m "..."
git tag -a 0.1.0 -m "version 0.1.0"
git push origin master --tags
pip install twine
python setup.py sdist
twine upload dist/Pil-Lite-0.1.0.tar.gz
| 15.896552 | 43 | 0.566161 |
2aa1bd2b409bee4b3b4211e1bf51f0e5265cb892 | 840 | rst | reStructuredText | docs/intro.rst | steve1aa/microdot | c130d8f2d45dcce9606dda25d31d653ce91faf92 | [
"MIT"
] | 173 | 2019-04-16T11:17:22.000Z | 2022-03-30T11:19:26.000Z | docs/intro.rst | steve1aa/microdot | c130d8f2d45dcce9606dda25d31d653ce91faf92 | [
"MIT"
] | 36 | 2019-07-01T06:14:50.000Z | 2022-03-08T18:53:19.000Z | docs/intro.rst | steve1aa/microdot | c130d8f2d45dcce9606dda25d31d653ce91faf92 | [
"MIT"
] | 30 | 2019-04-16T16:21:36.000Z | 2022-03-22T12:56:30.000Z | Installation
------------
Microdot can be installed with ``pip``::
pip install microdot
For platforms that do not support or cannot run ``pip``, you can also manually
copy and install the ``microdot.py`` and ``microdot_asyncio.py`` source files.
Examples
--------
The following is an example of a standard single or multi-threaded web
server::
from microdot import Microdot
app = Microdot()
@app.route('/')
def hello(request):
return 'Hello, world!'
app.run()
Microdot also supports the asynchronous model and can be used under
``asyncio``. The example that follows is equivalent to the one above, but uses
coroutines for concurrency::
from microdot_asyncio import Microdot
app = Microdot()
@app.route('/')
async def hello(request):
return 'Hello, world!'
app.run()
| 21 | 78 | 0.67619 |
dc489969b2b45800ef2f6c35b7fd281018789056 | 90 | rst | reStructuredText | docs/concepts/design_architecture/services/cli.rst | venturiscm/hcp | 74ad18180822301274daa9218d7bd9fbdb7807f7 | [
"Apache-2.0"
] | 1 | 2020-06-22T21:25:52.000Z | 2020-06-22T21:25:52.000Z | docs/concepts/design_architecture/services/cli.rst | venturiscm/hcp | 74ad18180822301274daa9218d7bd9fbdb7807f7 | [
"Apache-2.0"
] | 1 | 2020-05-21T02:46:24.000Z | 2020-05-25T07:19:23.000Z | docs/concepts/design_architecture/services/cli.rst | venturiscm/hcp | 74ad18180822301274daa9218d7bd9fbdb7807f7 | [
"Apache-2.0"
] | null | null | null | #############################
Zimagi Command Line Interface
#############################
| 22.5 | 29 | 0.288889 |
a7adc71a35ead163e127e80641ae80170da8c5a8 | 14,718 | rst | reStructuredText | je/je5/je5-dispersive-eqns.rst | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | 1 | 2019-12-19T16:21:13.000Z | 2019-12-19T16:21:13.000Z | je/je5/je5-dispersive-eqns.rst | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | null | null | null | je/je5/je5-dispersive-eqns.rst | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | 2 | 2020-01-08T06:23:33.000Z | 2020-01-08T07:06:50.000Z | :Author: Ammar Hakim
JE5: Hyperbolic balance laws with dispersive source terms
=========================================================
.. contents::
While solving the :doc:`two-fluid Riemann problems
<../je4/je4-twofluid-shock>` small oscillations are observed in the
solution. At first sight it might be thought that these oscillations
are numerical and do not reflect a correct solution to the ideal
two-fluid equation system. In this note I show that such solutions can
(and do) occur when certain types of source terms are present and the
homogeneous system is hyperbolic.
Linearization and hyperbolic balance laws with dispersive sources
-----------------------------------------------------------------
Linearizing a hyperbolic balance law can lead to important insights
into the structure of solutions, although linearization can not reveal
the rich non-linear phenomena (shocks, rarefactions and contact
discontinuities) that are described by these equations. We start from
the non-conservative form of the equations
.. math::
\frac{\partial \mathbf{v}}{\partial t}
+ \mathbf{A}_p\frac{\partial \mathbf{v}}{\partial x} = \mathbf{s}_p
where :math:`\mathbf{v}` are the primitive variables,
:math:`\mathbf{A}_p` is a Jacobian matrix and :math:`\mathbf{s}_p` are
source terms. Linearize about a uniform equilibrium
:math:`\mathbf{v}_0` to write :math:`\mathbf{v} = \mathbf{v}_0 +
\mathbf{v}_1`, where :math:`\mathbf{v}_1` is a small
perturbation. Using a Taylor series expansion to first-order to write
:math:`\mathbf{s}_p(\mathbf{v}) = \mathbf{s}_p(\mathbf{v}_0) + \left(
{\partial \mathbf{s}_p}/{\partial \mathbf{v}} \right)_{\mathbf{v}_0}
\mathbf{v}_1` and :math:`\mathbf{A}_p(\mathbf{v}) =
\mathbf{A}_p(\mathbf{v}_0) + \left( {\partial \mathbf{A}_p}/{\partial
\mathbf{v}} \right)_{\mathbf{v}_0} \mathbf{v}_1` and letting
:math:`\mathbf{M}_p \equiv {\partial \mathbf{s}_p}/{\partial
\mathbf{v}}`, the linear form of the non-conservative equation becomes
.. math::
\frac{\partial \mathbf{v}_1}{\partial t}
+ \mathbf{A}_p(\mathbf{v}_0)\frac{\partial \mathbf{v}_1}{\partial x}
= \mathbf{M}_p(\mathbf{v}_0)\mathbf{v}_1.
To understand the mathematical structure of the linear equation a
Fourier representation of the solution is assumed and each mode is
represented as :math:`\mathbf{v}_1 = \mathbf{\hat{v}}_1 e^{i\omega t}
e^{i k x}`, where :math:`\omega` is the frequency and :math:`k` is the
wavenumber. Using this we obtain
.. math::
\left[
i\omega\mathbf{I} + ik\mathbf{A}_p(\mathbf{v}_0) - \mathbf{M}_p(\mathbf{v}_0)
\right] \mathbf{v}_1 = 0,
where :math:`\mathbf{I}` is a unit matrix. For non-trivial solutions
the determinant of the matrix in the square brackets must
vanish. Another way to state this condition is that the frequency and
wavenumbers must be related by the *dispersion relations*
:math:`\omega = \omega(k)` which are the eigenvalues of the matrix
:math:`-k\mathbf{A}_p(\mathbf{v}_0) -
i\mathbf{M}_p(\mathbf{v}_0)`. Thus if
:math:`\lambda^p(k,\mathbf{v}_0)` is the :math:`p^{\textrm{th}}`
eigenvalue of this matrix then the :math:`p^{\textrm{th}}` branch of
the dispersion relation is :math:`\omega = \lambda^p(k,\mathbf{v}_0)`.
If the dispersion relation is purely real then the equation system
will support non-decaying waves. Further, if the dispersion relation
is *linear*, then a wave packet will propagate without distortion. If,
however, if the dispersion relation is non-linear (but still real),
the wave packet will suffer dispersion, i.e. waves with different
wave-numbers will propagate with different group and phase speeds.
For the simple case of vanishing sources (:math:`\mathbf{s}_p=0`) the
dispersion relation reduces to :math:`\omega = \lambda^p k`, where
:math:`\lambda^p` are the eigenvalues of the Jacobian matrix
:math:`\mathbf{A}_p`. As the homogeneous equation is assumed to be
hyperbolic the eigenvalues are all real, indicating that for the
homogeneous case waves will propagate without dispersion or decay.
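As a concrete check of this statement, the primitive-variable Jacobian of the 1D Euler equations (a standard hyperbolic system) has the real eigenvalues :math:`u` and :math:`u \pm c_s`. A small numpy sketch, with assumed background values:

```python
import numpy as np

# Primitive-variable Jacobian A_p of the 1D Euler equations in (rho, u, p).
# The background values below are assumptions chosen for illustration.
rho0, u0, p0, gamma = 1.0, 0.5, 1.0, 1.4
A = np.array([[u0,  rho0,       0.0],
              [0.0, u0,         1.0 / rho0],
              [0.0, gamma * p0, u0]])

lam = np.sort(np.linalg.eigvals(A).real)
cs = np.sqrt(gamma * p0 / rho0)
print(lam)  # [u0 - cs, u0, u0 + cs]: all real, so the system is hyperbolic
```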
This simple analysis indicates that the linear solution will depend on
the nature of the eigenvalues of the source Jacobian matrix
:math:`\mathbf{M}_p`. In case the eigenvalues of this matrix are
*purely imaginary*, the dispersion relation will be real but waves
will suffer dispersion as they propagate in the background uniform
equilibrium. In this case the system of equations is called a
*hyperbolic balance law with dispersive source terms*. (This is my
own terminology and I have not seen such equations discussed in the
literature).
The Dispersive Euler Equations
------------------------------
Several simple hyperbolic balance laws with dispersive source terms can
be constructed. However, a particularly useful system that has
properties similar to the ideal two-fluid equations is the following
*dispersive Euler* system
.. math::
&\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0 \\
&\frac{\partial \mathbf{u}}{\partial t} +
\mathbf{u}\cdot\nabla\mathbf{u} =
-\nabla p/\rho + \lambda\mathbf{u}\times\mathbf{b} \\
&\frac{\partial p}{\partial t} + \mathbf{u}\cdot\nabla p =
-\gamma p \nabla\cdot\mathbf{u}
where :math:`\mathbf{b}(\mathbf{x})` is a time-independent vector
field and :math:`\lambda` is a constant.
Consider solving the linearized system in one dimension. Linearizing
around a stationary uniform initial state :math:`\rho = \rho_0`,
:math:`p = p_0` and assuming :math:`\mathbf{b} = (0,0,b_z)`, where
:math:`b_z` is constant leads to the linear system
.. math::
\frac{\partial \rho_1}{\partial t}
&= -\rho_0\frac{\partial u_1}{\partial x} \\
\rho_0\frac{\partial u_1}{\partial t} &=
-\frac{\partial p_1}{\partial x} + \rho_0 \lambda v_1 b_z \\
\rho_0\frac{\partial v_1}{\partial t} &= -\rho_0 \lambda u_1 b_z \\
\frac{\partial p_1}{\partial t} &=
-\gamma p_0 \frac{\partial u_1}{\partial x}
Assuming solutions of the form
.. math::

   f(x,t) = \sum_{n=0}^\infty f_n e^{i(k_n x + \omega_n t)}
for :math:`f\in \{\rho_1,u_1,v_1,p_1\}` and :math:`k_n` is the wave
number and :math:`\omega_n` is frequency we get the algebraic
equations
.. math::

   i \omega_n \rho_1 &= - i k_n u_1 \rho_0 \\
   i \omega_n u_1 \rho_0 &= -i k_n p_1 + \rho_0 \lambda v_1 b_z \\
   i \omega_n v_1 \rho_0 &= - \rho_0 \lambda u_1 b_z \\
   i \omega_n p_1 &= -i \gamma p_0 k_n u_1.
From this the dispersion relation can be computed as
.. math::
\omega_n = \pm ( k_n^2 c_{s0}^2 + \omega_c^2 )^{1/2}
Here :math:`c_{s0} \equiv \sqrt{\gamma p_0/\rho_o}` is the speed of
sound and :math:`\omega_c \equiv \lambda b_z` is the eigenvalue of the
source Jacobian.
Exact solution for initial step function perturbation
-----------------------------------------------------
Consider an initial perturbation of the form :math:`u_1(x,0)`, where
.. math::
u_1(x,t) = U_0 \sum_{n=0}^N
\frac{i}{2n+1} e^{i k_nx} e^{i \omega_n t}
with :math:`k_n = 2\pi(2n+1)`. For :math:`N\rightarrow \infty` this
represents the propagation of a step function perturbation. Letting
:math:`u_i^{(n)} \equiv i U_0 /(2n+1) e^{i(k_nx+\omega_nt)}` the
Fourier components of the other flow variable perturbations are given
by
.. math::

   \rho_1^{(n)} &= -\frac{k_n\rho_0}{\omega_n} u_1^{(n)} \\
   v_1^{(n)} &= i\frac{\lambda b_z}{\omega_n} u_1^{(n)} \\
   p_1^{(n)} &= -\frac{\gamma k_n p_0}{\omega_n} u_1^{(n)},
summing which over :math:`n=0,\ldots,N` gives the exact solution to
the linear problem. The following figure shows the exact solution for
:math:`N=5000`, :math:`\omega_c = 10` and :math:`c_s = \sqrt{2}`
at time 1000.
.. figure:: s41-sqpulse-exact.png
:width: 100%
:align: center
Exact solution [:doc:`s41 <../../sims/s41/s41-sqpulse-exact>`] of
the linear dispersive Euler equation for :math:`N=5000`,
:math:`\omega_c = 10` and :math:`c_s = \sqrt{2}` at time 1000. Very
fine small-scale features are seen which, in a numerical solution,
might be mistaken for numerical noise.
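The truncated series is straightforward to evaluate directly. A sketch (``U0`` is an assumed perturbation amplitude; at :math:`t=0` the real part of the sum is the classic square-wave Fourier series of amplitude :math:`\pi U_0/4`):

```python
import numpy as np

# Evaluate the truncated Fourier-series solution for the velocity
# perturbation; c_s and omega_c match the values used in the text.
U0 = 1.0
cs, wc = np.sqrt(2.0), 10.0

def u1_exact(x, t, N=5000):
    """Real part of the N-term series for u_1(x, t)."""
    n = np.arange(N)
    kn = 2.0 * np.pi * (2 * n + 1)
    wn = np.sqrt(kn**2 * cs**2 + wc**2)   # dispersion relation
    modes = 1j * U0 / (2 * n + 1) * np.exp(1j * (kn * x + wn * t))
    return modes.sum().real

# At t = 0 the series reduces to a square wave; away from the jumps the
# value is +/- (pi/4) * U0.
print(u1_exact(0.25, 0.0))  # close to -(pi/4) * U0 ~ -0.7854
```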
A note on solving dispersive Euler equations with Lucee
-------------------------------------------------------
The dispersive Euler equations can be solved by adding a source term
to the Euler equations. The source terms can be implemented using a
Lorentz force object. This object needs an electric and magnetic field
as input. Hence, we need to allocate memory for all the field
components and set the electric field to zero. Due to the peculiarity
of the point ODE integrator, this memory needs to be part of the fluid
fields. Hence, in the simulations shown below (see, for example,
:doc:`s40 <../../sims/s40/s40-dispersive-euler>`) the fields have 11
components (5 for the fluid, 3 for the electric field and 3 for the
magnetic field).
Problem 1: Case :math:`\omega_c = 10`
-------------------------------------
A series of simulations was performed for the case of :math:`\omega_c
= 10` and :math:`c_s = \sqrt{2}`. To avoid exciting all the Fourier
modes in the step function, the expansion was carried out to only
:math:`N=9` modes. The solution was computed on grids of 100, 200, 300
and 400 cells. The results of velocity :math:`u(x,t)` are shown below
at :math:`t=3`. The wave-propagation scheme has intrinsic diffusion
due to which the small wavelength features are poorly resolved when
the grid is relatively coarse.
.. figure:: s40424344-dispeuler-cmp.png
:width: 100%
:align: center
Velocity at :math:`t=3` for :math:`\omega_c = 10` for different grid
resolutions. The red lines are the numerical results while the black
lines is the exact solution. The top-left figure shows 100 cell
results [:doc:`s40 <../../sims/s40/s40-dispersive-euler>`],
top-right 200 cell results [:doc:`s42
<../../sims/s42/s42-dispersive-euler>`], bottom-left 300 cell
results [:doc:`s43 <../../sims/s43/s43-dispersive-euler>`] and
bottom-right 400 cell results [:doc:`s44
<../../sims/s44/s44-dispersive-euler>`]. At low resolution the small
wavelength features are poorly resolved due to numerical diffusion
of the scheme.
Problem 2: Case :math:`\omega_c = 100`
--------------------------------------
In these simulations, the influence from sources was increased by
setting :math:`\omega_c = 100`. The simulation is run on a grid with
200 cells. The time-step for this case is constrained by the need to
resolve the oscillations from the source terms. Taking :math:`k
\approx 1/\Delta x = 200` we get the largest frequency as
approximately 283. To resolve this the time-step needs to be much
smaller than :math:`1/283 \approx 0.0035`. This forces a more restrictive CFL
number (0.5) than allowed by stability of just the hyperbolic part. If
the oscillations are not resolved significant phase errors are seen in
the solution.
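The frequency estimate can be checked in a couple of lines (the grid is assumed to be of unit length, so :math:`\Delta x = 1/200`):

```python
import numpy as np

# Time-step estimate for the omega_c = 100 run on a 200-cell unit grid.
cs, wc = np.sqrt(2.0), 100.0
dx = 1.0 / 200.0
k_max = 1.0 / dx                 # largest resolvable wavenumber, k ~ 200

f_acoustic = k_max * cs          # acoustic part alone: ~ 283
f_total = np.sqrt(k_max**2 * cs**2 + wc**2)   # with the source term: 300
print(f_acoustic, f_total, 1.0 / f_acoustic)  # time-step must be << 0.0035
```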
.. figure:: s48-dispersive-euler_ux.png
:width: 100%
:align: center
Velocity at :math:`t=3` for :math:`\omega_c = 100` with 200 cells
[:doc:`s48 <../../sims/s48/s48-dispersive-euler>`] and with a CFL
number of 0.9. The red line is the numerical result while the black
line is the exact solution. The numerical solution is not only
highly diffuse but the peaks are not in the correct location,
showing phase errors in resolving the oscillations from the
dispersive terms. The reason for these phase errors, even though the
time-step satisfies the fluid CFL condition, is that the oscillation
period of the largest wavenumber waves is not adequately resolved.
.. figure:: s45-dispersive-euler_ux.png
:width: 100%
:align: center
Velocity at :math:`t=3` for :math:`\omega_c = 100` with 200 cells
[:doc:`s45 <../../sims/s45/s45-dispersive-euler>`] with a CFL number
of 0.5. The red line is the numerical result while the black line
is the exact solution. Significant diffusion is seen in the results
as well as small phase errors. Taking an even smaller time step will
reduce the phase error but add even more diffusion.
The simulation was next run with 400 cells. This significantly
improves the numerical solution even though some small-scale features
are still not resolved correctly.
.. figure:: s46-dispersive-euler_ux.png
:width: 100%
:align: center
Velocity at :math:`t=3` for :math:`\omega_c = 100` with 400 cells
[:doc:`s46 <../../sims/s46/s46-dispersive-euler>`]. The red line is
the numerical result while the black line is the exact
solution. The solution is now much better resolved, although some
small scale features are not resolved well.
Problem 3: Sod-shock problem
----------------------------
The previous simulations show the effect of dispersive source terms on
linear problems. In this simulation I solve the Sod-shock problem for
the dispersive Euler equations. This is a highly non-linear problem
and shows complex shock structure. The problem is initialized with a
discontinuity at :math:`x=0.5` and with left and right states
.. math::
\left[
\begin{matrix}
\rho_l \\
p_l
\end{matrix}
\right]
=
\left[
\begin{matrix}
3.0 \\
3.0
\end{matrix}
\right],
\qquad
\left[
\begin{matrix}
\rho_r \\
p_r
\end{matrix}
\right]
=
\left[
\begin{matrix}
1.0 \\
1.0
\end{matrix}
\right],
and is run to :math:`t=0.1` on a grid of 800 cells with
:math:`\mathbf{b} = (0.75, 0.0, 1.0)`, :math:`\lambda=100` and
:math:`\gamma = 5/3`.
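A minimal sketch of this initial condition in code follows; the zero initial velocity on both sides is an assumption (the report lists only density and pressure), as in the standard Sod problem.

```python
import numpy as np

n_cells = 800
x = (np.arange(n_cells) + 0.5) / n_cells  # cell centres on the unit domain

rho = np.where(x < 0.5, 3.0, 1.0)         # left/right densities
p = np.where(x < 0.5, 3.0, 1.0)           # left/right pressures
u = np.zeros(n_cells)                     # quiescent start (assumed)

gamma = 5.0 / 3.0
# conserved total energy density for an ideal gas
E = p / (gamma - 1.0) + 0.5 * rho * u**2
```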
The results are shown below. These show significant differences
between the zero-source case and the one with the dispersive
sources. Note that the solution looks like the two-fluid solutions to
the Riemann problem.
.. figure:: s47-dispersive-euler_sol.png
:width: 100%
:align: center
Solution at :math:`t=0.1` for Sod-shock problem [:doc:`s47
<../../sims/s47/s47-dispersive-euler>`]. Density (top left),
velocity (top right), pressure (bottom left) and internal energy
(bottom right). Solutions are significantly different from the
homogeneous case and look similar to the two-fluid Riemann
solutions.
Conclusions
-----------
One conclusion from this series of simulations is that dispersive
source terms can introduce small-scale features in the solution. To
resolve these features, sufficient spatial *and* temporal resolution is
needed. Poor spatial resolution can diffuse the oscillations while
poor temporal resolution can lead to phase errors. In physical
problems (for example multi-fluid plasmas) there is usually some
diffusion that is active at small scales and can be important when the
gradients change rapidly over a few grid cells. This physical
diffusion will wipe out the oscillations from the dispersive
sources. Hence, in such cases the resolution of the oscillations might
not be so important. However, from a mathematical viewpoint, the
numerical schemes need to be accurate enough to resolve such features.
=======
A python package to evaluate **sigsep musdb18** source separation results
as part of the `MUS task <https://sisec.inria.fr/home/2018-professionally-produced-music-recordings/>`__
of the `Signal Separation Evaluation Campaign
(SISEC) <https://sisec.inria.fr/>`__
Contents:
-----------------
.. toctree::
:maxdepth: 2
usage
API documentation
-----------------
.. toctree::
:maxdepth: 2
museval
museval.metrics
References
~~~~~~~~~~
If you use this package, please reference the accompanying paper.
======================
Elliot works with the following operating systems:
- Linux
- Windows 10
- macOS X
Elliot requires Python version 3.6 or later.
Elliot requires tensorflow version 2.3.2 or later. If you want to use Elliot with GPU,
please ensure that CUDA or cudatoolkit version is 7.6 or later.
This requires NVIDIA driver version >= 10.1 (for Linux and Windows10).
Please refer to this `document <https://www.tensorflow.org/install/source#gpu>`__ for further
working configurations.
Install from source
~~~~~~~~~~~~~~~~~~~
CONDA
^^^^^
.. code:: bash
git clone https://github.com//sisinflab/elliot.git && cd elliot
conda create --name elliot_env python=3.8
conda activate elliot_env
pip install --upgrade pip
pip install -e . --verbose
VIRTUALENV
^^^^^^^^^^
.. code:: bash
git clone https://github.com//sisinflab/elliot.git && cd elliot
virtualenv -p /usr/bin/pyhton3.6 venv # your python location and version
source venv/bin/activate
pip install --upgrade pip
pip install -e . --verbose
======
.. module:: ydk.errors
:synopsis: YDK Exceptions
This module contains YDK Python errors classes. These errors are thrown in case of data not conforming to the yang model or due to a server-side error.
.. py:exception:: YError
Bases: :exc:`exceptions.Exception`
Base class for Y Errors. The subclasses give a specialized view of the error that has occurred.
.. py:exception:: YModelError
Bases: :exc:`ydk.errors.YError`
Model Error. Thrown when a model constraint is violated.
.. py:exception:: YServiceProviderError
Bases: :exc:`ydk.errors.YError`
Exception for Service Provider. Thrown in case of a server-side error.
.. py:exception:: YClientError
Bases: :exc:`ydk.errors.YError`
Exception for client connection
.. py:exception:: YIllegalStateError
Bases: :exc:`ydk.errors.YError`
Illegal State Error. Thrown when an operation/service is invoked on an object that is not in the right state. Use the error_msg for the error.
.. py:exception:: YInvalidArgumentError
Bases: :exc:`ydk.errors.YError`
Invalid Argument. Use the error_msg for the error.
.. py:exception:: YOperationNotSupportedError
Bases: :exc:`ydk.errors.YError`
Operation Not Supported Error. Thrown when an operation is not supported.
.. py:exception:: YServiceError
Bases: :exc:`ydk.errors.YError`
Exception for Service Side Validation
.. _file_Source_Azura_RenderSystem_Inc_Generic_Drawable.h:
File Drawable.h
===============
|exhale_lsh| :ref:`Parent directory <dir_Source_Azura_RenderSystem_Inc_Generic>` (``Source\Azura\RenderSystem\Inc\Generic``)
.. |exhale_lsh| unicode:: U+021B0 .. UPWARDS ARROW WITH TIP LEFTWARDS
.. contents:: Contents
:local:
:backlinks: none
Definition (``Source\Azura\RenderSystem\Inc\Generic\Drawable.h``)
-----------------------------------------------------------------
.. toctree::
:maxdepth: 1
program_listing_file_Source_Azura_RenderSystem_Inc_Generic_Drawable.h.rst
Includes
--------
- ``Containers/Vector.h``
- ``Core/RawStorageFormat.h``
- ``GenericTypes.h`` (:ref:`file_Source_Azura_RenderSystem_Inc_Generic_GenericTypes.h`)
- ``Types.h`` (:ref:`file_Source_Azura_RenderSystem_Inc_Generic_GenericTypes.h`)
Included By
-----------
- :ref:`file_Source_Azura_RenderSystem_Inc_D3D12_D3D12Drawable.h`
- :ref:`file_Source_Azura_RenderSystem_Inc_D3D12_D3D12DrawablePool.h`
- :ref:`file_Source_Azura_RenderSystem_Inc_Generic_Renderer.h`
- :ref:`file_Source_Azura_RenderSystem_Inc_Generic_PoolPrimitives.h`
- :ref:`file_Source_Azura_RenderSystem_Inc_Vulkan_VkComputePool.h`
- :ref:`file_Source_Azura_RenderSystem_Inc_Vulkan_VkDrawablePool.h`
- :ref:`file_Source_Azura_RenderSystem_Src_Generic_Drawable.cpp`
Namespaces
----------
- :ref:`namespace_Azura`
Classes
-------
- :ref:`exhale_struct_struct_azura_1_1_drawable_create_info`
- :ref:`exhale_struct_struct_azura_1_1_drawable_pool_create_info`
- :ref:`exhale_class_class_azura_1_1_drawable`
- :ref:`exhale_class_class_azura_1_1_drawable_pool`
Typedefs
--------
- :ref:`exhale_typedef__drawable_8h_1a58ca57dc00aa1103dece3e66fc0abe13`
==============
Differential Equations.
This package defines common dynamical models and probabilistic solvers for differential
equations.
.. automodapi:: probnum.diffeq
:no-heading:
:no-main-docstr:
@productId =
###
GET {{url}}/products
###
GET {{url}}/products/{{productId}}
###
POST {{url}}/products
Content-Type: application/json
{
"name": "Lenovo",
"category": "pc",
"price": 5000
}
===
.. testsetup::
from esm import *
.. automodule:: esm
:members:
-----------------------------------------------------------------------
Project for the implementation of security management system based on the
GL Starter Kit board
LICENCE
-----------------------------------------------------------------------
The MIT License (MIT) Copyright © 2020 Vadym Furmanchuk
########
:date: 2015-01-22
UptimeSLC meets at 6:30 p.m. on the third Tuesday of every month. Because uptimeSLC is topic-centric, the first and second meeting on each topic will be in person AND online. The third meeting on the topic will be in-person only as focus groups can really only be in-person.
Attending In Person
+++++++++++++++++++
Meetings will be held in the Corporate Partnership Center at the Miller Campus of Salt Lake Community College. Room #333.
Here is a `google map <https://goo.gl/maps/MQr5f>`_ to help you navigate to the Miller Campus.
Here is a `campus map <http://www.slcc.edu/locations/miller-campus.aspx>`_, the meetings are held in Room #333 in the MCPC (Miller Corporate Partnership Center).
Attending Online
++++++++++++++++
Meetings will also be online for the first two meetings of each topic. To access the meetings, visit the `uptimeSLC YouTube Channel <http://ur1.ca/jjsaw>`_. Additionally, meetings will be announced in the `uptimeSLC Meetup Group <http://www.meetup.com/uptimeSLC/>`_.
One advantage of online attendance is it can be done any time after the sessions take place.
| 45.56 | 274 | 0.741879 |
e6ec981f88b92920f98ff97c1f22fcb19fcaa73d | 648 | rst | reStructuredText | docs/api/api_python/dataset_vision/mindspore.dataset.vision.c_transforms.BoundingBoxAugment.rst | httpsgithu/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | 1 | 2022-02-23T09:13:43.000Z | 2022-02-23T09:13:43.000Z | docs/api/api_python/dataset_vision/mindspore.dataset.vision.c_transforms.BoundingBoxAugment.rst | 949144093/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | null | null | null | docs/api/api_python/dataset_vision/mindspore.dataset.vision.c_transforms.BoundingBoxAugment.rst | 949144093/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | null | null | null | mindspore.dataset.vision.c_transforms.BoundingBoxAugment
========================================================
.. py:class:: mindspore.dataset.vision.c_transforms.BoundingBoxAugment(transform, ratio=0.3)
Apply a given image transform to random bounding box regions of the image.

**Parameters:**

- **transform** (TensorOperation) - the transform to apply to random bounding box regions of the image.
- **ratio** (float, optional) - the fraction of bounding boxes to apply the transform to. Range: [0.0, 1.0]. Default: 0.3.

**Exceptions:**

- **TypeError** - if `transform` is not an image transform from the :class:`mindspore.dataset.vision.c_transforms` module.
- **TypeError** - if `ratio` is not of type float.
- **ValueError** - if `ratio` is not in the range [0.0, 1.0].
- **RuntimeError** - if a given bounding box is invalid.
==========================================================
.. NOTE::
Coming soon 🛠
etcd Documentation
==================
* :ref:`faq`: Frequently Asked Questions.
* :ref:`client-architecture`: Describes etcd client components.
.. toctree::
:maxdepth: 3
:caption: FAQs
faq
.. toctree::
:maxdepth: 2
:caption: Architecture
client-architecture
===========================================
In this lab exercise, you will bypass SSL decryption based on requests
to URLs categorized as financial services web sites.
Estimated completion time: 25 minutes
**Objectives:**
- Apply a new Per-Request Policy to bypass SSL decryption for specific
URL categories
- Test web browsing behavior
**Lab Requirements:**
- Lab 1 previously completed successfully (working SWG iApp deployment)
Task 1 – Copy and configure new Per-Request Policy
--------------------------------------------------
- Copy the **Lab\_Per\_Request** Per Request Policy by browsing
to **Access Policy > Per-Request Policies** and click **Copy**
- Name the copy **Lab\_Per\_Request\_SSL\_Bypass**
- Edit the new Per-Request Policy by clicking **Edit**, then go
to the VPE tab in your browser
- Modify the Encrypted Category Lookup object to include a branch for
SSL Bypass:
- Click on the existing **Category Lookup** object
- On the **Properties** tab, change the name to **Encrypted**
**Category** **Lookup**
- Click to access the **Branch Rules** tab
- Click **Add Branch Rule** and name it **Banks**
- Click **Change** to modify the Expression of this new Branch
Rule
- Click **Add Expression**
- Change **Agent Sel**: to **Category Lookup**
- Change **Category is**: to **Financial Data and Services**
- Click **Add Expression**
- Click **Finished**
- Click **Save**
- Add an **SSL Bypass Set** object (from the General Purpose tab)
on the **Banks** branch of the **Encrypted Category Lookup**
- Click **Save**
- Add an **SSL Intercept Set** object (from the General Purpose
tab) on the “fallback” branch of the **Encrypted Category Lookup**
- Click **Save**
- Add a **URL Filter** object on the **SSL Bypass** Branch; select the
**LAB\_URL\_FILTER URL** filter previously created in Lab1
- Click **Save**
- Change the **Allow** branch to an ending of **Allow**
|image24|
Task 2 – Reconfigure SWG iApp to assign New Per-Request Policy
--------------------------------------------------------------
- Browse to **iApps >> Application Services > Applications**
- Click on **SWG**
- Click **Reconfigure**
- Find the section **Which Per-Request Access Policy do you want to
use?**
- Change the per-request policy to **Lab\_Per\_Request\_SSL\_Bypass**
- Scroll to the bottom and click **finished**
Task 3 – Testing
----------------
Test 1:
~~~~~~~
- Open **Internet Explorer** on your Jump Host client machine
- Browse to **http://www.wellsfargo.com**
- The browser should prompt you for authentication. Submit your
credentials.
- User: ``user1``
- Password: ``AgilityRocks!``
- Verify the site loads correctly and inspect the SSL certificate to
confirm that it is originated from Wells Fargo and SSL Bypass was
enabled
|image25|
.. |image24| image:: /_static/class3/image26.png
:width: 6.92014in
:height: 2.76250in
.. |image25| image:: /_static/class3/image27.png
:width: 5.30833in
:height: 1.78333in
teeplot
============
.. image:: https://img.shields.io/pypi/v/teeplot.svg
:target: https://pypi.python.org/pypi/teeplot
.. image:: https://img.shields.io/travis/mmore500/teeplot.svg
:target: https://travis-ci.com/mmore500/teeplot
.. image:: https://readthedocs.org/projects/teeplot/badge/?version=latest
:target: https://teeplot.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
teeplot automatically saves a copy of rendered Jupyter notebook plots
* Free software: MIT license
* Documentation: https://teeplot.readthedocs.io.
.. code-block:: python3
from teeplot import teeplot as tp
import seaborn as sns
# adapted from https://seaborn.pydata.org/examples/errorband_lineplots.html
tp.tee(
sns.lineplot,
x='timepoint',
y='signal',
hue='region',
style='event',
data=sns.load_dataset('fmri'),
teeplot_outattrs={
'additional' : 'metadata',
'for' : 'output-filename',
},
)
tp.tee(
sns.catplot,
data=sns.load_dataset('penguins'),
kind='bar',
x='species',
y='body_mass_g',
hue='sex',
ci='sd',
palette='dark',
alpha=.6,
height=6,
)
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| 22.6 | 103 | 0.665078 |
===============
The data are available here: https://analyse.kmi.open.ac.uk/open_dataset
To get started, download the seven CSV files and store them in a directory titled data/raw.
To build a PostgreSQL database, execute the file data/create_database.sql to create the tables, then execute the file data/populate_database.sql to populate the tables from the CSV files.
To replicate and explore my results:
Run src/features/build_features.py
Run src/models/train_model_rf.py
Run src/models/predict_evaluate_model.py
Enjoy!
| 28.842105 | 187 | 0.782847 |
====================================================
.. automodule:: skan.pipe
:members:
.. automodule:: cgat_logfiles2tsv
.. program-output:: python ../scripts/cgat_logfiles2tsv.py --help
"MIT"
] | 103 | 2020-09-05T09:07:32.000Z | 2021-08-16T13:52:44.000Z | doc/index.rst | yaml2sbml-dev/yaml2sbml | 43ef3cb478ef30cfc0f1f4e232499824baf20598 | [
"MIT"
] | 4 | 2020-09-27T17:21:12.000Z | 2021-10-20T19:41:17.000Z | yaml2sbml
=========
.. image:: https://github.com/yaml2sbml-dev/yaml2sbml/workflows/CI/badge.svg
:target: https://github.com/yaml2sbml-dev/yaml2sbml/actions
:alt: Build status
.. image:: https://codecov.io/gh/yaml2sbml-dev/yaml2sbml/branch/master/graph/badge.svg
:target: https://codecov.io/gh/yaml2sbml-dev/yaml2sbml
:alt: Code coverage
.. image:: https://readthedocs.org/projects/yaml2sbml/badge/?version=latest
:target: https://yaml2sbml.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://app.codacy.com/project/badge/Grade/632acdc8d4ef4f50bf69892b8862fd24
:target: https://www.codacy.com/gh/yaml2sbml-dev/yaml2sbml/dashboard?utm_source=github.com&utm_medium=referral&utm_content=yaml2sbml-dev/yaml2sbml&utm_campaign=Badge_Grade
:alt: Code quality
.. image:: https://badge.fury.io/py/yaml2sbml.svg
:target: https://badge.fury.io/py/yaml2sbml
:alt: PyPI
:Release: |version|
:Source code: https://github.com/yaml2sbml-dev/yaml2sbml
.. image:: logo/logo_yaml2sbml_long.png
:alt: yaml2sbml logo
:align: center
**yaml2sbml** allows the user to convert a system of ODEs specified in a YAML_
file to SBML_.
In addition, if experimental data is provided in the YAML file, it can also be
converted to PEtab_.
.. toctree::
:caption: Main
:maxdepth: 1
:hidden:
install
format_specification
examples/examples
api_doc
.. toctree::
:caption: About
:maxdepth: 1
:hidden:
release_notes
license
log
logo
contact
.. toctree::
:caption: Developers
:maxdepth: 1
:hidden:
contribute
deploy
.. _YAML: https://yaml.org
.. _SBML: http://sbml.org
.. _PEtab: https://petab.readthedocs.io/en/stable/
| 27.015625 | 186 | 0.721805 |
2ac65bedefb9ccf9414f46b10d971adbcfa8f4ab | 2,143 | rst | reStructuredText | doc/guides/gcov.rst | gamnes/zephyr | 882702e6883746565ad49ca5befb0df34c8528ab | [
"Apache-2.0"
] | null | null | null | doc/guides/gcov.rst | gamnes/zephyr | 882702e6883746565ad49ca5befb0df34c8528ab | [
"Apache-2.0"
] | null | null | null | doc/guides/gcov.rst | gamnes/zephyr | 882702e6883746565ad49ca5befb0df34c8528ab | [
"Apache-2.0"
] | null | null | null | .. _gcov:
Test coverage reports via GCOV
##############################
Overview
********
`GCC GCOV <https://gcc.gnu.org/onlinedocs/gcc/Gcov.html>`_ is a test coverage program
used together with the GCC compiler to analyze and create test coverage reports
for your programs, helping you create more efficient, faster-running code and
discover untested code paths.
In Zephyr, gcov collects coverage profiling data in RAM (and not to a file
system) while your application is running. Support for gcov collection and
reporting is limited by available RAM size and so is currently enabled only
for QEMU emulation of embedded targets.
The purpose of this document is to detail the steps involved in the generation of
coverage reports for embedded targets. The default gcov behavior will continue to
remain unchanged for targets which use POSIX_ARCH.
Details
*******
There are two parts to enabling this feature: the first is to enable coverage for
the device, and the second is to enable it in the test application. As explained
earlier, code coverage with gcov is a function of the RAM available. Therefore,
ensure that the device has enough RAM when enabling coverage for it. For example,
a small device like frdm_k64f can run a simple test application, but more complex
test cases which consume more RAM will crash when coverage is enabled.
To enable the device for coverage, select :option:`CONFIG_HAS_COVERAGE_SUPPORT`
in the Kconfig.board file.
To report the coverage for the particular test application set :option:`CONFIG_COVERAGE`.
Steps to generate code coverage reports
***************************************
1. Build the code with CONFIG_COVERAGE=y::
$ cmake -DBOARD=mps2_an385 -DCONFIG_COVERAGE=y ..
#. Store the build and run output on to a log file::
$ make run > log.log
#. Generate the gcov gcda files from the log file that was saved::
$ python3 scripts/gen_gcov_files.py -i log.log
#. Find the gcov binary placed in the SDK::
$ find -iregex ".*gcov"
#. Run gcovr to get the reports::
$ gcovr -r . --html -o gcov_report/coverage.html --html-details --gcov-executable <gcov_path_in_SDK>
| 36.322034 | 105 | 0.745684 |
79bf3dfb8a327e8330d6e6355680a23c7a98004d | 1,337 | rst | reStructuredText | drivers/staging/media/av7110/video-try-command.rst | jainsakshi2395/linux | 7ccb860232bb83fb60cd6bcf5aaf0c008d903acb | [
"Linux-OpenIB"
] | 44 | 2022-03-16T08:32:31.000Z | 2022-03-31T16:02:35.000Z | drivers/staging/media/av7110/video-try-command.rst | jainsakshi2395/linux | 7ccb860232bb83fb60cd6bcf5aaf0c008d903acb | [
"Linux-OpenIB"
] | 13 | 2021-07-10T04:36:17.000Z | 2022-03-03T10:50:05.000Z | drivers/staging/media/av7110/video-try-command.rst | jainsakshi2395/linux | 7ccb860232bb83fb60cd6bcf5aaf0c008d903acb | [
"Linux-OpenIB"
] | 18 | 2022-03-19T04:41:04.000Z | 2022-03-31T03:32:12.000Z | .. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later
.. c:namespace:: DTV.video
.. _VIDEO_TRY_COMMAND:
=================
VIDEO_TRY_COMMAND
=================
Name
----
VIDEO_TRY_COMMAND
.. attention:: This ioctl is deprecated.
Synopsis
--------
.. c:macro:: VIDEO_TRY_COMMAND
``int ioctl(int fd, VIDEO_TRY_COMMAND, struct video_command *cmd)``
Arguments
---------
.. flat-table::
:header-rows: 0
:stub-columns: 0
- .. row 1
- int fd
- File descriptor returned by a previous call to open().
- .. row 2
- int request
- Equals VIDEO_TRY_COMMAND for this command.
- .. row 3
- struct video_command \*cmd
- Try a decoder command.
Description
-----------
This ioctl is obsolete. Do not use in new drivers. For V4L2 decoders
this ioctl has been replaced by the
:ref:`VIDIOC_TRY_DECODER_CMD <VIDIOC_DECODER_CMD>` ioctl.
This ioctl tries a decoder command. The ``video_command`` struct is a
subset of the ``v4l2_decoder_cmd`` struct, so refer to the
:ref:`VIDIOC_TRY_DECODER_CMD <VIDIOC_DECODER_CMD>` documentation
for more information.
Return Value
------------
On success 0 is returned, on error -1 and the ``errno`` variable is set
appropriately. The generic error codes are described at the
:ref:`Generic Error Codes <gen-errors>` chapter.
| 19.955224 | 71 | 0.66193 |
d8cbe422831c77795a3cc2c8e69c849ad18e7aa6 | 1,328 | rst | reStructuredText | atomsci/ddm/docs/source/guide/tests.rst | vgutta/AMPL | 46759aa84fd6acfc14facad0e14cb05a43d2e309 | [
"MIT"
] | 77 | 2019-11-17T01:15:36.000Z | 2021-10-19T07:51:03.000Z | atomsci/ddm/docs/source/guide/tests.rst | vgutta/AMPL | 46759aa84fd6acfc14facad0e14cb05a43d2e309 | [
"MIT"
] | 39 | 2019-12-16T22:21:54.000Z | 2021-09-30T16:31:12.000Z | atomsci/ddm/docs/source/guide/tests.rst | vgutta/AMPL | 46759aa84fd6acfc14facad0e14cb05a43d2e309 | [
"MIT"
] | 41 | 2019-11-24T03:40:32.000Z | 2021-08-17T22:06:07.000Z | .. _tests:
Tests
=====
**AMPL** includes a suite of software tests. This section explains how to run a simple test that is fast to run. The Python test fits a random forest model using Mordred descriptors on a set of compounds from `Delaney, et al.` with solubility data. A molecular scaffold-based split is used to create the training and test sets. In addition, an external holdout set is used to demonstrate how to make predictions on new compounds.
To run the `Delaney` Python script that curates a dataset, fits a model, and makes predictions, run the following commands:
::
conda activate atomsci
cd atomsci/ddm/test/integrative/delaney_RF
pytest
.. note::
This test generally takes a few minutes on a modern system
The important files for this test are listed below:
* `test_delaney_RF.py`: This script loads and curates the dataset, generates a model pipeline object, and fits a model. The model is reloaded from the filesystem and then used to predict solubilities for a new dataset.
* `config_delaney_fit_RF.json`: Basic parameter file for fitting
* `config_delaney_predict_RF.json`: Basic parameter file for predicting
More example and test information
---------------------------------
More details on examples and tests can be found in :ref:`Advanced testing <advanced_testing>`.
| 47.428571 | 434 | 0.754518 |
b0c0c382dd12b6f758553729ab65c581bc286b88 | 4,793 | rst | reStructuredText | docs/source/overview.rst | KBIbiopharma/pybleau | 5cdfce603ad29af874f74f0f527adc6b4c9066e8 | [
"MIT"
] | 4 | 2020-02-27T22:38:29.000Z | 2021-05-03T05:32:11.000Z | docs/source/overview.rst | KBIbiopharma/pybleau | 5cdfce603ad29af874f74f0f527adc6b4c9066e8 | [
"MIT"
] | 85 | 2020-02-04T21:57:14.000Z | 2021-05-03T14:29:40.000Z | docs/source/overview.rst | KBIbiopharma/pybleau | 5cdfce603ad29af874f74f0f527adc6b4c9066e8 | [
"MIT"
] | 1 | 2020-02-20T00:45:09.000Z | 2020-02-20T00:45:09.000Z |
.. _package_overview:
****************
Package Overview
****************
An App for custom exploration of tabular data
---------------------------------------------
The `app` submodule of the project forms a desktop application that provides
analysts with a limited but coherent set of tools enabling the following
capabilities:
* viewing the tabular data, sorting, shuffling and filtering it,

* viewing a customizable statistical summary of each column (bottom portion
of the screenshot above),
* generating polished plots with a graphical interface accessible to anyone,


* plotting columns as line plots, bar plots, scatter plots, histograms
(binned frequencies), and heatmaps,

* viewing any number of plots at once, organizing them freely to build a
coherent story from the data.
* creating and "freezing" plots for a given portion of the data obtained by
filtering,
* displaying any number of dimensions in a scatter plot by using the color
dimension and hover tools,
* selecting rows in the data table which triggers point selection in every
scatter plots and vice-versa,

In addition to the `pybleau_app` entry point to launch the application, an
example script launching the application on some data can be found in
`scripts/explore_dataframe.py`. Users looking for more built-in customizations
and controls should review `pybleau/app/ui/dataframe_analyzer_model_view.py`.
An app? Yes, but embeddable ETS components too
----------------------------------------------
Despite its name, the `app` submodule is designed to be used as Chaco/Traits
application components, to be embedded into any ETS-based application, no matter the
industry or purpose. Its highest-level class is the `DataFrameAnalyzerView`, which can
be launched as a top-level application (in addition to being embedded in another
tool)::
>>> from pybleau.app.api import DataFrameAnalyzer, DataFrameAnalyzerView
>>> import pandas as pd
>>> data = pd.DataFrame({"A": [1, 2, 3], "B": [3, 2, 3], "C": list("abc")})
>>> app = DataFrameAnalyzerView(model=DataFrameAnalyzer(source_df=data))
>>> app.configure_traits()
In this tool, users can filter or sort data and view its summary update live as
the filtering changes. Users can also plot any number of portions of it to build an
analysis, before exporting that analysis to JSON. Since all plots are connected to a
single DataFrame, selecting data in one plot triggers a selection in the table view and
all existing plots. Additionally, the plotted data is the filtered data: changing the
filter will therefore trigger an update of all plots that haven't been set as "frozen".
Additionally developers familiar with ETS can reuse lower level components as they
see fit, such as the `DataFramePlotManager`, its view class, or the configurators and
factories for each type of plot supported.
A functional API to build interactive Plotly plots
--------------------------------------------------
[Note: since this was created, a similar but more complete project has come
out: https://plot.ly/python/plotly-express/ which is recommended over this.]
The `plotly_api` submodule is designed as an API to build plotly plots
blending matplotlib-style function names, and seaborn-style syntax (provide a
dataframe, and column names for each dimension of the plot). For example, a
colored 2D scatter plot with hover capabilities is built with::
>>> from pybleau.plotly_api.api import plotly_scatter
>>> import pandas as pd
>>> data = pd.DataFrame({"A": [1, 2, 3], "B": [3, 2, 3], "C": list("abc")})
>>> plotly_scatter(x="A", y="B", data=data, hue="C", hover=["index", "A", "B"])
The same function can also build 3D scatter plots (with coloring and
hovering)::
>>> plotly_scatter(x="e", y="c", z="d", data=data, hue="a", hover="index")

Supported types:
* 2D scatter plots (dimensions along x, y, z, hue, hover)
* 3D scatter plots (dimensions along x, y, z, hue, hover)
* bar plots (multiple dimensions along y/hue)
* histogram plots (dimensions along x/hue)
For a complete set of examples using the `plotly_api`, refer to
`scripts/plotly_api_visual_tests.ipynb`.
In the long run, the purpose is to provide a functional and unified way to
build interactive plots using different backends without knowing each
package's internals, in particular plotly in the notebook and Chaco in ETS
desktop apps. So you should expect a `chaco_api` subpackage at some point to
provide a functional entry point to Chaco.
| 46.533981 | 87 | 0.733361 |
3d271e4b72a3ee15f92e6b3e37b185348dc97a65 | 248 | rst | reStructuredText | docs/source2/generated/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.predict.rst | GreatWei/pythonStates | c4a9b326bfa312e2ae44a70f4dfaaf91f2d47a37 | [
"BSD-3-Clause"
] | 76 | 2019-12-28T08:37:10.000Z | 2022-03-29T02:19:41.000Z | docs/source2/generated/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.predict.rst | cluterdidiw/statsmodels | 543037fa5768be773a3ba31fba06e16a9edea46a | [
"BSD-3-Clause"
] | 11 | 2015-07-22T22:11:59.000Z | 2020-10-09T08:02:15.000Z | docs/source2/generated/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.predict.rst | cluterdidiw/statsmodels | 543037fa5768be773a3ba31fba06e16a9edea46a | [
"BSD-3-Clause"
] | 35 | 2020-02-04T14:46:25.000Z | 2022-03-24T03:56:17.000Z | :orphan:
statsmodels.tsa.statespace.structural.UnobservedComponents.predict
==================================================================
.. currentmodule:: statsmodels.tsa.statespace.structural
.. automethod:: UnobservedComponents.predict
| 27.555556 | 66 | 0.608871 |
fcc45fd8a836a19ae8bbf6f0179dcf8b3c871094 | 73 | rst | reStructuredText | CHANGES.rst | regen100/browsercookiejar | aef07807132cbb1ef1a78b6070f3cec9aae77a05 | [
"MIT"
] | 3 | 2015-04-18T07:22:37.000Z | 2017-05-20T01:46:53.000Z | CHANGES.rst | regen100/browsercookiejar | aef07807132cbb1ef1a78b6070f3cec9aae77a05 | [
"MIT"
] | 1 | 2016-11-21T21:42:31.000Z | 2016-11-21T21:42:31.000Z | CHANGES.rst | regen100/browsercookiejar | aef07807132cbb1ef1a78b6070f3cec9aae77a05 | [
"MIT"
] | null | null | null | Changelog
=========
0.1 (2014-08-30)
----------------
- Initial release
| 10.428571 | 17 | 0.452055 |
a7e3f7b96b2af1d012c189da6d7c90f324a14023 | 76 | rst | reStructuredText | doc/source/stdlib/detail/function_annotation-static_let-StaticLetMacro.rst | AndreiPotapov/daScript | 71691375ac6a62bf114adb7f620563c05f0a9c6d | [
"BSD-3-Clause"
] | null | null | null | doc/source/stdlib/detail/function_annotation-static_let-StaticLetMacro.rst | AndreiPotapov/daScript | 71691375ac6a62bf114adb7f620563c05f0a9c6d | [
"BSD-3-Clause"
] | null | null | null | doc/source/stdlib/detail/function_annotation-static_let-StaticLetMacro.rst | AndreiPotapov/daScript | 71691375ac6a62bf114adb7f620563c05f0a9c6d | [
"BSD-3-Clause"
] | null | null | null | This macro implements the `static_let` and `static_let_finalize` functions.
| 38 | 75 | 0.828947 |
14e5282307f5bad787969e96d5162f31a7c612e8 | 395 | rst | reStructuredText | src/doc_source/configuration/index.rst | proatria/sftpplus-docs | 6676e79a10cb1016faa2f823d6179d58b2b8b179 | [
"Apache-2.0"
] | null | null | null | src/doc_source/configuration/index.rst | proatria/sftpplus-docs | 6676e79a10cb1016faa2f823d6179d58b2b8b179 | [
"Apache-2.0"
] | 2 | 2021-11-15T20:01:20.000Z | 2021-11-18T10:28:00.000Z | src/doc_source/configuration/index.rst | proatria/sftpplus-docs | 6676e79a10cb1016faa2f823d6179d58b2b8b179 | [
"Apache-2.0"
] | 1 | 2019-10-29T18:44:06.000Z | 2019-10-29T18:44:06.000Z | General Configuration
=====================
This section includes the reference documentation for general configuration
options used by both client-side and server-side transfers.
.. toctree::
:maxdepth: 1
general
server
local-manager
configuration-file
matching-expression
monitor-service
resources
email-client
analytics
lets-encrypt
sqlite
| 18.809524 | 75 | 0.688608 |
e0fe4e86816890a320edead1ab629abf38c78b07 | 3,209 | rst | reStructuredText | aspnet/hosting/directory-structure.rst | alingarnwelay-thesis/Docs | bf22fbe17ef03f9ccef7facd9462a37a04b73d47 | [
"CC-BY-4.0",
"MIT"
] | 13 | 2019-02-14T19:48:34.000Z | 2021-12-24T13:38:23.000Z | aspnet/hosting/directory-structure.rst | sikandaramla/Docs | 7b104bf5450a13857b586da7111c4eaaef2b2756 | [
"Apache-2.0"
] | null | null | null | aspnet/hosting/directory-structure.rst | sikandaramla/Docs | 7b104bf5450a13857b586da7111c4eaaef2b2756 | [
"Apache-2.0"
] | 3 | 2017-12-29T18:10:16.000Z | 2018-07-24T18:41:45.000Z | .. _directory-structure:
Directory Structure
===================
By `Luke Latham`_
In ASP.NET Core, the application directory, *publish*, is composed of application files, config files, static assets, packages, and the runtime (for self-contained apps). This is the same directory structure as in previous versions of ASP.NET, where the entire application lives inside the web root directory.
+----------------+------------------------------------------------+
| App Type | Directory Structure |
+================+================================================+
| Portable | - publish* |
| | |
| | - logs* (if included in publishOptions) |
| | - refs* |
| | - runtimes* |
| | - Views* (if included in publishOptions) |
| | - wwwroot* (if included in publishOptions) |
| | - .dll files |
| | - myapp.deps.json |
| | - myapp.dll |
| | - myapp.pdb |
| | - myapp.runtimeconfig.json |
| | - web.config (if included in publishOptions) |
+----------------+------------------------------------------------+
| Self-contained | - publish* |
| | |
| | - logs* (if included in publishOptions) |
| | - refs* |
| | - Views* (if included in publishOptions) |
| | - wwwroot* (if included in publishOptions) |
| | - .dll files |
| | - myapp.deps.json |
| | - myapp.exe |
| | - myapp.pdb |
| | - myapp.runtimeconfig.json |
| | - web.config (if included in publishOptions) |
+----------------+------------------------------------------------+
\* Indicates a directory
The contents of the *publish* directory represent the *content root path*, also called the *application base path*, of the deployment. Whatever name is given to the *publish* directory in the deployment, its location serves as the server's physical path to the hosted application. The *wwwroot* directory, if present, only contains static assets. The *logs* directory may be included in the deployment by creating it in the project and adding it to **publishOptions** of *project.json* or by physically creating the directory on the server.
The deployment directory requires Read/Execute permissions, while the *logs* directory requires Read/Write permissions. Additional directories where assets will be written require Read/Write permissions.
| 68.276596 | 540 | 0.426924 |
3256117ebf4853bec3643e381b34ce1d0727fea5 | 534 | rst | reStructuredText | docs/index.rst | albertzhu01/nequip | 63ba41185e7852ebb6f68983ec30d1f569e43271 | [
"MIT"
] | 1 | 2022-03-13T10:17:53.000Z | 2022-03-13T10:17:53.000Z | docs/index.rst | leoil/nequip | 83b888797025c94b9963a508bc213a7c98da5bcb | [
"MIT"
] | null | null | null | docs/index.rst | leoil/nequip | 83b888797025c94b9963a508bc213a7c98da5bcb | [
"MIT"
] | null | null | null | .. NequIP documentation master file, created by
sphinx-quickstart on Fri May 28 16:23:04 2021.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
NequIP
======
NequIP is an open-source package for creating, training, and using E(3)-equivariant machine learning interatomic potentials.
.. toctree::
:maxdepth: 3
:caption: Contents:
guide/guide
api/nequip
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 20.538462 | 124 | 0.691011 |
d46bcdbf0b9b3c1f1e31031837968d4dd176ca9a | 7,984 | rst | reStructuredText | doc/source/admin/identity-security-compliance.rst | chetanzope/keystone | 13007a80d20521e75ad8803d6f1d11b2c50a1f69 | [
"Apache-2.0"
] | null | null | null | doc/source/admin/identity-security-compliance.rst | chetanzope/keystone | 13007a80d20521e75ad8803d6f1d11b2c50a1f69 | [
"Apache-2.0"
] | null | null | null | doc/source/admin/identity-security-compliance.rst | chetanzope/keystone | 13007a80d20521e75ad8803d6f1d11b2c50a1f69 | [
"Apache-2.0"
] | null | null | null | .. _identity_security_compliance:
===============================
Security compliance and PCI-DSS
===============================
As of the Newton release, the Identity service contains additional security
compliance features, specifically to satisfy Payment Card Industry -
Data Security Standard (PCI-DSS) v3.1 requirements. See
`Security Hardening PCI-DSS`_ for more information on PCI-DSS.
Security compliance features are disabled by default and most of the features
only apply to the SQL backend for the identity driver. Other identity backends,
such as LDAP, should implement their own security controls.
Enable these features by changing the configuration settings under the
``[security_compliance]`` section in ``keystone.conf``.
Setting an account lockout threshold
------------------------------------
The account lockout feature limits the number of incorrect password attempts.
If a user fails to authenticate after the maximum number of attempts, the
service disables the user. Users can be re-enabled by explicitly setting the
enable user attribute with the update user `v3`_ API call.
You set the maximum number of failed authentication attempts by setting
the ``lockout_failure_attempts``:
.. code-block:: ini
[security_compliance]
lockout_failure_attempts = 6
You set the amount of time (in seconds) a user is locked out by setting
the ``lockout_duration``:
.. code-block:: ini
[security_compliance]
lockout_duration = 1800
If you do not set the ``lockout_duration``, users will be locked out
indefinitely until the user is explicitly enabled via the API.
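The interaction of these two options can be sketched in plain Python. This is a toy model of the policy, not keystone's implementation; the ``Account`` class and its attributes are invented for illustration:

```python
from dataclasses import dataclass

# Toy model of the lockout policy described above; NOT keystone's actual
# implementation. Class and attribute names are invented for illustration.
LOCKOUT_FAILURE_ATTEMPTS = 6
LOCKOUT_DURATION = 1800  # seconds

@dataclass
class Account:
    failed_attempts: int = 0
    locked_until: float = 0.0  # epoch seconds; 0.0 means never locked

    def record_failure(self, now: float) -> None:
        # Each failed authentication counts toward the lockout threshold.
        self.failed_attempts += 1
        if self.failed_attempts >= LOCKOUT_FAILURE_ATTEMPTS:
            self.locked_until = now + LOCKOUT_DURATION

    def is_locked(self, now: float) -> bool:
        # Without a lockout_duration, keystone keeps the user disabled
        # until re-enabled via the API; here the lock simply expires.
        return now < self.locked_until

acct = Account()
for _ in range(6):
    acct.record_failure(now=1000.0)

print(acct.is_locked(now=1000.0))    # True: locked after the 6th failure
print(acct.is_locked(now=2800.0))    # False: 1800 seconds have elapsed
```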
You can ensure specific users are never locked out. This can be useful for
service accounts or administrative users. You can do this by setting
``ignore_lockout_failure_attempts`` to ``true`` via a user update API
(``PATCH /v3/users/{user_id}``):
.. code-block:: json
{
"user": {
"options": {
"ignore_lockout_failure_attempts": true
}
}
}
Disabling inactive users
------------------------
PCI-DSS 8.1.4 requires that inactive user accounts be removed or disabled
within 90 days. You can achieve this by setting the
``disable_user_account_days_inactive``:
.. code-block:: ini
[security_compliance]
disable_user_account_days_inactive = 90
The above example means that users that have not authenticated (inactive) for
the past 90 days are automatically disabled. Users can be re-enabled by
explicitly setting the enable user attribute via the API.
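The rule amounts to a small date calculation, sketched below with the standard library (a hypothetical helper, not keystone code):

```python
from datetime import datetime, timedelta, timezone

# Sketch of the inactivity rule above (hypothetical helper, not keystone
# code): a user is disabled once more than 90 days have passed since
# their last authentication.
DISABLE_USER_ACCOUNT_DAYS_INACTIVE = 90

def should_disable(last_active_at: datetime, now: datetime) -> bool:
    return now - last_active_at > timedelta(days=DISABLE_USER_ACCOUNT_DAYS_INACTIVE)

now = datetime(2023, 6, 1, tzinfo=timezone.utc)
print(should_disable(datetime(2023, 1, 1, tzinfo=timezone.utc), now))  # True: 151 days inactive
print(should_disable(datetime(2023, 5, 1, tzinfo=timezone.utc), now))  # False: 31 days inactive
```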
Force users to change password upon first use
---------------------------------------------
PCI-DSS 8.2.6 requires users to change their password for first time use and
upon an administrative password reset. Within the identity `user API`_,
`create user` and `update user` are considered administrative password
changes, whereas `change password for user` is a self-service password
change. Once this feature is enabled, new users, and users that have had their
password reset, will be required to change their password upon next
authentication (first use), before being able to access any services.
Prior to enabling this feature, you may want to exempt some users that you do
not wish to be required to change their password. You can mark a user as
exempt by setting the user options attribute
``ignore_change_password_upon_first_use`` to ``true`` via a user update API
(``PATCH /v3/users/{user_id}``):
.. code-block:: json
{
"user": {
"options": {
"ignore_change_password_upon_first_use": true
}
}
}
.. WARNING::
Failure to mark service users as exempt from this requirement will result
in your service account passwords becoming expired after being reset.
When ready, you can configure it so that users are forced to change their
password upon first use by setting ``change_password_upon_first_use``:
.. code-block:: ini
[security_compliance]
change_password_upon_first_use = True
.. _`user API`: http://developer.openstack.org/api-ref/identity/v3/index.html#users
Configuring password expiration
-------------------------------
Passwords can be configured to expire within a certain number of days by
setting the ``password_expires_days``:
.. code-block:: ini
[security_compliance]
password_expires_days = 90
Once set, any new password changes have an expiration date based on the
date/time of the password change plus the number of days defined here. Existing
passwords will not be impacted. If you want existing passwords to have an
expiration date, you would need to run a SQL script against the password table
in the database to update the expires_at column.
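How the expiration date follows from this setting can be illustrated with the standard library (the helper below is a sketch only; keystone computes ``expires_at`` internally):

```python
from datetime import datetime, timedelta, timezone

# Illustration of how the expiration date follows from the setting above
# (hypothetical helper; keystone computes expires_at internally): the
# expiry is the date/time of the password change plus 90 days.
PASSWORD_EXPIRES_DAYS = 90

def expires_at(changed_at: datetime) -> datetime:
    return changed_at + timedelta(days=PASSWORD_EXPIRES_DAYS)

changed = datetime(2023, 1, 1, tzinfo=timezone.utc)
print(expires_at(changed).date())  # 2023-04-01
```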
If there exists a user whose password you do not want to expire, keystone
supports setting that user's option ``ignore_password_expiry`` to ``true``
via user update API (``PATCH /v3/users/{user_id}``):
.. code-block:: json
{
"user": {
"options": {
"ignore_password_expiry": true
}
}
}
Configuring password strength requirements
------------------------------------------
You can set password strength requirements, such as requiring numbers in
passwords or setting a minimum password length, by adding a regular
expression to the ``password_regex`` setting:
.. code-block:: ini
[security_compliance]
password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$
The above example is a regular expression that requires a password to have:
* One (1) letter
* One (1) digit
* Minimum length of seven (7) characters
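You can exercise this example regex against candidate passwords with Python's standard ``re`` module; this is only an illustration of the pattern's behavior, not keystone code:

```python
import re

# The example pattern from above: two lookaheads require at least one
# digit and one letter, and .{7,} enforces a minimum length of seven.
PASSWORD_REGEX = r"^(?=.*\d)(?=.*[a-zA-Z]).{7,}$"

def is_strong(password: str) -> bool:
    return re.match(PASSWORD_REGEX, password) is not None

print(is_strong("passw0rd"))     # True: a letter, a digit, 8 characters
print(is_strong("short1"))       # False: only 6 characters
print(is_strong("lettersonly"))  # False: no digit
print(is_strong("12345678"))     # False: no letter
```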
If you do set the ``password_regex``, you should provide text that
describes your password strength requirements. You can do this by setting the
``password_regex_description``:
.. code-block:: ini
[security_compliance]
password_regex_description = Passwords must contain at least 1 letter, 1
digit, and be a minimum length of 7
characters.
It is imperative that the ``password_regex_description`` matches the actual
regex. If the ``password_regex`` and the ``password_regex_description`` do
not match, it will cause user experience to suffer since this description
will be returned to users to explain why their requested password was
insufficient.
.. note::
You must ensure the ``password_regex_description`` accurately and
completely describes the ``password_regex``. If the two options are out of
sync, the help text could inaccurately describe the password requirements
being applied to the password. This would lead to a poor user experience.
Requiring a unique password history
-----------------------------------
The password history requirement controls the number of passwords for a user
that must be unique before an old password can be reused. You can enforce this
by setting the ``unique_last_password_count``:
.. code-block:: ini
[security_compliance]
   unique_last_password_count = 5
The above example does not allow a user to create a new password that is the
same as any of their last four previous passwords.
Similarly, you can set the number of days that a password must be used before
the user can change it by setting the ``minimum_password_age``:
.. code-block:: ini
[security_compliance]
minimum_password_age = 1
In the above example, once a user changes their password, they would not be
able to change it again for one day. This prevents users from changing their
passwords immediately in order to wipe out their password history and reuse an
old password.
.. note::
When you set ``password_expires_days``, the value for the
``minimum_password_age`` should be less than the ``password_expires_days``.
Otherwise, users would not be able to change their passwords before they
expire.
.. _Security Hardening PCI-DSS: https://specs.openstack.org/openstack/keystone-specs/specs/keystone/newton/pci-dss.html
.. _v3: https://developer.openstack.org/api-ref/identity/v3/index.html#update-user
| 34.713043 | 119 | 0.726954 |
a43e28461bcd071fba17d7a092a3f068cc6d8a93 | 3,270 | rst | reStructuredText | docs/knowledge-base.rst | AGregorc/resolwe-bio-py | 62304e5d4c54c917575421701c6977dc63fc3a8f | [
"Apache-2.0"
] | 4 | 2016-09-28T16:00:05.000Z | 2018-08-16T16:14:10.000Z | docs/knowledge-base.rst | AGregorc/resolwe-bio-py | 62304e5d4c54c917575421701c6977dc63fc3a8f | [
"Apache-2.0"
] | 229 | 2016-03-28T19:41:00.000Z | 2022-03-16T15:02:15.000Z | docs/knowledge-base.rst | AGregorc/resolwe-bio-py | 62304e5d4c54c917575421701c6977dc63fc3a8f | [
"Apache-2.0"
] | 18 | 2016-03-10T16:11:57.000Z | 2021-06-01T10:01:49.000Z | .. _knowleedge-base:
==============
Knowledge base
==============
Genialis Knowledge base (KB) is a collection of "features" (genes,
transcripts, ...) and "mappings" between these features. It comes in
very handy when performing various tasks with genomic features, e.g.:
- find all aliases of gene ``BRCA2``
- finding all genes of type ``protein_coding``
- find all transcripts of gene ``FHIT``
- converting ``gene_id`` to ``gene_symbol``
- ...
Feature
=======
``Feature`` object represents a genomic feature: a gene, a transcript, etc.
You can query ``Feature`` objects by ``feature`` endpoint, similarly like
``Data``, ``Sample`` or any other ReSDK resource::
feature = res.feature.get(feature_id="BRCA2")
To examine all attributes of a ``Feature``, see the :ref:`reference`.
Here we list a few of the most commonly used ones::
# Get the feature:
feature = res.feature.get(feature_id="BRCA2")
# Database where this feature is defined, e.g. ENSEMBL, UCSC, NCBI, ...
feature.source
# Unique identifier of a feature
feature.feature_id
# Feature species
feature.species
# Feature type, e.g. gene, transcript, exon, ...
feature.type
# Feature name
feature.name
# List of feature aliases
feature.aliases
The real power is in the filter capabilities. Here are some examples::
# Count the number of human "protein-coding" transcripts in the ENSEMBL database
res.feature.filter(
species="Homo sapiens",
type="transcript",
subtype="protein-coding",
source="ENSEMBL",
).count()
# Convert all gene IDs in the list ``gene_ids`` to gene symbols
gene_ids = ["ENSG00000139618", "ENSG00000189283"]
genes = res.feature.filter(
feature_id__in=gene_ids,
type="gene",
species="Homo sapiens",
)
mapping = {g.feature_id: g.name for g in genes}
gene_symbols = [mapping[gene_id] for gene_id in gene_ids]
.. warning::
It might look tempting to simplify the last example with::
gene_symbols = [g.name for g in genes]
Don't do this. The order of entries in ``genes`` can be arbitrary,
so the resulting list ``gene_symbols`` may not be ordered in the
same way as ``gene_ids``.
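This pitfall is not specific to ReSDK; a minimal, self-contained sketch with plain Python objects (the ``Gene`` class and the result order below are made up for illustration) shows why the mapping step matters:

```python
class Gene:
    """Stand-in for a ReSDK feature with an ID and a symbol."""
    def __init__(self, feature_id, name):
        self.feature_id = feature_id
        self.name = name

gene_ids = ["ENSG00000139618", "ENSG00000189283"]
# Simulate a query result returned in a different order than requested.
genes = [Gene("ENSG00000189283", "FHIT"), Gene("ENSG00000139618", "BRCA2")]

# Naive: relies on the (arbitrary) result order -- symbols come out reversed.
naive = [g.name for g in genes]

# Safe: build an ID -> symbol mapping, then look up in the original order.
mapping = {g.feature_id: g.name for g in genes}
gene_symbols = [mapping[gene_id] for gene_id in gene_ids]

print(naive)         # ['FHIT', 'BRCA2']
print(gene_symbols)  # ['BRCA2', 'FHIT'] -- matches gene_ids
```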
Mapping
=======
A mapping is a *connection* between two features. Features can be related
in various ways; the type of mapping is indicated by the ``relation_type``
attribute, which is one of the following options:
- ``crossdb``: Two features from different sources (databases)
that describe the same feature. Example: the connection for gene
BRCA2 between the "UniProtKB" and "UCSC" databases.
- ``ortholog``: Two features from different species that
describe orthologous genes.
- ``transcript``: Connection between a gene and its transcripts.
- ``exon``: Connection between a gene or transcript and its exons.
Again, we will only list an example and then let your imagination
fly::
# Find UniProtKB ID for gene with given ENSEMBL ID:
mapping = res.mapping.filter(
source_id="ENSG00000189283",
source_db="ENSEMBL",
target_db="UniProtKB",
source_species="Homo sapiens",
target_species="Homo sapiens",
)
uniprot_id = mapping[0].target_id
| 30.560748 | 77 | 0.675229 |
e3cc42004610f9880f37aa88a3f1e521aeee2639 | 475 | rst | reStructuredText | developers/plugins/API/index.rst | pmeerw/x64dbg-docs | 3dc0df485be716aa5e86ea0c1a53220a27224e4f | [
"MIT"
] | 902 | 2016-06-18T03:00:24.000Z | 2022-02-28T02:56:13.000Z | developers/plugins/API/index.rst | pmeerw/x64dbg-docs | 3dc0df485be716aa5e86ea0c1a53220a27224e4f | [
"MIT"
] | 83 | 2016-06-20T00:50:48.000Z | 2022-03-13T21:04:10.000Z | developers/plugins/API/index.rst | pmeerw/x64dbg-docs | 3dc0df485be716aa5e86ea0c1a53220a27224e4f | [
"MIT"
] | 35 | 2016-06-18T02:26:26.000Z | 2022-03-13T20:42:32.000Z | Functions
=========
This section contains information about the ``_plugin_``-prefixed functions exported by x64dbg.
**Contents:**
.. toctree::
:maxdepth: 0
debugpause
debugskipexceptions
logprintf
logputs
menuadd
menuaddentry
menuaddseparator
menuclear
menuentryseticon
menuentrysetchecked
menuseticon
registercallback
registercommand
unregistercallback
unregistercommand
waituntilpaused
hash | 18.269231 | 94 | 0.698947 |
9eea69f224afca1991bbcee81b2d3b0db76df01a | 1,182 | rst | reStructuredText | source/creature/giant-poisonous-snake.rst | rptb1/dnd-srd-sphinx | 0cbe362834d41021dfd368b718e8efe7591a229f | [
"OML"
] | 1 | 2021-12-11T01:08:40.000Z | 2021-12-11T01:08:40.000Z | source/creature/giant-poisonous-snake.rst | rptb1/dnd-srd-sphinx | 0cbe362834d41021dfd368b718e8efe7591a229f | [
"OML"
] | null | null | null | source/creature/giant-poisonous-snake.rst | rptb1/dnd-srd-sphinx | 0cbe362834d41021dfd368b718e8efe7591a229f | [
"OML"
] | null | null | null | Giant Poisonous Snake
---------------------
.. https://stackoverflow.com/questions/11984652/bold-italic-in-restructuredtext
.. raw:: html
<style type="text/css">
span.bolditalic {
font-weight: bold;
font-style: italic;
}
</style>
.. role:: bi
:class: bolditalic
*Medium beast, unaligned*
**Armor Class** 14
**Hit Points** 11 (2d8 + 2)
**Speed** 30 ft., swim 30 ft.
+-----------+-----------+-----------+-----------+-----------+-----------+
| STR | DEX | CON | INT | WIS | CHA |
+===========+===========+===========+===========+===========+===========+
| 10 (+0) | 18 (+4) | 13 (+1) | 2 (-4) | 10 (+0) | 3 (-4) |
+-----------+-----------+-----------+-----------+-----------+-----------+
**Skills** Perception +2
**Senses** blindsight 10 ft., passive Perception 12
**Languages** -
**Challenge** 1/4 (50 XP)
Actions
^^^^^^^
:bi:`Bite`. *Melee Weapon Attack:* +6 to hit, reach 10 ft., one target.
*Hit:* 6 (1d4 + 4) piercing damage, and the target must make a DC 11
Constitution saving throw, taking 10 (3d6) poison damage on a failed
save, or half as much damage on a successful one.
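The parenthesized values are the rounded-down dice averages; as a worked check (using the standard average of :math:`(n+1)/2` per d\ *n*):

```latex
\mathbb{E}[2\mathrm{d}8 + 2] = 2 \cdot 4.5 + 2 = 11 \quad \text{(hit points)}
\mathbb{E}[1\mathrm{d}4 + 4] = 2.5 + 4 = 6.5 \;\rightarrow\; 6 \quad \text{(bite damage)}
\mathbb{E}[3\mathrm{d}6] = 3 \cdot 3.5 = 10.5 \;\rightarrow\; 10 \quad \text{(poison damage)}
```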
| 23.176471 | 79 | 0.464467 |
3f3c95330e7934d7ad4fac680fdee9dba5ccf223 | 914 | rst | reStructuredText | docs/source/index.rst | sourcery-ai-bot/webgrid | dda75e255ebe8d95a8fb9fa397cc7d0527340d5e | [
"BSD-3-Clause"
] | 9 | 2016-10-18T14:24:36.000Z | 2020-03-07T10:37:34.000Z | docs/source/index.rst | sourcery-ai-bot/webgrid | dda75e255ebe8d95a8fb9fa397cc7d0527340d5e | [
"BSD-3-Clause"
] | 145 | 2016-03-23T16:39:47.000Z | 2022-03-15T13:42:28.000Z | docs/source/index.rst | sourcery-ai-bot/webgrid | dda75e255ebe8d95a8fb9fa397cc7d0527340d5e | [
"BSD-3-Clause"
] | 5 | 2019-10-10T17:12:13.000Z | 2021-11-17T16:11:00.000Z | Welcome to WebGrid
======================
WebGrid is a datagrid library for Flask and other Python web frameworks designed to work with
SQLAlchemy ORM entities and queries.
With a grid configured from one or more entities, WebGrid provides these features for reporting:
- Automated SQL query construction based on specified columns and query join/filter/sort options
- Renderers to various targets/formats
- HTML output paired with JS (jQuery) for dynamic features
- Excel (XLSX)
- CSV
- User-controlled data filters
- Per-column selection of filter operator and value(s)
- Generic single-entry search
- Session storage/retrieval of selected filter options, sorting, and paging
**Table of Contents**
.. toctree::
:maxdepth: 2
getting-started
grid/grid
grid/managers
grid/args-loaders
grid/types
columns/index
filters/index
renderers/index
testing/index
gotchas
| 24.052632 | 96 | 0.739606 |
8a04d7811f79e359d269b0cf524a3502ae652a9d | 131 | rst | reStructuredText | docs/source/api/convert.rst | 1081/pymapdl | cbb474f1bf4b4fddd40fc5dc6b8836d0cb7eae24 | [
"MIT"
] | null | null | null | docs/source/api/convert.rst | 1081/pymapdl | cbb474f1bf4b4fddd40fc5dc6b8836d0cb7eae24 | [
"MIT"
] | null | null | null | docs/source/api/convert.rst | 1081/pymapdl | cbb474f1bf4b4fddd40fc5dc6b8836d0cb7eae24 | [
"MIT"
] | null | null | null | .. _ref_convert_api:
****************
Convert Function
****************
.. autofunction:: ansys.mapdl.core.convert.convert_script
| 18.714286 | 57 | 0.59542 |
ae10761990f0c7e0f3fcc01389690623f398e597 | 1,124 | rst | reStructuredText | docs/index.rst | dashdeckers/CI-CD-Testing | 40efaced3cc35a17774cab00d043265e3323e921 | [
"MIT"
] | null | null | null | docs/index.rst | dashdeckers/CI-CD-Testing | 40efaced3cc35a17774cab00d043265e3323e921 | [
"MIT"
] | 9 | 2020-09-25T19:57:23.000Z | 2020-10-13T12:04:26.000Z | docs/index.rst | dashdeckers/CI-CD-Testing | 40efaced3cc35a17774cab00d043265e3323e921 | [
"MIT"
] | null | null | null | The CI/CD Testground
====================
.. toctree::
:hidden:
:maxdepth: 1
license
reference
This is a testing ground for
trying out and practicing
linting, documentation generation,
testing, type checking,
CI/CD workflows,
best practices,
project management,
and anything else along those lines.
Installation
------------
This project is currently not deployed to PyPI,
so you will need to clone the GitHub repo by
running this command in your terminal:
.. code-block:: console
$ git clone https://github.com/dashdeckers/CI-CD-Testing
Usage
-----
You can create a gif showing how
the plot of the logistic map
:math:`x_{n+1} = r x_n (1 - x_n)`
changes for varying values of :math:`r`.
Currently :math:`r` ranges between 0 and 4.5,
but command line arguments
will be implemented shortly.
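The iteration itself is short; below is a minimal, self-contained sketch of the update rule (the function name and defaults are made up here and are independent of the actual implementation in ``src/chaos/main.py``):

```python
def logistic_map(r, x0=0.5, n=10):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# For r < 1 the value decays toward 0; for r = 2 it converges to the
# fixed point 1 - 1/r = 0.5; for r near 4 the trajectory is chaotic.
print(logistic_map(0.5, n=5))
print(logistic_map(2.0, x0=0.3, n=20)[-1])  # ~0.5
```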
To run the code,
execute the following in your terminal
(make sure you use python3.x):
.. code-block:: console
$ python src/chaos/main.py
You can then view the file src/logistic_map.gif
Logging
-------
You can view the detailed log-tree by running
.. code-block:: console
$ eliot-tree logfile.log
| 17.5625 | 59 | 0.705516 |
0ef5ef516207b724aea9fae60d838609a6b708dc | 15,542 | rst | reStructuredText | doc/tutorials/video/background_subtraction/background_subtraction.rst | bonanza-market/opencv | 6550cb2a90b2b074234a3b2ae354d01a1e55fd2b | [
"BSD-3-Clause"
] | 144 | 2015-01-15T03:38:44.000Z | 2022-02-17T09:07:52.000Z | doc/tutorials/video/background_subtraction/background_subtraction.rst | bonanza-market/opencv | 6550cb2a90b2b074234a3b2ae354d01a1e55fd2b | [
"BSD-3-Clause"
] | 9 | 2015-09-09T06:51:46.000Z | 2020-06-17T14:10:10.000Z | doc/tutorials/video/background_subtraction/background_subtraction.rst | bonanza-market/opencv | 6550cb2a90b2b074234a3b2ae354d01a1e55fd2b | [
"BSD-3-Clause"
] | 58 | 2015-01-14T23:43:49.000Z | 2021-11-15T05:19:08.000Z | .. _Background_Subtraction:
How to Use Background Subtraction Methods
*****************************************
* Background subtraction (BS) is a common and widely used technique for generating a foreground mask (namely, a binary image containing the pixels belonging to moving objects in the scene) by using static cameras.
* As the name suggests, BS calculates the foreground mask performing a subtraction between the current frame and a background model containing the static part of the scene or, more generally, everything that can be considered background given the characteristics of the observed scene.
.. image:: images/Background_Subtraction_Tutorial_Scheme.png
:alt: Background Subtraction - General Scheme
:align: center
* Background modeling consists of two main steps:
#. Background Initialization;
#. Background Update.
In the first step, an initial model of the background is computed, while in the second step that model is updated in order to adapt to possible changes in the scene.
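As a deliberately simplified, self-contained illustration of these two steps (a single per-pixel running average over grayscale values; the real MOG/MOG2 algorithms used below model each pixel with a mixture of Gaussians instead):

```python
def init_background(first_frame):
    # Background initialization: start the model from the first frame.
    return [float(p) for p in first_frame]

def update_and_segment(background, frame, alpha=0.05, threshold=30):
    # Pixels far from the model are foreground (mask value 255) ...
    mask = [255 if abs(p - b) > threshold else 0
            for p, b in zip(frame, background)]
    # ... then background update: the model drifts toward the current frame.
    for i, p in enumerate(frame):
        background[i] = (1 - alpha) * background[i] + alpha * p
    return mask

bg = init_background([100, 100, 100, 100])
print(update_and_segment(bg, [100, 101, 200, 99]))  # [0, 0, 255, 0]
```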
* In this tutorial we will learn how to perform BS by using OpenCV. As input, we will use data coming from the publicly available data set `Background Models Challenge (BMC) <http://bmc.univ-bpclermont.fr/>`_ .
Goals
======
In this tutorial you will learn how to:
#. Read data from videos by using :video_capture:`VideoCapture <>` or image sequences by using :imread:`imread <>`;
#. Create and update the background model by using :background_subtractor:`BackgroundSubtractor <>` class;
#. Get and show the foreground mask by using :imshow:`imshow <>`;
#. Save the output by using :imwrite:`imwrite <>` to quantitatively evaluate the results.
Code
=====
In the following you can find the source code. We will let the user choose to process either a video file or a sequence of images.
* Two different methods are used to generate two foreground masks:
#. :background_subtractor_mog:`MOG <>`
#. :background_subtractor_mog_two:`MOG2 <>`
The results as well as the input data are shown on the screen.
.. code-block:: cpp
//opencv
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>
//C
#include <stdio.h>
//C++
#include <iostream>
#include <sstream>
using namespace cv;
using namespace std;
//global variables
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
Mat fgMaskMOG2; //fg mask generated by MOG2 method
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
int keyboard;
//function declarations
void help();
void processVideo(char* videoFilename);
void processImages(char* firstFrameFilename);
void help()
{
cout
<< "--------------------------------------------------------------------------" << endl
<< "This program shows how to use background subtraction methods provided by " << endl
<< " OpenCV. You can process both videos (-vid) and images (-img)." << endl
<< endl
<< "Usage:" << endl
<< "./bs {-vid <video filename>|-img <image filename>}" << endl
<< "for example: ./bs -vid video.avi" << endl
<< "or: ./bs -img /data/images/1.png" << endl
<< "--------------------------------------------------------------------------" << endl
<< endl;
}
int main(int argc, char* argv[])
{
//print help information
help();
//check for the input parameter correctness
if(argc != 3) {
cerr <<"Incorrect input list" << endl;
cerr <<"exiting..." << endl;
return EXIT_FAILURE;
}
//create GUI windows
namedWindow("Frame");
namedWindow("FG Mask MOG");
namedWindow("FG Mask MOG 2");
//create Background Subtractor objects
pMOG = createBackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
if(strcmp(argv[1], "-vid") == 0) {
//input data coming from a video
processVideo(argv[2]);
}
else if(strcmp(argv[1], "-img") == 0) {
//input data coming from a sequence of images
processImages(argv[2]);
}
else {
//error in reading input parameters
cerr <<"Please, check the input parameters." << endl;
cerr <<"Exiting..." << endl;
return EXIT_FAILURE;
}
//destroy GUI windows
destroyAllWindows();
return EXIT_SUCCESS;
}
void processVideo(char* videoFilename) {
//create the capture object
VideoCapture capture(videoFilename);
if(!capture.isOpened()){
//error in opening the video input
cerr << "Unable to open video file: " << videoFilename << endl;
exit(EXIT_FAILURE);
}
//read input data. ESC or 'q' for quitting
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
//read the current frame
if(!capture.read(frame)) {
cerr << "Unable to read next frame." << endl;
cerr << "Exiting..." << endl;
exit(EXIT_FAILURE);
}
//update the background model
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
//get the frame number and write it on the current frame
stringstream ss;
rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
cv::Scalar(255,255,255), -1);
ss << capture.get(CAP_PROP_POS_FRAMES);
string frameNumberString = ss.str();
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
//show the current frame and the fg masks
imshow("Frame", frame);
imshow("FG Mask MOG", fgMaskMOG);
imshow("FG Mask MOG 2", fgMaskMOG2);
//get the input from the keyboard
keyboard = waitKey( 30 );
}
//delete capture object
capture.release();
}
void processImages(char* firstFrameFilename) {
//read the first file of the sequence
frame = imread(firstFrameFilename);
if(!frame.data){
//error in opening the first image
cerr << "Unable to open first image frame: " << firstFrameFilename << endl;
exit(EXIT_FAILURE);
}
//current image filename
string fn(firstFrameFilename);
//read input data. ESC or 'q' for quitting
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
//update the background model
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
//get the frame number and write it on the current frame
size_t index = fn.find_last_of("/");
if(index == string::npos) {
index = fn.find_last_of("\\");
}
size_t index2 = fn.find_last_of(".");
string prefix = fn.substr(0,index+1);
string suffix = fn.substr(index2);
string frameNumberString = fn.substr(index+1, index2-index-1);
istringstream iss(frameNumberString);
int frameNumber = 0;
iss >> frameNumber;
rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
cv::Scalar(255,255,255), -1);
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
//show the current frame and the fg masks
imshow("Frame", frame);
imshow("FG Mask MOG", fgMaskMOG);
imshow("FG Mask MOG 2", fgMaskMOG2);
//get the input from the keyboard
keyboard = waitKey( 30 );
//search for the next image in the sequence
ostringstream oss;
oss << (frameNumber + 1);
string nextFrameNumberString = oss.str();
string nextFrameFilename = prefix + nextFrameNumberString + suffix;
//read the next frame
frame = imread(nextFrameFilename);
if(!frame.data){
//error in opening the next image in the sequence
cerr << "Unable to open image frame: " << nextFrameFilename << endl;
exit(EXIT_FAILURE);
}
//update the path of the current frame
fn.assign(nextFrameFilename);
}
}
* The source file can be downloaded :download:`here <../../../../samples/cpp/tutorial_code/video/bg_sub.cpp>`.
Explanation
============
We discuss the main parts of the above code:
#. First, three Mat objects are allocated to store the current frame and two foreground masks, obtained by using two different BS algorithms.
.. code-block:: cpp
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
Mat fgMaskMOG2; //fg mask generated by MOG2 method
#. Two :background_subtractor:`BackgroundSubtractor <>` objects will be used to generate the foreground masks. In this example, default parameters are used, but it is also possible to declare specific parameters in the create function.
.. code-block:: cpp
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
...
//create Background Subtractor objects
pMOG = createBackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
#. The command line arguments are analysed. The user can choose between two options:
* video files (by choosing the option -vid);
* image sequences (by choosing the option -img).
.. code-block:: cpp
if(strcmp(argv[1], "-vid") == 0) {
//input data coming from a video
processVideo(argv[2]);
}
else if(strcmp(argv[1], "-img") == 0) {
//input data coming from a sequence of images
processImages(argv[2]);
}
#. Suppose you want to process a video file. The video is read until the end is reached or the user presses 'q' or 'ESC'.
.. code-block:: cpp
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
//read the current frame
if(!capture.read(frame)) {
cerr << "Unable to read next frame." << endl;
cerr << "Exiting..." << endl;
exit(EXIT_FAILURE);
}
#. Every frame is used both for calculating the foreground mask and for updating the background. If you want to change the learning rate used for updating the background model, you can pass a specific learning rate as the third parameter of the 'apply' method.
.. code-block:: cpp
//update the background model
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
#. The current frame number can be extracted from the :video_capture:`VideoCapture <>` object and stamped in the top left corner of the current frame. A white rectangle is used to highlight the black colored frame number.
.. code-block:: cpp
//get the frame number and write it on the current frame
stringstream ss;
rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
cv::Scalar(255,255,255), -1);
ss << capture.get(CAP_PROP_POS_FRAMES);
string frameNumberString = ss.str();
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
#. We are ready to show the current input frame and the results.
.. code-block:: cpp
//show the current frame and the fg masks
imshow("Frame", frame);
imshow("FG Mask MOG", fgMaskMOG);
imshow("FG Mask MOG 2", fgMaskMOG2);
#. The same operations listed above can be performed using a sequence of images as input. The processImages function is called and, instead of using a :video_capture:`VideoCapture <>` object, the images are read by using :imread:`imread <>`, after determining the correct path of the next frame to read.
.. code-block:: cpp
//read the first file of the sequence
frame = imread(firstFrameFilename);
if(!frame.data){
//error in opening the first image
cerr << "Unable to open first image frame: " << firstFrameFilename << endl;
exit(EXIT_FAILURE);
}
...
//search for the next image in the sequence
ostringstream oss;
oss << (frameNumber + 1);
string nextFrameNumberString = oss.str();
string nextFrameFilename = prefix + nextFrameNumberString + suffix;
//read the next frame
frame = imread(nextFrameFilename);
if(!frame.data){
//error in opening the next image in the sequence
cerr << "Unable to open image frame: " << nextFrameFilename << endl;
exit(EXIT_FAILURE);
}
//update the path of the current frame
fn.assign(nextFrameFilename);
Note that this example works only on image sequences in which the filename format is <n>.png, where n is the frame number (e.g., 7.png).
Results
=======
* Given the following input parameters:
.. code-block:: cpp
-vid Video_001.avi
The output of the program will look like the following:
.. image:: images/Background_Subtraction_Tutorial_Result_1.png
:alt: Background Subtraction - Video File
:align: center
* The video file Video_001.avi is part of the `Background Models Challenge (BMC) <http://bmc.univ-bpclermont.fr/>`_ data set and it can be downloaded from the following link `Video_001 <http://bmc.univ-bpclermont.fr/sites/default/files/videos/evaluation/Video_001.zip>`_ (about 32 MB).
* If you want to process a sequence of images, then the '-img' option has to be chosen:
.. code-block:: cpp
-img 111_png/input/1.png
The output of the program will look like the following:
.. image:: images/Background_Subtraction_Tutorial_Result_2.png
:alt: Background Subtraction - Image Sequence
:align: center
* The sequence of images used in this example is part of the `Background Models Challenge (BMC) <http://bmc.univ-bpclermont.fr/>`_ dataset and it can be downloaded from the following link `sequence 111 <http://bmc.univ-bpclermont.fr/sites/default/files/videos/learning/111_png.zip>`_ (about 708 MB). Please note that this example works only on sequences in which the filename format is <n>.png, where n is the frame number (e.g., 7.png).
Evaluation
==========
To quantitatively evaluate the results obtained, we need to:
* Save the output images;
* Have the ground truth images for the chosen sequence.
In order to save the output images, we can use :imwrite:`imwrite <>`. Adding the following code allows for saving the foreground masks.
.. code-block:: cpp
string imageToSave = "output_MOG_" + frameNumberString + ".png";
bool saved = imwrite(imageToSave, fgMaskMOG);
if(!saved) {
cerr << "Unable to save " << imageToSave << endl;
}
Once we have collected the result images, we can compare them with the ground truth data. There exist several publicly available sequences for background subtraction that come with ground truth data. If you decide to use the `Background Models Challenge (BMC) <http://bmc.univ-bpclermont.fr/>`_, then the result images can be used as input for the `BMC Wizard <http://bmc.univ-bpclermont.fr/?q=node/7>`_. The wizard can compute different measures about the accuracy of the results.
References
==========
* Background Models Challenge (BMC) website, `<http://bmc.univ-bpclermont.fr/>`_
* Antoine Vacavant, Thierry Chateau, Alexis Wilhelm and Laurent Lequievre. A Benchmark Dataset for Foreground/Background Extraction. In ACCV 2012, Workshop: Background Models Challenge, LNCS 7728, 291-300. November 2012, Daejeon, Korea.
| 39.953728 | 481 | 0.653777 |
87373e276f941b3e63303f1227c15366bdca8248 | 2,814 | rst | reStructuredText | section-1.rst | genomics-education-alliance/leptin-rna-seq-lesson | 93a49b53471a2c3fe9d9820e39ce48dccca1c0df | [
"RSA-MD"
] | null | null | null | section-1.rst | genomics-education-alliance/leptin-rna-seq-lesson | 93a49b53471a2c3fe9d9820e39ce48dccca1c0df | [
"RSA-MD"
] | null | null | null | section-1.rst | genomics-education-alliance/leptin-rna-seq-lesson | 93a49b53471a2c3fe9d9820e39ce48dccca1c0df | [
"RSA-MD"
] | null | null | null | .. include:: defined_substitutions.txt
Launch Lesson on CyVerse
=================================
.. admonition:: learning-objectives
- Understand how to get a CyVerse account
- Understand how to launch a lesson on CyVerse
- Understand to manage data on CyVerse
Get a CyVerse Account
----------------------
In order to run this lesson, you will need to have a |CyVerse| account. To get
your account, visit |CyVerse user portal|.
Launch a lesson on CyVerse
------------------------------
This lesson will be launched from the |CyVerse Discovery Environment|.
**For the RNA-Seq Leptin Lesson**
1. Click this quick-launch button to launch the lesson;
|CyVerse_lesson|
login to the Discovery Environment if necessary; then click **Launch Analysis**
to start the application.
2. In your notifications (bell icon, upper right of DE screen), you will see a
link to the JupyterLab session;
|Session Link|
it may take 5 minutes or more before the JupyterLab session loads in a new
browser tab.
|lab session|
3. The JupyterLab session will be active for 48 hours. After then, the lesson
will automatically terminate. Any files altered/created will be automatically
stored back to the |CyVerse Data Store|.
Data Management on CyVerse
------------------------------
Before launching a JupyterLab session, you can also elect to include specific
datasets not included in this lesson. You may also want to move data to or
from the JupyterLab session after you have started it. Learn more about Data
transfers from learning materials on |CyVerse Data Store| and |CyVerse VICE|.
Backup Notebook
------------------------------
**Notebook 5** Contains instructions for importing data into your JupyterLab
session. If you want to skip steps, or import previously worked results,
follow instructions in this notebook to import files.
----
.. Comment: Place Images Below This Line
use :width: to give a desired width for your image
use :height: to give a desired height for your image
replace the image name/location and URL if hyperlinked
.. |Session Link| image:: ./img/lesson_link_notification.png
:width: 300
:height: 100
.. |lab session| image:: ./img/lab_session.png
:width: 675
:height: 375
.. Comment: Place URLS Below This Line
# Use this example to ensure that links open in new tabs, avoiding
# forcing users to leave the document, and making it easy to update links
# In a single place in this document
.. |Substitution| raw:: html # Place this anywhere in the text you want a hyperlink
<a href="REPLACE_THIS_WITH_URL" target="blank">Replace_with_text</a>
.. |Github Repo Link| raw:: html
<a href="https://github.com/JasonJWilliamsNY/leptin-rna-seq-lesson-dev" target="blank">Github Repo Link</a>
| 30.923077 | 110 | 0.70398 |
48e4124c089a097582e919b05a7917775b7dd95c | 72 | rst | reStructuredText | Misc/NEWS.d/next/Tools-Demos/2018-06-15-23-07-50.bpo-29367.52w9Uq.rst | TinkerBoard-Android/external-python-cpython2 | 4ecd9450778988240cdfa9b2721750a433cec969 | [
"PSF-2.0"
] | 1 | 2021-08-13T12:32:14.000Z | 2021-08-13T12:32:14.000Z | Misc/NEWS.d/next/Tools-Demos/2018-06-15-23-07-50.bpo-29367.52w9Uq.rst | TinkerBoard-Android/external-python-cpython2 | 4ecd9450778988240cdfa9b2721750a433cec969 | [
"PSF-2.0"
] | null | null | null | Misc/NEWS.d/next/Tools-Demos/2018-06-15-23-07-50.bpo-29367.52w9Uq.rst | TinkerBoard-Android/external-python-cpython2 | 4ecd9450778988240cdfa9b2721750a433cec969 | [
"PSF-2.0"
] | 4 | 2021-05-19T18:02:29.000Z | 2022-02-12T16:08:48.000Z | python-gdb.py now supports also method-wrapper (wrapperobject) objects.
| 36 | 71 | 0.819444 |
bad343d1b7d1de6800f38c16c4e3fa5cfab0df0c | 1,120 | rst | reStructuredText | doc/ref/filters/filter_sort.rst | pguyot/zotonic | 5d252cb273faa7a056c28c435246fd61d9c54ec9 | [
"Apache-2.0"
] | null | null | null | doc/ref/filters/filter_sort.rst | pguyot/zotonic | 5d252cb273faa7a056c28c435246fd61d9c54ec9 | [
"Apache-2.0"
] | null | null | null | doc/ref/filters/filter_sort.rst | pguyot/zotonic | 5d252cb273faa7a056c28c435246fd61d9c54ec9 | [
"Apache-2.0"
] | null | null | null |
.. include:: meta-sort.rst
.. highlight:: django
The `sort` filter takes a list of items to sort. Items can be a ordinary list of terms, or a list of resources to be filtered based on their properties. Sort order and properties to sort on are given as arguments to the filter.
By default it sorts the list in `ascending` order, and resource lists are sorted on their `id` if no property is specified.
Example::
{{ [4, 6, 2, 3, 5]|sort }}
Sorts the list of numbers in `ascending` order.
Example::
{{ [4, 6, 2, 3, 5]|sort:'desc' }}
Sorts the list of numbers in `descending` order.
Example::
{% for r in id.o.author|sort:['title', 'desc', 'modified'] %}
do something with `r`...
{% endfor %}
This will sort on `title` in `ascending` order first, then on `modified` in `descending` order.
Any number of properties may be added, each one can have it's own sort order, or use the current one.
See :ref:`model-rsc` for a list of properties available to sort on.
Sort order may be either `ascending` or `descending` (may be abbreviated as `asc`, `+`, `desc`, `-` or as string version of those).
| 33.939394 | 227 | 0.690179 |
740a1180ac04c5aae40aa4dee6484d73f9625f4c | 1,420 | rst | reStructuredText | docs/source/index.rst | harmslab/pytc-gui | a77febcf74573be70173f03bf39b40ca71c66318 | [
"Unlicense"
] | 3 | 2017-04-27T16:30:42.000Z | 2017-06-17T15:27:03.000Z | docs/source/index.rst | harmslab/pytc-gui | a77febcf74573be70173f03bf39b40ca71c66318 | [
"Unlicense"
] | 2 | 2017-04-17T16:26:13.000Z | 2018-08-20T18:59:18.000Z | docs/source/index.rst | harmslab/pytc-gui | a77febcf74573be70173f03bf39b40ca71c66318 | [
"Unlicense"
] | 3 | 2017-03-13T23:50:55.000Z | 2017-06-15T15:37:16.000Z | ======================
pytc-gui documentation
======================
`pytc-gui` is a graphical interface for `pytc <https://github.com/harmslab/pytc>`_,
a flexible package for fitting Isothermal Titration Calorimetry data.
+ `Installation <installation.html>`_
+ `How to do fits <how_to_img.html>`_
+ `Full pytc docs <https://pytc.readthedocs.io/>`_
Start-up
========
After the pytc-gui is installed, double-click the icon for the installed program
run :code:`pytc-gui` on command line.
Workflow
========
Demo heat files are `here <https://github.com/harmslab/pytc-demos>`_.
+ Integrate raw power curves using Origin or NITPIC, creating files containing heats per shot.
+ Load heat files and `choose model describing experiment <https://pytc.readthedocs.io/en/latest/indiv_models.html>`_.
+ Choose the `fitter <https://pytc.readthedocs.io/en/latest/fitters.html>`_.
+ Link individual fit parameters to `global parameters <https://pytc.readthedocs.io/en/latest/global_models.html>`_.
+ Fit the model to the data.
+ Evaluate the `fit statistics <https://pytc.readthedocs.io/en/latest/statistics.html>`_.
+ Export the results, which will save a csv file and pdf files showing the fit and corner plot
Reference
=========
+ `Programming reference <programming.html>`_
.. toctree::
:maxdepth: 2
:caption: Contents:
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 30.869565 | 118 | 0.703521 |
800d7e79b15d9eb36522f6c62a0486eb78fdde81 | 235 | rst | reStructuredText | doc/bid_bim_fret_nbd_release/index.rst | johnbachman/bax_insertion_paper | c9a1f7280998a8c3110f65d604afaa2686cda5c3 | [
"BSD-2-Clause"
] | null | null | null | doc/bid_bim_fret_nbd_release/index.rst | johnbachman/bax_insertion_paper | c9a1f7280998a8c3110f65d604afaa2686cda5c3 | [
"BSD-2-Clause"
] | null | null | null | doc/bid_bim_fret_nbd_release/index.rst | johnbachman/bax_insertion_paper | c9a1f7280998a8c3110f65d604afaa2686cda5c3 | [
"BSD-2-Clause"
] | null | null | null | .. _bid_bim_fret_terbium_nbd:
KD2: Bid/Bim FRET, NBD, and Release Kinetics
============================================
.. toctree::
:maxdepth: 3
raw_data
outlier_removal
data_after_outliers
derivatives
nbd_error_estimates
| 15.666667 | 44 | 0.625532 |
.. _SIP:
============================
Software Installation Plan
============================
Scope
=====
.. This section shall be divided into the following paragraphs.
Identification
--------------
.. This paragraph shall contain a full identification of the system
and the software to which this document applies, including, as
applicable, identification number(s), title(s), abbreviation(s),
version number(s), and release number(s).
System overview
---------------
.. This paragraph shall briefly state the purpose of the system and
the software to which this document applies. It shall describe the
general nature of the system and software; summarize the history of
system development, operation, and maintenance; identify the
project sponsor, acquirer, user, developer, and support agencies;
identify current and planned operating sites; and list other
relevant documents.
Document overview
-----------------
.. This paragraph shall summarize the purpose and contents of this
plan and shall describe any security or privacy considerations
associated with its use.
Relationship to other plans
---------------------------
.. This paragraph shall describe the relationship, if any, of the SIP
to other project management plans.
Referenced documents
====================
.. This section shall list the number, title, revision, and date of
all documents referenced in this plan. This section shall also
identify the source for all documents not available through normal
Government stocking activities.
Installation overview
=====================
.. This section shall be divided into the following paragraphs to
provide an overview of the installation process.
Description
-----------
.. This paragraph shall provide a general description of the
installation process to provide a frame of reference for the
remainder of the document. A list of sites for software
installation, the schedule dates, and the method of installation
shall be included.
Contact point
-------------
.. This paragraph shall provide the organizational name, office
symbol/code, and telephone number of a point of contact for
questions relating to this installation.
Support materials
-----------------
.. This paragraph shall list the type, source, and quantity of support
materials needed for the installation. Included shall be items such
as magnetic tapes, disk packs, computer printer paper, and special
forms.
Training
--------
.. This paragraph shall describe the developer's plans for training
personnel who will operate and/or use the software installed at
user sites. Included shall be the delineation between general
orientation, classroom training, and "hands on" training.
Tasks
-----
.. This paragraph shall list and describe in general terms each task
involved in the software installation. Each task description shall
identify the organization that will accomplish the task, usually
either the user, computer operations, or the developer. The task
list shall include such items as:
.. 1. Providing overall planning, coordination, and preparation for
installation
2. Providing personnel for the installation team
3. Arranging lodging, transportation, and office facilities for
the installation team
4. Ensuring that all manuals applicable to the installation are
available when needed
5. Ensuring that all other prerequisites have been fulfilled prior
to the installation
6. Planning and conducting training activities
7. Providing students for the training
8. Providing computer support and technical assistance for the
installation
9. Providing for conversion from the current system
Personnel
---------
.. This paragraph shall describe the number, type, and skill level of
the personnel needed during the installation period, including the
need for multishift operation, clerical support, etc.
Security and privacy
--------------------
.. This paragraph shall contain an overview of the security and
privacy considerations associated with the system.
Site-specific information for software center operations staff
==============================================================
.. This section applies if the software will be installed in computer
center(s) or other centralized or networked software installations
for users to access via terminals or using batch inputs/outputs. If
this type of installation does not apply, this section shall
contain the words "Not applicable."
(Site name)
-----------
.. This paragraph shall identify a site or set of sites and shall be
divided into the following subparagraphs to discuss those sites.
Multiple sites may be discussed together when the information for
those sites is generally the same.
Schedule
~~~~~~~~
.. This paragraph shall present a schedule of tasks to be accomplished
during installation. It shall depict the tasks in chronological
order with beginning and ending dates of each task and supporting
narrative as necessary.
Software inventory
~~~~~~~~~~~~~~~~~~
.. This paragraph shall provide an inventory of the software needed to
support the installation. The software shall be identified by name,
identification number, version number, release number,
configuration, and security classification, as applicable. This
paragraph shall indicate whether the software is expected to be on
site or will be delivered for the installation and shall identify
any software to be used only to facilitate the installation
process.
Facilities
~~~~~~~~~~
.. This paragraph shall detail the physical facilities and
accommodations needed during the installation period. This
description shall include the following, as applicable:
.. 1. Classroom, work space, and training aids needed, specifying
hours per day, number of days, and shifts
2. Hardware that must be operational and available
3. Transportation and lodging for the installation team
Installation team
~~~~~~~~~~~~~~~~~
.. This paragraph shall describe the composition of the installation
team. Each team member's tasks shall be defined.
Installation procedures
~~~~~~~~~~~~~~~~~~~~~~~
.. This paragraph shall provide step-by-step procedures for
accomplishing the installation. References may be made to other
documents, such as operator manuals. Safety precautions, marked by
WARNING or CAUTION, shall be included where applicable. The
procedures shall include the following, as applicable:
.. 1. Installing the software
2. Checking out the software once installed
3. Initializing databases and other software with site-specific
data
4. Conversion from the current system, possibly involving running
in parallel
5. Dry run of the procedures in operator and user manuals
Data update procedures
~~~~~~~~~~~~~~~~~~~~~~
.. This paragraph shall present the data update procedures to be
followed during the installation period. When the data update
procedures are the same as normal updating or processing
procedures, reference may be made to other documents, such as
operator manuals.
Site-specific information for software users
============================================
.. This section shall provide installation planning pertinent to users
of the software. When more than one type of user is involved, for
example, users at different positions, performing different
functions, or in different organizations, a separate section
(Sections 5 through n) may be written for each type of user and the
section titles modified to reflect each user.
(Site name)
-----------
.. This paragraph shall identify a site or set of sites and shall be
divided into the following subparagraphs to discuss those sites.
Multiple sites may be discussed together when the information for
those sites is generally the same.
Schedule
~~~~~~~~
.. This paragraph shall present a schedule of tasks to be accomplished
by the user during installation. It shall depict the tasks in
chronological order including beginning and ending dates for each
task and supporting narrative as necessary.
Installation procedures
~~~~~~~~~~~~~~~~~~~~~~~
.. This paragraph shall provide step-by-step procedures for
accomplishing the installation. Reference may be made to other
documents, such as user manuals. Safety precautions, marked by
WARNING or CAUTION, shall be included where applicable. The
procedures shall include the following, as applicable:
.. 1. Performing the tasks under 4.x.5 if not performed by operations
staff
2. Initializing user-specific data
3. Setting up queries and other user inputs
4. Performing sample processing
5. Generating sample reports
6. Conversion from the current system, possibly involving running
in parallel
7. Dry run of procedures in user manuals
Data update procedures
~~~~~~~~~~~~~~~~~~~~~~
.. This paragraph shall be divided into subparagraphs to present the
user's data update procedures to be followed during the
installation period. When update procedures are the same as normal
processing, reference may be made to other documents, such as user
manuals, and to Section 4 of this document
Notes
=====
.. This section shall contain any general information that aids in
understanding this document (e.g., background information,
glossary, rationale). This section shall include an alphabetical
listing of all acronyms, abbreviations, and their meanings as used
in this document and a list of terms and definitions needed to
understand this document. If section 5 has been expanded into
section(s) 6,...n, this section shall be numbered as the next
section following section n.
Appendixes
==========
.. Appendixes may be used to provide information published separately
for convenience in document maintenance (e.g., charts, classified
data). As applicable, each appendix shall be referenced in the main
body of the document where the data would normally have been
provided. Appendixes may be bound as separate documents for ease in
handling. Appendixes shall be lettered alphabetically (A, B,
etc.).
| 32.718354 | 70 | 0.730922 |
a17c14e153d6010aad1bbbcbb81d21979d253722 | 3,406 | rst | reStructuredText | doc/source/migration_v2.rst | musicinmybrain/NeuroM | 76b8c557b81d4189b6c04598e62af3a1a67bebfd | [
"BSD-3-Clause"
] | 1 | 2021-05-31T19:52:05.000Z | 2021-05-31T19:52:05.000Z | doc/source/migration_v2.rst | ABL-Lab/NeuroM | 8aa8813f7f1a4a8363863c9c2fc94a0a11d2b328 | [
"BSD-3-Clause"
] | null | null | null | doc/source/migration_v2.rst | ABL-Lab/NeuroM | 8aa8813f7f1a4a8363863c9c2fc94a0a11d2b328 | [
"BSD-3-Clause"
] | null | null | null | .. Copyright (c) 2015, Ecole Polytechnique Federale de Lausanne, Blue Brain Project
All rights reserved.
This file is part of NeuroM <https://github.com/BlueBrain/NeuroM>
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of
its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.. _migration-v2:
=======================
Migration to v2 version
=======================
- ``Neuron`` object now extends ``morphio.Morphology``.
- NeuroM does not remove unifurcations on load. Unifurcation is a section with a single child. Such
sections are possible in H5 and ASC formats. Now, in order to remove them on your morphology, you
would need to call ``remove_unifurcations()`` right after the morphology is constructed.
.. code-block:: python
import neurom as nm
nrn = nm.load_neuron('some/data/path/morph_file.asc')
nrn.remove_unifurcations()
- Soma is not considered as a section anymore. Soma is skipped when iterating over neuron's
sections. It means that section indexing offset needs to be adjusted by
``-(number of soma sections)`` which is usually ``-1``.
- drop ``benchmarks``
- drop ``neurom.check.structural_checks`` as MorphIO does not allow to load invalid morphologies,
and it does not give access to raw data.
- drop ``Tree`` class. Use ``Section`` instead as it includes its functionality but if you need
``Tree`` separately then copy-paste ``Tree`` code from v1 version to your project.
- ``Section`` and ``Neurite`` class can't be copied anymore because their underlying MorphIO
objects can't be copied (pickled). Only copying of ``Neuron`` is preserved.
- drop ``FstNeuron``. It functionality is included in ``Neuron`` class. Use ``Neuron`` instead of
``FstNeuron``.
- Validation of morphologies changed.
The following is not an invalid morphology anymore:
- 2 point soma
- non-sequential ids
- script ``morph_check`` and ``morph_stats`` changed to ``neurom check`` and ``neurom stats``
correspondingly. | 53.21875 | 99 | 0.73488 |
Curious
=======
.. image:: https://travis-ci.org/ginkgobioworks/curious.svg?branch=master
:target: https://travis-ci.org/ginkgobioworks/curious
Curious traverses relationships in a relational database. Curious
queries allow users to explore relationships among objects, traverse
recursive relationships, and jump between loosely connected databases.
Curious also provides a JSON interface to the objects. Users and
programmers can use Curious queries in analysis scripts and
applications.
Curious favors a data centric model of application construction; Curious
queries expose normalized, relational data, reducing UI dependency on UI
specific API end-points serving denormalized data. Changing what data an
UI needs no longer requires changing the UI specific end-points.
Curious works well with deep data models with many relationships. A
Curious query can traverse 10s of foreign key like relationships
efficiently. Curious queries always operate on sets of objects, and can
connect a small number of objects via a relationship to a large number
of objects, then via another relationship from the large number of
objects to a smaller set again. For example, Book to Authors to Country
of Residence. Unlike GraphQL, Curious outputs relationships between
objects, rather than an ever growing tree of JSON representations of the
objects.
Example
-------
::

  Book.last(10) Book.author_set Author.country(continent__name="North America")
Query Language
--------------
The query language allows traversing models by identifying the relationships between them,
through foreign keys in Django models, or arbitrary id-mapping functions. A Curious query
is a space-separated set of terms, which connect models together by relationships.
Several kinds of "joins" are possible using these relationship primitives:
- A traditional `inner join`: ``Book Book.author_set``
- A `left outer join`: ``Book.last(10) ?(Book.author_set)``
- A `recursive join`: ``Parent.children_*``
Furthermore, at each stage in a join, `filtering` can happen:
- Filtering by `Django field lookups`_: ``Book Book.author_set(id__in=[2,3,4])``
- Filtering by `subquery`: ``Book +(Book.author_set(id__in=[2,3,4]))``
- Filtering by `exclusive subquery`: ``Book -(Book.author_set(id__in=[2,3,4]))``
Finally, relationships can generate `counts`:
- Counting ``Book Book.author_set__count``
.. _Django field lookups: https://docs.djangoproject.com/en/1.11/ref/models/querysets/#field-lookups
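The set-to-set behavior of these joins can be sketched with plain Python (an illustration of the semantics only — not Curious's implementation, and the data and names below are invented):

```python
# Toy data: Book ids mapped to Author ids (the Book.author_set relationship).
author_set = {1: [10, 11], 2: [], 3: [11]}

def inner_join(books):
    """Keep only books with at least one author; return (book, author) pairs."""
    return [(b, a) for b in books for a in author_set.get(b, [])]

def left_outer_join(books):
    """Like ?(...): keep every book, pairing with None when no author exists."""
    return [(b, a) for b in books for a in (author_set.get(b) or [None])]

def recursive_join(children, roots):
    """Like Parent.children_*: follow a relationship to a fixed point."""
    seen, frontier = set(), set(roots)
    while frontier:
        seen |= frontier
        frontier = {c for p in frontier for c in children.get(p, [])} - seen
    return seen

assert inner_join([1, 2, 3]) == [(1, 10), (1, 11), (3, 11)]
assert left_outer_join([1, 2]) == [(1, 10), (1, 11), (2, None)]
assert recursive_join({1: [2, 3], 2: [4]}, [1]) == {1, 2, 3, 4}
```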
Configuring Curious
-------------------
::

  import myapp.models
  from curious import model_registry

  def register():
      model_registry.register(myapp.models)
Then include ``register`` when your Django app boots up.
Using Curious
-------------
Turn off CSRF. Deploy it as a Django app.
Writing Customized Relationships
--------------------------------
Use ``filter`` and deferred calls to real functions.
Development
-----------
Requires Docker. Spin up your container using the provided ``docker-compose.yml`` file and Makefile
by running ``make image``. This creates an image with a correct git configuration for your user,
which makes it easy to release. All of the commands you should need to run are defined in the
``Makefile`` as targets. All of the targets except for ``image`` are meant to be run inside the
Docker container, but can be run from the host machine by having ``-ext`` appended to them. For
example, to run tests, you could either call ``make test`` from inside the container, or ``make
test-ext`` from the host.
If you are modifying the static assets during development, they can be recompiled with the
``build_assets`` make task, or by calling ``python setup.py build_assets``.
::

  ./make test-ext
Deployment
----------
Deployment of tagged commits happens to PyPI automatically via Travis CI. To bump and deploy a new
version, run ``make bump/[foo]-ext``, where ``[foo]`` is ``major``, ``minor``, or ``patch``. Then
``git push origin --tags master``.
Installing Topshelf
===================
NuGet
'''''
The simplest way to install Topshelf into your solution/project is to use
NuGet.::
nuget Install-Package Topshelf
Raw Binaries
''''''''''''
If you are a fan of getting the binaries you can get released builds from
http://github.com/topshelf/Topshelf/downloads
Then you will need to add references to
=======================================
* Topshelf.dll
Compiling From Source
'''''''''''''''''''''
Lastly, if you want to hack on Topshelf or just want to have the actual source
code you can clone the source from github.com.
To clone the repository using git try the following::
git clone git://github.com/Topshelf/Topshelf.git
If you want the development branch (where active development happens)::
git clone git://github.com/Topshelf/Topshelf.git
git checkout develop
Build Dependencies
''''''''''''''''''
To compile Topshelf from source you will need the following developer tools
installed:
* .Net 4.0 sdk
* ruby v 1.8.7
* gems (rake, albacore)
Compiling
'''''''''
To compile the source code, drop to the command line and type::
.\build.bat
If you look in the ``.\build_output`` folder you should see the binaries.
http server -- 网页服务器
============================
HTTP 协议是 Hypertext Transfer Protocol(超文本传输协议)的缩写,是一种用于分布式、协作式和超媒体信息系统的应用层协议。HTTP 是万维网的数据通信的基础。
本 demo 主要是在 BL706 上基于 LwIP 协议栈,实现一个 HTTP 服务器,在 BL706 上部署了一个简单的网页,然后我们可以通过浏览器去访问 BL706 上的网页。
硬件准备
----------------
- 一块 BL706 ETH 开发板
- A BL706 ETH development board
- A PHY 8720 module
- A serial cable
- A standard Cat 5/6 Ethernet cable
----------------
本 demo 基于 BL706 ETH 开发板,将对应的功能引脚连接到 PHY8720 模块上,连接方式如下:
::
GPIO function GPIO pin
----------------------------------
RMII_CLK <--> GPIO0
RMII_TXD0 <--> GPIO1
RMII_TXD1 <--> GPIO2
RMII_RXD0 <--> GPIO7
RMII_RXD1 <--> GPIO8
RMII_MDC <--> GPIO18
RMII_MDIO <--> GPIO19
RMII_RXERR <--> GPIO20
RMII_TX_EN <--> GPIO21
RMII_RX_DV <--> GPIO22
接下来需要将 PHY8720 模块的 RJ-45 接口通过标准 5/6 类网线连接到与测试 PC 在同一局域网中的路由器或者交换机上。
生成 Web 网页及 LwIP 协议栈配置
---------------------------------
Web demo 的源码存放在 ``examples\emac\lwip_http_server\web_demo\pages`` 目录下,网页设计好后,可以使用 ``web_demo`` 目录下的 ``makefsdata.exe`` 工具将网页翻译成 LwIP 协议栈能够解析的文件格式,
在该目录下还有一个 ``makefsdata.bat`` 脚本,该脚本执行后会将 ``pages`` 目录下的 web 网页生成一个 ``fsdata_custom.c`` 文件;
将输出的 ``fsdata_custom.c`` 文件,放到 ``components\lwip\src\apps\http`` 目录下,然后在 ``components/lwip/lwipopts.h`` 文件中,使能 ``HTTPD_USE_CUSTOM_FSDATA`` 宏定义。
注:由于当前测试例程 local ip 地址采用静态 IP 配置进行的测试,如需修改可在 main.c 中进行相应的修改,也可直接使能 LWIP 的 DHCP 功能自动获取 IP 配置。
.. code-block:: c
:linenos:
#define LWIP_TCP 1
#define TCP_TTL 255
#define HTTPD_USE_CUSTOM_FSDATA 1
编译和下载
-------------------
- **命令行编译**
.. code-block:: bash
:linenos:
$ cd bl_mcu_sdk
$ make APP=lwip_http_server
- **烧录**
详见 :ref:`bl_dev_cube`
实验现象
-----------
编译完成后,烧写到芯片正确运行后,打开浏览器访问 BL706 相应的 IP 地址,即可看到一个测试网页。
串口 log 信息:
.. figure:: img/emac_http_1.png
:alt:
.. figure:: img/emac_http_2.png
:alt:
使用 Wireshark 抓取网络数据包,我们可以看到 TCP 协议的 “握手” 过程,以及 HTTP 协议的请求和数据传输:
.. figure:: img/emac_http_3.png
:alt:
| 23.153846 | 147 | 0.574276 |
========
Exemples
========
Exemples, FAQ (Frequently Asked Questions), notebooks
et autres petits bouts de codes qu'on espère pouvoir copier
coller sans les comprendre. Parfois c'est vrai.
.. only:: html
.. toctree::
:maxdepth: 1
i_ex
i_faq
gyexamples/index
all_notebooks
.. only:: latex
.. toctree::
:maxdepth: 2
i_ex
i_faq
gyexamples/index
all_notebooks
| 15.586207 | 59 | 0.579646 |
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Integrate oslo.config and oslo-incubator logging
================================================
https://blueprints.launchpad.net/congress/+spec/integrate-oslo-config-and-logging
All openstack projects integrate with a library called oslo.config and
oslo-incubator for config and logging management. This blueprint is to
integrate these two libraries into Congress.
Problem description
===================
* In order to avoid code duplication all openstack projects leverage
  a common library for config management (oslo.config) and log management
  (oslo-incubator). To avoid reinventing the wheel here we should use these
  libraries as well.
Proposed change
===============
Integrate:
https://github.com/openstack/oslo-incubator/
https://github.com/openstack/oslo.config
with Congress.
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
This blueprint provides integration to common libraries that all openstack
deployers are already used to using and configuring.
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
arosen
Work Items
----------
None
Dependencies
============
None
Testing
=======
Unit tests will be added
Documentation Impact
====================
None
References
==========
None
.. rs232com documentation master file, created by
sphinx-quickstart on Fri Feb 4 11:16:33 2022.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to rs232com's documentation!
====================================
Last update:|today|
.. toctree::
:maxdepth: 2
:caption: Contents:
./_root_source_/modules
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
PyFunceble.cli.entry\_points.pyfunceble package
===============================================
Submodules
----------
PyFunceble.cli.entry\_points.pyfunceble.argsparser module
---------------------------------------------------------
.. automodule:: PyFunceble.cli.entry_points.pyfunceble.argsparser
:members:
:undoc-members:
:show-inheritance:
PyFunceble.cli.entry\_points.pyfunceble.cli module
--------------------------------------------------
.. automodule:: PyFunceble.cli.entry_points.pyfunceble.cli
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: PyFunceble.cli.entry_points.pyfunceble
:members:
:undoc-members:
:show-inheritance:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package rqt_bag_plugins
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.3.13 (2016-03-08)
-------------------
0.3.12 (2015-07-24)
-------------------
0.3.11 (2015-04-30)
-------------------
* add missing dependency on rqt_plot (`#316 <https://github.com/ros-visualization/rqt_common_plugins/pull/316>`_)
* work around Pillow segfault if PyQt5 is installed (`#289 <https://github.com/ros-visualization/rqt_common_plugins/pull/289>`_, `#290 <https://github.com/ros-visualization/rqt_common_plugins/pull/290>`_)
0.3.10 (2014-10-01)
-------------------
* add displaying of depth image thumbnails
0.3.9 (2014-08-18)
------------------
* add missing dependency on python-cairo (`#269 <https://github.com/ros-visualization/rqt_common_plugins/issues/269>`_)
0.3.8 (2014-07-15)
------------------
* fix missing installation of resource subfolder
0.3.7 (2014-07-11)
------------------
* add plotting plugin (`#239 <https://github.com/ros-visualization/rqt_common_plugins/issues/239>`_)
* fix rqt_bag to plot array members (`#253 <https://github.com/ros-visualization/rqt_common_plugins/issues/253>`_)
* export architecture_independent flag in package.xml (`#254 <https://github.com/ros-visualization/rqt_common_plugins/issues/254>`_)
0.3.6 (2014-06-02)
------------------
0.3.5 (2014-05-07)
------------------
* fix PIL/Pillow error (`#224 <https://github.com/ros-visualization/rqt_common_plugins/issues/224>`_)
0.3.4 (2014-01-28)
------------------
0.3.3 (2014-01-08)
------------------
0.3.2 (2013-10-14)
------------------
0.3.1 (2013-10-09)
------------------
0.3.0 (2013-08-28)
------------------
0.2.17 (2013-07-04)
-------------------
0.2.16 (2013-04-09 13:33)
-------------------------
0.2.15 (2013-04-09 00:02)
-------------------------
0.2.14 (2013-03-14)
-------------------
0.2.13 (2013-03-11 22:14)
-------------------------
0.2.12 (2013-03-11 13:56)
-------------------------
0.2.11 (2013-03-08)
-------------------
0.2.10 (2013-01-22)
-------------------
0.2.9 (2013-01-17)
------------------
0.2.8 (2013-01-11)
------------------
0.2.7 (2012-12-24)
------------------
0.2.6 (2012-12-23)
------------------
0.2.5 (2012-12-21 19:11)
------------------------
0.2.4 (2012-12-21 01:13)
------------------------
0.2.3 (2012-12-21 00:24)
------------------------
0.2.2 (2012-12-20 18:29)
------------------------
0.2.1 (2012-12-20 17:47)
------------------------
0.2.0 (2012-12-20 17:39)
------------------------
* first release of this package into Groovy
===========
Version 2.6
===========
Version 2.6 of mod_wsgi can be obtained from:
http://modwsgi.googlecode.com/files/mod_wsgi-2.6.tar.gz
For Windows binaries see:
http://code.google.com/p/modwsgi/wiki/InstallationOnWindows
Note that this release does not support Python 3.0. Python 3.0 will only be
supported in mod_wsgi 3.0.
Note that the fix for (3) below is believed to have already been backported
to mod_wsgi 2.5 in Debian Stable tree. Thus, if using mod_wsgi 2.5 from
Debian you do not need to be concerned about upgrading to this version.
Bug Fixes
---------
1. Fixed build issue on MacOS X where the incorrect Python framework was found at
run time. This was caused by the '-Wl,' option prefix being dropped from the '-F'
option in LDFLAGS of the Makefile and not reverted when related changes were
undone. This would affect Python 2.3 through 2.5. For more details see:
http://code.google.com/p/modwsgi/issues/detail?id=28
2. Fixed build issue on MacOS X where the incorrect Python framework was found at
run time. This was caused by '-L/-l' flags being used for versions of Python
prior to 2.6. That approach, even where a '.a' library link to the framework
exists, doesn't seem to work for the older Python versions.
Because of the unpredictability as to when '-F/-framework' or '-L/-l'
should be used for specific Python versions or distributions, the build now
always links against the Python framework via '-F/-framework' if available. If
for some particular setup this isn't working, then the '--disable-framework'
option can be supplied to the 'configure' script to force use of '-L/-l'. For
more details see:
http://code.google.com/p/modwsgi/issues/detail?id=28
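For instance, forcing the '-L/-l' behaviour at build time might look like the
following (an illustrative sketch only; the apxs and Python paths depend on
your system):

```sh
# Illustrative only: build mod_wsgi without '-F/-framework' linking.
./configure --with-apxs=/usr/sbin/apxs \
            --with-python=/usr/bin/python2.5 \
            --disable-framework
make
sudo make install
```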
3. Fixed bug where a Python object reference count was decremented on a NULL
pointer, causing a crash. This was possibly only occurring in embedded mode
and only where closure of the remote client connection was detected before any
request content was read. The issue may have been more prevalent for HTTPS
connections from clients.
4. Fixed bug for Python 2.X where, when using 'print' to output multiple
objects to the log object via wsgi.errors, stderr or stdout, a space wasn't
added to the output between objects. This was occurring because the log object
lacked a softspace attribute.
Features Changed
----------------
1. When trying to determine the version of Apache being used at build time,
if the Apache executable is not available, fall back to getting the version from
the installed Apache header files. This is done because some Linux distribution
build boxes do not actually have the Apache executable itself installed, only
the header files and the apxs tool needed to build modules. For more details see:
http://code.google.com/p/modwsgi/issues/detail?id=147
.. _shipping:

Configuring Shipping
====================
Longclaw allows you to:
- Enable and set a default shipping rate applicable to any country
- Configure multiple shipping rates for individual countries.
The default shipping rate can be enabled and set from the ``settings -> Longclaw Settings`` menu
in the Wagtail admin.
If the default shipping rate is enabled, it implies that shipping is available to any country.
When a rate for a given country cannot be found, the default shipping rate will be used.
Shipping rates for individual countries can be configured via the ``Shipping Countries`` menu in the
Wagtail admin.
For each country added, you can configure any number of shipping rates. Each shipping rate states the
name, description, price and carrier (e.g. Royal Mail).
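The fallback rule described above can be sketched in plain Python (a
hypothetical illustration only — names such as ``resolve_shipping_rate`` are
not part of the Longclaw API):

```python
# Hypothetical sketch of the rate-resolution rule described above:
# prefer country-specific rates, fall back to the default rate.
def resolve_shipping_rate(country_code, country_rates, default_rate=None):
    """Return the rates configured for a country, else the default rate."""
    rates = country_rates.get(country_code)
    if rates:  # one or more rates configured for this country
        return rates
    if default_rate is not None:  # default shipping rate enabled in settings
        return [default_rate]
    raise LookupError(f"No shipping rate available for {country_code}")


rates_by_country = {"GB": [{"name": "Royal Mail", "rate": 3.95}]}
default = {"name": "International standard", "rate": 9.99}
print(resolve_shipping_rate("GB", rates_by_country, default))
print(resolve_shipping_rate("FR", rates_by_country, default))
```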
Welcome to spiops's documentation!
==================================
spiops is a library aimed at helping scientists and engineers who deal with
Solar System geometry, mainly for planetary science. More particularly, it is
aimed at assisting users in extending the usage of SPICE.
Functionalities vary from the computation of the illumination of a given
Field-of-View to obtaining the coverage of a given S/C for a particular
meta-kernel.
The underlying idea of spiops is to be used as a multi-user and
multi-disciplinary pool of re-usable SPICE-based functions to provide
cross-mission and cross-discipline support of SPICE for ESA Planetary and
Heliophysics missions.
Feedback and new functionalities are always welcome; if you discover that a
function is not working as expected, or if you have a function that you believe
can be of interest to other people, please open an issue or contact
marc.costa@esa.int.
spiops is developed and maintained by the ESA SPICE Service (ESS)
http://spice.esac.esa.int.
SPICE is an essential tool for scientists and engineers alike in the
planetary science field for Solar System Geometry. Please visit the NAIF
website for more details about SPICE.
Contents
********
.. toctree::
   :maxdepth: 2

   documentation
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Installation
************
First install the dependencies (spiceypy, pytest) for the project. Then run
.. code-block:: console

   pip3 install spiops
to install from PyPI. If you wish to install spiops from source, first
download or clone the project from: https://mcosta@repos.cosmos.esa.int/socci/scm/spice/spiops.git
Then run
.. code-block:: console

   python3 setup.py install
Citations
*********
If spiops is used in a publication, please consider citing spiops, SpiceyPy
and the SPICE toolkit. The citation information for SPICE can be found on the
NAIF website, and the citation information for SpiceyPy in the GitHub
repository.
To cite SpiceyPy use the following DOI: |Citation|
.. |Citation| image:: https://zenodo.org/badge/16987/AndrewAnnex/SpiceyPy.svg
   :target: https://zenodo.org/badge/latestdoi/16987/AndrewAnnex/SpiceyPy

========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/tc/badges/openstack-ansible-os_cloudkitty.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
OpenStack-Ansible CloudKitty
############################
:tags: openstack, cloudkitty, cloud, ansible
:category: \*nix
This Ansible role installs and configures OpenStack cloudkitty.
This role will install the following Upstart services:
* cloudkitty-api
* cloudkitty-processor
Required Variables
==================
.. code-block:: yaml

   external_lb_vip_address: 172.16.24.1
   internal_lb_vip_address: 192.168.0.1
   cloudkitty_galera_address: "{{ internal_lb_vip_address }}"
   cloudkitty_container_mysql_password: "SuperSecretePassword1"
   cloudkitty_service_password: "SuperSecretePassword2"
   cloudkitty_rabbitmq_password: "SuperSecretePassword3"
Example Playbook
================
.. code-block:: yaml

   - name: Install cloudkitty service
     hosts: cloudkitty_all
     user: root
     roles:
       - { role: "os_cloudkitty", tags: [ "os-cloudkitty" ] }
     vars:
       external_lb_vip_address: 172.16.24.1
       internal_lb_vip_address: 192.168.0.1
       cloudkitty_galera_address: "{{ internal_lb_vip_address }}"
       cloudkitty_container_mysql_password: "SuperSecretePassword1"
       cloudkitty_service_password: "SuperSecretePassword2"
       cloudkitty_oslomsg_rpc_password: "SuperSecretePassword3"
       cloudkitty_oslomsg_notify_password: "SuperSecretePassword4"
Documentation for the project can be found at:
https://docs.openstack.org/openstack-ansible-os_cloudkitty/latest/
Release notes for the project can be found at:
https://docs.openstack.org/releasenotes/openstack-ansible-os_cloudkitty/
The project source code repository is located at:
https://opendev.org/openstack/openstack-ansible-os_cloudkitty/
The project home is at:
https://launchpad.net/openstack-ansible
The project bug tracker is located at:
https://bugs.launchpad.net/openstack-ansible
================
Clean Code Rules
================
The Clean Code ruleset contains a collection of rules that enforce
software design principles such as the SOLID principles and Object
Calisthenics.

They are very strict and cannot easily be followed without any violations.
If you use this ruleset, you should:

1. Select important packages that should follow this ruleset and others that
   don't.
2. Set a threshold for failures and not fail at the first occurrence.
ElseExpression
==============
Since: PHPMD 1.5
An if expression with an else branch is never necessary. You can rewrite the
conditions in a way that the else branch is not necessary and the code becomes
simpler to read. To achieve this, use early return statements; you may need to
split the code into several smaller methods. For very simple assignments you
could also use the ternary operator.
Example: ::
  class Foo
  {
      public function bar($flag)
      {
          if ($flag) {
              // one branch
          } else {
              // another branch
          }
      }
  }
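As a sketch of the early-return rewrite the rule suggests, the example above
could become (illustrative only):

```php
class Foo
{
    public function bar($flag)
    {
        if ($flag) {
            // one branch
            return;
        }
        // another branch (no else needed)
    }
}
```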
BooleanArgumentFlag
===================
A boolean flag argument is a reliable indicator for a violation of
the Single Responsibility Principle (SRP). You can fix this problem
by extracting the logic in the boolean flag into its own class
or method.
Example: ::
  class Foo {
      public function bar($flag = true) {
      }
  }
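One possible fix, sketched here for illustration, is to split the flagged
behaviour into two intention-revealing methods:

```php
class Foo {
    // Hypothetical refactoring: callers pick the method they mean
    // instead of passing a boolean flag.
    public function barEnabled() {
        // logic previously guarded by $flag === true
    }

    public function barDisabled() {
        // logic previously guarded by $flag === false
    }
}
```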
StaticAccess
============
Static access causes unexchangeable dependencies to other classes and leads to
hard-to-test code. Avoid using static access at all costs and instead inject
dependencies through the constructor. The only case when static access is
acceptable is when used for factory methods.
Example: ::
  class Foo
  {
      public function bar()
      {
          Bar::baz();
      }
  }
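A constructor-injection rewrite of the example might look like this (an
illustrative sketch):

```php
class Foo
{
    private $bar;

    // The collaborator is injected, so a test can substitute a stub.
    public function __construct(Bar $bar)
    {
        $this->bar = $bar;
    }

    public function bar()
    {
        $this->bar->baz();
    }
}
```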
Stay Informed
=============
Check the following sources often to keep up on new features and other changes.
GitHub
------
The `project page <https://github.com/blockterms/lit>`_ on GitHub remains the best
way to track the development of Lit.
.. include:: ../../../HISTORY.rst
Bare-metal to BinderHub
=======================
Installation of the BinderHub from bare metal is fully automatic and reproducible through a `terraform <https://www.terraform.io/>`_ configuration
run via `this <https://github.com/neurolibre/neurolibre-binderhub/blob/master/Dockerfile>`_ Docker container.
The following is intended for neurolibre backend developers, but can be read by anyone interested in our process.
It assumes that you have basic knowledge of using the command line on a remote server (bash, ssh authentication, etc.).
The sections :ref:`Pre-setup` and :ref:`Docker-specific preparations` should be done just the first time.
Once it is done, you can directly go to the section :ref:`Spawn a BinderHub instance using Docker`.
Pre-setup
---------
You first need to prepare the necessary files that will be used later to install and ssh to the newly spawned BinderHub instance.
We are using `git-crypt <https://github.com/AGWA/git-crypt>`_ to encrypt our password files for the whole process; these can be decrypted with the appropriate :code:`gitcrypt-key`.
For the ssh authentication on the BinderHub server, you have two choices: i) use neurolibre's key (recommended) or ii) use your own ssh key.
.. note:: You can request the :code:`gitcrypt-key` and :code:`neurolibre's ssh key` from any infrastructure admin if authorized.
.. warning:: You should never share the :code:`gitcrypt-key` and :code:`neurolibre's ssh key` with anyone.
1. Create a folder on your local machine, which is later to be mounted to the Docker container for securely using your keys during spawning a BinderHub instance.
Here, we will call it :code:`my-keys` for convenience:
.. code-block:: console

   cd /home/$USER
   mkdir my-keys
2. Option (i), use neurolibre’s key (recommended):
a. Simply copy the public :code:`id_rsa.pub` and private key :code:`id_rsa` to :code:`/home/$USER/my-keys/`
.. code-block:: console

   cp id_rsa* /home/$USER/my-keys/
3. Option (ii), use your own local key:
a. Make sure your public and private keys are under :code:`/home/$USER/.ssh` and copy them to :code:`/home/$USER/my-keys`.
.. code-block:: console

   cp /home/$USER/.ssh/id_rsa* /home/$USER/my-keys/
b. If not already associated, add your local key to your GitHub account:
* You can check and add new keys on your `GitHub settings <https://github.com/settings/keys>`_.
* Test your ssh connection to your GitHub account by following `these steps <https://help.github.com/en/github/authenticating-to-github/testing-your-ssh-connection>`_.
4. Finally, copy the key :code:`gitcrypt-key` in :code:`/home/$USER/my-keys/`.
Docker-specific preparations
----------------------------
You will install a trusted Docker image that will later be used to spawn the BinderHub instance.
1. Install `Docker <https://www.Docker.com/get-started>`_ and log in to the dockerhub with your credentials.
.. code-block:: console

   sudo docker login
2. Pull the Docker image that encapsulates the barebones environment to spawn a BinderHub instance with our provider (compute canada as of late 2019).
You can check the different tags available under our `dockerhub user <https://hub.Docker.com/r/conpdev/neurolibre-instance/tags>`_.
.. code-block:: console

   sudo docker pull conpdev/neurolibre-instance:v1.2
Spawn a BinderHub instance using Docker
---------------------------------------
To achieve this, you will instantiate a container (from the image you just pulled) mounted with specific volumes from your computer.
You will be mounting two directories into the container: :code:`my-keys`, containing the files from :ref:`Pre-setup`, and :code:`instance-name`, containing the terraform recipe and artifacts.
.. warning:: The Docker container that you will run contains sensitive information (i.e. your ssh keys, passwords, etc.), so never share it with anyone else.
   If you need to share information with another developer, share the Dockerfile and/or these instructions.
.. note:: The Docker image itself has no knowledge of the sensitive files since they are used just at runtime
   (through the `entrypoint <https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime>`_ command).
1. Place a :code:`main.tf` file (see :ref:`Appendix A` for details) into a new folder :code:`/instance-name`, which describes the terraform recipe for spawning a BinderHub instance on the cloud provider.
For convenience, we suggest that you use the actual name of the instance (value of the :code:`project_name` field in :code:`main.tf`).
.. code-block:: console

   mkdir /home/$USER/instance-name
   vim /home/$USER/instance-name/main.tf
.. note:: If you choose not to copy :code:`main.tf` file to this directory, you will be asked to fill out one manually during container runtime.
2. Start the Docker container which is going to spawn the BinderHub instance:
.. code-block:: console

   sudo docker run -v /home/$USER/my-keys:/tmp/.ssh -v /home/$USER/instance-name:/terraform-artifacts -it conpdev/neurolibre-instance:v1.2
3. Take a coffee and wait! The instance should be ready in 5~10 minutes.
4. As a security measure, stop and delete the container that you used to spawn the instance:
.. code-block:: console

   sudo docker ps -a
   sudo docker stop <container-id>
   sudo docker rm <container-id>
Appendix A
----------
Here we describe the default terraform recipe that can be used to spawn a BinderHub instance, it is also available `online <https://github.com/neurolibre/neurolibre-binderhub/blob/master/terraform/main.tf>`_.
There are three different modules used by our terraform scripts, all run consecutively and only if the previous one succeeded.
1. :code:`provider` populates terraform with the variables related to our cloud provider (compute canada as of late 2019):
* :code:`project_name`: name of the instances (will be :code:`project_name_master` and :code:`project_name_nodei`)
* :code:`nb_nodes`: number of k8s nodes **excluding** the master node
* :code:`instance_volume_size`: main volume size of the instances in GB **including** the master node
* :code:`ssh_authorized_keys`: list of the public ssh keys that will be allowed on the server
* :code:`os_flavor_master`: hardware configuration of the k8s master instance in the form :code:`c{n_cpus}-{ram}gb-{optional_vol_in_gb}`
* :code:`os_flavor_node`: hardware configuration of the k8s node instances
* :code:`image_name`: OS image name used by the instance
* :code:`docker_registry`: domain for the Docker registry; if empty, it uses :code:`docker.io` by default
* :code:`docker_id`: user id credential to connect to the Docker registry
* :code:`docker_password`: password credential to connect to the Docker registry
.. warning:: The flavors and image name are not fully customizable and should be set according to the provider's list.
   You can check them through the openstack API using :code:`openstack flavor list && openstack image list` or using the horizon dashboard.
2. :code:`dns` related to cloudflare DNS configuration:
* :code:`domain`: domain name to access your BinderHub environment; it will automatically point to the k8s master floating IP
3. :code:`binderhub` specific to binderhub configuration:
* :code:`binder_version`: you can check the current BinderHub version releases `here <https://jupyterhub.github.io/helm-chart/>`_
* :code:`TLS_email`: this email will be used by `Let's Encrypt <https://letsencrypt.org/>`_ to request a TLS certificate
* :code:`TLS_name`: TLS certificate name should be the same as the domain but with dashes :code:`-` instead of points :code:`.`
* :code:`mem_alloc_gb`: Amount of RAM (in GB) used by each user of your BinderHub
* :code:`cpu_alloc`: Number of CPU cores
  (`Intel® Xeon® Gold 6130 <https://ark.intel.com/content/www/us/en/ark/products/120492/intel-xeon-gold-6130-processor-22m-cache-2-10-ghz.html>`_
  for compute canada) used by each user of your BinderHub
.. code-block:: terraform
   :linenos:

   module "provider" {
     source               = "git::ssh://git@github.com/neurolibre/terraform-binderhub.git//terraform-modules/providers/openstack"
     project_name         = "instance-name"
     nb_nodes             = 1
     instance_volume_size = 100
     ssh_authorized_keys  = ["<redacted>"]
     os_flavor_master     = "c4-30gb-83"
     os_flavor_node       = "c16-60gb-392"
     image_name           = "Ubuntu-18.04.3-Bionic-x64-2020-01"
     is_computecanada     = true
     docker_registry      = "binder-registry.conp.cloud"
     docker_id            = "<redacted>"
     docker_password      = "<redacted>"
   }

   module "dns" {
     source    = "git::ssh://git@github.com/neurolibre/terraform-binderhub.git//terraform-modules/dns/cloudflare"
     domain    = "instance-name.conp.cloud"
     public_ip = "${module.provider.public_ip}"
   }

   module "binderhub" {
     source          = "git::ssh://git@github.com/neurolibre/terraform-binderhub.git//terraform-modules/binderhub"
     ip              = "${module.provider.public_ip}"
     domain          = "${module.dns.domain}"
     admin_user      = "${module.provider.admin_user}"
     binder_version  = "v0.2.0-n121.h6d936d7"
     TLS_email       = "<redacted>"
     TLS_name        = "instance-name-conp-cloud"
     mem_alloc_gb    = 4
     cpu_alloc       = 1
     docker_registry = "${module.provider.docker_registry}"
     docker_id       = "${module.provider.docker_id}"
     docker_password = "${module.provider.docker_password}"
   }

Fix test_getsetlocale_issue1813() of test_locale: skip the test if
``setlocale()`` fails. Patch by Victor Stinner.
Contribute
==========
You want to add some code to tinycss2, launch its tests or improve its
documentation? Thank you very much! Here are some tips to help you play with
tinycss2 in good conditions.
The first step is to clone the repository, create a virtual environment and
install tinycss2 dependencies.
.. code-block:: shell

   git clone https://github.com/Kozea/tinycss2.git
   cd tinycss2
   python -m venv venv
   venv/bin/pip install .[doc,test]
You can then leave your terminal in the current directory and launch Python to
test your changes. ``import tinycss2`` will then import the working-directory
code, so that you can modify it and test your changes.
.. code-block:: shell

   venv/bin/python
Code & Issues
-------------
If you’ve found a bug in tinycss2, it’s time to report it, and to fix it if you
can!
You can report bugs and feature requests on GitHub_. If you want to add or
fix some code, please fork the repository and create a pull request; we'll be
happy to review your work.
.. _GitHub: https://github.com/Kozea/tinycss2
Tests
-----
Tests are stored in the ``tests`` folder at the top of the repository. They use
the pytest_ library.
You can launch tests (with code coverage and lint) using the following command::
   venv/bin/python -m pytest
.. _pytest: https://docs.pytest.org/
Documentation
-------------
Documentation is stored in the ``docs`` folder at the top of the repository. It
relies on the Sphinx_ library.
You can build the documentation using the following command::
   venv/bin/sphinx-build docs docs/_build
The documentation home page can now be found in the
``/path/to/tinycss2/docs/_build/index.html`` file. You can open this file in a
browser to see the final rendering.
.. _Sphinx: https://www.sphinx-doc.org/
.. ===============LICENSE_START=======================================================
.. Acumos CC-BY-4.0
.. ===================================================================================
.. Copyright (C) 2017-2020 AT&T Intellectual Property & Tech Mahindra. All rights reserved.
.. Modifications Copyright (C) 2020 Nordix Foundation.
.. ===================================================================================
.. This Acumos documentation file is distributed by AT&T and Tech Mahindra
.. under the Creative Commons Attribution 4.0 International License (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://creativecommons.org/licenses/by/4.0
..
.. This file is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
.. ===============LICENSE_END=========================================================
================================
Federation Gateway Release Notes
================================
This server is available as a Docker image in a Docker registry at the Linux Foundation.
The image name is "federation-gateway" and the tag is a version string as shown below.
Version 3.2.3, 2020-06-03
-------------------------
* Prevent oversize user notifications (`ACUMOS-4177 <https://jira.acumos.org/browse/ACUMOS-4177>`_)
Version 3.2.2, 2020-03-24
-------------------------
* Adding support for updating params to deployed model (`ACUMOS-3742 <https://jira.acumos.org/browse/ACUMOS-3742>`_)
Version 3.2.1, 2020-03-12
-------------------------
* LicenseAsset support NexusArtifactClient - `ACUMOS-3960 <https://jira.acumos.org/browse/ACUMOS-3960>`_
Version 3.2.0, 2020-02-17
-------------------------
* Adding support for model data sending over federation gateway (`ACUMOS-3920 <https://jira.acumos.org/browse/ACUMOS-3920>`_)
* Fix solution sourceId !=null (`ACUMOS-4021 <https://jira.acumos.org/browse/ACUMOS-4021>`_)
Version 3.1.2, 2020-03-12
-------------------------
* Update dependency version for the common data service client to 3.1.0 (`ACUMOS-3845 <https://jira.acumos.org/browse/ACUMOS-3845>`_)
* Bump version to avoid conflict with work on master branch for demeter
* Part of the Clio maintenance/point release
Version 3.1.1, 2020-01-27
-------------------------
* Update dependency version for the common data service client to 3.1.1 (`ACUMOS-3951 <https://jira.acumos.org/browse/ACUMOS-3951>`_)
Version 3.1.0, 2019-12-16
-------------------------
* Update dependency version for the common data service client to 3.1.0 (`ACUMOS-3845 <https://jira.acumos.org/browse/ACUMOS-3845>`_)
Version 3.0.3, 2020-02-26
-------------------------
* LicenseAsset support NexusArtifactClient - `ACUMOS-3960 <https://jira.acumos.org/browse/ACUMOS-3960>`_
Version 3.0.2, 2019-11-04
-------------------------
* Don't re-tag imported docker images unless the tag is different (`ACUMOS-3670 <https://jira.acumos.org/browse/ACUMOS-3670>`_)
* Update dependency versions for the security and license profile validation clients (`ACUMOS-3669 <https://jira.acumos.org/browse/ACUMOS-3669>`_)
Version 3.0.1, 2019-09-26
-------------------------
* When a model has been federated, register it with the license manager (`ACUMOS-3484 <https://jira.acumos.org/browse/ACUMOS-3484>`_)
* This adds a new required configuration value, "license-manager.url" for the
  license management service.
Version 3.0.0, 2019-09-13
-------------------------
* Upgrade server to Java 11. Compile client for Java 8 (`ACUMOS-3334 <https://jira.acumos.org/browse/ACUMOS-3334>`_)
* Compile and run with Java 11, but keep client library compliance level at Java 8.
* Add "acumos/" prefix to container image name
* Update to CDS 3.0.0
Version 2.3.0, 2019-09-06
-------------------------
* Portal to show details of federation actions (`ACUMOS-1778 <https://jira.acumos.org/browse/ACUMOS-1778>`_)
* Run SV license scan when a model has been federated (`ACUMOS-3396 <https://jira.acumos.org/browse/ACUMOS-3396>`_)
* This adds a new required configuration value, "verification.url" for the
  security verification service.
* Java code upgrade to Java 11 (`ACUMOS-3334 <https://jira.acumos.org/browse/ACUMOS-3334>`_)
* Update to CDS 2.2.6
* Fix DI artifact create fail due to Federation use of a stale TCP stream (`ACUMOS-3193 <https://jira.acumos.org/browse/ACUMOS-3193>`_)
* Federated model DI name to include model name - same as source peer DI name (`ACUMOS-3195 <https://jira.acumos.org/browse/ACUMOS-3195>`_)
* Publish E5 Federation client library (`ACUMOS-2760 <https://jira.acumos.org/browse/ACUMOS-2760>`_)
3 new sub-projects are introduced, in addition to the existing "gateway" sub-project.
* "acumos-fgw-client-config" contains bean classes used to specify properties
  of a client's connection to its server, including basic authentication and
  TLS (SSL) related properties.
* "acumos-fgw-client-test" contains classes for providing mock responses to
  a client for testing applications that make calls to a server, as well as
  dummy key store and trust store files to enable a client to be used to
  test a server.
* "acumos-fgw-client" contains implementations of clients for both the
  external "E5" and private interfaces to the Acumos Federation Gateway
  as well as bean classes for the JSON wire formats used by those interfaces.
The existing "gateway" project is modified to use the client subproject when
making requests to a peer Acumos instance, when sending or receiving
artifacts from the Nexus server, and for creating the rest template used
to communicate with CDS.
* Access to the Swagger API is fixed and now gives responses appropriate to
  the interface being queried (external "E5" or private).
* Some configuration is simplified.

  * The federation.ssl.client-auth configuration parameter is now named
    federation.client-auth and defaults to WANT, enabling access to the
    Swagger specification on the external "E5" interface without requiring
    a client certificate. Attempts to access the REST API endpoints without
    providing a client certificate will return a 403 Forbidden error.
  * The local.ssl.client-auth configuration parameter is now named
    local.client-auth and defaults to WANT, enabling access to the
    Swagger specification on the private interface without requiring
    a client certificate. Attempts to access the REST API endpoints without
    providing a client certificate will return a 403 Forbidden error.
  * The federation.registration.enabled configuration parameter is now named
    federation.registration-enabled. It still defaults to False.
  * The federation.instance configuration parameter no longer needs to be set to
    "gateway" and no longer has any effect.
  * The value "local" in the spring.profiles.active configuration parameter no
    longer has any effect.
  * The catalog.catalogs-selector configuration parameter no longer has any effect.
  * The various task.* configuration parameters no longer have any effect.
  * The cdms.client.page-size configuration parameter no longer has any effect.
  * The catalog-local.source, catalog-local.catalogs, codes-local.source,
    peers-local.source, and peer-local.interval configuration parameters no
    longer have any effect.

* Documentation is updated to reflect these changes.
Version 2.2.1, 2019-07-18
-------------------------
* Fix Boreas branch Jenkins build not working (`ACUMOS-3244 <https://jira.acumos.org/browse/ACUMOS-3244>`_)
* Fix DI artifact create fail due to Federation use of a stale TCP stream (`ACUMOS-3193 <https://jira.acumos.org/browse/ACUMOS-3193>`_)
* Federated model DI name to include model name - same as source peer DI name (`ACUMOS-3195 <https://jira.acumos.org/browse/ACUMOS-3195>`_)
Version 2.2.0, 2019-04-16
-------------------------
* Increase Spring async task timeout value (spring.mvc.async.request-timeout)
to 10 minutes (`ACUMOS-2749 <https://jira.acumos.org/browse/ACUMOS-2749>`_)
This prevents timeouts during retrieval of large docker image artifacts.
* Update to CDS 2.2.x with subscription by catalogs (`ACUMOS-2732 <https://jira.acumos.org/browse/ACUMOS-2732>`_)
This makes changes to the REST api for accessing Federation on both the
public and private interfaces:
* When listing solutions, the optional selector query parameter is replaced
by a required catalogId query parameter
* When getting revision details an optional catalogId query parameter is
added, used to retrieve descriptions and documents, from that catalog, for
the revision. If not specified, no descriptions or documents are returned.
* When getting artifact and document content, the form of the URI is changed
to eliminate the unused solution and revision IDs.
* When getting documents for a revision, the form of the URI is changed
to eliminate the unused solution ID and a required catalogId query parameter
is added.
Solution revisions in CDS no longer have access type codes, so the (optional)
catalog.default-access-type-code configuration parameter has been removed.
* Eliminate vulnerabilities and many "code smells" identified by SONAR.
Version 2.1.2, 2019-03-27
-------------------------
* Add JUnit test cases to reach 50% or better code coverage (`ACUMOS-2584 <https://jira.acumos.org/browse/ACUMOS-2584>`_)
* Add API to list remote catalogs to support subscribing (`ACUMOS-2575 <https://jira.acumos.org/browse/ACUMOS-2575>`_)
API to list catalogs is /catalogs
* Refactor code to avoid duplication related to implementing listing remote catalogs.
* Document configuration parameters (`ACUMOS-2661 <https://jira.acumos.org/browse/ACUMOS-2661>`_)
Version 2.1.1, 2019-03-07
-------------------------
* Solution picture should be copied (`ACUMOS-2570 <https://jira.acumos.org/browse/ACUMOS-2570>`_)
Version 2.1.0, 2019-03-05
-------------------------
* Update to CDS 2.1.2
Version 2.0.1, 2019-02-26
-------------------------
* Add catalogId field in solution search selector (`ACUMOS-2285 <https://jira.acumos.org/browse/ACUMOS-2285>`_)
* Normalize configured Nexus URL to have exactly one trailing slash (`ACUMOS-2554 <https://jira.acumos.org/browse/ACUMOS-2554>`_)
* Allow server to run as unprivileged user (`ACUMOS-2551 <https://jira.acumos.org/browse/ACUMOS-2551>`_)
* Various problems found with version 2.0.0 (`ACUMOS-2570 <https://jira.acumos.org/browse/ACUMOS-2570>`_)
- List dependency on jersey-hk2 for spring-boot
- Instant rendered as JSON object rather than seconds since epoch
- Seconds since epoch may parse as Integer instead of Long
Version 2.0.0, 2019-02-20
-------------------------
* Use Boreas log pattern; remove EELF (`ACUMOS-2329 <https://jira.acumos.org/browse/ACUMOS-2329>`_)
* Fix repeated update of metadata (`ACUMOS-2399 <https://jira.acumos.org/browse/ACUMOS-2399>`_)
* Update to CDS 2.0.7
Version 1.18.7, 2018-10-30
--------------------------
* Fix the subscription task early cancellation (`ACUMOS-1937 <https://jira.acumos.org/browse/ACUMOS-1937>`_)
* Fix the preemptive authentication (`ACUMOS-1952 <https://jira.acumos.org/browse/ACUMOS-1952>`_)
Version 1.18.6, 2018-10-08
--------------------------
* Fix for the handling of mis-represented content uris (`ACUMOS-1780 <https://jira.acumos.org/browse/ACUMOS-1780>`_)
* Adds a subscription option directing the handling of errors in content retrieval with respect to catalog updates
Version 1.18.5, 2018-10-02
--------------------------
* Fix for loss of file name prefix/suffix (`ACUMOS-1780 <https://jira.acumos.org/browse/ACUMOS-1780>`_)
* Fix for processing of docker artifacts, push to the local registry (`ACUMOS-1781 <https://jira.acumos.org/browse/ACUMOS-1781>`_)
* Add a peer 'isActive' pre-authorization check on controller calls
* Fix the artifact content processing condition in the gateway
Version 1.18.4, 2018-09-21
--------------------------
* Fix download of large artifacts
* Upgrade Spring-Boot to 1.5.16.RELEASE (`ACUMOS-1754 <https://jira.acumos.org/browse/ACUMOS-1754>`_)
Version 1.18.3, 2018-09-14
--------------------------
* Increase max heap size
* Configuration changes: a new top-level docker configuration block::

      "docker": {
          "host": "tcp://your_host:port",
          "registryUrl": "your_registry:port",
          "registryUsername": "docker_username",
          "registryPassword": "docker_password",
          "registryEmail": ""
      }
Version 1.18.2, 2018-09-13
--------------------------
* Rely on solution detail API for mapping (`ACUMOS-1690 <https://jira.acumos.org/browse/ACUMOS-1690>`_)
* Add binary stream to resource http content mapper (`ACUMOS-1690 <https://jira.acumos.org/browse/ACUMOS-1690>`_)
* Allow configuration of underlying executor and scheduler
* Do not overwrite user during mapping for local solutions
Version 1.18.1, 2018-09-05
--------------------------
* Simplified catalog solutions lookup
* Fix 'self' peer not found (`ACUMOS-1694 <https://jira.acumos.org/browse/ACUMOS-1694>`_)
* Fix task scheduler initialization (`ACUMOS-1690 <https://jira.acumos.org/browse/ACUMOS-1690>`_)
* Fix solution tag handling
* Move solution and revision updates to service interface
Version 1.18.0, 2018-09-05
--------------------------
* Align with data model changes from CDS 1.18.x
* Fix subscription update processing (`ACUMOS-1693 <https://jira.acumos.org/browse/ACUMOS-1693>`_)
Version 1.17.1, 2018-09-04
--------------------------
* Spread the use of configuration beans (`ACUMOS-1692 <https://jira.acumos.org/browse/ACUMOS-1692>`_)
Version 1.17.0, 2018-08-14
--------------------------
* Align with data model changes from CDS 1.17.x
* Add revision document federation (`ACUMOS-1606 <https://jira.acumos.org/browse/ACUMOS-1606>`_)
* Add tag federation (`ACUMOS-1544 <https://jira.acumos.org/browse/ACUMOS-1544>`_)
* Fix authorship federation (`ACUMOS-626 <https://jira.acumos.org/browse/ACUMOS-626>`_)
* The federation APIs for artifact and document content access have changed
to /solutions/{solutionId}/revisions/{revisionId}/artifacts/{artifactId}/content
and /solutions/{solutionId}/revisions/{revisionId}/documents/{documentId}/content
Version 1.16.1, 2018-08-08
--------------------------
* Temporary patch for tag handling during federation procedures
Version 1.16.0, 2018-08-01
--------------------------
* Aligns with the data model changes from CDS 1.16.x
* Minor fixes in order to adhere to project coding standards.
Version 1.15.1, 2018-07-31
--------------------------
* Fixes the catalog solution lookup strategy after the lookup criteria moved to other entities (solution -> revision)
* Fixes some Sonar complaints
* Adds more unit tests for CDS based service implementations
* Align version numbers with CDS
Version 1.1.5, 2018-07-12
-------------------------
* Aligns with the data model changes from CDS 1.15 (`ACUMOS-1330 <https://jira.acumos.org/browse/ACUMOS-1330>`_)
Version 1.1.4.1, 2018-07-11
---------------------------
* Fix handling of docker images with no tags (`ACUMOS-1015 <https://jira.acumos.org/browse/ACUMOS-1015>`_)
Version 1.1.4, 2018-06-20
-------------------------
* Fix result size test when retrieving 'self' peer
* Fix handling of null solutions filter in the service. Fix the handling of no such item errors in catalog controller.
Version 1.1.3, 2018-05-10
-------------------------
* Upgrade to CDS 1.14.4
Version 1.1.2, 2018-04-19
-------------------------
* Revise code for Sonar warnings (`ACUMOS-672 <https://jira.acumos.org/browse/ACUMOS-672>`_)
Version 1.1.1, 2018-04-13
-------------------------
* Unit tests for local interface
* Separate federation and local service interfaces (`ACUMOS-276 <https://jira.acumos.org/browse/ACUMOS-276>`_)
Version 1.1.0, 2018-03-09
-------------------------
* Separate the federation and local interfaces with respect to network configuration, authorization and available REST API.
* Upgrade to CDS 1.14.0
Version 1.0.0, 2018-02-12
-------------------------
* Use release (not snapshot) versions of acumos-nexus-client and common-dataservice libraries
* Limit JVM memory use via Docker start command
* Revise docker projects to deploy images to nexus3.acumos.org
* Make aspectjweaver part of runtime
* Add dependency copy plugin
Version 0.2.0, 2017-11-28
-------------------------
* Support for CDS 1.9.0
* 2-Way SSL Support
* X509 Subject Principal Authentication
.. source: contingent/code/posts/intro.rst (kennywbin/500lines)
==============
Introduction
==============
:Date: 2014/02/28
Several readers have been asking
for a series of articles about mathematics,
so the next several blog posts will be about
fundamental mathematical operations.
.. source: rtfm/source/_pages/build/dev.rst (newscorpaus/wordless)

.. _BuildDev:
Development build
=================
.. code-block:: bash
yarn clean:dist && yarn build:dev
.. note::
Most of the time you'll be working using the built-in development server
through ``yarn server``, but invoking a build arbitrarily is often useful.
.. source: docs/guides/index.rst (SD2E/python-datacatalog)

============================
Finding and Discovering Data
============================
.. toctree::
:maxdepth: 1
jupyter_query
custom_metadata
redash_query
redash_dashboard
redash_notification
reactor_simple
.. source: AUTHORS.rst (tomwerneruk/traefik-prism)

=======
Credits
=======
Development Lead
----------------
* Tom Werner <tom@fluffycloudsandlines.cc>
Contributors
------------
None yet. Why not be the first?
.. source: doc/source/api/genui.qsar.migrations.rst (Tontolda/genui)

genui.qsar.migrations package
=============================
Submodules
----------
genui.qsar.migrations.0001\_initial module
------------------------------------------
.. automodule:: genui.qsar.migrations.0001_initial
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: genui.qsar.migrations
:members:
:undoc-members:
:show-inheritance:
.. source: doc/manual/plasma_equilibrium.rst (ratnania/caid)

.. role:: envvar(literal)
.. role:: command(literal)
.. role:: file(literal)
.. role:: ref(title-reference)
.. _gallery:
Plasma Equilibrium
******************
MHD Equilibrium
^^^^^^^^^^^^^^^
.. todo:: add grad-shafranov and soloviev solutions
Local Equilibrium
^^^^^^^^^^^^^^^^^
D-Shaped -- Miller
__________________
We consider a :math:`2D` local equilibrium as described by Miller. The change of coordinate is given in polar coordinates, by:
.. math::
R(r, \theta) &= R_0(r) + r \cos(\theta + \sinh(\delta) \sin(\theta))
\\
Z(r,\theta) &= \kappa(r) r \sin(\theta)
.. todo:: add citation Miller
where
.. math::
R_0 &= A \tilde{\psi}
\\
R_0(r) &= ( A - \partial_r R_0 ) \tilde{\psi} + r (\partial_r R_0)
\\
\kappa(r) &= \kappa_0 (1 + s_{\kappa} \ln(\frac{r}{\tilde{\psi}}))
\\
\delta(r) &= s_{\delta} \sqrt{1- \delta_0^2}\ln(\frac{r}{\tilde{\psi}})
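For a quick numerical check, the boundary map above can be evaluated directly. The sketch below is illustrative only: it fixes :math:`\kappa` and :math:`\delta` to constants instead of the radial profiles, and the function name is not part of caid's API.

.. code-block:: python

    import math

    def miller_point(r, theta, r0=3.0, kappa=1.7, delta=0.4):
        """Evaluate the Miller D-shaped boundary map at (r, theta),
        using the formulas above with constant kappa and delta."""
        big_r = r0 + r * math.cos(theta + math.sinh(delta) * math.sin(theta))
        big_z = kappa * r * math.sin(theta)
        return big_r, big_z

    # Sample the flux surface r = 0.3 at eight poloidal angles.
    boundary = [miller_point(0.3, 2.0 * math.pi * k / 8.0) for k in range(8)]

At :math:`\theta = 0` this reduces to :math:`(R_0 + r, 0)`, which is a convenient sanity check.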
Two kinds of parameters must be provided:
* **shape parameters** are :math:`A, \tilde{\psi}, \kappa_0, \delta_0, \alpha`. The user should specify them as a dictionary
.. literalinclude:: ../../examples/plasma_equilibrium_2d.py
:lines: 20-25
* **equilibrium parameters** are :math:`s_{\kappa}, s_{\delta}, \partial_r R_0, q, s`. The user should specify them as a dictionary
.. literalinclude:: ../../examples/plasma_equilibrium_2d.py
:lines: 27-32
Now that both the shape and equilibrium parameters are chosen, the user can create the Miller equilibrium simply by importing the function **miller_equilibrium** from **caid.cad_geometry**:
.. literalinclude:: ../../examples/plasma_equilibrium_2d.py
:lines: 17-39
The resulting plot is
.. image:: include/geometry/plasma_equilibrium_2d_miller_ex2.png
:width: 8cm
:height: 8cm
Default parameters are chosen if the user does not specify them.
.. literalinclude:: ../../examples/plasma_equilibrium_2d.py
:lines: 7-13
The resulting plot is
.. image:: include/geometry/plasma_equilibrium_2d_miller_ex1.png
:width: 8cm
:height: 8cm
The next step is to remove the internal hole. This can be done in two ways:
* *using a singular mapping*
.. literalinclude:: ../../examples/plasma_equilibrium_2d.py
:lines: 43-54
The resulting plot is
.. image:: include/geometry/plasma_equilibrium_2d_miller_ex3.png
:width: 8cm
:height: 8cm
As we can see, the resulting mapping is not regular. Additional work must be done: insert new knots and move the control points to ensure :math:`\mathcal{C}^1` continuity.
.. todo:: regular map
* *using a 5-patch multipatch approach*
.. literalinclude:: ../../examples/plasma_equilibrium_2d.py
:lines: 58-72
The resulting plots are
.. image:: include/geometry/plasma_equilibrium_2d_miller_ex4_info.png
:width: 8cm
:height: 8cm
.. image:: include/geometry/plasma_equilibrium_2d_miller_ex4.png
:width: 8cm
:height: 8cm
.. Local Variables:
.. mode: rst
.. End:
.. source: docs_original/api/episcanpy.pl.overlap_heatmap.rst (kridsadakorn/epiScanpy)

:github_url: https://github.com/colomemaria/epiScanpy/tree/master/episcanpy/plotting/_heatmap.py#L85-L117
episcanpy.pl.overlap\_heatmap
=============================
.. currentmodule:: episcanpy
.. autofunction:: pl.overlap_heatmap

.. source: sip/src/online_learners/header.rst (fbobee/Alpenglow)

Online learners
---------------
This is the online_learners header file.
.. source: doc/source/index.rst (alejandro-cermeno/arch)

.. image:: images/color-logo.svg
:width: 33.3%
:alt: arch logo
.. note::
`Stable documentation <https://bashtage.github.io/arch/>`_ for the latest release
is located at `doc <https://bashtage.github.io/arch/>`_.
Documentation for `recent developments <https://bashtage.github.io/arch/devel/>`_
is located at `devel <https://bashtage.github.io/arch/devel/>`_.
Introduction
============
The ARCH toolbox contains routines for:
- Univariate volatility models;
- Bootstrapping;
- Multiple comparison procedures;
- Unit root tests;
- Cointegration Testing and Estimation; and
- Long-run covariance estimation.
Future plans are to continue to expand this toolbox to include additional
routines relevant for the analysis of financial data.
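To give a concrete flavor of the first item, univariate volatility modeling, the core of a GARCH(1,1) model is a one-line conditional-variance recursion. The sketch below is a hand-rolled illustration, not the package's API (which centers on ``arch_model``):

.. code-block:: python

    def garch11_variances(returns, omega=0.1, alpha=0.1, beta=0.8):
        """Filter conditional variances:
        sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]."""
        # Initialize at the unconditional variance omega / (1 - alpha - beta).
        sigma2 = [omega / (1.0 - alpha - beta)]
        for r in returns[:-1]:
            sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
        return sigma2

    variances = garch11_variances([0.5, -1.2, 0.3, 0.8])

The parameter values here are illustrative; in practice they are estimated by maximum likelihood, which is what the toolbox's volatility routines do.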
.. toctree::
:maxdepth: 2
:hidden:
Univariate Volatility Models <univariate/univariate>
Bootstrapping <bootstrap/bootstrap>
Multiple Comparison Problems <multiple-comparison/multiple-comparisons>
Unit Root Tests and Cointegration Analysis <unitroot/unitroot>
Long-run Covariance Estimation <covariance/covariance>
API Reference <api>
Change Log <changes>
Citation
========
This package should be cited using Zenodo. For example, for the 4.18 release:
.. [*] Kevin Sheppard (2021, March 3). bashtage/arch: Release 4.18 (Version v4.18).
Zenodo. https://doi.org/10.5281/zenodo.593254
.. image:: https://zenodo.org/badge/doi/10.5281/zenodo.593254.svg
:target: https://doi.org/10.5281/zenodo.593254
Index
=====
* :ref:`genindex`
* :ref:`modindex`
.. source: docs/base_menu.rst (robertwayne/discord-menus)

BaseMenu
========
.. autoclass:: dpymenus.BaseMenu
:members:
.. source: docs/about/changelog.rst (vpchung/challengeutils)

*********
Changelog
*********
This changelog is used to track all major changes to **challengeutils**.
For older releases, visit the `GitHub releases`_.
.. _Github releases: https://github.com/Sage-Bionetworks/challengeutils/releases
v4.1.0
------
.. Important::
**Support for synapseclient<2.4.0 is no longer available**; upgrade to the
latest version with:
.. code:: console
$ pip install synapseclient --upgrade
.. Important::
**Support for Python 3.6 will be dropped in the latter
half of this year.**
- Fix `challengeutils` cli bug
- Update `synapse_login` to support Synapse PAT env variable.
v4.0.1
------
.. Important::
**Support for synapseclient<2.3.0 is no longer available**; upgrade to the
latest version with:
.. code:: console
$ pip install synapseclient --upgrade
.. Important::
**Support for Python 3.6 will be dropped in the latter
half of this year.**
- Support `Python` 3.9
- Deprecate `helpers.py` and create `stop_submissions_over_quota` function
- Fix conditionals when validating permissions for project submissions
- Stopping submissions over a quota now uses submission views instead of evaluation queues.
v3.2.0
------
- Added push and pull Synapse wiki feature
v3.1.0
------
.. Important::
**Support for synapseclient<2.2.0 is no longer available**; upgrade to the
latest version with:
.. code:: console
$ pip install synapseclient --upgrade
- Remove team member from team
- Upgrade synapseclient used
- Retrieve number of members given team name / id.
- Move functions to team module
v3.0.0
------
.. Important::
**Support for synapseclient<2.1.0 is no longer available**; upgrade to the
latest version with:
.. code:: console
$ pip install synapseclient --upgrade
- Add Synapse `Thread` and `Reply` module
- Rename command line client functions to have dashes inbetween words (e.g. `challengeutils create-challenge`). This is a breaking change, but is done to standardize the command line client commands.
- `validate_project` now returns errors that are `str` type instead of `list`
v2.2.0
------
- Added `delete_submission`, `validate_project` and `archive_project` functions
- `Submission Views` are now supported in `Synapse`. Updating annotations now adds both `annotations` and `submissionAnnotations`.
v2.1.0
------
- Remove `invite_member_to_team` function as functionality is in `synapseclient`
- `challengeutils.discussion.copy_thread` now also copies replies instead of just the thread
- Fixed `challengeutils.createchallenge` function bug - Do not use `Challenge` class to instantiate the body of `restPOST` or `restPUT` calls
- Refactored and added tests for `challengeutils.mirrorwiki`
- `challengeutils.mirrorwiki.mirrorwiki` renamed to `challengeutils.mirrorwiki.mirror`
- Added `dryrun` parameter to let users know which pages would be updated in `challengeutils.mirrorwiki`
- Add automation of code coverage
- Revise documentation
v2.0.1
------
.. Important::
**Support for synapseclient<2.0.0 is no longer available**; upgrade to the
latest version with:
.. code:: console
$ pip install synapseclient --upgrade
- Added `CONTRIBUTING.md`
- Revised `README.md`
- Added `CODE_OF_CONDUCT.md`
- Update `version`
- Refine ``challenge`` services
- Update library dependency, e.g. using ``unittest.mock`` instead of ``mock``
- Fix queue query CLI errors
- Fix ``mirrorwiki`` error
v1.6.0
------
**synapseclient 2.0.0 is now fully supported!**
- Update the live page wiki content that ``createchallenge`` would create
- Show URLs of projects and teams created by ``createchallenge``
- Auto-build sphinx docs to ``gh-pages`` with ``gh-actions``, thus removing the ``readthedocs`` dependency
v1.5.2
------
- Lock down ``synapseclient==1.9.4`` version in ``requirements.txt``
v1.5.1
------
- Versioning fix
- Add auto-generated documentation
- Fix CLI command for annotating submission
- Add ``setevaluationquota`` command
| 28.083916 | 200 | 0.717629 |
5775b2eb01d2ea865caf55d96a9fdc7e134dd127 | 7,462 | rst | reStructuredText | models/wrf/WRF_DART_utilities/wrf_dart_obs_preprocess.rst | johnsonbk/johnsonbk.github.io | 10d8fd3ed8df2e6da0fa604f35e8a8eb1aa09bab | [
"Apache-2.0"
] | null | null | null | models/wrf/WRF_DART_utilities/wrf_dart_obs_preprocess.rst | johnsonbk/johnsonbk.github.io | 10d8fd3ed8df2e6da0fa604f35e8a8eb1aa09bab | [
"Apache-2.0"
] | null | null | null | models/wrf/WRF_DART_utilities/wrf_dart_obs_preprocess.rst | johnsonbk/johnsonbk.github.io | 10d8fd3ed8df2e6da0fa604f35e8a8eb1aa09bab | [
"Apache-2.0"
] | null | null | null | PROGRAM ``wrf_dart_obs_preprocess``
===================================
Overview
--------
Program to preprocess observations, with specific knowledge of the WRF domain.
This program will exclude all observations outside of the given WRF domain. There are options to exclude or increase the
error values of obs close to the domain boundaries. The program can superob (average) aircraft and satellite wind obs if
they are too dense.
This program can read up to 9 additional obs_seq files and merge their data in with the basic obs_sequence file which is
the main input.
This program can reject surface observations if the elevation encoded in the observation is too different from the wrf
surface elevation.
This program can exclude observations above a specified height or pressure.
This program can overwrite the incoming Data QC value with another.
Namelist
--------
This namelist is read from the file ``input.nml``. Namelists start with an ampersand '&' and terminate with a slash '/'.
Character strings that contain a '/' must be enclosed in quotes to prevent them from prematurely terminating the
namelist.
::
&wrf_obs_preproc_nml
file_name_input = 'obs_seq.old'
file_name_output = 'obs_seq.new'
sonde_extra = 'obs_seq.rawin'
land_sfc_extra = 'obs_seq.land_sfc'
metar_extra = 'obs_seq.metar'
marine_sfc_extra = 'obs_seq.marine'
sat_wind_extra = 'obs_seq.satwnd'
profiler_extra = 'obs_seq.profiler'
gpsro_extra = 'obs_seq.gpsro'
acars_extra = 'obs_seq.acars'
trop_cyclone_extra = 'obs_seq.tc'
overwrite_obs_time = .false.
obs_boundary = 0.0
increase_bdy_error = .false.
maxobsfac = 2.5
obsdistbdy = 15.0
sfc_elevation_check = .false.
sfc_elevation_tol = 300.0
obs_pressure_top = 0.0
obs_height_top = 2.0e10
include_sig_data = .true.
tc_sonde_radii = -1.0
superob_aircraft = .false.
aircraft_horiz_int = 36.0
aircraft_pres_int = 2500.0
superob_sat_winds = .false.
sat_wind_horiz_int = 100.0
sat_wind_pres_int = 2500.0
overwrite_ncep_satwnd_qc = .false.
overwrite_ncep_sfc_qc = .false.
/
|
.. container::
Item
Type
Description
**Generic parameters:**
file_name_input
character(len=129)
The input obs_seq file.
file_name_output
character(len=129)
The output obs_seq file.
sonde_extra, land_sfc_extra, metar_extra, marine_sfc_extra, marine_sfc_extra, sat_wind_extra, profiler_extra,
gpsro_extra, acars_extra, trop_cyclone_extra
character(len=129)
The names of additional input obs_seq files, which if they exist, will be merged in with the obs from the
``file_name_input`` obs_seq file. If the files do not exist, they are silently ignored without error.
overwrite_obs_time
logical
If true, replace the incoming observation time with the analysis time. Not recommended.
**Boundary-specific parameters:**
obs_boundary
real(r8)
Number of grid points around domain boundary which will be considered the new extent of the domain. Observations outside
this smaller area will be excluded.
increase_bdy_error
logical
If true, observations near the domain boundary will have their observation error increased by ``maxobsfac``.
maxobsfac
real(r8)
If ``increase_bdy_error`` is true, multiply the error by a ramped factor. This item sets the maximum error.
obsdistbdy
real(r8)
If ``increase_bdy_error`` is true, this defines the region around the boundary (in number of grid points) where the
observation error values will be altered. This is ramped, so when you reach the innermost points the change in
observation error is 0.0.
**Parameters to reduce observation count :**
sfc_elevation_check
logical
If true, check the height of surface observations against the surface height in the model.
sfc_elevation_tol
real(r8)
If ``sfc_elevation_check`` is true, the maximum difference between the elevation of a surface observation and the model
surface height, in meters. If the difference is larger than this value, the observation is excluded.
obs_pressure_top
real(r8)
Observations with a vertical coordinate in pressure which are located above this pressure level (i.e. the obs vertical
value is smaller than the given pressure) will be excluded.
obs_height_top
real(r8)
Observations with a vertical coordinate in height which are located above this height value (i.e. the obs vertical value
is larger than the given height) will be excluded.
**Radio/Rawinsonde-specific parameters :**
include_sig_data
logical
If true, include significant level data from radiosondes.
tc_sonde_radii
real(r8)
If greater than 0.0 remove any sonde observations closer than this distance in Kilometers to the center of a Tropical
Cyclone.
**Aircraft-specific parameters :**
superob_aircraft
logical
If true, average all aircraft observations within the given radius and output only a single observation. Any observation
that is used in computing a superob observation is removed from the list and is not used in any other superob
computation.
aircraft_horiz_int
real(r8)
If ``superob_aircraft`` is true, the horizontal distance in Kilometers which defines the superob area. All other unused
aircraft observations within this radius will be averaged with the current observation.
aircraft_pres_int
real(r8)
If ``superob_aircraft`` is true, the vertical distance in Pascals which defines the maximum separation for including an
observation in the superob computation.
**Satellite Wind-specific parameters :**
superob_sat_winds
logical
If true, average all sat_wind observations within the given radius and output only a single observation. Any observation
that is used in computing a superob observation is removed from the list and is not used in any other superob
computation.
sat_wind_horiz_int
real(r8)
If ``superob_sat_winds`` is true, the horizontal distance in Kilometers which defines the superob area. All other unused
sat_wind observations within this radius will be averaged with the current observation.
sat_wind_vert_int
real(r8)
If ``superob_sat_winds`` is true, the vertical distance in Pascals which defines the maximum separation for including an
observation in the superob computation.
overwrite_ncep_satwnd_qc
logical
If true, replace the incoming Data QC value in satellite wind observations with 2.0.
**Surface Observation-specific parameters :**
overwrite_ncep_sfc_qc
logical
If true, replace the incoming Data QC value in surface observations with 2.0.
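A namelist block in ``input.nml`` setting these entries might look like the following sketch. The namelist group
name ``&wrf_obs_preproc_nml`` and all values shown are illustrative assumptions, not recommendations; check the
namelist name and defaults shipped with your DART release::

   &wrf_obs_preproc_nml
      sfc_elevation_check      = .true.
      sfc_elevation_tol        = 300.0
      obs_pressure_top         = 10000.0
      obs_height_top           = 20000.0
      include_sig_data         = .true.
      tc_sonde_radii           = -1.0
      superob_aircraft         = .true.
      aircraft_horiz_int       = 36.0
      aircraft_vert_int        = 2500.0
      superob_sat_winds        = .true.
      sat_wind_horiz_int       = 100.0
      sat_wind_vert_int        = 2500.0
      overwrite_ncep_satwnd_qc = .false.
      overwrite_ncep_sfc_qc    = .false.
   /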
Modules used
------------
::

   types_mod
   obs_sequence_mod
   utilities_mod
   obs_kind_mod
   time_manager_mod
   model_mod
   netcdf
Files
-----
- Input namelist: ``input.nml``
- Input WRF state netCDF files: ``wrfinput_d01, wrfinput_d02, ...``
- Input obs_seq files (as specified in namelist)
- Output obs_seq file (as specified in namelist)
File formats
~~~~~~~~~~~~
This utility can read one or more obs_seq files and combine them while doing the rest of the processing. It uses the
standard DART observation sequence file format.
References
----------
- Generously contributed by Ryan Torn.
Linear Gauge Chart
==================
Linear Gauge
------------
Demo: `linear-gauge-chart on StackBlitz <https://stackblitz.com/edit/swimlane-linear-gauge-chart>`_
Inputs
------
.. list-table::
   :header-rows: 1
   :widths: 18 15 10 57

   * - Property
     - Type
     - Default Value
     - Description
   * - view
     - number[]
     -
     - the dimensions of the chart [width, height]. If left undefined, the chart will fit to the parent container
       size
   * - scheme
     - object
     -
     - the color scheme of the chart
   * - customColors
     - function or object
     -
     - custom colors for the chart. Used to override a color for a specific value
   * - animations
     - boolean
     - true
     - enable animations
   * - min
     - number
     - 0
     - starting point of the scale
   * - max
     - number
     - 100
     - ending point of the scale
   * - value
     - number
     - 0
     - the value represented on the gauge
   * - previousValue
     - number
     -
     - the value represented by the vertical line on the gauge. Use this if you want to compare the current value
       to a previous one
   * - units
     - string
     -
     - text to display under the value
   * - valueFormatting
     - function
     -
     - function that formats the value in the middle of the chart
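The ``valueFormatting`` and ``customColors`` inputs accept plain functions. A minimal TypeScript sketch follows;
the function names and the exact argument passed to the ``customColors`` callback are illustrative assumptions,
not taken from the ngx-charts source:

```typescript
// Formats the number displayed in the middle of the gauge (bound via [valueFormatting]).
const valueFormatting = (value: number): string => `${value.toFixed(1)} GB`;

// Returns an override color for a specific name, falling back to a default
// (bound via [customColors]).
const customColors = (name: string): string =>
  name === "critical" ? "#d62728" : "#1f77b4";

console.log(valueFormatting(42)); // "42.0 GB"
```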
Outputs
-------
======== ===========
Property Description
======== ===========
select click event
======== ===========