.. image:: https://raw.github.com/emsig/logos/main/empymod/empymod-logo.png
:target: https://emsig.xyz
:alt: empymod logo
|
.. image:: https://img.shields.io/pypi/v/empymod.svg
:target: https://pypi.python.org/pypi/empymod/
:alt: PyPI
.. image:: https://img.shields.io/conda/v/conda-forge/empymod.svg
:target: https://anaconda.org/conda-forge/empymod/
:alt: conda-forge
.. image:: https://img.shields.io/badge/python-3.7+-blue.svg
:target: https://www.python.org/downloads/
:alt: Supported Python Versions
.. image:: https://img.shields.io/badge/platform-linux,win,osx-blue.svg
:target: https://anaconda.org/conda-forge/empymod/
:alt: Linux, Windows, OSX
|
Open-source full 3D electromagnetic modeller for 1D VTI media
- **Website:** https://emsig.xyz
- **Documentation:** https://empymod.emsig.xyz
- **Source Code:** https://github.com/emsig/empymod
- **Bug reports:** https://github.com/emsig/empymod/issues
- **Contributing:** https://empymod.emsig.xyz/en/latest/dev
- **Contact:** see https://emsig.xyz
- **Zenodo:** https://doi.org/10.5281/zenodo.593094
Available through ``conda`` and ``pip``; consult the `documentation
<https://empymod.emsig.xyz>`_ for detailed installation instructions.
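
For a quick start (a minimal sketch; both channels match the badges above):

.. code-block:: bash

   pip install empymod
   # or, via conda-forge:
   conda install -c conda-forge empymod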

======
Quotas
======

Last changed: 2017-09-14

.. NOTE::
   Work in progress. This is a draft, not a final document.

How many resources a project can use is governed by quotas.
Quotas are set per region, and if a project has not been assigned
a quota it automatically gets the default quota (see below).

Default quota
=============

All projects that have not been given their own quota in a region are
assigned the default quota. All demo projects also have the default quota.
It is the same for everyone and will not be changed.

==================== =========== =============
quota                name        default
==================== =========== =============
Instances            instances   2
vCPU                 cores       2
Memory               ram         2048 MB
Volume count         volumes     1
Volume size          gigabytes   20 GB
Volume snapshots     snapshots   3
==================== =========== =============

Explanation
===========

All quotas apply to the sum of all resources used by a project in a region.

Instances
---------

The total number of instances that can be created in a project.

vCPU
----

The number of processors (vCPUs) that can be assigned to instances.

Memory
------

The amount of memory that can be assigned to instances.

Volume count
------------

"Volume" is the term for the block storage in UH-IaaS. The volume count is
how many volumes can be created in a project.

Volume size
-----------

The total size of all volumes in a project.

Volume snapshots
----------------

The total number of snapshots across all volumes in a project.
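
Checking your quota
===================

UH-IaaS is built on OpenStack. Assuming you have the OpenStack command-line
client configured for your project and region (an assumption; client setup is
not covered here), you can inspect your quotas and current usage:

.. code-block:: bash

    openstack quota show
    openstack limits show --absolute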

Docker tasks
============
Change to the application's directory, replacing ``APP``:
.. code-block:: bash
cd /data/deploy/APP
Create a superuser:
.. code-block:: bash
docker-compose run --rm web python manage.py createsuperuser
Migrate the database:
.. code-block:: bash
docker-compose run --rm web python manage.py migrate

What's new in Tornado 6.0
=========================
In progress
-----------
Backwards-incompatible changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Python 2.7 and 3.4 are no longer supported; the minimum supported
Python version is 3.5.2.
- APIs deprecated in Tornado 5.1 have been removed. This includes the
``tornado.stack_context`` module and most ``callback`` arguments
throughout the package. All removed APIs emitted
`DeprecationWarning` when used in Tornado 5.1, so running your
application with the ``-Wd`` Python command-line flag or the
environment variable ``PYTHONWARNINGS=d`` should tell you whether
your application is ready to move to Tornado 6.0.
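
  For example (a minimal illustration; the hypothetical ``myapp.py`` stands in
  for your own entry point)::

      python -Wd myapp.py
      # or, equivalently:
      PYTHONWARNINGS=d python myapp.py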
General changes
~~~~~~~~~~~~~~~
- Tornado now includes type annotations compatible with ``mypy``.
These annotations will be used when type-checking your application
with ``mypy``, and may be usable in editors and other tools.
- Tornado now uses native coroutines internally, improving performance.
`tornado.auth`
~~~~~~~~~~~~~~
- All ``callback`` arguments in this package have been removed. Use
the coroutine interfaces instead.
- The ``OAuthMixin._oauth_get_user`` method has been removed.
Override `~.OAuthMixin._oauth_get_user_future` instead.
`tornado.concurrent`
~~~~~~~~~~~~~~~~~~~~
- The ``callback`` argument to `.run_on_executor` has been removed.
- ``return_future`` has been removed.
`tornado.gen`
~~~~~~~~~~~~~
- Some older portions of this module have been removed. This includes
``engine``, ``YieldPoint``, ``Callback``, ``Wait``, ``WaitAll``,
``MultiYieldPoint``, and ``Task``.
- Functions decorated with ``@gen.coroutine`` no longer accept
``callback`` arguments.
`tornado.httpclient`
~~~~~~~~~~~~~~~~~~~~
- The behavior of ``raise_error=False`` has changed. It now only
  suppresses the errors raised by completed responses with non-200
  status codes (previously it suppressed all errors).
- The ``callback`` argument to `.AsyncHTTPClient.fetch` has been removed.
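
A minimal sketch of the new ``raise_error=False`` semantics (the URL is a
placeholder)::

    from tornado.httpclient import AsyncHTTPClient

    async def check():
        client = AsyncHTTPClient()
        # A completed 404 response is now returned rather than raised:
        response = await client.fetch('http://example.com/missing',
                                      raise_error=False)
        print(response.code)
        # Network-level failures (e.g. connection refused) still raise.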
`tornado.httputil`
~~~~~~~~~~~~~~~~~~
- ``HTTPServerRequest.write`` has been removed. Use the methods of
``request.connection`` instead.
`tornado.ioloop`
~~~~~~~~~~~~~~~~
- ``IOLoop.set_blocking_signal_threshold``,
``IOLoop.set_blocking_log_threshold``, ``IOLoop.log_stack``,
and ``IOLoop.handle_callback_exception`` have been removed.
- Improved performance of `.IOLoop.add_callback`.
`tornado.iostream`
~~~~~~~~~~~~~~~~~~
- All ``callback`` arguments in this module have been removed except
for `.BaseIOStream.set_close_callback`.
- ``streaming_callback`` arguments to `.BaseIOStream.read_bytes` and
`.BaseIOStream.read_until_close` have been removed.
- Eliminated unnecessary logging of "Errno 0".
`tornado.log`
~~~~~~~~~~~~~
- Log files opened by this module are now explicitly set to UTF-8 encoding.
`tornado.netutil`
~~~~~~~~~~~~~~~~~
- The results of ``getaddrinfo`` are now sorted by address family to
avoid partial failures and deadlocks.
`tornado.platform.twisted`
~~~~~~~~~~~~~~~~~~~~~~~~~~
- ``TornadoReactor`` and ``TwistedIOLoop`` have been removed.
``tornado.simple_httpclient``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- The default HTTP client now supports the ``network_interface``
request argument to specify the source IP for the connection.
- If a server returns a 3xx response code without a ``Location``
header, the response is raised or returned directly instead of
trying and failing to follow the redirect.
- When following redirects, methods other than ``POST`` will no longer
be transformed into ``GET`` requests. 301 (permanent) redirects are
now treated the same way as 302 (temporary) and 303 (see other)
redirects in this respect.
- Following redirects now works with ``body_producer``.
``tornado.stack_context``
~~~~~~~~~~~~~~~~~~~~~~~~~
- The ``tornado.stack_context`` module has been removed.
`tornado.tcpserver`
~~~~~~~~~~~~~~~~~~~
- `.TCPServer.start` now supports a ``max_restarts`` argument (same as
`.fork_processes`).
`tornado.testing`
~~~~~~~~~~~~~~~~~
- `.AsyncHTTPTestCase` now drops all references to the `.Application`
during ``tearDown``, allowing its memory to be reclaimed sooner.
- `.AsyncTestCase` now cancels all pending coroutines in ``tearDown``,
in an effort to reduce warnings from the python runtime about
coroutines that were not awaited. Note that this may cause
``asyncio.CancelledError`` to be logged in other places. Coroutines
that expect to be running at test shutdown may need to catch this
exception.
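
  A hedged sketch of a coroutine that tolerates this cancellation::

      import asyncio

      async def background_task():
          try:
              while True:
                  await asyncio.sleep(1)
          except asyncio.CancelledError:
              pass  # cancelled during AsyncTestCase.tearDown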
`tornado.web`
~~~~~~~~~~~~~
- The ``asynchronous`` decorator has been removed.
- The ``callback`` argument to `.RequestHandler.flush` has been removed.
- `.StaticFileHandler` now supports large negative values for the
``Range`` header and returns an appropriate error for ``end >
start``.
- It is now possible to set ``expires_days`` in ``xsrf_cookie_kwargs``.
`tornado.websocket`
~~~~~~~~~~~~~~~~~~~
- Pings and other messages sent while the connection is closing are
now silently dropped instead of logging exceptions.
`tornado.wsgi`
~~~~~~~~~~~~~~
- ``WSGIApplication`` and ``WSGIAdapter`` have been removed.

=========
Reference
=========
Main user functionality
-----------------------
.. autofunction:: aiohttp_sqlalchemy.setup
.. autofunction:: aiohttp_sqlalchemy.bind
.. autofunction:: aiohttp_sqlalchemy.init_db
.. autofunction:: aiohttp_sqlalchemy.get_session
Class based views
-----------------
.. warning::
The API of class based views is experimental and unstable.
.. autoclass:: aiohttp_sqlalchemy.SAMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.SAModelMixin
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.DeleteStatementMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.UpdateStatementMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.SelectStatementMixin
:inherited-members:
:members:
:show-inheritance:
Instance mixins
^^^^^^^^^^^^^^^
.. autoclass:: aiohttp_sqlalchemy.PrimaryKeyMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.UnitAddMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.UnitDeleteMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.UnitEditMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.UnitViewMixin
:inherited-members:
:members:
:show-inheritance:
List mixins
^^^^^^^^^^^
.. autoclass:: aiohttp_sqlalchemy.OffsetPaginationMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.ListAddMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.ListDeleteMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.ListEditMixin
:inherited-members:
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.ListViewMixin
:inherited-members:
:members:
:show-inheritance:
Views
^^^^^
.. autoclass:: aiohttp_sqlalchemy.SABaseView
:members:
:show-inheritance:
.. autoclass:: aiohttp_sqlalchemy.SAModelView
:members:
:show-inheritance:
Additional functionality
------------------------
.. autofunction:: aiohttp_sqlalchemy.sa_decorator
.. autofunction:: aiohttp_sqlalchemy.sa_middleware
.. autofunction:: aiohttp_sqlalchemy.get_engine
.. autofunction:: aiohttp_sqlalchemy.get_session_factory

Developers Guide
================
The classes in ASPyRobot are intended to be subclassed to add application
specific functionality. In the ``RobotServer`` subclass you can define
operation functions. These can be initiated from clients using the
``RobotClient.run_operation()`` method.
Robot operations should be decorated with ``server.foreground_operation``
or ``server.background_operation`` depending on whether the operation
blocks other robot operations or not. For example, any operation that will
drive the robot is a ``foreground_operation`` but reading information from the
robot can be run in the background.
For example::
from aspyrobot import RobotServer, RobotClient
from aspyrobot.server import foreground_operation
from aspyrobot.exceptions import RobotError
class SAMRobotServer(RobotServer):
@foreground_operation
def mount_sample(self, handle, sample):
if self.robot.motors_on.value != 1:
raise RobotError('Motors must be on')
self.robot.run_task('MountSample', sample)
Start the server as normal::
>>> from aspyrobot import Robot
>>> server = SAMRobotServer(Robot('SR08ID01ROB01:'))
>>> server.setup()
Then to execute the operation::
>>> robot = RobotClient()
>>> robot.setup()
>>> robot.run_operation('mount_sample', 'l A 1')

Installing
==========
Python 3.6 or Later
-------------------
*msticpy* requires Python 3.6 or later.
If you are running in a hosted environment such as Azure Notebooks,
Python is already installed. Please ensure that the Python 3.6 (or later)
kernel is selected for your notebooks.
If you are running the notebooks locally, you will need to install Python 3.6
or later. The Ananconda distribution is a good starting point since it comes
with many of packages required by *msticpy* pre-installed.
Creating a virtual environment
------------------------------
*msticpy* has a significant number of dependencies. To avoid conflicts
with packages in your existing Python environment you may want to
create a Python virtual environment
or a conda environment and install the package there.
For standard Python, use the ``virtualenv`` command (several alternatives
to virtualenv are available). For conda, use the conda ``env`` command.
In both cases, be sure to activate the environment before running Jupyter,
using ``activate {my_env_name}``.
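
A minimal sketch of both approaches (``msticpy-env`` is just an example name):

.. code-block:: bash

   # standard Python
   virtualenv msticpy-env
   source msticpy-env/bin/activate   # on Windows: msticpy-env\Scripts\activate

   # conda
   conda create --name msticpy-env python=3.6
   conda activate msticpy-env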
Installation
------------
Run the following command to install *msticpy*.
``pip install msticpy``
or for the latest dev build
``pip install git+https://github.com/microsoft/msticpy``

=======
syncfat
=======
A dodgy ripoff of ``rsync`` for my music files.
MTP mounts have been very flaky for me in the past.
FAT32 and exFAT filesystems have restrictions on what filenames are allowed.
This combination makes transferring music to my phone difficult.
This script helps. It takes care of munging filenames appropriately,
checking if a file needs updating in case of a partial or failed transfer,
and copying only subsets of your library.
If a file already exists at the destination with the same file size, it will not copy.
To copy the contents of 'Zechs Marquise/Getting Paid/' and 'Grails/' from your
music library to your mounted phone:
.. code-block:: sh
$ syncfat --source $HOME/Music \
--destination /mnt/phone/Music \
'Zechs Marquise/Getting Paid/' \
'Grails/'
The source defaults to ``pwd``. This script works best when you are in the
source directory, as you can leave off the source and tab-complete files to
copy:
.. code-block:: sh
$ cd $HOME/Music
$ syncfat --destination /mnt/phone/Music \
'Zechs Marquise/Getting Paid/' \
'Grails/'
This never deletes files, and should not be used for transferring back to
your music library. It is designed specifically for transferring to
intermittent FAT devices. File names are munged on the destination to fit FAT
naming restrictions, as well as other conversions that might happen.
Usage
=====
See ``syncfat --help`` for detailed help.
Two useful options:
``-v``
Print more information about what is happening.
Use this twice to print even more information
``--dry-run``
Don't actually transfer anything, only print what would happen.
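
A hedged example combining both options (nothing is copied, thanks to
``--dry-run``):

.. code-block:: sh

    $ syncfat -vv --dry-run \
          --destination /mnt/phone/Music \
          'Grails/'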

apcommand.accesspoints.broadcom.commands.DisableInterface.singular_data
=======================================================================
.. currentmodule:: apcommand.accesspoints.broadcom.commands
.. autoattribute:: DisableInterface.singular_data

*******
WCSUTIL
*******
The `wcsutil` module provides a stand-alone implementation of a WCS object that offers a number of basic transformations and query methods. Most (if not all) of these functions can also be obtained from the PyWCS or STWCS WCS objects, if those packages are installed.
.. automodule:: stsci.tools.wcsutil
:members:
:undoc-members:

.. _cn_api_nn_cn_pad_constant_like:
pad_constant_like
-------------------------------
:doc_source: paddle.fluid.layers.pad_constant_like

.. include:: ../../README.rst
:start-after: readme_badges_start
:end-before: readme_badges_end
###############################################
mappr - Easily convert between arbitrary types.
###############################################
.. include:: ../../README.rst
:start-after: readme_about_start
:end-before: readme_about_end
Links
=====
- `Source Code`_
- `Documentation`_
- `Contributing`_
- `Reference`_
.. _Documentation: https://novopl.github.io/mappr
.. _Contributing: https://novopl.github.io/mappr/pages/contributing.html
.. _Reference: https://novopl.github.io/mappr/pages/reference.html
.. _Source Code: https://github.com/novopl/mappr
Installation
============
.. include:: ../../README.rst
:start-after: readme_installation_start
:end-before: readme_installation_end
Example
=======
.. literalinclude:: /examples/simple.py
:language: python
More Documentation
==================
.. toctree::
:maxdepth: 1
pages/contrib
pages/reference

.. _usage:
Usage
=====
Make sure you have `Sphinx <http://sphinxsearch.com/>`_ itself up and running.
If you have no idea what to do, read it's own `official docs <http://sphinxsearch.com/docs/current.html>`_.
Maybe, :ref:`preparation` tutorial will be useful for you too.
So, you have some Python application and just want to start using Sphinx in it?
Some kind of filtered fulltext queries, maybe? Snippets? I know that feel :)
Sphinxit was created exactly for that, as a thin layer between your Python app and the powerful
Sphinx search engine.
Configuration
-------------
First of all - create a Sphinxit config class::
class SphinxitConfig(object):
DEBUG = True
WITH_META = True
WITH_STATUS = True
POOL_SIZE = 5
SQL_ENGINE = 'oursql'
SEARCHD_CONNECTION = {
'host': '127.0.0.1',
'port': 9306,
}
Actually, you don't have to write this class from scratch, because there is
a BaseSearchConfig class in the ``sphinxit.core.helpers`` module::
from sphinxit.core.helpers import BaseSearchConfig
class SphinxitConfig(BaseSearchConfig):
WITH_STATUS = False
The class from above does the same but with :attr:`WITH_STATUS` False value (not everyone needs it).
By the way, :attr:`WITH_STATUS` makes additional `SHOW STATUS <http://sphinxsearch.com/docs/current.html#sphinxql-show-status>`_
subquery for each of yours search query.
The :attr:`DEBUG` attribute sets the level of verbosity and behavior. If it's True, illegal arguments
for search methods will raise exceptions with useful hints about what's wrong and how you can fix it.
Don't panic, that's intentional: you need to know what has to be fixed to build a correct query.
If it's False, and it should be in production, illegal filters, options, sortings, etc.
will be ignored. Sphinxit will try to complete the query correctly, without the broken parts.
:attr:`WITH_META` makes Sphinxit return some useful stats (a `SHOW META <http://sphinxsearch.com/docs/current.html#sphinxql-show-meta>`_ subquery)
with your search results. If you don't care, turn it off by setting it to False.
The :attr:`SQL_ENGINE` attribute allows you to select the engine for the SQL client. Supported options: 'oursql' (default) and 'mysqldb'.
The :attr:`SEARCHD_CONNECTION` attribute sets connection settings for Sphinx's ``searchd`` daemon.
Change the host and port values if they differ from the defaults; check your ``sphinx.conf``.
Since version 0.3.1, Sphinxit has a connector with a simple connection pool, to reduce the overhead of opening and closing connections.
You can tune how many connections are opened in advance for query processing
with the :attr:`POOL_SIZE` attribute value. Default is 5.
Your first query
----------------
Let's define some conventions::
# Import path, I'll write it once, before new class or helper usage
from sphinxit.core.processor import Search
# Base query that will be used in further examples
search_query = Search(indexes=['company'], config=SphinxitConfig)
# Internally translates into valid SphinxQL query:
# SphinxQL> SELECT * FROM company
You can use Sphinxit with any Sphinx configuration you already have. Set a list of indexes
and pass the Sphinxit config as above to start making queries.
The main class with set of special methods is :class:`Search`::
from sphinxit.core.processor import Search
search_query = Search(indexes=['company'], config=SphinxitConfig)
search_query = search_query.match('fulltext query')
# SphinxQL> SELECT * FROM company WHERE MATCH('fulltext query')
Every search method except :meth:`ask()` is chainable.
The :meth:`ask()` method explicitly fetches all results from ``searchd``::
search_result = search_query.ask()
The ``search_result`` is a dict with key ``result`` (by default). Like this::
{
u'result': {
u'items': [
{
'id': 5015L,
'name': u'Doc 1',
'date_created': 2008L,
},
{
'id': 25502L,
'name': u'Doc 2',
'date_created': 2009L,
},
...
],
u'meta': {
u'total': u'16',
u'total_found': u'16',
u'docs[0]': u'16',
u'time': u'0.000',
u'hits[0]': u'16',
u'keyword[0]': u'doc'
}
}
}
It can seem strange - a result dict with only one key... You'll see later, in the subqueries examples, why it is structured this way.
The :meth:`match()` method was used for fulltext search and the :meth:`ask()` method for search processing.
Remember that :meth:`ask()` is the end point of your query.
This query gets all of the document attributes that were specified in your ``sphinx.conf``.
If you want to set some explicit list of attributes to get only them, use the :meth:`select()` method::
search_query = search_query.select('id', 'name')
# SphinxQL> SELECT id, name FROM company WHERE MATCH('fulltext query')
Two things to note here:

* the query chain is not mutated in place;
* the order of method calls doesn't matter.
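
A quick sketch of the first point - every call returns a new query object,
so keep the returned value (the variable names here are just illustrative)::

    base_query = Search(indexes=['company'], config=SphinxitConfig)
    base_query.select('id')                  # no effect on base_query itself
    search_query = base_query.select('id')   # keep the returned query instead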
Also, you can set aliases for your attributes::
search_query = search_query.select('id', ('name', 'title'))
# SphinxQL> SELECT id, name AS title FROM company
or, alternative form::
    search_query = search_query.select('id', name='title')
# SphinxQL> SELECT id, name AS title FROM company
Fulltext searching
------------------
The :meth:`match()` method provides proper character escaping, which is usually what you need.
But you may want to make a `raw` query too. Use :meth:`match()`
without escaping by providing the extra argument :attr:`raw=True`. Note the difference::
search_query = search_query.match('@name query for search + "exact phrase"')
# SphinxQL> SELECT * FROM company WHERE MATCH('\@name query for search \\+ \"exact phrase\"')
and as a "raw" query::
search_query = search_query.match('@name query for search + "exact phrase"', raw=True)
# SphinxQL> SELECT * FROM company WHERE MATCH('@name query for search + "exact phrase"')
.. note::
    Be very careful with fulltext queries that come from user input in raw mode:
    they can contain special characters, and you have to escape them manually!
Filtering
---------
Sphinxit works without a data schema (such as ORMs have), so there is a special syntax to filter queries by attributes:
==================================== =================================
Sphinxit SphinxQL
==================================== =================================
``attr__eq = value`` ``attr = value``
``attr__neq = value`` ``attr != value``
``attr__gt = value`` ``attr > value``
``attr__gte = value`` ``attr >= value``
``attr__lt = value`` ``attr < value``
``attr__lte = value`` ``attr <= value``
``attr__in = [value, value, ...]`` ``attr IN (value, value, ...)``
``attr__between = [value, value]`` ``attr BETWEEN (value, value)``
==================================== =================================
Some examples::
search_query = search_query.filter(id__gt=42)
# SphinxQL> SELECT * FROM company WHERE id > 42
search_query = search_query.filter(id__between=[100, 200], id__in=[50,51,52])
# SphinxQL> SELECT * FROM company WHERE id BETWEEN 100 AND 200 AND id IN (50, 51, 52)
search_query = search_query.filter(id__gt=42).filter(id__between=[100, 200], id__in=[50,51,52])
# SphinxQL> SELECT * FROM company WHERE id > 42 AND id BETWEEN 100 AND 200 AND id IN (50, 51, 52)
Sure, you can combine them as you wish.
Note that you can't use string attributes in filter clauses; it's a Sphinx engine limitation. Integers, floats, datetimes - you're welcome::
# will raise an exception, use match() for that
search_query = search_query.filter(name__eq="Semirook")
Sphinx uses UNIX_TIMESTAMP to work with data, so Sphinxit converts date and datetime to UNIX_TIMESTAMP implicitly::
search_query = search_query.filter(date_created__lt=datetime.today())
# SphinxQL> SELECT * FROM company WHERE date_created < 1372539600
OR objects
++++++++++
Sphinx joins your filters with AND, but you may want to join them with OR logic.
There is a workaround for that case, and to make it simple to use, Sphinxit provides special OR objects::
from sphinxit.core.nodes import OR
Simple example::
search_query = search_query.filter(OR(id__gte=100, id__eq=1))
# SphinxQL> SELECT *, (id>=100 OR id=1) AS cnd FROM company WHERE cnd>0
More complex, with OR expressions joins::
search_query = search_query.filter(
OR(id__gte=100, id__eq=1) & OR(
date_created__eq=datetime.today(),
            date_created__lte=datetime.today() - timedelta(days=3)
)
)
# SphinxQL> SELECT *, \
# (id>=100 OR id=1) AND (date_created=1372798800 OR date_created<=1372539600) AS cnd \
# FROM index WHERE cnd>0
You can combine OR expressions via ``&`` or ``|`` (meaning ``AND`` and ``OR`` group concatenation)::
search_query = search_query.filter(
OR(id__gte=100, id__eq=1) | OR(id__eq=42, id__lt=24, date_created__lt=datetime.today())
)
# SphinxQL> SELECT *, \
# (id>=100 OR id=1) OR (id=42 OR id<24 OR date_created=1372798800) AS cnd \
# FROM index WHERE cnd>0
A single OR expression group can contain as many filters as you need.
.. note::
__between, __in, __neq filtering is not allowed in OR expressions.
Grouping
--------
Aggregation is for group-wise data processing. You can group search results with
the :meth:`group_by()` method, by some field, and apply an aggregation operation, like counting::
search_query = search_query.match('Yandex').select('date_created', Count()).group_by('date_created')
# SphinxQL> SELECT date_created, COUNT(*) as num FROM company WHERE MATCH('Yandex') GROUP BY date_created
This expression will group search results by the field ``date_created`` and will count how many items we have in each group, with the special :class:`Count()` aggregation object.
The raw result of this query is something like this::
+--------------+------+
| date_created | num |
+--------------+------+
| 2011 | 12 |
| 2009 | 1 |
| 2010 | 5 |
| 2012 | 26 |
| 2013 | 8 |
+--------------+------+
5 rows in set (0.00 sec)
Aggregation objects
+++++++++++++++++++
The most popular functions are implemented. You can find them all in the ``sphinxit.core.nodes`` module::
from sphinxit.core.nodes import Avg, Min, Max, Sum, Count
All of them take two arguments - the name of the field to aggregate and an optional alias (for the :class:`Count` object, the name is also optional)::
search_query = (
search_query
.select('id', 'name', Count('name', 'company_name'))
.group_by('name')
.order_by('company_name', 'desc')
)
# SphinxQL> SELECT id, name, COUNT(DISTINCT name) AS company_name \
# FROM company
# GROUP BY name
# ORDER BY company_name DESC
Note the difference between the two forms of :class:`Count`. If you pass the name of a field as the first attribute,
the Count is ``DISTINCT``. Use the named attribute :attr:`alias` explicitly to keep the star syntax::
search_query = search_query.select('date_created', Count(alias='date_alias')).group_by('date_created')
# SphinxQL> SELECT date_created, COUNT(*) AS date_alias FROM company GROUP BY date_created
Try to experiment with this.
Limit
-----
Sure, you can specify how many results you want to get.
There is a :meth:`limit()` method for that, with two arguments - ``offset`` and ``limit``::
search_query = search_query.limit(0, 100)
# SphinxQL> SELECT * FROM company LIMIT 0, 100
.. note::
Implicit Sphinx limit is **20**
Ordering
--------
Just specify the field you want to sort by and the direction of sorting:
``asc`` or ``desc`` (case insensitive)::
search_query = search_query.match('Yandex').limit(0, 100).order_by('name', 'desc')
# SphinxQL> SELECT * FROM company ORDER BY name DESC LIMIT 0, 100
Options
-------
Sphinxit knows about Sphinx's `OPTION clause <http://sphinxsearch.com/docs/current.html#sphinxql-select>`_
and you can work with almost all of its options:
========================= ======================================================================== ==============
Option Description Param type
========================= ======================================================================== ==============
``ranker`` Any of 'proximity_bm25', 'bm25', 'none', 'wordcount', 'proximity', string
'matchany', 'fieldmask', 'sph04' or 'expr'. See the table below.
``max_matches`` Integer (per-query max matches value). integer
``cutoff`` Integer (max found matches threshold). integer
``max_query_time`` Integer (max search time threshold, msec). integer
``retry_count``           Integer (distributed retries count).                                    integer
``retry_delay`` Integer (distributed retry delay, msec). integer
``field_weights`` A named integer list (per-field user weights for ranking). dict
``index_weights`` A named integer list (per-index user weights for ranking). dict
``reverse_scan`` 0 or 1, lets you control the order in which full-scan query processes bool
the rows.
``comment`` String, user comment that gets copied to a query log file. string
========================= ======================================================================== ==============
Combine them to tune up your search mechanism::
search_query = (
search_query
.match('Yandex')
.select('id', 'name')
.options(
ranker='proximity_bm25',
max_matches=100,
field_weights={'name': 100, 'description': 80},
)
.order_by('name', 'desc')
)
# SphinxQL> SELECT id, name \
# FROM company
# WHERE MATCH('Yandex')
# ORDER BY name
# DESC OPTION ranker=proximity_bm25, max_matches=100, field_weights=(name=100, description=80)
From Sphinx docs:
| Ranking (aka weighting) of the search results can be defined as a process of computing a so-called
| relevance (aka weight) for every given matched document with regards to a given query that matched it.
| So relevance is in the end just a number attached to every document that estimates how relevant the document
| is to the query. Search results can then be sorted based on this number and/or some additional parameters,
| so that the most sought after results would come up higher on the results page.
And valid rankers are:
========================= ======================================================================== ================
Ranker Description Sphinx ver.
========================= ======================================================================== ================
``proximity_bm25`` The default ranking mode that uses and combines both phrase proximity ALL
and BM25 ranking.
``bm25`` Statistical ranking mode which uses BM25 ranking only (similar to most ALL
other full-text engines). This mode is faster but may result in worse
quality on queries which contain more than 1 keyword.
``wordcount`` Ranking by the keyword occurrences count. This ranker computes ALL
the per-field keyword occurrence counts, then multiplies them by field
weights, and sums the resulting values.
``proximity`` Returns raw phrase proximity value as a result. This mode is internally 0.9.9-rc1
used to emulate SPH_MATCH_ALL queries.
``matchany`` Returns rank as it was computed in SPH_MATCH_ANY mode ealier, and is 0.9.9-rc1
internally used to emulate SPH_MATCH_ANY queries.
``fieldmask`` Returns a 32-bit mask with N-th bit corresponding to N-th fulltext 0.9.9-rc2
field, numbering from 0. The bit will only be set when the respective
field has any keyword occurences satisfiying the query.
``sph04`` Is generally based on the default SPH_RANK_PROXIMITY_BM25 ranker, 1.10-beta
but additionally boosts the matches when they occur in the very
beginning or the very end of a text field. Thus, if a field equals
the exact query, SPH04 should rank it higher than a field that contains
the exact query but is not equal to it. (For instance, when the query
is "Hyde Park", a document entitled "Hyde Park" should be ranked higher
than a one entitled "Hyde Park, London" or "The Hyde Park Cafe".)
``expr`` Lets you specify the ranking formula in run time. It exposes a number 2.0.2-beta
of internal text factors and lets you define how the final weight
should be computed from those factors. You can find more details about
its syntax and a reference available factors in a subsection below.
``none`` No ranking mode. This mode is obviously the fastest. A weight of 1 ALL
is assigned to all matches. This is sometimes called boolean searching
that just matches the documents but does not rank them.
========================= ======================================================================== ================
Read more about rankers `here <http://sphinxsearch.com/docs/current.html#weighting>`_.
Batch. Subqueries. Facets.
--------------------------
Since Sphinxit version 0.3.1 you can make subqueries. It can be very useful to process
several queries at a time over the same connection. It's faster and more efficient than
making a series of separate queries. For example, suppose you want to receive a fulltext query result
with different groupings but with the same base part::
search_result_1 = search_query.match('Yandex').ask()
search_result_2 = (
search_query.match('Yandex')
.select('date_created', Count())
.group_by('date_created')
.ask()
)
search_result_3 = (
search_query.match('Yandex')
.select('id', 'name', Count('name', 'company_name'))
.group_by('name')
.order_by('company_name', 'desc')
.ask()
)
You can rewrite queries from above as subqueries::
search_query = search_query.match('Yandex').named('main_query')
    search_result = search_query.ask(
        subqueries=[
            search_query.select('date_created', Count())
                        .group_by('date_created')
                        .named('date_group'),
            search_query.select('id', 'name', Count('name', 'company_name'))
                        .group_by('name')
                        .order_by('company_name', 'desc')
                        .named('name_group'),
        ]
    )
And the result is cleaner and more convenient for postprocessing.
Also, you save several milliseconds on each subquery for free!
Note the new method :meth:`named()` here. It sets the name of the key in the result data structure. In the first
example you'll get three separate dicts with search results. But in the second example with subqueries you'll
get one dict with a key/value pair per query::
{
u'main_query': {
u'items': [
{'date_created': 2011L, 'products': u'', 'id': 345060L, ...},
{'date_created': 2009L, 'products': u'406,409,517', 'id': 78966L, ...},
{'date_created': 2010L, 'products': u'349052', 'id': 97693L, ...},
...
],
u'meta': {
u'total': u'50',
u'total_found': u'50',
u'docs[0]': u'52',
u'time': u'0.000',
u'hits[0]': u'53',
u'keyword[0]': u'yandex'
}
},
u'date_group': {
u'items': [
{'date_created': 2011L, 'num': 12L},
{'date_created': 2009L, 'num': 1L},
{'date_created': 2010L, 'num': 5L},
{'date_created': 2012L, 'num': 26L},
{'date_created': 2013L, 'num': 8L}
],
u'meta': {
u'total': u'5',
u'total_found': u'5',
u'docs[0]': u'52',
u'time': u'0.000',
u'hits[0]': u'53',
u'keyword[0]': u'yandex'
}
},
u'name_group': {
u'items': [
{'company_name': 2L, 'id': 433302L, 'name': u'yandex'},
{'company_name': 1L, 'id': 167334L, 'name': u'Yandex.ru'},
{'company_name': 1L, 'id': 403574L, 'name': u'Yandex.ua'},
...
],
u'meta': {
u'total': u'50',
u'total_found': u'50',
u'docs[0]': u'52',
u'time': u'0.000',
u'hits[0]': u'53',
u'keyword[0]': u'yandex'
}
}
}
Update syntax
-------------
Sphinxit supports UPDATE syntax for disk indexes. You can update
any value of any attribute except strings. The usage is quite simple::
search = Search(['company'], config=SearchConfig)
search = search.match('Yandex').update(products=(5,2)).filter(id__gt=1)
# SphinxQL> UPDATE company SET products=(5,2) WHERE MATCH('Yandex') AND id>1
`TODO: Complete this chapter`
Snippets
--------
There is a special :class:`Snippet` class to provide the `CALL SNIPPETS <http://sphinxsearch.com/docs/current.html#sphinxql-call-snippets>`_ syntax that is used for semi-automatic snippet creation.
The usage is similar to :class:`Search`, but the set of methods is quite different.
* :meth:`from_data()` describes what text data should be used to build snippets.
* :meth:`for_query()` is for the fulltext query, like the :meth:`match()` method in :class:`Search`.
* :meth:`options()` supports all of the excerpt options from the `Sphinx docs <http://sphinxsearch.com/docs/current.html#api-func-buildexcerpts>`_.
I hope it's clear how to use it from this snippet::
snippets = (
Snippet(index='company', config=SearchConfig)
.for_query("Me amore")
.from_data("amore mia")
.options(before_match='<strong>', after_match='</strong>')
)
# SphinxQL> CALL SNIPPETS \
# ('amore mia', 'company', 'Me amore', '<strong>' AS before_match, '</strong>' AS after_match)
========================= ======================================================================== ================
Option Description Sphinx ver.
========================= ======================================================================== ================
``before_match`` A string to insert before a keyword match. Default is "<b>". ALL
``after_match`` A string to insert after a keyword match. Default is "</b>". ALL
``chunk_separator`` A string to insert between snippet chunks (passages). ALL
                          Default is " ... ".
``limit`` Maximum snippet size, in symbols (codepoints). ALL
Integer, default is 256.
``around`` How much words to pick around each matching keywords block. ALL
Integer, default is 5.
``exact_phrase`` Whether to highlight exact query phrase matches only instead of ALL
individual keywords. Boolean, default is false.
``use_boundaries`` Whether to additionaly break passages by phrase boundary characters, ALL
as configured in index settings with phrase_boundary directive.
Boolean, default is false.
``weight_order`` Whether to sort the extracted passages in order of relevance ALL
(decreasing weight), or in order of appearance in the document
(increasing position). Boolean, default is false.
``query_mode`` Whether to handle words as a query in extended syntax, or as a bag 1.10-beta
of words (default behavior). For instance, in query mode
("one two" | "three four") will only highlight and include those
occurrences "one two" or "three four" when the two words from each pair
are adjacent to each other. In default mode, any single occurrence of
"one", "two", "three", or "four" would be highlighted.
Boolean, default is false.
``force_all_words`` Ignores the snippet length limit until it includes all the keywords. 1.10-beta
Boolean, default is false.
``limit_passages`` Limits the maximum number of passages that can be included into 1.10-beta
the snippet. Integer, default is 0 (no limit).
``limit_words`` Limits the maximum number of words that can be included into 1.10-beta
the snippet. Note the limit applies to any words, and not just
the matched keywords to highlight. For example, if we are highlighting
"Mary" and a passage "Mary had a little lamb" is selected, then it
contributes 5 words to this limit, not just 1.
Integer, default is 0 (no limit).
``start_passage_id`` Specifies the starting value of %PASSAGE_ID% macro
(that gets detected and expanded in before_match, after_match strings). 1.10-beta
Integer, default is 1.
``load_files`` Whether to handle $docs as data to extract snippets from 1.10-beta
(default behavior), or to treat it as file names, and load data
from specified files on the server side.
``load_files_scattered`` It works only with distributed snippets generation with remote agents. 2.0.2-beta
The source files for snippets could be distributed among different
agents, and the main daemon will merge together all non-erroneous
results. So, if one agent of the distributed index has 'file1.txt',
another has 'file2.txt' and you call for the snippets with both these
files, the sphinx will merge results from the agents together,
so you will get the snippets from both 'file1.txt' and 'file2.txt'.
Boolean, default is false.
``html_strip_mode`` HTML stripping mode setting. Defaults to "index", which means that 1.10-beta
index settings will be used. The other values are "none" and "strip",
that forcibly skip or apply stripping irregardless of index settings;
and "retain", that retains HTML markup and protects it from
highlighting. The "retain" mode can only be used when highlighting
full documents and thus requires that no snippet size limits are set.
String, allowed values are "none", "strip", "index", and "retain".
``allow_empty`` Allows empty string to be returned as highlighting result when 1.10-beta
a snippet could not be generated (no keywords match, or no passages
fit the limit). By default, the beginning of original text would be
returned instead of an empty string. Boolean, default is false.
``passage_boundary`` Ensures that passages do not cross a sentence, paragraph, or zone 2.0.1-beta
boundary (when used with an index that has the respective indexing
settings enabled). String, allowed values are "sentence", "paragraph",
and "zone".
``emit_zones`` Emits an HTML tag with an enclosing zone name before each passage. 2.0.1-beta
Boolean, default is false.
========================= ======================================================================== ================

POST http://localhost:3001/api/persons
Content-Type: application/json
{
"name": "Martin Fowler",
"number": "1234-5678"
}

writexl
======
.. automodule:: pylightxl.writexl
:members:

.. _support:
Support
=======
Documentation
-------------
The ScanCode toolkit documentation lives at aboutcode.readthedocs.io/en/latest/scancode-toolkit/.
Issue Tracker
-------------
Post questions and bugs as GitHub tickets at: https://github.com/nexB/scancode-toolkit/issues
StackOverflow
-------------
Ask questions on StackOverflow using the [scancode] tag.
Talk to the Developers
----------------------
Join our `Gitter Channel <https://gitter.im/aboutcode-org/discuss>`_ to talk with the developers of
ScanCode Toolkit.
Documentation Feedback
----------------------
For more information on the documentation, or to leave feedback, email aboutCode@groups.io or leave a
message in our `Docs Channel <https://gitter.im/aboutcode-org/gsod-season-of-docs>`_.
Microsoft.Net.Http.Headers Namespace
====================================
.. toctree::
:hidden:
:maxdepth: 2
/autoapi/Microsoft/Net/Http/Headers/CacheControlHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/ContentDispositionHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/ContentRangeHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/CookieHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/EntityTagHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/HeaderNames/index
/autoapi/Microsoft/Net/Http/Headers/HeaderQuality/index
/autoapi/Microsoft/Net/Http/Headers/HeaderUtilities/index
/autoapi/Microsoft/Net/Http/Headers/MediaTypeHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/MediaTypeHeaderValueComparer/index
/autoapi/Microsoft/Net/Http/Headers/NameValueHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/RangeConditionHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/RangeHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/RangeItemHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/SetCookieHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/StringWithQualityHeaderValue/index
/autoapi/Microsoft/Net/Http/Headers/StringWithQualityHeaderValueComparer/index
.. toctree::
:hidden:
:maxdepth: 2
.. dn:namespace:: Microsoft.Net.Http.Headers
.. rubric:: Classes
class :dn:cls:`CacheControlHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.CacheControlHeaderValue
class :dn:cls:`ContentDispositionHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.ContentDispositionHeaderValue
class :dn:cls:`ContentRangeHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.ContentRangeHeaderValue
class :dn:cls:`CookieHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.CookieHeaderValue
class :dn:cls:`EntityTagHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.EntityTagHeaderValue
class :dn:cls:`HeaderNames`
.. object: type=class name=Microsoft.Net.Http.Headers.HeaderNames
class :dn:cls:`HeaderQuality`
.. object: type=class name=Microsoft.Net.Http.Headers.HeaderQuality
class :dn:cls:`HeaderUtilities`
.. object: type=class name=Microsoft.Net.Http.Headers.HeaderUtilities
class :dn:cls:`MediaTypeHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.MediaTypeHeaderValue
class :dn:cls:`MediaTypeHeaderValueComparer`
.. object: type=class name=Microsoft.Net.Http.Headers.MediaTypeHeaderValueComparer
Implementation of :any:`System.Collections.Generic.IComparer\`1` that can compare accept media type header fields
based on their quality values (a.k.a q-values).
class :dn:cls:`NameValueHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.NameValueHeaderValue
class :dn:cls:`RangeConditionHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.RangeConditionHeaderValue
class :dn:cls:`RangeHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.RangeHeaderValue
class :dn:cls:`RangeItemHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.RangeItemHeaderValue
class :dn:cls:`SetCookieHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.SetCookieHeaderValue
class :dn:cls:`StringWithQualityHeaderValue`
.. object: type=class name=Microsoft.Net.Http.Headers.StringWithQualityHeaderValue
class :dn:cls:`StringWithQualityHeaderValueComparer`
.. object: type=class name=Microsoft.Net.Http.Headers.StringWithQualityHeaderValueComparer
Implementation of :any:`System.Collections.Generic.IComparer\`1` that can compare content negotiation header fields
based on their quality values (a.k.a q-values). This applies to values used in accept-charset,
accept-encoding, accept-language and related header fields with similar syntax rules. See
:any:`Microsoft.Net.Http.Headers.MediaTypeHeaderValueComparer` for a comparer for media type
q-values.

Installation
------------
XmsGrid can be installed using `Anaconda <https://www.anaconda.com/download/>`_
with the ``conda`` command::
conda install -c aquaveo xmsgrid
This will install XmsGrid and **all** the needed dependencies.
Usage
-----
The XmsGrid library contains classes for defining geometric grids that can be used in other
Aquaveo libraries.
Usage and documentation for each class can be found in the **User Interface** section
of this site. Additional examples can be found on the Examples_ page.
.. _Examples: https://aquaveo.github.io/examples/xmsinterp/xmsinterp.html | 30.727273 | 91 | 0.761834 |
d034163e293b2c58d7d86de788e72b27a097fe38 | 1,052 | rst | reStructuredText | docs/reference/lib/core/random.rst | xzfc/egison | 8b570cac4454365b972b2ef873682f2e796b4cbe | [
"MIT"
] | null | null | null | docs/reference/lib/core/random.rst | xzfc/egison | 8b570cac4454365b972b2ef873682f2e796b4cbe | [
"MIT"
] | null | null | null | docs/reference/lib/core/random.rst | xzfc/egison | 8b570cac4454365b972b2ef873682f2e796b4cbe | [
"MIT"
] | null | null | null | ===================
lib/core/random.egi
===================
.. highlight:: haskell
R.multiset
::
matchAll [1, 2] as R.multiset integer with
| $n :: $ns -> (n, ns)
---> [(1, [2]), (2, [1])] or [(2, [1]), (1, [2])]
matchAll [1, 2] as R.multiset integer with
| #1 :: $ns -> ns
---> [[2]]
R.set
::
matchAll [1, 2] as R.set integer with
| $n :: $ns -> (n, ns)
---> [(1, [1, 2]), (2, [1, 2])] or [(2, [1, 2]), (1, [1, 2])]
matchAll [1, 2] as R.set integer with
| #1 :: $ns -> ns
---> [[1, 2]]
pureRand
::
pureRand 1 6 ---> 1, 2, 3, 4, 5 or 6
randomize
::
randomize [1, 2, 3]
---> [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2] or [3, 2, 1]
R.between
::
R.between 1 3
---> [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2] or [3, 2, 1]
R.uncons
::
R.uncons [1, 2]
---> (1, [2]) or (2, [1])
R.head
::
R.head [1, 2]
---> 1 or 2
R.tail
::
R.tail [1, 2]
---> [2] or [1]
| 16.4375 | 77 | 0.352662 |
942bb27e7bf26dfc5ed81bcf5895afbc6888b2c5 | 683 | rst | reStructuredText | docs/index.rst | fdemmer/easy-thumbnails | bc39d7ec3f4c72b000a82f7838eb3c16bcaa86c7 | [
"BSD-3-Clause"
] | 691 | 2015-01-02T05:27:14.000Z | 2022-03-28T10:51:19.000Z | docs/index.rst | fdemmer/easy-thumbnails | bc39d7ec3f4c72b000a82f7838eb3c16bcaa86c7 | [
"BSD-3-Clause"
] | 219 | 2015-01-21T21:37:35.000Z | 2022-03-11T13:43:41.000Z | docs/index.rst | fdemmer/easy-thumbnails | bc39d7ec3f4c72b000a82f7838eb3c16bcaa86c7 | [
"BSD-3-Clause"
] | 201 | 2015-01-15T14:26:54.000Z | 2022-02-21T17:51:07.000Z | ===============
Easy Thumbnails
===============
.. raw:: html
<p>
<a href="https://travis-ci.org/SmileyChris/easy-thumbnails" >
<img src="https://travis-ci.org/SmileyChris/easy-thumbnails.png?branch=master" alt="Build Status"/>
</a>
</p>
This documentation covers the |version| release of easy-thumbnails, a
thumbnailing application for Django which is easy to use and customize.
To get up and running, consult the :doc:`installation guide <install>`
which describes all the necessary steps to install and configure this
application.
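For a quick taste, here is a minimal usage sketch. It is a sketch only: it
assumes a Django model instance ``profile`` with an ``ImageField`` named
``photo``; see the guides below for the full setup.

.. code-block:: python

    from easy_thumbnails.files import get_thumbnailer

    # Generate (or fetch from cache) a 100x100 cropped thumbnail
    # for the image stored in the ``photo`` field and get its URL.
    options = {'size': (100, 100), 'crop': True}
    thumb_url = get_thumbnailer(profile.photo).get_thumbnail(options).url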
.. toctree::
:maxdepth: 2
:glob:
*
Reference documentation:
.. toctree::
:maxdepth: 1
:glob:
ref/*
| 20.69697 | 106 | 0.661786 |
0f089cf26b6558cd356a090fa9db3998c91888e0 | 87 | rst | reStructuredText | arch/cpu/nrf/lib/nrfx/doc/sphinx/nrf52820.rst | Lkiraa/Contiki-ng | 87b55a9233d5588b454f6f5ec580ee9af1ae88f8 | [
"BSD-3-Clause"
] | 196 | 2017-12-21T09:45:32.000Z | 2022-03-30T15:58:42.000Z | arch/cpu/nrf/lib/nrfx/doc/sphinx/nrf52820.rst | Lkiraa/Contiki-ng | 87b55a9233d5588b454f6f5ec580ee9af1ae88f8 | [
"BSD-3-Clause"
] | 91 | 2017-12-22T12:42:48.000Z | 2022-02-24T21:48:41.000Z | arch/cpu/nrf/lib/nrfx/doc/sphinx/nrf52820.rst | Lkiraa/Contiki-ng | 87b55a9233d5588b454f6f5ec580ee9af1ae88f8 | [
"BSD-3-Clause"
] | 106 | 2017-12-22T12:36:25.000Z | 2022-03-29T14:21:05.000Z | nRF52820 Drivers
================
.. doxygenpage:: nrf52820_drivers
:content-only: | 17.4 | 33 | 0.609195 |
b9394c12ef0ddd58822431c9d7002e2861a13762 | 41 | rst | reStructuredText | changelog.d/322.trivial.rst | frenzymadness/python-semver | 4631fa63815797cf312467bbed9053f6a71fee38 | [
"BSD-3-Clause"
] | 159 | 2019-11-14T11:47:44.000Z | 2022-03-29T02:57:46.000Z | changelog.d/322.trivial.rst | calebstewart/python-semver | e93b6def40194f374623ce5e524a3dcfa40f7682 | [
"BSD-3-Clause"
] | 146 | 2019-11-05T08:22:43.000Z | 2022-03-04T18:59:52.000Z | changelog.d/322.trivial.rst | calebstewart/python-semver | e93b6def40194f374623ce5e524a3dcfa40f7682 | [
"BSD-3-Clause"
] | 22 | 2019-11-29T17:20:21.000Z | 2022-03-28T15:06:42.000Z | Switch from Travis CI to GitHub Actions.
| 20.5 | 40 | 0.804878 |
cb1ad14e70e13bf576f2c9cedb3fe2a8f7520780 | 3,655 | rst | reStructuredText | docs/source/paramak.parametric_shapes.rst | RemDelaporteMathurin/paramak | 10552f1b89820dd0f7a08e4a126834877e3106b4 | [
"MIT"
] | null | null | null | docs/source/paramak.parametric_shapes.rst | RemDelaporteMathurin/paramak | 10552f1b89820dd0f7a08e4a126834877e3106b4 | [
"MIT"
] | null | null | null | docs/source/paramak.parametric_shapes.rst | RemDelaporteMathurin/paramak | 10552f1b89820dd0f7a08e4a126834877e3106b4 | [
"MIT"
] | null | null | null | Parametric Shapes
=================
RotateStraightShape()
^^^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86246786-767a2080-bba3-11ea-90e7-22d816690caa.png
:width: 250
:height: 200
:align: center
.. automodule:: paramak.parametric_shapes.rotate_straight_shape
:members:
:show-inheritance:
RotateSplineShape()
^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86246785-7548f380-bba3-11ea-90b7-03249be41a00.png
:width: 250
:height: 240
:align: center
.. automodule:: paramak.parametric_shapes.rotate_spline_shape
:members:
:show-inheritance:
RotateMixedShape()
^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86258771-17240c80-bbb3-11ea-990f-e87de26b1589.png
:width: 250
:height: 230
:align: center
.. automodule:: paramak.parametric_shapes.rotate_mixed_shape
:members:
:show-inheritance:
RotateCircleShape()
^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86246778-72e69980-bba3-11ea-9b33-d74e2c2d084b.png
:width: 250
:height: 200
:align: center
.. automodule:: paramak.parametric_shapes.rotate_circle_shape
:members:
:show-inheritance:
ExtrudeStraightShape()
^^^^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86246776-724e0300-bba3-11ea-91c9-0fd239225206.png
:width: 200
:height: 270
:align: center
.. automodule:: paramak.parametric_shapes.extruded_straight_shape
:members:
:show-inheritance:
ExtrudeSplineShape()
^^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86246774-71b56c80-bba3-11ea-94cb-d2496365ff18.png
:width: 200
:height: 280
:align: center
.. automodule:: paramak.parametric_shapes.extruded_spline_shape
:members:
:show-inheritance:
ExtrudeMixedShape()
^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86261239-34a6a580-bbb6-11ea-812c-ac6fa6a8f0e2.png
:width: 200
:height: 200
:align: center
.. automodule:: paramak.parametric_shapes.extruded_mixed_shape
:members:
:show-inheritance:
ExtrudeCircleShape()
^^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/8583900/86246768-6feba900-bba3-11ea-81a8-0d77a843b943.png
:width: 250
:height: 180
:align: center
.. automodule:: paramak.parametric_shapes.extruded_circle_shape
:members:
:show-inheritance:
SweepStraightShape()
^^^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/56687624/88060232-e0ac3280-cb5d-11ea-8bfe-b1db5f89a0d4.png
:width: 300
:height: 230
:align: center
.. automodule:: paramak.parametric_shapes.sweep_straight_shape
:members:
:show-inheritance:
SweepSplineShape()
^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/56687624/88060236-e275f600-cb5d-11ea-87c3-330272a75904.png
:width: 300
:height: 230
:align: center
.. automodule:: paramak.parametric_shapes.sweep_spline_shape
:members:
:show-inheritance:
SweepMixedShape()
^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/56687624/88064419-2b7c7900-cb63-11ea-901f-a7f8596e1f00.png
:width: 300
:height: 230
:align: center
.. automodule:: paramak.parametric_shapes.sweep_mixed_shape
:members:
:show-inheritance:
SweepCircleShape()
^^^^^^^^^^^^^^^^^^
.. image:: https://user-images.githubusercontent.com/56687624/88064426-2d463c80-cb63-11ea-980b-29f8c010c2bf.png
:width: 300
:height: 230
:align: center
.. automodule:: paramak.parametric_shapes.sweep_circle_shape
:members:
:show-inheritance:
| 24.530201 | 111 | 0.697674 |
e9c7177720f6470a1169fcec879ef3dc69b4a4ac | 10,326 | rst | reStructuredText | doc/howto/document.rst | cmake-basis/cmake-basis-legacy | 6364f1eae21b5562e893581d621104dcc68f3b37 | [
"BSD-2-Clause"
] | 9 | 2015-01-19T01:14:35.000Z | 2017-11-21T21:39:35.000Z | doc/howto/document.rst | cmake-basis/cmake-basis-legacy | 6364f1eae21b5562e893581d621104dcc68f3b37 | [
"BSD-2-Clause"
] | 195 | 2015-01-03T09:45:07.000Z | 2016-04-15T19:17:22.000Z | doc/howto/document.rst | cmake-basis/cmake-basis-legacy | 6364f1eae21b5562e893581d621104dcc68f3b37 | [
"BSD-2-Clause"
] | 7 | 2015-02-04T22:00:00.000Z | 2019-04-23T08:50:03.000Z | .. meta::
:description: How to document software following BASIS, a build system and
software implementation standard.
====================
Documenting Software
====================
.. note:: This how-to guide is not yet complete.
BASIS supports two well-known and established documentation generation tools:
Doxygen_ and Sphinx_.
Documentation Quick Start
=========================
When you use the ``basisproject`` tool to generate a project as described in
:doc:`/howto/create-and-modify-project`, you will have a tree with a ``/doc``
directory preconfigured to generate a starter documentation website and PDF
just like the BASIS website.
Here is how to create a new project that supports documentation:
.. code-block:: bash
basisproject --name docProject --description "This is a BASIS project." --full
We will assume that you ran this command in your ``~/``
directory for simplicity in the steps below.
Writing Documentation
---------------------
Now you can simply open the ``~/docProject/doc/*.rst`` files and start editing
the existing reStructuredText_ files to create your Sphinx documentation.
You can also update your
`doxygen mainpage <http://www.stack.nl/~dimitri/doxygen/manual/commands.html#cmdmainpage>`__
by opening ``~/docProject/doc/apidoc/apidoc.dox``.
We also suggest taking a look at the ``/doc`` folder of the BASIS source code
itself for more examples of how to write documentation.
Generating Documentation
------------------------
Once the project is ready, the docs can be generated.
.. code-block:: bash
mkdir ~/docProject-build
cd ~/docProject-build
cmake ../docProject -DBUILD_DOCUMENTATION=ON -DCMAKE_INSTALL_PREFIX=~/docProject-install
make doc
make install
The web documentation will be in ``~/docProject-install/doc/html/index.html``,
and the PDF docs will be in ``~/docProject-install/doc/docProject_Software_Manual.pdf``.
Serving Website Locally
-----------------------
Note that simply opening the documentation will not render all pages
correctly due to the use of the iframe HTML tag to embed the Doxygen
generated API docs and the security settings built into modern browsers.
Instead, display your docs via a server, for example, using Python by
running the following command in the root directory of the (installed)
documentation.
Python 2:
.. code-block:: bash
python -m SimpleHTTPServer
Python 3:
.. code-block:: bash
python -m http.server
Then go to `localhost:8000 <http://localhost:8000>`__ to view the pages.
Doxygen Documentation
=====================
Language Support
----------------
Since version 1.8.0, Doxygen_ can natively generate documentation from
- C/C++
- Java
- Python
- Tcl
- Fortran.
The markup language used to format documentation
comments was originally a set of commands inherited from Javadoc.
Recently Doxygen also adopted Markdown_ and elements from `Markdown Extra`_.
Doxygen Filters
---------------
To extend the repertoire of programming languages processed by Doxygen, so-called
custom Doxygen filters can be provided which transform any source code into
the syntax of one of the languages well understood by Doxygen. The target language
used is commonly C/C++ as this is the language best understood by Doxygen.
BASIS includes Doxygen filters for:
- CMake
- Bash
- Perl
- MATLAB
- Python
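To illustrate the general mechanism (a toy sketch, not one of the actual BASIS
filters): Doxygen calls the program configured as ``INPUT_FILTER`` with the path
of a source file as its only argument and parses whatever the filter writes to
standard output.

.. code-block:: python

    #!/usr/bin/env python
    # Toy Doxygen input filter: rewrite shell-style doc comments ("## ...")
    # as C++-style doc comments ("/// ...") so that Doxygen's C++ parser
    # picks them up. Real filters additionally translate the code itself.
    import re
    import sys

    with open(sys.argv[1]) as f:
        for line in f:
            match = re.match(r'^(\s*)##(.*)$', line)
            if match:
                sys.stdout.write('%s///%s\n' % match.groups())
            else:
                sys.stdout.write(line)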
Generating Doxygen
------------------
The :apidoc:`basis_add_doxygen_doc` CMake command can be used to create your own custom doxygen documentation.
Sphinx Documentation
====================
BASIS makes use of Sphinx_ as an alternative documentation generator, in
particular for Python source code and the corresponding doc strings. The markup
language used by Sphinx is reStructuredText_ (reST).
Sphinx documentation has the advantage that it can be produced in many
different formats, can be written inline in Python code, and yields
documentation with a much more usable layout. However, it cannot generate
documentation from inline code comments for C++ in the way that Doxygen can.
Output Formats
--------------
Sphinx and restructured text allow documentation to be generated in a wide
number of useful formats, including:
- HTML
- LaTeX
- man pages
- Docutils
These can be used to produce:
- software manual
- developer's guide
- tutorial slides,
- project web site
This is accomplished by providing text files marked up using reST which are
then processed by Sphinx to generate documentation in the desired output format.
BASIS includes the two Sphinx extensions breathe_ and doxylink_, which can be
used to embed and to link to, respectively, the documentation generated by
Doxygen from the documentation generated by Sphinx. The latter works only for
the HTML output, which, however, is the most commonly used and preferred output
format. Given that the project web site and manuals are generated by Sphinx and
only the more advanced reference documentation is generated by Doxygen, this
one-directional linking of documentation pages is sufficient for most use cases.
Currently BASIS uses doxylink because it is able to work with more complete
and better organized output than breathe can handle as of the time of writing.
Themes
------
A number of Sphinx themes are provided with BASIS; the recommended default theme
is readable-wide, which is used by the BASIS website.
- readable-wide
- readable
- agogo
- default
- haiku
- pyramid
- sphinxdoc
- basic
- epub
- nature
- scrolls
- traditional
You can also use your own theme from the web or include it yourself by simply providing
a path to the theme using the HTML_THEME parameter of :apidoc:`basis_add_doc()` and
:apidoc:`basis_add_sphinx_doc()`.
Markdown
========
`Markdown <http://daringfireball.net/projects/markdown/>`_,
`GitHub flavored Markdown <https://help.github.com/articles/github-flavored-markdown>`_ and
Markdown Extra can be used for the root package documentation files such as the
AUTHORS.md, README.md, INSTALL.md, and COPYING.md files. Many online hosting platforms
for the distribution of open source software, such as SourceForge and GitHub, render
Markdown on the project page with the intended formatting.
.. note:: Not all of these documentation tools are supported for all languages.
Creating Documentation
======================
The best example for creating documenation is the BASIS documentation itself,
which can be found in the ``doc/apidoc`` folder. The most important function
for generating documentation is :apidoc:`basis_add_doc()`, which can handle
the parameters of the related :apidoc:`basis_add_sphinx_doc()` and
:apidoc:`basis_add_doxygen_doc()` commands.
.. only:: html
Here is the code that generates the integrated Sphinx and Doxygen Documentation:
.. literalinclude:: ../CMakeLists.txt
Software Manual
===============
Introduces users to the software tools and guides them through example applications.
Developer's Guide
=================
Describes implementation details.
API Documentation
=================
Documentation generated from source code and in-source comments, integrated with default template.
Software Web Site
=================
A web site can be created using the documentation generation tool Sphinx_.
The main input to this tool are text files written in the lightweight markup language
reStructuredText_. A default theme for use at SBIA has been created which is part
of BASIS. This theme together with the text files that define the content and
structure of the site, the HTML pages of the software web site can be generated
by ``sphinx-build``. The CMake function :apidoc:`basis_add_doc()` provides an easy way
to add such web site target to the build configuration. For example, the
template ``doc/CMakeLists.txt`` file contains the following section:
.. code-block:: cmake
# ----------------------------------------------------------------------------
# web site (optional)
if (EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/site/index.rst")
basis_add_doc (
site
GENERATOR Sphinx
BUILDER html dirhtml pdf man
MAN_SECTION 7
HTML_THEME readable-wide
HTML_SIDEBARS globaltoc
RELLINKS installation documentation publications people
COPYRIGHT "<year> University of Pennsylvania"
AUTHOR "<author>"
)
endif ()
where <year> and <author> should be replaced by the proper values. This is usually done
by the :doc:`basisproject <create-and-modify-project>` command-line tool upon creation
of a new project.
This CMake code adds a build target named ``site`` which invokes ``sphinx-build``
with the proper default configuration to generate a web site from the reST
source files with file name extension ``.rst`` found in the ``site/`` subdirectory.
The source file of the main page, the so-called master document, of the web site
must be named ``index.rst``. The main pages which are linked in the top
navigation bar are named using the ``RELLINKS`` option of :apidoc:`basis_add_sphinx_doc()`,
the CMake function which implements the addition of a Sphinx documentation target.
The corresponding source files must be named after these links. For example, given
above CMake code, the reStructuredText source of the page with the download
instructions has to be saved in the file ``site/download.rst``.
See the :ref:`corresponding section <Build>` of the :doc:`../install`
guide for details on how to generate the HTML pages from the reST source
files given the specification of a Sphinx documentation build target such as the
``site`` target defined by above template CMake code.
.. _basis_add_doc(): http://opensource.andreasschuh.com/cmake-basis/apidoc/latest/group__CMakeAPI.html#ga06f94c5d122393ad4e371f73a0803cfa
.. _Doxygen: http://www.doxygen.org/
.. _Sphinx: http://sphinx-doc.org/
.. _reStructuredText: http://docutils.sourceforge.net/rst.html
.. _Markdown: http://daringfireball.net/projects/markdown/
.. _Markdown Extra: http://michelf.ca/projects/php-markdown/extra/
.. _breathe: https://github.com/michaeljones/breathe
.. _doxylink: http://packages.python.org/sphinxcontrib-doxylink/
.. _`node.js http-sever`: https://npmjs.org/package/http-server
| 34.192053 | 137 | 0.735522 |
4cc1ee46af429ec9fe50f447399089b7fe0fb854 | 496 | rst | reStructuredText | doc/manpages/source/remotelist.rst | landonreed/GeoGit | c15062c16c08585362918fbcc51ba80c311ff736 | [
"BSD-3-Clause"
] | 1 | 2015-09-21T18:46:15.000Z | 2015-09-21T18:46:15.000Z | doc/manpages/source/remotelist.rst | jdgarrett/GeoGit | 56742f22e9c4def9445ad0ba3fb4e54d1eb2a76d | [
"BSD-3-Clause"
] | null | null | null | doc/manpages/source/remotelist.rst | jdgarrett/GeoGit | 56742f22e9c4def9445ad0ba3fb4e54d1eb2a76d | [
"BSD-3-Clause"
] | 1 | 2020-02-16T10:59:32.000Z | 2020-02-16T10:59:32.000Z |
.. _geogit-remote-list:
geogit-remote-list documentation
################################
SYNOPSIS
********
geogit remote list [-v]
DESCRIPTION
***********
Shows a list of existing remotes. With the ``-v`` option, the output is more descriptive and shows the remote URL after each name.
OPTIONS
*******
-v, --verbose			Be a little more verbose and show the remote URL after the name.
SEE ALSO
********
:ref:`geogit-remote-add`
:ref:`geogit-remote-remove`
BUGS
****
Discussion is still open.
| 13.777778 | 122 | 0.622984 |
81934f5aa1c742967d9556089eccfda14a2506b4 | 760 | rst | reStructuredText | denso_robot_bringup/CHANGELOG.rst | rizgiak/denso_robot_ros | 522f696528b0bf07419671a3f23eee7cff792d99 | [
"BSD-3-Clause"
] | 40 | 2017-11-24T15:50:17.000Z | 2021-12-21T02:29:20.000Z | denso_robot_bringup/CHANGELOG.rst | rizgiak/denso_robot_ros | 522f696528b0bf07419671a3f23eee7cff792d99 | [
"BSD-3-Clause"
] | 46 | 2017-12-08T11:49:24.000Z | 2022-03-19T12:12:16.000Z | denso_robot_bringup/CHANGELOG.rst | rizgiak/denso_robot_ros | 522f696528b0bf07419671a3f23eee7cff792d99 | [
"BSD-3-Clause"
] | 36 | 2017-12-04T10:36:25.000Z | 2022-03-08T19:48:11.000Z | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package denso_robot_bringup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3.2.0 (2021-06-02)
------------------
3.1.2 (2021-04-02)
------------------
3.1.1 (2021-03-03)
------------------
3.1.0 (2020-12-23)
------------------
* Add bcap_slave_control_cycle_msec to Parameter Server
3.0.4 (2019-11-27)
------------------
3.0.3 (2019-09-23)
------------------
* Change update_joint_limits.py `#23 <https://github.com/DENSORobot/denso_robot_ros/issues/23>`_
* Add joint_state_publisher
3.0.2 (2017-12-15)
------------------
* Change descriptions and add url, author
* Contributors: MIYAKOSHI Yoshihiro
Forthcoming
-----------
* update version to 3.0.0
* first commit
* Contributors: MIYAKOSHI Yoshihiro
| 21.111111 | 96 | 0.523684 |
85c0a9abb45ee16877790a1e42305e48b5598cf8 | 414 | rst | reStructuredText | doc/library-reference/boards/wemos_d1_mini.rst | gacha/simba | c31af7018990015784af0d84d427812db37917d8 | [
"MIT"
] | 325 | 2015-11-12T15:21:39.000Z | 2022-01-11T09:39:36.000Z | doc/library-reference/boards/wemos_d1_mini.rst | elektrik-elektronik-muhendisligi/simba | 6cf4b92db6a27bef70ceccb6204526e22dd8a863 | [
"MIT"
] | 216 | 2016-01-02T10:57:11.000Z | 2021-08-25T05:36:51.000Z | doc/library-reference/boards/wemos_d1_mini.rst | elektrik-elektronik-muhendisligi/simba | 6cf4b92db6a27bef70ceccb6204526e22dd8a863 | [
"MIT"
] | 101 | 2015-12-28T16:21:27.000Z | 2022-03-29T11:59:01.000Z | :mod:`wemos_d1_mini` --- WEMOS D1 Mini
======================================
.. module:: wemos_d1_mini
:synopsis: WEMOS D1 Mini.
Source code: :github-blob:`src/boards/wemos_d1_mini/board.h`,
:github-blob:`src/boards/wemos_d1_mini/board.c`
Hardware reference: :doc:`../../boards/wemos_d1_mini`
----------------------------------------------
.. doxygenfile:: boards/wemos_d1_mini/board.h
:project: simba
| 25.875 | 61 | 0.574879 |
6e11a7a768e759eadaec2bea15e8ff0cf9e0df0b | 2,458 | rst | reStructuredText | docs/index.rst | electrosaurus/Chronobiology | 0ec0755fa57d6c4e3d8b4047d4bf46759ac9c767 | [
"MIT"
] | null | null | null | docs/index.rst | electrosaurus/Chronobiology | 0ec0755fa57d6c4e3d8b4047d4bf46759ac9c767 | [
"MIT"
] | null | null | null | docs/index.rst | electrosaurus/Chronobiology | 0ec0755fa57d6c4e3d8b4047d4bf46759ac9c767 | [
"MIT"
] | 1 | 2021-07-19T15:02:52.000Z | 2021-07-19T15:02:52.000Z | Chronobiology
=============
A Python package to calculate and plot circadian cycle data.
Introduction
------------
.. image:: periodogram.png
:width: 400
Circadian rhythms are ~24 hour cycles of physiology and behaviour that occur in virtually all
organisms from bacteria to man. These rhythms are generated by an internal biological clock
and persist even in isolation from any external environmental cues.
In humans, circadian rhythms in activity are typically measured using calibrated wrist-worn
accelerometers. By contrast, the activity of laboratory animals is typically measured using home cage
running wheels. Circadian data are typically double plotted as actograms, showing activity across multiple days.
The circadian field has developed standard methods for analysing circadian rhythms. This primarily includes methods to
detect recurring features in the data, enabling the period length of activity cycles to be determined.
Under entrained conditions, this period will normally be determined by environmental zeitgebers.
A range of different methods are used to determine the underlying period in biological time series.
Three of the most commonly used are the Enright periodogram, Fourier analysis and the
Lomb-Scargle periodogram. In addition, activity onset is also frequently used to characterise phase
shifts in rhythms in response to environmental zeitgebers.
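As a standalone illustration of the Lomb-Scargle approach (a sketch built on
SciPy and simulated data, not on this package's own API):

.. code-block:: python

    import numpy as np
    from scipy.signal import lombscargle

    # Simulated activity with a free-running period of ~23.5 h,
    # sampled unevenly over ten days.
    rng = np.random.default_rng(seed=0)
    t = np.sort(rng.uniform(0, 240, 500))  # sampling times in hours
    activity = np.cos(2 * np.pi * t / 23.5) + 0.3 * rng.standard_normal(t.size)

    # Scan candidate periods between 20 h and 28 h; lombscargle expects
    # angular frequencies.
    periods = np.linspace(20, 28, 400)
    power = lombscargle(t, activity - activity.mean(), 2 * np.pi / periods)

    print('Estimated period: %.2f h' % periods[np.argmax(power)])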
Circadian disruption may occur as a result of environmental conditions.
This includes misalignment (when two or more rhythms adopt an abnormal phase relationship)
and desynchrony (when two or more rhythms exhibit a different period).
A range of approaches have been used to assess circadian disruption.
These methods range from simple visual inspection of actograms to metrics such as periodogram
power, variability in activity onset, light phase activity, activity bouts, interdaily stability, intradaily
variability and relative amplitude.
This package provides a set of tools to calculate and plot these parameters based on activity
measurements for further inspection and analysis.
For more theory see the following paper:
:download:`Telling the Time with a Broken Clock: Quantifying Circadian Disruption in Animal Models <paper.pdf>`
.. toctree::
:caption: Contents:
:maxdepth: 4
storing_data
selecting_data
analyzing_data
chronobiology
Indices and tables
==================
* :ref:`genindex`
* :doc:`modules_index`
* :doc:`downloads_index`
| 43.122807 | 118 | 0.803092 |
6aeecb1b2da725f72dfd4eb75489791128771c30 | 35,773 | rst | reStructuredText | docs/resultdictionary.rst | johnp/privacyscanner | b79a96f4776be69779328908508980f995c6649f | [
"MIT"
] | 21 | 2018-05-11T16:32:30.000Z | 2021-08-04T09:01:50.000Z | docs/resultdictionary.rst | johnp/privacyscanner | b79a96f4776be69779328908508980f995c6649f | [
"MIT"
] | 29 | 2018-05-11T18:07:36.000Z | 2021-03-31T11:09:37.000Z | docs/resultdictionary.rst | johnp/privacyscanner | b79a96f4776be69779328908508980f995c6649f | [
"MIT"
] | 12 | 2018-05-11T15:50:14.000Z | 2020-10-30T17:02:52.000Z | The result dictionary
=====================
Privacyscanner provides most of its results in a larger JSON object.
The current dictionary format
-----------------------------
The current result dictionary is somewhat unstructured and contains pieces of
information that are not necessary. It will be replaced in the future. See the
following table for the result dictionary's keys:
+---------------------------------------+------------------+-------------+---------+
| Key | Type | Scan module | Remarks |
+=======================================+==================+=============+=========+
| reachable | boolean | network | |
+---------------------------------------+------------------+-------------+---------+
| final_url | string | network | |
+---------------------------------------+------------------+-------------+---------+
| https | boolean | network | |
+---------------------------------------+------------------+-------------+---------+
| final_url_is_https | boolean | network | |
+---------------------------------------+------------------+-------------+---------+
| mx_a_records | list[mxarecord] | network | |
+---------------------------------------+------------------+-------------+---------+
| a_records | list[ip] | network | |
+---------------------------------------+------------------+-------------+---------+
| a_locations | list[string] | network | |
+---------------------------------------+------------------+-------------+---------+
| mx_records | list[mxrecord] | network | |
+---------------------------------------+------------------+-------------+---------+
| mx_locations | list[string] | network | |
+---------------------------------------+------------------+-------------+---------+
| a_records_reverse | list[reversea] | network | |
+---------------------------------------+------------------+-------------+---------+
| mx_a_records_reverse | list[mxreversea] | network | |
+---------------------------------------+------------------+-------------+---------+
| final_https_url | string | network | |
+---------------------------------------+------------------+-------------+---------+
| tracker_requests_elapsed_seconds | float | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| third_party_requests | list[request] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| redirected_to_https | boolean | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| initial_url | string | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| third_parties_count | integer | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| flashcookies | list[string] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| responses | list[response] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| third_parties | list[string] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| google_analytics_present | boolean | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| google_analytics_anonymizeIP_set | boolean | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| google_analytics_anonymize_IP_not_set | integer | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| cookie_stats | cookiestats | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| openwpm_final_url | string | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| mixed_content | boolean | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| headerchecks | headerchecks | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| third_party_requests_count | integer | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| requests | list[request] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| cookies_count | integer | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| requests_count | integer | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| tracker_requests | list[request] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| success | boolean | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| profilecookies | list[cookie] | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| flashcookies_count | integer | openwpm | |
+---------------------------------------+------------------+-------------+---------+
| leaks | list[string] | serverleaks | |
+---------------------------------------+------------------+-------------+---------+
| web_either_crl_or_ocsp_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_default_cipher_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_default_cipher_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_session_ticket | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_caa_record_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_strong_keysize_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_hsts_preload | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_1_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_hsts_header | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_strong_sig_algorithm | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_sslv2 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_2 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_certificate_transparency_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_2_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_sslv2_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_offers_ocsp | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_default_protocol | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_ocsp_must_staple | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_hsts_header_sufficient_time | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_session_ticket_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_testssl_missing_ids | list[string] | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_default_cipher | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_strong_keysize | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_vulnerabilities | vulnerabilities | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_ciphers | ciphers | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_2_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_default_protocol_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_san_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_cert_trusted_reason | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_cipher_order_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_certificate_not_expired | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_either_crl_or_ocsp | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_ocsp_stapling | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_ocsp_must_staple_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_pfs | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_caa_record | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_cipher_order | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_session_ticket_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_pfs_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_valid_san_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_1 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_sslv3 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_ssl | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_sslv3_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_certificate_transparency | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_3 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_1_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_keysize | integer | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_valid_san | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_hpkp_header | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_3_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_3_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_sslv2_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_default_protocol_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_sslv3_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_strong_sig_algorithm_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_ocsp_stapling_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_sig_algorithm | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_certificate_not_expired_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_hsts_preload_header | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_has_protocol_tls1_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| web_cert_trusted | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_ssl | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_sslv3_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_strong_keysize | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_san_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_either_crl_or_ocsp | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_string_sig_algorithm | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_certificate_not_expired_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_sslv3_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_ssl_finished | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_session_ticket_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_certificate_transparency | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_3_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_default_protocol | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_sslv2_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_ocsp_stapling | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_2 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_default_cipher_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_ocsp_must_staple_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_sslv2 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_valid_san_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_caa_record | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_1_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_cipher_order_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_strong_sig_algorithm_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_1_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_session_ticket_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_2_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_strong_keysize_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_cert_trusted_reason | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_certificate_transparency_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_sslv3 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_default_cipher_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_cert_trusted | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_either_crl_or_ocsp_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_cipher_order | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_default_cipher | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_session_ticket | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_certificate_not_expired | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_valid_san | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_ciphers | ciphers | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_default_protocol_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_keysize | integer | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_caa_record_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_ocsp_stapling_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_default_protocol_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_testssl_missing_ids | list[string] | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_offers_ocsp | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_sslv2_finding | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_3 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_vulnerabilities | vulnerabilities | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_pfs_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_sig_algorihm | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_3_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_1 | boolean | testssl | |
+---------------------------------------+------------------+-------------+---------+
| mx_has_protocol_tls1_2_severity | string | testssl | |
+---------------------------------------+------------------+-------------+---------+
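To give an impression of how these keys are used, here is a small sketch that
loads a stored scan result and prints a few of the fields from the table above
(the file name ``result.json`` is an assumption)::

    import json

    with open('result.json') as f:
        result = json.load(f)

    if result.get('reachable'):
        print('Final URL:', result['final_url'])
        print('Served over HTTPS:', result['https'])
        print('Third parties contacted:', result.get('third_parties_count', 0))
        print('Tracker requests:', len(result.get('tracker_requests', [])))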
The response object
^^^^^^^^^^^^^^^^^^^
+----------------------+--------------+-------------------------------------+
| Key | Type | Remarks |
+======================+==============+=====================================+
| method | string | GET, POST etc. |
+----------------------+--------------+-------------------------------------+
| url | string | |
+----------------------+--------------+-------------------------------------+
| time_stamp | string | Example: "2018-05-04T16:09:07.897Z" |
+----------------------+--------------+-------------------------------------+
| response_status_text | string | |
+----------------------+--------------+-------------------------------------+
| referrer | string | |
+----------------------+--------------+-------------------------------------+
| headers | list[header] | |
+----------------------+--------------+-------------------------------------+
| response_status | integer | |
+----------------------+--------------+-------------------------------------+
The header object
^^^^^^^^^^^^^^^^^
The header object is a two-element list containing the header name as the first
element and the header value as the second element, for example
``["Content-Type", "text/html"]``.
The cookiestats object
^^^^^^^^^^^^^^^^^^^^^^
The cookiestats object contains various pieces of information about cookies.
+---------------------------+--------------+----------------------------------------------------+
| Key | Type | Explanation |
+===========================+==============+====================================================+
| third_party_flash | integer | Third-party flash cookies |
+---------------------------+--------------+----------------------------------------------------+
| first_party_long | integer | First-party cookies with a long runtime (??? days) |
+---------------------------+--------------+----------------------------------------------------+
| third_party_short | integer | Third-party cookies with a short runtime (???) |
+---------------------------+--------------+----------------------------------------------------+
| third_party_track_domains | list[string] | ??? |
+---------------------------+--------------+----------------------------------------------------+
| first_party_abort | integer | ??? |
+---------------------------+--------------+----------------------------------------------------+
| third_party_track | integer | ??? |
+---------------------------+--------------+----------------------------------------------------+
| first_party_flash | integer | ??? |
+---------------------------+--------------+----------------------------------------------------+
| third_party_track_uniq | integer | ??? |
+---------------------------+--------------+----------------------------------------------------+
| third_party_long | integer | Third-party cookies with long runtime (???) |
+---------------------------+--------------+----------------------------------------------------+
The headerchecks object
^^^^^^^^^^^^^^^^^^^^^^^
The headerchecks object holds pieces of information about security related headers.
The object's key contains the header name, while the value contains the information
object. The information object has the keys "status" and "value" (both strings). See
the following example::
{
"content-security-policy": {
"status": "MISSING",
"value": ""
}
}
The following headers (i.e. keys of the headerchecks object) are supported:
* x-powered-by
* referrer-policy
* content-security-policy
* server
* x-content-type-options
* x-frame-options
* x-xss-protection
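Continuing the sketch above, the security headers that a scan reported as
missing can be listed like this (``result`` is the loaded result dictionary)::

    missing = [
        header
        for header, check in result.get('headerchecks', {}).items()
        if check['status'] == 'MISSING'
    ]
    print('Missing security headers:', ', '.join(missing) or 'none')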
The request object
^^^^^^^^^^^^^^^^^^
+----------+--------+-----------------------------------------------+
| Key | Type | Remark |
+==========+========+===============================================+
| method | string | HTTP method (GET/POST/...) |
+----------+--------+-----------------------------------------------+
| headers | string | JSON encoded headers as string (yes, really!) |
+----------+--------+-----------------------------------------------+
| url | string | |
+----------+--------+-----------------------------------------------+
| referrer | string | |
+----------+--------+-----------------------------------------------+
The ciphers object
^^^^^^^^^^^^^^^^^^
The ciphers object contains various cipher groups as keys and an information
object as value. The information object contains a key "finding" and a key
"severity". The following cipher groups are available:
* std_3DES
* std_HIGH
* std_128Bit
* std_EXPORT
* std_NULL
* std_DES+64Bit
* std_aNULL
* std_STRONG
The vulnerabilities object
^^^^^^^^^^^^^^^^^^^^^^^^^^
The vulnerabilities object contains various TLS-based vulnerabilities as keys
and an information object as value. The information object contains the following
keys: finding, cve, severity (all strings). The following vulnerabilities are
supported:
* LOGJAM_common_primes
* sec_client_renego
* beast
* secure_renego
* drown
* breach
* lucky13
* sweet32
* ccs
* ticketbleed
* rc4
* heartbleed
* crime
* freak
* poodle_ssl
* logjam
The mxarecord list
^^^^^^^^^^^^^^^^^^
The mxarecord list contains two elements. The first element is the priority of
the MX record. The second element is a list of IP addresses. To fill that list,
each MX record's host name is resolved to its A records.
Example::
[10, ["127.0.0.1", "127.0.1.1"]]
The cookie object
^^^^^^^^^^^^^^^^^
+--------------+---------+-----------------------+
| Key | Type | Remark |
+==============+=========+=======================+
| accessed | integer | ??? |
+--------------+---------+-----------------------+
| creationTime | integer | |
+--------------+---------+-----------------------+
| name | string | |
+--------------+---------+-----------------------+
| value | string | |
+--------------+---------+-----------------------+
| expiry | integer | |
+--------------+---------+-----------------------+
| baseDomain | string | |
+--------------+---------+-----------------------+
| path | string | |
+--------------+---------+-----------------------+
| host | string | |
+--------------+---------+-----------------------+
| isHttpOnly | integer | Yes, it is not a boolean |
+--------------+---------+-----------------------+
| isSecure | integer | Yes, it is not a boolean |
+--------------+---------+-----------------------+
The future result dictionary
----------------------------
It is not decided yet how this will look like. However, there are already
some ideas what to change:
* All web_* and mx_* entries from testssl should move to own on dictionary
without prefix. Those dictionary will be named tls_web and tls_mail.
* Remove the findings keys for testssl checks. If there are static strings,
remove them without substitution. Otherwise provide a new key with the
information provided in the finding (with the value only, not containing
formatting or english sentences)
* Remove the severity keys for testssl checks. Either convert them into
booleans or concrete numbers to evaluate oneself (e.g. key size)
* Google Analytics detection will be an own dictionary
| 67.496226 | 97 | 0.247281 |
a2a49e3de6512e1fb2192daf9798c9ebab620d5e | 958 | rst | reStructuredText | options/built-in/datetime-picker.rst | yellowcoma/Unyson-Documentation | 18fddf4824cc47d39333bc2c6694774d3bfa70db | [
"CC-BY-3.0"
] | 7 | 2016-01-21T08:26:04.000Z | 2019-04-04T20:38:28.000Z | options/built-in/datetime-picker.rst | yellowcoma/Unyson-Documentation | 18fddf4824cc47d39333bc2c6694774d3bfa70db | [
"CC-BY-3.0"
] | 15 | 2015-01-31T12:16:02.000Z | 2018-03-13T13:37:32.000Z | options/built-in/datetime-picker.rst | yellowcoma/Unyson-Documentation | 18fddf4824cc47d39333bc2c6694774d3bfa70db | [
"CC-BY-3.0"
] | 36 | 2015-11-13T21:25:09.000Z | 2022-02-09T03:11:41.000Z | Datetime Picker
---------------
Pick a datetime in calendar.
.. code-block:: php
array(
'type' => 'datetime-picker',
'value' => '',
'attr' => array( 'class' => 'custom-class', 'data-foo' => 'bar' ),
'label' => __('Label', '{domain}'),
'desc' => __('Description', '{domain}'),
'help' => __('Help tip', '{domain}'),
'datetime-picker' => array(
        'format' => 'Y/m/d H:i', // Datetime format.
        'maxDate' => false, // No maximum date by default; set a date in the datetime format.
        'minDate' => false, // By default the minimum date is the current day; set a date in the datetime format.
        'timepicker' => true, // Show timepicker.
        'datepicker' => true, // Show datepicker.
        'defaultTime' => '12:00' // If the input value is empty, the timepicker will use defaultTime.
),
) | 41.652174 | 121 | 0.509395 |
e76f161eab04050cc755fd35c6039d22d2dcf4e2 | 9,936 | rst | reStructuredText | docs/source/model_doc/luke.rst | Mechachleopteryx/transformers | da7aabf2ca63dede3e1891b8cb9fb2dddbd9820e | [
"Apache-2.0"
] | 2 | 2021-12-08T04:15:09.000Z | 2022-03-08T22:29:08.000Z | docs/source/model_doc/luke.rst | liugj101/transformers | 6a025487a63a206f2438b1dab426c5c8adc36144 | [
"Apache-2.0"
] | 2 | 2021-12-02T06:10:07.000Z | 2021-12-16T14:24:26.000Z | docs/source/model_doc/luke.rst | liugj101/transformers | 6a025487a63a206f2438b1dab426c5c8adc36144 | [
"Apache-2.0"
] | 1 | 2021-12-27T17:22:52.000Z | 2021-12-27T17:22:52.000Z | ..
Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
LUKE
-----------------------------------------------------------------------------------------------------------------------
Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The LUKE model was proposed in `LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
<https://arxiv.org/abs/2010.01057>`_ by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto.
It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps
improve performance on various downstream tasks involving reasoning about entities such as named entity recognition,
extractive and cloze-style question answering, entity typing, and relation classification.
The abstract from the paper is the following:
*Entity representations are useful in natural language tasks involving entities. In this paper, we propose new
pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed
model treats words and entities in a given text as independent tokens, and outputs contextualized representations of
them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves
predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also
propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the
transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model
achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains
state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification),
CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question
answering).*
Tips:
- This implementation is the same as :class:`~transformers.RobertaModel` with the addition of entity embeddings as well
as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities.
- LUKE treats entities as input tokens; therefore, it takes :obj:`entity_ids`, :obj:`entity_attention_mask`,
:obj:`entity_token_type_ids` and :obj:`entity_position_ids` as extra input. You can obtain those using
:class:`~transformers.LukeTokenizer`.
- :class:`~transformers.LukeTokenizer` takes :obj:`entities` and :obj:`entity_spans` (character-based start and end
positions of the entities in the input text) as extra input. :obj:`entities` typically consist of [MASK] entities or
Wikipedia entities. A brief description of each way of inputting these entities follows:
- *Inputting [MASK] entities to compute entity representations*: The [MASK] entity is used to mask entities to be
predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by
gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address
downstream tasks requiring the information of entities in text such as entity typing, relation classification, and
named entity recognition.
- *Inputting Wikipedia entities to compute knowledge-enhanced token representations*: LUKE learns rich information
(or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By
using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in
the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as
question answering.
- There are three head models for the former use case:
- :class:`~transformers.LukeForEntityClassification`, for tasks to classify a single entity in an input text such as
entity typing, e.g. the `Open Entity dataset <https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html>`__.
This model places a linear head on top of the output entity representation.
- :class:`~transformers.LukeForEntityPairClassification`, for tasks to classify the relationship between two entities
such as relation classification, e.g. the `TACRED dataset <https://nlp.stanford.edu/projects/tacred/>`__. This
model places a linear head on top of the concatenated output representation of the pair of given entities.
- :class:`~transformers.LukeForEntitySpanClassification`, for tasks to classify the sequence of entity spans, such as
named entity recognition (NER). This model places a linear head on top of the output entity representations. You
can address NER using this model by inputting all possible entity spans in the text to the model.
:class:`~transformers.LukeTokenizer` has a ``task`` argument, which enables you to easily create an input to these
head models by specifying ``task="entity_classification"``, ``task="entity_pair_classification"``, or
``task="entity_span_classification"``. Please refer to the example code of each head models.
There are also 3 notebooks available, which showcase how you can reproduce the results as reported in the paper with
the HuggingFace implementation of LUKE. They can be found `here
<https://github.com/studio-ousia/luke/tree/master/notebooks>`__.
Example:
.. code-block::
>>> from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification
>>> model = LukeModel.from_pretrained("studio-ousia/luke-base")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"
>>> text = "Beyoncé lives in Los Angeles."
>>> entity_spans = [(0, 7)] # character-based entity span corresponding to "Beyoncé"
>>> inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
>>> entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations
>>> entities = ["Beyoncé", "Los Angeles"] # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
>>> entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
>>> inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
>>> outputs = model(**inputs)
>>> word_last_hidden_state = outputs.last_hidden_state
>>> entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model
>>> model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
>>> tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
>>> entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
>>> inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_class_idx = int(logits[0].argmax())
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
This model was contributed by `ikuyamada <https://huggingface.co/ikuyamada>`__ and `nielsr
<https://huggingface.co/nielsr>`__. The original code can be found `here <https://github.com/studio-ousia/luke>`__.
LukeConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeConfig
:members:
LukeTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeTokenizer
:members: __call__, save_vocabulary
LukeModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeModel
:members: forward
LukeForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeForMaskedLM
:members: forward
LukeForEntityClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeForEntityClassification
:members: forward
LukeForEntityPairClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeForEntityPairClassification
:members: forward
LukeForEntitySpanClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.LukeForEntitySpanClassification
:members: forward
| 59.855422 | 137 | 0.692733 |
9adf478dee4cfcc92595963c72ce4f9f2b2b4e1f | 2,298 | rst | reStructuredText | docs/source/ossupport.rst | jamesabel/osnap | fc3f2affc3190d91f0465c35971e8f877269d5fd | [
"MIT"
] | 57 | 2016-08-20T03:15:50.000Z | 2021-02-20T10:06:48.000Z | docs/source/ossupport.rst | jamesabel/osnap | fc3f2affc3190d91f0465c35971e8f877269d5fd | [
"MIT"
] | 21 | 2016-09-15T20:40:58.000Z | 2020-10-27T23:22:55.000Z | docs/source/ossupport.rst | jamesabel/osnap | fc3f2affc3190d91f0465c35971e8f877269d5fd | [
"MIT"
] | 4 | 2016-11-02T22:06:13.000Z | 2020-09-09T07:21:34.000Z |
OS Support
==========
Background
----------
The philosophy around ``OSNAP`` is to bundle an embedded Python environment (interpreter) with your application.
Windows and OSX/MacOS [1]_ are supported. However, the support for embedded
Python currently differs considerably between the two operating systems
(hopefully these will converge in the future).
Windows
-------
In Python 3.5, `Steve Dower added an embedded Python zip <https://blogs.msdn.microsoft.com/pythonengineering/2016/04/26/cpython-embeddable-zip-file/>`_
to the general distribution on python.org. This makes embedding Python in an application fairly straightforward.
So, this is used directly by ``OSNAP``.
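For reference, fetching such an embeddable distribution by hand looks roughly
like this (version, URL and target directory are illustrative; ``OSNAP``
automates this step):

.. code-block:: console

    > wget https://www.python.org/ftp/python/3.6.5/python-3.6.5-embed-amd64.zip
    > unzip python-3.6.5-embed-amd64.zip -d osnapy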
OSX/MacOS
---------
As of this writing there is no embedded Python for Mac in the general
distribution. ``OSNAP`` has two techniques to fill this gap, each with its own
pros and cons:
Compilation
^^^^^^^^^^^
This technique compiles Python as part of the creation of the Python environment (what's in the ``osnapy``
directory). Mac compilation of Python requires absolute path names, so we predetermine the path that ``osnapy``
will be on the end user's system - i.e. ``/Applications/<application name>.app/Contents/MacOS/osnapy/`` - and
compile and "install" into that location. The pros/cons are:
Pros:
- This should be a complete solution since we have a regular Python environment : the Python interpreter, pip, etc.
- All the tools are generally available and free.
Cons:
- We have to compile.
- We need to install tools/libraries like XCode and OpenSSL.
- There is always a chance that compilation doesn't work for some reason.
- It's compiling (actually installing) into the /Applications directory, which requires root (sudo) for part of it.
eGenix™ PyRun™
^^^^^^^^^^^^^^
This uses `eGenix PyRun <http://www.egenix.com/products/python/PyRun/>`_, which is essentially an embedded
Python environment. The pros/cons are:
Pros:
- Prebuilt
- Compact
- Easy to use (like the Windows Embedded Python)
Cons:
- Is not necessarily 100% compatible with the general Python distribution. May not work with all packages.
Current Default
^^^^^^^^^^^^^^^
In order to support the widest range of end user applications, currently the compilation technique is the default.
.. [1] Here we are using OSX and MacOS interchangeably. | 37.672131 | 151 | 0.748477 |
5a018ac1956f055e89b8e248dd49e50e70851020 | 2,854 | rst | reStructuredText | docs/catalog/triggers/elasticsearch.rst | Rutam21/robusta | 7c918d96362f607488c0e7e0056f436a06dce4ae | [
"MIT"
] | 273 | 2021-12-28T20:48:48.000Z | 2022-03-31T16:03:13.000Z | docs/catalog/triggers/elasticsearch.rst | Rutam21/robusta | 7c918d96362f607488c0e7e0056f436a06dce4ae | [
"MIT"
] | 103 | 2022-01-10T11:45:47.000Z | 2022-03-31T16:31:11.000Z | docs/catalog/triggers/elasticsearch.rst | Rutam21/robusta | 7c918d96362f607488c0e7e0056f436a06dce4ae | [
"MIT"
] | 35 | 2021-12-30T15:30:14.000Z | 2022-03-28T11:43:57.000Z | Elasticsearch
#########################
Robusta actions can run in response to `Elasticsearch/Kibana watchers <https://www.elastic.co/guide/en/elasticsearch/reference/current/how-watcher-works.html>`_
by using `Elasticsearch webhook actions <https://www.elastic.co/guide/en/elasticsearch/reference/current/actions-webhook.html>`_.
A common use case is gathering troubleshooting data with Robusta when pods write specific error logs.
Robusta Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. The Robusta-relay must be enabled so that it can route Elasticsearch webhooks to the appropriate Robusta runner
2. The following variables must be defined in your Helm values file:
.. code-block:: yaml
globalConfig:
account_id: "" # your official Robusta account_id
signing_key: "" # a secret key used to verify the identity of Elasticsearch
You do **not** define playbooks for Elasticsearch triggers in ``values.yaml``. Instead the playbook is defined
entirely on the Elasticsearch side.
Example Elasticsearch Watcher
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following Elasticsearch Watcher configuration will trigger a Robusta playbook.
Make sure you update ``<account_id>``, ``<cluster_name>``, and ``<secret_key>`` in the emphasized lines.
These should match the Robusta Helm chart values.
.. code-block:: json
:emphasize-lines: 26,27,33
{
"trigger": {
"schedule": {
"interval": "30m"
}
},
"input": {
"simple": {
"str": "val1",
"obj": {
"str": "val2"
},
"num": 23
}
},
"condition": {
"always": {}
},
"actions": {
"robusta_webhook": {
"throttle_period_in_millis": 0,
"transform": {
"script": {
"source": """
return ['body' :
['account_id' : 'some_account',
'cluster_name' : 'gke_arabica-300319_us-central1-c_cluster-5',
'origin' : 'elasticsearch',
'action_name' : 'echo',
'action_params' : ['message' : 'Hello Robusta!'],
'sinks' : ['slack']
],
'key' : 'very_secret']""",
"lang": "painless"
}
},
"webhook": {
"scheme": "https",
"host": "api.robusta.dev",
"port": 443,
"method": "post",
"path": "/integrations/generic/actions_with_key",
"params": {},
"headers": {},
"body": "{{#toJson}}ctx.payload{{/toJson}}"
}
}
}
}
.. note::
    Most Robusta actions can be triggered in this manner. Try changing ``action_name`` and ``action_params`` above to trigger a different action.
| 32.804598 | 160 | 0.547302 |
41982ca106d0398e533cb0309ec18ef385a8b421 | 228 | rst | reStructuredText | step_orgmapper/step_orgmapper_find_user_by_email.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | 1 | 2020-02-02T21:57:47.000Z | 2020-02-02T21:57:47.000Z | step_orgmapper/step_orgmapper_find_user_by_email.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | null | null | null | step_orgmapper/step_orgmapper_find_user_by_email.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | null | null | null | .. This is an included how-to.
.. To find a user based on an email address:
.. code-block:: ruby
orgmapper:0 > USERS.select{|u| u.email == 'user@company.com'}
where ``user@company.com`` is the email address for the user. | 25.333333 | 64 | 0.675439 |
dba0b84485237097a377cbfe71198efca1e9189a | 989 | rst | reStructuredText | docs/source/index.rst | ZaneMuir/NeuroAnalysis | 7376b9f5aeed2ba283fc09b4239dfeca71660508 | [
"MIT"
] | null | null | null | docs/source/index.rst | ZaneMuir/NeuroAnalysis | 7376b9f5aeed2ba283fc09b4239dfeca71660508 | [
"MIT"
] | 1 | 2018-05-01T10:57:50.000Z | 2018-05-01T10:57:50.000Z | docs/source/index.rst | ZaneMuir/NeuroAnalysis | 7376b9f5aeed2ba283fc09b4239dfeca71660508 | [
"MIT"
] | null | null | null | .. neuroanalysis documentation master file, created by
sphinx-quickstart on Tue May 1 18:41:53 2018.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to neuroanalysis's documentation!
=========================================
Basic analysis methods for spike trains sorted from electrodes and fluorescence recorded from
two-photon imaging.
Quick Start
-----------
for python3 module:
.. code-block:: bash
$ git clone git@github.com:ZaneMuir/NeuroAnalysis.git
$ cd NeuroAnalysis
$ python3 setup.py install
$ pip3 install -r requirements.txt
for julia module (in julia REPL):
.. code-block:: julia
>>> Pkg.clone("https://github.com/ZaneMuir/NeuralModel.jl.git")
Contents
--------
.. toctree::
:maxdepth: 2
:glob:
install
workflow_mea
workflow_tpi
module/module
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 20.183673 | 93 | 0.658241 |
3baf655f42400985929bd18d55f22da91dbea641 | 680 | rst | reStructuredText | docs/src/cli/cmd/remote_log.rst | CloudSurgeon/titan | a6ccf6dda6a755a9fc4ee90ddf123e8f37d77698 | [
"Apache-2.0"
] | null | null | null | docs/src/cli/cmd/remote_log.rst | CloudSurgeon/titan | a6ccf6dda6a755a9fc4ee90ddf123e8f37d77698 | [
"Apache-2.0"
] | null | null | null | docs/src/cli/cmd/remote_log.rst | CloudSurgeon/titan | a6ccf6dda6a755a9fc4ee90ddf123e8f37d77698 | [
"Apache-2.0"
] | null | null | null | .. _cli_cmd_remote_log:
titan remote log
================
List commits in a remote. For more information on managing remotes, see
the :ref:`remote` section.
Syntax
------
::
titan remote log [-r remote] <repository>
Arguments
---------
repository
*Required*. The name of the target repository.
Options
-------
-r, --remote remote Optional remote name. If not provided, then the name
'origin' is assumed.
Example
-------
::
$ titan remote log hello-world
Remote: origin
Commit 0f53a6a4-90ff-4f8c-843a-a6cce36f4f4f
User: Eric Schrock
Email: Eric.Schrock@delphix.com
Date: 2019-09-20T13:45:38Z
demo data
| 16.585366 | 76 | 0.630882 |
3cd632aee0636ff0910beb2d46c05e1262715aa2 | 2,473 | rst | reStructuredText | src/_includes/responses/cities/list.rst | makemusicday/gemini-api-docs | 9831a9825628e70a4a44d2142d99412a8522fa29 | [
"MIT"
] | null | null | null | src/_includes/responses/cities/list.rst | makemusicday/gemini-api-docs | 9831a9825628e70a4a44d2142d99412a8522fa29 | [
"MIT"
] | null | null | null | src/_includes/responses/cities/list.rst | makemusicday/gemini-api-docs | 9831a9825628e70a4a44d2142d99412a8522fa29 | [
"MIT"
] | null | null | null | .. code-block:: json
[
{
"_links": {
"self": {
"href": "/api/cities/nf",
"method": "GET",
"title": "Full city information."
}
},
"accent_colour": "#da5616",
"base_url": "https://nf.makemusicday.org",
"contact_us_url": "http://makemusicday.org/western-ny",
"country": "USA",
"facebook_url": null,
"font": null,
"id": "88b9de68-f4de-46c0-b5cd-29d800d059be",
"instagram_url": null,
"latitude": 43.096214,
"locale": "en_US",
"logo_url": "https://da7jxvkvc73ty.cloudfront.net/wp-content/uploads/2016/02/NiagaraFalls-150x150.jpg",
"longitude": -79.037739,
"map_zoom_level": 10,
"name": "Niagara Falls",
"primary_button_colour": "#009faf",
"secondary_button_colour": "#99b0b2",
"slug": "nf",
"terms": null,
"timezone": "America/New_York",
"twitter_url": null,
"url": "http://makemusicday.org/western-ny",
"youtube_url": null
},
{
"_links": {
"self": {
"href": "/api/cities/nicholasville",
"method": "GET",
"title": "Full city information."
}
},
"accent_colour": "#da5616",
"base_url": "https://nicholasville.makemusicday.org",
"contact_us_url": "http://www.makemusicday.org/nicholasville/",
"country": "USA",
"facebook_url": null,
"font": null,
"id": "8bd511bf-bd00-45f2-80c3-fa7131e74b7d",
"instagram_url": null,
"latitude": 37.880371,
"locale": "en_US",
"logo_url": "https://da7jxvkvc73ty.cloudfront.net/wp-content/uploads/2018/04/nicholasville-150x150.jpg",
"longitude": -84.573021,
"map_zoom_level": null,
"name": "Nicholasville",
"primary_button_colour": "#009faf",
"secondary_button_colour": "#99b0b2",
"slug": "nicholasville",
"terms": null,
"timezone": "America/New_York",
"twitter_url": null,
"url": "http://www.makemusicday.org/nicholasville/",
"youtube_url": null
}
]
| 36.910448 | 116 | 0.469066 |
ebfc931cd9cab7b4a7a536ea734240376936a1b7 | 879 | rst | reStructuredText | docs/index.rst | bakera81/siuba | 568729989333193ff38c26ac68604aa8ba9b490b | [
"MIT"
] | null | null | null | docs/index.rst | bakera81/siuba | 568729989333193ff38c26ac68604aa8ba9b490b | [
"MIT"
] | null | null | null | docs/index.rst | bakera81/siuba | 568729989333193ff38c26ac68604aa8ba9b490b | [
"MIT"
] | null | null | null | .. toctree::
:maxdepth: 2
:hidden:
intro.Rmd
intro_sql_basic.ipynb
intro_sql_interm.ipynb
developer/index.rst
.. toctree::
:maxdepth: 2
:caption: Core One-table Verbs
:hidden:
:glob:
api_table_core/*
.. toctree::
:maxdepth: 2
:caption: Other One-table Verbs
:hidden:
:glob:
api_table_other/*
.. toctree::
:maxdepth: 2
:caption: Two-table Verbs
:hidden:
:glob:
api_table_two/*
.. toctree::
:maxdepth: 2
:caption: Tidy Verbs
:hidden:
:glob:
api_tidy/*
Siuba
=====
.. image:: siuba.svg
:width: 400px
Siuba is a library for quick, scrappy data analysis in Python.
It is a port of
`dplyr <https://dplyr.tidyverse.org>`_,
`tidyr <https://tidyr.tidyverse.org>`_,
and other R Tidyverse libraries.
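To get a feel for the verb syntax, here is a minimal example (``mtcars`` ships
with siuba's bundled example datasets; columns follow the R original):

.. code-block:: python

    from siuba import _, group_by, summarize
    from siuba.data import mtcars

    (mtcars
        >> group_by(_.cyl)
        >> summarize(avg_hp=_.hp.mean()))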
Getting started:
* Introduction to Siuba
* tidytuesday-py examples
| 14.898305 | 62 | 0.629124 |
02336d32169fb5a39c390594ecb5ff6762d252c2 | 33,377 | rst | reStructuredText | docs/forms.rst | uussoft/mallie-hr | 88b2aa0231b281c16b6a20b17aa2647efa682128 | [
"MIT"
] | null | null | null | docs/forms.rst | uussoft/mallie-hr | 88b2aa0231b281c16b6a20b17aa2647efa682128 | [
"MIT"
] | 6 | 2020-06-15T14:14:10.000Z | 2022-02-19T02:15:14.000Z | docs/forms.rst | uussoft/mallie-hr | 88b2aa0231b281c16b6a20b17aa2647efa682128 | [
"MIT"
] | null | null | null | .. index::
single: Forms
Forms
=====
.. admonition:: Screencast
:class: screencast
Do you prefer video tutorials? Check out the `Symfony Forms screencast series`_.
Creating and processing HTML forms is hard and repetitive. You need to deal with
rendering HTML form fields, validating submitted data, mapping the form data
into objects and a lot more. Symfony includes a powerful form feature that
provides all these features and many more for truly complex scenarios.
Installation
------------
In applications using :ref:`Symfony Flex <symfony-flex>`, run this command to
install the form feature before using it:
.. code-block:: terminal
$ composer require symfony/form
Usage
-----
The recommended workflow when working with Symfony forms is the following:
#. **Build the form** in a Symfony controller or using a dedicated form class;
#. **Render the form** in a template so the user can edit and submit it;
#. **Process the form** to validate the submitted data, transform it into PHP
data and do something with it (e.g. persist it in a database).
Each of these steps is explained in detail in the next sections. To make
examples easier to follow, all of them assume that you're building a simple Todo
list application that displays "tasks".
Users create and edit tasks using Symfony forms. Each task is an instance of the
following ``Task`` class::
// src/Entity/Task.php
namespace App\Entity;
class Task
{
protected $task;
protected $dueDate;
public function getTask()
{
return $this->task;
}
public function setTask($task)
{
$this->task = $task;
}
public function getDueDate()
{
return $this->dueDate;
}
public function setDueDate(\DateTime $dueDate = null)
{
$this->dueDate = $dueDate;
}
}
This class is a "plain-old-PHP-object" because, so far, it has nothing to do
with Symfony or any other library. It's a normal PHP object that directly solves
a problem inside *your* application (i.e. the need to represent a task in your
application). But you can also edit :doc:`Doctrine entities </doctrine>` in the
same way.
.. _form-types:
Form Types
~~~~~~~~~~
Before creating your first Symfony form, it's important to understand the
concept of "form type". In other projects, it's common to differentiate between
"forms" and "form fields". In Symfony, all of them are "form types":
* a single ``<input type="text">`` form field is a "form type" (e.g. ``TextType``);
* a group of several HTML fields used to input a postal address is a "form type"
(e.g. ``PostalAddressType``);
* an entire ``<form>`` with multiple fields to edit a user profile is a
"form type" (e.g. ``UserProfileType``).
This may be confusing at first, but it will feel natural to you soon enough.
Besides, it simplifies code and makes "composing" and "embedding" form fields
much easier to implement.
There are tens of :doc:`form types provided by Symfony </reference/forms/types>`
and you can also :doc:`create your own form types </form/create_custom_field_type>`.
Building Forms
--------------
Symfony provides a "form builder" object which allows you to describe the form
fields using a fluent interface. Later, this builder creates the actual form
object used to render and process contents.
.. _creating-forms-in-controllers:
Creating Forms in Controllers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If your controller extends from the :ref:`AbstractController <the-base-controller-class-services>`,
use the ``createFormBuilder()`` helper::
// src/Controller/TaskController.php
namespace App\Controller;
use App\Entity\Task;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Form\Extension\Core\Type\DateType;
use Symfony\Component\Form\Extension\Core\Type\SubmitType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\HttpFoundation\Request;
class TaskController extends AbstractController
{
public function new(Request $request)
{
// creates a task object and initializes some data for this example
$task = new Task();
$task->setTask('Write a blog post');
$task->setDueDate(new \DateTime('tomorrow'));
$form = $this->createFormBuilder($task)
->add('task', TextType::class)
->add('dueDate', DateType::class)
->add('save', SubmitType::class, ['label' => 'Create Task'])
->getForm();
// ...
}
}
If your controller does not extend from ``AbstractController``, you'll need to
:ref:`fetch services in your controller <controller-accessing-services>` and
use the ``createBuilder()`` method of the ``form.factory`` service.
In this example, you've added two fields to your form - ``task`` and ``dueDate``
- corresponding to the ``task`` and ``dueDate`` properties of the ``Task``
class. You've also assigned each a :ref:`form type <form-types>` (e.g. ``TextType``
and ``DateType``), represented by its fully qualified class name. Finally, you
added a submit button with a custom label for submitting the form to the server.
.. _creating-forms-in-classes:
Creating Form Classes
~~~~~~~~~~~~~~~~~~~~~
Symfony recommends to put as little logic as possible in controllers. That's why
it's better to move complex forms to dedicated classes instead of defining them
in controller actions. Besides, forms defined in classes can be reused in
multiple actions and services.
Form classes are :ref:`form types <form-types>` that implement
:class:`Symfony\\Component\\Form\\FormTypeInterface`. However, it's better to
extend from :class:`Symfony\\Component\\Form\\AbstractType`, which already
implements the interface and provides some utilities::
// src/Form/Type/TaskType.php
namespace App\Form\Type;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\DateType;
use Symfony\Component\Form\Extension\Core\Type\SubmitType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\FormBuilderInterface;
class TaskType extends AbstractType
{
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
->add('task', TextType::class)
->add('dueDate', DateType::class)
->add('save', SubmitType::class)
;
}
}
.. tip::
Install the `MakerBundle`_ in your project to generate form classes using
the ``make:form`` and ``make:registration-form`` commands.
The form class contains all the directions needed to create the task form. In
controllers extending from the :ref:`AbstractController <the-base-controller-class-services>`,
use the ``createForm()`` helper (otherwise, use the ``create()`` method of the
``form.factory`` service)::
// src/Controller/TaskController.php
namespace App\Controller;
use App\Form\Type\TaskType;
// ...
class TaskController extends AbstractController
{
public function new()
{
// creates a task object and initializes some data for this example
$task = new Task();
$task->setTask('Write a blog post');
$task->setDueDate(new \DateTime('tomorrow'));
$form = $this->createForm(TaskType::class, $task);
// ...
}
}
.. _form-data-class:
Every form needs to know the name of the class that holds the underlying data
(e.g. ``App\Entity\Task``). Usually, this is just guessed based off of the
object passed to the second argument to ``createForm()`` (i.e. ``$task``).
Later, when you begin :doc:`embedding forms </form/embedded>`, this will no
longer be sufficient.
So, while not always necessary, it's generally a good idea to explicitly specify
the ``data_class`` option by adding the following to your form type class::
// src/Form/Type/TaskType.php
namespace App\Form\Type;
use App\Entity\Task;
use Symfony\Component\OptionsResolver\OptionsResolver;
// ...
class TaskType extends AbstractType
{
// ...
public function configureOptions(OptionsResolver $resolver)
{
$resolver->setDefaults([
'data_class' => Task::class,
]);
}
}
.. _rendering-forms:
Rendering Forms
---------------
Now that the form has been created, the next step is to render it. Instead of
passing the entire form object to the template, use the ``createView()`` method
to build another object with the visual representation of the form::
// src/Controller/TaskController.php
namespace App\Controller;
use App\Entity\Task;
use App\Form\Type\TaskType;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Request;
class TaskController extends AbstractController
{
public function new(Request $request)
{
$task = new Task();
// ...
$form = $this->createForm(TaskType::class, $task);
return $this->render('task/new.html.twig', [
'form' => $form->createView(),
]);
}
}
Then, use some :ref:`form helper functions <reference-form-twig-functions>` to
render the form contents:
.. code-block:: twig
{# templates/task/new.html.twig #}
{{ form(form) }}
That's it! The :ref:`form() function <reference-forms-twig-form>` renders all
fields *and* the ``<form>`` start and end tags. By default, the form method is
``POST`` and the target URL is the same that displayed the form, but
:ref:`you can change both <forms-change-action-method>`.
Notice how the rendered ``task`` input field has the value of the ``task``
property from the ``$task`` object (i.e. "Write a blog post"). This is the first
job of a form: to take data from an object and translate it into a format that's
suitable for being rendered in an HTML form.
.. tip::
The form system is smart enough to access the value of the protected
``task`` property via the ``getTask()`` and ``setTask()`` methods on the
``Task`` class. Unless a property is public, it *must* have a "getter" and
"setter" method so that Symfony can get and put data onto the property. For
a boolean property, you can use an "isser" or "hasser" method (e.g.
``isPublished()`` or ``hasReminder()``) instead of a getter (e.g.
``getPublished()`` or ``getReminder()``).
As short as this rendering is, it's not very flexible. Usually, you'll need more
control over how the entire form or some of its fields look. For example, thanks
to the :doc:`Bootstrap 4 integration with Symfony forms </form/bootstrap4>` you
can set this option to generate forms compatible with the Bootstrap 4 CSS framework:
.. configuration-block::
.. code-block:: yaml
# config/packages/twig.yaml
twig:
form_themes: ['bootstrap_4_layout.html.twig']
.. code-block:: xml
<!-- config/packages/twig.xml -->
<?xml version="1.0" encoding="UTF-8" ?>
<container xmlns="http://symfony.com/schema/dic/services"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:twig="http://symfony.com/schema/dic/twig"
xsi:schemaLocation="http://symfony.com/schema/dic/services
https://symfony.com/schema/dic/services/services-1.0.xsd
http://symfony.com/schema/dic/twig
https://symfony.com/schema/dic/twig/twig-1.0.xsd">
<twig:config>
<twig:form-theme>bootstrap_4_layout.html.twig</twig:form-theme>
<!-- ... -->
</twig:config>
</container>
.. code-block:: php
// config/packages/twig.php
$container->loadFromExtension('twig', [
'form_themes' => [
'bootstrap_4_layout.html.twig',
],
// ...
]);
The :ref:`built-in Symfony form themes <symfony-builtin-forms>` include
Bootstrap 3 and 4 and Foundation 5. You can also
:ref:`create your own Symfony form theme <create-your-own-form-theme>`.
In addition to form themes, Symfony allows you to
:doc:`customize the way fields are rendered </form/form_customization>` with
multiple functions to render each field part separately (widgets, labels,
errors, help messages, etc.).
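For instance, here is a sketch that renders the parts of a single field by
hand using the standard rendering helpers (field names match the ``Task``
example above):

.. code-block:: twig

    {# templates/task/new.html.twig #}
    {{ form_start(form) }}
        {{ form_label(form.task) }}
        {{ form_errors(form.task) }}
        {{ form_widget(form.task) }}
        {{ form_help(form.task) }}

        {# or render label, errors, widget and help in one call: #}
        {{ form_row(form.dueDate) }}
    {{ form_end(form) }}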
.. _processing-forms:
Processing Forms
----------------
The :ref:`recommended way of processing forms <best-practice-handle-form>` is to
use a single action for both rendering the form and handling the form submit.
You can use separate actions, but using one action simplifies everything while
keeping the code concise and maintainable.
Processing a form means to translate user-submitted data back to the properties
of an object. To make this happen, the submitted data from the user must be
written into the form object::
// ...
use Symfony\Component\HttpFoundation\Request;
public function new(Request $request)
{
// just setup a fresh $task object (remove the example data)
$task = new Task();
$form = $this->createForm(TaskType::class, $task);
$form->handleRequest($request);
if ($form->isSubmitted() && $form->isValid()) {
// $form->getData() holds the submitted values
// but, the original `$task` variable has also been updated
$task = $form->getData();
// ... perform some action, such as saving the task to the database
// for example, if Task is a Doctrine entity, save it!
// $entityManager = $this->getDoctrine()->getManager();
// $entityManager->persist($task);
// $entityManager->flush();
return $this->redirectToRoute('task_success');
}
return $this->render('task/new.html.twig', [
'form' => $form->createView(),
]);
}
This controller follows a common pattern for handling forms and has three
possible paths:
#. When initially loading the page in a browser, the form hasn't been submitted
yet and ``$form->isSubmitted()`` returns ``false``. So, the form is created
and rendered;
#. When the user submits the form, :method:`Symfony\\Component\\Form\\FormInterface::handleRequest`
recognizes this and immediately writes the submitted data back into the
``task`` and ``dueDate`` properties of the ``$task`` object. Then this object
is validated (validation is explained in the next section). If it is invalid,
:method:`Symfony\\Component\\Form\\FormInterface::isValid` returns
``false`` and the form is rendered again, but now with validation errors;
#. When the user submits the form with valid data, the submitted data is again
written into the form, but this time :method:`Symfony\\Component\\Form\\FormInterface::isValid`
returns ``true``. Now you have the opportunity to perform some actions using
the ``$task`` object (e.g. persisting it to the database) before redirecting
the user to some other page (e.g. a "thank you" or "success" page);
.. note::
Redirecting a user after a successful form submission is a best practice
that prevents the user from being able to hit the "Refresh" button of
their browser and re-post the data.
.. caution::
The ``createView()`` method should be called *after* ``handleRequest()`` is
called. Otherwise, when using :doc:`form events </form/events>`, changes done
in the ``*_SUBMIT`` events won't be applied to the view (like validation errors).
.. seealso::
If you need more control over exactly when your form is submitted or which
data is passed to it, you can
:doc:`use the submit() method to handle form submissions </form/direct_submit>`.
.. _validating-forms:
Validating Forms
----------------
In the previous section, you learned how a form can be submitted with valid
or invalid data. In Symfony, the question isn't whether the "form" is valid, but
whether or not the underlying object (``$task`` in this example) is valid after
the form has applied the submitted data to it. Calling ``$form->isValid()`` is a
shortcut that asks the ``$task`` object whether or not it has valid data.
Before using validation, add support for it in your application:
.. code-block:: terminal
$ composer require symfony/validator
Validation is done by adding a set of rules (called constraints) to a class. To
see this in action, add validation constraints so that the ``task`` field cannot
be empty and the ``dueDate`` field cannot be empty and must be a valid \DateTime
object.
.. configuration-block::
.. code-block:: php-annotations
// src/Entity/Task.php
namespace App\Entity;
use Symfony\Component\Validator\Constraints as Assert;
class Task
{
/**
* @Assert\NotBlank
*/
public $task;
/**
* @Assert\NotBlank
* @Assert\Type("\DateTime")
*/
protected $dueDate;
}
.. code-block:: yaml
# config/validator/validation.yaml
App\Entity\Task:
properties:
task:
- NotBlank: ~
dueDate:
- NotBlank: ~
- Type: \DateTime
.. code-block:: xml
<!-- config/validator/validation.xml -->
<?xml version="1.0" encoding="UTF-8"?>
<constraint-mapping xmlns="http://symfony.com/schema/dic/constraint-mapping"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://symfony.com/schema/dic/constraint-mapping
https://symfony.com/schema/dic/constraint-mapping/constraint-mapping-1.0.xsd">
<class name="App\Entity\Task">
<property name="task">
<constraint name="NotBlank"/>
</property>
<property name="dueDate">
<constraint name="NotBlank"/>
<constraint name="Type">\DateTime</constraint>
</property>
</class>
</constraint-mapping>
.. code-block:: php
// src/Entity/Task.php
namespace App\Entity;
use Symfony\Component\Validator\Constraints\NotBlank;
use Symfony\Component\Validator\Constraints\Type;
use Symfony\Component\Validator\Mapping\ClassMetadata;
class Task
{
// ...
public static function loadValidatorMetadata(ClassMetadata $metadata)
{
$metadata->addPropertyConstraint('task', new NotBlank());
$metadata->addPropertyConstraint('dueDate', new NotBlank());
$metadata->addPropertyConstraint(
'dueDate',
new Type(\DateTime::class)
);
}
}
That's it! If you re-submit the form with invalid data, you'll see the
corresponding errors printed out with the form. Read the
:doc:`Symfony validation documentation </validation>` to learn more about this
powerful feature.
Other Common Form Features
--------------------------
Passing Options to Forms
~~~~~~~~~~~~~~~~~~~~~~~~
If you :ref:`create forms in classes <creating-forms-in-classes>`, when building
the form in the controller you can pass custom options to it as the third optional
argument of ``createForm()``::
// src/Controller/TaskController.php
namespace App\Controller;
use App\Form\Type\TaskType;
// ...
class TaskController extends AbstractController
{
public function new()
{
$task = new Task();
// use some PHP logic to decide if this form field is required or not
$dueDateIsRequired = ...
$form = $this->createForm(TaskType::class, $task, [
'require_due_date' => $dueDateIsRequired,
]);
// ...
}
}
If you try to use the form now, you'll see an error message: *The option
"require_due_date" does not exist.* That's because forms must declare all the
options they accept using the ``configureOptions()`` method::
// src/Form/Type/TaskType.php
namespace App\Form\Type;
use Symfony\Component\OptionsResolver\OptionsResolver;
// ...
class TaskType extends AbstractType
{
// ...
public function configureOptions(OptionsResolver $resolver)
{
$resolver->setDefaults([
// ...,
'require_due_date' => false,
]);
// you can also define the allowed types, allowed values and
// any other feature supported by the OptionsResolver component
$resolver->setAllowedTypes('require_due_date', 'bool');
}
}
Now you can use this new form option inside the ``buildForm()`` method::
// src/Form/Type/TaskType.php
namespace App\Form\Type;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\DateType;
use Symfony\Component\Form\FormBuilderInterface;
class TaskType extends AbstractType
{
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
// ...
->add('dueDate', DateType::class, [
'required' => $options['require_due_date'],
])
;
}
// ...
}
Form Type Options
~~~~~~~~~~~~~~~~~
Each :ref:`form type <form-types>` has a number of options to configure it, as
explained in the :doc:`Symfony form types reference </reference/forms/types>`.
Two commonly used options are ``required`` and ``label``.
The ``required`` Option
.......................
The most common option is the ``required`` option, which can be applied to any
field. By default, this option is set to ``true``, meaning that HTML5-ready
browsers will require all fields to be filled in before submitting the form.
If you don't want this behavior, either
:ref:`disable client-side validation <forms-html5-validation-disable>` for the
entire form or set the ``required`` option to ``false`` on one or more fields::
->add('dueDate', DateType::class, [
'required' => false,
])
The ``required`` option does not perform any server-side validation. If a user
submits a blank value for the field (either with an old browser or a web
service, for example), it will be accepted as a valid value unless you also use
Symfony's ``NotBlank`` or ``NotNull`` validation constraints.
The ``label`` Option
....................
By default, the label of a form field is the *humanized* version of the
property name (``user`` -> ``User``; ``postalAddress`` -> ``Postal Address``).
Set the ``label`` option on fields to define their labels explicitly::
->add('dueDate', DateType::class, [
// set it to FALSE to not display the label for this field
'label' => 'To Be Completed Before',
])
.. tip::
By default, ``<label>`` tags of required fields are rendered with a
``required`` CSS class, so you can display an asterisk for required
fields applying these CSS styles:
.. code-block:: css
label.required:before {
content: "*";
}
.. _forms-change-action-method:
Changing the Action and HTTP Method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, a form will be submitted via an HTTP POST request to the same
URL under which the form was rendered. When building the form in the controller,
use the ``setAction()`` and ``setMethod()`` methods to change this::
// src/Controller/TaskController.php
namespace App\Controller;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Form\Extension\Core\Type\DateType;
use Symfony\Component\Form\Extension\Core\Type\SubmitType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
class TaskController extends AbstractController
{
public function new()
{
// ...
$form = $this->createFormBuilder($task)
->setAction($this->generateUrl('target_route'))
->setMethod('GET')
// ...
->getForm();
// ...
}
}
When building the form in a class, pass the action and method as form options::
// src/Controller/TaskController.php
namespace App\Controller;
use App\Form\TaskType;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
class TaskController extends AbstractController
{
public function new()
{
// ...
$form = $this->createForm(TaskType::class, $task, [
'action' => $this->generateUrl('target_route'),
'method' => 'GET',
]);
// ...
}
}
Finally, you can override the action and method in the template by passing them
to the ``form()`` or the ``form_start()`` helper functions:
.. code-block:: twig
{# templates/task/new.html.twig #}
{{ form_start(form, {'action': path('target_route'), 'method': 'GET'}) }}
.. note::
If the form's method is not ``GET`` or ``POST``, but ``PUT``, ``PATCH`` or
``DELETE``, Symfony will insert a hidden field with the name ``_method``
that stores this method. The form will be submitted in a normal ``POST``
request, but :doc:`Symfony's routing </routing>` is capable of detecting the
``_method`` parameter and will interpret it as a ``PUT``, ``PATCH`` or
``DELETE`` request. See the :ref:`configuration-framework-http_method_override` option.
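For reference, the override is controlled by a single framework setting (YAML
shown here; whether it is enabled by default depends on your Symfony version):

.. code-block:: yaml

    # config/packages/framework.yaml
    framework:
        http_method_override: true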
Changing the Form Name
~~~~~~~~~~~~~~~~~~~~~~
If you inspect the HTML contents of the rendered form, you'll see that the
``<form>`` name and the field names are generated from the type class name
(e.g. ``<form name="task" ...>`` and ``<select name="task[dueDate][date][month]" ...>``).
If you want to modify this, use the :method:`Symfony\\Component\\Form\\FormFactoryInterface::createNamed`
method::
// src/Controller/TaskController.php
namespace App\Controller;
use App\Form\TaskType;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
class TaskController extends AbstractController
{
public function new()
{
$task = ...;
$form = $this->get('form.factory')->createNamed('my_name', TaskType::class, $task);
// ...
}
}
You can even suppress the name completely by setting it to an empty string.
.. _forms-html5-validation-disable:
Client-Side HTML Validation
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Thanks to HTML5, many browsers can natively enforce certain validation
constraints on the client side. The most common validation is activated by
adding a ``required`` attribute on fields that are required. For browsers
that support HTML5, this will result in a native browser message being displayed
if the user tries to submit the form with that field blank.
Generated forms take full advantage of this new feature by adding sensible HTML
attributes that trigger the validation. The client-side validation, however, can
be disabled by adding the ``novalidate`` attribute to the ``<form>`` tag or
``formnovalidate`` to the submit tag. This is especially useful when you want to
test your server-side validation constraints, but are being prevented by your
browser from, for example, submitting blank fields.
.. code-block:: twig
{# templates/task/new.html.twig #}
{{ form_start(form, {'attr': {'novalidate': 'novalidate'}}) }}
{{ form_widget(form) }}
{{ form_end(form) }}
.. _form-type-guessing:
Form Type Guessing
~~~~~~~~~~~~~~~~~~
If the object handled by the form includes validation constraints, Symfony can
introspect that metadata to guess the type of your field and set it up for you.
In the above example, Symfony can guess from the validation rules that both the
``task`` field is a normal ``TextType`` field and the ``dueDate`` field is a
``DateType`` field.
When building the form, omit the second argument to the ``add()`` method, or
pass ``null`` to it, to enable Symfony's "guessing mechanism"::
// src/Form/Type/TaskType.php
namespace App\Form\Type;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\Extension\Core\Type\DateType;
use Symfony\Component\Form\Extension\Core\Type\SubmitType;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\FormBuilderInterface;
class TaskType extends AbstractType
{
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
// if you don't define field options, you can omit the second argument
->add('task')
// if you define field options, pass NULL as second argument
->add('dueDate', null, ['required' => false])
->add('save', SubmitType::class)
;
}
}
.. caution::
When using a specific :doc:`form validation group </form/validation_groups>`,
the field type guesser will still consider *all* validation constraints when
guessing your field types (including constraints that are not part of the
validation group(s) being used).
Form Type Options Guessing
..........................
When the guessing mechanism is enabled for some field (i.e. you omit or pass
``null`` as the second argument to ``add()``), in addition to its form type,
the following options can be guessed too:
``required``
The ``required`` option can be guessed based on the validation rules (i.e. is
the field ``NotBlank`` or ``NotNull``) or the Doctrine metadata (i.e. is the
field ``nullable``). This is very useful, as your client-side validation will
automatically match your validation rules.
``maxlength``
If the field is some sort of text field, then the ``maxlength`` option attribute
can be guessed from the validation constraints (if ``Length`` or ``Range`` is used)
or from the :doc:`Doctrine </doctrine>` metadata (via the field's length).
If you'd like to change one of the guessed values, override it by passing the
option in the options field array::
->add('task', null, ['attr' => ['maxlength' => 4]])
.. seealso::
Besides guessing the form type, Symfony also guesses :ref:`validation constraints <validating-forms>`
if you're using a Doctrine entity. Read :ref:`automatic_object_validation`
guide for more information.
Unmapped Fields
~~~~~~~~~~~~~~~
When editing an object via a form, all form fields are considered properties of
the object. Any fields on the form that do not exist on the object will cause an
exception to be thrown.
If you need extra fields in the form that won't be stored in the object (for
example to add an *"I agree with these terms"* checkbox), set the ``mapped``
option to ``false`` in those fields::
use Symfony\Component\Form\FormBuilderInterface;
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
->add('task')
->add('dueDate')
->add('agreeTerms', CheckboxType::class, ['mapped' => false])
->add('save', SubmitType::class)
;
}
These "unmapped fields" can be set and accessed in a controller with::
$form->get('agreeTerms')->getData();
$form->get('agreeTerms')->setData(true);
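Note that because an unmapped field is not backed by the object, validation
constraints declared on the class do not apply to it. As a sketch, such a
field can be validated directly through its ``constraints`` option (using the
``IsTrue`` constraint from the Validator component)::

    use Symfony\Component\Form\Extension\Core\Type\CheckboxType;
    use Symfony\Component\Validator\Constraints\IsTrue;

    $builder->add('agreeTerms', CheckboxType::class, [
        'mapped' => false,
        'constraints' => [
            new IsTrue(['message' => 'You must agree to the terms.']),
        ],
    ]);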
Additionally, if there are any fields on the form that aren't included in
the submitted data, those fields will be explicitly set to ``null``.
Learn more
----------
When building forms, keep in mind that the first goal of a form is to translate
data from an object (``Task``) to an HTML form so that the user can modify that
data. The second goal of a form is to take the data submitted by the user and to
re-apply it to the object.
There's a lot more to learn and a lot of *powerful* tricks in the Symfony forms:
Reference:
.. toctree::
:maxdepth: 1
/reference/forms/types
Advanced Features:
.. toctree::
:maxdepth: 1
/controller/upload_file
/security/csrf
/form/form_dependencies
/form/create_custom_field_type
/form/data_transformers
/form/data_mappers
/form/create_form_type_extension
/form/type_guesser
Form Themes and Customization:
.. toctree::
:maxdepth: 1
/form/bootstrap4
/form/form_customization
/form/form_themes
Events:
.. toctree::
:maxdepth: 1
/form/events
/form/dynamic_form_modification
Validation:
.. toctree::
:maxdepth: 1
/form/validation_groups
/form/validation_group_service_resolver
/form/button_based_validation
/form/disabling_validation
Misc.:
.. toctree::
:maxdepth: 1
/form/direct_submit
/form/embedded
/form/form_collections
/form/inherit_data_option
/form/multiple_buttons
/form/unit_testing
/form/use_empty_data
/form/without_class
.. _`Symfony Forms screencast series`: https://symfonycasts.com/screencast/symfony-forms
.. _`MakerBundle`: https://symfony.com/doc/current/bundles/SymfonyMakerBundle/index.html
| 33.714141 | 105 | 0.65665 |
18c2905c46187ee25c9a8c63aec68213ae71889d | 132 | rst | reStructuredText | docs/source/wrdscloudquery/generated/famafrench.FamaFrench.queryrf1m.rst | christianjauregui/famafrench | 8ae3dc9a29c49e37acdbda6f717f58c7ad5401e3 | [
"MIT"
] | 22 | 2020-01-14T03:06:11.000Z | 2022-03-09T11:22:14.000Z | docs/build/_sources/wrdscloudquery/generated/famafrench.FamaFrench.queryrf1m.rst.txt | aliawaischeema/famafrench | 8ae3dc9a29c49e37acdbda6f717f58c7ad5401e3 | [
"MIT"
] | 1 | 2020-04-25T06:41:23.000Z | 2020-04-25T07:54:13.000Z | docs/build/_sources/wrdscloudquery/generated/famafrench.FamaFrench.queryrf1m.rst.txt | aliawaischeema/famafrench | 8ae3dc9a29c49e37acdbda6f717f58c7ad5401e3 | [
"MIT"
] | 14 | 2020-04-26T00:17:08.000Z | 2021-12-16T13:36:24.000Z | famafrench.FamaFrench.queryrf1m
===============================
.. currentmodule:: famafrench
.. automethod:: FamaFrench.queryrf1m | 22 | 36 | 0.613636 |
528fc6394be4fc8644d197cf23566ad570e2a696 | 1,810 | rst | reStructuredText | docs/source/developer/module_ref.rst | pierodesenzi/noronha | ee7cba8d0d29d0dc5484d2000e1a42c9954c20e4 | [
"Apache-2.0"
] | 43 | 2021-04-14T00:41:15.000Z | 2022-01-02T23:32:58.000Z | docs/source/developer/module_ref.rst | pierodesenzi/noronha | ee7cba8d0d29d0dc5484d2000e1a42c9954c20e4 | [
"Apache-2.0"
] | 19 | 2021-04-14T00:35:21.000Z | 2022-01-12T14:24:48.000Z | docs/source/developer/module_ref.rst | pierodesenzi/noronha | ee7cba8d0d29d0dc5484d2000e1a42c9954c20e4 | [
"Apache-2.0"
] | 8 | 2021-04-14T00:31:02.000Z | 2022-01-02T23:33:08.000Z | *********************
Modules Reference
*********************
.. highlight:: none
This section summarizes the roles and responsibilities of the most important modules inside Noronha's software architecture.
db
==
The following topics describe the modules inside the package `noronha.db <https://github.com/noronha-dataops/noronha/tree/master/noronha/db>`_,
which is responsible for defining the ORM's for all metadata objects managed by Noronha,
as well as utilities for handling those objects.
:main.py:
.. automodule:: noronha.db.main
:utils.py:
.. automodule:: noronha.db.utils
:proj.py:
.. automodule:: noronha.db.proj
:bvers.py:
.. automodule:: noronha.db.bvers
:model.py:
.. automodule:: noronha.db.model
:ds.py:
.. automodule:: noronha.db.ds
:train.py:
.. automodule:: noronha.db.train
:movers.py:
.. automodule:: noronha.db.movers
:depl.py:
.. automodule:: noronha.db.depl
:tchest.py:
.. automodule:: noronha.db.tchest
bay
===
The following topics describe the modules inside the package `noronha.bay <https://github.com/noronha-dataops/noronha/tree/master/noronha/bay>`_,
which provides interfaces that help Noronha interact with other systems such as container managers and file managers.
Note that every module inside this package follows a nautical, pirate-like naming theme.
:warehouse.py:
.. automodule:: noronha.bay.warehouse
:barrel.py:
.. automodule:: noronha.bay.barrel
:cargo.py:
.. automodule:: noronha.bay.cargo
:captain.py:
.. automodule:: noronha.bay.captain
:expedition.py:
.. automodule:: noronha.bay.expedition
:island.py:
.. automodule:: noronha.bay.island
:compass.py:
.. automodule:: noronha.bay.compass
:tchest.py:
.. automodule:: noronha.bay.tchest
:anchor.py:
.. automodule:: noronha.bay.anchor
:shipyard.py:
.. automodule:: noronha.bay.shipyard
| 18.282828 | 145 | 0.725967 |
e1221539342ee48ac84868bd38a277b1ff763e8c | 4,561 | rst | reStructuredText | README_PYPI.rst | 3v1lW1th1n/pywbemtools | ed4caea84dff5daa7c9a2c10dc493857c7118a29 | [
"Apache-2.0"
] | null | null | null | README_PYPI.rst | 3v1lW1th1n/pywbemtools | ed4caea84dff5daa7c9a2c10dc493857c7118a29 | [
"Apache-2.0"
] | null | null | null | README_PYPI.rst | 3v1lW1th1n/pywbemtools | ed4caea84dff5daa7c9a2c10dc493857c7118a29 | [
"Apache-2.0"
] | null | null | null | .. # README file for Pypi
Pywbemtools is a collection of command line tools that communicate with WBEM
servers. The tools are written in pure Python and support Python 2 and Python
3.
At this point, pywbemtools includes a single command line tool named
``pywbemcli`` that uses the `pywbem package on Pypi`_ to issue operations to a
WBEM server using the `CIM/WBEM standards`_ defined by the `DMTF`_ to perform
system management tasks.
CIM/WBEM standards are used for a wide variety of systems management tasks
in the industry, including those defined by DMTF management standards and the `SNIA`_
Storage Management Initiative Specification (`SMI-S`_).
Pywbemcli provides access to WBEM servers from the command line.
It provides functionality to:
* Explore the CIM data of WBEM servers. It can manage/inspect the CIM model
components including CIM classes, CIM instances, and CIM qualifiers and
execute CIM methods and queries on the WBEM server.
* Execute specific CIM-XML operations on the WBEM server as defined in `DMTF`_
standard `DSP0200 (CIM Operations over HTTP)`_.
* Inspect and manage WBEM server functionality including:
* CIM namespaces
* Advertised WBEM management profiles
* WBEM server brand and version information
* Capture detailed information on CIM-XML interactions with the WBEM server
including time statistics and details of data flow.
* Maintain a file with persisted WBEM connection definitions so that pywbemcli
can access multiple WBEM servers by name.
* Provide both a command line mode and an interactive mode where multiple
pywbemcli commands can be executed within the context of a WBEM server.
* Use an integrated mock WBEM server to try out commands. The mock server
can be loaded with CIM objects defined in MOF files or via Python scripts.
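For example, a single command-line invocation that enumerates the class names
on a WBEM server might look like the following sketch (the server URL and
credentials are placeholders; run ``pywbemcli --help`` for the authoritative
option list):

.. code-block:: bash

    $ pywbemcli --server https://myserver --user smith --password secret class enumerate --names-only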
Installation
------------
Requirements:
1. Python 2.7, 3.4 and higher
2. Operating Systems: Linux, OS-X, native Windows, UNIX-like environments on
Windows (e.g. Cygwin)
3. When using a pywbem version before 1.0.0 on Python 2, the following
OS-level packages are needed:
* On native Windows:
- ``choco`` - Chocolatey package manager. The pywbemtools package installation
uses Chocolatey to install OS-level software. See https://chocolatey.org/
for the installation instructions for Chocolatey.
- ``wget`` - Download tool. Can be installed with: ``choco install wget``.
* On Linux, OS-X, UNIX-like environments on Windows (e.g. Cygwin):
- ``wget`` - Download tool. Can be installed using the OS-level package
manager for the platform.
Installation:
* When using a pywbem version before 1.0.0 on Python 2, install OS-level
packages needed by the pywbem package:
- On native Windows:
.. code-block:: bash
> wget -q https://raw.githubusercontent.com/pywbem/pywbem/master/pywbem_os_setup.bat
> pywbem_os_setup.bat
- On Linux, OS-X, UNIX-like environments on Windows (e.g. Cygwin):
.. code-block:: bash
$ wget -q https://raw.githubusercontent.com/pywbem/pywbem/master/pywbem_os_setup.sh
$ chmod 755 pywbem_os_setup.sh
$ ./pywbem_os_setup.sh
The ``pywbem_os_setup.sh`` script uses sudo internally, so your userid
needs to have sudo permission.
* Install the pywbemtools Python package:
.. code-block:: bash
> pip install pywbemtools
For more details, including how to install the needed OS-level packages
manually, see `pywbemtools installation`_.
Documentation and change history
--------------------------------
For the latest version released on Pypi:
* `Pywbemtools documentation`_
* `Pywbemtools change history`_
.. _pywbemtools documentation: https://pywbemtools.readthedocs.io/en/stable/
.. _pywbemtools installation: https://pywbemtools.readthedocs.io/en/stable/introduction.html#installation
.. _pywbemtools contributions: https://pywbemtools.readthedocs.io/en/stable/development.html#contributing
.. _pywbemtools change history: https://pywbemtools.readthedocs.io/en/stable/changes.html
.. _pywbemtools issue tracker: https://github.com/pywbem/pywbemtools/issues
.. _pywbem package on Pypi: https://pypi.org/project/pywbem/
.. _DMTF: https://www.dmtf.org/
.. _CIM/WBEM standards: https://www.dmtf.org/standards/wbem/
.. _DSP0200 (CIM Operations over HTTP): https://www.dmtf.org/sites/default/files/standards/documents/DSP0200_1.4.0.pdf
.. _SNIA: https://www.snia.org/
.. _SMI-S: https://www.snia.org/forums/smi/tech_programs/smis_home
.. _Apache 2.0 License: https://github.com/pywbem/pywbemtools/tree/master/LICENSE.txt
| 36.488 | 118 | 0.749616 |
14537ba881e389c697caf90715620a6271672d59 | 776 | rst | reStructuredText | docs/podstawy/przyklady/index.rst | sokol02/python101 | e06ad83f9eee285023b9c8e79ab1c00ad8bd67e5 | [
"MIT"
] | null | null | null | docs/podstawy/przyklady/index.rst | sokol02/python101 | e06ad83f9eee285023b9c8e79ab1c00ad8bd67e5 | [
"MIT"
] | null | null | null | docs/podstawy/przyklady/index.rst | sokol02/python101 | e06ad83f9eee285023b9c8e79ab1c00ad8bd67e5 | [
"MIT"
] | null | null | null | .. _przyklady:
Python by Example
#####################
We will get to know Python by solving simple tasks
that showcase the flexibility and ease of this language.
The name of each consecutive script is always placed as a comment in the fourth line of its code.
A very useful tool when coding in Python, as mentioned in the introduction,
is the interpreter console, which we start by issuing the ``python`` or ``ipython`` command in a terminal.
In it you can test and debug all of the expressions, conditions, statements, etc.
that we use in the scripts.
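For example, a short interactive session in the console might look like this:

.. code-block:: python

    >>> 2 + 2
    4
    >>> name = "Python"
    >>> len(name)
    6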
.. toctree::
:titlesonly:
przyklad00
przyklad01
przyklad02
przyklad03
przyklad04
przyklad05
przyklad06
przyklad07
przyklad08 | 29.846154 | 102 | 0.737113 |
bc250d14c7ddadb0ed1e87ad1af6159db8c01e02 | 6,237 | rst | reStructuredText | riuso-software/processo-di-messa-a-riuso-del-software-sotto-licenza-aperta.rst | giupal/lg-acquisizione-e-riuso-software-per-pa-docs | 64811bbc940159797bb8cf0ddb853ee9d1e1aaab | [
"CC0-1.0"
] | null | null | null | riuso-software/processo-di-messa-a-riuso-del-software-sotto-licenza-aperta.rst | giupal/lg-acquisizione-e-riuso-software-per-pa-docs | 64811bbc940159797bb8cf0ddb853ee9d1e1aaab | [
"CC0-1.0"
] | null | null | null | riuso-software/processo-di-messa-a-riuso-del-software-sotto-licenza-aperta.rst | giupal/lg-acquisizione-e-riuso-software-per-pa-docs | 64811bbc940159797bb8cf0ddb853ee9d1e1aaab | [
"CC0-1.0"
] | null | null | null | Process for releasing software for reuse under an open license
---------------------------------------------------------------------------------------
The release-for-reuse process is as follows:
1. The administration identifies an **open code hosting** tool. Once the
   tool has been identified, it can be used for all of the software that
   must be released for reuse (`Choosing a code hosting
   tool <#scelta-di-uno-strumento-di-code-hosting>`__)
2. The administration chooses an open license to use (`Open licenses and
   choosing a license <licenze-aperte-e-scelta-di-una-licenza.html>`__)
3. Using its own resources or through a procurement contract, the
   administration publishes the complete source code of the software and
   the related technical documentation on the code hosting tool.
   This technological process is described in `Annex B: Guide to the Open
   Source publication of
   software <../attachments/allegato-b-guida-alla-pubblicazione-open-source-di-software-realizzato-per-la-pa.html>`__,
   attached to these guidelines. The guide is written so that it can be
   attached to the technical specifications of a call for tenders, in order
   to facilitate the procurement of a service while delegating to the
   supplier the obligations required by the present guidelines.
4. The administration "registers" the software on the Developers
   Italia platform, so that it is indexed by the search engine and made
   visible to the other administrations looking for software to reuse.
The process outlined here is valid both for existing software owned by
the administrations (`Releasing existing software under an open
license <#rilascio-di-software-esistente-sotto-licenza-aperta>`__) and
for software that will be developed in the future (`Developing software
from scratch <#sviluppo-di-software-ex-novo>`__).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Il rilascio di un software deve avvenire mediante uno strumento di code
hosting, specializzato nell’ospitare e mettere a disposizione il
software distribuito sotto licenza aperta. Esistono numerose soluzioni
sul mercato, sia gratuite sia commerciali.
Poiché il fine del comma 1 dell’articolo 69 è quello di favorire il
riuso tra amministrazioni, è necessario che lo strumento segua le
best-practice in termini di funzionalità per la pubblicazione del codice
sorgente, onde non causare costi aggiuntivi alle amministrazioni che
vogliano trovare ed utilizzare il software.
In particolare, lo strumento dovrà necessariamente avere almeno le
seguenti funzionalità:
- Accesso libero in lettura al codice sorgente, senza autenticazione;
- Registrazione gratuita e libera, aperta al pubblico;
- Interfaccia web per la lettura e navigazione del codice e della
relativa documentazione;
- Utilizzo di un sistema di controllo di versione con la funzionalità
di gestione di rami paralleli di sviluppo (*branch)*;
- Sistema di segnalazioni (*issue tracker*) aperto al pubblico in
lettura senza autenticazione e in scrittura dietro autenticazione;
- Implementazione di almeno un flusso di invio modifiche, revisione del
codice (*code review*), e integrazione della modifica, completamente
gestito dallo strumento, aperto al pubblico;
- Sistema di gestione dei rilasci;
- Disponibilità di API per interfacciarsi con lo strumento ed estrarre
dati e metadati relativi ai repository.
Per semplificare la scelta, l’Allegato B (`Guida alla pubblicazione di
software Open Source / Individuazione della piattaforma di code
hosting <../attachments/allegato-b-guida-alla-pubblicazione-open-source-di-software-realizzato-per-la-pa.html#individuazione-dello-strumento-di-code-hosting>`__)
contiene un elenco non esaustivo delle principali piattaforme sul
mercato che corrispondono ai requisiti richiesti.
Alcune piattaforme completamente aderenti ai parametri minimi sono
disponibili in modalità SaaS (cioè possono essere usate direttamente via
Internet senza doverne installare una copia su un server), senza alcun
costo di licenza, e senza la necessità di sottoscrivere contratti o
convenzioni; la scelta di una di queste piattaforme SaaS è quindi da
considerarsi preferenziale, nel caso non ci siano altri vincoli tecnici
(es: requisiti di integrazione), in modo da non causare costi diretti o
indiretti all’amministrazione.
L’amministrazione dovrebbe scegliere una piattaforma sulla quale
effettuare i rilasci di tutto il software di cui è titolare. In
alternativa, la `Guida alla pubblicazione di software Open
Source <../attachments/allegato-b-guida-alla-pubblicazione-open-source-di-software-realizzato-per-la-pa.html>`__
delinea un processo alternativo per demandare la scelta a ciascun
fornitore che, di volta in volta, sarà incaricato di effettuare lo
sviluppo del software e/o il rilascio dello stesso, per conto
dell’amministrazione.
Una volta eletto uno strumento per il code hosting, l’amministrazione
deve dare adeguata visibilità a questa nella propria pagina
istituzionale, come dettagliato nelle Linee Guida di design per i
servizi web della Pubblica Amministrazione.
Registrazione del software aperto su Developers Italia
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Il software rilasciato dalla amministrazione deve essere “registrato”
all’interno del motore di ricerca di Developers Italia, per agevolare la
consultazione alle altre amministrazioni che cercano un software in
riuso.
Il processo tecnico preciso per effettuare la registrazione è indicato
anch’esso nella sezione della `Guida alla pubblicazione di software Open
Source: Registrazione del repository su Developers
Italia <../attachments/allegato-b-guida-alla-pubblicazione-open-source-di-software-realizzato-per-la-pa.html#registrazione-del-repository-su-developers-italia>`__.
Responsabilità connesse al rilascio
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
L’amministrazione titolare del software non contrae alcun obbligo
specifico legato al rilascio: non è infatti necessario fornire alcuna
garanzia sul software, supporto tecnico o a livello utente, né tantomeno
supportare economicamente le amministrazioni che riusano il software nei
costi o nelle procedure di adozione.
.. discourse::
:topic_identifier: 2860
| 51.975 | 163 | 0.78852 |
5fb3ac15a4319f42ac4b6d4e988c10a41961c96f | 2,653 | rst | reStructuredText | doc/source/performance/query_cache_enhance.rst | hervewenjie/mysql | 49a37eda4e2cc87e20ba99e2c29ffac2fc322359 | [
"BSD-3-Clause"
] | null | null | null | doc/source/performance/query_cache_enhance.rst | hervewenjie/mysql | 49a37eda4e2cc87e20ba99e2c29ffac2fc322359 | [
"BSD-3-Clause"
] | null | null | null | doc/source/performance/query_cache_enhance.rst | hervewenjie/mysql | 49a37eda4e2cc87e20ba99e2c29ffac2fc322359 | [
"BSD-3-Clause"
] | null | null | null | .. _query_cache_enhance:
==========================
Query Cache Enhancements
==========================
This page describes the enhancements for the query cache. At the moment three features are available:
* Disabling the cache completely
* Diagnosing contention more easily
* Ignoring comments
Diagnosing contention more easily
=================================
This feature provides a new thread state - ``Waiting on query cache mutex``. It has always been difficult to spot query cache bottlenecks because they usually happen intermittently and are not directly reported by the server. This new thread state appears in the output of ``SHOW PROCESSLIST``, easing diagnostics.
Imagine that we run three queries simultaneously (each one in a separate thread): ::
> SELECT number from t where id > 0;
> SELECT number from t where id > 0;
> SELECT number from t where id > 0;
If we experience query cache contention, the output of ``SHOW PROCESSLIST`` will look like this: ::
> SHOW PROCESSLIST;
Id User Host db Command Time State Info
2 root localhost test Sleep 2 NULL
3 root localhost test Query 2 Waiting on query cache mutex SELECT number from t where id > 0;
4 root localhost test Query 1 Waiting on query cache mutex SELECT number from t where id > 0;
5 root localhost test Query 0 NULL
.. _ignoring_comments:
Ignoring comments
=================
This feature adds an option to make the server ignore comments when checking for a query cache hit. For example, consider these two queries: ::
/* first query */ select name from users where users.name like 'Bob%';
/* retry search */ select name from users where users.name like 'Bob%';
By default (option off), the queries are considered different, so the server will execute them both and cache them both.
If the option is enabled, the queries are considered identical, so the server will execute and cache the first one and will serve the second one directly from the query cache.
System Variables
================
.. variable:: query_cache_strip_comments
:cli: Yes
:conf: Yes
:scope: Global
:dyn: Yes
:vartype: Boolean
:default: Off
Makes the server ignore comments when checking for a query cache hit.
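Since the variable is global and dynamic, it can be enabled at runtime: ::

    > SET GLOBAL query_cache_strip_comments = ON;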
Other Reading
-------------
* `MySQL general thread states <http://dev.mysql.com/doc/refman/5.6/en/general-thread-states.html>`_
* `Query cache freezes <http://www.mysqlperformanceblog.com/2009/03/19/mysql-random-freezes-could-be-the-query-cache/>`_
| 37.366197 | 323 | 0.674331 |
1f583c3b70b3977476c841080a3f77a6e669a791 | 666 | rst | reStructuredText | not_for_deploy/docs/the_ievv_command.rst | appressoas/ievv_opensource | 63e87827952ddc8f6f86145b79478ef21d6a0990 | [
"BSD-3-Clause"
] | null | null | null | not_for_deploy/docs/the_ievv_command.rst | appressoas/ievv_opensource | 63e87827952ddc8f6f86145b79478ef21d6a0990 | [
"BSD-3-Clause"
] | 37 | 2015-10-26T09:14:12.000Z | 2022-02-10T10:35:33.000Z | not_for_deploy/docs/the_ievv_command.rst | appressoas/ievv_opensource | 63e87827952ddc8f6f86145b79478ef21d6a0990 | [
"BSD-3-Clause"
] | 1 | 2015-11-06T07:56:34.000Z | 2015-11-06T07:56:34.000Z | ####################
The ``ievv`` command
####################
The ``ievv`` command does two things:
1. It avoids having to write ``python manage.py ievvtasks_something`` and
   lets you write ``ievv something`` instead.
2. It provides commands that are not management commands, such as the commands
for building docs and creating new projects.
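For example, both of the following invocations would run the same task
(``something`` here is a placeholder task name, not an actual command):

.. code-block:: bash

    $ python manage.py ievvtasks_something
    $ ievv something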
When we add the command for initializing a new project, the ievv command will
typically be installed globally instead of as a requirement of each project.
You find the source code for the command in
``ievv_opensource/ievvtasks_common/cli.py``.
Some of the commands have required settings. See :doc:`settings`.
| 33.3 | 78 | 0.725225 |
eb96dc6629a89ea5d8560e8a2807e5cdce67c6dc | 165 | rst | reStructuredText | apetools/devices/api/apetools.devices.adbdevice.AdbDevice.rssi.rst | rsnakamura/oldape | b4d1c77e1d611fe2b30768b42bdc7493afb0ea95 | [
"Apache-2.0"
] | null | null | null | apetools/devices/api/apetools.devices.adbdevice.AdbDevice.rssi.rst | rsnakamura/oldape | b4d1c77e1d611fe2b30768b42bdc7493afb0ea95 | [
"Apache-2.0"
] | null | null | null | apetools/devices/api/apetools.devices.adbdevice.AdbDevice.rssi.rst | rsnakamura/oldape | b4d1c77e1d611fe2b30768b42bdc7493afb0ea95 | [
"Apache-2.0"
] | null | null | null | apetools.devices.adbdevice.AdbDevice.rssi
=========================================
.. currentmodule:: apetools.devices.adbdevice
.. autoattribute:: AdbDevice.rssi | 27.5 | 45 | 0.606061 |
1a4882cb30e5081dbe65e7ef92ca1907f33c8ca8 | 1,623 | rst | reStructuredText | en/source/pages/applianceimportandexport/applianceImport_upload.rst | segalaj/api-docs | 44fe8a87c875efa67563fe28d36b923eb1ea5a25 | [
"Apache-2.0"
] | null | null | null | en/source/pages/applianceimportandexport/applianceImport_upload.rst | segalaj/api-docs | 44fe8a87c875efa67563fe28d36b923eb1ea5a25 | [
"Apache-2.0"
] | 6 | 2019-03-13T14:04:06.000Z | 2021-09-08T00:57:18.000Z | en/source/pages/applianceimportandexport/applianceImport_upload.rst | segalaj/api-docs | 44fe8a87c875efa67563fe28d36b923eb1ea5a25 | [
"Apache-2.0"
] | 4 | 2018-07-23T15:01:25.000Z | 2019-04-25T12:39:12.000Z | .. Copyright FUJITSU LIMITED 2016-2019
.. _applianceImport-upload:
applianceImport_upload
----------------------
.. function:: POST /users/{uid}/imports/{iid}/uploads
.. sidebar:: Summary
* Method: ``POST``
* Response Code: ``201``
* Response Formats: ``application/xml`` ``application/json``
* Since: ``UForge 3.5``
Upload the appliance archive.
In order to upload an archive, an ``appliance import ticket`` must first be created by using :ref:`appliance-import`.
Once the upload is complete, the platform extracts the archive and creates an appliance from the archive contents. This is an asynchronous job. To get the status of this import, use :ref:`applianceImportStatus-get`.
Security Summary
~~~~~~~~~~~~~~~~
* Requires Authentication: ``true``
* Entitlements Required: ``appliance_create``
URI Parameters
~~~~~~~~~~~~~~
* ``uid`` (required): the user name (login name) of the :ref:`user-object`
* ``iid`` (required): the id of the :ref:`applianceimport-object` ticket
HTTP Request Body Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The file to upload.
Example Request
~~~~~~~~~~~~~~~
.. code-block:: bash
curl "https://uforge.example.com/api/users/{uid}/imports/{iid}/uploads" -X POST \
-u USER_LOGIN:PASSWORD -H "Accept: application/xml" -H "Content-type: application/xml" --data-binary "@binaryFilePath"
.. seealso::
* :ref:`appliance-object`
* :ref:`applianceImportStatus-get`
* :ref:`applianceImport-delete`
* :ref:`applianceImport-get`
* :ref:`applianceImport-getAll`
* :ref:`applianceImport-getAllStatus`
* :ref:`appliance-import`
* :ref:`applianceimport-object`
| 27.982759 | 216 | 0.675909 |
5cf54e167ba55d385d539e699f58f1cf8bd4f994 | 724 | rst | reStructuredText | docs/install-windows-generic.rst | huibinshen/autogluon | 18c182c90df89762a916128327a6792b8887c5c6 | [
"Apache-2.0"
] | null | null | null | docs/install-windows-generic.rst | huibinshen/autogluon | 18c182c90df89762a916128327a6792b8887c5c6 | [
"Apache-2.0"
] | null | null | null | docs/install-windows-generic.rst | huibinshen/autogluon | 18c182c90df89762a916128327a6792b8887c5c6 | [
"Apache-2.0"
] | null | null | null |
If you run into difficulties installing AutoGluon on Windows, please provide details in `this GitHub Issue <https://github.com/awslabs/autogluon/issues/164>`_.
Note: ObjectDetector and any model that uses MXNet are not supported on Windows!
GPU-based MXNet is not supported on Windows, and it is recommended to use Linux instead for these models.
To install AutoGluon on Windows, it is recommended to use Anaconda:
1. `Install Anaconda <https://www.anaconda.com/products/individual>`_
- If Anaconda is already installed but is an old version, follow `this guide <https://docs.anaconda.com/anaconda/install/update-version/>`_ to update
2. Open Anaconda Prompt (anaconda3)
3. Inside Anaconda Prompt, do the following:
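   For instance, a plausible sequence of commands (the environment name
   ``ag`` and the Python version are placeholders, not requirements):

   .. code-block:: bash

      :: create and activate a fresh conda environment
      conda create -n ag python=3.9 -y
      conda activate ag
      :: install AutoGluon into the new environment
      pip install autogluon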
| 55.692308 | 160 | 0.78453 |
adcd65e84a5d5b2a540c53b7e5fe22d460d23f1a | 481 | rst | reStructuredText | doc/source/index.rst | disktnk/fhacking | bad09585e3037a3719a1a624c67eb81670569adc | [
"Apache-2.0"
] | null | null | null | doc/source/index.rst | disktnk/fhacking | bad09585e3037a3719a1a624c67eb81670569adc | [
"Apache-2.0"
] | 3 | 2019-01-22T03:17:28.000Z | 2019-01-22T03:18:19.000Z | doc/source/index.rst | disktnk/fhacking | bad09585e3037a3719a1a624c67eb81670569adc | [
"Apache-2.0"
] | null | null | null | ================================================
hacking: OpenStack Hacking Guideline Enforcement
================================================
hacking is a set of flake8 plugins that test and enforce the :ref:`StyleGuide`.
Hacking pins its dependencies, as a new release of some dependency can break
hacking-based gating jobs. This is because new versions of dependencies can
introduce new rules, or make existing rules stricter.
.. toctree::
:maxdepth: 3
user/index
| 32.066667 | 79 | 0.623701 |
c8feb65c2b2c6c89c38cec7f80040c5f0f4b81d2 | 23,782 | rst | reStructuredText | docs/grouped_prophet.rst | databricks/diviner | 40d54e90c7f85a4a20158f27e97faf8cad3e1326 | [
"Apache-2.0"
] | 4 | 2022-03-31T00:07:45.000Z | 2022-03-31T07:10:43.000Z | docs/grouped_prophet.rst | databricks/diviner | 40d54e90c7f85a4a20158f27e97faf8cad3e1326 | [
"Apache-2.0"
] | 1 | 2022-03-31T02:46:19.000Z | 2022-03-31T02:46:19.000Z | docs/grouped_prophet.rst | databricks/diviner | 40d54e90c7f85a4a20158f27e97faf8cad3e1326 | [
"Apache-2.0"
] | null | null | null | .. _grouped_prophet:
Grouped Prophet
===============
The Grouped Prophet model is a multi-series orchestration framework for building multiple individual models
of related, but isolated series data. For example, a project that required the forecasting of airline passengers at
major airports around the world would historically require individual orchestration of data acquisition, hyperparameter
definitions, model training, metric validation, serialization, and registration of thousands of individual models.
This API consolidates the many thousands of models that would otherwise need to be implemented, trained individually,
and managed throughout their frequent retraining and forecasting lifecycles into a single high-level API that simplifies
these common use cases that rely on the `Prophet <https://facebook.github.io/prophet/>`_ forecasting library.
.. contents:: Table of Contents
:local:
:depth: 2
.. _api:
Grouped Prophet API
-------------------
The following sections provide a basic overview of using the :py:class:`GroupedProphet <diviner.GroupedProphet>` API:
fitting the grouped models, generating predictions and forecasts, saving, loading, and customizing the underlying
``Prophet`` instances.
To see working end-to-end examples, you can go to :ref:`tutorials-and-examples`. The examples will allow you
to explore the data structures required for training, learn how to extract forecasts for each group, and see
demonstrations of saving and loading trained models.
.. _fitting:
Model fitting
^^^^^^^^^^^^^
In order to fit a :py:class:`GroupedProphet <diviner.GroupedProphet>` model instance, the :py:meth:`fit <diviner.GroupedProphet.fit>`
method is used. Calling this method will process the input ``DataFrame`` to create a grouped execution collection,
fit a ``Prophet`` model on each individual series, and persist the trained state of each group's model to the
object instance.
The arguments for the :py:meth:`fit <diviner.GroupedProphet.fit>` method are:
df
A 'normalized' DataFrame that contains an endogenous regressor column (the 'y' column), a date (or datetime) column
(that defines the ordering, periodicity, and frequency of each series (if this column is a string, the frequency will
be inferred)), and grouping column(s) that define the discrete series to be modeled. For further information
on the structure of this ``DataFrame``, see the :ref:`quickstart guide <quickstart>`.
group_key_columns
The names of the columns within ``df`` that, when combined (in order supplied) define distinct series. See the
:ref:`quickstart guide <quickstart>` for further information.
kwargs
*[Optional]* Arguments that are used for overrides to the ``Prophet`` pystan optimizer. Details of what parameters are available
and how they might affect the optimization of the model can be found by running
``help(pystan.StanModel.optimizing)`` from a Python REPL.
Example:
.. code-block:: python
grouped_prophet_model = GroupedProphet().fit(df, ["country", "region"])
.. _forecasting:
Forecast
^^^^^^^^
The :py:meth:`forecast <diviner.GroupedProphet.forecast>` method is the 'primary means' of generating future forecast
predictions. For each group that was trained in the :ref:`fitting` of the grouped model, a given number of future
time periods is predicted, starting from the last event date (or datetime) at each series' temporal termination.
Usage of this method requires providing two arguments:
horizon
The number of events to forecast (supplied as a positive integer)
frequency
The periodicity between each forecast event. Note that this value does not have to match the periodicity of the
training data (i.e., training data can be in days and predictions can be in months, minutes, hours, or years).
The frequency abbreviations that are allowed can be found
`here. <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`_
.. note:: The generation of error estimates (`yhat_lower` and `yhat_upper`) in the output of a forecast are controlled
through the use of the ``Prophet`` argument ``uncertainty_samples`` during class instantiation, prior to :ref:`fitting`
being called. Setting this value to `0` will eliminate error estimates and will dramatically increase the speed of
training, prediction, and cross validation.
The return data structure for this method is a 'stacked' ``pandas`` ``DataFrame``, consisting of the
grouping keys defined (in the order in which they were generated), the grouping columns, elements of the prediction
values (deconstructed; e.g. 'weekly', 'yearly', 'daily' seasonality terms and the 'trend'), the date (datetime) values,
and the prediction itself (labeled `yhat`).
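For example, to forecast 30 daily events for every group in the model fitted above:

.. code-block:: python

    # 30 future daily periods per group, starting after each series' last training date
    forecasts = grouped_prophet_model.forecast(horizon=30, frequency="D")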
.. _predicting:
Predict
^^^^^^^
A 'manual' method of generating predictions based on discrete date (or datetime) values for each group specified.
This method accepts a ``DataFrame`` as input having columns that define discrete dates to generate predictions for
and the grouping key columns that match those supplied when the model was fit.
For example, a model trained with the grouping key columns of 'city' and 'country' that included New York City, US
and Toronto, Canada as series would generate predictions for both of these cities when supplied with the following
``df`` argument:
.. code-block:: python
predict_config = pd.DataFrame.from_records(
{
"country": ["us", "us", "ca", "ca"],
"city": ["nyc", "nyc", "toronto", "toronto"],
"ds": ["2022-01-01", "2022-01-08", "2022-01-01", "2022-01-08"],
}
)
grouped_prophet_model.predict(predict_config)
The structure of this submitted ``DataFrame`` for the above use case is:
.. list-table:: Predict `df` Structure
:widths: 25 25 40
:header-rows: 1
* - country
- city
- ds
* - us
- nyc
- 2022-01-01
* - us
- nyc
- 2022-01-08
* - ca
- toronto
- 2022-01-01
* - ca
- toronto
- 2022-01-08
Usage of this method with the above specified df would generate 4 individual predictions; one for each row.
.. note:: The :ref:`forecasting` method is more appropriate for most use cases, as it continues forecasting
   immediately from the point where each series' training data terminates.
Predict Groups
^^^^^^^^^^^^^^
The :py:meth:`predict_groups <diviner.GroupedProphet.predict_groups>` method generates forecast data for a subset of
groups that a :py:class:`diviner.GroupedProphet` model was trained upon.
Example:
.. code-block:: python
from diviner import GroupedProphet
model = GroupedProphet().fit(df, ["country", "region"])
subset_forecasts = model.predict_groups(
    groups=[("US", "NY"), ("FR", "Paris"), ("UA", "Kyiv")],
    horizon=90,
    frequency="D",
    on_error="warn",
)
The arguments for the :py:meth:`predict_groups <diviner.GroupedProphet.predict_groups>` method are:
groups
A collection of one or more groups for which to generate a forecast. The collection of groups must be submitted as a
``List[Tuple[str]]`` to identify the order-specific group values to retrieve the correct model. For instance, if the
model was trained with the specified ``group_key_columns`` of ``["country", "city"]``, a valid ``groups`` entry
would be: ``[("US", "LosAngeles"), ("CA", "Toronto")]``. Changing the order within the tuples will not resolve
(e.g. ``[("NewYork", "US")]`` would not find the appropriate model).
.. note::
Groups that are submitted for prediction that are not present in the trained model will, by default, cause an
Exception to be raised. This behavior can be changed to a warning or ignore status with the argument ``on_error``.
horizon
The number of events to forecast (supplied as a positive integer)
frequency
The periodicity between each forecast event. Note that this value does not have to match the periodicity of the
training data (i.e., training data can be in days and predictions can be in months, minutes, hours, or years).
The frequency abbreviations that are allowed can be found
`here. <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases>`_
predict_col
*[Optional]* The name to use for the generated column containing forecasted data. Default: ``"yhat"``
on_error
*[Optional]* [Default -> ``"raise"``] Dictates the behavior for handling group keys that have been submitted in the
``groups`` argument that do not match with a group identified and registered during training (``fit``). The modes
are:
- ``"raise"``
A ``DivinerException`` is raised if any supplied groups do not match to the fitted groups.
- ``"warn"``
A warning is emitted (printed) and logged for any groups that do not match to those that the model
was fit with.
- ``"ignore"``
Invalid groups will silently fail prediction.
.. note::
A ``DivinerException`` will still be raised even in ``"ignore"`` mode if there are no valid fit groups
to match the provided ``groups`` provided to this method.
Save
^^^^
Supports saving a :py:class:`GroupedProphet <diviner.GroupedProphet>` model that has been :py:meth:`fit <diviner.GroupedProphet.fit>`.
The serialization of the model instance does not rely on pickle or cloudpickle; rather, it uses a straightforward JSON
serialization.
.. code-block:: python
save_location = "/path/to/store/model"
grouped_prophet_model.save(save_location)
Load
^^^^
Loading a saved :py:class:`GroupedProphet <diviner.GroupedProphet>` model is done through the use of a class method. The
:py:meth:`load <diviner.GroupedProphet.load>` method is called as below:
.. code-block:: python
load_location = "/path/to/stored/model"
grouped_prophet_model = GroupedProphet.load(load_location)
.. note:: The ``PyStan`` backend optimizer instance used to fit the model is not saved (this would require compilation of
``PyStan`` on the same machine configuration that was used to fit it in order for it to be valid to reuse) as it is
not useful to store and would require additional dependencies that are not involved in cross validation, parameter
extraction, forecasting, or predicting. If you need access to the ``PyStan`` backend, retrain the model and access
the underlying solver prior to serializing to disk.
Overriding Prophet settings
^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to create a :py:class:`GroupedProphet <diviner.GroupedProphet>` instance, there are no required attributes to
define. Leaving all arguments unset will, as with the underlying ``Prophet`` library, fall back to the default values
when performing model fitting.
However, there are arguments that can be overridden which are pass-through values to the individual ``Prophet``
instances that are created for each group. Since these are ``**kwargs`` entries, the names will be argument names for
the respective arguments in ``Prophet``.
To see a full listing of available arguments for the given version of ``Prophet`` that you are using, the simplest
(as well as the recommended manner in the library documentation) is to run a ``help()`` command in a Python REPL:
.. code-block:: python
from prophet import Prophet
help(Prophet)
An example of overriding many of the arguments within the underlying ``Prophet`` model for the ``GroupedProphet`` API
is shown below.
.. code-block:: python
grouped_prophet_model = GroupedProphet(
growth='linear',
changepoints=None,
n_changepoints=90,
changepoint_range=0.8,
yearly_seasonality='auto',
weekly_seasonality='auto',
daily_seasonality='auto',
holidays=None,
seasonality_mode='additive',
seasonality_prior_scale=10.0,
holidays_prior_scale=10.0,
changepoint_prior_scale=0.05,
mcmc_samples=0,
interval_width=0.8,
uncertainty_samples=1000,
stan_backend=None
)
Utilities
---------
Parameter Extraction
^^^^^^^^^^^^^^^^^^^^
The method :py:meth:`extract_model_params <diviner.GroupedProphet.extract_model_params>` is a utility that extracts the tuning parameters
from each individual model within the :ref:`model's <api>` container and returns them as a single ``DataFrame``.
Columns are the parameters from the models, while each row is an individual group's Prophet model's parameter values.
Having a single consolidated extraction data structure eases the historical registration of model performance and
enables a simpler approach to the design of frequent retraining through passive retraining systems (allowing for
an easier means by which to acquire prior hyperparameter values for frequently retrained forecasting models).
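A short sketch of extracting the parameters from a fitted instance:

.. code-block:: python

    params_df = grouped_prophet_model.extract_model_params()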
An example extract from a 2-group model (cast to a dictionary from the ``Pandas DataFrame`` output) is shown below:
.. code-block:: python
{'changepoint_prior_scale': {0: 0.05, 1: 0.05},
'changepoint_range': {0: 0.8, 1: 0.8},
'component_modes': {0: {'additive': ['yearly',
'weekly',
'additive_terms',
'extra_regressors_additive',
'holidays'],
'multiplicative': ['multiplicative_terms',
'extra_regressors_multiplicative']},
1: {'additive': ['yearly',
'weekly',
'additive_terms',
'extra_regressors_additive',
'holidays'],
'multiplicative': ['multiplicative_terms',
'extra_regressors_multiplicative']}},
'country_holidays': {0: None, 1: None},
'daily_seasonality': {0: 'auto', 1: 'auto'},
'extra_regressors': {0: OrderedDict(), 1: OrderedDict()},
'fit_kwargs': {0: {}, 1: {}},
'grouping_key_columns': {0: ('key2', 'key1', 'key0'),
1: ('key2', 'key1', 'key0')},
'growth': {0: 'linear', 1: 'linear'},
'holidays': {0: None, 1: None},
'holidays_prior_scale': {0: 10.0, 1: 10.0},
'interval_width': {0: 0.8, 1: 0.8},
'key0': {0: 'T', 1: 'M'},
'key1': {0: 'A', 1: 'B'},
'key2': {0: 'C', 1: 'L'},
'logistic_floor': {0: False, 1: False},
'mcmc_samples': {0: 0, 1: 0},
'n_changepoints': {0: 90, 1: 90},
'seasonality_mode': {0: 'additive', 1: 'additive'},
'seasonality_prior_scale': {0: 10.0, 1: 10.0},
'specified_changepoints': {0: False, 1: False},
'stan_backend': {0: <prophet.models.PyStanBackend object at 0x7f900056d2e0>,
1: <prophet.models.PyStanBackend object at 0x7f9000523eb0>},
'start': {0: Timestamp('2018-01-02 00:02:00'),
1: Timestamp('2018-01-02 00:02:00')},
't_scale': {0: Timedelta('1459 days 00:00:00'),
1: Timedelta('1459 days 00:00:00')},
'train_holiday_names': {0: None, 1: None},
'uncertainty_samples': {0: 1000, 1: 1000},
'weekly_seasonality': {0: 'auto', 1: 'auto'},
'y_scale': {0: 1099.9530489951537, 1: 764.727400507604},
'yearly_seasonality': {0: 'auto', 1: 'auto'}}
.. _cv_score:
Cross Validation and Scoring
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The primary means of evaluating model performance across all groups is to use the method
:py:meth:`cross_validate_and_score <diviner.GroupedProphet.cross_validate_and_score>`. Using this method from a ``GroupedProphet`` instance
that has been fit will perform backtesting of each group's model using the training data set supplied when the
:py:meth:`fit <diviner.GroupedProphet.fit>` method was called.
The return type of this method is a single consolidated ``Pandas DataFrame`` that contains metrics as columns with
each row representing a distinct grouping key.
For example, below is a sample of 3 groups' cross validation metrics.
.. code-block:: python
{'coverage': {0: 0.21839080459770113,
1: 0.057471264367816084,
2: 0.5114942528735632},
'grouping_key_columns': {0: ('key2', 'key1', 'key0'),
1: ('key2', 'key1', 'key0'),
2: ('key2', 'key1', 'key0')},
'key0': {0: 'T', 1: 'M', 2: 'K'},
'key1': {0: 'A', 1: 'B', 2: 'S'},
'key2': {0: 'C', 1: 'L', 2: 'Q'},
'mae': {0: 14.230668998203283, 1: 34.62100210053155, 2: 46.17014668092673},
'mape': {0: 0.015166533573997266,
1: 0.05578282899646585,
2: 0.047658812366283436},
'mdape': {0: 0.013636314354422746,
1: 0.05644041426067295,
2: 0.039153745874603914},
'mse': {0: 285.42142900120183, 1: 1459.7746527190932, 2: 3523.9281809854906},
'rmse': {0: 15.197908800171147, 1: 35.520537302480314, 2: 55.06313841955681},
'smape': {0: 0.015327226830099487,
1: 0.05774645767583018,
2: 0.0494437278595581}}
Method arguments:
horizon
A ``pandas.Timedelta`` string consisting of two parts: an integer and a periodicity. For example, if the training
data is daily, consists of 5 years of data, and the end-use for the project is to predict 14 days of future values
every week, a plausible horizon value might be ``"21 days"`` or ``"28 days"``.
See `pandas documentation <https://pandas.pydata.org/docs/reference/api/pandas.Timedelta.html>`_ for information on
the allowable syntax and format for ``pandas.Timedelta`` values.
metrics
A list of metrics that will be calculated following the back-testing cross validation. By default, all of the
following will be tested:
* "mae" (`mean absolute error <https://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error>`_)
* "mape" (`mean absolute percentage error <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_percentage_error.html#sklearn.metrics.mean_absolute_percentage_error>`_)
* "mdape" (median absolute percentage error)
* "mse" (`mean squared error <https://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-error>`_)
* "rmse" (`root mean squared error <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html>`_)
* "smape" (`symmetric mean absolute percentage error <https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error>`_)
To restrict the metrics computed and returned, a subset of these tests can be supplied to the ``metrics`` argument.
period
The frequency at which each windowed collection of back testing cross validation will be conducted. If the argument
``cutoffs`` is left as ``None``, this argument will determine the spacing between training and validation sets
as the cross validation algorithm steps through each series. Smaller values will increase cross validation
execution time.
initial
The size of the initial training period to use for cross validation windows. The default derived value, if not
specified, is ``horizon`` * 3 with cutoff values for each window set at ``horizon`` / 2.
parallel
Mode of operation for calculating cross validation windows. ``None`` for serial execution, ``'processes'`` for
multiprocessing pool execution, and ``'threads'`` for thread pool execution.
cutoffs
Optional control mode that allows for defining specific datetime values in ``pandas.Timestamp`` format to determine
where to conduct train and test split boundaries for validation of each window.
kwargs
Individual optional overrides to ``prophet.diagnostics.cross_validation()`` and
``prophet.diagnostics.performance_metrics()`` functions. See the
`prophet docs <https://facebook.github.io/prophet/docs/diagnostics.html#cross-validation>`_ for more information.
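A sketch of a typical invocation (the window sizes shown are illustrative and
should be tuned to the periodicity of the training data):

.. code-block:: python

    metrics_df = grouped_prophet_model.cross_validate_and_score(
        horizon="28 days",
        period="90 days",
        initial="365 days",
        parallel="threads",
        metrics=["mae", "rmse", "smape"],
    )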
.. _cv:
Cross Validation
^^^^^^^^^^^^^^^^
The :py:meth:`diviner.GroupedProphet.cross_validate` method is a wrapper around the ``Prophet`` function
``prophet.diagnostics.cross_validation()``. It is intended to be used as a debugging tool for the 'automated' metric
calculation method, see :ref:`Cross Validation and Scoring <cv_score>`. The arguments for this
method are:
horizon
A timedelta formatted string in the ``Pandas.Timedelta`` format that defines the amount of time to utilize
for generating a validation dataset that is used for calculating loss metrics per each cross validation window
iteration. Example horizons: (``"30 days"``, ``"24 hours"``, ``"16 weeks"``). See
`the pandas Timedelta docs <https://pandas.pydata.org/docs/reference/api/pandas.Timedelta.html>`_ for more
information on supported formats and syntax.
period
The periodicity of how often a windowed validation will be constructed. Smaller values here will take longer as
more 'slices' of the data will be made to calculate error metrics. The format is the same as that of the horizon
(i.e. ``"60 days"``).
initial
The minimum size of data that will be used to build the cross validation window. Values that are excessively small
may cause issues with the effectiveness of the estimated overall prediction error and lead to long cross validation
runtimes. This argument is in the same format as ``horizon`` and ``period``, a ``pandas.Timedelta`` format string.
parallel
Selection on how to execute the cross validation windows. Supported modes: (``None``, ``'processes'``, or
``'threads'``). Due to the reuse of the originating dataset for window slice selection, a shared memory instance
mode ``'threads'`` is recommended over using ``'processes'`` mode.
cutoffs
Optional arguments for specified ``pandas.Timestamp`` values to define where boundaries should be within
the group series values. If this is specified, the ``period`` and ``initial`` arguments are not used.
.. note:: For information on how cross validation works within the ``Prophet`` library, see this
`link <https://facebook.github.io/prophet/docs/diagnostics.html#cross-validation>`_.
The return type of this method is a dictionary of ``{<group_key>: <pandas DataFrame>}``, the ``DataFrame`` containing
the cross validation window scores across time horizon splits.
Performance Metrics
^^^^^^^^^^^^^^^^^^^
The :py:meth:`calculate_performance_metrics <diviner.GroupedProphet.calculate_performance_metrics>` method is a
debugging tool that wraps the function `performance_metrics <https://facebook.github.io/prophet/docs/diagnostics.html>`_
from ``Prophet``. Usage of this method will generate the defined metric scores for each cross validation window,
returning a dictionary of ``{<group_key>: <DataFrame of metrics for each window>}``
Method arguments:
cv_results
The output of :py:meth:`cross_validate <diviner.GroupedProphet.cross_validate>`.
metrics
Optional subset list of metrics. See the signature for :ref:`cross_validate_and_score() <cv_score>` for supported
metrics.
rolling_window
Defines the fractional amount of data to use in each rolling window to calculate the performance metrics.
Must be in the range of 0 to 1.
monthly
Boolean value that, if set to ``True``, will collate the windows to ensure that horizons are computed as a factor
of months of the year from the cutoff date. This is only useful if the data has a yearly seasonality component to it
that relates to day of month.
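Putting the two debugging utilities together, a session might look like the
following sketch (window sizes are illustrative):

.. code-block:: python

    cv_results = grouped_prophet_model.cross_validate(
        horizon="28 days", period="90 days", initial="365 days", parallel="threads"
    )
    window_metrics = grouped_prophet_model.calculate_performance_metrics(
        cv_results, metrics=["mae", "rmse"], rolling_window=0.1
    )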
Class Signature
---------------
.. autoclass:: diviner.GroupedProphet
:members: | 48.337398 | 195 | 0.696535 |
f96f68a5fa7dcc84858208d94ff22b2e601838a4 | 8,784 | rst | reStructuredText | docs/user_guide/pipelines.rst | aborodya/AlphaPy | b24de414b89a74d0ef1a6249e0bc4f96fb508a1e | [
"Apache-2.0"
] | 559 | 2018-09-13T00:14:34.000Z | 2022-03-31T19:17:12.000Z | docs/user_guide/pipelines.rst | aborodya/AlphaPy | b24de414b89a74d0ef1a6249e0bc4f96fb508a1e | [
"Apache-2.0"
] | 24 | 2018-09-15T21:01:50.000Z | 2021-12-30T01:39:57.000Z | docs/user_guide/pipelines.rst | aborodya/AlphaPy | b24de414b89a74d0ef1a6249e0bc4f96fb508a1e | [
"Apache-2.0"
] | 116 | 2018-09-26T12:05:46.000Z | 2022-03-11T10:23:24.000Z | AlphaPy
=======
.. image:: model_pipeline.png
:alt: AlphaPy Model Pipeline
:width: 100%
:align: center
Model Object Creation
---------------------
**AlphaPy** first reads the ``model.yml`` file and then displays
the model parameters as confirmation that the file was read
successfully. As shown in the example below, the Random Forest
(RF) and XGBoost (XGB) algorithms are used to build the model.
From the model specifications, a ``Model`` object will be
created.
All of the model parameters are listed in alphabetical order.
At a minimum, scan for ``algorithms``, ``features``, ``model_type``,
and ``target`` to verify their accuracy, i.e., that you are
running the right model. The ``verbosity`` parameter will control
the degree of output that you see when running the pipeline.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 1-84
Data Ingestion
--------------
Data are loaded from both the training file and the test file.
Any features that you wish to remove from the data are then
dropped. Statistics about the shape of the data and the target
variable proportions are logged.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 85-103
Feature Processing
------------------
There are two stages to feature processing. First, you may want
to transform a column of a dataframe into a different format
or break up a feature into its respective components. This is
known as a *treatment*, and it is a one-to-many transformation.
For example, a date feature can be extracted into day, month,
and year.
The next stage is feature type determination, which applies
to all features, regardless of whether or not a treatment has
been previously applied. The unique number of a feature's values
dictates whether or not that feature is a factor. If the
given feature is a factor, then a specific type of encoding
is applied. Otherwise, the feature is generally either text
or a number.
.. image:: features.png
:alt: Feature Flowchart
:width: 100%
:align: center
In the example below, each feature's type is identified along
with the unique number of values. For factors, a specific type
of encoding is selected, as specified in the ``model.yml``
file. For text, you can choose either count vectorization and
TF-IDF or just plain factorization. Numerical features have
both imputation and log-transformation options.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 104-172
As AlphaPy runs, you can see the number of new features that are
generated along the way, depending on which features you selected
in the ``features`` section of the ``model.yml`` file. For
interactions, you specify the polynomial degree and the percentage
of the interactions that you would like to retain in the model.
Be careful of the polynomial degree, as the number of interaction
terms is exponential.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 173-185
Feature Selection
-----------------
There are two types of feature selection:
* Univariate Selection
* Recursive Feature Elimination (RFE)
Univariate selection finds the informative features based on
a percentile of the highest scores, using a scoring function
such as ANOVA F-Scores or Chi-squared statistics. There are
scoring functions for both classification and regression.
RFE is more time-consuming, but has cross-validation with a
configurable scoring function and step size. We also recommend
using a seed for reproducible results, as the resulting
support vector (a ranking of the features) can vary dramatically
across runs.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 195-198
Model Estimation
----------------
A classification model is highly dependent on the class proportions.
If you’re trying to predict a rare pattern with high accuracy,
then training for accuracy will be useless because a dumb classifier
could just predict the majority class and be right most of the time.
As a result, **AlphaPy** gives data scientists the ability to
undersample majority classes or oversample minority classes. There
are even techniques that combine the two, e.g., SMOTE or ensemble
sampling.
Before estimation, we need to apply sampling and possibly
shuffling to improve cross-validation. For example, time series
data is ordered, and you may want to eliminate that dependency.
At the beginning of the estimation phase, we read in all of the
algorithms from the ``algos.yml`` file and then select those
algorithms used in this particular model. The process is
iterative for each algorithm: initial fit, feature selection,
grid search, and final fit.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 186-233
Grid Search
-----------
There are two types of grid search for model hyperparameters:
* Full Grid Search
* Randomized Grid Search
A full grid search is exhaustive and can be the most time-consuming
task of the pipeline. We recommend that you save the full grid search
until the end of your model development, and in the interim use a
randomized grid search with a fixed number of iterations. The
results of the top 3 grid searches are ranked by mean validation
score, and the best estimator is saved for making predictions.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 199-210
Model Evaluation
----------------
Each model is evaluated using all of the metrics_ available in
*scikit-learn* to give you a sense of how other scoring functions
compare. Metrics are calculated on the training data for every
algorithm. If test labels are present, then metrics are also
calculated for the test data.
.. _metrics: http://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 237-276
Model Selection
---------------
Blended Model
~~~~~~~~~~~~~
.. image:: model_blend.png
:alt: Blended Model Creation
:width: 100%
:align: center
.. _ridge: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 234-236
Best Model
~~~~~~~~~~
The best model is selected from the score of:
* a model for each algorithm, and
* a *blended* model
Depending on the scoring function, best model selection is
based on whether the score must be minimized or maximized.
For example, the Area Under the Curve (AUC) must be
maximized, and negative log loss must be minimized.
.. image:: model_best.png
:alt: Best Model Selection
:width: 100%
:align: center
When more than one algorithm is scored in the estimation stage,
the final step is to combine the predictions of each one and
create the blended model, i.e., the predictions from the
independent models are used as training features. For
classification, AlphaPy uses logistic regression, and for
regression, we use ridge_ regression.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 276-285
Plot Generation
---------------
The user has the option of generating the following plots:
* Calibration Plot
* Confusion Matrix
* Feature Importances
* Learning Curve
* ROC Curve
All plots are saved to the ``plots`` directory of your project.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 285-339
Calibration Plot
~~~~~~~~~~~~~~~~
.. image:: plot_calibration.png
:alt: Calibration Plot
:width: 100%
:align: center
Confusion Matrix
~~~~~~~~~~~~~~~~
.. image:: plot_confusion_matrix.png
:alt: Confusion Matrix
:width: 100%
:align: center
Feature Importances
~~~~~~~~~~~~~~~~~~~
.. image:: plot_feature_importances.png
:alt: Feature Importances
:width: 100%
:align: center
Learning Curve
~~~~~~~~~~~~~~
.. image:: plot_learning_curve.png
:alt: Learning Curve
:width: 100%
:align: center
ROC Curve
~~~~~~~~~
.. image:: plot_roc_curve.png
:alt: ROC Curve
:width: 100%
:align: center
Final Results
-------------
* The model object is stored in Pickle (.pkl) format in the ``models``
directory of the project. The model is loaded later in prediction mode.
* The feature map is stored in Pickle (.pkl) format in the ``models``
directory. The feature map is restored for prediction mode.
* Predictions are stored in the project's ``output`` directory.
* Sorted rankings of predictions are stored in ``output``.
* Any submission files are stored in ``output``.
.. literalinclude:: alphapy.log
:language: text
:caption: **alphapy.log**
:lines: 340-351
| 29.377926 | 104 | 0.733834 |
1f90f7c12c06e44835021897bdb2ca403e41ca45 | 249 | rst | reStructuredText | doc/source/cce/user-guide/storage_management.rst | OpenTelekomCloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 1 | 2017-09-18T13:54:39.000Z | 2017-09-18T13:54:39.000Z | doc/source/cce/user-guide/storage_management.rst | opentelekomcloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 34 | 2020-02-21T17:23:45.000Z | 2020-09-30T09:23:10.000Z | doc/source/cce/user-guide/storage_management.rst | OpenTelekomCloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 14 | 2017-08-01T09:33:20.000Z | 2019-12-09T07:39:26.000Z | ==================
Storage Management
==================
.. toctree::
:maxdepth: 1
storage-overview.md
using-local-disks-for-storage.md
using-evs-disks-for-storage.md
using-sfs-file-systems-for-storage.md
snapshot-and-backup.md
| 17.785714 | 40 | 0.610442 |
b8ec00f3a7dbb69b95204803dc12f4150ea68985 | 508 | rst | reStructuredText | src/documentation/original_data.rst | JonathanWillnow/european_factor_stockpicking_screener | 4bb9396490a2171954ecf7aff16656d4225ae251 | [
"MIT"
] | null | null | null | src/documentation/original_data.rst | JonathanWillnow/european_factor_stockpicking_screener | 4bb9396490a2171954ecf7aff16656d4225ae251 | [
"MIT"
] | null | null | null | src/documentation/original_data.rst | JonathanWillnow/european_factor_stockpicking_screener | 4bb9396490a2171954ecf7aff16656d4225ae251 | [
"MIT"
] | null | null | null | .. _original_data:
*************
Original data
*************
Documentation of the different datasets in *original_data*.
Data Scraping
=================
The following functions are used to obtain the datasets for the different stockindicies / stockexchnages contained in *original_data* and can be found in *data_management*.
Stockinfo Scraping
=================
.. automodule:: src.data_management.stockinfo_scraper
:members:
.. automodule:: src.data_management.task_get_stockinfo
:members:
| 22.086957 | 172 | 0.694882 |
7ab46d4163020807e8e45fb33af4e0e5cfc0ae20 | 64 | rst | reStructuredText | py/docs/source/common/calendar.rst | curtislb/ProjectEuler | 7baf8d7b7ac0e8697d4dec03458b473095a45da4 | [
"MIT"
] | null | null | null | py/docs/source/common/calendar.rst | curtislb/ProjectEuler | 7baf8d7b7ac0e8697d4dec03458b473095a45da4 | [
"MIT"
] | null | null | null | py/docs/source/common/calendar.rst | curtislb/ProjectEuler | 7baf8d7b7ac0e8697d4dec03458b473095a45da4 | [
"MIT"
] | null | null | null | calendar
========
.. automodule:: common.calendar
:members:
| 10.666667 | 31 | 0.609375 |
027a43e98159fef4d526428db36878d176bf98ca | 155 | rst | reStructuredText | torchbenchmark/models/fastNLP/docs/source/fastNLP.core.utils.rst | Chillee/benchmark | 91e1b2871327e44b9b7d24d173ca93720fb6565b | [
"BSD-3-Clause"
] | 2,693 | 2018-03-08T03:09:20.000Z | 2022-03-30T07:38:42.000Z | docs/source/fastNLP.core.utils.rst | stratoes/fastNLP | a8a458230489710ab945b37ec22e93315230f2de | [
"Apache-2.0"
] | 291 | 2018-07-21T07:43:17.000Z | 2022-03-07T13:06:58.000Z | docs/source/fastNLP.core.utils.rst | stratoes/fastNLP | a8a458230489710ab945b37ec22e93315230f2de | [
"Apache-2.0"
] | 514 | 2018-03-09T06:54:25.000Z | 2022-03-26T20:11:44.000Z | fastNLP.core.utils
==================
.. automodule:: fastNLP.core.utils
:members: cache_results, seq_len_to_mask, get_seq_len
:inherited-members:
| 19.375 | 56 | 0.670968 |
f71545101d5774eb3d8f939bafd26ed415c3999a | 1,522 | rst | reStructuredText | pyGeno/doc/source/publications.rst | ealong/pyGeno | b397bf36d49419ecc4c217a64ea64fa90f5a0392 | [
"Apache-2.0"
] | 309 | 2015-01-04T06:33:41.000Z | 2022-03-23T02:06:44.000Z | pyGeno/doc/source/publications.rst | ealong/pyGeno | b397bf36d49419ecc4c217a64ea64fa90f5a0392 | [
"Apache-2.0"
] | 56 | 2015-02-17T17:10:20.000Z | 2021-05-04T22:09:06.000Z | pyGeno/doc/source/publications.rst | ealong/pyGeno | b397bf36d49419ecc4c217a64ea64fa90f5a0392 | [
"Apache-2.0"
] | 58 | 2015-05-22T12:48:01.000Z | 2022-01-29T17:14:33.000Z | Publications
============
Please cite this one:
---------------------
`pyGeno: A Python package for precision medicine and proteogenomics. F1000Research, 2016`_
.. _`pyGeno: A Python package for precision medicine and proteogenomics. F1000Research, 2016`: http://f1000research.com/articles/5-381/v2
pyGeno was also used in the following studies:
----------------------------------------------
`MHC class I–associated peptides derive from selective regions of the human genome. J Clin Invest. 2016`_
.. _`MHC class I–associated peptides derive from selective regions of the human genome. J Clin Invest. 2016`: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5127664/
`Global proteogenomic analysis of human MHC class I-associated peptides derived from non-canonical reading frames. Nat. Comm. 2015`_
.. _Global proteogenomic analysis of human MHC class I-associated peptides derived from non-canonical reading frames. Nat. Comm. 2015: http://dx.doi.org/10.1038/ncomms10238
`Impact of genomic polymorphisms on the repertoire of human MHC class I-associated peptides. Nat. Comm. 2014`_
.. _Impact of genomic polymorphisms on the repertoire of human MHC class I-associated peptides. Nat. Comm. 2014: http://www.ncbi.nlm.nih.gov/pubmed/24714562
`MHC I-associated peptides preferentially derive from transcripts bearing miRNA response elements. Blood. 2012`_
.. _MHC I-associated peptides preferentially derive from transcripts bearing miRNA response elements. Blood. 2012: http://www.ncbi.nlm.nih.gov/pubmed/22438248
| 50.733333 | 172 | 0.752957 |
7764e1657e8ad7474a03ec5ee702997b917279dd | 1,821 | rst | reStructuredText | docs/behave_ecosystem.rst | jessingrass/behave | c5da3ddad284baf399b9596346652f0e557108d6 | [
"BSD-2-Clause"
] | 22 | 2015-03-29T17:08:17.000Z | 2021-12-21T17:27:20.000Z | docs/behave_ecosystem.rst | jessingrass/behave | c5da3ddad284baf399b9596346652f0e557108d6 | [
"BSD-2-Clause"
] | null | null | null | docs/behave_ecosystem.rst | jessingrass/behave | c5da3ddad284baf399b9596346652f0e557108d6 | [
"BSD-2-Clause"
] | 12 | 2015-11-12T13:14:33.000Z | 2021-05-25T13:51:46.000Z | .. _id.appendix.behave_ecosystem:
Behave Ecosystem
==============================================================================
The following tools and extensions try to simplify the work with `behave`_.
.. _behave: https://github.com/behave/behave
Tools
------------------------------------------------------------------------------
=========================== ===========================================================================
Tool Description
=========================== ===========================================================================
`cucutags`_ Generate `ctags`_-like information (cross-reference index)
for Gherkin feature files and behave step definitions.
=========================== ===========================================================================
.. _cucutags: https://gitorious.org/cucutags/cucutags/
.. _ctags: http://ctags.sourceforge.net/
Editor Plugins
------------------------------------------------------------------------------
=================== =============================================================================
Editor Plugin Description
=================== =============================================================================
`gedit-behave`_ `gedit`_ plugin for jumping between feature and step files.
`vim-behave`_ `vim`_ plugin: Port of `vim-cucumber`_ to Python `behave`_.
=================== =============================================================================
.. _gedit: https://projects.gnome.org/gedit/
.. _vim: http://www.vim.org/
.. _gedit-behave: https://gitorious.org/cucutags/gedit-behave
.. _vim-behave: https://gitorious.org/cucutags/vim-behave
.. _vim-cucumber: https://github.com/tpope/vim-cucumber.git
| 42.348837 | 103 | 0.365184 |
06265823d58c68aadf43fc34a538122a73797502 | 11,275 | rst | reStructuredText | docs/user/introduction.rst | pycampers/zproc | cd385852c32afb76d3e3a558ba0c809466de0a60 | [
"MIT"
] | 106 | 2018-04-30T15:39:05.000Z | 2019-09-09T02:15:52.000Z | docs/user/introduction.rst | devxpy/zproc | cd385852c32afb76d3e3a558ba0c809466de0a60 | [
"MIT"
] | 2 | 2020-03-24T17:20:04.000Z | 2020-03-31T04:59:13.000Z | docs/user/introduction.rst | devxpy/zproc | cd385852c32afb76d3e3a558ba0c809466de0a60 | [
"MIT"
] | 1 | 2021-02-17T19:52:12.000Z | 2021-02-17T19:52:12.000Z | Introduction to ZProc
=====================
The idea of zproc revolves around this funky :py:class:`.State` object.
A :py:class:`.Context` is provided as a factory for creating objects.
It's the easiest, most obvious way to use zproc.
Each :py:class:`.Context` object must be associated with a server process,
whose job is to manage the state of your application:
anything that needs synchronization.
Creating one is as simple as:
.. code-block:: python
import zproc
ctx = zproc.Context()
It makes the creation of objects explicit and bound to a specific Context,
eliminating the need for various guessing games.
The Context means just that.
It's a collection of various parameters and flags that help the framework
identify *where* the program currently is.
Launching a Process
---------------------------------
.. sidebar:: Decorators
Function decorators are functions which
accept other functions as arguments,
and add some wrapper code around them.
.. code-block:: python
@decorator
def func():
pass
# roughly equivalent to:
func = decorator(func)
The :py:meth:`.Context.spawn` function allows you to launch processes.
.. code-block:: python
def my_process(state):
...
ctx.spawn(my_process)
It is also available in decorator form, via :py:meth:`.Context.process`.
.. code-block:: python
@ctx.process
def my_process(state):
...
The state
---------
.. code-block:: python
state = ctx.create_state()
:py:meth:`~.Context.spawn` will launch a process and provide it with ``state``.
:py:class:`.State` is a *dict-like* object.
*dict-like*, because it's not exactly a ``dict``.
It supports common dictionary operations on the state.
| However, you *cannot* actually operate on the underlying ``dict``.
| It's guarded by a Process, whose sole job is to manage it.
| The :py:class:`.State` object only *instructs* that Process to modify the ``dict``.
You may also access it from the :py:class:`.Context` itself -- ``ctx.state``.
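As a quick sketch, the usual dictionary operations work as you'd expect
(assuming a ``state`` obtained as shown above):

.. code-block:: python

    state['cookies'] = 5          # set a key
    print(state['cookies'])       # read it back
    print('cookies' in state)     # membership test
    print(state.get('spam', 0))   # read with a default, like a regular dict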
Process arguments
------------------------------
To supply arguments to the Process's target function,
you can use ``args`` or ``kwargs``:
.. code-block:: python
def my_process(state, num, exp):
print(num, exp) # 2, 4
ctx.spawn(my_process, args=[2], kwargs={'exp': 4})
``args`` is a sequence of positional arguments for the function;
``kwargs`` is a dict, which maps argument names and values.
Waiting for a Process
-----------------------------------
Once you've launched a Process, you can wait for it to complete,
and obtain the return value.
.. code-block:: python
from time import sleep
def sleeper(state):
sleep(5)
return 'Hello There!'
p = ctx.spawn(sleeper)
result = p.wait()
print(result) # Hello There!
.. _process_factory:
Process Factory
--------------------------
:py:meth:`~.Context.spawn` also lets you launch many processes at once.
.. code-block:: python
p_list = ctx.spawn(sleeper, count=10)
p_list.wait()
.. _worker_map:
Worker Processes
----------------
This feature lets you distribute a computation to a fixed number of workers.
This is meant to be used for CPU bound tasks,
since you can only have a limited number of CPU bound Processes working at any given time.
:py:meth:`~.Context.worker_map` lets you use the built-in ``map()`` function in a parallel way.
It divides up the sequence you provide into ``count`` number of pieces,
and sends them to ``count`` number of workers.
---
First, you need a :py:class:`.Swarm` object,
which is the front-end for using worker Processes.
.. code-block:: python
:caption: obtaining workers
ctx = zproc.Context()
swarm = ctx.create_swarm(4)
---
Now, we can start to use it.
.. code-block:: python
:caption: Works similar to ``map()``
def square(num):
return num * num
# [1, 4, 9, 16]
list(swarm.map(square, [1, 2, 3, 4]))
.. code-block:: python
:caption: Common Arguments.
def power(num, exp):
return num ** exp
# [0, 1, 8, 27, 64, ... 941192, 970299]
list(
swarm.map(
power,
range(100),
args=[3],
count=10 # distribute among 10 workers.
)
)
.. code-block:: python
:caption: Mapped Positional Arguments.
def power(num, exp):
return num ** exp
# [4, 9, 36, 256]
list(
swarm.map(
power,
map_args=[(2, 2), (3, 2), (6, 2), (2, 8)]
)
)
.. code-block:: python
:caption: Mapped Keyword Arguments.
def my_thingy(seed, num, exp):
return seed + num ** exp
# [1007, 3132, 298023223876953132, 736, 132, 65543, 8]
list(
ctx.worker_map(
my_thingy,
args=[7],
map_kwargs=[
{'num': 10, 'exp': 3},
{'num': 5, 'exp': 5},
{'num': 5, 'exp': 2},
{'num': 9, 'exp': 3},
{'num': 5, 'exp': 3},
{'num': 4, 'exp': 8},
{'num': 1, 'exp': 4},
],
count=5
)
)
---
What's interesting about :py:meth:`~.Context.worker_map` is that it returns a generator.
The moment you call it, it distributes the task to ``count`` workers
and immediately returns a generator,
which in turn pulls the results from these workers
and arranges them in order.
---
The amount of time it takes for ``next(res)`` is non-linear,
because all the blocking computation is carried out in the background.
>>> import zproc
>>> import time
>>> ctx = zproc.Context()
>>> def blocking_func(x):
... time.sleep(5)
...
... return x * x
...
>>> res = ctx.worker_map(blocking_func, range(10)) # returns immediately
>>> res
<generator object Context._pull_results_for_task at 0x7fef735e6570>
>>> next(res) # might block
0
>>> next(res) # might block
1
>>> next(res) # might block
4
>>> next(res) # might block
9
>>> next(res) # might block
16
*and so on..*
.. _process_map:
Map Processes
-------------
This is meant to be used for I/O and network bound tasks,
as you can have more number of Processes working together,
than the number physical CPUs.
This is because these kinds of tasks typically involve waiting for a resource,
and are, as a result, quite light on CPU usage.
:py:meth:`~.Context.map_process`
has the exact same semantics for mapping sequences as :py:meth:`~.Context.worker_map`,
except that it launches a new Process for each item in the sequence.
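As a minimal sketch (reusing the ``square`` function from earlier):

.. code-block:: python

    def square(num):
        return num * num

    # One new Process is launched for each item in the sequence.
    results = list(ctx.map_process(square, [1, 2, 3, 4]))
    print(results)  # [1, 4, 9, 16]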
Reactive programming
--------------------
.. sidebar:: Reactive Programming
Reactive programming is a declarative programming
paradigm concerned with data streams and the propagation of change.
This is the part where you really start to see the benefits of a smart state.
The state knows when it's being updated, and does the job of notifying everyone.
State watching allows you to "react" to some change in the state in an efficient way.
The problem
+++++++++++
.. sidebar:: Busy waiting
busy-waiting
is a technique in which a process repeatedly checks to see if a condition is true,
such as whether keyboard input or a lock is available.
*Busy waiting is expensive and quite tricky to get right.*
Let's say you want to wait for the number of ``"cookies"`` to be ``5``.
Using busy-waiting, you might do it with something like this:
.. code-block:: python
while True:
if cookies == 5:
print('done!')
break
But then you find out that this eats too much CPU, so you add some sleep.
.. code-block:: python
from time import sleep
while True:
if cookies == 5:
print('done!')
break
sleep(1)
And from there on, you try to manage the time for which your application sleeps (to arrive at a sweet spot).
The solution
++++++++++++
zproc provides an elegant, easy to use solution to this problem.
.. code-block:: python
def my_process(state):
state.get_when_equal('cookies', 5)
print('done with zproc!')
This eats very little to no CPU, and is fast enough for almost everyone's needs.
You can also provide a callable,
which is called on each state update;
the wait ends when its return value is *truthy*.
.. code-block:: python
state.get_when(lambda snap: snap.get('cookies') == 5)
.. caution::
Wrong use of state watchers!
.. code-block:: python
from time import time
t = time()
state.get_when(lambda _: time() > t + 5) # wrong!
State only knows how to respond to *state* changes.
The passage of time doesn't constitute a state update.
Read more on the :ref:`state-watching`.
Mutating objects inside state
-----------------------------
.. sidebar:: Mutation
In computer science,
mutation refers to the act of modifying an object in-place.
When we say that an object is mutable,
it implies that its in-place methods "mutate" the object's contents.
Zproc does not allow one to mutate objects inside the state.
.. code-block:: python
:caption: incorrect mutation
state['numbers'] = [1, 2, 3] # works
state['numbers'].append(4) # doesn't work
The *right* way to mutate objects in the state,
is to do it using the :py:func:`~.atomic` decorator.
.. code-block:: python
:caption: correct mutation
@zproc.atomic
def add_a_number(snap, to_add):
snap['numbers'].append(to_add)
@ctx.process
def my_process(state):
add_a_number(state, 4)
Read more about :ref:`atomicity`.
Here be dragons
---------------
.. sidebar:: Thread safety
Thread-safe code only manipulates shared data structures in a manner that ensures that all
threads behave properly and fulfill their design specifications without unintended interaction.
Absolutely none of the classes in ZProc are Process or Thread safe.
You must never attempt to share an object across multiple Processes.
Create a new object for each Process.
Communicate and synchronize using the :py:class:`.State` at all times.
This is, in-general *very* good practice.
Never attempt to directly share python objects across Processes,
and the framework will reward you :).
The problem
+++++++++++
.. code-block:: python
:caption: incorrect use of the framework
ctx = zproc.Context()
def my_process(state):
ctx.spawn(some_other_process) # very wrong!
ctx.spawn(my_process)
Here, the ``ctx`` object is shared between the parent and child Process.
This is not allowed, and will inevitably lead to improper behavior.
The solution
++++++++++++
You can ask zproc to create new objects for you.
.. code-block:: python
:caption: correct use of the framework
ctx = zproc.Context()
def my_process(inner_ctx):
inner_ctx.spawn(some_other_process) # correct.
ctx.spawn(my_process, pass_context=True) # Notice "pass_context"
---
Or, create new ones youself.
.. code-block:: python
:caption: correct use of the framework
ctx = zproc.Context()
def my_process(state):
inner_ctx = zproc.Context() # important!
inner_ctx.spawn(some_other_process)
ctx.spawn(my_process)
| 23.010204 | 108 | 0.647273 |
060321bee299a00c55cc8d3849316a00082ed9f1 | 74 | rest | reStructuredText | docker/echo-app/test.rest | MichEam/kubernetes-sample-1 | 33a61128d83183c794db8e9aaba2262774d7c297 | [
"MIT"
] | null | null | null | docker/echo-app/test.rest | MichEam/kubernetes-sample-1 | 33a61128d83183c794db8e9aaba2262774d7c297 | [
"MIT"
] | null | null | null | docker/echo-app/test.rest | MichEam/kubernetes-sample-1 | 33a61128d83183c794db8e9aaba2262774d7c297 | [
"MIT"
] | null | null | null | http://localhost:8080/
--
--
GET /saaa/gggg?hoge=moge&a=b
--
GET /hello?
| 9.25 | 28 | 0.621622 |
13a2592a6372465459cd8f4a81f265ec6395db14 | 198 | rst | reStructuredText | docs/api/metrics/uplift_by_percentile.rst | bwbelljr/scikit-uplift | b2fbbb04bd10fc082f25aac2e04dc94cfc8bd64a | [
"MIT"
] | 403 | 2019-12-21T09:36:57.000Z | 2022-03-30T09:36:56.000Z | docs/api/metrics/uplift_by_percentile.rst | fspofficial/scikit-uplift | c9dd56aa0277e81ef7c4be62bf2fd33432e46f36 | [
"MIT"
] | 100 | 2020-02-29T11:52:21.000Z | 2022-03-29T23:14:33.000Z | docs/api/metrics/uplift_by_percentile.rst | tankudo/scikit-uplift | 412b4904397628d1e1101b7c6c4a561cc2ef38f5 | [
"MIT"
] | 81 | 2019-12-26T08:28:44.000Z | 2022-03-22T09:08:54.000Z | *********************************************
`sklift.metrics <./>`_.uplift_by_percentile
*********************************************
.. autofunction:: sklift.metrics.metrics.uplift_by_percentile | 39.6 | 61 | 0.434343 |
cf7acae040162ce88fa01f135cf1735f9c23fa1a | 394 | rst | reStructuredText | doc/source/users/guides/telemetry.rst | teresa-ho/stx-openstacksdk | 7d723da3ffe9861e6e9abcaeadc1991689f782c5 | [
"Apache-2.0"
] | 43 | 2018-12-19T08:39:15.000Z | 2021-07-21T02:45:43.000Z | doc/source/users/guides/telemetry.rst | teresa-ho/stx-openstacksdk | 7d723da3ffe9861e6e9abcaeadc1991689f782c5 | [
"Apache-2.0"
] | 11 | 2019-03-17T13:28:56.000Z | 2020-09-23T23:57:50.000Z | doc/source/users/guides/telemetry.rst | teresa-ho/stx-openstacksdk | 7d723da3ffe9861e6e9abcaeadc1991689f782c5 | [
"Apache-2.0"
] | 47 | 2018-12-19T05:14:25.000Z | 2022-03-19T15:28:30.000Z | Using OpenStack Telemetry
=========================
.. caution::
BETA: This API is a work in progress and is subject to change.
Before working with the Telemetry service, you'll need to create a connection
to your OpenStack cloud by following the :doc:`connect` user guide. This will
provide you with the ``conn`` variable used in the examples below.
.. TODO(thowe): Implement this guide
| 32.833333 | 77 | 0.713198 |
7639565febfb8ae664525518ec5a379ead98b2b2 | 1,047 | rst | reStructuredText | doc/Changelog/9.0/Breaking-82368-SignalAfterExtensionConfigurationWriteRemoved.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 1 | 2019-10-04T23:58:04.000Z | 2019-10-04T23:58:04.000Z | doc/Changelog/9.0/Breaking-82368-SignalAfterExtensionConfigurationWriteRemoved.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 1 | 2021-12-17T10:58:59.000Z | 2021-12-17T10:58:59.000Z | doc/Changelog/9.0/Breaking-82368-SignalAfterExtensionConfigurationWriteRemoved.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 4 | 2020-10-06T08:18:55.000Z | 2022-03-17T11:14:09.000Z | .. include:: ../../Includes.txt
====================================================================
Breaking: #82368 - Signal 'afterExtensionConfigurationWrite' removed
====================================================================
See :issue:`82368`
Description
===========
The extension manager no longer emits signal :php:`afterExtensionConfigurationWrite`.
The code has been moved to the install tool, which does not load signal / slot
information at this point.
Impact
======
Slots of this signal are no longer executed.
Affected Installations
======================
Extensions that use signal :php:`afterExtensionConfigurationWrite`. This is a rather seldom
used signal, relevant mostly only for distributions.
Migration
=========
In many cases it should be possible to use signal :php:`afterExtensionInstall` of class
:php:`\TYPO3\CMS\Extensionmanager\Utility\InstallUtility` instead, which is fired after an extension
has been installed.
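As a rough sketch, a slot could be wired to that signal like this (the slot
class and method names here are placeholders):

.. code-block:: php

    $signalSlotDispatcher = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(
        \TYPO3\CMS\Extbase\SignalSlot\Dispatcher::class
    );
    $signalSlotDispatcher->connect(
        \TYPO3\CMS\Extensionmanager\Utility\InstallUtility::class,
        'afterExtensionInstall',
        \Vendor\MyExtension\Slot\InstallSlot::class,
        'afterExtensionInstall'
    );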
.. index:: Backend, LocalConfiguration, PHP-API, NotScanned, ext:extensionmanager
| 27.552632 | 99 | 0.653295 |
af41e5d24c9aefabd041f758800fa0f2fbf89f63 | 23,928 | rst | reStructuredText | tl/appendices/orm-migration.rst | NewBLife/docs | 48ecb8ef234fd2f97537d36a76135e4b936b0c0a | [
"MIT"
] | null | null | null | tl/appendices/orm-migration.rst | NewBLife/docs | 48ecb8ef234fd2f97537d36a76135e4b936b0c0a | [
"MIT"
] | null | null | null | tl/appendices/orm-migration.rst | NewBLife/docs | 48ecb8ef234fd2f97537d36a76135e4b936b0c0a | [
"MIT"
] | null | null | null | New ORM Upgrade Guide
#####################
CakePHP 3.0 features a new ORM that has been re-written from the ground up.
While the ORM used in 1.x and 2.x has served us well for a long time, it had
a few issues that we wanted to fix.
* Frankenstein - Is it a record, or a table? Currently it's both.
* Inconsistent API - Model::read() for example.
* No query object - Queries are always defined as arrays, this has some
limitations and restrictions. For example it makes doing unions and
sub-queries much harder.
* Returns arrays - This is a common complaint about CakePHP, and has probably
reduced adoption at some levels.
* No record object - This makes attaching formatting methods
difficult/impossible.
* Containable - Should be part of the ORM, not a crazy hacky behavior.
* Recursive - This should be better controlled as defining which associations
are included, not a level of recursiveness.
* DboSource - It is a beast, and Model relies on it more than datasource. That
separation could be cleaner and simpler.
* Validation - Should be separate, it's a giant crazy function right now. Making
it a reusable bit would make the framework more extensible.
The ORM in CakePHP 3.0 solves these and many more problems. The new ORM
focuses on relational data stores right now. In the future and through plugins
we will add non relational stores like ElasticSearch and others.
Design of the New ORM
=====================
The new ORM solves several problems by having more specialized and focused
classes. In the past you would use ``Model`` and a Datasource for all
operations. Now the ORM is split into more layers:
* ``Cake\Database\Connection`` - Provides a platform independent way to create
and use connections. This class provides a way to use transactions,
execute queries and access schema data.
* ``Cake\Database\Dialect`` - The classes in this namespace provide platform
specific SQL and transform queries to work around platform specific
limitations.
* ``Cake\Database\Type`` - Is the gateway class to CakePHP database type
conversion system. It is a pluggable framework for adding abstract column
types and providing mappings between database, PHP representations and PDO
bindings for each data type. For example datetime columns are represented as
``DateTime`` instances in your code now.
* ``Cake\ORM\Table`` - The main entry point into the new ORM. Provides access
to a single table. Handles the definition of associations, use of behaviors and
creation of entities and query objects.
* ``Cake\ORM\Behavior`` - The base class for behaviors, which act very similar
to behaviors in previous versions of CakePHP.
* ``Cake\ORM\Query`` - A fluent object based query builder that replaces
the deeply nested arrays used in previous versions of CakePHP.
* ``Cake\ORM\ResultSet`` - A collection of results that gives powerful tools
for manipulating data in aggregate.
* ``Cake\ORM\Entity`` - Represents a single row result. Makes accessing data
and serializing to various formats a snap.
Now that you are more familiar with some of the classes you'll interact with
most frequently in the new ORM it is good to look at the three most important
classes. The ``Table``, ``Query`` and ``Entity`` classes do much of the heavy
lifting in the new ORM, and each serves a different purpose.
Table Objects
-------------
Table objects are the gateway into your data. They handle many of the tasks that
``Model`` did in previous releases. Table classes handle tasks like:
- Creating queries.
- Providing finders.
- Validating and saving entities.
- Deleting entities.
- Defining and accessing associations.
- Triggering callback events.
- Interacting with behaviors.
The documentation chapter on :doc:`/orm/table-objects` provides far more detail
on how to use table objects than this guide can. Generally when moving existing
model code over it will end up in a table object. Table objects don't contain
any platform dependent SQL. Instead they collaborate with entities and the query
builder to do their work. Table objects also interact with behaviors and other
interested parties through published events.
Query Objects
-------------
While these are not classes you will build yourself, your application code will
make extensive use of the :doc:`/orm/query-builder` which is central to the new
ORM. The query builder makes it easy to build simple or complex queries
including those that were previously very difficult in CakePHP like ``HAVING``,
``UNION`` and sub-queries.
The various ``find()`` calls your application currently has will need to be updated
to use the new query builder. The Query object is responsible for containing the
data to make a query without executing the query itself. It collaborates with
the connection/dialect to generate platform specific SQL which is executed
creating a ``ResultSet`` as the output.
Entity Objects
--------------
In previous versions of CakePHP the ``Model`` class returned dumb arrays that
could not contain any logic or behavior. While the community made this
shortcoming less painful with projects like CakeEntity, the array results
still caused many developers trouble. For CakePHP 3.0, the
ORM always returns object result sets unless you explicitly disable that
feature. The chapter on :doc:`/orm/entities` covers the various tasks you can
accomplish with entities.
Entities are created in one of two ways. Either by loading data from the
database, or converting request data into entities. Once created, entities allow
you to manipulate the data they contain and persist their data by collaborating
with table objects.
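A minimal sketch of that collaboration (assuming an articles table fetched
from the ``TableRegistry``)::

    use Cake\ORM\TableRegistry;

    $articles = TableRegistry::get('Articles');
    $article = $articles->newEntity(['title' => 'First post']);
    if ($articles->save($article)) {
        // The entity now contains the generated primary key.
        echo $article->id;
    }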
Key Differences
===============
The new ORM is a large departure from the existing ``Model`` layer. There are
many important differences that are important in understanding how the new ORM
operates and how to update your code.
Inflection Rules Updated
------------------------
You may have noticed that table classes have a pluralized name. In addition to
tables having pluralized names, associations are also referred in the plural
form. This is in contrast to ``Model`` where class names and association aliases
were singular. There are a few reasons for this change:
* Table classes represent **collections** of data, not single rows.
* Associations link tables together, describing the relations between many
things.
While the conventions for table objects are to always use plural forms, your
entity association properties will be populated based on the association type.
.. note::
BelongsTo and HasOne associations will use the singular form in entity
properties, while HasMany and BelongsToMany (HABTM) will use plural forms.
The convention change for table objects is most apparent when building queries.
Instead of expressing queries like::
// Wrong
$query->where(['User.active' => 1]);
You need to use the plural form::
// Correct
$query->where(['Users.active' => 1]);
Find returns a Query Object
---------------------------
One important difference in the new ORM is that calling ``find`` on a table will
not return the results immediately, but will return a Query object; this serves
several purposes.
It is possible to alter queries further, after calling ``find``::
$articles = TableRegistry::get('Articles');
$query = $articles->find();
$query->where(['author_id' => 1])->order(['title' => 'DESC']);
It is possible to stack custom finders to append conditions, sorting, limit and
any other clause to the same query before it is executed::
$query = $articles->find('approved')->find('popular');
$query->find('latest');
You can compose queries into one another to create subqueries more easily than
ever::
$query = $articles->find('approved');
$favoritesQuery = $articles->find('favorites', ['for' => $user]);
$query->where(['id' => $favoritesQuery->select(['id'])]);
You can decorate queries with iterators and call methods without even touching
the database. This is great when you have parts of your view cached and having
the results taken from the database is not actually required::
// No queries made in this example!
$results = $articles->find()
->order(['title' => 'DESC'])
->formatResults(function (\Cake\Collection\CollectionInterface $results) {
return $results->extract('title');
});
Queries can be seen as the result object: iterating the query, calling
``toArray()``, or calling any method inherited from :doc:`collection </core-libraries/collections>`
will result in the query being executed and its results returned to you.
The biggest difference you will find when coming from CakePHP 2.x is that
``find('first')`` does not exist anymore. There is a trivial replacement for it,
and it is the ``first()`` method::
// Before
$article = $this->Article->find('first');
// Now
$article = $this->Articles->find()->first();
// Before
$article = $this->Article->find('first', [
'conditions' => ['author_id' => 1]
]);
// Now
$article = $this->Articles->find('all', [
'conditions' => ['author_id' => 1]
])->first();
// Can also be written
$article = $this->Articles->find()
->where(['author_id' => 1])
->first();
If you are loading a single record by its primary key, it will be better to
just call ``get()``::
$article = $this->Articles->get(10);
Finder Method Changes
---------------------
Returning a query object from a find method has several advantages, but comes at
a cost for people migrating from 2.x. If you had some custom find methods in
your models, they will need some modifications. This is how you create custom
finder methods in 3.0::
class ArticlesTable
{
public function findPopular(Query $query, array $options)
{
return $query->where(['times_viewed >' => 1000]);
}
public function findFavorites(Query $query, array $options)
{
$for = $options['for'];
return $query->matching('Users.Favorites', function ($q) use ($for) {
return $q->where(['Favorites.user_id' => $for]);
});
}
}
As you can see, they are pretty straightforward: they get a Query object instead
of an array and must return a Query object back. For 2.x users that implemented
afterFind logic in custom finders, you should check out the :ref:`map-reduce`
section, or use the features found on the
:doc:`collection objects </core-libraries/collections>`. If in your
models you used to rely on having an afterFind for all find operations you can
migrate this code in one of a few ways:
1. Override your entity constructor method and do additional formatting there.
2. Create accessor methods in your entity to create the virtual properties.
3. Redefine ``findAll()`` and use ``formatResults``.
In the 3rd case above your code would look like::
public function findAll(Query $query, array $options)
{
return $query->formatResults(function (\Cake\Collection\CollectionInterface $results) {
    return $results->map(function ($row) {
        // Your afterFind logic; return the (possibly modified) row.
        return $row;
    });
});
}
You may have noticed that custom finders receive an options array. You can pass
any extra information to your finder using this parameter. This is great
news for people migrating from 2.x. Any of the query keys that were used in
previous versions will be converted automatically for you in 3.x to the correct
functions::
// This works in both CakePHP 2.x and 3.0
$articles = $this->Articles->find('all', [
'fields' => ['id', 'title'],
'conditions' => [
'OR' => ['title' => 'Cake', 'author_id' => 1],
'published' => true
],
'contain' => ['Authors'], // The only change! (notice plural)
'order' => ['title' => 'DESC'],
'limit' => 10,
]);
If your application uses 'magic' or :ref:`dynamic-finders`, you will have to
adapt those calls. In 3.x the ``findAllBy*`` methods have been removed; instead,
``findBy*`` always returns a query object. To get the first result, you need to
use the ``first()`` method::
$article = $this->Articles->findByTitle('A great post!')->first();
Hopefully, migrating from older versions is not as daunting as it first seems.
Many of the features we have added will help you remove code, as you can better
express your requirements using the new ORM; at the same time, the
compatibility wrappers will help you rewrite those tiny differences in a fast
and painless way.
One of the other nice improvements in 3.x around finder methods is that
behaviors can implement finder methods with no fuss. By simply defining a method
with a matching name and signature on a Behavior, the finder will automatically
be available on any tables the behavior is attached to.
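A rough sketch of what that looks like (the behavior name and the ``created``
column here are made up for illustration)::

    namespace App\Model\Behavior;

    use Cake\ORM\Behavior;
    use Cake\ORM\Query;

    class RecentBehavior extends Behavior
    {
        public function findRecent(Query $query, array $options)
        {
            // Available as $table->find('recent') on any table this behavior is added to.
            return $query->where(['created >=' => new \DateTime('-7 days')]);
        }
    }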
Recursive and ContainableBehavior Removed
-----------------------------------------
In previous versions of CakePHP you needed to use ``recursive``,
``bindModel()``, ``unbindModel()`` and ``ContainableBehavior`` to reduce the
loaded data to the set of associations you were interested in. A common tactic
to manage associations was to set ``recursive`` to ``-1`` and use Containable to
manage all associations. In CakePHP 3.0 ContainableBehavior, recursive,
bindModel, and unbindModel have all been removed. Instead the ``contain()``
method has been promoted to be a core feature of the query builder. Associations
are only loaded if they are explicitly turned on. For example::
$query = $this->Articles->find('all');
Will **only** load data from the ``articles`` table as no associations have been
included. To load articles and their related authors you would do::
$query = $this->Articles->find('all')->contain(['Authors']);
By only loading associated data that has been specifically requested you spend
less time fighting the ORM trying to get only the data you want.
No afterFind Event or Virtual Fields
------------------------------------
In previous versions of CakePHP you needed to make extensive use of the
``afterFind`` callback and virtual fields in order to create generated data
properties. These features have been removed in 3.0. Because of how ResultSets
iteratively generate entities, the ``afterFind`` callback was not possible.
Both afterFind and virtual fields can largely be replaced with virtual
properties on entities. For example if your User entity has both first and last
name columns you can add an accessor for `full_name` and generate the property
on the fly::
namespace App\Model\Entity;
use Cake\ORM\Entity;
class User extends Entity
{
protected function _getFullName()
{
return $this->first_name . ' ' . $this->last_name;
}
}
Once defined, you can access your new property using ``$user->full_name``.
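A short sketch of the accessor in action (assuming a ``$users`` table
instance)::

    $user = $users->get(1);

    // Invokes the _getFullName() accessor behind the scenes.
    echo $user->full_name;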
Using the :ref:`map-reduce` features of the ORM allow you to build aggregated
data from your results, which is another use case that the ``afterFind``
callback was often used for.
While virtual fields are no longer an explicit feature of the ORM, adding
calculated fields is easy to do in your finder methods. By using the query
builder and expression objects you can achieve the same results that virtual
fields gave::
namespace App\Model\Table;
use Cake\ORM\Table;
use Cake\ORM\Query;
class ReviewsTable extends Table
{
public function findAverage(Query $query, array $options = [])
{
$avg = $query->func()->avg('rating');
$query->select(['average' => $avg]);
return $query;
}
}
Associations No Longer Defined as Properties
--------------------------------------------
In previous versions of CakePHP the various associations your models had were
defined in properties like ``$belongsTo`` and ``$hasMany``. In CakePHP 3.0,
associations are created with methods. Using methods allows us to sidestep the
many limitations class definitions have, and provide only one way to define
associations. Your ``initialize()`` method and all other parts of your application
code, interact with the same API when manipulating associations::
namespace App\Model\Table;
use Cake\ORM\Table;
use Cake\ORM\Query;
class ReviewsTable extends Table
{
public function initialize(array $config)
{
$this->belongsTo('Movies');
$this->hasOne('Ratings');
$this->hasMany('Comments');
$this->belongsToMany('Tags');
}
}
As you can see from the example above each of the association types uses
a method to create the association. One other difference is that
``hasAndBelongsToMany`` has been renamed to ``belongsToMany``. To find out more
about creating associations in 3.0 see the section on :doc:`/orm/associations`.
Another welcome improvement to CakePHP is the ability to create your own
association classes. If you have association types that are not covered by the
built-in relation types you can create a custom ``Association`` sub-class and
define the association logic you need.
Validation No Longer Defined as a Property
------------------------------------------
Like associations, validation rules were defined as a class property in previous
versions of CakePHP. This array would then be lazily transformed into
a ``ModelValidator`` object. This transformation step added a layer of
indirection, complicating rule changes at runtime. Furthermore, validation rules
being defined as a property made it difficult for a model to have multiple sets
of validation rules. In CakePHP 3.0, both these problems have been remedied.
Validation rules are always built with a ``Validator`` object, and it is trivial
to have multiple sets of rules::
namespace App\Model\Table;
use Cake\ORM\Table;
use Cake\ORM\Query;
use Cake\Validation\Validator;
class ReviewsTable extends Table
{
public function validationDefault(Validator $validator)
{
$validator->requirePresence('body')
->add('body', 'length', [
'rule' => ['minLength', 20],
'message' => 'Reviews must be 20 characters or more',
])
->add('user_id', 'numeric', [
'rule' => 'numeric'
]);
return $validator;
}
}
You can define as many validation methods as you need. Each method should be
prefixed with ``validation`` and accept a ``$validator`` argument.
In previous versions of CakePHP 'validation' and the related callbacks covered
a few related but different uses. In CakePHP 3.0, what was formerly called
validation is now split into two concepts:
#. Data type and format validation.
#. Enforcing application, or business rules.
Validation is now applied before ORM entities are created from request data.
This step lets you ensure data matches the data type, format, and basic shape
your application expects. You can use your validators when converting request
data into entities by using the ``validate`` option. See the documentation on
:ref:`converting-request-data` for more information.
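As a rough sketch (as it might appear inside a controller), validation runs
while the entity is created; the ``validate`` option selects one of the
validation methods defined on the table (``'default'`` maps to
``validationDefault()``)::

    $article = $articles->newEntity($this->request->data, [
        'validate' => 'default',
    ]);
    if ($article->errors()) {
        // Type and format validation failed; errors are indexed by field name.
    }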
:ref:`Application rules <application-rules>` allow you to define rules that
ensure your application's rules, state and workflows are enforced. Rules are
defined in your Table's ``buildRules()`` method. Behaviors can add rules using
the ``buildRules()`` hook method. An example ``buildRules()`` method for our
articles table could be::
// In src/Model/Table/ArticlesTable.php
namespace App\Model\Table;
use Cake\ORM\Table;
use Cake\ORM\RulesChecker;
class ArticlesTable extends Table
{
public function buildRules(RulesChecker $rules)
{
$rules->add($rules->existsIn('user_id', 'Users'));
$rules->add(
function ($article, $options) {
return !($article->published && empty($article->reviewer));
},
'isReviewed',
[
'errorField' => 'published',
'message' => 'Articles must be reviewed before publishing.'
]
);
return $rules;
}
}
Identifier Quoting Disabled by Default
--------------------------------------
In the past CakePHP has always quoted identifiers. Parsing SQL snippets and
attempting to quote identifiers was both error prone and expensive. If you are
following the conventions CakePHP sets out, the cost of identifier quoting far
outweighs any benefit it provides. Because of this identifier quoting has been
disabled by default in 3.0. You should only need to enable identifier quoting if
you are using column names or table names that contain special characters or are
reserved words. If required, you can enable identifier quoting when configuring
a connection::
// In config/app.php
'Datasources' => [
'default' => [
'className' => 'Cake\Database\Driver\Mysql',
'username' => 'root',
'password' => 'super_secret',
'host' => 'localhost',
'database' => 'cakephp',
'quoteIdentifiers' => true,
]
],
.. note::
Identifiers in ``QueryExpression`` objects will not be quoted, and you will
need to quote them manually or use IdentifierExpression objects.
Updating Behaviors
==================
Like most ORM related features, behaviors have changed in 3.0 as well. They now
attach to ``Table`` instances which are the conceptual descendant of the
``Model`` class in previous versions of CakePHP. There are a few key
differences from behaviors in CakePHP 2.x:
- Behaviors are no longer shared across multiple tables. This means you no
longer have to 'namespace' settings stored in a behavior. Each table using
a behavior will get its own instance.
- The method signatures for mixin methods have changed.
- The method signatures for callback methods have changed.
- The base class for behaviors have changed.
- Behaviors can add finder methods.
New Base Class
--------------
The base class for behaviors has changed. Behaviors should now extend
``Cake\ORM\Behavior``; if a behavior does not extend this class an exception
will be raised. In addition to the base class changing, the constructor for
behaviors has been modified, and the ``startup()`` method has been removed.
Behaviors that need access to the table they are attached to should define
a constructor::
namespace App\Model\Behavior;
use Cake\ORM\Behavior;
use Cake\ORM\Table;
class SluggableBehavior extends Behavior
{
protected $_table;
public function __construct(Table $table, array $config)
{
parent::__construct($table, $config);
$this->_table = $table;
}
}
Mixin Methods Signature Changes
-------------------------------
Behaviors continue to offer the ability to add 'mixin' methods to Table objects,
however the method signature for these methods has changed. In CakePHP 3.0,
behavior mixin methods can expect the **same** arguments provided to the table
'method'. For example::
// Assume table has a slug() method provided by a behavior.
$table->slug($someValue);
The behavior providing the ``slug()`` method will receive only 1 argument, and its
method signature should look like::
public function slug($value)
{
// Code here.
}
Callback Method Signature Changes
---------------------------------
Behavior callbacks have been unified with all other listener methods. Instead of
their previous arguments, they need to expect an event object as their first
argument::
public function beforeFind(Event $event, Query $query, array $options)
{
// Code.
}
See :ref:`table-callbacks` for the signatures of all the callbacks a behavior
can subscribe to.
| 39.485149 | 95 | 0.705575 |
1187428097f6b2b42865f5669f2cc376065049d4 | 621 | rst | reStructuredText | docs/CadVlan.Net.rst | marcusgc/GloboNetworkAPI-WebUI | 1172f14028f9c116d71df7489eda770446b131d2 | [
"Apache-2.0"
] | 17 | 2015-05-19T20:03:45.000Z | 2022-03-24T06:19:47.000Z | docs/CadVlan.Net.rst | marcusgc/GloboNetworkAPI-WebUI | 1172f14028f9c116d71df7489eda770446b131d2 | [
"Apache-2.0"
] | 41 | 2015-01-27T18:36:07.000Z | 2021-06-10T20:34:03.000Z | docs/CadVlan.Net.rst | marcusgc/GloboNetworkAPI-WebUI | 1172f14028f9c116d71df7489eda770446b131d2 | [
"Apache-2.0"
] | 19 | 2016-09-12T07:35:42.000Z | 2022-01-28T23:46:11.000Z | CadVlan.Net package
===================
Submodules
----------
CadVlan.Net.business module
---------------------------
.. automodule:: CadVlan.Net.business
:members:
:undoc-members:
:show-inheritance:
CadVlan.Net.forms module
------------------------
.. automodule:: CadVlan.Net.forms
:members:
:undoc-members:
:show-inheritance:
CadVlan.Net.views module
------------------------
.. automodule:: CadVlan.Net.views
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: CadVlan.Net
:members:
:undoc-members:
:show-inheritance:
| 15.923077 | 36 | 0.549114 |
6b0fa97f9764c4f11b2aedd55ad3c697bd1efd47 | 5,415 | rst | reStructuredText | documentation/source/pydwf_api/pydwf_overview.rst | sidneycadot/pydwf | cd9eba8b45d990f09095bec62b20115f0757baba | [
"MIT"
] | 14 | 2021-05-10T16:19:45.000Z | 2022-03-13T08:30:12.000Z | documentation/source/pydwf_api/pydwf_overview.rst | sidneycadot/pydwf | cd9eba8b45d990f09095bec62b20115f0757baba | [
"MIT"
] | 22 | 2021-05-01T09:51:09.000Z | 2021-11-13T12:35:36.000Z | documentation/source/pydwf_api/pydwf_overview.rst | sidneycadot/pydwf | cd9eba8b45d990f09095bec62b20115f0757baba | [
"MIT"
] | 2 | 2021-05-02T12:13:16.000Z | 2022-03-11T21:15:07.000Z | .. include:: /substitutions.rst
Overview of |pydwf|
===================
All core |pydwf| functionality is made available for import from the top-level |pydwf| package:
* the |DwfLibrary:link| class, which is the starting point for all |pydwf| functionality;
* the |PyDwfError:link| and |DwfLibraryError:link| exceptions;
* the |enumeration types:link| that are used for parameters and result values of |pydwf| methods.
A small number of convenience functions and types have been implemented on top of the core |pydwf| package to simplify often-recurring tasks. These can be found in the |pydwf.utilities:link| package.
A minimal example of |pydwf| usage
----------------------------------
In practice, Python scripts that use |pydwf| will deal almost exclusively with just two classes: |DwfLibrary| and |DwfDevice|.
The following is a minimal example of using |pydwf| that uses both of these classes to produce a 1 kHz tone on the first analog output channel:
.. code-block:: python
"""A minimal, self-contained example of using pydwf."""
from pydwf import DwfLibrary, DwfAnalogOutNode, DwfAnalogOutFunction
from pydwf.utilities import openDwfDevice
CH1 = 0 # Channel numbering starts at zero.
node = DwfAnalogOutNode.Carrier
dwf = DwfLibrary()
with openDwfDevice(dwf) as device:
device.analogOut.reset(CH1)
device.analogOut.nodeEnableSet(CH1, node, True)
device.analogOut.nodeFunctionSet(CH1, node, DwfAnalogOutFunction.Sine)
device.analogOut.nodeFrequencySet(CH1, node, 1000.0)
# Start the channel.
device.analogOut.configure(CH1, True)
input("Producing a 1 kHz tone on CH1. Press Enter to quit ...")
With this example in mind, we can introduce the all-important |DwfLibrary| and |DwfDevice| classes.
The two main |pydwf| classes
----------------------------
As a |pydwf| user, you will interact directly with two classes: |DwfLibrary| and |DwfDevice|. We briefly summarize what they do here; each has its own more comprehensive section later on.
.. rubric:: The |DwfLibrary| class
The |DwfLibrary:link| class represents the loaded Digilent Waveforms shared library itself, and provides methods that are not specific to a particular previously opened device. Examples include querying the library version, enumeration of devices, and opening a specific device for use.
Typically, a script will instantiate a single |DwfLibrary| and use that instance to open a specific Digilent Waveforms device, yielding a |DwfDevice| instance that can be used for the task at hand. This is also what happens in the example shown above.
A |DwfLibrary| instance provides a small number of methods and two attributes that provide access to further functionality:
* |deviceEnum:link| provides device enumeration functionality;
* |deviceControl:link| provides functionality to open a single device and to close all previously opened devices.
In most programs, the |DwfLibrary| class is only used to open a device for use, optionally selecting a specific |device configuration:link|. Since this is such an often-occurring operation, |pydwf| provides the |pydwf.utilities.openDwfDevice:link| convenience function that handles several practical use-cases, such as opening a specific device by its serial number, and/or selecting a device configuration that maximizes the buffer size for a certain instrument.
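For example, a specific device can be selected by its serial number. This is a minimal sketch, assuming the ``serial_number`` keyword parameter of |pydwf.utilities.openDwfDevice:link| (the serial number shown is a placeholder):

.. code-block:: python

    from pydwf import DwfLibrary
    from pydwf.utilities import openDwfDevice

    dwf = DwfLibrary()
    with openDwfDevice(dwf, serial_number="210321A12345") as device:
        ...  # use device.analogIn, device.analogOut, etc. here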
A comprehensive description of the |DwfLibrary| and its two attributes can be found :py:doc:`here </pydwf_api/DwfLibraryToC>`.
.. rubric:: The |DwfDevice| class
The |DwfDevice:link| class represents a specific Digilent Waveforms device, such as an Analog Discovery 2 or a Digital Discovery, connected to the computer.
Instances of |DwfDevice| are obtained either by calling one of the low-level |DeviceControl.open:link| or |DeviceControl.openEx:link| methods, or by calling the higher-level, more powerful |pydwf.utilities.openDwfDevice:link| convenience function.
The |DwfDevice| class provides several miscellaneous methods, but the bulk of its functionality is accessible via one of the eleven attributes listed below:
* |analogIn:link| provides a multi-channel oscilloscope;
* |analogOut:link| provides a multi-channel analog signal generator;
* |analogIO:link| provides voltage, current, and temperature monitoring and control;
* |analogImpedance:link| provides measurement of impedance and other quantities;
* |digitalIn:link| provides a multi-channel digital logic analyzer;
* |digitalOut:link| provides a multi-channel digital pattern generator;
* |digitalIO:link| provides static digital I/O functionality;
* |digitalUart:link| provides UART protocol configuration, send, and receive functionality;
* |digitalCan:link| provides CAN protocol configuration, send, and receive functionality;
* |digitalI2c:link| provides I2C protocol configuration, send, and receive functionality;
* |digitalSpi:link| provides SPI protocol configuration, send, and receive functionality.
After use, a Python script should :py:meth:`~pydwf.core.dwf_device.DwfDevice.close` the |DwfDevice|. Alternatively, the |DwfDevice| can act as a *context manager* for itself, to make sure it is closed whenever the containing *with* statement ends.
A comprehensive description of the |DwfDevice| and its eleven attributes can be found :py:doc:`here </pydwf_api/DwfDeviceToC>`.
| 59.505495 | 463 | 0.76325 |
1e0b224fc397d699119af5be784d7dad6cfd0123 | 1,855 | rst | reStructuredText | kernel/linux-5.4/Documentation/driver-api/backlight/lp855x-driver.rst | josehu07/SplitFS | d7442fa67a17de7057664f91defbfdbf10dd7f4a | [
"Apache-2.0"
] | 44 | 2022-03-16T08:32:31.000Z | 2022-03-31T16:02:35.000Z | docs/linux/driver-api/backlight/lp855x-driver.rst | lukedsmalley/oo-kernel-hacking | 57161ae3e8a780a72b475b3c27fec8deef83b8e1 | [
"MIT"
] | 13 | 2021-07-10T04:36:17.000Z | 2022-03-03T10:50:05.000Z | docs/linux/driver-api/backlight/lp855x-driver.rst | lukedsmalley/oo-kernel-hacking | 57161ae3e8a780a72b475b3c27fec8deef83b8e1 | [
"MIT"
] | 18 | 2022-03-19T04:41:04.000Z | 2022-03-31T03:32:12.000Z | ====================
Kernel driver lp855x
====================
Backlight driver for LP855x ICs
Supported chips:
Texas Instruments LP8550, LP8551, LP8552, LP8553, LP8555, LP8556 and
LP8557
Author: Milo(Woogyom) Kim <milo.kim@ti.com>
Description
-----------
* Brightness control
Brightness can be controlled by the pwm input or the i2c command.
The lp855x driver supports both cases.
* Device attributes
1) bl_ctl_mode
Backlight control mode.
Value: pwm based or register based
2) chip_id
The lp855x chip id.
Value: lp8550/lp8551/lp8552/lp8553/lp8555/lp8556/lp8557
Platform data for lp855x
------------------------
For supporting platform specific data, the lp855x platform data can be used.
* name:
Backlight driver name. If it is not defined, default name is set.
* device_control:
Value of DEVICE CONTROL register.
* initial_brightness:
Initial value of backlight brightness.
* period_ns:
Platform specific PWM period value, in nanoseconds.
Only valid when brightness is in pwm input mode.
* size_program:
Total size of lp855x_rom_data.
* rom_data:
List of new eeprom/eprom registers.
Examples
========
1) lp8552 platform data: i2c register mode with new eeprom data::
#define EEPROM_A5_ADDR 0xA5
#define EEPROM_A5_VAL 0x4f /* EN_VSYNC=0 */
static struct lp855x_rom_data lp8552_eeprom_arr[] = {
{EEPROM_A5_ADDR, EEPROM_A5_VAL},
};
static struct lp855x_platform_data lp8552_pdata = {
.name = "lcd-bl",
.device_control = I2C_CONFIG(LP8552),
.initial_brightness = INITIAL_BRT,
.size_program = ARRAY_SIZE(lp8552_eeprom_arr),
.rom_data = lp8552_eeprom_arr,
};
2) lp8556 platform data: pwm input mode with default rom data::
static struct lp855x_platform_data lp8556_pdata = {
.device_control = PWM_CONFIG(LP8556),
.initial_brightness = INITIAL_BRT,
.period_ns = 1000000,
};
| 22.621951 | 76 | 0.719137 |
f756b05603f67ff8da4d107de629673082500e27 | 723 | rst | reStructuredText | doc/Changelog/7.4/Feature-67603-IntroduceTcaDescriptionColumn.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 1 | 2019-10-04T23:58:04.000Z | 2019-10-04T23:58:04.000Z | doc/Changelog/7.4/Feature-67603-IntroduceTcaDescriptionColumn.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 1 | 2021-12-17T10:58:59.000Z | 2021-12-17T10:58:59.000Z | doc/Changelog/7.4/Feature-67603-IntroduceTcaDescriptionColumn.rst | DanielSiepmann/typo3scan | 630efc8ea9c7bd86c4b9192c91b795fff5d3b8dc | [
"MIT"
] | 4 | 2020-10-06T08:18:55.000Z | 2022-03-17T11:14:09.000Z |
.. include:: ../../Includes.txt
==========================================================
Feature: #67603 - Introduce TCA > ctrl > descriptionColumn
==========================================================
See :issue:`67603`
Description
===========
To annotate database table column fields with an internal description for editors and admins, a new TCA setting
has been introduced. The setting is called `['TCA'][$tableName]['ctrl']['descriptionColumn']` and holds the column name.
This description should only be displayed in the backend to guide editors and admins.
Usage of descriptionColumn is added under different issues.
Impact
======
None, since only the annotation itself is added; it has no functional impact.
.. index:: TCA, Backend
| 26.777778 | 114 | 0.618257 |
5ba1ab5658293319f12ce4fc43821717995230c2 | 158 | rst | reStructuredText | aspnet/security/authentication/index.rst | Erikre/Docs | 79afa0988a78e5f3e9da91c42b233f17271d9990 | [
"Apache-2.0"
] | 2 | 2021-05-10T11:57:55.000Z | 2022-03-15T12:50:36.000Z | aspnet/security/authentication/index.rst | Erikre/Docs | 79afa0988a78e5f3e9da91c42b233f17271d9990 | [
"Apache-2.0"
] | null | null | null | aspnet/security/authentication/index.rst | Erikre/Docs | 79afa0988a78e5f3e9da91c42b233f17271d9990 | [
"Apache-2.0"
] | 1 | 2015-04-29T05:39:49.000Z | 2015-04-29T05:39:49.000Z | Authentication
--------------
.. toctree::
:titlesonly:
introduction-to-aspnet-identity
sociallogins
accconfirm
2fa
oauth2
cookie
| 12.153846 | 34 | 0.607595 |
9b024bbfb510119cfb277b2072f31b0332571a62 | 547 | rst | reStructuredText | docs/source/index.rst | modestmerit/investpy | bef9e1dc6fee028aaecd9a4bebf7e037fb109b61 | [
"MIT"
] | 1 | 2021-03-29T23:16:38.000Z | 2021-03-29T23:16:38.000Z | docs/source/index.rst | modestmerit/investpy | bef9e1dc6fee028aaecd9a4bebf7e037fb109b61 | [
"MIT"
] | null | null | null | docs/source/index.rst | modestmerit/investpy | bef9e1dc6fee028aaecd9a4bebf7e037fb109b61 | [
"MIT"
] | 1 | 2021-02-09T03:00:12.000Z | 2021-02-09T03:00:12.000Z | Welcome to investpy's documentation!
====================================
.. image:: https://raw.githubusercontent.com/alvarobartt/investpy/refactor/docs/_static/logo.png
:align: center
.. toctree::
:maxdepth: 3
:caption: Contents:
_info/introduction.rst
_info/installation.rst
_info/usage.rst
_info/models.rst
_info/stocks.rst
_info/funds.rst
_info/api.rst
_info/information.md
_info/faq.md
_info/disclaimer.md
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search` | 19.535714 | 96 | 0.636197 |
a040ee5dcd0b98a0091c283b6a5276e3c6c87b29 | 92 | rst | reStructuredText | docs/jax.lax.rst | j-towns/jax | 49f3f991d4faae22fcd9d8248f3d36575b5004f6 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2019-02-11T16:44:26.000Z | 2019-12-21T06:17:36.000Z | docs/jax.lax.rst | j-towns/jax | 49f3f991d4faae22fcd9d8248f3d36575b5004f6 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | docs/jax.lax.rst | j-towns/jax | 49f3f991d4faae22fcd9d8248f3d36575b5004f6 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-10-13T13:25:49.000Z | 2020-10-13T13:25:49.000Z | jax.lax package
================
.. automodule:: jax.lax
:members:
:undoc-members:
| 13.142857 | 23 | 0.521739 |
4e5b1a9571fb50521ad372e96fac788b236d03da | 11,124 | rst | reStructuredText | source_rst/lectures/Means/notebook.rst | tokyoquantopian/quantopian-doc-ja | 3861745cec8db79daf510f7e86b5433576d7c0c4 | [
"CC-BY-4.0"
] | 9 | 2020-04-04T07:31:21.000Z | 2020-06-13T05:07:46.000Z | source_rst/lectures/Means/notebook.rst | tokyoquantopian/quantopian-doc-ja | 3861745cec8db79daf510f7e86b5433576d7c0c4 | [
"CC-BY-4.0"
] | 80 | 2020-04-04T07:29:50.000Z | 2020-10-31T05:04:38.000Z | source_rst/lectures/Means/notebook.rst | tokyoquantopian/quantopian-doc-ja | 3861745cec8db79daf510f7e86b5433576d7c0c4 | [
"CC-BY-4.0"
] | 3 | 2020-06-21T00:44:48.000Z | 2020-08-09T17:07:28.000Z | Measures of Central Tendency
============================
By Evgenia “Jenny” Nitishinskaya, Maxwell Margenot, and Delaney
Mackenzie.
Part of the Quantopian Lecture Series:
- `www.quantopian.com/lectures <https://www.quantopian.com/lectures>`__
- `github.com/quantopian/research_public <https://github.com/quantopian/research_public>`__
--------------
In this notebook we will discuss ways to summarize a set of data using a
single number. The goal is to capture information about the distribution
of data.
Arithmetic mean
===============
The arithmetic mean is used very frequently to summarize numerical data,
and is usually the one assumed to be meant by the word “average.” It is
defined as the sum of the observations divided by the number of
observations:
.. math:: \mu = \frac{\sum_{i=1}^N X_i}{N}
where :math:`X_1, X_2, \ldots , X_N` are our observations.
.. code:: ipython3
# Two useful statistical libraries
import scipy.stats as stats
import numpy as np
# We'll use these two data sets as examples
x1 = [1, 2, 2, 3, 4, 5, 5, 7]
x2 = x1 + [100]
    print('Mean of x1:', sum(x1), '/', len(x1), '=', np.mean(x1))
    print('Mean of x2:', sum(x2), '/', len(x2), '=', np.mean(x2))
.. parsed-literal::
Mean of x1: 29 / 8 = 3.625
Mean of x2: 129 / 9 = 14.3333333333
We can also define a weighted arithmetic mean, which is useful for
explicitly specifying the number of times each observation should be
counted. For instance, in computing the average value of a portfolio, it
is more convenient to say that 70% of your stocks are of type X rather
than making a list of every share you hold.
The weighted arithmetic mean is defined as
.. math:: \sum_{i=1}^n w_i X_i
where :math:`\sum_{i=1}^n w_i = 1`. In the usual arithmetic mean, we
have :math:`w_i = 1/n` for all :math:`i`.
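
NumPy's ``np.average`` computes a weighted arithmetic mean directly. As a
quick sketch (the 70/30 split below is just an illustrative assumption):

.. code:: ipython3

    # A hypothetical two-asset portfolio: 70% of the value in an asset
    # worth 120, 30% in an asset worth 80. np.average applies the weights.
    values = [120, 80]
    weights = [0.7, 0.3]
    print('Weighted mean:', np.average(values, weights=weights))  # 0.7*120 + 0.3*80 = 108
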
Median
======
The median of a set of data is the number which appears in the middle of
the list when it is sorted in increasing or decreasing order. When we
have an odd number :math:`n` of data points, this is simply the value in
position :math:`(n+1)/2`. When we have an even number of data points,
the list splits in half and there is no item in the middle; so we define
the median as the average of the values in positions :math:`n/2` and
:math:`(n+2)/2`.
The median is less affected by extreme values in the data than the
arithmetic mean. It tells us the value that splits the data set in half,
but not how much smaller or larger the other values are.
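
To make the definition concrete, here is a minimal sketch that sorts the
data and applies the odd/even rule directly; it should agree with
``np.median`` below:

.. code:: ipython3

    def median(l):
        # Sort the data and pick out the middle element(s)
        s = sorted(l)
        n = len(s)
        if n % 2 == 1:
            return s[n // 2]                      # position (n+1)/2, 0-indexed
        return (s[n // 2 - 1] + s[n // 2]) / 2.0  # average of positions n/2 and (n+2)/2
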
.. code:: ipython3
    print('Median of x1:', np.median(x1))
    print('Median of x2:', np.median(x2))
.. parsed-literal::
Median of x1: 3.5
Median of x2: 4.0
Mode
====
The mode is the most frequently occurring value in a data set. It can be
applied to non-numerical data, unlike the mean and the median. One
situation in which it is useful is for data whose possible values are
independent. For example, in the outcomes of a weighted die, coming up 6
often does not mean it is likely to come up 5; so knowing that the data
set has a mode of 6 is more useful than knowing it has a mean of 4.5.
.. code:: ipython3
# Scipy has a built-in mode function, but it will return exactly one value
# even if two values occur the same number of times, or if no value appears more than once
    print('One mode of x1:', stats.mode(x1)[0][0])
# So we will write our own
def mode(l):
# Count the number of times each element appears in the list
counts = {}
for e in l:
if e in counts:
counts[e] += 1
else:
counts[e] = 1
# Return the elements that appear the most times
maxcount = 0
        modes = set()  # a set of values (note: {} would be an empty dict)
for (key, value) in counts.items():
if value > maxcount:
maxcount = value
modes = {key}
elif value == maxcount:
modes.add(key)
if maxcount > 1 or len(l) == 1:
return list(modes)
return 'No mode'
    print('All of the modes of x1:', mode(x1))
.. parsed-literal::
One mode of x1: 2
All of the modes of x1: [2, 5]
For data that can take on many different values, such as returns data,
there may not be any values that appear more than once. In this case we
can bin values, like we do when constructing a histogram, and then find
the mode of the data set where each value is replaced with the name of
its bin. That is, we find which bin elements fall into most often.
.. code:: ipython3
# Get return data for an asset and compute the mode of the data set
start = '2014-01-01'
end = '2015-01-01'
pricing = get_pricing('SPY', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:]
    print('Mode of returns:', mode(returns))
# Since all of the returns are distinct, we use a frequency distribution to get an alternative mode.
# np.histogram returns the frequency distribution over the bins as well as the endpoints of the bins
hist, bins = np.histogram(returns, 20) # Break data up into 20 bins
maxfreq = max(hist)
# Find all of the bins that are hit with frequency maxfreq, then print the intervals corresponding to them
    print('Mode of bins:', [(bins[i], bins[i+1]) for i, j in enumerate(hist) if j == maxfreq])
.. parsed-literal::
Mode of returns: No mode
Mode of bins: [(-0.001330629195540084, 0.00097352774911502182)]
Geometric mean
==============
While the arithmetic mean averages using addition, the geometric mean
uses multiplication:
.. math:: G = \sqrt[n]{X_1X_2\ldots X_n}
for observations :math:`X_i \geq 0`. We can also rewrite it as an
arithmetic mean using logarithms:
.. math:: \ln G = \frac{\sum_{i=1}^n \ln X_i}{n}
The geometric mean is always less than or equal to the arithmetic mean
(when working with nonnegative observations), with equality only when
all of the observations are the same.
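
The logarithm identity above also gives a quick way to sanity-check
``stats.gmean``: exponentiating the arithmetic mean of the logs should
reproduce the geometric mean.

.. code:: ipython3

    # exp of the mean of the logs should equal the geometric mean
    print('Via logs: ', np.exp(np.mean(np.log(x1))))
    print('Via gmean:', stats.gmean(x1))
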
.. code:: ipython3
# Use scipy's gmean function to compute the geometric mean
    print('Geometric mean of x1:', stats.gmean(x1))
    print('Geometric mean of x2:', stats.gmean(x2))
.. parsed-literal::
Geometric mean of x1: 3.09410402498
Geometric mean of x2: 4.55253458762
What if we want to compute the geometric mean when we have negative
observations? This problem is easy to solve in the case of asset
returns, where our values are always at least :math:`-1`. We can add 1
to a return :math:`R_t` to get :math:`1 + R_t`, which is the ratio of
the price of the asset for two consecutive periods (as opposed to the
percent change between the prices, :math:`R_t`). This quantity will
always be nonnegative. So we can compute the geometric mean return,
.. math:: R_G = \sqrt[T]{(1 + R_1)\ldots (1 + R_T)} - 1
.. code:: ipython3
# Add 1 to every value in the returns array and then compute R_G
ratios = returns + np.ones(len(returns))
R_G = stats.gmean(ratios) - 1
    print('Geometric mean of returns:', R_G)
.. parsed-literal::
Geometric mean of returns: 0.000540898532267
The geometric mean is defined so that if the rate of return over the
whole time period were constant and equal to :math:`R_G`, the final
price of the security would be the same as in the case of returns
:math:`R_1, \ldots, R_T`.
.. code:: ipython3
T = len(returns)
init_price = pricing[0]
final_price = pricing[T]
    print('Initial price:', init_price)
    print('Final price:', final_price)
    print('Final price as computed with R_G:', init_price*(1 + R_G)**T)
.. parsed-literal::
Initial price: 179.444
Final price: 205.53
Final price as computed with R_G: 205.53
Harmonic mean
=============
The harmonic mean is less commonly used than the other types of means.
It is defined as
.. math:: H = \frac{n}{\sum_{i=1}^n \frac{1}{X_i}}
As with the geometric mean, we can rewrite the harmonic mean to look
like an arithmetic mean. The reciprocal of the harmonic mean is the
arithmetic mean of the reciprocals of the observations:
.. math:: \frac{1}{H} = \frac{\sum_{i=1}^n \frac{1}{X_i}}{n}
The harmonic mean for nonnegative numbers :math:`X_i` is always at most
the geometric mean (which is at most the arithmetic mean), and they are
equal only when all of the observations are equal.
.. code:: ipython3
    print('Harmonic mean of x1:', stats.hmean(x1))
    print('Harmonic mean of x2:', stats.hmean(x2))
.. parsed-literal::
Harmonic mean of x1: 2.55902513328
Harmonic mean of x2: 2.86972365624
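
As a check on the reciprocal identity above, the reciprocal of the
arithmetic mean of the reciprocals should match ``stats.hmean``:

.. code:: ipython3

    print('Via reciprocals:', 1 / np.mean(1.0 / np.array(x1)))
    print('Via hmean:      ', stats.hmean(x1))
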
The harmonic mean can be used when the data can be naturally phrased in
terms of ratios. For instance, in the dollar-cost averaging strategy, a
fixed amount is spent on shares of a stock at regular intervals. The
higher the price of the stock, then, the fewer shares an investor
following this strategy buys. The average price they pay per share is
then the harmonic mean of the prices.
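
To illustrate, suppose we spend a fixed $100 at each of a few
hypothetical prices (the prices below are made up). The average price
paid per share works out to the harmonic mean of the prices:

.. code:: ipython3

    # Dollar-cost averaging: spend $100 at each (hypothetical) price
    prices = np.array([20.0, 25.0, 40.0])
    shares = 100.0 / prices  # shares bought at each price
    avg_price_paid = (100.0 * len(prices)) / shares.sum()  # total spent / total shares
    print('Average price paid:', avg_price_paid)
    print('Harmonic mean:     ', stats.hmean(prices))
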
Point Estimates Can Be Deceiving
================================
Means by nature hide a lot of information, as they collapse entire
distributions into one number. As a result, 'point estimates', or
metrics that use one number, can often disguise large problems in your
data. You should be careful to ensure that you are not losing key
information by summarizing your data, and you should rarely, if ever,
use a mean without also referring to a measure of spread.
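
As a simple illustration, the two (made-up) data sets below have
identical means but very different spreads, which only a measure like
standard deviation reveals:

.. code:: ipython3

    a = np.array([4.9, 5.0, 5.1])
    b = np.array([0.0, 5.0, 10.0])
    print('Means:              ', np.mean(a), np.mean(b))  # identical
    print('Standard deviations:', np.std(a), np.std(b))    # very different
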
Underlying Distribution Can be Wrong
------------------------------------
Even when you are using the right metrics for mean and spread, they can
make no sense if your underlying distribution is not what you think it
is. For instance, using standard deviation to measure the frequency of an
event will usually assume normality. Try not to assume distributions
unless you have to, in which case you should rigorously check that the
data do fit the distribution you are assuming.
References
----------
- “Quantitative Investment Analysis”, by DeFusco, McLeavey, Pinto, and
Runkle
*This presentation is for informational purposes only and does not
constitute an offer to sell, a solicitation to buy, or a recommendation
for any security; nor does it constitute an offer to provide investment
advisory or other services by Quantopian, Inc. (“Quantopian”). Nothing
contained herein constitutes investment advice or offers any opinion
with respect to the suitability of any security, and any views expressed
herein should not be taken as advice to buy, sell, or hold any security
or as an endorsement of any security or company. In preparing the
information contained herein, Quantopian, Inc. has not taken into
account the investment needs, objectives, and financial circumstances of
any particular investor. Any views expressed and data illustrated herein
were prepared based upon information, believed to be reliable, available
to Quantopian, Inc. at the time of publication. Quantopian makes no
guarantees as to their accuracy or completeness. All information is
subject to change and may quickly become unreliable for various reasons,
including changes in market conditions or economic circumstances.*
| 34.546584 | 110 | 0.703254 |
839ec956592b6f4058a9e0af6fa8a1b011aac7bc | 788 | rst | reStructuredText | docs/source/download.rst | qinlene/pyscal | 3542ac1a181db577840cb5ee99ebea5991ecc6dc | [
"BSD-3-Clause"
] | null | null | null | docs/source/download.rst | qinlene/pyscal | 3542ac1a181db577840cb5ee99ebea5991ecc6dc | [
"BSD-3-Clause"
] | null | null | null | docs/source/download.rst | qinlene/pyscal | 3542ac1a181db577840cb5ee99ebea5991ecc6dc | [
"BSD-3-Clause"
] | null | null | null | Downloads
=========
The source code is available as the latest stable version or as tagged
release versions. We recommend using the latest stable version for all
updated features.
Source code
-----------
* `latest stable version of pyscal (zip) <https://github.com/srmnitc/pyscal/archive/master.zip>`_
* `release version (zip) <https://doi.org/10.5281/zenodo.3522376>`_
Documentation
-------------
* `PDF version <https://readthedocs.org/projects/pyscal/downloads/pdf/latest/>`_
* `Epub version <https://readthedocs.org/projects/pyscal/downloads/epub/latest/>`_
Publication
-----------
* `Publication <https://joss.theoj.org/papers/10.21105/joss.01824>`_
* `citation <https://rubde-my.sharepoint.com/:u:/g/personal/sarath_menon_rub_de/Ecfuz7X8__ZJiz73k-dvvpEBjjMU6VJvg0v-hDtsFd3Kkw?download=1>`_
| 29.185185 | 141 | 0.728426 |
eb1c0d564098ceecb55bdf2c481ed1e596016de5 | 336 | rst | reStructuredText | README.rst | optimizely/appengine.py | 7284329fe2b006e626020ddd20fd7797cfbd87f0 | [
"MIT"
] | null | null | null | README.rst | optimizely/appengine.py | 7284329fe2b006e626020ddd20fd7797cfbd87f0 | [
"MIT"
] | null | null | null | README.rst | optimizely/appengine.py | 7284329fe2b006e626020ddd20fd7797cfbd87f0 | [
"MIT"
] | 1 | 2021-02-14T11:59:56.000Z | 2021-02-14T11:59:56.000Z | appengine.py
============
A command-line tool to install the Google App Engine SDK.
Usage
-----
::
appengine.py [sdk] [--force]
Where sdk can be one of
* a version number in x.y.z form
* an URL pointing to a zipped SDK to download
* a local path to a zipped SDK
Option: --force
Overwrite existing SDK installation and tools.
| 16 | 57 | 0.684524 |
117530ddc32d900ea1d1661386b625e1d48ce3a1 | 1,417 | rst | reStructuredText | docs/commands/datatypes.rst | simonbray/parsec | c0e123cbf7cb1289ec722357a6262f716575e4d9 | [
"Apache-2.0"
] | 8 | 2015-03-27T17:09:15.000Z | 2021-07-13T15:33:02.000Z | docs/commands/datatypes.rst | simonbray/parsec | c0e123cbf7cb1289ec722357a6262f716575e4d9 | [
"Apache-2.0"
] | 30 | 2015-02-27T21:21:47.000Z | 2021-08-31T14:19:55.000Z | docs/commands/datatypes.rst | simonbray/parsec | c0e123cbf7cb1289ec722357a6262f716575e4d9 | [
"Apache-2.0"
] | 12 | 2017-06-01T03:49:23.000Z | 2021-07-13T15:33:06.000Z | datatypes
=========
This section is auto-generated from the help text for the parsec command
``datatypes``.
``get_datatypes`` command
-------------------------
**Usage**::
parsec datatypes get_datatypes [OPTIONS]
**Help**
Get the list of all installed datatypes.
**Output**
A list of datatype names.
For example::
['snpmatrix',
'snptest',
'tabular',
'taxonomy',
'twobit',
'txt',
'vcf',
'wig',
'xgmml',
'xml']
**Options**::
--extension_only Return only the extension rather than the datatype name
--upload_only Whether to return only datatypes which can be uploaded
-h, --help Show this message and exit.
``get_sniffers`` command
------------------------
**Usage**::
parsec datatypes get_sniffers [OPTIONS]
**Help**
Get the list of all installed sniffers.
**Output**
A list of sniffer names.
For example::
['galaxy.datatypes.tabular:Vcf',
'galaxy.datatypes.binary:TwoBit',
'galaxy.datatypes.binary:Bam',
'galaxy.datatypes.binary:Sff',
'galaxy.datatypes.xml:Phyloxml',
'galaxy.datatypes.xml:GenericXml',
'galaxy.datatypes.sequence:Maf',
'galaxy.datatypes.sequence:Lav',
'galaxy.datatypes.sequence:csFasta']
**Options**::
-h, --help Show this message and exit.
| 18.166667 | 79 | 0.575865 |
a5452ba9d099f08aa93880e762f6d3f8fca309ce | 727 | rst | reStructuredText | docs/source/ingest.rst | spacetelescope/jeta | 264dcd2b1bbe5b169b51ba833cf947968145644d | [
"BSD-3-Clause"
] | 2 | 2019-06-25T13:08:33.000Z | 2021-02-28T04:43:12.000Z | docs/source/ingest.rst | spacetelescope/jeta | 264dcd2b1bbe5b169b51ba833cf947968145644d | [
"BSD-3-Clause"
] | 38 | 2020-02-21T19:08:30.000Z | 2022-03-11T18:23:06.000Z | docs/source/ingest.rst | spacetelescope/jeta | 264dcd2b1bbe5b169b51ba833cf947968145644d | [
"BSD-3-Clause"
] | 2 | 2021-02-28T04:15:45.000Z | 2022-03-02T18:29:56.000Z | **************
Ingest Process
**************
.. py:currentmodule:: jeta.archive.ingest
=============
Classes Index
=============
.. autoclass::
:show-inheritance:
:members:
----------------------------------------
Manually Starting an Ingest from the CLI
----------------------------------------
.. code-block:: bash
$ python run.py --create # use the create option if its the first ingest.
----------------------
Supported Ingest Files
----------------------
* CSV
* Single MSID FOF (CSV) - Comma-delimited tabular data for a single msid
* Flat HDF5
* Grouped HDF5
---------------------------
Telemetry Archive Structure
---------------------------
TBD
---------------
Ingest Schedule
---------------
TBD | 17.309524 | 77 | 0.455296 |
876b6874750cd2044829528ee7092f0b740ab141 | 1,118 | rst | reStructuredText | README.rst | ProteinQure/visualize_HW | 499a3d7eb0823541824adb99371d03512fa86ba4 | [
"MIT"
] | 5 | 2019-03-20T16:16:20.000Z | 2020-06-11T10:01:19.000Z | README.rst | ProteinQure/visualize_HW | 499a3d7eb0823541824adb99371d03512fa86ba4 | [
"MIT"
] | null | null | null | README.rst | ProteinQure/visualize_HW | 499a3d7eb0823541824adb99371d03512fa86ba4 | [
"MIT"
] | 3 | 2019-05-01T01:44:51.000Z | 2021-10-30T17:45:08.000Z | ======================================================
Simple visualization of a sequence in a helical wheel
======================================================
:Authors: Katrin Reichel
:Company: `ProteinQure Inc. <https://www.proteinqure.com>`_
:Year: 2019
:Licence: MIT License
:Copyright: © 2019 Katrin Reichel
Description
===========
The provided python script generates a helical wheel based on an input sequence. For a random sequence it looks like this:
.. image:: /img/hw_example.png
:height: 100px
Dependencies and Software Requirements
======================================
* Python (>=3.6)
* Python packages: numpy, matplotlib
Usage
=====
To generate a helical wheel with an input sequence, simply do the following::
python hw_visualization.py \
-s ACDEFGHIKLMNPQRSTVWYACDE \
-o hw_output.png
Help
====
Please, if you encounter any issues with the tool, open an issue here on the github repository https://github.com/proteinqure/visualize_hw/issues.
If you have any questions or suggestions, please contact team@proteinqure.com.
| 26.619048 | 146 | 0.618068 |
582d3427de3f819558164cca07438c879a4aa555 | 4,956 | rst | reStructuredText | kernel/linux-5.4/Documentation/admin-guide/cgroup-v1/freezer-subsystem.rst | josehu07/SplitFS | d7442fa67a17de7057664f91defbfdbf10dd7f4a | [
"Apache-2.0"
] | 44 | 2022-03-16T08:32:31.000Z | 2022-03-31T16:02:35.000Z | docs/linux/admin-guide/cgroup-v1/freezer-subsystem.rst | lukedsmalley/oo-kernel-hacking | 57161ae3e8a780a72b475b3c27fec8deef83b8e1 | [
"MIT"
] | 13 | 2021-07-10T04:36:17.000Z | 2022-03-03T10:50:05.000Z | docs/linux/admin-guide/cgroup-v1/freezer-subsystem.rst | lukedsmalley/oo-kernel-hacking | 57161ae3e8a780a72b475b3c27fec8deef83b8e1 | [
"MIT"
] | 18 | 2022-03-19T04:41:04.000Z | 2022-03-31T03:32:12.000Z | ==============
Cgroup Freezer
==============
The cgroup freezer is useful to batch job management systems which start
and stop sets of tasks in order to schedule the resources of a machine
according to the desires of a system administrator. This sort of program
is often used on HPC clusters to schedule access to the cluster as a
whole. The cgroup freezer uses cgroups to describe the set of tasks to
be started/stopped by the batch job management system. It also provides
a means to start and stop the tasks composing the job.
The cgroup freezer will also be useful for checkpointing running groups
of tasks. The freezer allows the checkpoint code to obtain a consistent
image of the tasks by attempting to force the tasks in a cgroup into a
quiescent state. Once the tasks are quiescent another task can
walk /proc or invoke a kernel interface to gather information about the
quiesced tasks. Checkpointed tasks can be restarted later should a
recoverable error occur. This also allows the checkpointed tasks to be
migrated between nodes in a cluster by copying the gathered information
to another node and restarting the tasks there.
Sequences of SIGSTOP and SIGCONT are not always sufficient for stopping
and resuming tasks in userspace. Both of these signals are observable
from within the tasks we wish to freeze. While SIGSTOP cannot be caught,
blocked, or ignored it can be seen by waiting or ptracing parent tasks.
SIGCONT is especially unsuitable since it can be caught by the task. Any
programs designed to watch for SIGSTOP and SIGCONT could be broken by
attempting to use SIGSTOP and SIGCONT to stop and resume tasks. We can
demonstrate this problem using nested bash shells::
$ echo $$
16644
$ bash
$ echo $$
16690
From a second, unrelated bash shell:
$ kill -SIGSTOP 16690
$ kill -SIGCONT 16690
<at this point 16690 exits and causes 16644 to exit too>
This happens because bash can observe both signals and choose how it
responds to them.
Another example of a program which catches and responds to these
signals is gdb. In fact any program designed to use ptrace is likely to
have a problem with this method of stopping and resuming tasks.
In contrast, the cgroup freezer uses the kernel freezer code to
prevent the freeze/unfreeze cycle from becoming visible to the tasks
being frozen. This allows the bash example above and gdb to run as
expected.
The cgroup freezer is hierarchical. Freezing a cgroup freezes all
tasks belonging to the cgroup and all its descendant cgroups. Each
cgroup has its own state (self-state) and the state inherited from the
parent (parent-state). Iff both states are THAWED, the cgroup is
THAWED.
The following cgroupfs files are created by cgroup freezer.
* freezer.state: Read-write.
When read, returns the effective state of the cgroup - "THAWED",
"FREEZING" or "FROZEN". This is the combined self and parent-states.
If any is freezing, the cgroup is freezing (FREEZING or FROZEN).
FREEZING cgroup transitions into FROZEN state when all tasks
belonging to the cgroup and its descendants become frozen. Note that
a cgroup reverts to FREEZING from FROZEN after a new task is added
to the cgroup or one of its descendant cgroups until the new task is
frozen.
When written, sets the self-state of the cgroup. Two values are
allowed - "FROZEN" and "THAWED". If FROZEN is written, the cgroup,
if not already freezing, enters FREEZING state along with all its
descendant cgroups.
If THAWED is written, the self-state of the cgroup is changed to
THAWED. Note that the effective state may not change to THAWED if
the parent-state is still freezing. If a cgroup's effective state
becomes THAWED, all its descendants which are freezing because of
the cgroup also leave the freezing state.
* freezer.self_freezing: Read only.
Shows the self-state. 0 if the self-state is THAWED; otherwise, 1.
This value is 1 iff the last write to freezer.state was "FROZEN".
* freezer.parent_freezing: Read only.
Shows the parent-state. 0 if none of the cgroup's ancestors is
frozen; otherwise, 1.
The root cgroup is non-freezable and the above interface files don't
exist.
* Examples of usage::
# mkdir /sys/fs/cgroup/freezer
# mount -t cgroup -ofreezer freezer /sys/fs/cgroup/freezer
# mkdir /sys/fs/cgroup/freezer/0
# echo $some_pid > /sys/fs/cgroup/freezer/0/tasks
to get status of the freezer subsystem::
# cat /sys/fs/cgroup/freezer/0/freezer.state
THAWED
to freeze all tasks in the container::
# echo FROZEN > /sys/fs/cgroup/freezer/0/freezer.state
# cat /sys/fs/cgroup/freezer/0/freezer.state
FREEZING
# cat /sys/fs/cgroup/freezer/0/freezer.state
FROZEN
to unfreeze all tasks in the container::
# echo THAWED > /sys/fs/cgroup/freezer/0/freezer.state
# cat /sys/fs/cgroup/freezer/0/freezer.state
THAWED
This is the basic mechanism which should do the right thing for user
space tasks in a simple scenario.
| 38.71875 | 79 | 0.770581 |
794b1bbe3c70c1f64f9904fad06c801cca56945d | 124 | rst | reStructuredText | docs/api/mlsnippet.utils.rst | haowen-xu/mlsnippet | 94f0b419340e763747a008b8c93feca06140adc5 | [
"MIT"
] | 1 | 2018-05-25T07:57:13.000Z | 2018-05-25T07:57:13.000Z | docs/api/mlsnippet.utils.rst | haowen-xu/mlsnippet | 94f0b419340e763747a008b8c93feca06140adc5 | [
"MIT"
] | 2 | 2018-06-02T04:03:36.000Z | 2018-07-18T03:57:46.000Z | docs/api/mlsnippet.utils.rst | haowen-xu/mltoolkit | 94f0b419340e763747a008b8c93feca06140adc5 | [
"MIT"
] | null | null | null | mlsnippet\.utils
================
.. automodule:: mlsnippet.utils
:members:
:undoc-members:
:show-inheritance:
| 15.5 | 31 | 0.580645 |
419d1192f5207285bd19093c477ba995a8237025 | 140 | rst | reStructuredText | docs/Usecase1/v12.1/Service_removal.rst | vbojko/F5_Service_POP | 0f3aab1d5b983a620905c79d07ffee9a1de42a48 | [
"MIT"
] | null | null | null | docs/Usecase1/v12.1/Service_removal.rst | vbojko/F5_Service_POP | 0f3aab1d5b983a620905c79d07ffee9a1de42a48 | [
"MIT"
] | null | null | null | docs/Usecase1/v12.1/Service_removal.rst | vbojko/F5_Service_POP | 0f3aab1d5b983a620905c79d07ffee9a1de42a48 | [
"MIT"
] | null | null | null | Removal of existing services
============================
This describes how existing services can be deleted from the infrastructure.
tbd
| 20 | 75 | 0.664286 |
4b635256535d99de29ab8834581a6f73f4071df4 | 2,946 | rst | reStructuredText | docs/administrator_guide/backup_and_restore.rst | nilsholle/sampledb | 90d7487a3990995ca2ec5dfd8b59d4739d6a9a87 | [
"MIT"
] | 5 | 2020-02-13T15:25:37.000Z | 2021-05-06T21:05:14.000Z | docs/administrator_guide/backup_and_restore.rst | nilsholle/sampledb | 90d7487a3990995ca2ec5dfd8b59d4739d6a9a87 | [
"MIT"
] | 28 | 2019-11-12T14:14:08.000Z | 2022-03-11T16:29:27.000Z | docs/administrator_guide/backup_and_restore.rst | nilsholle/sampledb | 90d7487a3990995ca2ec5dfd8b59d4739d6a9a87 | [
"MIT"
] | 8 | 2019-12-10T15:46:02.000Z | 2021-11-02T12:24:52.000Z | .. _backup_and_restore:
Backup and Restore
==================
SampleDB stores all its information in:
- the PostgreSQL database, and
- the file directory
So to create a backup of SampleDB, you will need to create backups of these.
It is recommended that you stop the SampleDB container before creating backups and start it again afterwards.
While you yourself will need to decide when and how exactly you want to create backups, the following sections show examples of how backups of these two sources of information can be created.
Please follow general system administration best practices.
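
For example, a combined backup might look like the following sketch; the
container name ``sampledb`` and the placeholders are assumptions based on
the Getting Started setup, so adjust them to your installation:

.. code-block:: bash

    # stop SampleDB so no writes happen during the backup
    docker stop sampledb
    # dump the database and copy the uploaded files
    docker exec sampledb-postgres pg_dump -U postgres postgres > backup.sql
    rsync -a files <hostname>:<backup_directory>
    # start SampleDB again
    docker start sampledb
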
The PostgreSQL database
-----------------------
One way of creating a backup of a PostgreSQL database is to create an SQL dump using the `pg_dump` tool:
.. code-block:: bash
docker exec sampledb-postgres pg_dump -U postgres postgres > backup.sql
The resulting ``backup.sql`` file can then be copied to another system.
To restore the PostgreSQL database from such an SQL dump, you should first remove the existing database:
.. code-block:: bash
docker stop sampledb-postgres
docker rm sampledb-postgres
rm -rf pgdata
You can then recreate the database container and restore the backup using the ``psql`` tool:
.. code-block:: bash
docker run \
-d \
-e POSTGRES_PASSWORD=password \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v `pwd`/pgdata:/var/lib/postgresql/data/pgdata:rw \
--restart=always \
--name sampledb-postgres \
postgres:12
docker exec -i sampledb-postgres psql -U postgres postgres < backup.sql
If you have set different options for the database container before, e.g. setting it in a specific network and giving it a fixed IP, you should also set these options here.
For more information on backing up a PostgreSQL database and restoring a backup, see the `PostgreSQL documentation on Backup and Restore <https://www.postgresql.org/docs/current/backup.html>`_
The file directory
------------------
If you have followed the Getting Started guide, you will have set the ``SAMPLEDB_FILE_STORAGE_PATH`` variable and mounted a local directory ``files`` into the SampleDB container so that SampleDB can store uploaded files in this directory.
Files are uploaded there over time and should not change, so this directory is especially suited for incremental backups.
One way to copy it to another system is to use the ``rsync`` tool:
.. code-block:: bash
rsync -a files <hostname>:<backup_directory>
Or, if you wish to create a backup locally, you can simply use ``cp``:
.. code-block:: bash
cp -an files <backup_directory>
Here the ``-n`` option prevents copying files which already exist in the backup directory.
To restore the backup, simply copy the backup to your local file directory.
.. note::
By default, new files will be stored in the database instead of in the file directory, so the file directory may be empty. | 36.37037 | 238 | 0.73761 |
a72f74ddf2427e7cd65c9d9d4f53219dcfd64574 | 157 | rst | reStructuredText | old-reference-manuals/portlets/appendix/index.rst | Acidburn0zzz/documentation | 360a6741c62e24c0e487436438d0be97ce9ee2a1 | [
"CC-BY-4.0"
] | null | null | null | old-reference-manuals/portlets/appendix/index.rst | Acidburn0zzz/documentation | 360a6741c62e24c0e487436438d0be97ce9ee2a1 | [
"CC-BY-4.0"
] | null | null | null | old-reference-manuals/portlets/appendix/index.rst | Acidburn0zzz/documentation | 360a6741c62e24c0e487436438d0be97ce9ee2a1 | [
"CC-BY-4.0"
] | null | null | null | ====================
Appendix: Practicals
====================
.. toctree::
:maxdepth: 2
subclassing
moving
schema_update
available_adapter
| 13.083333 | 20 | 0.515924 |
399f75078b985ec350aec711cdb8676fa0cb7e02 | 31,542 | rst | reStructuredText | doc/user-manual/language/with-abstraction.lagda.rst | cruhland/agda | 7f58030124fa99dfbf8db376659416f3ad8384de | [
"MIT"
] | 1,989 | 2015-01-09T23:51:16.000Z | 2022-03-30T18:20:48.000Z | doc/user-manual/language/with-abstraction.lagda.rst | cruhland/agda | 7f58030124fa99dfbf8db376659416f3ad8384de | [
"MIT"
] | 4,066 | 2015-01-10T11:24:51.000Z | 2022-03-31T21:14:49.000Z | doc/user-manual/language/with-abstraction.lagda.rst | cruhland/agda | 7f58030124fa99dfbf8db376659416f3ad8384de | [
"MIT"
] | 371 | 2015-01-03T14:04:08.000Z | 2022-03-30T19:00:30.000Z | ..
::
{-# OPTIONS --allow-unsolved-metas --irrelevant-projections --guardedness #-}
module language.with-abstraction where
open import Agda.Builtin.Nat using (Nat; zero; suc; _<_)
open import Agda.Builtin.Bool using (Bool; true; false)
data Comparison : Set where
equal greater less : Comparison
data List (A : Set) : Set where
[] : List A
_∷_ : A → List A → List A
open import Agda.Builtin.Equality using (_≡_; refl)
data ⊥ : Set where
.. _with-abstraction:
****************
With-Abstraction
****************
.. contents::
:depth: 2
:local:
With-abstraction was first introduced by Conor McBride [McBride2004]_ and lets
you pattern match on the result of an intermediate computation by effectively
adding an extra argument to the left-hand side of your function.
Usage
-----
In the simplest case the ``with`` construct can be used just to discriminate on
the result of an intermediate computation. For instance
..
::
module verbose-usage where
::
filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (x ∷ xs) with p x
filter p (x ∷ xs) | true = x ∷ filter p xs
filter p (x ∷ xs) | false = filter p xs
The clause containing the with-abstraction has no right-hand side. Instead it
is followed by a number of clauses with an extra argument on the left,
separated from the original arguments by a vertical bar (``|``).
When the original arguments are the same in the new clauses you can use the
``...`` syntax:
..
::
module ellipsis-usage where
::
filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (x ∷ xs) with p x
... | true = x ∷ filter p xs
... | false = filter p xs
In this case ``...`` expands to ``filter p (x ∷ xs)``. There are three cases
where you have to spell out the left-hand side:
- If you want to do further pattern matching on the original
arguments.
- When the pattern matching on the intermediate result refines some of
the other arguments (see :ref:`dot-patterns`).
- To disambiguate the clauses of nested with-abstractions (see
:ref:`nested-with-abstractions` below).
..
::
module generalisation where
.. _generalisation:
Generalisation
~~~~~~~~~~~~~~
The power of with-abstraction comes from the fact that the goal type
and the type of the original arguments are generalised over the value
of the scrutinee. See :ref:`technical-details` below for the details.
This generalisation is important when you have to prove properties
about functions defined using ``with``. For instance, suppose we want
to prove that the ``filter`` function above satisfies some property
``P``. Starting out by pattern matching of the list we get the
following (with the goal types shown in the holes)
..
::
open ellipsis-usage
::
postulate P : ∀ {A} → List A → Set
postulate p-nil : ∀ {A} → P {A} []
postulate Q : Set
postulate q-nil : Q
..
::
module verbose-proof where
::
proof : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs)
proof p [] = {! P [] !}
proof p (x ∷ xs) = {! P (filter p (x ∷ xs) | p x) !}
..
::
module ellipsis-proof where
In the cons case we have to prove that ``P`` holds for ``filter p (x ∷ xs) | p x``.
This is the syntax for a stuck with-abstraction---\ ``filter`` cannot reduce
since we don't know the value of ``p x``. This syntax is used for printing, but
is not accepted as valid Agda code. Now if we with-abstract over ``p x``, but
don't pattern match on the result we get::
proof : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs)
proof p [] = p-nil
proof p (x ∷ xs) with p x
... | r = {! P (filter p (x ∷ xs) | r) !}
..
::
module ellipsis-proof-step where
Here the ``p x`` in the goal type has been replaced by the variable ``r``
introduced for the result of ``p x``. If we pattern match on ``r`` the
with-clauses can reduce, giving us::
proof : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs)
proof p [] = p-nil
proof p (x ∷ xs) with p x
... | true = {! P (x ∷ filter p xs) !}
... | false = {! P (filter p xs) !}
Both the goal type and the types of the other arguments are generalised, so it
works just as well if we have an argument whose type contains ``filter p xs``.
::
proof₂ : {A : Set} (p : A → Bool) (xs : List A) → P (filter p xs) → Q
proof₂ p [] _ = q-nil
proof₂ p (x ∷ xs) H with p x
... | true = {! H : P (x ∷ filter p xs) !}
... | false = {! H : P (filter p xs) !}
The generalisation is not limited to scrutinees in other with-abstractions. All
occurrences of the term in the goal type and argument types will be
generalised.
Note that this generalisation is not always type correct and may
result in a (sometimes cryptic) type error. See
:ref:`ill-typed-with-abstractions` below for more details.
.. _nested-with-abstractions:
Nested with-abstractions
~~~~~~~~~~~~~~~~~~~~~~~~
..
::
module compare-verbose where
With-abstractions can be nested arbitrarily. The only thing to keep in mind in
this case is that the ``...`` syntax applies to the closest with-abstraction.
For example, suppose you want to use ``...`` in the definition below.
::
compare : Nat → Nat → Comparison
compare x y with x < y
compare x y | false with y < x
compare x y | false | false = equal
compare x y | false | true = greater
compare x y | true = less
You might be tempted to replace ``compare x y`` with ``...`` in all the
with-clauses as follows.
.. code-block:: agda
compare : Nat → Nat → Comparison
compare x y with x < y
... | false with y < x
... | false = equal
... | true = greater
... | true = less -- WRONG
This, however, would be wrong. In the last clause the ``...`` is interpreted as
belonging to the inner with-abstraction (the whitespace is not taken into
account) and thus expands to ``compare x y | false | true``. In this case you
have to spell out the left-hand side and write
..
::
module compare-ellipsis where
::
compare : Nat → Nat → Comparison
compare x y with x < y
... | false with y < x
... | false = equal
... | true = greater
compare x y | true = less
..
::
module simultaneous-abstraction where
open import Agda.Builtin.Nat using (_+_)
.. _simultaneous-abstraction:
Simultaneous abstraction
~~~~~~~~~~~~~~~~~~~~~~~~
You can abstract over multiple terms in a single with-abstraction. To do this
you separate the terms with vertical bars (``|``).
::
compare : Nat → Nat → Comparison
compare x y with x < y | y < x
... | true | _ = less
... | _ | true = greater
... | false | false = equal
In this example the order of abstracted terms does not matter, but in general
it does. Specifically, the types of later terms are generalised over the values
of earlier terms. For instance
::
postulate plus-commute : (a b : Nat) → a + b ≡ b + a
postulate P : Nat → Set
..
::
module simultaneous-thm-unmatched where
::
thm : (a b : Nat) → P (a + b) → P (b + a)
thm a b t with a + b | plus-commute a b
thm a b t | ab | eq = {! t : P ab, eq : ab ≡ b + a !}
Note that both the type of ``t`` and the type of the result ``eq`` of
``plus-commute a b`` have been generalised over ``a + b``. If the terms in the
with-abstraction were flipped around, this would not be the case. If we now
pattern match on ``eq`` we get
..
::
module simultaneous-thm-refl where
::
thm : (a b : Nat) → P (a + b) → P (b + a)
thm a b t with a + b | plus-commute a b
thm a b t | .(b + a) | refl = {! t : P (b + a) !}
and can thus fill the hole with ``t``. In effect we used the
commutativity proof to rewrite ``a + b`` to ``b + a`` in the type of
``t``. This is such a useful thing to do that there is special syntax
for it. See :ref:`Rewrite <with-rewrite>` below.
..
::
module with-on-lemma where
.. _with-on-lemma:
A limitation of generalisation is that only occurrences of the term that are
visible at the time of the abstraction are generalised over, but more instances
of the term may appear once you start filling in the right-hand side or do
further matching on the left. For instance, consider the following contrived
example where we need to match on the value of ``f n`` for the type of ``q`` to
reduce, but we then want to apply ``q`` to a lemma that talks about ``f n``::
postulate
R : Set
P : Nat → Set
f : Nat → Nat
lemma : ∀ n → P (f n) → R
Q : Nat → Set
Q zero = ⊥
Q (suc n) = P (suc n)
..
::
module proof-blocked where
::
proof : (n : Nat) → Q (f n) → R
proof n q with f n
proof n () | zero
proof n q | suc fn = {! q : P (suc fn) !}
..
::
module proof-lemma where
Once we have generalised over ``f n`` we can no longer apply the lemma, which
needs an argument of type ``P (f n)``. To solve this problem we can add the
lemma to the with-abstraction::
proof : (n : Nat) → Q (f n) → R
proof n q with f n | lemma n
proof n () | zero | _
proof n q | suc fn | lem = lem q
In this case the type of ``lemma n`` (``P (f n) → R``) is generalised over ``f
n`` so in the right-hand side of the last clause we have ``q : P (suc fn)`` and
``lem : P (suc fn) → R``.
See :ref:`the-inspect-idiom` below for an alternative approach.
..
::
module with-modalities where
.. _with-modalities:
Making with-abstractions hidden and/or irrelevant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to add hiding and relevance annotations to ``with``
expressions. For example::
module _ (A B : Set) (recompute : .B → .{{A}} → B) where
_$_ : .(A → B) → .A → B
f $ x with .{f} | .(f x) | .{{x}}
... | y = recompute y
This can be useful for hiding with-abstractions that you do not need
to match on but that need to be abstracted over for the result to be
well-typed. It can also be used to abstract over the fields of a
record type with irrelevant fields, for example::
record EqualBools : Set₁ where
field
bool1 : Bool
bool2 : Bool
.same : bool1 ≡ bool2
open EqualBools
example : EqualBools → EqualBools
example x with bool1 x | bool2 x | .(same x)
... | true | y′ | eq′ = record { bool1 = true; bool2 = y′; same = eq′ }
... | false | y′ | eq′ = record { bool1 = false; bool2 = y′; same = eq′ }
..
::
module with-clause-underscore where
.. _with-clause-underscore:
Using underscores and variables in pattern repetition
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If an ellipsis ``...`` cannot be used, the with-clause has to repeat (or
refine) the patterns of the parent clause. Since Agda 2.5.3, such
patterns can be replaced by underscores ``_`` if the variables they bind
are not needed. Here is a (slightly contrived) example::
record R : Set where
coinductive -- disallows matching
field f : Bool
n : Nat
data P (r : R) : Nat → Set where
fTrue : R.f r ≡ true → P r zero
nSuc : P r (suc (R.n r))
data Q : (b : Bool) (n : Nat) → Set where
true! : Q true zero
suc! : ∀{b n} → Q b (suc n)
test : (r : R) {n : Nat} (p : P r n) → Q (R.f r) n
test r nSuc = suc!
test r (fTrue p) with R.f r
test _ (fTrue ()) | false
test _ _ | true = true! -- underscore instead of (isTrue _)
Since Agda 2.5.4, patterns can also be replaced by a variable::
f : List Nat → List Nat
f [] = []
f (x ∷ xs) with f xs
f xs0 | r = ?
The variable ``xs0`` is treated as a let-bound variable with value
``.x ∷ .xs`` (where ``.x : Nat`` and ``.xs : List Nat`` are out of
scope). Since with-abstraction may change the type of variables, the
instantiation of such let-bound variables is type-checked again after
with-abstraction.
..
::
module with-invert {a} {A : Set a} where
open import Agda.Builtin.Nat
open import Agda.Builtin.Sigma
open import Agda.Builtin.Equality
open import Agda.Builtin.Unit
.. _with-invert:
Irrefutable With
~~~~~~~~~~~~~~~~
When a pattern is irrefutable, we can use a pattern-matching ``with``
instead of a traditional ``with`` block. This gives us a lightweight
syntax to make a lot of observations before using a "proper" ``with``
block. For a basic example of such an irrefutable pattern, see this
unfolding lemma for ``pred`` ::
pred : Nat → Nat
pred zero = zero
pred (suc n) = n
NotNull : Nat → Set
NotNull zero = ⊥ -- false
NotNull (suc n) = ⊤ -- trivially true
pred-correct : ∀ n (pr : NotNull n) → suc (pred n) ≡ n
pred-correct n pr with suc p ← n = refl
In the above code snippet we do not need to entertain the idea that ``n``
could be equal to ``zero``: Agda detects that the proof ``pr`` allows us
to dismiss such a case entirely.
The patterns used in such an inversion clause can be arbitrary. We can
for instance have deep patterns, e.g. projecting out the second element
of a vector whose length is neither 0 nor 1:
::
infixr 5 _∷_
data Vec {a} (A : Set a) : Nat → Set a where
[] : Vec A zero
_∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)
second : ∀ {n} {pr : NotNull (pred n)} → Vec A n → A
second vs with (_ ∷ v ∷ _) ← vs = v
Remember the example of :ref:`simultaneous
abstraction <simultaneous-abstraction>` from above. A simultaneous
rewrite / pattern-matching ``with`` is to be understood as being nested.
That is to say that the type refinements introduced by the first
case analysis may be necessary to type the following ones.
In the following example, in ``focusAt`` we are only able to perform
the ``splitAt`` we are interested in because we have massaged the type
of the vector argument using ``suc-+`` first.
::
suc-+ : ∀ m n → suc m + n ≡ m + suc n
suc-+ zero n = refl
suc-+ (suc m) n rewrite suc-+ m n = refl
infixr 1 _×_
_×_ : ∀ {a b} (A : Set a) (B : Set b) → Set ?
A × B = Σ A (λ _ → B)
splitAt : ∀ m {n} → Vec A (m + n) → Vec A m × Vec A n
splitAt zero xs = ([] , xs)
splitAt (suc m) (x ∷ xs) with (ys , zs) ← splitAt m xs = (x ∷ ys , zs)
-- focusAt m (x₀ ∷ ⋯ ∷ xₘ₋₁ ∷ xₘ ∷ xₘ₊₁ ∷ ⋯ ∷ xₘ₊ₙ)
-- returns ((x₀ ∷ ⋯ ∷ xₘ₋₁) , xₘ , (xₘ₊₁ ∷ ⋯ ∷ xₘ₊ₙ))
focusAt : ∀ m {n} → Vec A (suc (m + n)) → Vec A m × A × Vec A n
focusAt m {n} vs rewrite suc-+ m n
with (before , focus ∷ after) ← splitAt m vs
= (before , focus , after)
You can alternate arbitrarily many ``rewrite`` and pattern-matching
``with`` clauses and still perform a ``with`` abstraction afterwards
if necessary.
..
::
module with-rewrite where
open import Agda.Builtin.Nat using (_+_)
.. _with-rewrite:
Rewrite
~~~~~~~
Remember the example of :ref:`simultaneous
abstraction <simultaneous-abstraction>` from above.
..
::
module remember-simultaneous-abstraction where
postulate P : Nat → Set
::
postulate plus-commute : (a b : Nat) → a + b ≡ b + a
thm : (a b : Nat) → P (a + b) → P (b + a)
thm a b t with a + b | plus-commute a b
thm a b t | .(b + a) | refl = t
..
::
open simultaneous-abstraction
This pattern of rewriting by an equation by with-abstracting over it and its
left-hand side is common enough that there is special syntax for it::
thm : (a b : Nat) → P (a + b) → P (b + a)
thm a b t rewrite plus-commute a b = t
The ``rewrite`` construction takes a term ``eq`` of type ``lhs ≡ rhs``, where ``_≡_``
is the :ref:`built-in equality type <built-in-equality>`, and expands to a
with-abstraction of ``lhs`` and ``eq`` followed by a match of the result of
``eq`` against ``refl``:
.. code-block:: agda
f ps rewrite eq = v
-->
f ps with lhs | eq
... | .rhs | refl = v
One limitation of the ``rewrite`` construction is that you cannot do further
pattern matching on the arguments *after* the rewrite, since everything happens
in a single clause. You can however do with-abstractions after the rewrite. For
instance,
::
postulate T : Nat → Set
isEven : Nat → Bool
isEven zero = true
isEven (suc zero) = false
isEven (suc (suc n)) = isEven n
thm₁ : (a b : Nat) → T (a + b) → T (b + a)
thm₁ a b t rewrite plus-commute a b with isEven a
thm₁ a b t | true = t
thm₁ a b t | false = t
Note that the with-abstracted arguments introduced by the rewrite (``lhs`` and
``eq``) are not visible in the code.
..
::
module inspect-idiom where
.. _the-inspect-idiom:
With-abstraction equality
~~~~~~~~~~~~~~~~~~~~~~~~~
When you with-abstract a term ``t`` you lose the connection between
``t`` and the new argument representing its value. That's fine as long
as all instances of ``t`` that you care about get generalised by the
abstraction, but as we saw :ref:`above <with-on-lemma>` this is not
always the case. In that example we used simultaneous abstraction to
make sure that we did capture all the instances we needed.
An alternative to that is to get Agda to remember in an equality proof
that the patterns in the with clauses come from the expression you abstracted
over. This is possible using the ``in`` keyword.
..
::
open import Agda.Builtin.Sigma using (Σ; _,_)
open import Agda.Builtin.Nat using (_+_)
In the following artificial example, we try to prove that there exist two
numbers such that one equals the double of the other. We start by computing
the double of our input ``m`` and call it ``n``. We can then return the nested
pair containing ``m``, ``n``, and we now need a proof that ``m + m ≡ n``.
Luckily we used ``in eq`` when computing ``n`` as ``m + m`` and this ``eq``
is exactly the proof we need.
::
double : Nat → Σ Nat (λ m → Σ Nat (λ n → m + m ≡ n))
double m with n ← m + m in eq = m , n , eq
For a more natural example, we prove that ``filter`` (defined at the top of this
page) is idempotent. That is to say that applying it twice to an input list is
the same as only applying it once.
In the ``filter-filter p (x ∷ xs)`` case, abstracting over and then matching
on the result of ``p x`` allows the first call to ``filter p (x ∷ xs)`` to
reduce.
In case the element ``x`` is kept (i.e. ``p x`` is ``true``), the second call
to ``filter`` on the LHS goes on to perform the same ``p x`` test. Because we
have retained the proof that ``p x ≡ true`` in ``eq``, we are able to rewrite
by this equality and get it to reduce too.
This leads to just enough computation that we can finish the proof with
an appeal to congruence and the induction hypothesis.
..
::
open ellipsis-usage
cong : {A B : Set} (f : A → B) → ∀ {x y} → x ≡ y → f x ≡ f y
cong f refl = refl
::
filter-filter : ∀ {A} p (xs : List A) → filter p (filter p xs) ≡ filter p xs
filter-filter p [] = refl
filter-filter p (x ∷ xs) with p x in eq
... | false = filter-filter p xs -- easy
... | true -- second filter stuck on `p x`: rewrite by `eq`!
rewrite eq = cong (x ∷_) (filter-filter p xs)
Alternatives to with-abstraction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Although with-abstraction is very powerful there are cases where you cannot or
don't want to use it. For instance, you cannot use with-abstraction if you are
inside an expression in a right-hand side. In that case there are a couple of
alternatives.
Pattern lambdas
+++++++++++++++
Agda does not have a primitive ``case`` construct, but one can be emulated
using :ref:`pattern matching lambdas <pattern-lambda>`. First you define a
function ``case_of_`` as follows::
case_of_ : ∀ {a b} {A : Set a} {B : Set b} → A → (A → B) → B
case x of f = f x
You can then use this function with a pattern matching lambda as the second
argument to get a Haskell-style case expression::
filter : {A : Set} → (A → Bool) → List A → List A
filter p [] = []
filter p (x ∷ xs) =
case p x of
λ { true → x ∷ filter p xs
; false → filter p xs
}
This version of ``case_of_`` only works for non-dependent functions. For
dependent functions the target type will in most cases not be inferrable, but
you can use a variant with an explicit ``B`` for this case::
case_return_of_ : ∀ {a b} {A : Set a} (x : A) (B : A → Set b) → (∀ x → B x) → B x
case x return B of f = f x
The dependent version will let you generalise over the scrutinee, just like a
with-abstraction, but you have to do it manually. Two things that it will not let you do are
- further pattern matching on arguments on the left-hand side, and
- refine arguments on the left by the patterns in the case expression. For
instance if you matched on a ``Vec A n`` the ``n`` would be refined by the
nil and cons patterns.
Helper functions
++++++++++++++++
Internally with-abstractions are translated to auxiliary functions
(see :ref:`technical-details` below) and you can always write these
functions manually. The downside is that the type signature for the
helper function needs to be written out explicitly, but fortunately
the :ref:`emacs-mode` has a command (``C-c C-h``) to generate it using
the same algorithm that generates the type of a with-function.
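
As a sketch of what such a helper looks like, here is a hand-written
version of the ``filter`` function from the top of this page; the names
``filter′`` and ``filter-aux`` are ours, and the extra ``Bool`` argument
plays the role of the with-abstracted ``p x``:

.. code-block:: agda

    filter′ : {A : Set} → (A → Bool) → List A → List A
    filter-aux : {A : Set} → (A → Bool) → A → List A → Bool → List A

    filter′ p []       = []
    filter′ p (x ∷ xs) = filter-aux p x xs (p x)

    filter-aux p x xs true  = x ∷ filter′ p xs
    filter-aux p x xs false = filter′ p xs
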
Termination checking
~~~~~~~~~~~~~~~~~~~~
..
::
module Termination where
postulate
some-stuff : Nat
module _ where
The termination checker runs on the translated auxiliary functions, which
means that some code that looks like it should pass termination checking
does not. Specifically this happens in call chains like ``c₁ (c₂ x) ⟶ c₁ x``
where the recursive call is under a with-abstraction. The reason is that
the auxiliary function only gets passed ``x``, so the call chain is actually
``c₁ (c₂ x) ⟶ x ⟶ c₁ x``, and the termination checker cannot see that this
is terminating. For example::
data D : Set where
[_] : Nat → D
..
::
module M₁ where
{-# TERMINATING #-}
::
fails : D → Nat
fails [ zero ] = zero
fails [ suc n ] with some-stuff
... | _ = fails [ n ]
The easiest way to work around this problem is to perform a with-abstraction
on the recursive call up front::
fixed : D → Nat
fixed [ zero ] = zero
fixed [ suc n ] with fixed [ n ] | some-stuff
... | rec | _ = rec
..
::
module M₂ where
{-# TERMINATING #-}
If the function takes more arguments you might need to abstract over a
partial application to just the structurally recursive argument. For instance,
::
fails : Nat → D → Nat
fails _ [ zero ] = zero
fails _ [ suc n ] with some-stuff
... | m = fails m [ n ]
fixed : Nat → D → Nat
fixed _ [ zero ] = zero
fixed _ [ suc n ] with (λ m → fixed m [ n ]) | some-stuff
... | rec | m = rec m
..
::
postulate
A possible complication is that later with-abstractions might change the
type of the abstracted recursive call::
T : D → Set
suc-T : ∀ {n} → T [ n ] → T [ suc n ]
zero-T : T [ zero ]
..
::
module M₃ where
{-# TERMINATING #-}
::
fails : (d : D) → T d
fails [ zero ] = zero-T
fails [ suc n ] with some-stuff
... | _ with [ n ]
... | z = suc-T (fails [ n ])
Trying to abstract over the recursive call as before does not work in this case.
.. code-block:: agda
still-fails : (d : D) → T d
still-fails [ zero ] = zero-T
still-fails [ suc n ] with still-fails [ n ] | some-stuff
... | rec | _ with [ n ]
... | z = suc-T rec -- Type error because rec : T z
To solve the problem you can add ``rec`` to the with-abstraction that
messes up its type. This will prevent its type from being changed::
fixed : (d : D) → T d
fixed [ zero ] = zero-T
fixed [ suc n ] with fixed [ n ] | some-stuff
... | rec | _ with rec | [ n ]
... | _ | z = suc-T rec
Performance considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~
The :ref:`generalisation step <generalisation>` of a with-abstraction
needs to normalise the scrutinee and the goal and argument types to
make sure that all instances of the scrutinee are generalised. The
generalisation also needs to be type checked to make sure that it's
not :ref:`ill-typed <ill-typed-with-abstractions>`. This makes it
expensive to type check a with-abstraction if
- the normalisation is expensive,
- the normalised form of the goal and argument types are big, making finding
the instances of the scrutinee expensive,
- type checking the generalisation is expensive, because the types are big, or
because checking them involves heavy computation.
In these cases it is worth looking at the `alternatives to with-abstraction`_
from above.
.. _technical-details:
Technical details
-----------------
Internally with-abstractions are translated to auxiliary functions---there are
no with-abstractions in the :ref:`core-language`. This translation proceeds as
follows. Given a with-abstraction
.. math::
:nowrap:
\[\arraycolsep=1.4pt
\begin{array}{lrllcll}
\multicolumn{3}{l}{f : \Gamma \to B} \\
f ~ ps & \mathbf{with} ~ & t_1 & | & \ldots & | ~ t_m \\
f ~ ps_1 & | ~ & q_{11} & | & \ldots & | ~ q_{1m} &= v_1 \\
\vdots \\
f ~ ps_n & | ~ & q_{n1} & | & \ldots & | ~ q_{nm} &= v_n
\end{array}\]
where :math:`\Delta \vdash ps : \Gamma` (i.e. :math:`\Delta` types the
variables bound in :math:`ps`), we
- Infer the types of the scrutinees :math:`t_1 : A_1, \ldots, t_m : A_m`.
- Partition the context :math:`\Delta` into :math:`\Delta_1` and
:math:`\Delta_2` such that :math:`\Delta_1` is the smallest context where
:math:`\Delta_1 \vdash t_i : A_i` for all :math:`i`, i.e., where the scrutinees are well-typed.
Note that the partitioning is not required to be a split,
:math:`\Delta_1\Delta_2` can be a (well-formed) reordering of :math:`\Delta`.
- Generalise over the :math:`t_i` s, by computing
.. math::
C = (w_1 : A_1)(w_1 : A_2')\ldots(w_m : A_m') \to \Delta_2' \to B'
such that the normal form of :math:`C` does not contain any :math:`t_i` and
.. math::
A_i'[w_1 := t_1 \ldots w_{i - 1} := t_{i - 1}] \simeq A_i
(\Delta_2' \to B')[w_1 := t_1 \ldots w_m := t_m] \simeq \Delta_2 \to B
where :math:`X \simeq Y` is equality of the normal forms of :math:`X` and
:math:`Y`. The type of the auxiliary function is then :math:`\Delta_1 \to C`.
- Check that :math:`\Delta_1 \to C` is type correct, which is not
guaranteed (see :ref:`below <ill-typed-with-abstractions>`).
- Add a function :math:`f_{aux}`, mutually recursive with :math:`f`, with the
definition
.. math::
:nowrap:
\[\arraycolsep=1.4pt
\begin{array}{llll}
\multicolumn{4}{l}{f_{aux} : \Delta_1 \to C} \\
f_{aux} ~ ps_{11} & \mathit{qs}_1 & ps_{21} &= v_1 \\
\vdots \\
f_{aux} ~ ps_{1n} & \mathit{qs}_n & ps_{2n} &= v_n \\
\end{array}\]
where :math:`\mathit{qs}_i = q_{i1} \ldots q_{im}`, and :math:`ps_{1i} : \Delta_1` and
:math:`ps_{2i} : \Delta_2` are the patterns from :math:`ps_i` corresponding to
the variables of :math:`ps`. Note that due to the possible reordering of the
partitioning of :math:`\Delta` into :math:`\Delta_1` and :math:`\Delta_2`,
the patterns :math:`ps_{1i}` and :math:`ps_{2i}` can be in a different order
from how they appear :math:`ps_i`.
- Replace the with-abstraction by a call to :math:`f_{aux}` resulting in the
final definition
.. math::
:nowrap:
\[\arraycolsep=1.4pt
\begin{array}{l}
f : \Gamma \to B \\
f ~ ps = f_{aux} ~ \mathit{xs}_1 ~ ts ~ \mathit{xs}_2
\end{array}\]
where :math:`ts = t_1 \ldots t_m` and :math:`\mathit{xs}_1` and
:math:`\mathit{xs}_2` are the variables from :math:`\Delta` corresponding to
:math:`\Delta_1` and :math:`\Delta_2` respectively.
..
::
module examples where
Examples
~~~~~~~~
Below are some examples of with-abstractions and their translations.
::
postulate
A : Set
_+_ : A → A → A
T : A → Set
mkT : ∀ x → T x
P : ∀ x → T x → Set
-- the type A of the with argument has no free variables, so the with
-- argument will come first
f₁ : (x y : A) (t : T (x + y)) → T (x + y)
f₁ x y t with x + y
f₁ x y t | w = {!!}
-- Generated with function
f-aux₁ : (w : A) (x y : A) (t : T w) → T w
f-aux₁ w x y t = {!!}
-- x and p are not needed to type the with argument, so the context
-- is reordered with only y before the with argument
f₂ : (x y : A) (p : P y (mkT y)) → P y (mkT y)
f₂ x y p with mkT y
f₂ x y p | w = {!!}
f-aux₂ : (y : A) (w : T y) (x : A) (p : P y w) → P y w
f-aux₂ y w x p = {!!}
postulate
H : ∀ x y → T (x + y) → Set
-- Multiple with arguments are always inserted together, so in this case
-- t ends up on the left since it’s needed to type h and thus x + y isn’t
-- abstracted from the type of t
f₃ : (x y : A) (t : T (x + y)) (h : H x y t) → T (x + y)
f₃ x y t h with x + y | h
f₃ x y t h | w₁ | w₂ = {! t : T (x + y), goal : T w₁ !}
f-aux₃ : (x y : A) (t : T (x + y)) (h : H x y t) (w₁ : A) (w₂ : H x y t) → T w₁
f-aux₃ x y t h w₁ w₂ = {!!}
-- But earlier with arguments are abstracted from the types of later ones
f₄ : (x y : A) (t : T (x + y)) → T (x + y)
f₄ x y t with x + y | t
f₄ x y t | w₁ | w₂ = {! t : T (x + y), w₂ : T w₁, goal : T w₁ !}
f-aux₄ : (x y : A) (t : T (x + y)) (w₁ : A) (w₂ : T w₁) → T w₁
f-aux₄ x y t w₁ w₂ = {!!}
..
::
module ill-typed where
.. _ill-typed-with-abstractions:
Ill-typed with-abstractions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, generalisation does not always produce well-typed results.
This happens when you abstract over a term that appears in the *type* of a subterm
of the goal or argument types. The simplest example is abstracting over the
first component of a dependent pair. For instance,
::
postulate
A : Set
B : A → Set
H : (x : A) → B x → Set
.. code-block:: agda
bad-with : (p : Σ A B) → H (fst p) (snd p)
bad-with p with fst p
... | _ = {!!}
Here, generalising over ``fst p`` results in an ill-typed application ``H w
(snd p)`` and you get the following type error:
.. code-block:: none
fst p != w of type A
when checking that the type (p : Σ A B) (w : A) → H w (snd p) of
the generated with function is well-formed
This message can be a little difficult to interpret since it only prints the
immediate problem (``fst p != w``) and the full type of the with-function. To
get a more informative error, pointing to the location in the type where the
error is, you can copy and paste the with-function type from the error message
and try to type check it separately.
.. [McBride2004] C. McBride and J. McKinna. **The view from the left**. Journal of Functional Programming, 2004.
http://strictlypositive.org/vfl.pdf.
.. _std-lib: https://github.com/agda/agda-stdlib
.. _agda-prelude: https://github.com/UlfNorell/agda-prelude
:mod:`dipy.tracking.markov`
===========================
.. automodule:: dipy.tracking.markov
:members:
usertypes
=========
Perhaps the most powerful feature of sol2, ``usertypes`` are the way sol2 and C++ communicate your classes to the Lua runtime and bind things between both tables and to specific blocks of C++ memory, allowing you to treat Lua userdata and other things like classes.
To learn more about usertypes, visit:
* :doc:`the basic tutorial<tutorial/cxx-in-lua>`
* :doc:`customization point tutorial<tutorial/customization>`
* :doc:`api documentation<api/usertype>`
* :doc:`memory documentation<api/usertype_memory>`
The examples folder also has a number of really great examples for you to see. Below are also some notes about the guarantees that usertypes, and their associated userdata, provide:
* You can push types classified as userdata before you register a usertype.
  - You can register a usertype with the Lua runtime at any time using sol2
- You can retrieve them from the Lua runtime as well through sol2
- Methods and properties will be added to the type only after you register it in the Lua runtime
* Types either copy once or move once into the memory location, if it is a value type. If it is a pointer, we store only the reference.
  - This means: take arguments of class types (not primitive types like strings or integers) by ``T&`` or ``T*`` to modify the data in Lua directly, or by plain ``T`` to get a copy
- Return types and passing arguments to ``sol::function`` use perfect forwarding and reference semantics, which means no copies happen unless you specify a value explicitly. See :ref:`this note for details<function-argument-handling>`.
* The first ``sizeof( void* )`` bytes are always a pointer to the typed C++ memory. What comes after is based on what you've pushed into the system according to :doc:`the memory specification for usertypes<api/usertype_memory>`. This is compatible with a number of systems.
* Member methods, properties, variables and functions taking ``self&`` arguments modify data directly
- Work on a copy by taking or returning a copy by value.
* The actual metatable associated with the usertype has a long name and is defined to be opaque by the Sol implementation.
* Containers get pushed as special usertypes, but this can be disabled if problems arise, as detailed :doc:`here<api/containers>`.
* You can use bitfields but it requires some finesse on your part. We have an example to help you get started `here that uses a few tricks`_.
.. _here that uses a few tricks: https://github.com/ThePhD/sol2/blob/develop/examples/usertype_bitfields.cpp
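As a quick orientation, here is a minimal, hypothetical sketch of registering
and using a usertype (the ``Player`` type and its members are made up for
illustration; the header name may be ``<sol.hpp>`` in older versions):

.. code-block:: cpp

   #include <sol/sol.hpp>

   // A made-up value type; any default-constructible struct binds similarly.
   struct Player {
       int health = 100;
       void heal(int amount) { health += amount; }
   };

   int main() {
       sol::state lua;
       lua.open_libraries(sol::lib::base);

       // Register the usertype: its members become accessible from Lua.
       lua.new_usertype<Player>("Player",
           "health", &Player::health,
           "heal", &Player::heal);

       // Lua code manipulates the bound C++ memory directly.
       lua.script(R"(
           p = Player.new()
           p:heal(20)
           print(p.health) -- prints 120
       )");
   }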
zprocess |release|
========================
`Chris Billington <mailto:chrisjbillington@gmail.com>`_, |today|
.. contents::
:local:
TODO: Summary
`View on PyPI <http://pypi.python.org/pypi/zprocess>`_
| `View on BitBucket <https://bitbucket.org/cbillington/zprocess>`_
| `Read the docs <http://zprocess.readthedocs.org>`_
------------
Installation
------------
to install ``zprocess``, run:
.. code-block:: bash
$ pip3 install zprocess
or to install from source:
.. code-block:: bash
$ python3 setup.py install
.. note::
Also works with Python 2.7
------------
Introduction
------------
TODO: introduction
-------------
Example usage
-------------
.. code-block:: python
:name: example.py
def todo():
print('example')
todo()
.. code-block:: bash
:name: output
$ python3 example.py
example
Description of examples
----------------
Module reference
----------------
.. autoclass:: zprocess.clientserver.ZMQServer
:members:
.. autoclass:: zprocess.clientserver.ZMQClient
:members:
.. autofunction:: zprocess.utils.start_daemon
| 14.907895 | 67 | 0.586055 |
abc86e67b9cdaea7c71d72ca42afd2f882b53158 | 960 | rst | reStructuredText | README.rst | mm770912/boost_bandwidth_via_speedtest | 0cd75e5f0aee48d16a1e756ac0f68fb016f1001a | [
"Apache-2.0"
] | 1 | 2021-07-07T11:18:17.000Z | 2021-07-07T11:18:17.000Z | README.rst | erokui/boost_bandwidth_via_speedtest | 0cd75e5f0aee48d16a1e756ac0f68fb016f1001a | [
"Apache-2.0"
] | null | null | null | README.rst | erokui/boost_bandwidth_via_speedtest | 0cd75e5f0aee48d16a1e756ac0f68fb016f1001a | [
"Apache-2.0"
] | null | null | null | copied form speedtest-cli
To reserve bandwidth, this program only sends packets to speedtest websit and
gets some configurations for this site. it will not do download/upload test.
so dont worry that it will occupies your bandwidth
=============
Command line interface for testing internet bandwidth using
speedtest.net
.. image:: https://img.shields.io/pypi/v/speedtest-cli.svg
:target: https://pypi.python.org/pypi/speedtest-cli/
:alt: Latest Version
.. image:: https://img.shields.io/travis/sivel/speedtest-cli.svg
:target: https://pypi.python.org/pypi/speedtest-cli/
:alt: Travis
.. image:: https://img.shields.io/pypi/l/speedtest-cli.svg
:target: https://pypi.python.org/pypi/speedtest-cli/
:alt: License
speedtest-cli works with Python 2.4-3.7
.. image:: https://img.shields.io/pypi/pyversions/speedtest-cli.svg
:target: https://pypi.python.org/pypi/speedtest-cli/
:alt: Versions
| 30.967742 | 77 | 0.703125 |
9dab5cb3b8e88cc9da06bcb6d66dcdc16ddf5b62 | 4,272 | rst | reStructuredText | HISTORY.rst | dementrock/pystache_custom | 776973740bdaad83a3b029f96e415a7d1e8bec2f | [
"MIT"
] | 21 | 2015-01-16T05:10:02.000Z | 2021-06-11T20:48:15.000Z | HISTORY.rst | dementrock/pystache_custom | 776973740bdaad83a3b029f96e415a7d1e8bec2f | [
"MIT"
] | 1 | 2019-09-09T12:10:27.000Z | 2020-05-22T10:12:14.000Z | dependencies/pystache/HISTORY.rst | charlesmchen/typefacet | 8c6db26d0c599ece16f3704696811275120a4044 | [
"Apache-2.0"
] | 2 | 2015-05-03T04:51:08.000Z | 2018-08-24T08:28:53.000Z | History
=======
0.5.2 (2012-05-03)
------------------
* Added support for dot notation and version 1.1.2 of the spec (issue #99). [rbp]
* Missing partials now render as empty string per latest version of spec (issue #115).
* Bugfix: falsey values now coerced to strings using str().
* Bugfix: lambda return values for sections no longer pushed onto context stack (issue #113).
* Bugfix: lists of lambdas for sections were not rendered (issue #114).
0.5.1 (2012-04-24)
------------------
* Added support for Python 3.1 and 3.2.
* Added tox support to test multiple Python versions.
* Added test script entry point: pystache-test.
* Added __version__ package attribute.
* Test harness now supports both YAML and JSON forms of Mustache spec.
* Test harness no longer requires nose.
0.5.0 (2012-04-03)
------------------
This version represents a major rewrite and refactoring of the code base
that also adds features and fixes many bugs. All functionality and nearly
all unit tests have been preserved. However, some backwards incompatible
changes to the API have been made.
Below is a selection of some of the changes (not exhaustive).
Highlights:
* Pystache now passes all tests in version 1.0.3 of the `Mustache spec`_. [pvande]
* Removed View class: it is no longer necessary to subclass from View or
from any other class to create a view.
* Replaced Template with Renderer class: template rendering behavior can be
modified via the Renderer constructor or by setting attributes on a Renderer instance.
* Added TemplateSpec class: template rendering can be specified on a per-view
basis by subclassing from TemplateSpec.
* Introduced separation of concerns and removed circular dependencies (e.g.
between Template and View classes, cf. `issue #13`_).
* Unicode now used consistently throughout the rendering process.
* Expanded test coverage: nosetests now runs doctests and ~105 test cases
from the Mustache spec (increasing the number of tests from 56 to ~315).
* Added a rudimentary benchmarking script to gauge performance while refactoring.
* Extensive documentation added (e.g. docstrings).
Other changes:
* Added a command-line interface. [vrde]
* The main rendering class now accepts a custom partial loader (e.g. a dictionary)
and a custom escape function.
* Non-ascii characters in str strings are now supported while rendering.
* Added string encoding, file encoding, and errors options for decoding to unicode.
* Removed the output encoding option.
* Removed the use of markupsafe.
Bug fixes:
* Context values no longer processed as template strings. [jakearchibald]
* Whitespace surrounding sections is no longer altered, per the spec. [heliodor]
* Zeroes now render correctly when using PyPy. [alex]
* Multiline comments now permitted. [fczuardi]
* Extensionless template files are now supported.
* Passing ``**kwargs`` to ``Template()`` no longer modifies the context.
* Passing ``**kwargs`` to ``Template()`` with no context no longer raises an exception.
0.4.1 (2012-03-25)
------------------
* Added support for Python 2.4. [wangtz, jvantuyl]
0.4.0 (2011-01-12)
------------------
* Add support for nested contexts (within template and view)
* Add support for inverted lists
* Decoupled template loading
0.3.1 (2010-05-07)
------------------
* Fix package
0.3.0 (2010-05-03)
------------------
* View.template_path can now hold a list of path
* Add {{& blah}} as an alias for {{{ blah }}}
* Higher Order Sections
* Inverted sections
0.2.0 (2010-02-15)
------------------
* Bugfix: Methods returning False or None are not rendered
* Bugfix: Don't render an empty string when a tag's value is 0. [enaeseth]
* Add support for using non-callables as View attributes. [joshthecoder]
* Allow using View instances as attributes. [joshthecoder]
* Support for Unicode and non-ASCII-encoded bytestring output. [enaeseth]
* Template file encoding awareness. [enaeseth]
0.1.1 (2009-11-13)
------------------
* Ensure we're dealing with strings, always
* Tests can be run by executing the test file directly
0.1.0 (2009-11-12)
------------------
* First release
.. _2to3: http://docs.python.org/library/2to3.html
.. _issue #13: https://github.com/defunkt/pystache/issues/13
.. _Mustache spec: https://github.com/mustache/spec
| 36.20339 | 93 | 0.724017 |
1b8408f43ba2e5287c6c3deeadce98ed823fd361 | 763 | rst | reStructuredText | readme.rst | albertvisser/actiereg | dfbab5ad1989532421e452ee9233ba5822bc4d40 | [
"MIT"
] | null | null | null | readme.rst | albertvisser/actiereg | dfbab5ad1989532421e452ee9233ba5822bc4d40 | [
"MIT"
] | null | null | null | readme.rst | albertvisser/actiereg | dfbab5ad1989532421e452ee9233ba5822bc4d40 | [
"MIT"
] | null | null | null | ========
ActieReg
========
The name stands for Actie Registratie (action registration),
it is the web version of `ProbReg </albertvisser/probreg/>`_ -
that itself should have been called ActieReg
because it does more than just register (the progress on) problems.
For using it in the web browser, I added user support and changed the data storage
to an SQL database instead of XML files.
There's also the possibility to communicate with another web app of mine,
a `software project administration </albertvisser/myprojects/>`_,
to provide some context to the activity.
Usage
-----
Use manage.py or the provided fcgi or wsgi script to start the django app, and
configure your web server to communicate with it.
Requirements
------------
- Python
- Django
| 25.433333 | 82 | 0.749672 |
ac70ffc397a3388e4524cf29e789cba297e9ad65 | 4,742 | rst | reStructuredText | tools/seq_composition/README.rst | Neato-Nick/pico_galaxy | 79666612a9ca2d335622bc282a4768bb43d91419 | [
"MIT"
] | 18 | 2015-06-09T13:57:09.000Z | 2022-01-14T21:05:54.000Z | tools/seq_composition/README.rst | Neato-Nick/pico_galaxy | 79666612a9ca2d335622bc282a4768bb43d91419 | [
"MIT"
] | 34 | 2015-04-02T19:26:08.000Z | 2021-06-17T18:59:24.000Z | tools/seq_composition/README.rst | Neato-Nick/pico_galaxy | 79666612a9ca2d335622bc282a4768bb43d91419 | [
"MIT"
] | 24 | 2015-02-25T13:40:19.000Z | 2021-09-08T20:40:40.000Z | Galaxy tool reporting sequence composition
==========================================
This tool is copyright 2014-2017 by Peter Cock, The James Hutton Institute
(formerly SCRI, Scottish Crop Research Institute), UK. All rights reserved.
See the licence text below (MIT licence).
This tool is a short Python script (using Biopython library functions) to
loop over given sequence files (in a range of formats including FASTA, FASTQ,
and SFF), and report the count of each letter (i.e. amino acids or bases).
This can be useful for sanity checking assemblies (e.g. proportion of N
bases) or looking at differences in base composition.
This tool is available from the Galaxy Tool Shed at:
* http://toolshed.g2.bx.psu.edu/view/peterjc/seq_composition
Automated Installation
======================
This should be straightforward using the Galaxy Tool Shed, which should be
able to automatically install the dependency on Biopython, and then install
this tool and run its unit tests.
Manual Installation
===================
There are just two files to install to use this tool from within Galaxy:
* ``seq_composition.py`` (the Python script)
* ``seq_composition.xml`` (the Galaxy tool definition)
The suggested location is in a dedicated ``tools/seq_composition`` folder.
You will also need to modify the ``tools_conf.xml`` file to tell Galaxy to offer the
tool. One suggested location is in the filters section. Simply add the line::
<tool file="seq_composition/seq_composition.xml" />
You will also need to install Biopython 1.62 or later.
If you wish to run the unit tests, also move/copy the ``test-data/`` files
under Galaxy's ``test-data/`` folder. Then::
./run_tests.sh -id seq_composition
That's it.
History
=======
======= ======================================================================
Version Changes
------- ----------------------------------------------------------------------
v0.0.1 - Initial version.
- Tool definition now embeds citation information.
v0.0.2 - Reorder XML elements (internal change only).
- Planemo for Tool Shed upload (``.shed.yml``, internal change only).
v0.0.3 - Python style updates (internal change only).
v0.0.4 - Depends on Biopython 1.67 via legacy Tool Shed package or bioconda.
v0.0.5 - Use ``<command detect_errors="aggressive">`` (internal change only).
- Single quote command line arguments (internal change only).
======= ======================================================================
Developers
==========
This script and related tools are being developed on this GitHub repository:
https://github.com/peterjc/pico_galaxy/tree/master/tools/seq_composition
For pushing a release to the test or main "Galaxy Tool Shed", use the following
Planemo commands (which requires you have set your Tool Shed access details in
``~/.planemo.yml`` and that you have access rights on the Tool Shed)::
$ planemo shed_update -t testtoolshed --check_diff tools/seq_composition/
...
or::
$ planemo shed_update -t toolshed --check_diff tools/seq_composition/
...
To just build and check the tar ball, use::
$ planemo shed_upload --tar_only tools/seq_composition/
...
$ tar -tzf shed_upload.tar.gz
test-data/MID4_GLZRM4E04_rnd30_frclip.sff
test-data/MID4_GLZRM4E04_rnd30_frclip.seq_composition.tabular
test-data/ecoli.fastq
test-data/ecoli.seq_composition.tabular
test-data/four_human_proteins.fasta
test-data/four_human_proteins.seq_composition.tabular
tools/seq_composition/README.rst
tools/seq_composition/seq_composition.py
tools/seq_composition/seq_composition.xml
tools/seq_composition/tool_dependencies.xml
Licence (MIT)
=============
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| 37.634921 | 84 | 0.713412 |
fc1baa17f2e40bea77892dea340625d54e828dba | 116 | rst | reStructuredText | README.rst | GraciaMuniz/aio-oss | 68aa993d75a2c406733b8da2739a2cbbf690fa93 | [
"MIT"
] | 1 | 2022-03-08T03:59:09.000Z | 2022-03-08T03:59:09.000Z | README.rst | GraciaMuniz/aio-oss | 68aa993d75a2c406733b8da2739a2cbbf690fa93 | [
"MIT"
] | null | null | null | README.rst | GraciaMuniz/aio-oss | 68aa993d75a2c406733b8da2739a2cbbf690fa93 | [
"MIT"
] | null | null | null | Most of source code from https://github.com/aliyun/aliyun-oss-python-sdk,
only get_object function is implemented.
| 29 | 73 | 0.801724 |
74b9f40f7c238c7a0a59126092d034483efb91a6 | 1,524 | rst | reStructuredText | docs/devguides/extending.rst | ineersa/DeepPavlov | 8200bf9a0f0b378baad4ee0eb75b59453f516004 | [
"Apache-2.0"
] | 3 | 2020-04-16T04:25:10.000Z | 2021-05-07T23:04:43.000Z | docs/devguides/extending.rst | ineersa/DeepPavlov | 8200bf9a0f0b378baad4ee0eb75b59453f516004 | [
"Apache-2.0"
] | 12 | 2020-01-28T22:14:04.000Z | 2022-02-10T00:10:17.000Z | docs/devguides/extending.rst | ineersa/DeepPavlov | 8200bf9a0f0b378baad4ee0eb75b59453f516004 | [
"Apache-2.0"
] | 1 | 2022-02-08T14:41:28.000Z | 2022-02-08T14:41:28.000Z | Extending the library
=====================
In order to extend the library, you need to register your classes and functions; it is done in two steps.
1. Decorate your :class:`~deeppavlov.core.models.component.Component`
(or :class:`~deeppavlov.core.data.dataset_reader.DatasetReader`,
or :class:`~deeppavlov.core.data.data_learning_iterator.DataLearningIterator`,
or :class:`~deeppavlov.core.data.data_fitting_iterator.DataFittingIterator`)
using :func:`~deeppavlov.core.common.registry.register` and/or metrics function
using :func:`~deeppavlov.core.common.metrics_registry.register_metric`.
2. Rebuild the registry running from DeepPavlov root directory:
::
python -m utils.prepare.registry
This script imports all the modules in deeppavlov package, builds the registry from them and writes it to a file.
However, it is possible to use some classes and functions inside configuration files without registering them explicitly.
There are two options available here:
- instead of ``{"class_name": "registered_component_name"}`` in config file use key-value pair similar to
``{"class_name": "my_package.my_module:MyClass"}``
- if your classes/functions are properly decorated but not included in the registry, use ``"metadata"`` section of
your config file specifying imports as ``"metadata": {"imports": ["my_local_package.my_module", "global_package.module"]}``;
then the second step described above will be unnecessary (local packages are imported from the current working
directory).
| 47.625 | 126 | 0.769685 |
cfb8923b16dce384e61c8045e30081316db7180b | 5,923 | rst | reStructuredText | doc/link_jageocoder.rst | geonlp-platform/pygeonlp | f91a38ac416b7dbd5ef689742d0ffac48e6d350e | [
"BSD-2-Clause"
] | 7 | 2021-07-21T08:22:00.000Z | 2022-02-18T09:18:31.000Z | doc/link_jageocoder.rst | geonlp-platform/pygeonlp | f91a38ac416b7dbd5ef689742d0ffac48e6d350e | [
"BSD-2-Clause"
] | 8 | 2021-07-09T01:43:14.000Z | 2021-09-09T05:02:23.000Z | doc/link_jageocoder.rst | geonlp-platform/pygeonlp | f91a38ac416b7dbd5ef689742d0ffac48e6d350e | [
"BSD-2-Clause"
] | null | null | null | .. _link_jageocoder:
住所ジオコーダー連携
====================
pygeonlp を住所ジオコーダー `jageocoder <https://pypi.org/project/jageocoder/>`_ と
連携することで、テキスト中の住所を地名語ではなく住所として認識できます。
jageocoder は pygeonlp をインストールすると自動的にインストールされますが、
住所辞書を設定しないと機能しません。
初回は以下の手順で jageocoder 用辞書データのダウンロードとインストールを
行なってください。 ::
$ curl https://www.info-proto.com/static/jusho.zip -o jusho.zip
$ python
>>> import jageocoder
>>> jageocoder.install_dictionary('jusho.zip')
インストールが完了したら `juzho.zip` は削除して構いません。
住所ジオコーダーを利用したい時は、 `geoparse()` を呼びだす時に
`jageocoder` オプションに True をセットしてください。
.. code-block :: plaintext
>>> import pygeonlp.api as api
>>> api.init()
>>> api.geoparse('NIIは千代田区一ツ橋2-1-2にあります。', jageocoder=True)
[{'type': 'Feature', 'geometry': None, 'properties': {'surface': 'NII', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '*', 'pos': '名詞', 'prononciation': '', 'subclass1': '固有名詞', 'subclass2': '組織', 'subclass3': '*', 'surface': 'NII', 'yomi': ''}}}, {'type': 'Feature', 'geometry': None, 'properties': {'surface': 'は', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': 'は', 'pos': '助詞', 'prononciation': 'ワ', 'subclass1': '係助詞', 'subclass2': '*', 'subclass3': '*', 'surface': 'は', 'yomi': 'ハ'}}}, {'type': 'Feature', 'geometry': {'type': 'Point', 'coordinates': [139.758148, 35.692332]}, 'properties': {'surface': '千代田区一ツ橋2-1-', 'node_type': 'ADDRESS', 'morphemes': [{'surface': '千代田区', 'node_type': 'GEOWORD', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '千代田区', 'pos': '名詞', 'prononciation': '', 'subclass1': '固有名詞', 'subclass2': '地名語', 'subclass3': 'WWIY7G:千代田区', 'surface': '千代田区', 'yomi': ''}, 'geometry': {'type': 'Point', 'coordinates': [139.753634, 35.694003]}, 'prop': {'address': '東京都千代田区', 'body': '千代田', 'body_variants': '千代田', 'code': {}, 'countyname': '', 'countyname_variants': '', 'dictionary_id': 1, 'entry_id': '13101A1968', 'geolod_id': 'WWIY7G', 'hypernym': ['東京都'], 'latitude': '35.69400300', 'longitude': '139.75363400', 'ne_class': '市区町村', 'prefname': '東京都', 'prefname_variants': '東京都', 'source': '1/千代田区役所/千代田区九段南1-2-1/P34-14_13.xml', 'suffix': ['区'], 'valid_from': '', 'valid_to': '', 'dictionary_identifier': 'geonlp:geoshape-city'}}, {'surface': '一ツ橋', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '一ツ橋', 'pos': '名詞', 'prononciation': 'ヒトツバシ', 'subclass1': '固有名詞', 'subclass2': '地域', 'subclass3': '一般', 'surface': '一ツ橋', 'yomi': 'ヒトツバシ'}, 'geometry': None, 'prop': None}, {'surface': '2', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '*', 'pos': '名詞', 'prononciation': '', 'subclass1': '数', 'subclass2': '*', 'subclass3': '*', 'surface': '2', 'yomi': ''}, 'geometry': None, 'prop': None}, {'surface': '-', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '*', 'pos': '名詞', 'prononciation': '', 'subclass1': 'サ変接続', 'subclass2': '*', 'subclass3': '*', 'surface': '-', 'yomi': ''}, 'geometry': None, 'prop': None}, {'surface': '1', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '*', 'pos': '名詞', 'prononciation': '', 'subclass1': '数', 'subclass2': '*', 'subclass3': '*', 'surface': '1', 'yomi': ''}, 'geometry': None, 'prop': None}, {'surface': '-', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '*', 'pos': '名詞', 'prononciation': '', 'subclass1': 'サ変接続', 'subclass2': '*', 'subclass3': '*', 'surface': '-', 'yomi': ''}, 'geometry': None, 'prop': None}], 'address_properties': {'id': 11460296, 'name': '1番', 'x': 139.758148, 'y': 35.692332, 'level': 7, 'note': None, 'fullname': ['東京都', '千代田区', '一ツ橋', '二丁目', '1番']}}}, {'type': 'Feature', 'geometry': None, 'properties': {'surface': '2', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '*', 'pos': '名詞', 'prononciation': '', 'subclass1': '数', 'subclass2': '*', 'subclass3': '*', 'surface': '2', 'yomi': ''}}}, {'type': 'Feature', 'geometry': None, 'properties': {'surface': 'に', 'node_type': 
'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': 'に', 'pos': '助詞', 'prononciation': 'ニ', 'subclass1': '格助詞', 'subclass2': '一般', 'subclass3': '*', 'surface': 'に', 'yomi': 'ニ'}}}, {'type': 'Feature', 'geometry': None, 'properties': {'surface': 'あり', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '五段・ラ行', 'conjugation_type': '連用形', 'original_form': 'ある', 'pos': '動詞', 'prononciation': 'アリ', 'subclass1': '自立', 'subclass2': '*', 'subclass3': '*', 'surface': 'あり', 'yomi': 'アリ'}}}, {'type': 'Feature', 'geometry': None, 'properties': {'surface': 'ます', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '特殊・マス', 'conjugation_type': '基本形', 'original_form': 'ます', 'pos': '助動詞', 'prononciation': 'マス', 'subclass1': '*', 'subclass2': '*', 'subclass3': '*', 'surface': 'ます', 'yomi': 'マス'}}}, {'type': 'Feature', 'geometry': None, 'properties': {'surface': '。', 'node_type': 'NORMAL', 'morphemes': {'conjugated_form': '*', 'conjugation_type': '*', 'original_form': '。', 'pos': '記号', 'prononciation': '。', 'subclass1': '句点', 'subclass2': '*', 'subclass3': '*', 'surface': '。', 'yomi': '。'}}}]
This sample code performs the following steps.

1. Import the `pygeonlp.api package <pygeonlp.api.html>`_.
2. Call `api.init() <pygeonlp.api.html#pygeonlp.api.init>`_ to make pygeonlp.api available.
3. Run `api.geoparse() <pygeonlp.api.html#pygeonlp.api.geoparse>`_ with the jageocoder option.

When the jageocoder parameter is set to True, the parser extracts place
names first, then checks substrings that may be address strings with the
geocoder, and returns those that can be parsed as addresses as address
nodes.

An address node has ADDRESS as its ``node_type``.
Also, like place-name nodes, an address node becomes a GeoJSON Feature
object when it is JSON-encoded.

If `jageocoder=True` is specified while the address dictionary is not
installed, a ParseError exception is raised.
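For instance, to select only the address nodes from the result (a small
illustrative snippet; the surface string below is taken from the output
shown above):

.. code-block :: plaintext

   >>> features = api.geoparse('NIIは千代田区一ツ橋2-1-2にあります。', jageocoder=True)
   >>> [f['properties']['surface'] for f in features
   ...  if f['properties']['node_type'] == 'ADDRESS']
   ['千代田区一ツ橋2-1-']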
Introduction
============
.. cpp:namespace:: mcrl2::utilities
This library holds functionality that does not (or not yet) fit in any of the other libraries. It mainly contains functionality that simplifies the use of other libraries or combinations thereof. The purpose of bundling this functionality is to encourage reuse.
Much of the current functionality should at some point be integrated in one of the other libraries. Please contact any of the developers when you think this is the case.
Structure
=========
The header files of the utilities library are roughly organised as depicted below.
.. figure:: /_static/utilities/layout.png
:align: center
The top directory is mcrl2, containing a header file with toolset specific build information and the utilities directory.
The command line interfacing sublibrary standardises some more aspects of tool command line interfaces.
.. _cli_library:
CLI Library
===========
Introduction
------------
A set of user interface guidelines and conventions has been compiled to standardise user interfaces across the tools in the mCRL2 toolset. The purpose of this library is to simplify creation and maintenance of standard conforming command line user interfaces of tools in the mCRL2 toolset.
Concepts
--------
Here we introduce the set of concepts involved.
Command line interface
^^^^^^^^^^^^^^^^^^^^^^
A command line interface is an interaction mechanism for software systems based on textual commands. The system awaits the next command, at which point it interprets this command and starts waiting again for the next command.
A shell is an example of a command line interface that provides a user with access to services typically services provided by the operating system kernel. The programs we would like to consider, mCRL2 tools, are typically started by feeding a command to a shell. The command that starts a tool program is an integral part of the command line interface of that program.
Command
^^^^^^^
A command is a sequence of characters (also called a string) that is used to invoke tools.
Command Arguments
^^^^^^^^^^^^^^^^^
Commands consist of a part that identifies a tool and optional arguments that affect the behaviour of that tool. We distinguish two types of arguments: options and non-option arguments.
An option is a special command argument that is used to trigger optional behaviour in a tool. Every option has a long identifier and optionally a single-character short identifier. Additional structure is imposed on options; for one, they need to be distinguishable from the arguments. To disambiguate between arguments and option identifiers, the short and long option identifiers are prefixed with '-' and '--' respectively. An extension taken from the getopt library (a well-known C-style command line parsing library) is the chaining of short option identifiers. For example -bc2 is treated the same as -b -c2 (under context conditions stated below).
For the sake of completeness, the following is a full EBNF(-like) grammar for commands::
command ::= white-space * tool-name ( white-space+ ( option | argument ) ) * white-space *
white-space ::= [ \t\n\r]
argument ::= [^ \t\n\r-] + | '"' [^"]* '"' | "'" [^']* "'"
option ::= ("--" long-option [ "=" argument ] ) | ("-" short-options [ white-space * argument ])
long-option ::= alpha-low ( long-option-character ) *
short-options ::= ( short-option-character ) +
long-option-character ::= '-' | digit | alpha-low
short-option-character ::= digit | alpha-low | alpha-high
alpha-low ::= 'a' | 'b' | 'c' | ... | 'z'
alpha-high ::= 'A' | 'B' | 'C' | ... | 'Z'
digit ::= '0' | '1' | '2' | ... | '9'
Context conditions
^^^^^^^^^^^^^^^^^^
An option argument is called optional (otherwise mandatory) if a default value is available that is assumed present as argument if the option was found without argument. The option identified by a short or long identifier is associated with the information that it either expects no arguments, an optional argument or a mandatory argument. A default value must be associated with every optional argument. Finally, no white-space is allowed between a short option identifier and an optional argument. For example, '-oA' (and not '-o A') passes argument A to option o, and '-o ' selects the default value for option o.
For the chaining of short options it is required that all options except the last in the chain take no arguments (so not even an optional argument). The reason is that there is no reliable way to disambiguate between option arguments and option identifiers. Everything that follows the first option in the chain that takes an optional or mandatory argument is taken as its argument.
Library interface
-----------------
The public library interface consists of two classes, one for the specification of a command line interface and the other for the actual parsing of a command using an interface specification.
Objects of class mcrl2::utilities::interface_description contain an interface specification and a description of this interface. The specification part consists of a set of option specifications, each containing a long identifier, a description of the option, and optionally an argument specification and short identifier. The argument specification describes whether the argument is mandatory or optional (in the latter case it also specifies a default value). The descriptive part consists of some general interface information; every option and option argument is equipped with a textual description of its use.
Up to here functionality focusses on specifying input. The user interface conventions also mention standardised output. Formatting functionality is available for:
* printing a textual interface description (for use with -h or --help option),
* printing a copyright message,
* printing a man page,
* version information (--version option),
* error reporting for command line parsing.
Especially the error reporting functionality can be useful for tool developers in situations where problems arise during processing the results of command line parsing.
Parsing commands against an interface specification and accessing the results can be done using an mcrl2::utilities::command_line_parser object. The output of parsing is the set of options and arguments associated to options that were part of the input command. When parsing finishes without problems parse results are available for inspection. On a parse error an exception is thrown, with a properly formatted error description as message.
Important usability notes
^^^^^^^^^^^^^^^^^^^^^^^^^
The interface conventions specify a number of standard options:
#. for messaging \-\-verbose (-v), \-\-quiet (-q), \-\-debug (-d), and
#. for strategy selection for rewriting using the rewrite library
If the tool uses the core messaging layer, it is necessary to include mcrl2/core/messaging.h prior to the header file of this library in order to activate automatic handling of messaging options on the command line. Similarly if a tool uses the rewriter library, it is necessary to include mcrl2/data/rewriter.h prior to header files of this library to activate handling of rewriter options.
Tutorial
--------
There is no tutorial for the use of this library, the reference documentation contains a number of small examples on the use of this library.
The command line interfacing library is part of the mCRL2 utilities library. It contains only infrastructure functionality for the construction of tools that provide the doorway to the core functionality of the mCRL2 toolset. The references pages are part of the utilities library reference pages.
.. _tool_classes:
Tool classes
============
To simplify the creation of a tool, a number of tool classes is available in the
Utilities Library. They all inherit from the class `tool`, and they can be found
in the namespace `utilities::tools`. The main purpose of the tool classes is to
standardize the behavior of tools. Tool classes use the :ref:`cli_library` for
handling command line arguments.
Using the tool classes ensure that all tools adhere to the following
guidelines
Tool interface guidelines
-------------------------
Command line interface
^^^^^^^^^^^^^^^^^^^^^^
The command line interface of each tool should adhere to the following guidelines.
Options
"""""""
Options can be provided in the following two forms:
* a long form (mandatory): ``--option``, where ``option`` is a string of the
form ``[a-z][a-z0-9\-]*``;
* a short form (strongly recommended): ``-o``, where ``o`` is a
character of the form ``[a-zA-Z0-9]``. Furthermore, the options should
adhere to the following:
* Options may take arguments, either mandatory or optionally;
the mandatory argument of an option must be accepted as ``--option=ARG``
for long forms and as ``-oARG`` or ``-o␣ARG`` for short forms, where
``␣`` stands for one or more whitespace characters.
The optional argument of an option must be accepted as ``--option=ARG`` for
long forms and as ``-oARG`` for short forms.
* Short forms of options may be concatenated, where the last option in the chain
may take an argument. For instance, given options ``-o`` and
``-p`` where the latter takes an argument ``ARG``, the chain
``-opARG`` is valid (but ``-pARGo`` is not).
* Users should not be allowed to specify an option more than once.
* Every tool should provide the following standard options::
| -q, --quiet do not display warning messages
| -v, --verbose display short intermediate messages
| -d, --debug display detailed intermediate messages
| -h, --help display help information
| --version display version information
* Every tool that utilises ''rewriting'' should additionally provide the
following option::
| -rNAME, --rewriter=NAME use rewrite strategy NAME:
| 'jitty' for jitty rewriting (default),
| 'jittyp' for jitty rewriting with prover,
| 'jittyc' for compiled jitty rewriting.
Input and output files
""""""""""""""""""""""
Some tools require input and/or output files; these include
transformation and conversion tools (but not GUI tools). The most important
input file and the most important output file (if any) should be accepted as
optional command line arguments, in the following way:
* the first argument is treated as the input file, the second argument is
treated as the output file (if present);
* when the input file is not supplied, input is read from ``stdin``;
* when the output file is not supplied, output is written to ``stdout``.
It is only allowed to deviate from these rules if it is technically
infeasible to read from ``stdin`` or write to ``stdout``.
Furthermore, the following features are not allowed:
* designate the input file without its extension, e.g.
* wrong: ``mcrl22lps abp``
* right: ``mcrl22lps abp.mcrl2``
* option ``-`` to indicate input should be read from ``stdin``, e.g.
* wrong: ``... | lpsrewr - abp.rewr.lps``
* right: ``... | lpsrewr > abp.rewr.lps``
Exit codes
""""""""""
The command line interface should have an exit code of ``0`` upon
successful termination, and non-zero upon unsuccessful termination. Success here
means that during executing of the tool, no errors have occurred.
No special meaning may be assigned to specific non-zero exit codes.
Handling interface errors
"""""""""""""""""""""""""
When parsing the command line, errors may be encountered, for instance due to an
invalid number of arguments, unrecognised options or illegal arguments to
options. When such errors are encountered the following actions should be taken,
depending on whether the tool has a GUI or not:
A tool that does not have a GUI should print the following message to ``stderr``::
TOOL: ERROR_MSG
Try `TOOL --help' for more information.
where:
* ``TOOL`` stands for the name of the tool that the user called, i.e. ``argv[0]``;
* ``ERROR_MSG`` stands for the error message corresponding to the first
error that is encountered when parsing the command line. After that, the tool
should terminate with exit code ``1``.
A tool that has a GUI should show an error message dialog containing the error
message corresponding to the first error that is encountered when parsing
the command line.
Exceptions
""""""""""
It is not allowed for tools to pass unhandled exceptions to the operating system.
Graphical user interface
^^^^^^^^^^^^^^^^^^^^^^^^
Every tool that has a graphical user interface tool should provide a help
menu containing the following menu items:
* Contents: a link to the tool user manual;
* About: a message dialog containing the tool version information.
Use of the :cpp:class:`mcrl2::utilities::qt::qt_tool` class takes care of both by
default. This class must be used for all QT tools to get the correct
command line interface behaviour.
Help and version information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Tool help and version information should adhere to the following guidelines.
Help information
""""""""""""""""
Help information should be provided by the command line option ``-h, --help``.
It basically is a condensed version of the tool user manual in plain text with a
maximum width of 80 characters.
Version information
"""""""""""""""""""
Version information should be provided by:
* the command line option ``--version``;
* the ``About`` menu item.
Available tool classes
----------------------
The table below gives an overview of the
available tool classes, and the command line options that they handle.
.. table:: Tool classes and their supported command line arguments
   ================================================================================ =============================================================================
   tool class                                                                       command line arguments
   ================================================================================ =============================================================================
   class :cpp:class:`mcrl2::utilities::tool`                                        handles ``--quiet``, ``--verbose``, ``--debug``, ``--help`` and ``--version``
   class :cpp:class:`mcrl2::utilities::input_tool`                                  in addition handles a positional input file argument
   class :cpp:class:`mcrl2::utilities::input_output_tool`                           in addition handles a positional output file argument
   template <typename Tool> class :cpp:class:`mcrl2::utilities::rewriter_tool`      extends a tool with a ``--rewriter`` option
   template <typename Tool> class :cpp:class:`mcrl2::utilities::pbes_rewriter_tool` extends a tool with ``--rewriter`` and ``--pbes-rewriter`` options
   ================================================================================ =============================================================================
The class :cpp:class:`mcrl2::utilities::rewriter_tool` makes strategies of the
data rewriter available to the user. The class
:cpp:class:`mcrl2::utilities::pbes_rewriter_tool` makes pbes rewriters available
to the user.
Example
-------
A good example to look at is the pbesparelm tool. Since this is a tool that
takes a file as input and also writes output to a file, it derives from the
class :cpp:class:`mcrl2::utilities::input_output_tool`. It can be found at
``tools/release/pbesparelm/pbesparelm.cpp``.
In the constructor a few settings are provided.
This is enough to create a tool with the follow help message::
Usage: pbesparelm [OPTION]... [INFILE [OUTFILE]]
Reads a file containing a PBES, and applies parameter elimination to it. If
OUTFILE is not present, standard output is used. If INFILE is not present,
standard input is used.
Tool properties
^^^^^^^^^^^^^^^
.. table:: Tool properties
   ======== ========================================
   Property Meaning
   ======== ========================================
   synopsis Summary of command-line syntax
   what-is  Short description of what the tool does
   ======== ========================================
Creating a new tool
^^^^^^^^^^^^^^^^^^^
To create a new tool, the following needs to be done:
#. Override the :cpp:member:`run` member function
The actual execution of the tool happens in the virtual member function :cpp:member:`run`.
     The developer has to override this function to implement the behaviour of the tool.
The :cpp:member:`run` function is called from the :cpp:member:`execute` member function, after the
command line parameters have been parsed.
#. Set some parameters in the constructor
In the constructor of a tool, one has to supply a name for the tool,
an author and a description:
.. code-block:: c++
class my_tool: public input_tool
{
public:
my_tool()
: input_tool(
"mytool",
"John Doe",
"Reads a file and processes it"
)
{}
};
  #. Optionally add additional command line arguments
Additional command line arguments can be specified by overriding the virtual
methods :cpp:member:`parse_options` and :cpp:member:`add_options`:
.. code-block:: c++
class pbes_constelm_tool: public filter_tool_with_pbes_rewriter
{
protected:
bool m_compute_conditions;
void parse_options(const command_line_parser& parser)
{
m_compute_conditions = parser.options.count("compute-conditions") > 0;
}
void add_options(interface_description& clinterface)
{
clinterface.add_option("compute-conditions", "compute propagation conditions", 'c');
}
...
};
The selection of rewriters offered to the user can be changed
by overriding the method :cpp:member:`available_rewriters`.
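Putting the pieces together, a newly created tool might look like the
following sketch (hypothetical code; in particular, check the exact
constructor arguments and the signature of :cpp:member:`run` against the
headers):

.. code-block:: c++

   #include "mcrl2/utilities/input_output_tool.h"

   using namespace mcrl2::utilities::tools;

   class my_tool: public input_output_tool
   {
     public:
       my_tool()
         : input_output_tool(
             "mytool",
             "John Doe",
             "Reads a file and processes it")
       {}

       bool run()
       {
         // Read from input_filename(), write to output_filename(),
         // and return true on success.
         return true;
       }
   };

   int main(int argc, char** argv)
   {
     // execute() parses the command line (including the standard
     // options) and then calls run().
     return my_tool().execute(argc, argv);
   }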
.. _logging_library:
Logging Library
===============
Introduction
------------
Printing of logging and debug messages has been standardised throughout the
mCRL2 toolset through this logging library. The facilities provided by this
library should be used throughout the toolset. The library is inspired by the
description in `"Logging in C++" by P. Marginean <http://drdobbs.com/cpp/201804215>`_.
All code of this library can be found in the mcrl2::log namespace.
Concepts
--------
The logging library incorporates the concepts introduced in this section.
Log level
^^^^^^^^^
The type :cpp:type:`log_level_t` describes the various log levels that we identify.
The log level describes the severity of the message.
.. note::
No message should ever be printed to the quiet log level. This level
is meant to disable all messages.
Hint
^^^^
Hints can be used to distinguish between separate components in the toolset.
The logging library allows controlling logging statements with different hints
separately. One can e.g. change the log level for a specific hint, or attach
another output stream to a specific hint, allowing the library user to write
specific messages to a file.
OutputPolicy
^^^^^^^^^^^^
The output policy controls the way messages are output.
By default the file_output policy is used, which writes a message to the
file related to the hint of the current message.
Library interface
-----------------
The main routine in the library is :cpp:func:`mCRL2log(level, hint)`, where level is a
log level, and hint is an (optional) string hint. The routine returns an output
stream to which a single log message may be printed. Printing defaults
to stderr.
Maximal log level (compile time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The library includes a compile time variable :c:macro:`MCRL2_MAX_LOG_LEVEL`, which,
if not set, defaults to debug. All log messages with a log level
higher than :c:macro:`MCRL2_MAX_LOG_LEVEL` will be disabled at compile time,
meaning they will not be present in the generated executable.
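For example, to compile away everything above the verbose level (a sketch
following the pattern used in the tutorial below):

.. code-block:: c++

   // Define before the first include of the logger header,
   // or pass it as a compiler flag (-DMCRL2_MAX_LOG_LEVEL=...).
   #define MCRL2_MAX_LOG_LEVEL mcrl2::log::verbose
   #include "mcrl2/utilities/logger.h"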
Maximal log level (runtime)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The maximal reporting level can be set using
:cpp:member:`mcrl2_logger::set_reporting_level(level)`, by default info is assumed.
Setting output stream
^^^^^^^^^^^^^^^^^^^^^
The output stream of the logger can be set to be any file using
:cpp:member:`mcrl2_logger::output_policy_t::set_stream(file_pointer)`. Note that
file_pointer in this case can also be stderr or stdout. The default
output stream is stderr.
Incorporating hints
^^^^^^^^^^^^^^^^^^^
For both the reporting level and the stream, the routines to change them have
an optional hint argument that can be used to override the defaults for a
specific hint. To set a reporting level for a specific hint "hint" one can
use :cpp:member:`mcrl2_logger::set_reporting_level(level, "hint")`, likewise, for a stream
one can use :cpp:member:`mcrl2_logger::output_policy_t::set_stream(file_pointer, "hint")`.
In order to remove specific treatment of a hint, the routines
:cpp:member:`mcrl2_logger::clear_reporting_level("hint")` an
:cpp:member:`mcrl2_logger::output_policy_t::clear_stream("hint")` can be used.
Formatting the output
^^^^^^^^^^^^^^^^^^^^^
By default each line in the output is prefixed with a fixed string,
including a timestamp, the log level and, if provided, a hint. Furthermore,
the user of the library can control indentation (at a global level) using
the routines :cpp:member:`mcrl2_logger::indent()` and
:cpp:member:`mcrl2_logger::unindent()`.
Tutorial
--------
In this section we describe a typical use case of the logging library.
To enable logging, first include the header file.
.. code-block:: c++
#include "mcrl2/utilities/logger.h"
If you want to control the log levels that are compiled into the code, you
should set the following macro *before the first include* of logger.h, or
you should provide it as a compiler flag.
.. code-block:: c++
#define MCRL2_MAX_LOG_LEVEL debug
this only compiles logging statements up to and including debug
(and is actually the default).
Now let's start our main routine as usual
.. code-block:: c++
using namespace mcrl2;
int main(int argc, char** argv)
{
We only allow reporting of messages up to verbose, so we do not print
messages of level debug or higher.
.. code-block:: c++
log::mcrl2_logger::set_reporting_level(log::verbose);
We want this information to be printed to stderr, which is the default.
Let's do some logging.
.. code-block:: c++
mCRL2log(log::info) << "This shows the way info messages are printed, using the default messages" << std::endl;
mCRL2log(log::debug) << "This line is not printed, and the function " << my_function() << " is not evaluated" << std::endl;
Now we call an algorithm :cpp:func:`my_algorithm`, which we will define later.
The algorithm uses "my_algorithm" as hint for logging, and we want to write
its output to a file. First we create a file logger_test_file.txt to which
we log, and assign it to the hint "my_algorithm".
.. code-block:: c++
FILE* plogfile;
plogfile = fopen("logger_test_file.txt" , "w");
if(plogfile == NULL)
{
throw mcrl2::runtime_error("Cannot open logfile for writing");
}
log::mcrl2_logger::output_policy_t::set_stream(plogfile, "my_algorithm");
log::mcrl2_logger::set_reporting_level(log::debug3, "my_algorithm");
// Execute algorithm
my_algorithm();
// Do not forget to close the file.
fclose(plogfile);
}
Let's take a look at an implementation of ``my_algorithm()``.
.. code-block:: c++
void do_something_special()
{
mCRL2log(log::debug3, "my_algorithm") << "doing something special" << std::endl;
}
std::string my_algorithm()
{
mCRL2log(log::debug, "my_algorithm") << "Starting my_algorithm" << std::endl;
int iterations = 3;
mCRL2log(log::debug1, "my_algorithm") << "A loop with " << iterations << " iterations" << std::endl;
log::mcrl2_logger::indent();
for(int i = 0; i < iterations; ++i)
{
mCRL2log(log::debug2, "my_algorithm") << "Iteration " << i << std::endl;
if(i >= 2)
{
log::mcrl2_logger::indent();
mCRL2log(log::debug3, "my_algorithm") << "iteration number >= 2, treating specially" << std::endl;
do_something_special();
log::mcrl2_logger::unindent();
}
}
log::mcrl2_logger::unindent();
return "my_algorithm";
}
Note that, with the settings so far, only the first debug statement in
:cpp:func:`my_algorithm` will be printed; the other log messages are compiled away due
to the setting of :c:macro:`MCRL2_MAX_LOG_LEVEL`. To overcome this, the define before
the include of ``logger.h`` must allow for more debug levels, e.g. by setting
it as follows
.. code-block:: c++
#define MCRL2_MAX_LOG_LEVEL log::debug3
This does not yet suffice; setting this only made sure that the logging
statements of all levels up to and including debug3 are actually compiled
into the code. We still have to enable the logging statements at run-time,
because so far we have only allowed logging of messages up to verbose level.
Therefore we should add the following anywhere before the execution of
the second debug print in :cpp:func:`my_algorithm`
.. code-block:: c++
log::mcrl2_logger::set_reporting_level(log::debug3, "my_algorithm");
The complete code now looks as follows:
.. code-block:: c++
#define MCRL2_MAX_LOG_LEVEL mcrl2::log::debug3
#include "mcrl2/utilities/logger.h"
using namespace mcrl2;
void do_something_special()
{
mCRL2log(log::debug3, "my_algorithm") << "doing something special" << std::endl;
}
std::string my_algorithm()
{
mCRL2log(log::debug, "my_algorithm") << "Starting my_algorithm" << std::endl;
int iterations = 3;
mCRL2log(log::debug1, "my_algorithm") << "A loop with " << iterations << " iterations" << std::endl;
log::mcrl2_logger::indent();
for(int i = 0; i < iterations; ++i)
{
mCRL2log(log::debug2, "my_algorithm") << "Iteration " << i << std::endl;
if(i >= 2)
{
log::mcrl2_logger::indent();
mCRL2log(log::debug3, "my_algorithm") << "iteration number >= 2, treating specially" << std::endl;
do_something_special();
log::mcrl2_logger::unindent();
}
}
log::mcrl2_logger::unindent();
return "my_algorithm";
}
int main(int argc, char** argv)
{
log::mcrl2_logger::set_reporting_level(log::verbose);
mCRL2log(log::info) << "This shows the way info messages are printed, using the default messages" << std::endl;
mCRL2log(log::debug) << "This line is not printed, and the function " << my_algorithm() << " is not evaluated" << std::endl;
FILE* plogfile;
plogfile = fopen("logger_test_file.txt" , "w");
if(plogfile == NULL)
{
throw std::runtime_error("Cannot open logfile for writing");
}
log::mcrl2_logger::output_policy_t::set_stream(plogfile, "my_algorithm");
log::mcrl2_logger::set_reporting_level(log::debug3, "my_algorithm");
// Execute algorithm
my_algorithm();
// Do not forget to close the file.
fclose(plogfile);
}
Note that in this code, the logging of :cpp:func:`my_algorithm` is done to the file
logger_test_file.txt, whereas the other log messages are printed to stderr.
After execution, stderr looks as follows::
[11:51:02.639 info] This shows the way info messages are printed, using the default messages
The file logger_test_file.txt contains the following::
[11:52:35.381 my_algorithm::debug] Starting my_algorithm
[11:52:35.381 my_algorithm::debug] A loop with 3 iterations
[11:52:35.381 my_algorithm::debug] Iteration 0
[11:52:35.381 my_algorithm::debug] Iteration 1
[11:52:35.381 my_algorithm::debug] Iteration 2
[11:52:35.381 my_algorithm::debug] iteration number >= 2, treating specially
[11:52:35.381 my_algorithm::debug] doing something special
| 42.938556 | 673 | 0.697456 |
71d69cde980013fa7173d3894d465bcdb12b95a6 | 12,528 | rst | reStructuredText | docs/release/results/tc012-memory-read-write-bandwidth.rst | upfront710/yardstick | 2c3898f2ca061962cedbfc7435f78b59aa39b097 | [
"Apache-2.0"
] | 28 | 2017-02-07T07:46:42.000Z | 2021-06-30T08:11:06.000Z | docs/release/results/tc012-memory-read-write-bandwidth.rst | upfront710/yardstick | 2c3898f2ca061962cedbfc7435f78b59aa39b097 | [
"Apache-2.0"
] | 6 | 2018-01-18T08:00:54.000Z | 2019-04-11T04:51:41.000Z | docs/release/results/tc012-memory-read-write-bandwidth.rst | upfront710/yardstick | 2c3898f2ca061962cedbfc7435f78b59aa39b097 | [
"Apache-2.0"
] | 46 | 2016-12-13T10:05:47.000Z | 2021-02-18T07:33:06.000Z | .. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
==================================================
Test results for TC012 memory read/write bandwidth
==================================================
.. toctree::
:maxdepth: 2
Overview of test case
=====================
TC012 measures the rate at which data can be read from and written to the memory (this includes all levels of memory).
In this test case, the bandwidth to read data from memory and then write data to the same memory location is measured.
Metric: memory bandwidth
Unit: MBps
Euphrates release
-----------------
Test results per scenario and pod (higher is better):
{
"os-nosdn-nofeature-ha:lf-pod1:apex": [23126.325],
"os-odl-nofeature-noha:lf-pod1:apex": [23123.975],
"os-odl-nofeature-ha:lf-pod1:apex": [23068.965],
"os-odl-nofeature-ha:lf-pod2:fuel": [22972.46],
"os-nosdn-nofeature-ha:lf-pod2:fuel": [22912.015],
"os-nosdn-nofeature-noha:lf-pod1:apex": [22911.35],
"os-ovn-nofeature-noha:lf-pod1:apex": [22900.93],
"os-nosdn-bar-ha:lf-pod1:apex": [22767.56],
"os-nosdn-bar-noha:lf-pod1:apex": [22721.83],
"os-odl-sfc-noha:lf-pod1:apex": [22511.565],
"os-nosdn-ovs-ha:lf-pod2:fuel": [22071.235],
"os-odl-sfc-ha:lf-pod1:apex": [21646.415],
"os-nosdn-nofeature-ha:flex-pod2:apex": [20229.99],
"os-nosdn-ovs-noha:ericsson-virtual4:fuel": [17491.18],
"os-nosdn-ovs-noha:ericsson-virtual1:fuel": [17474.965],
"os-nosdn-ovs-ha:ericsson-pod1:fuel": [17141.375],
"os-nosdn-nofeature-ha:ericsson-pod1:fuel": [17134.99],
"os-odl-nofeature-ha:ericsson-pod1:fuel": [17124.27],
"os-nosdn-ovs-noha:ericsson-virtual2:fuel": [16599.325],
"os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [16309.13],
"os-odl-nofeature-noha:ericsson-virtual4:fuel": [16137.48],
"os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [15960.76],
"os-nosdn-ovs-noha:ericsson-virtual3:fuel": [15685.505],
"os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [15536.65],
"os-odl-nofeature-noha:ericsson-virtual3:fuel": [15431.795],
"os-odl-nofeature-noha:ericsson-virtual2:fuel": [15129.27],
"os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [15125.51],
"os-odl_l3-nofeature-ha:huawei-pod2:compass": [15030.65],
"os-nosdn-nofeature-ha:huawei-pod2:compass": [15019.89],
"os-odl-sfc-ha:huawei-pod2:compass": [15005.11],
"os-nosdn-bar-ha:huawei-pod2:compass": [14975.645],
"os-nosdn-kvm-ha:huawei-pod2:compass": [14968.97],
"os-odl_l2-moon-ha:huawei-pod2:compass": [14968.97],
"os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [14741.425],
"os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [14714.28],
"os-odl_l2-moon-noha:huawei-virtual4:compass": [14674.38],
"os-odl_l2-moon-noha:huawei-virtual3:compass": [14664.12],
"os-odl-sfc-noha:huawei-virtual4:compass": [14587.62],
"os-nosdn-nofeature-noha:huawei-virtual3:compass": [14539.94],
"os-nosdn-nofeature-noha:huawei-virtual4:compass": [14534.54],
"os-odl_l3-nofeature-noha:huawei-virtual3:compass": [14511.925],
"os-nosdn-nofeature-noha:huawei-virtual1:compass": [14496.875],
"os-odl_l2-moon-ha:huawei-virtual3:compass": [14378.87],
"os-odl_l3-nofeature-noha:huawei-virtual4:compass": [14366.69],
"os-nosdn-nofeature-ha:huawei-virtual4:compass": [14356.695],
"os-odl_l3-nofeature-ha:huawei-virtual3:compass": [14341.605],
"os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [14327.78],
"os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [14313.81],
"os-nosdn-nofeature-ha:intel-pod18:joid": [14284.365],
"os-nosdn-nofeature-noha:huawei-pod12:joid": [14157.99],
"os-nosdn-nofeature-ha:huawei-pod12:joid": [14144.86],
"os-nosdn-openbaton-ha:huawei-pod12:joid": [14138.9],
"os-nosdn-kvm-noha:huawei-virtual3:compass": [14117.7],
"os-nosdn-nofeature-ha:huawei-virtual3:compass": [14097.255],
"os-nosdn-nofeature-noha:huawei-virtual2:compass": [14085.675],
"os-odl-sfc-noha:huawei-virtual3:compass": [14071.605],
"os-nosdn-openbaton-ha:intel-pod18:joid": [14059.51],
"os-odl-sfc-ha:huawei-virtual4:compass": [14057.155],
"os-odl-sfc-ha:huawei-virtual3:compass": [14051.945],
"os-nosdn-bar-ha:huawei-virtual3:compass": [14020.74],
"os-nosdn-kvm-noha:huawei-virtual4:compass": [14017.915],
"os-nosdn-nofeature-noha:intel-pod18:joid": [13954.27],
"os-odl_l3-nofeature-ha:huawei-virtual4:compass": [13915.87],
"os-odl_l3-nofeature-ha:huawei-virtual2:compass": [13874.59],
"os-nosdn-nofeature-noha:intel-pod5:joid": [13812.215],
"os-odl_l2-moon-ha:huawei-virtual4:compass": [13777.59],
"os-nosdn-bar-ha:huawei-virtual4:compass": [13765.36],
"os-nosdn-nofeature-ha:huawei-virtual1:compass": [13559.905],
"os-nosdn-nofeature-ha:huawei-virtual2:compass": [13477.52],
"os-nosdn-kvm-ha:huawei-virtual3:compass": [13255.17],
"os-nosdn-nofeature-ha:intel-pod5:joid": [13189.64],
"os-nosdn-kvm-ha:huawei-virtual4:compass": [12718.545],
"os-nosdn-nofeature-ha:huawei-virtual9:compass": [12559.445],
"os-nosdn-nofeature-noha:huawei-virtual8:compass": [12409.66],
"os-nosdn-kvm-noha:huawei-virtual8:compass": [8832.515],
"os-odl-sfc-ha:huawei-virtual8:compass": [8823.955],
"os-odl-nofeature-ha:arm-pod5:fuel": [4398.08],
"os-nosdn-nofeature-ha:arm-pod5:fuel": [4375.75],
"os-nosdn-nofeature-ha:arm-pod6:fuel": [4260.77],
"os-odl-nofeature-ha:arm-pod6:fuel": [4259.62]
}
The influence of the scenario
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: images/tc012_scenario.png
:width: 800px
:alt: TC012 influence of scenario
{
"os-ovn-nofeature-noha": [22900.93],
"os-nosdn-bar-noha": [22721.83],
"os-nosdn-ovs-ha": [22063.67],
"os-odl-nofeature-ha": [17146.05],
"os-odl-nofeature-noha": [16017.41],
"os-nosdn-ovs-noha": [16005.74],
"os-nosdn-nofeature-noha": [15290.94],
"os-nosdn-nofeature-ha": [15038.74],
"os-nosdn-bar-ha": [14972.975],
"os-odl_l2-moon-ha": [14956.955],
"os-odl_l3-nofeature-ha": [14839.21],
"os-odl-sfc-ha": [14823.48],
"os-nosdn-ovs_dpdk-ha": [14822.17],
"os-nosdn-ovs_dpdk-noha": [14725.9],
"os-odl_l2-moon-noha": [14665.4],
"os-odl_l3-nofeature-noha": [14483.09],
"os-odl-sfc-noha": [14373.21],
"os-nosdn-openbaton-ha": [14135.325],
"os-nosdn-kvm-noha": [14020.26],
"os-nosdn-kvm-ha": [13996.02]
}
The influence of the POD
^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: images/tc012_pod.png
:width: 800px
:alt: TC012 influence of the POD
{
"lf-pod1": [22912.39],
"lf-pod2": [22637.67],
"flex-pod2": [20229.99],
"ericsson-virtual1": [17474.965],
"ericsson-pod1": [17127.38],
"ericsson-virtual4": [16219.97],
"ericsson-virtual2": [15652.28],
"ericsson-virtual3": [15551.26],
"huawei-pod2": [15017.2],
"huawei-virtual4": [14266.34],
"huawei-virtual1": [14233.035],
"huawei-virtual3": [14227.63],
"huawei-pod12": [14147.245],
"intel-pod18": [14058.33],
"huawei-virtual2": [13862.85],
"intel-pod5": [13280.32],
"huawei-virtual9": [12559.445],
"huawei-virtual8": [8998.02],
"arm-pod5": [4388.875],
"arm-pod6": [4260.2]
}
Fraser release
--------------
Test results per scenario and pod (higher is better):
{
"os-nosdn-nofeature-ha:lf-pod2:fuel": [21421.795],
"os-odl-sfc-noha:lf-pod1:apex": [21075],
"os-odl-sfc-ha:lf-pod1:apex": [21017.44],
"os-nosdn-bar-noha:lf-pod1:apex": [20991.46],
"os-nosdn-bar-ha:lf-pod1:apex": [20812.405],
"os-ovn-nofeature-noha:lf-pod1:apex": [20694.035],
"os-nosdn-nofeature-noha:lf-pod1:apex": [20672.765],
"os-odl-nofeature-ha:lf-pod2:fuel": [20269.65],
"os-nosdn-calipso-noha:lf-pod1:apex": [20186.32],
"os-odl-nofeature-noha:lf-pod1:apex": [19959.915],
"os-nosdn-ovs-ha:lf-pod2:fuel": [19719.38],
"os-odl-nofeature-ha:lf-pod1:apex": [19654.505],
"os-nosdn-nofeature-ha:lf-pod1:apex": [19391.145],
"os-nosdn-nofeature-noha:intel-pod18:joid": [19378.64],
"os-odl-nofeature-ha:ericsson-pod1:fuel": [19103.43],
"os-nosdn-nofeature-ha:intel-pod18:joid": [18688.695],
"os-nosdn-openbaton-ha:intel-pod18:joid": [18557.95],
"os-nosdn-nofeature-ha:ericsson-pod1:fuel": [17088.61],
"os-nosdn-ovs-ha:ericsson-pod1:fuel": [17040.78],
"os-nosdn-ovs-noha:ericsson-virtual2:fuel": [16057.235],
"os-odl-nofeature-noha:ericsson-virtual4:fuel": [15622.355],
"os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [15422.235],
"os-odl-sfc-ha:huawei-pod2:compass": [15403.09],
"os-odl-nofeature-noha:ericsson-virtual2:fuel": [15141.58],
"os-nosdn-bar-ha:huawei-pod2:compass": [14922.37],
"os-odl_l3-nofeature-ha:huawei-pod2:compass": [14864.195],
"os-nosdn-nofeature-ha:huawei-pod2:compass": [14856.295],
"os-nosdn-kvm-ha:huawei-pod2:compass": [14796.035],
"os-odl-sfc-noha:huawei-virtual4:compass": [14484.375],
"os-nosdn-nofeature-ha:huawei-pod12:joid": [14441.955],
"os-odl-sfc-noha:huawei-virtual3:compass": [14373],
"os-nosdn-nofeature-noha:huawei-virtual4:compass": [14330.44],
"os-nosdn-ovs-noha:ericsson-virtual4:fuel": [14320.305],
"os-odl_l3-nofeature-noha:huawei-virtual3:compass": [14253.715],
"os-nosdn-nofeature-ha:huawei-virtual4:compass": [14203.655],
"os-nosdn-nofeature-noha:huawei-virtual3:compass": [14179.93],
"os-odl-nofeature-ha:zte-pod2:daisy": [14177.135],
"os-nosdn-nofeature-ha:zte-pod2:daisy": [14150.825],
"os-nosdn-nofeature-noha:huawei-pod12:joid": [14100.87],
"os-nosdn-bar-noha:huawei-virtual4:compass": [14033.36],
"os-odl_l3-nofeature-noha:huawei-virtual4:compass": [13963.73],
"os-nosdn-kvm-noha:huawei-virtual3:compass": [13874.775],
"os-nosdn-kvm-noha:huawei-virtual4:compass": [13805.65],
"os-odl_l3-nofeature-ha:huawei-virtual3:compass": [13754.63],
"os-nosdn-nofeature-noha:huawei-virtual2:compass": [13702.92],
"os-nosdn-bar-ha:huawei-virtual3:compass": [13638.115],
"os-odl-sfc-ha:huawei-virtual3:compass": [13637.83],
"os-odl_l3-nofeature-ha:huawei-virtual4:compass": [13635.66],
"os-nosdn-bar-noha:huawei-virtual3:compass": [13635.58],
"os-nosdn-bar-ha:huawei-virtual4:compass": [13544.95],
"os-nosdn-nofeature-ha:huawei-virtual3:compass": [13514.27],
"os-nosdn-nofeature-ha:huawei-virtual1:compass": [13496.45],
"os-odl-sfc-ha:huawei-virtual4:compass": [13475.38],
"os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [12733.19],
"os-nosdn-kvm-ha:huawei-virtual4:compass": [12682.805],
"os-odl-nofeature-ha:arm-pod5:fuel": [4326.11],
"os-nosdn-nofeature-ha:arm-pod6:fuel": [3824.13],
"os-odl-nofeature-ha:arm-pod6:fuel": [3797.795],
"os-nosdn-ovs-ha:arm-pod6:fuel": [3749.91]
}
The influence of the scenario
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: images/tc012_scenario_fraser.png
:width: 800px
:alt: TC012 influence of scenario
{
"os-ovn-nofeature-noha": [20694.035],
"os-nosdn-calipso-noha": [20186.32],
"os-nosdn-openbaton-ha": [18557.95],
"os-nosdn-ovs-ha": [17048.17],
"os-odl-nofeature-noha": [16191.125],
"os-nosdn-ovs-noha": [15790.32],
"os-nosdn-bar-ha": [14833.97],
"os-odl-sfc-ha": [14828.72],
"os-odl_l3-nofeature-ha": [14801.25],
"os-nosdn-kvm-ha": [14700.1],
"os-nosdn-nofeature-ha": [14610.48],
"os-nosdn-nofeature-noha": [14555.975],
"os-odl-sfc-noha": [14508.14],
"os-nosdn-bar-noha": [14395.22],
"os-odl-nofeature-ha": [14231.245],
"os-odl_l3-nofeature-noha": [14161.58],
"os-nosdn-kvm-noha": [13845.685]
}
The influence of the POD
^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: images/tc012_pod_fraser.png
:width: 800px
:alt: TC012 influence of the POD
{
"lf-pod1": [20552.9],
"lf-pod2": [20058.925],
"ericsson-pod1": [18930.78],
"intel-pod18": [18757.545],
"ericsson-virtual4": [15389.465],
"ericsson-virtual2": [15343.79],
"huawei-pod2": [14870.78],
"zte-pod2": [14157.99],
"huawei-pod12": [14126.99],
"huawei-virtual3": [13929.67],
"huawei-virtual4": [13847.155],
"huawei-virtual2": [13702.92],
"huawei-virtual1": [13496.45],
"ericsson-virtual3": [12733.19],
"arm-pod5": [4326.11],
"arm-pod6": [3809.885]
}
| 24.373541 | 120 | 0.637851 |
60997049d785d6498c955bcde791b6eaa37f63d9 | 929 | rst | reStructuredText | docs/source/optimizers/portfolio-regret.rst | wangcj05/allopy | 0d97127e5132df1449283198143994b45fb11214 | [
"MIT"
] | 1 | 2021-04-06T04:33:03.000Z | 2021-04-06T04:33:03.000Z | docs/source/optimizers/portfolio-regret.rst | wangcj05/allopy | 0d97127e5132df1449283198143994b45fb11214 | [
"MIT"
] | null | null | null | docs/source/optimizers/portfolio-regret.rst | wangcj05/allopy | 0d97127e5132df1449283198143994b45fb11214 | [
"MIT"
] | null | null | null | Portfolio Regret Optimizer
==========================
The :class:`PortfolioRegretOptimizer` inherits from the :class:`RegretOptimizer`. `Minimum regret optimization <https://en.wikipedia.org/wiki/Regret_(decision_theory)>`_ is a technique from decision theory for making decisions under uncertainty.
The methods in the :class:`PortfolioRegretOptimizer` are applied only at the first stage of the procedure. The class provides the following convenience methods:
:maximize_returns:
Maximize the returns of the portfolio. You may put in volatility or CVaR constraints for this procedure.
:minimize_volatility:
Minimizes the total portfolio volatility
:minimize_cvar:
Minimizes the conditional value at risk (expected shortfall of the portfolio)
:maximize_sharpe_ratio:
Maximizes the Sharpe ratio of the portfolio.
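A minimal usage sketch (the import path matches the API reference below;
constructor and method arguments are elided because the exact signatures
are documented there)::

    from allopy.optimize import PortfolioRegretOptimizer

    # Hypothetical first-stage run: optimize the chosen objective under
    # each scenario; the regret machinery then combines the results.
    opt = PortfolioRegretOptimizer(...)
    opt.maximize_returns(...)  # volatility/CVaR constraints are optional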
.. autoclass:: allopy.optimize.PortfolioRegretOptimizer
:members:
| 42.227273 | 244 | 0.782562 |
7b0fe2568e603cd79870d1f4fe2e4204a2ba7242 | 1,499 | rst | reStructuredText | rst/doloop.rst | mvz/vb2py | 6ea046f6fc202527a1b3fcd3ef5a67b969dea715 | [
"BSD-3-Clause"
] | 2 | 2015-12-01T10:52:36.000Z | 2021-04-20T05:15:01.000Z | rst/doloop.rst | mvz/vb2py | 6ea046f6fc202527a1b3fcd3ef5a67b969dea715 | [
"BSD-3-Clause"
] | 4 | 2016-07-18T18:28:24.000Z | 2016-07-19T08:30:14.000Z | rst/doloop.rst | mvz/vb2py | 6ea046f6fc202527a1b3fcd3ef5a67b969dea715 | [
"BSD-3-Clause"
] | 3 | 2015-07-15T21:08:19.000Z | 2021-02-25T09:39:12.000Z | vb2Py - Do ... Loop
===================
Contents of this page:
* General_
* `Default Conversion`_
* `List of Options`_
Different forms:
* `Do ... Loop`_
* `Do While ... Loop`_
* `Do ... Loop While`_
* `Do Until ... Loop`_
* `Do ... Loop Until`_
General
-------
All variations of VB's ``Do ... Loop`` construct are converted to an equivalent Python ``while`` block. Pre-conditions are converted to the equivalent condition in the ``while`` statement itself, whereas post-conditions are implemented using an ``if ...: break``. ``Exit Do`` statements are also implemented using ``break``. ``Until`` conditions (pre or post) are implemented by negating the condition itself and do not otherwise affect the structure.
Default Conversion
------------------
Do ... Loop
~~~~~~~~~~~
VB::
Do
Val = Val + 1
If SomeCondition Then Exit Do
Loop
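Python (a plausible conversion following the rules above; the exact vb2py output may differ)::

    while True:
        Val = Val + 1
        if SomeCondition:
            break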
Do While ... Loop
~~~~~~~~~~~~~~~~~
VB::
Do While Condition
Val = Val + 1
If SomeCondition Then Exit Do
Loop
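Python (plausible conversion; the pre-condition moves into the ``while`` statement itself)::

    while Condition:
        Val = Val + 1
        if SomeCondition:
            break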
Do ... Loop While
~~~~~~~~~~~~~~~~~
VB::
Do
Val = Val + 1
If SomeCondition Then Exit Do
Loop While Condition
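Python (plausible conversion; the post-condition becomes a trailing ``if ...: break``)::

    while True:
        Val = Val + 1
        if SomeCondition:
            break
        if not Condition:
            break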
Do Until ... Loop
~~~~~~~~~~~~~~~~~
VB::
Do Until Condition
Val = Val + 1
If SomeCondition Then Exit Do
Loop
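Python (plausible conversion; the ``Until`` pre-condition is negated)::

    while not Condition:
        Val = Val + 1
        if SomeCondition:
            break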
Do ... Loop Until
~~~~~~~~~~~~~~~~~
VB::
Do
Val = Val + 1
If SomeCondition Then Exit Do
Loop Until Condition
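Python (plausible conversion; the ``Until`` post-condition becomes the ``break`` test at the end of the body)::

    while True:
        Val = Val + 1
        if SomeCondition:
            break
        if Condition:
            break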
List of Options
---------------
There are no options for the ``Do ... Loop`` construct.
| 17.430233 | 444 | 0.569046 |
655c7fb7ee6a6ffa9d488de0a0a246ddbe3908de | 2,718 | rst | reStructuredText | README.rst | vbarashko/yandex-search | 93a9fd249785a4159ab1458d708f378da8fb3a80 | [
"MIT"
] | 7 | 2017-09-17T16:19:00.000Z | 2021-02-24T15:35:59.000Z | README.rst | vbarashko/yandex-search | 93a9fd249785a4159ab1458d708f378da8fb3a80 | [
"MIT"
] | 2 | 2017-06-10T19:12:00.000Z | 2019-11-28T06:49:45.000Z | README.rst | vbarashko/yandex-search | 93a9fd249785a4159ab1458d708f378da8fb3a80 | [
"MIT"
] | 4 | 2017-08-02T12:21:03.000Z | 2021-09-07T07:13:31.000Z | =============
Yandex Search
=============
.. image:: https://img.shields.io/pypi/v/yandex-search.svg
:target: https://pypi.python.org/pypi/yandex-search
.. image:: https://img.shields.io/travis/fluquid/yandex-search.svg
:target: https://travis-ci.org/fluquid/yandex-search
.. image:: https://codecov.io/github/fluquid/yandex-search/coverage.svg?branch=master
:alt: Coverage Status
:target: https://codecov.io/github/fluquid/yandex-search
.. image:: https://requires.io/github/fluquid/yandex-search/requirements.svg?branch=master
:alt: Requirements Status
:target: https://requires.io/github/fluquid/yandex-search/requirements/?branch=master
.. image:: http://fluquid.com:8000/api/badge/github.com/fluquid/yandex-search/status.svg?branch=master
:alt: Build Status
:target: http://fluquid.com:8000/github.com/fluquid/yandex-search
Search library for yandex.ru search engine.
Yandex allows **10,000 searches per day** when registered with a validated (international) mobile number.
Example
-------
::
    >>> import yandex_search
    >>> yandex = yandex_search.Yandex(api_user='asdf', api_key='asdf')
>>> yandex.search('"Interactive Saudi"').items
[{
"snippet": "Your Software Development Partner In Saudi Arabia . Since our early days in 2003, our main goal in Interactive Saudi Arabia has been: \"To earn customer respect and maintain long-term loyalty\".",
"url": "http://www.interactive.sa/en",
"title": "Interactive Saudi Arabia Limited",
"domain": "www.interactive.sa"
}]
Getting Started
---------------
* register account: https://passport.yandex.ru/registration
* use google translate addon (right-click "translate page")
* provide an (international) mobile phone number to unlock 10k queries/day
* configure yandex: https://xml.yandex.ru/settings.xml
* Navigate to "Settings"
* switch language to english in bottom left (En/Ru)
* enter email for "Email notifications"
* set "Search type" to "Worldwide"
* set "Main IP-address" to your querying machine
* "I accept the terms of License Agreement"
* Save
* Navigate to "Test"
* "? user = " is your credentials username
* "& key = " is your crednetials key
Notes
-----
* Yandex highlights matching terms, leading to extra whitespace from ``' '.join``
Alternatives
------------
* pyyaxml is py2-only and was giving me grief ;)
Documentation
-------------
search operators:
* https://yandex.com/support/search/how-to-search/search-operators.html
settings:
* https://xml.yandex.ru/settings.xml
docs:
* https://tech.yandex.ru/xml/doc/dg/concepts/restrictions-docpage/
* https://yandex.com/support/search/robots/search-api.html
| 31.604651 | 227 | 0.688374 |
ecc988f3065318ba04c34fd77a1a8da8c38ab59f | 40,716 | rst | reStructuredText | doc/sphinx/manpages/pegasus-mpi-cluster.rst | ryantanaka/pegasus | ffb2b5a41cc7dd6219fe88b6dfd79880899d0928 | [
"Apache-2.0"
] | null | null | null | doc/sphinx/manpages/pegasus-mpi-cluster.rst | ryantanaka/pegasus | ffb2b5a41cc7dd6219fe88b6dfd79880899d0928 | [
"Apache-2.0"
] | null | null | null | doc/sphinx/manpages/pegasus-mpi-cluster.rst | ryantanaka/pegasus | ffb2b5a41cc7dd6219fe88b6dfd79880899d0928 | [
"Apache-2.0"
] | null | null | null | ===================
pegasus-mpi-cluster
===================
pegasus-mpi-cluster (1) - enables running DAGs (Directed Acyclic Graphs) on clusters using MPI.
::
pegasus-mpi-cluster [options] workflow.dag
.. __description:
Description
===========
**pegasus-mpi-cluster** is a tool used to run HTC (High Throughput
Computing) scientific workflows on systems designed for HPC (High
Performance Computing). Many HPC systems have custom architectures that
are optimized for tightly-coupled, parallel applications. These systems
commonly have exotic, low-latency networks that are designed for passing
short messages very quickly between compute nodes. Many of these
networks are so highly optimized that the compute nodes do not even
support a TCP/IP stack. This makes it impossible to run HTC applications
using software that was designed for commodity clusters, such as Condor.
**pegasus-mpi-cluster** was developed to enable loosely-coupled HTC
applications such as scientific workflows to take advantage of HPC
systems. In order to get around the network issues outlined above,
**pegasus-mpi-cluster** uses MPI (Message Passing Interface), a commonly
used API for writing SPMD (Single Process, Multiple Data) parallel
applications. Most HPC systems have an MPI implementation that works on
whatever exotic network architecture the system uses.
An **pegasus-mpi-cluster** job consists of a single master process (this
process is rank 0 in MPI parlance) and several worker processes. The
master process manages the workflow and assigns workflow tasks to
workers for execution. The workers execute the tasks and return the
results to the master. Any output written to stdout or stderr by the
tasks is captured (see `TASK STDIO <#TASK_STDIO>`__).
**pegasus-mpi-cluster** applications are expressed as DAGs (Directed
Acyclic Graphs) (see `DAG FILES <#DAG_FILES>`__). Each node in the graph
represents a task, and the edges represent dependencies between the
tasks that constrain the order in which the tasks are executed. Each
task is a program and a set of parameters that need to be run (i.e. a
command and some optional arguments). The dependencies typically
represent data flow dependencies in the application, where the output
files produced by one task are needed as inputs for another.
If an error occurs while executing a DAG that causes the workflow to
stop, it can be restarted using a rescue file, which records the
progress of the workflow (see `RESCUE FILES <#RESCUE_FILES>`__). This
enables **pegasus-mpi-cluster** to pick up running the workflow where it
stopped.
**pegasus-mpi-cluster** was designed to work either as a standalone tool
or as a complement to the Pegasus Workflow Management System (WMS). For
more information about using PMC with Pegasus see the section on `PMC
AND PEGASUS <#PMC_AND_PEGASUS>`__.
**pegasus-mpi-cluster** allows applications expressed as a DAG to be
executed in parallel on a large number of compute nodes. It is designed
to be simple, lightweight and robust.
.. __options:
Options
=======
**-h**; \ **--help**
Print help message
**-V**; \ **--version**
Print version information
**-v**; \ **--verbose**
Increase logging verbosity. Adding multiple **-v** increases the
level more. The default log level is *INFO*. (see
`LOGGING <#LOGGING>`__)
**-q**; \ **--quiet**
Decrease logging verbosity. Adding multiple **-q** decreases the
level more. The default log level is *INFO*. (see
`LOGGING <#LOGGING>`__)
**-s**; \ **--skip-rescue**
Ignore the rescue file for *workflow.dag* if it exists. Note that
**pegasus-mpi-cluster** will still create a new rescue file for the
current run. The default behavior is to use the rescue file if one is
found. (see `RESCUE FILES <#RESCUE_FILES>`__)
**-o** *path*; \ **--stdout** *path*
Path to file for task stdout. (see `TASK STDIO <#TASK_STDIO>`__ and
**--per-task-stdio**)
**-e** *path*; \ **--stderr** *path*
Path to file for task stderr. (see `TASK STDIO <#TASK_STDIO>`__ and
**--per-task-stdio**)
**-m** *M*; \ **--max-failures** *M*
Stop submitting new tasks after *M* tasks have failed. Once *M* has
been reached, **pegasus-mpi-cluster** will finish running any tasks
that have been started, but will not start any more tasks. This
option is used to prevent **pegasus-mpi-cluster** from continuing to
run a workflow that is suffering from a systematic error, such as a
missing binary or an invalid path. The default for *M* is 0, which
means unlimited failures are allowed.
**-t** *T*; \ **--tries** *T*
Attempt to run each task *T* times before marking the task as failed.
Note that the *T* tries do not count as failures for the purposes of
the **-m** option. A task is only considered failed if it is tried
*T* times and all *T* attempts result in a non-zero exitcode. The
value of *T* must be at least 1. The default is 1.
**-n**; \ **--nolock**
Do not lock DAGFILE. By default, **pegasus-mpi-cluster** will attempt
to acquire an exclusive lock on DAGFILE to prevent multiple MPI jobs
from running the same DAG at the same time. If this option is
specified, then the lock will not be acquired.
**-r**; \ **--rescue** *path*
Path to rescue log. If the file exists, and **-s** is not specified,
then the log will be used to recover the state of the workflow. The
file is truncated after it is read and a new rescue log is created in
its place. The default is to append *.rescue* to the DAG file name.
(see `RESCUE FILES <#RESCUE_FILES>`__)
**--host-script** *path*
Path to a script or executable to launch on each unique host that
**pegasus-mpi-cluster** is running on. This path can also be set
using the PMC_HOST_SCRIPT environment variable. (see `HOST
SCRIPTS <#HOST_SCRIPTS>`__)
**--host-memory** *size*
Amount of memory available on each host in MB. The default is to
determine the amount of physical RAM automatically. This value can
also be set using the PMC_HOST_MEMORY environment variable. (see
`RESOURCE-BASED SCHEDULING <#RESOURCE_SCHED>`__)
**--host-cpus** *cpus*
Number of CPUs available on each host. The default is to determine
the number of CPU cores automatically. This value can also be set
using the PMC_HOST_CPUS environment variable. (see `RESOURCE-BASED
SCHEDULING <#RESOURCE_SCHED>`__)
**--strict-limits**
This enables strict memory usage limits for tasks. When this option
is specified, and a task tries to allocate more memory than was
requested in the DAG, the memory allocation operation will fail.
**--max-wall-time** *minutes*
This is the maximum number of minutes that **pegasus-mpi-cluster**
will allow the workflow to run. When this time expires
**pegasus-mpi-cluster** will abort the workflow and merge all of the
stdout/stderr files of the workers. The value is in minutes, and the
default is unlimited wall time. This option was added so that the
output of a workflow will be recorded even if the workflow exceeds
the max wall time of its batch job. This value can also be set using
the PMC_MAX_WALL_TIME environment variable.
**--per-task-stdio**
This causes PMC to generate a .out.XXX and a .err.XXX file for each
task instead of writing task stdout/stderr to **--stdout** and
**--stderr**. The name of the files are "TASKNAME.out.XXX" and
"TASKNAME.err.XXX", where "TASKNAME" is the name of the task from the
DAG and "XXX" is a sequence number that is incremented each time the
task is tried. This option overrides the values for **--stdout** and
**--stderr**. This argument is used by Pegasus when workflows are
planned in PMC-only mode to facilitate debugging and monitoring.
**--jobstate-log**
This option causes PMC to generate a jobstate.log file for the
workflow. The file is named "jobstate.log" and is placed in the same
directory where the DAG file is located. If the file already exists,
then PMC appends new lines to the existing file. This option is used
by Pegasus when workflows are planned in PMC-only mode to facilitate
monitoring.
**--monitord-hack**
This option causes PMC to generate a .dagman.out file for the
workflow. This file mimics the contents of the .dagman.out file
generated by Condor DAGMan. The point of this option is to trick
monitord into thinking that it is dealing with DAGMan so that it will
generate the appropriate events to populate the STAMPEDE database for
monitoring purposes. The file is named "DAG.dagman.out" where "DAG"
is the path to the PMC DAG file.
**--no-resource-log**
Do not generate a *workflow.dag.resource* file for the workflow.
**--no-sleep-on-recv**
Do not use polling with sleep() to implement message receive. (see
`Known Issues: CPU Usage <#CPU_USAGE_ISSUE>`__)
**--maxfds**
Set the maximum number of file descriptors that can be left open by
the master for I/O forwarding. By default this value is set
automatically based on the value of getrlimit(RLIMIT_NOFILE). The
value must be at least 1, and cannot be more than RLIMIT_NOFILE.
**--keep-affinity**
By default PMC attempts to clear the CPU and memory affinity. This is
to ensure that all available CPUs and memory can be used by PMC tasks
on systems that are not configured properly. This flag tells PMC to
keep the affinity settings inherited from its parent. Note that the
memory policy can only be cleared if PMC was compiled with libnuma.
CPU affinity is cleared using **sched_setaffinity()**, and memory
policy is cleared with **set_mempolicy()**.
**--set-affinity**
If this flag is set, then PMC will allocate CPUs to tasks and call
**sched_setaffinity()** to bind the task to those CPUs. This only
applies to multicore tasks (i.e. those tasks that specify -c N where
N > 1). Single core tasks are not bound to a CPU to reduce the
possibility of fragmentation. PMC does not currently have any
mechanism to handle resource fragmentation that may occur if a
workflow contains several tasks with different core counts. In the
case that fragmentation would result in a task not being bound to a
minimal number of sockets and cores, PMC will not bind the task to
any CPUs. For example, if a 2 socket, 8 core machine without
hyperthreading is being used to run 2, 4-core tasks, each task will
be bound to a full socket. If the same machine is running 4, 2-core
tasks, each task will get 2-cores on one socket. If 2 of the 2-core
tasks finish, but they free up cores on two different sockets, and
PMC wants to run a 4-core task, it will not bind the 4-core task to
any CPUs, because that would result in the 4-core task being bound to
two different sockets. Instead, PMC lets the 4-core task float, so
that the scheduler can find a better placement when another one of
the 2-core tasks finishes. In order to fix this issue we need to
rearchitect PMC, which is on the roadmap.
.. _DAG_FILES:
DAG Files
=========
**pegasus-mpi-cluster** workflows are expressed using a simple
text-based format similar to that used by Condor DAGMan. There are only
two record types allowed in a DAG file: **TASK** and **EDGE**. Any blank
lines in the DAG (lines with all whitespace characters) are ignored, as
are any lines beginning with # (note that # can only appear at the
beginning of a line, not in the middle).
The format of a **TASK** record is:
::
"TASK" id [options...] executable [arguments...]
Where *id* is the ID of the task, *options* is a list of task options,
*executable* is the path to the executable or script to run, and
*arguments…* is a space-separated list of arguments to pass to the task.
An example is:
::
TASK t01 -m 10 -c 2 /bin/program -a -b
This example specifies a task *t01* that requires 10 MB memory and 2
CPUs to run */bin/program* with the arguments *-a* and *-b*. The
available task options are:
**-m** *M*; \ **--request-memory** *M*
The amount of memory required by the task in MB. The default is 0,
which means memory is not considered for this task. This option can
be set for a job in the DAX by specifying the
pegasus::pmc_request_memory profile. (see `RESOURCE-BASED
SCHEDULING <#RESOURCE_SCHED>`__)
**-c** *N*; \ **--request-cpus** *N*
The number of CPUs required by the task. The default is 1, which
implies that the number of slots on a host should be less than or
equal to the number of physical CPUs in order for all the slots to be
used. This option can be set for a job in the DAX by specifying the
pegasus::pmc_request_cpus profile. (see `RESOURCE-BASED
SCHEDULING <#RESOURCE_SCHED>`__)
**-t** *T*; \ **--tries** *T*
The number of times to try to execute the task before failing
permanently. This is the task-level equivalent of the **--tries**
command-line option.
**-p** *P*; \ **--priority** *P*
The priority of the task. P should be an integer. Larger values have
higher priority. The default is 0. Priorities are simply hints and
are not strict—if a task cannot be matched to an available slot (e.g.
due to resource availability), but a lower-priority task can, then
the task will be deferred and the lower priority task will be
executed. This option can be set for a job in the DAX by specifying
the pegasus::pmc_priority profile.
**-f** *VAR=FILE*; \ **--pipe-forward** *VAR=FILE*
Forward I/O to file *FILE* using pipes to communicate with the task.
The environment variable *VAR* will be set to the value of a file
descriptor for a pipe to which the task can write to get data into
*FILE*. For example, if a task specifies: -f FOO=/tmp/foo then the
environment variable FOO for the task will be set to a number (e.g.
3) that represents the file /tmp/foo. In order to specify this
argument in a Pegasus DAX you need to set the pegasus::pmc_arguments
profile (note that the value of pmc_arguments must contain the "-f"
part of the argument, so a valid value would be: <profile
namespace="pegasus" key="pmc_arguments">-f A=/tmp/a </profile>). (see
`I/O FORWARDING <#IO_FORWARDING>`__)
**-F** *SRC=DEST*; \ **--file-forward** *SRC=DEST*
Forward I/O to the file *DEST* from the file *SRC*. When the task
finishes, the worker will read the data from *SRC* and send it to the
master where it will be written to the file *DEST*. After *SRC* is
read it is deleted. In order to specify this argument in a Pegasus
DAX you need to set the pegasus::pmc_arguments profile. (see `I/O
FORWARDING <#IO_FORWARDING>`__)
The format of an **EDGE** record is:
::
"EDGE" parent child
Where *parent* is the ID of the parent task, and *child* is the ID of
the child task. An example **EDGE** record is:
::
EDGE t01 t02
A simple diamond-shaped workflow would look like this:
::
# diamond.dag
TASK A /bin/echo "I am A"
TASK B /bin/echo "I am B"
TASK C /bin/echo "I am C"
TASK D /bin/echo "I am D"
EDGE A B
EDGE A C
EDGE B D
EDGE C D
.. _RESCUE_FILES:
Rescue Files
============
Many different types of errors can occur when running a DAG. One or more
of the tasks may fail, the MPI job may run out of wall time,
**pegasus-mpi-cluster** may segfault (we hope not), the system may
crash, etc. In order to ensure that the DAG does not need to be
restarted from the beginning after an error, **pegasus-mpi-cluster**
generates a rescue file for each workflow.
The rescue file is a simple text file that lists all of the tasks in the
workflow that have finished successfully. This file is updated each time
a task finishes, and is flushed periodically so that if the work- flow
fails and the user restarts it, **pegasus-mpi-cluster** can determine
which tasks still need to be executed. As such, the rescue file is a
sort-of transaction log for the workflow.
The rescue file contains zero or more DONE records. The format of these
records is:
::
"DONE" *taskid*
Where *taskid* is the ID of the task that finished successfully.
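For example, for the diamond-shaped workflow shown earlier, the rescue
file after tasks *A* and *B* have finished would contain::

    DONE A
    DONE B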
By default, rescue files are named *DAGNAME.rescue* where *DAGNAME* is
the path to the input DAG file. The file name can be changed by
specifying the **-r** argument.
.. _PMC_AND_PEGASUS:
PMC and Pegasus
===============
.. __using_pmc_for_pegasus_task_clustering:
Using PMC for Pegasus Task Clustering
-------------------------------------
PMC can be used as the wrapper for executing clustered jobs in Pegasus.
In this mode Pegasus groups several tasks together and submits them as a
single clustered job to a remote system. PMC then executes the
individual tasks in the cluster and returns the results.
PMC can be specified as the task manager for clustered jobs in Pegasus
in three ways:
1. Globally in the properties file
The user can set a property in the properties file that results in
all the clustered jobs of the workflow being executed by PMC. In the
Pegasus properties file specify:
::
#PEGASUS PROPERTIES FILE
pegasus.clusterer.job.aggregator=mpiexec
In the above example, all the clustered jobs on all remote sites will
be launched via PMC as long as the property value is not overridden
in the site catalog.
2. By setting the profile key "job.aggregator" in the site catalog:
::
<site handle="siteX" arch="x86" os="LINUX">
...
<profile namespace="pegasus" key="job.aggregator">mpiexec</profile>
</site>
In the above example, all the clustered jobs on a siteX are going to
be executed via PMC as long as the value is not overridden in the
transformation catalog.
3. By setting the profile key "job.aggregator" in the transformation
catalog:
::
tr B {
site siteX {
pfn "/path/to/mytask"
arch "x86"
os "linux"
type "INSTALLED"
profile pegasus "clusters.size" "3"
profile pegasus "job.aggregator" "mpiexec"
}
}
In the above example, all the clustered jobs for transformation B on
siteX will be executed via PMC.
It is usually necessary to have a pegasus::mpiexec entry in your
transformation catalog that specifies a) the path to PMC on the remote
site and b) the relevant globus profiles such as xcount, host_xcount and
maxwalltime to control size of the MPI job. That entry would look like
this:
::
tr pegasus::mpiexec {
site siteX {
pfn "/path/to/pegasus-mpi-cluster"
arch "x86"
os "linux"
type "INSTALLED"
profile globus "maxwalltime" "240"
profile globus "host_xcount" "1"
profile globus "xcount" "32"
}
}
If this transformation catalog entry is not specified, Pegasus will
attempt to create a default path based on the environment profile
PEGASUS_HOME specified in the site catalog for the remote site.
PMC can be used with both horizontal and label-based clustering in
Pegasus, but we recommend using label-based clustering so that entire
sub-graphs of a Pegasus DAX can be clustered into a single PMC job,
instead of only a single level of the workflow.
.. __pegasus_profiles_for_pmc:
Pegasus Profiles for PMC
------------------------
There are several Pegasus profiles that map to PMC task options:
**pmc_request_memory**
This profile is used to set the --request-memory task option and is
usually specified in the DAX or transformation catalog.
**pmc_request_cpus**
This key is used to set the --request-cpus task option and is usually
specified in the DAX or transformation catalog.
**pmc_priority**
This key is used to set the --priority task option and is usually
specified in the DAX.
These profiles are used by Pegasus when generating PMC’s input DAG when
PMC is used as the task manager for clustered jobs in Pegasus.
The profiles can be specified in the DAX like this:
::
<job id="ID0000001" name="mytask">
<arguments>-a 1 -b 2 -c 3</arguments>
...
<profile namespace="pegasus" key="pmc_request_memory">1024</profile>
<profile namespace="pegasus" key="pmc_request_cpus">4</profile>
<profile namespace="pegasus" key="pmc_priority">10</profile>
</job>
This example specifies a PMC task that requires 1GB of memory and 4
cores, and has a priority of 10. It produces a task in the PMC DAG that
looks like this:
::
TASK mytask_ID00000001 -m 1024 -c 4 -p 10 /path/to/mytask -a 1 -b 2 -c 3
.. __using_pmc_for_the_entire_pegasus_dax:
Using PMC for the Entire Pegasus DAX
------------------------------------
Pegasus can also be configured to run the entire workflow as a single
PMC job. In this mode Pegasus will generate a single PMC DAG for the
entire workflow as well as a PBS script that can be used to submit the
workflow.
In contrast to using PMC as a task clustering tool, in this mode there
are no jobs in the workflow executed without PMC. The entire workflow,
including auxiliary jobs such as directory creation and file transfers,
is managed by PMC. If Pegasus is configured in this mode, then DAGMan
and Condor are not required.
To run in PMC-only mode, set the property "pegasus.code.generator" to
"PMC" in the Pegasus properties file:
::
pegasus.code.generator=PMC
In order to submit the resulting PBS job you may need to make changes to
the .pbs file generated by Pegasus to get it to work with your cluster.
This mode is experimental and has not been used extensively.
.. _LOGGING:
Logging
=======
By default, all logging messages are printed to stderr. If you turn up
the logging using **-v** then you may end up with a lot of stderr being
forwarded from the workers to the master.
The log levels in order of severity are: FATAL, ERROR, WARN, INFO,
DEBUG, and TRACE.
The default logging level is INFO. The logging levels can be increased
with **-v** and decreased with **-q**.
.. _TASK_STDIO:
Task STDIO
==========
By default the stdout and stderr of tasks will be redirected to the
master’s stdout and stderr. You can change the path of these files with
the **-o** and **-e** arguments. You can also enable per-task stdio
files using the **--per-task-stdio** argument. Note that if per-task
stdio files are not used then the stdio of all workers will be merged
into one out and one err file by the master at the end, so I/O from
different workers will not be interleaved, but I/O from each worker will
appear in the order that it was generated. Also note that, if the job
fails for any reason, the outputs will not be merged, but instead there
will be one file for each worker named DAGFILE.out.X and DAGFILE.err.X,
where DAGFILE is the path to the input DAG, and *X* is the worker’s
rank.
.. _HOST_SCRIPTS:
Host Scripts
============
A host script is a shell script or executable that
**pegasus-mpi-cluster** launches on each unique host on which it is
running. They can be used to start auxiliary services, such as
memcached, that the tasks in a workflow require.
Host scripts are specified using either the **--host-script** argument
or the **PMC_HOST_SCRIPT** environment variable.
The host script is started when **pegasus-mpi-cluster** starts and must
exit with an exitcode of 0 before any tasks can be executed. If the
host script returns a non-zero exitcode, then the workflow is aborted.
The host script is given 60 seconds to do any setup that is required. If
it doesn’t exit in 60 seconds then a SIGALRM signal is delivered to the
process, which, if not handled, will cause the process to terminate.
When the workflow finishes, **pegasus-mpi-cluster** will deliver a
SIGTERM signal to the host script’s process group. Any child processes
left running by the host script will receive this signal unless they
created their own process group. If there were any processes left to
receive this signal, then they will be given a few seconds to exit, then
they will be sent SIGKILL. This is the mechanism by which processes
started by the host script can be informed of the termination of the
workflow.
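For example, a minimal host script might look like this (hypothetical;
the memcached flags are only illustrative)::

    #!/bin/bash
    # Start the auxiliary service in the background and exit 0 quickly,
    # so that the 60-second setup limit described above is not reached.
    memcached -d -m 64
    exit 0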
.. _RESOURCE_SCHED:
Resource-Based Scheduling
=========================
High-performance computing resources often have a low ratio of memory to
CPUs. At the same time, workflow tasks often have high memory
requirements. Often, the memory requirements of a workflow task exceed
the amount of memory available to each CPU on a given host. As a result,
it may be necessary to disable some CPUs in order to free up enough
memory to run the tasks. Similarly, many codes have support for
multicore hosts. In that case it is necessary for efficiency to ensure
that the number of cores required by the tasks running on a host do not
exceed the number of cores available on that host.
In order to make this process more efficient, **pegasus-mpi-cluster**
supports resource-based scheduling. In resource-based scheduling the
tasks in the workflow can specify how much memory and how many CPUs they
require, and **pegasus-mpi-cluster** will schedule them so that the
tasks running on a given host do not exceed the amount of physical
memory and CPUs available. This enables **pegasus-mpi-cluster** to take
advantage of all the CPUs available when the tasks' memory requirement
is low, but also disable some CPUs when the tasks' memory requirement is
higher. It also enables workflows with a mixture of single core and
multi-core tasks to be executed on a heterogeneous pool.
If there are no hosts available that have enough memory and CPUs to
execute one of the tasks in a workflow, then the workflow is aborted.
.. __memory:
Memory
------
Users can specify both the amount of memory required per task, and the
amount of memory available per host. If the amount of memory required by
any task exceeds the available memory of all the hosts, then the
workflow will be aborted. By default, the host memory is determined
automatically, however the user can specify **--host-memory** to "lie"
to **pegasus-mpi-cluster**. The amount of memory required for each task
is specified in the DAG using the **-m**/**--request-memory** argument
(see `DAG Files <#DAG_FILES>`__).
.. __cpus:
CPUs
----
Users can specify the number of CPUs required per task, and the total
number of CPUs available on each host. If the number of CPUs required by
a task exceeds the available CPUs on all hosts, then the workflow will
be aborted. By default, the number of CPUs on a host is determined
automatically, but the user can specify **--host-cpus** to over- or
under-subscribe the host. The number of CPUs required for each task is
specified in the DAG using the **-c**/**--request-cpus** argument (see
`DAG Files <#DAG_FILES>`__).
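For example, on a host with 16 GB of memory and 8 CPUs, the following
tasks (with hypothetical binaries) can run side by side: the 4-core task
plus the four single-core tasks exactly fill the host's memory and CPUs::

    TASK big    -m 8192 -c 4 /bin/bigtask
    TASK small1 -m 2048 -c 1 /bin/smalltask
    TASK small2 -m 2048 -c 1 /bin/smalltask
    TASK small3 -m 2048 -c 1 /bin/smalltask
    TASK small4 -m 2048 -c 1 /bin/smalltask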
.. _IO_FORWARDING:
I/O Forwarding
==============
In workflows that have lots of small tasks it is common for the I/O
written by those tasks to be very small. For example, a workflow may
have 10,000 tasks that each write a few KB of data. Typically each task
writes to its own file, resulting in 10,000 files. This I/O pattern is
very inefficient on many parallel file systems because it requires the
file system to handle a large number of metadata operations, which are a
bottleneck in many parallel file systems.
One way to handle this problem is to have all 10,000 tasks write to a
single file. The problem with this approach is that it requires those
tasks to synchronize their access to the file using POSIX locks or some
other mutual exclusion mechanism. Otherwise, the writes from different
tasks may be interleaved in arbitrary order, resulting in unusable data.
In order to address this use case PMC implements a feature that we call
"I/O Forwarding". I/O forwarding enables each task in a PMC job to write
data to an arbitrary number of shared files in a safe way. It does this
by having PMC worker processes collect data written by the task and send
it over over the high-speed network using MPI messaging to the PMC
master process, where it is written to the output file. By having one
process (the PMC master process) write to the file all of the I/O from
many parallel tasks can be synchronized and written out to the files
safely.
There are two different ways to use I/O forwarding in PMC: pipes and
files. Pipes are more efficient, but files are easier to use.
.. __i_o_forwarding_using_pipes:
I/O forwarding using pipes
--------------------------
I/O forwarding with pipes works by having PMC worker processes collect
data from each task using UNIX pipes. This approach is more efficient
than the file-based approach, but it requires the code of the task to be
changed so that the task writes to the pipe instead of a regular file.
In order to use I/O forwarding a PMC task just needs to specify the
**-f/--pipe-forward** argument to specify the name of the file to
forward data to, and the name of an environment variable through which
the PMC worker process can inform it of the file descriptor for the
pipe.
For example, if there is a task "mytask" that needs to forward data to
two files: "myfile.a" and "myfile.b", it would look like this:
::
TASK mytask -f A=/tmp/myfile.a -f B=/tmp/myfile.b /bin/mytask
When the /bin/mytask process starts it will have two variables in its
environment: "A=3" and "B=4", for example. The value of these variables
is the file descriptor number of the corresponding files. In this case,
if the task wants to write to "/tmp/myfile.a", it gets the value of
environment variable "A", and calls write() on that descriptor number.
In C the code for that looks like this:
::
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

char *A = getenv("A");            /* descriptor number set by the PMC worker */
int fd = atoi(A);
char *message = "Hello, World\n";
write(fd, message, strlen(message));
In some programming languages it is not possible to write to a file
descriptor directly. Fortran, for example, refers to files by unit
number instead of using file descriptors. In these languages you can
either link C I/O functions into your binary and call them from routines
written in the other language, or you can open a special file in the
Linux /proc file system to get another handle to the pipe you want to
access. For the latter, the file you should open is
"/proc/self/fd/NUMBER" where NUMBER is the file descriptor number you
got from the environment variable. For the example above, the pipe for
myfile.a (environment variable A) is "/proc/self/fd/3".
If you are using **pegasus-kickstart**, which is probably the case if
you are using PMC for a Pegasus workflow, then there’s a trick you can
do to avoid modifying your code. You use the /proc file system, as
described above, but you let pegasus-kickstart handle the path
construction. For example, if your application has an argument, -o, that
allows you to specify the output file then you can write your task like
this:
::
TASK mytask -f A=/tmp/myfile.a /bin/pegasus-kickstart /bin/mytask -o /proc/self/fd/$A
In this case, pegasus-kickstart will replace the $A in your application
arguments with the file descriptor number you want. Your code can open
that path normally, write to it, and then close it as if it were a
regular file.
.. __i_o_forwarding_using_files:
I/O forwarding using files
--------------------------
I/O forwarding with files works by having tasks write out data in files
on the local disk. The PMC worker process reads these files and forwards
the data to the master where it can be written to the desired output
file. This approach may be much less efficient than using pipes because
it involves the file system, which has more overhead than a pipe.
File forwarding can be enabled by giving the **-F/--file-forward**
argument to a task.
Here’s an example:
::
TASK mytask -F /tmp/foo.0=/scratch/foo /bin/mytask -o /tmp/foo.0
In this case, the worker process will expect to find the file /tmp/foo.0
when mytask exits successfully. It reads the data from that file and
sends it to the master to be written to the end of /scratch/foo. After
/tmp/foo.0 is read it will be deleted by the worker process.
This approach works best on systems where the local disk is a RAM file
system such as Cray XT machines. Alternatively, the task can use
/dev/shm on a regular Linux cluster. It might also work relatively
efficiently on a local disk if the file system cache is able to absorb
all of the reads and writes.
.. __i_o_forwarding_caveats:
I/O forwarding caveats
----------------------
When using I/O forwarding it is important to consider a few caveats.
First, if the PMC job fails for any reason (including when the workflow
is aborted for violating **--max-wall-time**), then the files containing
forwarded I/O may be corrupted. They can include **partial records**,
meaning that only part of the I/O from one or more tasks was written,
and they can include **duplicate records**, meaning that the I/O was
written, but the PMC job failed before the task could be marked as
successful, and the workflow was restarted later. We make no guarantees
about the contents of the data files in this case. It is up to the code
that reads the files to a) detect and b) recover from such problems. To
eliminate duplicates the records should include a unique identifier, and
to eliminate partials the records should include a checksum.
Second, you should not use I/O forwarding if your task is going to write
a lot of data to the file. Because the PMC worker is reading data off
the pipe/file into memory and sending it in an MPI message, if you write
too much, then the worker process will run the system out of memory.
Also, all the data needs to fit in a single MPI message. In pipe
forwarding there is no hard limit on the size, but in file forwarding
the limit is 1MB. We haven’t benchmarked the performance on large I/O,
but anything larger than about 1 MB is probably too much. At any rate,
if your data is larger than 1MB, then I/O forwarding probably won’t have
much of a performance benefit anyway.
Third, the I/O is not written to the file if the task returns a non-zero
exitcode. We assume that if the task failed that you don’t want the data
it produced.
Fourth, the data from different tasks is not interleaved. All of the
data written by a given task will appear sequentially in the output
file. Note that you can still get partial records, however, if any data
from a task appears it will never be split among non-adjacent ranges in
the output file. If you have 3 tasks that write: "I am a task" you can
get:
::
I am a taskI am a taskI am a task
and:
::
I am a taskI amI am a task
but not:
::
I am a taskI amI am a task a task
Fifth, data from different tasks appears in arbitrary order in the
output file. It depends on what order the tasks were executed by PMC,
which may be arbitrary if there are no dependencies between the tasks.
The data that is written should contain enough information that you are
able to determine which task produced it if you require that. PMC does
not add any headers or trailers to the data.
Sixth, a task will only be marked as successful if all of its I/O was
successfully written. If the workflow completed successfully, then the
I/O is guaranteed to have been written.
Seventh, if the master is not able to write to the output file for any
reason (e.g. the master tries to write the I/O to the destination file,
but the write() call returns an error) then the task is marked as failed
even if the task produced a non-zero exitcode. In other words, you may
get a non-zero kickstart record even when PMC marks the task failed.
Eighth, the pipes are write-only. If you need to read and write data
from the file you should use file forwarding and not pipe forwarding.
Ninth, all files are opened by the master in append mode. This is so
that, if the workflow fails and has to be restarted, or if a task fails
and is retried, the data that was written previously is not lost. PMC
never truncates the files. This is one of the reasons why you can have
partial records and duplicate records in the output file.
Finally, in file forwarding the output file is removed when the task
exits. You cannot rely on the file to be there when the next task runs
even if you write it to a shared file system.
.. __misc:
Misc
====
.. __resource_utilization:
Resource Utilization
--------------------
At the end of the workflow run, the master will report the resource
utilization of the job. This is done by adding up the total runtimes of
all the tasks executed (including failed tasks) and dividing by the
total wall time of the job times N, where N is both the total number of
processes including the master, and the total number of workers. These
two resource utilization values are provided so that users can get an
idea about how efficiently they are making use of the resources they
allocated. Low resource utilization values suggest that the user should
use fewer cores, and longer wall time, on future runs, while high
resource utilization values suggest that the user could use more cores
for future runs and get a shorter wall time.
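Expressed as a formula (with N as defined above)::

    utilization = sum(task_runtimes) / (wall_time * N)

For example, if 80 tasks each ran for 60 seconds on 8 workers and the
job's wall time was 700 seconds, the worker utilization is
(80 * 60) / (700 * 8) = 4800 / 5600, or about 0.86.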
.. __known_issues:
Known Issues
============
.. __cray_compiler_wrappers:
Cray Compiler Wrappers
----------------------
On Cray machines, the CC compiler wrapper for C++ code should be used to
compile PMC. That wrapper links in all the required MPI libraries.
**Cray compiler wrappers should not be used to compile tasks that run
under PMC.** If you use a Cray wrapper to compile a task that runs under
PMC, then the task will hang, or exit immediately with a 0 exit code
without doing anything. This appears to happen only when the application
binary is dynamically linked. It seems to be a problem with the
libraries that are linked into the code when it is compiled with a Cray
wrapper. To summarize: on Cray machines, compile PMC with the CC
wrapper, but compile code that runs under PMC without any wrappers.
.. __fork_and_exec:
fork() and exec()
-----------------
In order for the worker processes to start tasks on the compute node the
compute nodes must support the **fork()** and **exec()** system calls.
If your target machine runs a stripped-down OS on the compute nodes that
does not support these system calls, then **pegasus-mpi-cluster** will
not work.
.. _CPU_USAGE_ISSUE:
CPU Usage
---------
Many MPI implementations are optimized so that message sends and
receives do busy waiting (i.e. they spin/poll on a message send or
receive instead of sleeping). The reasoning is that sleeping adds
overhead and, since many HPC systems use space sharing on dedicated
hardware, there are no other processes competing, so spinning instead of
sleeping can produce better performance. On those implementations MPI
processes will run at 100% CPU usage even when they are just waiting for
a message. This is a big problem for multicore tasks in
**pegasus-mpi-cluster** because idle slots consume CPU resources. In
order to solve this problem **pegasus-mpi-cluster** processes sleep for
a short period between checks for waiting messages. This reduces the
load significantly, but causes a short delay in receiving messages. If
you are using an MPI implementation that sleeps on message send and
receive instead of doing busy waiting, then you can disable the sleep by
specifying the **--no-sleep-on-recv** option. Note that the master will
always sleep if **--max-wall-time** is specified because there is no way
to interrupt or otherwise timeout a blocking call in MPI (e.g. SIGALRM
does not cause MPI_Recv to return EINTR).
.. __task_environment:
Task Environment
================
PMC sets a few environment variables when it launches a task. In
addition to the environment variables for pipe forwarding, it sets:
**PMC_TASK**
The name of the task from the DAG file.
**PMC_MEMORY**
The amount of memory requested by the task.
**PMC_CPUS**
The number of CPUs requested by the task.
**PMC_RANK**
The rank of the MPI worker that launched the task.
**PMC_HOST_RANK**
The host rank of the MPI worker that launched the task.
In addition, if **--set-affinity** is specified, and PMC has allocated
some CPUs to the task, then it will export:
**PMC_AFFINITY**
A comma-separated list of CPUs to which the task is/should be bound.
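For example, a task wrapper script can size its thread pool from these
variables (a sketch; ``OMP_NUM_THREADS`` is just one common consumer of
the CPU count)::

    #!/bin/bash
    # Use the CPU allocation that PMC granted this task; default to 1.
    export OMP_NUM_THREADS="${PMC_CPUS:-1}"
    exec /bin/mytask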
.. __environment_variables:
Environment Variables
=====================
The environment variables below are aliases for command-line options. If
the environment variable is present, then it is used as the default for
the associated option. If both are present, then the command-line option
is used.
**PMC_HOST_SCRIPT**
Alias for the **--host-script** option.
**PMC_HOST_MEMORY**
Alias for the **--host-memory** option.
**PMC_HOST_CPUS**
Alias for the **--host-cpus** option.
**PMC_MAX_WALL_TIME**
Alias for the **--max-wall-time** option.
.. __author:
Author
======
Gideon Juve ``<gideon@isi.edu>``
Mats Rynge ``<rynge@isi.edu>``
| 40.352825 | 88 | 0.741232 |
6190099bb6ec7afa795f54ce92d6b57d97d32866 | 2,311 | rst | reStructuredText | docs/packages/decorators.rst | code-review-doctor/domonic | fee5704ab051d40c7b3fec5488a44d3ab1ee027c | [
"MIT"
] | 94 | 2020-07-12T12:02:07.000Z | 2022-03-25T03:04:57.000Z | docs/packages/decorators.rst | code-review-doctor/domonic | fee5704ab051d40c7b3fec5488a44d3ab1ee027c | [
"MIT"
] | 41 | 2021-06-02T10:51:58.000Z | 2022-02-21T09:58:43.000Z | docs/packages/decorators.rst | code-review-doctor/domonic | fee5704ab051d40c7b3fec5488a44d3ab1ee027c | [
"MIT"
] | 17 | 2021-06-10T00:34:27.000Z | 2022-02-21T09:47:30.000Z | decorators
======================
Everyone loves Python decorators.
We have a few in domonic to make life more fun!
el
--------------------------------
You can use the el decorator to wrap elements around function results.
.. code-block :: python
from domonic.decorators import el
@el(html, True)
@el(body)
@el(div)
def test():
return 'hi!'
print(test())
# <html><body><div>hi!</div></body></html>
# returns pyml objects so call str to render
assert str(test()) == '<html><body><div>hi!</div></body></html>'
It returns the tag object by default.
You can pass True as a second param to the decorator to return a rendered string instead. Also accepts strings as first param i.e. custom tags.
silence
--------------------------------
Want that unit test to stfu?
.. code-block :: python
from domonic.decorators import silence
@silence
def test_that_wont_pass():
assert True == False
called
--------------------------------
Python's lambda restrictions may force you to create anonymous success methods above calling functions.
domonic uses a unique type of decorator to call anonymous methods immediately after calling the passed method.
To use it, pass 2 functions: something to call beforehand, and an error handler.
Then your decorated anonymous function will receive the data of the first function you passed in as a parameter.
Let me show you...
.. code-block :: python
from domonic.decorators import called
@called(
lambda: º.ajax('https://www.google.com'),
lambda err: print('error:', err))
def success(data=None):
print("Sweet as a Nut!")
print(data.text)
It's meant for anonymous functions and calls immediately. So don't go using it on class methods.
There is also an ``iife`` alias, for when you aren't passing anything:
.. code-block :: python
@iife()
def sup():
print("sup!")
return True
check
--------------------------------
Logs the entry and exit of a function; useful for debugging, e.g.:
.. code-block :: python
@check
def somefunc():
return True
somefunc()
# would output this to the console
# Entering somefunc
# Exited somefunc
.. autoclass:: domonic.decorators
:members:
:noindex:
| 21.201835 | 143 | 0.632194 |