.. _embedding_runtime_error_handling:
======================
Runtime error handling
======================
When an exception is not handled by Squirrel code with a try/catch statement, a runtime
error is raised and the execution of the current program is interrupted. The host
program can set a callback function to intercept the runtime error; this is
useful for showing meaningful errors to the script writer and for implementing
visual debuggers.
The following API call pops a Squirrel function from the stack and sets it as the error handler::
SQUIRREL_API void sq_seterrorhandler(HSQUIRRELVM v);
The error handler is called with two parameters: an environment object (``this``) and
an error object. The error object can be any Squirrel type.
Microsoft Sentinel Provider
===========================
Sentinel Configuration
----------------------
You can store configuration for your workspace (or workspaces) in either
your ``msticpyconfig.yaml`` or a ``config.json`` file. The latter
file is auto-created in your Azure Machine Learning (AML) workspace when
you launch a notebook from the Sentinel portal. It can, however, only
store details for a single workspace.
Sentinel Configuration in *msticpyconfig.yaml*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the simplest place to store your workspace details.
You likely need to use a *msticpyconfig.yaml* anyway. If you are using other
*msticpy* features such as Threat Intelligence Providers, GeoIP Lookup, Azure Data,
etc., these all have their own configuration settings, so using a single
configuration file makes managing your settings easier. If you are running
notebooks in an AML workspace and you do not have a *msticpyconfig.yaml*
*MSTICPy* will create one and import settings from a *config.json*, if it can find
one.
For more information on using and configuring *msticpyconfig.yaml* see
:doc:`msticpy Package Configuration <../getting_started/msticpyconfig>`
and :doc:`MSTICPy Settings Editor<../getting_started/SettingsEditor>`
The MS Sentinel connection settings are stored in the
``AzureSentinel\\Workspaces`` section of the file. Here is an example.
.. code:: yaml
AzureSentinel:
Workspaces:
# Workspace used if you don't explicitly name a workspace when creating WorkspaceConfig
# Specifying values here overrides config.json settings unless you explicitly load
# WorkspaceConfig with config_file parameter (WorkspaceConfig(config_file="../config.json")
Default:
WorkspaceId: 271f17d3-5457-4237-9131-ae98a6f55c37
TenantId: 335b56ab-67a2-4118-ac14-6eb454f350af
ResourceGroup: soc
SubscriptionId: a5b24e23-a96a-4472-b729-9e5310c83e20
WorkspaceName: Workspace1
# To use these launch with an explicit name - WorkspaceConfig(workspace_name="Workspace2")
Workspace1:
WorkspaceId: "c88dd3c2-d657-4eb3-b913-58d58d811a41"
TenantId: "335b56ab-67a2-4118-ac14-6eb454f350af"
ResourceGroup: soc
SubscriptionId: a5b24e23-a96a-4472-b729-9e5310c83e20
WorkspaceName: Workspace1
TestWorkspace:
WorkspaceId: "17e64332-19c9-472e-afd7-3629f299300c"
TenantId: "4ea41beb-4546-4fba-890b-55553ce6003a"
ResourceGroup: soc
SubscriptionId: a5b24e23-a96a-4472-b729-9e5310c83e20
WorkspaceName: Workspace2
If you only use a single workspace, you only need to create a ``Default`` entry and
add the values for your *WorkspaceID* and *TenantID*. You can add other entries here,
for example, SubscriptionID, ResourceGroup. These are recommended but not required
for the QueryProvider (they may be used by other *MSTICPy* components however).
.. note:: The property names are spelled differently from those in
   *config.json*, so be sure to enter these as shown in the example. These
   names are case-sensitive.
.. note:: The section names (Default, Workspace1 and TestWorkspace) do
not have to be the same as the workspace name - you can choose friendlier
aliases, if you wish.
If you use multiple workspaces, you can add further entries here. Each
workspace entry is normally the name of the Azure Sentinel workspace but
you can use any name you prefer.
Sentinel Configuration in *config.json*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When you load a notebook from the MS Sentinel UI a configuration file *config.json*
is provisioned for you with the details of the source workspace populated in
the file. An example is shown here.
.. code:: json
{
"tenant_id": "335b56ab-67a2-4118-ac14-6eb454f350af",
"subscription_id": "b8f250f8-1ba5-4b2c-8e74-f7ea4a1df8a6",
"resource_group": "ExampleWorkspaceRG",
"workspace_id": "271f17d3-5457-4237-9131-ae98a6f55c37",
"workspace_name": "ExampleWorkspace"
}
If no *msticpyconfig.yaml* is found, *MSTICPy* will automatically look for a
*config.json* file in the current directory. If it is not found there, it will
search the parent directory and all of its subdirectories, using the first
*config.json* file found.
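This search order can be sketched with a few lines of standard-library Python; ``find_config_json`` is a hypothetical helper for illustration, not MSTICPy's actual implementation:

```python
from pathlib import Path


def find_config_json(start_dir="."):
    # Hypothetical sketch of the search order described above:
    # look in the current directory first, then fall back to the
    # parent directory and all of its subdirectories.
    start = Path(start_dir).resolve()
    candidate = start / "config.json"
    if candidate.is_file():
        return candidate
    # sorted() makes "the first config.json found" deterministic here
    for found in sorted(start.parent.rglob("config.json")):
        return found
    return None
```

In a notebook you never call anything like this yourself; the point is only that a *config.json* anywhere in or near the working directory tree will be picked up.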
Loading a QueryProvider for Microsoft Sentinel
----------------------------------------------
.. code:: ipython3
qry_prov = QueryProvider(
data_environment="MSSentinel",
)
.. note:: "LogAnalytics" and "AzureSentinel" are also aliases for "MSSentinel".
Connecting to a MS Sentinel Workspace
-------------------------------------
Once we have instantiated the QueryProvider, we need to authenticate to the
Sentinel workspace. This is done by calling the ``connect()`` function of the
Query Provider.
``connect()`` requires a connection string as its parameter. For MS Sentinel
we can use the ``WorkspaceConfig`` class.
WorkspaceConfig
~~~~~~~~~~~~~~~
This handles loading your workspace configuration and generating a
connection string from your configuration.
See :py:mod:`WorkspaceConfig API documentation<msticpy.common.wsconfig>`
``WorkspaceConfig`` works with workspace configuration stored in *msticpyconfig.yaml*
or *config.json* (although the former takes precedence).
To use ``WorkspaceConfig``, simply create an instance of it. It will automatically
build your connection string for use with the query provider library.
.. code:: IPython
>>> ws_config = WorkspaceConfig()
>>> ws_config.code_connect_str
"loganalytics://code().tenant('335b56ab-67a2-4118-ac14-6eb454f350af').workspace('271f17d3-5457-4237-9131-ae98a6f55c37')"
You can use this connection string in the call to ``QueryProvider.connect()``
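For illustration, the format of that device-code connection string can be reproduced with a small sketch; ``build_connect_str`` is a hypothetical helper, and in practice ``WorkspaceConfig`` builds the string for you:

```python
def build_connect_str(tenant_id: str, workspace_id: str) -> str:
    # Hypothetical sketch reproducing the "code" (device-code)
    # connection-string format shown above; not MSTICPy's internals.
    return (
        f"loganalytics://code().tenant('{tenant_id}')"
        f".workspace('{workspace_id}')"
    )
```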
When called without parameters, *WorkspaceConfig* loads the "Default"
entry in your *msticpyconfig.yaml* (or falls back to loading the settings
in *config.json*). To specify a different workspace pass the ``workspace`` parameter
with the name of your workspace entry. This value is the name of
the section in the *msticpyconfig* ``Workspaces`` section, which may
not necessarily be the same as your workspace name.
.. code:: IPython
>>> ws_config = WorkspaceConfig(workspace="TestWorkspace")
To see which workspaces are configured in your *msticpyconfig.yaml* use
the ``list_workspaces()`` function.
.. tip:: ``list_workspaces`` is a class function, so you do not need to
instantiate a WorkspaceConfig to call this function.
.. code:: IPython
>>> WorkspaceConfig.list_workspaces()
{'Default': {'WorkspaceId': '271f17d3-5457-4237-9131-ae98a6f55c37',
'TenantId': '335b56ab-67a2-4118-ac14-6eb454f350af'},
'Workspace1': {'WorkspaceId': 'c88dd3c2-d657-4eb3-b913-58d58d811a41',
'TenantId': '335b56ab-67a2-4118-ac14-6eb454f350af'},
'TestWorkspace': {'WorkspaceId': '17e64332-19c9-472e-afd7-3629f299300c',
'TenantId': '4ea41beb-4546-4fba-890b-55553ce6003a'}}
Entries in msticpyconfig always take precedence over settings in your
config.json. If you want to force use of the config.json, specify the path
to the config.json file in the ``config_file`` parameter to ``WorkspaceConfig``.
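That precedence rule can be sketched as follows; ``resolve_workspace`` and its arguments are illustrative assumptions, not the real loader:

```python
def resolve_workspace(msticpy_entry, config_json_entry, config_file=None):
    # Settings from msticpyconfig.yaml normally win; passing an explicit
    # config_file path forces the config.json values instead.
    if config_file is not None:
        return dict(config_json_entry)
    return dict(msticpy_entry) if msticpy_entry else dict(config_json_entry)
```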
Connecting to the workspace
~~~~~~~~~~~~~~~~~~~~~~~~~~~
When connecting you can just pass an instance of WorkspaceConfig to
the query provider's ``connect`` method.
.. code:: IPython
qry_prov.connect(WorkspaceConfig())
# or
qry_prov.connect(WorkspaceConfig(workspace="TestWorkspace"))
If you need to use a specific instance of a config.json, you can specify a full
path to the file you want to use when you create your ``WorkspaceConfig``
instance.
MS Sentinel Authentication options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, the data provider tries to use chained authentication,
attempting to use existing Azure credentials, if they are available.
- If you are running in an AML workspace, it will attempt to use
integrated MSI authentication, using the identity that you used to
authenticate to AML.
- If you have logged in to Azure CLI, the Sentinel provider will
try to use your AzureCLI credentials
- If you have your credentials stored as environment variables, it
will try to use those
- Finally, it will fall back on using interactive browser-based
device authentication.
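The fallback chain above amounts to trying an ordered list of credential sources and returning the first one that succeeds. A minimal stdlib-only sketch (the helper and the source names are assumptions for illustration, not the provider's actual implementation):

```python
def first_successful(credential_sources):
    # credential_sources: ordered list of (name, callable) pairs; each
    # callable returns a credential object or raises on failure.
    errors = {}
    for name, acquire in credential_sources:
        try:
            return name, acquire()
        except Exception as exc:  # a real chain would catch specific errors
            errors[name] = exc
    raise RuntimeError(f"all credential sources failed: {sorted(errors)}")
```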
If you are using a Sovereign cloud rather than the Azure global cloud,
you should select the appropriate cloud in the Azure section of
the *msticpyconfig*.
.. warning:: Although msticpy allows you to configure multiple entries for
workspaces in different tenants, you cannot currently authenticate to workspaces
that span multiple tenants in the same notebook. If you need to do this, you
should investigate
`Azure Lighthouse <https://azure.microsoft.com/services/azure-lighthouse/>`__.
This allows delegated access to workspaces in multiple tenants from a single
tenant.
Other MS Sentinel Documentation
-------------------------------
For examples of using this provider, see the sample
`Data Queries notebook <https://github.com/microsoft/msticpy/blob/master/docs/notebooks/Data_Queries.ipynb>`__.
Built-in :ref:`data_acquisition/DataQueries:Queries for Microsoft Sentinel`.
:py:mod:`Sentinel KQL driver API documentation<msticpy.data.drivers.kql_driver>`
.. _bool: https://docs.python.org/2/library/stdtypes.html
.. _VI API 2.5: ../../vim/version.rst#vimversionversion2
.. _vmodl.DynamicData: ../../vmodl/DynamicData.rst
vim.host.FlagInfo
=================
The FlagInfo data object type encapsulates the flag settings for a host. These properties are optional since the same structure is used to change the values during an edit or create operation.
:extends: vmodl.DynamicData_
:since: `VI API 2.5`_
Attributes:
backgroundSnapshotsEnabled (`bool`_, optional):
Flag to specify whether background snapshots are enabled.
cpylog package
==============
Submodules
----------
cpylog.utils module
-------------------
.. automodule:: cpylog.utils
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: cpylog
:members:
:undoc-members:
:show-inheritance:
Tcmalloc
========
‘tcmalloc’ is a library Google created as part of the google-perftools
suite for improving memory handling in a threaded program. It’s very
simple to use and works fine with Suricata. It leads to minor
speed-ups and also reduces memory usage quite a bit.
Installation
~~~~~~~~~~~~
On Ubuntu, install the libtcmalloc-minimal0 package:
::
apt-get install libtcmalloc-minimal0
On Fedora, install the gperftools-libs package:
::
yum install gperftools-libs
Usage
~~~~~
Use the tcmalloc by preloading it:
Ubuntu:
::
LD_PRELOAD="/usr/lib/libtcmalloc_minimal.so.0" suricata -c suricata.yaml -i eth0
Fedora:
::
LD_PRELOAD="/usr/lib64/libtcmalloc_minimal.so.4" suricata -c suricata.yaml -i eth0
.. Adding labels to the beginning of your lab is helpful for linking to the lab from other pages
.. _E_question_41:
-------------
Question 41
-------------
.. figure:: images/q41.png
Answer: :ref:`E_answer_41`
.. _reviewing:
Review of pull requests
=======================
In the ESMValTool community we use pull request reviews to ensure all code and
documentation contributions are of good quality.
An introduction to code reviews can be found in `The Turing Way`_, including
`why code reviews are important`_ and advice on
`how to have constructive reviews`_.
Most pull requests will need two reviews before they can be merged.
First a technical review takes place and then a scientific review.
Once both reviewers have approved a pull request, it can be merged.
These three steps are described in more detail below.
If a pull request contains only technical changes, e.g. a pull request that
corrects some spelling errors in the documentation or a pull request that
fixes some installation problem, a scientific review is not needed.
If you are a regular contributor, please try to review a bit more than two
other pull requests for every pull request you create yourself, to make sure
that each pull request gets the attention it deserves.
.. _technical_review:
1. Technical review
-------------------
Technical reviews are done by the technical review team.
This team consists of regular contributors that have a strong interest and
experience in software engineering.
Technical reviewers use the technical checklist from the
`pull request template`_ to make sure the pull request follows the standards we
would like to uphold as a community.
The technical reviewer also keeps an eye on the design and checks that no major
design changes are made without the approval from the technical lead development
team.
If needed, the technical reviewer can help with programming questions, design
questions, and other technical issues.
The technical review team can be contacted by writing
`@ESMValGroup/tech-reviewers`_ in a comment on an issue or pull request on
GitHub.
.. _scientific_review:
2. Scientific review
--------------------
Scientific reviews are done by the scientific review team.
This team consists of contributors that have a strong interest and
experience in climate science or related domains.
Scientific reviewers use the scientific checklist from the
`pull request template`_ to make sure the pull request follows the standards we
would like to uphold as a community.
The scientific review team can be contacted by writing
`@ESMValGroup/science-reviewers`_ in a comment on an issue or pull request on
GitHub.
3. Merge
--------
Pull requests are merged by the `@ESMValGroup/esmvaltool-coreteam`_.
Specifically, pull requests containing a :ref:`CMORizer script<new-dataset>` can only be merged by
`@remi-kazeroni`_, who will then add the CMORized data to the OBS data pool at
DKRZ and CEDA-Jasmin.
The team member who does the merge first checks that both the technical and
scientific reviewer approved the pull request and that the reviews were
conducted thoroughly.
He or she looks at the list of files that were changed in the pull request and
checks that all relevant checkboxes from the checklist in the pull request
template have been added and ticked.
Finally, he or she checks that the :ref:`pull_request_checks` passed and
merges the pull request.
The person doing the merge edits the merge commit message so that it
contains concise and meaningful text.
Any issues that were solved by the pull request can be closed after merging.
It is always a good idea to check with the author of an issue and ask if it is
completely solved by the related pull request before closing the issue.
The core development team can be contacted by writing `@ESMValGroup/esmvaltool-coreteam`_
in a comment on an issue or pull request on GitHub.
Frequently asked questions
--------------------------
How do I request a review of my pull request?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you know a suitable reviewer, e.g. because your pull request fixes an issue
that they opened or they are otherwise interested in the work you are
contributing, you can ask them for a review by clicking the cogwheel next to
'Reviewers' on the pull request 'Conversation' tab and clicking on that person.
When changing code, it is a good idea to ask the original authors of that code
for a review.
An easy way to find out who previously worked on a particular piece of code is
to use `git blame`_.
GitHub will also suggest reviewers based on who previously worked on the files
changed in a pull request.
Every recipe has a maintainer and authors listed in the recipe, it is a good
idea to ask these people for a review.
If there is no obvious reviewer, you can attract the attention of the relevant
team of reviewers by writing to `@ESMValGroup/tech-reviewers`_ or
`@ESMValGroup/science-reviewers`_ in a comment on your pull request.
You can also label your pull request with one of the labels
`looking for technical reviewer <https://github.com/ESMValGroup/ESMValTool/labels/looking%20for%20technical%20reviewer>`_
or
`looking for scientific reviewer <https://github.com/ESMValGroup/ESMValTool/labels/looking%20for%20scientific%20reviewer>`_,
though asking people for a review directly is probably more effective.
.. _easy_review:
How do I optimize for a fast review?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When authoring a pull request, please keep in mind that it is easier and
faster to review a pull request that does not contain many changes.
Try to add one new feature per pull request and change only a few files.
For the ESMValTool repository, try to limit changes to a few hundred lines of
code and new diagnostics to not much more than a thousand lines of code.
For the ESMValCore repository, a pull request should ideally change no more
than about a hundred lines of existing code, though adding more lines for unit
tests and documentation is fine.
If you are a regular contributor, make sure you regularly review other people's
pull requests, that way they will be more inclined to return the favor by
reviewing your pull request.
How do I find a pull request to review?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please pick pull requests to review yourself based on your interest or
expertise.
We try to be self organizing, so there is no central authority that will assign
you to review anything.
People may advertise that they are looking for a reviewer by applying the label
`looking for technical reviewer <https://github.com/ESMValGroup/ESMValTool/labels/looking%20for%20technical%20reviewer>`_
or `looking for scientific reviewer <https://github.com/ESMValGroup/ESMValTool/labels/looking%20for%20scientific%20reviewer>`_.
If someone knows you have expertise on a certain topic, they might request your
review on a pull request though.
If your review is requested, please try to respond within a few days if at all
possible.
If you do not have the time to review the pull request, notify the author and
try to find a replacement reviewer.
How do I actually do a review?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To do a review, go to the pull request on GitHub, the list of all pull requests
is available here https://github.com/ESMValGroup/ESMValCore/pulls for the ESMValCore
and here https://github.com/ESMValGroup/ESMValTool/pulls for the ESMValTool, click the
pull request you would like to review.
The top comment should contain (a selection of) the checklist available in the
`pull request template`_.
If it is not there, copy the relevant items from the `pull request template`_.
Which items from the checklist are relevant, depends on which files are changed
in the pull request.
To see which files have changed, click the tab 'Files changed'.
Please make sure you are familiar with all items from the checklist by reading
the content linked from :ref:`pull_request_checklist` and check all items
that are relevant.
Checklists with some of the items to check are available:
:ref:`recipe and diagnostic checklist <diagnostic_checklist>` and
:ref:`dataset checklist <dataset_checklist>`.
In addition to the items from the checklist, good questions to start a review
with are 'Do I understand why these changes improve the tool?' (if not, ask the
author to improve the documentation contained in the pull request and/or the
description of the pull request on GitHub) and 'What could possibly go wrong if
I run this code?'.
To comment on specific lines of code or documentation, click the 'plus' icon
next to a line of code and write your comment.
When you are done reviewing, use the 'Review changes' button in the top right
corner to comment on, request changes to, or approve the pull request.
What if the author and reviewer disagree?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When the author and the reviewer of a pull request have difficulty agreeing
on what needs to be done before the pull request can be approved, it is usually
both more pleasant and more efficient to schedule a meeting or co-working
session, for example using `Google meet`_ or `Jitsi meet`_.
When reviewing a pull request, try to refrain from making changes to the pull
request yourself, unless the author specifically agrees to those changes, as
this could potentially be perceived as offensive.
If talking about the pull requests in a meeting still does not resolve the
disagreement, ask a member of the `@ESMValGroup/esmvaltool-coreteam`_ for
their opinion and try to find a solution.
.. _`The Turing Way`: https://the-turing-way.netlify.app/reproducible-research/reviewing.html
.. _`why code reviews are important`: https://the-turing-way.netlify.app/reproducible-research/reviewing/reviewing-motivation.html
.. _`how to have constructive reviews`: https://the-turing-way.netlify.app/reproducible-research/reviewing/reviewing-recommend.html
.. _`@ESMValGroup/tech-reviewers`: https://github.com/orgs/ESMValGroup/teams/tech-reviewers
.. _`@ESMValGroup/science-reviewers`: https://github.com/orgs/ESMValGroup/teams/science-reviewers
.. _`@ESMValGroup/esmvaltool-coreteam`: https://github.com/orgs/ESMValGroup/teams/esmvaltool-coreteam
.. _`@remi-kazeroni`: https://github.com/remi-kazeroni
.. _`pull request template`: https://raw.githubusercontent.com/ESMValGroup/ESMValTool/main/.github/pull_request_template.md
.. _`Google meet`: https://meet.google.com
.. _`Jitsi meet`: https://meet.jit.si
.. _`git blame`: https://www.freecodecamp.org/news/git-blame-explained-with-examples/
Purpose
-------
* Setting up a development and testbed environment is not trivial. This slide deck documents the testbed I set up on my macOS laptop. Hopefully this will be helpful to the wider GridAPPS-D team or other development teams using ProvEn.
* Disclaimer: This guide is intended to offer a complete set of notes. However, there may be differences depending on the platform you are using and unfortunately there may be some gaps in knowledge.
What you should expect to do
----------------------------
Once the development system and testbed are completely setup you should be able to run a ProvEn server in debug mode, accessible by REST services
Prerequisites
-------------
* Download and install
* Latest Eclipse IDE J2EE (I used Eclipse Oxygen.2 (4.7.2))
* Java 8 JDK
* Brew install
* git 2.12.0
* gradle 4.5.1
* influxdb 1.4.2
* maven 3.3.3 3.3.9
* Download and set aside for later use
* payara-micro-5.181.jar from: https://s3-eu-west-1.amazonaws.com/payara.fish/Payara+Downloads/
* Please note that Eclipse will need to be configured to support your Gradle and Maven installations and to use your Java 8 JDK
Clone Proven Repositories
-------------------------
* https://github.com/pnnl/proven-message
* https://github.com/pnnl/proven-cluster
* https://github.com/pnnl/proven-client
* https://github.com/pnnl/proven-docker
Import Gradle Projects in Eclipse
---------------------------------
* Import proven-message and proven-member projects as gradle projects.
* Note: The “proven-cluster” project contains several nested layers of projects.
* Import the “proven-cluster” subproject “proven-member” – importing “proven-cluster” will cause undesirable effects, limiting what you can build.
.. figure:: import_projects.png
:align: left
:alt: import_projects-image
:figclass: align-left
Create General Eclipse Project for testbed Resources
----------------------------------------------------
This project (name it "payara-resources") will be used to provide a microservice engine for testing later. Add the payara-micro jar to the top folder.
.. figure:: payara-resources.png
:alt: payara-resources-image
..
Build and publish proven_message jar
-------------------------------------
* Open the following Eclipse views using Window->show view
* General->Console
* Gradle->Gradle Executions
* Gradle->Gradle Tasks
* Click on the proven_message project (you may need to click on the build.gradle file).
.. figure:: build_publish.png
:alt: build_publish-image
..
Build and publish proven_message-0.1-all-in-one jar
----------------------------------------------------
Build and publish the proven_message-0.1-all-in-one.jar file to the local Maven repository so that the hybrid services can use the interface.
* Open build task folder
* Double click on “build” task.
* Open publishing task folder.
* Double click on “publish” task.
* Double click on “publishToMavenLocal”
* Confirm no errors in Console View.
* Inspect the proven-message/build/libs/ directory for proven-message-0.1-all-in-one.jar
.. figure:: build_maven_local.png
:align: left
:alt: build_maven_local-image
:figclass: align-left
..
Building the ProvEn Server (proven-member)
------------------------------------------
* Use Gradle Tasks to Build the Proven hybrid service war file
* If necessary use Gradle IDE tasks to rebuild eclipse files.
.. figure:: build_proven_server.png
:align: left
:alt: build_proven_server-image
:figclass: align-left
Create External Tools Configurations
------------------------------------
Create Debug Configuration
--------------------------
.. figure:: debug_config.png
:align: left
:alt: debug_config-image
:figclass: align-left
Running the Hybrid Service
--------------------------
* Steps to running server in debug mode:
* Start InfluxDB
* Run External Tools Configurations “proven payara micro 181 [DEBUG CLONE 1]”
* Run debug configuration “proven micro 181 hybrid-service node 1”
* Startup can take several minutes
.. figure:: hybrid_service1.png
:align: left
:alt: hybrid_service1-image
:figclass: align-left
.. figure:: hybrid_service2.png
:align: left
:alt: hybrid_service2-image
:figclass: align-left
A correct startup should look something like this in the console:
.. figure:: startup.png
:align: left
:alt: startup-image
:figclass: align-left
Swagger UI of Debug Interface
-----------------------------
.. figure:: swagger.png
:align: left
:alt: swagger-image
:figclass: align-left
===============================================================================
Bibliography
===============================================================================
Catalyst Publications
---------------------------------
Data, software, and analyses that we have published for public use. We self-archive
all of our publications
`on Zenodo <https://zenodo.org/communities/catalyst-cooperative/>`__.
.. bibliography:: catalyst_pubs.bib
:all:
Work Citing Catalyst/PUDL
------------------------------------------
Academic, policy, and industry publications that reference PUDL and analyses done by
Catalyst Cooperative.
.. bibliography:: catalyst_cites.bib
:all:
Further Reading
---------------
Other research and publications relevant to the work we do.
.. bibliography:: further_reading.bib
:all:
**********************************
Primary User Interface: ImageStats
**********************************
.. moduleauthor:: Warren Hack, Christopher Hanley
.. currentmodule:: stsci.imagestats
.. automodule:: stsci.imagestats
:members:
:undoc-members:
pypeit.spectrographs.lbt\_mods module
=====================================
.. automodule:: pypeit.spectrographs.lbt_mods
:members:
:private-members:
:undoc-members:
:show-inheritance:
Installation
============
This section provides instructions on how to install the RNAget compliance
suite application.
As a prerequisite, Python 3 and pip must be installed on your system. The
application can be installed by running the following from the command line:
1. Clone the latest build from https://github.com/ga4gh-rnaseq/rnaget-compliance-suite
.. code-block:: bash
git clone https://github.com/ga4gh-rnaseq/rnaget-compliance-suite
2. Enter the rnaget-compliance-suite directory and install
.. code-block:: bash
cd rnaget-compliance-suite
python setup.py install
3. Confirm installation by executing the rnaget-compliance command
.. code-block:: bash
rnaget-compliance report
The `next article <usage.html>`_ explains how to run the compliance application.
Getting Started With Unittest
=============================
This tutorial will walk you through the basics of using MNMLSTC Unittest. It
will introduce the concepts behind MNMLSTC Unittest, as well
as explain *why* it works the way it does and does not take
the same approach to unit tests as many other C++ unit testing frameworks.
It should be noted that MNMLSTC Unittest assumes the following:
* You are using CMake
* You are using CTest as the test runner
* You are using a C++11 compiler
* Any code that interacts with MNMLSTC Unittest will have exceptions enabled
.. todo:: set a link to the API from here.
Additionally, this tutorial will not teach you everything you need to know to
effectively use MNMLSTC Unittest. It will show the most basic steps that need
to be taken to begin using MNMLSTC Unittest. To get a better idea of how
it works, see the API documentation.
.. _tutorial-concepts:
Concepts
--------
Unittest differs from most unit testing libraries in its naming conventions.
This section will discuss how these naming conventions result in a few
*technical* conceptual differences. At the end of the day, however, they are
all the same. You have cases, suites, tests, and fixtures. When explaining the
differences, similarities to other frameworks (such as GTest, Unity, or
python's unittest module) will be made.
.. glossary::
test suite
For unittest, the resulting executable produced from CMake is your test
suite. One could argue that organizing suites within suites ad infinitum is
unnecessary. This is a position that MNMLSTC Unittest takes.
test case
Unlike python's unittest or GTest, the 'test case' in MNMLSTC Unittest is
just a 'test'. Tests are made up of steps, or tasks. A case, technically
speaking, is a collection of steps taken to ensure that your code is
sane. So, while other systems refer to a collection of tasks as a test
case, MNMLSTC Unittest calls these a test.
test step
A step is placed within a test. They can fail, pass, or skip. They are,
outside of a single assertion statement, the smallest unit of execution
within MNMLSTC Unittest, and where most of your work will go.
As you can see from the *very* brief explanations, MNMLSTC Unittest doesn't
differ much from other unit testing libraries. However, in the realm of C++,
it does differ in one major way: *There are absolutely no preprocessor macros*.
But why would MNMLSTC Unittest have none of these?
When it comes to the preprocessor macros, one has to understand what these
macros actually do to understand any potential compiler errors. And at runtime,
we aren't given an idea of *why* the assertion failed (e.g., what are the
values inside of the tested variables, rather than what their names are/what
line it all happens on?). Python's unittest does this well (and it comes as a
result of its ability to perform runtime reflection on variables, as well as
know the current line), but C++ is not python, no matter how hard one wished
it was. As such, MNMLSTC Unittest makes do with what it can.
One might notice, however, the absence of the common setup, teardown and
test fixture concepts. Because of RAII, it is deemed unnecessary for these to
be provided, simply because every other test framework implements them. There
is a way to imitate this kind of consistent setup and teardown via capturing
an object by value within a lambda declaration.
Additionally, initialization of resources can be postponed by simply
initializing them within a lambda, and this should be taken advantage of.
The most important part of MNMLSTC Unittest is that it tries to make you solve
the problem at hand: do the components you are testing work?
Your First Unittest Unit Test
-----------------------------
Well, enough about the concepts of Unittest. It's time to write our first
test!
.. highlight:: cpp
Because of how MNMLSTC Unittest works, we can declare any test anywhere within
the program, as long as it is within the scope of *some* function. For the
purposes of this tutorial, we'll simply place everything within main::
#include <unittest/unittest.hpp>
int main () {
using namespace unittest;
test("my-first-test") = {
task("first-task") = []{ assert::fail("Not Yet Implemented"); },
task("second-task") = []{ assert::fail("Not Yet Implemented"); }
};
monitor::run();
}
And now, a brief explanation of just what in tarnation we think we're doing
here. First, we include the unittest amalgamation header. This header is
provided to ensure that the proper namespaces and declarations are available
for the user.
Next, we do a ``using namespace unittest``. This is *absolutely* vital, as it
allows unittest to use its automatic value printing fallback without issue.
Now to get to the meat of the program. We declare a test with the name
"my-first-test". All tests (and tasks!) *should* use a string literal. However,
as of right now, their interface is to take a ``std::string&&``.
On the same line, we begin to assign an ``initializer_list`` of tasks. Just know
that the only way to submit tasks to a test case is to place them within
this ``initializer_list``.
Tasks work almost like tests in that we assign a value to them after their
declaration. In the case of a task, it *always* takes a
``std::function<void()>`` by rvalue reference. This means anything you pass
into it will not be used elsewhere. At that point, MNMLSTC Unittest now *owns*
that object and will do with it as it pleases. While it is possible to
construct a ``std::function<void()>`` in a variety of ways, it is easiest
to simply use a lambda. The lambda will allow for capturing fixtures (declared
with RAII) by value or by reference. Finally, because we really have nothing
to do, *yet*, we call ``assert::fail`` which will immediately make the test
runner stop handling the task.
Finally we call ``monitor::run()``, which is located in the unittest namespace.
.. note:: Make sure that this is the last function call you make. Whether
all tests and tasks pass or not is irrelevant, as it will *always*
call ``std::exit``.
Of course, this only shows how to write a test, not use it with CMake or CTest.
So let's do that!
.. highlight:: cmake
Our CMakeLists.txt file will look like::
cmake_minimum_required(VERSION 2.8.11)
project(our-first-unittest-test CXX)
find_package(unittest REQUIRED)
include(CTest)
include_directories(${UNITTEST_INCLUDE_DIR})
add_executable(my-first-unittest-test ${PROJECT_SOURCE_DIR}/test.cpp)
add_test(my-first-unittest my-first-unittest-test)
Fairly simple! Go ahead and build then run your tests. If you did everything
right (and let's be honest here, you totally did!), you should see ctest
giving you the 'FAILURE' output. That's fine, because we're going to start
expanding on our test file for the rest of the tutorial.
.. highlight:: cpp
Now that we've got our test building, it's time to get down and dirty with the
assertions. Go ahead and open up your test file. Within the first-task, we're
going to write the test to succeed now. Everybody loves math, right? So let's do
some. Place the following within the first-task lambda (make sure to remove the
calls to ``assert::fail``)::
assert::not_equal(1 + 2, 4);
assert::equal(2 + 2, 4);
Fairly simple, but we're just trying to show that ``assert::equal`` works!
Compile and run, and we'll still get the failure output. That's fine! We can
modify the second-task to be a bit more complex. Let's create a sequence and
see if it has a value or not. Place the following in the second-task lambda::
std::vector<int> sequence = { 1, 2, 3, 4, 5, 6 };
/* Assume you wish to do some work with the sequence here */
assert::not_in(7, sequence);
assert::in(4, sequence);
If we were to build and run our test, it should now succeed! Our unittest test
file should now look like this::
#include <unittest/unittest.hpp>
#include <vector>
int main () {
using namespace unittest;
test("my-first-test") = {
task("first-task") = []{
assert::not_equal(1 + 2, 4);
assert::equal(2 + 2, 4);
},
task("second-task") = []{
std::vector<int> sequence = { 1, 2, 3, 4, 5, 6 };
assert::not_in(7, sequence);
assert::in(4, sequence);
}
};
monitor::run();
}
Pretty neat!
dork.teamcity_agent
===================
Install a TeamCity_ agent on a dork host to connect to a TeamCity_ instance
to work as a continuous integration build server.
.. _TeamCity: https://www.jetbrains.com/teamcity/
.. py:currentmodule:: bob.kaldi
.. testsetup:: *
from __future__ import print_function
import pkg_resources
import bob.kaldi
import bob.io.audio
import tempfile
import numpy
================================
Voice Activity Detection (VAD)
================================
Energy-based
------------
A simple energy-based VAD is implemented in :py:func:`bob.kaldi.compute_vad`.
The function expects the speech samples as :obj:`numpy.ndarray` and the sampling
rate as :obj:`float`, and returns an array of VAD labels :obj:`numpy.ndarray`
with the labels of 0 (zero) or 1 (one) per speech frame:
.. doctest::
>>> sample = pkg_resources.resource_filename('bob.kaldi', 'test/data/sample16k.wav')
>>> data = bob.io.audio.reader(sample)
>>> print ("Compute VAD"); VAD_labels = bob.kaldi.compute_vad(data.load()[0], data.rate) # doctest: +ELLIPSIS
Compute VAD...
>>> print (len(VAD_labels))
317
DNN-based
---------
A Deep Neural Network (DNN), frame-based, VAD is implemented in
:py:func:`bob.kaldi.compute_dnn_vad`. A DNN pre-trained on the AMI_ database
of headset microphone recordings is used for a forward pass over the MFCC
features. The VAD decision is computed by comparing the silence
posterior feature with the silence threshold.
.. doctest::
>>> print("Compute DNN VAD"); DNN_VAD_labels = bob.kaldi.compute_dnn_vad(data.load()[0], data.rate) # doctest: +ELLIPSIS
Compute DNN VAD...
>>> print (len(DNN_VAD_labels))
317
================================
Speaker recognition evaluation
================================
MFCC Extraction
---------------
Two functions are implemented to extract MFCC features
:py:func:`bob.kaldi.mfcc` and :py:func:`bob.kaldi.mfcc_from_path`. The former
function accepts the speech samples as :obj:`numpy.ndarray`, whereas the latter
accepts the filename as :obj:`str`:
1. :py:func:`bob.kaldi.mfcc`
.. doctest::
>>> feat = bob.kaldi.mfcc(data.load()[0], data.rate, normalization=False)
>>> print (feat.shape)
(317, 39)
2. :py:func:`bob.kaldi.mfcc_from_path`
.. doctest::
>>> feat = bob.kaldi.mfcc_from_path(sample)
>>> print (feat.shape)
(317, 39)
UBM training and evaluation
---------------------------
Both diagonal and full covariance Universal Background Models (UBMs)
are supported, and speakers can be enrolled and scored:
.. doctest::
>>> # Train small diagonal GMM
>>> diag_gmm_file = tempfile.NamedTemporaryFile()
>>> full_gmm_file = tempfile.NamedTemporaryFile()
>>> print ("ubm train"); dubm = bob.kaldi.ubm_train(feat, diag_gmm_file.name, num_gauss=2, num_gselect=2, num_iters=2) # doctest: +ELLIPSIS
ubm train...
>>> print ("Train small full GMM"); ubm = bob.kaldi.ubm_full_train(feat, dubm, full_gmm_file.name, num_gselect=2, num_iters=2) # doctest: +ELLIPSIS
Train...
>>> # Enrollment - MAP adaptation of the UBM-GMM
>>> print ("Enrollment"); spk_model = bob.kaldi.ubm_enroll(feat, dubm) # doctest: +ELLIPSIS
Enrollment...
>>> print ("GMM scoring"); score = bob.kaldi.gmm_score(feat, spk_model, dubm) # doctest: +ELLIPSIS
GMM...
>>> print ('%.2f' % score)
0.29
iVector + PLDA training and evaluation
--------------------------------------
The implementation is based on the Kaldi SRE10_ recipe. It includes
iVector extractor training from full-diagonal GMMs, PLDA model
training, and PLDA scoring.
.. doctest::
>>> plda_file = tempfile.NamedTemporaryFile()
>>> mean_file = tempfile.NamedTemporaryFile()
>>> spk_file = tempfile.NamedTemporaryFile()
>>> test_file = pkg_resources.resource_filename('bob.kaldi', 'test/data/test-mobio.ivector')
>>> features = pkg_resources.resource_filename('bob.kaldi', 'test/data/feats-mobio.npy')
>>> train_feats = numpy.load(features)
>>> test_feats = numpy.loadtxt(test_file)
>>> # Train PLDA model; plda[0] - PLDA model, plda[1] - global mean
>>> print ("Train PLDA"); plda = bob.kaldi.plda_train(train_feats, plda_file.name, mean_file.name) # doctest: +ELLIPSIS
Train...
>>> # Speaker enrollment (calculate average iVectors for the first speaker)
>>> print ("Speaker enrollment"); enrolled = bob.kaldi.plda_enroll(train_feats[0], plda[1]) # doctest: +ELLIPSIS
Speaker...
>>> # Calculate PLDA score
>>> print ("PLDA score"); score = bob.kaldi.plda_score(test_feats, enrolled, plda[0], plda[1]) # doctest: +ELLIPSIS
PLDA...
>>> print ('%.4f' % score)
-23.9922
======================
Deep Neural Networks
======================
Forward pass
------------
A forward pass with a pre-trained DNN is implemented in
:py:func:`bob.kaldi.nnet_forward`. Output posterior features are
returned as :obj:`numpy.ndarray`. The first output features of each row (a
processed speech frame) contain the posteriors of silence, laughter
and noise, indexed 0, 1 and 2, respectively. These posteriors are
used for silence detection in :py:func:`bob.kaldi.compute_dnn_vad`,
but might also be used for laughter and noise detection.
.. doctest::
>>> nnetfile = pkg_resources.resource_filename('bob.kaldi', 'test/dnn/ami.nnet.txt')
>>> transfile = pkg_resources.resource_filename('bob.kaldi', 'test/dnn/ami.feature_transform.txt')
>>> feats = bob.kaldi.cepstral(data.load()[0], 'mfcc', data.rate, normalization=False)
>>> nnetf = open(nnetfile)
>>> trnf = open(transfile)
>>> dnn = nnetf.read()
>>> trn = trnf.read()
>>> nnetf.close()
>>> trnf.close()
>>> print ("NNET forward"); ours = bob.kaldi.nnet_forward(feats, dnn, trn) # doctest: +ELLIPSIS
NNET...
>>> print (ours.shape)
(317, 43)
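The silence-thresholding idea described above can be sketched with plain NumPy. The posterior values and the 0.5 threshold below are illustrative assumptions, not bob.kaldi's internals or defaults:

```python
import numpy as np

# Hypothetical posteriors for five frames; following the convention above,
# column 0 is silence, column 1 laughter, column 2 noise.
post = np.array([
    [0.9, 0.02, 0.03],
    [0.1, 0.05, 0.05],
    [0.2, 0.10, 0.10],
    [0.8, 0.05, 0.05],
    [0.3, 0.20, 0.10],
])

silence_threshold = 0.5
# A frame is labelled speech (1) when its silence posterior falls below
# the threshold, and silence (0) otherwise.
vad_labels = (post[:, 0] < silence_threshold).astype(int)
print(vad_labels.tolist())  # [0, 1, 1, 0, 1]
```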
===================
Speech recognition
===================
Speech recognition is a process that generates a text transcript
given speech audio. Most current Automatic Speech Recognition
(ASR) systems use the following pipeline:
.. image:: img/ASR.png
The ASR system has to be first trained. More specifically, its key
statistical models:
* Pronunciation model, the lexicon, that associates written and spoken
forms of words. The lexicon contains words :math:`W` and defines them
as sequences of phonemes (the speech sounds) :math:`Q`.
* Acoustic model, GMMs or DNNs, that associates the speech features
:math:`O` and the spoken words :math:`Q`.
* Language model, usually an n-gram model, that captures the most probable
sequences of :math:`W` of a particular language.
The transcript of the input audio waveform :math:`X` is then generated
by transformation of :math:`X` to features :math:`O` (for example
cepstral features computed by :py:func:`bob.kaldi.cepstral`), and an
ASR decoder that outputs the most probable transcript :math:`W^*`
using the pre-trained statistical models.
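In equation form (the textbook ASR decoding criterion, not something specific to bob.kaldi), the decoder searches for the word sequence

```latex
W^* = \underset{W}{\operatorname{argmax}}\; P(W \mid O)
    = \underset{W}{\operatorname{argmax}}\; P(O \mid W)\, P(W)
```

where :math:`P(O \mid W)` is supplied by the acoustic and pronunciation models and :math:`P(W)` by the language model.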
Acoustic models
---------------
The basic acoustic model is called a monophone model, where :math:`Q`
consists just of the phonemes, considered contextually
independent. The training of such a model has the following pipeline:
* Model initialization for a given Hidden Markov Model (HMM)
structure, usually 3-state left-to-right model.
* Compiling training graphs that compiles Finite State Transducers
(FSTs), one for each train utterance. This requires the lexicon, and
the word transcription of the training data.
* First alignment and update stage that produces a transition-model
and GMM objects for equally spaced alignments.
* Iterative alignment and update stage.
.. doctest::
>>> fstfile = pkg_resources.resource_filename('bob.kaldi', 'test/hmm/L.fst')
>>> topofile = pkg_resources.resource_filename('bob.kaldi', 'test/hmm/topo.txt')
>>> phfile = pkg_resources.resource_filename('bob.kaldi', 'test/hmm/sets.txt')
>>> # word labels
>>> uttid='test'
>>> labels = uttid + ' 27312 27312 27312'
>>> train_set={}
>>> train_set[uttid]=feats
>>> topof = open(topofile)
>>> topo = topof.read()
>>> topof.close()
>>> print ("Train mono"); model = bob.kaldi.train_mono(train_set, labels, fstfile, topo, phfile , numgauss=2, num_iters=2) # doctest: +ELLIPSIS
Train...
>>> print (model.find('TransitionModel'))
1
Phone frame decoding
--------------------
Simple frame decoding can be done by finding the indices of the
maximum values along the frame axis. The following example performs
a forward pass with pre-trained phone DNN, and finds :math:`argmax()`
of the output posterior features. Looking at the DNN labels, the
phones are decoded per frame.
.. doctest::
>>> sample = pkg_resources.resource_filename('bob.kaldi', 'test/data/librivox.wav')
>>> data = bob.io.audio.reader(sample)
>>> print ("Compute dnn phone"); post, labs = bob.kaldi.compute_dnn_phone(data.load()[0], data.rate) # doctest: +ELLIPSIS
Compute...
>>> mdecoding = numpy.argmax(post,axis=1) # max decoding
>>> print (labs[mdecoding[250]]) # the last spoken sound of sample is N (of the word DOMAIN)
N
.. include:: links.rst
.. _set-config:
Setting the configuration
=========================
.. code:: python
import blendz
Classes in ``blendz`` use a ``Configuration`` object instance to manage
all of their settings. These *can* be created directly by instantiating
the class, and passed to classes that require them using the ``config``
keyword argument:
.. code:: python
cfg = blendz.Configuration(configuration_option='setting_value')
templates = blendz.fluxes.Templates(config=cfg)
However, constructing the configuration like this is usually not
necessary. The ``photoz`` class is designed as the only user-facing
class and handles the configuration for all of the classes it depends
on. Instead, there are two recommended ways of setting the
configuration:
Pass keyword arguments to classes
------------------------------------
The configuration can be set programmatically by passing settings as
keyword arguments:
.. code:: python
pz = blendz.Photoz(data_path='path/to/data.txt',
mag_cols = [1, 2, 3, 4, 5],
sigma_cols = [6, 7, 8, 9, 10],
ref_band = 2,
filters=['sdss/u', 'sdss/g',
'sdss/r', 'sdss/i', 'sdss/z'])
Read in a configuration file
----------------------------
Configurations can also be read in from a file (or multiple files) by
using the ``config_path`` keyword argument.
``config_path`` should either be a string of the absolute file path to
the configuration file to be read, or a list of strings if you want to
read multiple files.
.. code:: python
path1 = join(blendz.RESOURCE_PATH, 'config/testRunConfig.txt')
path2 = join(blendz.RESOURCE_PATH, 'config/testDataConfig.txt')
data = blendz.Photoz(config_path=[path1, path2])
Configuration file format
--------------------------
Configuration files are INI-style files read using the
`configparser <https://docs.python.org/3/library/configparser.html>`_
module of the standard python library. These consist of ``key = value`` pairs
separated by either a ``=`` or ``:`` separator. Whitespace around the separator is optional.
A few notes about their format:
- Configuration options *must* be separated into two (case-sensitive) sections, ``[Run]`` and ``[Data]``. An explanation of all possible configuration options, split by these sections, can be found on the :ref:`config-options` page.
- Comments can be added to configuration files using ``#``
- If you want to use default settings, leave that option out of the configuration file. Don't just leave an option blank after the ``=``/``:`` separator.
- Multiple configuration files can be loaded at once. While this provides a simple way to separate ``[Run]`` and ``[Data]`` settings (e.g., for running the same analysis on different datasets), options can be spread over different files however you want, provided that each setting is within its correct section.
An example of a configuration file (leaving some settings as default) is given below.
.. code:: ini
[Data]
data_path = path/to/datafile.txt
mag_cols = 1, 3, 5, 7, 9
sigma_cols = 2, 4, 6, 8, 10
ref_band = 2
filters = sdss/u, sdss/g, sdss/r, sdss/i, sdss/z
zero_point_errors = 0.01, 0.01, 0.01, 0.01, 0.01
[Run]
z_hi = 2
ref_mag_lo = 20
ref_mag_hi = 32
template_set = BPZ6
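As a sanity check, the ``configparser`` behaviour described above can be exercised directly. This is purely illustrative of the INI mechanics; blendz does its own parsing internally:

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[Data]
ref_band = 2
filters = sdss/u, sdss/g, sdss/r, sdss/i, sdss/z

[Run]
z_hi = 2
""")

# Values are read as strings, and comma-separated lists stay as one string
# until split. A blank value would be kept as an empty string, which is why
# unused options should be omitted rather than left blank.
print(cfg["Run"]["z_hi"])
print(cfg["Data"]["filters"].split(", "))
```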
.. Copyright FUJITSU LIMITED 2016-2019
.. _applianceMySoftware-getAll:
applianceMySoftware_getAll
--------------------------
.. function:: GET /users/{uid}/appliances/{aid}/mysoftware
.. sidebar:: Summary
* Method: ``GET``
* Response Code: ``200 / 304``
* Response Formats: ``application/xml`` ``application/json``
* Since: ``UForge 1.0``
Retrieves all the 3rd party software components that have been added to the appliance from the user's ``Software Library``.
This returns a list of :ref:`mySoftware-object` objects.
Security Summary
~~~~~~~~~~~~~~~~
* Requires Authentication: ``true``
* Entitlements Required: ``appliance_create``
URI Parameters
~~~~~~~~~~~~~~
* ``uid`` (required): the user name (login name) of the :ref:`user-object`
* ``aid`` (required): the id of the :ref:`appliance-object`
HTTP Request Body Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
None
Example Request
~~~~~~~~~~~~~~~
.. code-block:: bash
curl "https://uforge.example.com/api/users/{uid}/appliances/{aid}/mysoftware" -X GET \
-u USER_LOGIN:PASSWORD -H "Accept: application/xml"
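An equivalent request can be sketched in Python using only the standard library. The URI template comes from the ``.. function::`` line above; the base URL, user name, appliance id and credentials are placeholders:

```python
import base64
import urllib.request

def mysoftware_uri(base, uid, aid):
    """Build the collection URI for a user's appliance software components."""
    return f"{base}/users/{uid}/appliances/{aid}/mysoftware"

url = mysoftware_uri("https://uforge.example.com/api", "jdoe", 42)
# HTTP Basic authentication, matching curl's -u option.
token = base64.b64encode(b"USER_LOGIN:PASSWORD").decode()
request = urllib.request.Request(url, headers={
    "Accept": "application/xml",
    "Authorization": f"Basic {token}",
})
# urllib.request.urlopen(request) would perform the actual GET call.
print(url)
```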
.. seealso::
* :ref:`appliance-object`
* :ref:`applianceProject-getAll`
* :ref:`appliance-clone`
* :ref:`appliance-create`
* :ref:`appliance-delete`
* :ref:`appliance-get`
* :ref:`appliance-getAll`
* :ref:`appliance-update`
* :ref:`mySoftware-object`
* :ref:`user-object`
Configuration: *config.py*
==========================
To keep ElegantRL simple to use, we allow users to control the training process through an ``Arguments`` class. This class contains all adjustable parameters of the training process, including environment setup, model training, model evaluation, and resource allocation.
The ``Arguments`` class provides users a unified interface to customize the training process and save the training profile. The class should be initialized at the start of the training process.
.. code-block:: python
from elegantrl.train.config import Arguments
from elegantrl.agents.AgentPPO import AgentPPO
from elegantrl.envs.Gym import build_env
import gym
args = Arguments(build_env('Pendulum-v1'), AgentPPO())
The full list of parameters in ``Arguments``:
.. autoclass:: elegantrl.train.config.Arguments
:members:
.. Fractural VNE documentation master file, created by
sphinx-quickstart on Sat Aug 21 22:32:20 2021.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Fractural VNE Docs
=====================================================================
Welcome to the documentation of Fractural Visual Novel Engine, a free and open source visual novel game engine built on Godot.
.. toctree::
:maxdepth: 1
:caption: General
:name: sec-general
.. toctree::
:maxdepth: 1
:caption: Getting Started
:name: sec-getting-started
step-by-step/index
.. _cw21-fs-audience:
CW21 Audience
=============================
The Collaborations Workshop series brings together researchers, developers, innovators, managers, funders, publishers, leaders and educators to explore best practices and the future of research software.
As such, our main target audience includes changemakers, advocates and ambassadors of all career stages and identities for each of these stakeholder groups who will continue and build on the conversation and collaborations beyond the workshop.
Our target audience comprises people who can dedicate time to a three-day event, as this will allow us to meet our :doc:`fs-goals-and-objectives`.
By being multi-disciplinary, we can engage a wide range of people who can bring knowledge and skills back to their domains.
The geographic location of the CW target audience is the UK/Europe.
As CW21 will be online, the target audience's geographic location will not be limited except by time zone - participants in Europe, Africa and eastern timezones of North and South America are more likely to participate in a 10:00-16:00 BST event.
This will give us the opportunity to invite changemakers who would not normally be able to travel to the UK, and make the event more accessible.
We also aim to engage people in other timezones in the discussions through social media and sharing of outputs.
Below we discuss specific audiences, communities and job roles to target in relation to the themes of CW21. We will encourage our contacts within these communities to reach out to their members, so that the recommendation/promotion comes from a trusted member and is more likely to be actioned.
FAIR Research Software
-------------------------
- Funders
- Policy makers
- Publishers
- Librarians
- Research Data Managers
- FAIR projects (not necessarily software-focused)
- Open Science/Research advocates
Example communities
- `NL eScience Centre <https://www.esciencecenter.nl/>`_
- `UoM eScience Lab <https://esciencelab.org.uk/>`_
- `RDA FAIR4RS Working Group <https://www.rd-alliance.org/groups/fair-4-research-software-fair4rs-wg>`_
- `FAIRsFAIR <https://www.fairsfair.eu/>`_
- `FAIRsharing <https://fairsharing.org/>`_
- `FAIRplus <https://fairplus-project.eu/>`_
Diversity and Inclusion
-------------------------
- Community Managers
- Recruiters
- Policy makers
- Funders
- Grassroots groups
Example communities
- `The Carpentries <https://carpentries.org/>`_
- `CSCCE <https://www.cscce.org/>`_
- `UKRI <https://www.ukri.org/about-us/equality-diversity-and-inclusion/>`_
- `Wellcome <https://wellcome.ac.uk/what-we-do/our-work/diversity-and-inclusion>`_
- `WHPC <https://womeninhpc.org/>`_
Software Sustainability
-------------------------
- Developers
- Researchers
- Publishers
- Funders
Example communities
- `RSE <https://society-rse.org/>`_ (UK, US, NL, DE, etc...)
- `eLife <https://elifesciences.org/>`_
- `JOSS <https://joss.theoj.org/>`_ | 41.8 | 292 | 0.752221 |
3f8d4d5c2128858e888e66ca0999e75f039ea050 | 598 | rst | reStructuredText | docs/networkapi.api_ip.v4.tasks.rst | vinicius-marinho/GloboNetworkAPI | 94651d3b4dd180769bc40ec966814f3427ccfb5b | [
"Apache-2.0"
] | 73 | 2015-04-13T17:56:11.000Z | 2022-03-24T06:13:07.000Z | docs/networkapi.api_ip.v4.tasks.rst | leopoldomauricio/GloboNetworkAPI | 3b5b2e336d9eb53b2c113977bfe466b23a50aa29 | [
"Apache-2.0"
] | 99 | 2015-04-03T01:04:46.000Z | 2021-10-03T23:24:48.000Z | docs/networkapi.api_ip.v4.tasks.rst | shildenbrand/GloboNetworkAPI | 515d5e961456cee657c08c275faa1b69b7452719 | [
"Apache-2.0"
] | 64 | 2015-08-05T21:26:29.000Z | 2022-03-22T01:06:28.000Z | networkapi.api_ip.v4.tasks package
==================================
Submodules
----------
networkapi.api_ip.v4.tasks.ipv4 module
--------------------------------------
.. automodule:: networkapi.api_ip.v4.tasks.ipv4
:members:
:undoc-members:
:show-inheritance:
networkapi.api_ip.v4.tasks.ipv6 module
--------------------------------------
.. automodule:: networkapi.api_ip.v4.tasks.ipv6
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: networkapi.api_ip.v4.tasks
:members:
:undoc-members:
:show-inheritance:
| 19.290323 | 47 | 0.550167 |
e52984ab4e06924eced9908a4719110ede086fac | 212 | rst | reStructuredText | _unittests/ut_helpgen/data/rstplot.rst | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 18 | 2015-11-10T08:09:23.000Z | 2022-02-16T11:46:45.000Z | _unittests/ut_helpgen/data/rstplot.rst | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 321 | 2015-06-14T21:34:28.000Z | 2021-11-28T17:10:03.000Z | _unittests/ut_helpgen/data/rstplot.rst | Pandinosaurus/pyquickhelper | 326276f656cf88989e4d0fcd006ada0d3735bd9e | [
"MIT"
] | 10 | 2015-06-20T01:35:00.000Z | 2022-01-19T15:54:32.000Z |
.. _l-plot:
exemple de graph
================
.. plot::
import matplotlib.pyplot as plt
import numpy as np
x = np.random.randn(1000)
plt.hist( x, 20)
plt.grid()
plt.title('r')
plt.show()
| 13.25 | 34 | 0.551887 |
840966a0e335337c6bdc4bb907faea3e293a98c3 | 81 | rst | reStructuredText | docs/computer_implementation/inputfiles/runcontrol/card86.rst | dsi-llc/EFDCPlus | 27ece1cd0bb9e02a46d1ad20f343bc5d109acfb3 | [
"BSD-3-Clause"
] | 35 | 2019-09-11T23:39:25.000Z | 2022-03-29T08:14:29.000Z | docs/computer_implementation/inputfiles/runcontrol/card86.rst | dsi-llc/EFDCPlus8.5 | 27ece1cd0bb9e02a46d1ad20f343bc5d109acfb3 | [
"BSD-3-Clause"
] | 3 | 2020-08-06T01:59:24.000Z | 2022-02-25T01:46:32.000Z | docs/computer_implementation/inputfiles/runcontrol/card86.rst | dsi-llc/EFDCPlus | 27ece1cd0bb9e02a46d1ad20f343bc5d109acfb3 | [
"BSD-3-Clause"
] | 22 | 2020-03-06T06:34:38.000Z | 2022-02-02T04:15:55.000Z | .. card86:
card86
-------
.. literalinclude:: ../efdc_card_opts/card86
| 11.571429 | 47 | 0.567901 |
a9608f8ec803614a7b08ce131601fd8f29066659 | 180 | rst | reStructuredText | v1.0/block_interleaver/syn/block_interleaver/block_interleaver.runs/impl_1/.route_design.begin.rst | mateusgs/FEC_IEEE.802.15.7 | 3177d0cb07d99439f206a58384a30eec91e88e17 | [
"MIT"
] | 1 | 2021-05-25T03:10:53.000Z | 2021-05-25T03:10:53.000Z | v1.0/block_interleaver/syn/block_interleaver/block_interleaver.runs/impl_1/.route_design.begin.rst | mateusgs/FEC_IEEE.802.15.7 | 3177d0cb07d99439f206a58384a30eec91e88e17 | [
"MIT"
] | null | null | null | v1.0/block_interleaver/syn/block_interleaver/block_interleaver.runs/impl_1/.route_design.begin.rst | mateusgs/FEC_IEEE.802.15.7 | 3177d0cb07d99439f206a58384a30eec91e88e17 | [
"MIT"
] | null | null | null | <?xml version="1.0"?>
<ProcessHandle Version="1" Minor="0">
<Process Command=".planAhead." Owner="RosanaVillela" Host="ESCRITORIO" Pid="13536">
</Process>
</ProcessHandle>
| 30 | 87 | 0.677778 |
3f162c6b8619e3cb6f33b79b291aaa4ced17f007 | 214 | rst | reStructuredText | docs/reference/functions/transformation/uniquify_nmr_property.rst | byuccl/spydrnet-tmr | ca9f026db70be96d57aa3604447abecb68670c56 | [
"BSD-3-Clause"
] | null | null | null | docs/reference/functions/transformation/uniquify_nmr_property.rst | byuccl/spydrnet-tmr | ca9f026db70be96d57aa3604447abecb68670c56 | [
"BSD-3-Clause"
] | 6 | 2021-08-13T18:39:59.000Z | 2022-03-04T22:20:44.000Z | docs/reference/functions/transformation/uniquify_nmr_property.rst | byuccl/spydrnet-tmr | ca9f026db70be96d57aa3604447abecb68670c56 | [
"BSD-3-Clause"
] | null | null | null | .. _uniquify_nmr_property:
=======================
uniquify_nmr_property
=======================
.. currentmodule:: spydrnet_tmr
.. autofunction:: uniquify_nmr_property
.. autosummary::
:toctree: generated/
| 16.461538 | 39 | 0.598131 |
0b57a99cc306529160f70389f2b1c567e4aed53d | 186 | rst | reStructuredText | docs/api/cybox/objects/custom_object.rst | tirkarthi/python-cybox | a378deb68b3ac56360c5cc35ff5aad1cd3dcab83 | [
"BSD-3-Clause"
] | 40 | 2015-03-05T18:22:51.000Z | 2022-03-06T07:29:25.000Z | docs/api/cybox/objects/custom_object.rst | tirkarthi/python-cybox | a378deb68b3ac56360c5cc35ff5aad1cd3dcab83 | [
"BSD-3-Clause"
] | 106 | 2015-01-12T18:52:20.000Z | 2021-04-25T22:57:52.000Z | docs/api/cybox/objects/custom_object.rst | tirkarthi/python-cybox | a378deb68b3ac56360c5cc35ff5aad1cd3dcab83 | [
"BSD-3-Clause"
] | 30 | 2015-03-25T07:24:40.000Z | 2021-07-23T17:10:11.000Z | :mod:`cybox.objects.custom_object` module
=========================================
.. automodule:: cybox.objects.custom_object
:members:
:undoc-members:
:show-inheritance:
| 23.25 | 43 | 0.553763 |
4f509609dbb92a83246e634b91125202ca5d9436 | 2,748 | rst | reStructuredText | docs/development/guide/documentation.rst | amagee/staircase | e0a45c05648e778ef61b624836908726fcc98b48 | [
"MIT"
] | 25 | 2020-09-05T01:26:43.000Z | 2021-01-31T06:51:47.000Z | docs/development/guide/documentation.rst | amagee/staircase | e0a45c05648e778ef61b624836908726fcc98b48 | [
"MIT"
] | 76 | 2020-03-03T22:26:19.000Z | 2021-07-09T09:29:38.000Z | docs/development/guide/documentation.rst | amagee/staircase | e0a45c05648e778ef61b624836908726fcc98b48 | [
"MIT"
] | 10 | 2021-08-25T02:01:09.000Z | 2021-11-23T10:31:12.000Z | .. _development.documentation:
Contributing to the documentation
======================================
The documentation is written with `reStructuredText <https://docutils.sourceforge.io/rst.html>`_ and organised into files with *.rst* extensions. This format can be turned into the html pages you are reading using `Sphinx <https://www.sphinx-doc.org/en/master/>`_. See the `reStructuredText Primer <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html>`_ for an introduction on using this language.
Sphinx also generates documentation from the docstrings in the code. Docstrings are specific to a particular function and should not be confused with *code comments*, which are written for the developer. Although Sphinx can work with several docstring formats, the only format used in staircase is the *Numpy Docstring Format* (`see examples here <https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html>`_). In addition, the `pandas docstring guide <https://pandas.pydata.org/docs/development/contributing_docstring.html>`_ is a valuable resource.
Note that Sphinx will ignore producing documentation for functions whose name starts with an underscore. The conventions behind underscores in Python names are summed up well by Dan Bader in his `blog <https://dbader.org/blog/meaning-of-underscores-in-python>`_.
Code and plotting examples are facilitated through either the `IPython directive <https://matplotlib.org/sampledoc/ipython_directive.html>`_ or the `plot directive <https://matplotlib.org/stable/api/sphinxext_plot_directive_api.html>`_. Many examples of each can be found throughout staircase documentation.
Also note that while American English is used for function names, much of the documentation is written in British English.
Building the documentation locally
*********************************************
The documentation has its own environment, separate from the rest of the project, and is specified by a requirements file located in the *docs* folder in the root of the project. To create the environment, navigate to the docs folder in a terminal window and run::
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
With the environment activated the documentation can then be built using either *make.bat* or *Makefile*, depending on your operating system. For example, in Powershell run::
make.bat html
to produce the static html documentation that you are currently reading. The resulting output can be found in the *_build* folder. Sometimes Sphinx will utilise existing artifacts in the _build folder and you may not see the changes you expect. To force a build from scratch use::
make.bat clean html
| 70.461538 | 570 | 0.770015 |
e8c777510c5bfe604718a8eb6b37c8fd602adfbc | 7,112 | rst | reStructuredText | README_CN.rst | senique/py-mysql2pgsql | b770ada4e8f44656edad7a8ce4191c4ff6f8a18e | [
"MIT"
] | 1 | 2018-03-23T06:28:22.000Z | 2018-03-23T06:28:22.000Z | README_CN.rst | senique/py-mysql2pgsql | b770ada4e8f44656edad7a8ce4191c4ff6f8a18e | [
"MIT"
] | null | null | null | README_CN.rst | senique/py-mysql2pgsql | b770ada4e8f44656edad7a8ce4191c4ff6f8a18e | [
"MIT"
] | null | null | null | ==============================
全量数据迁移_MySQL_2_PostgreSQL
==============================
支持版本: MySQL 5.x 迁移到PostgresSQL 8.2 or higher(含Greenplum Database base on PSQL)
致谢: `py-mysql2pgsql <https://github.com/philipsoutham/py-mysql2pgsql>`_ 所有的贡献者
版本修改简述:
1. 修改对表注释和表字段注释的处理,已经支持中文注释迁移;
2. 增加配置参数is_gpdb,设置对GPDB的特殊处理忽略(INDEXES(not PRIMARY KEY INDEXE), CONSTRAINTS, AND TRIGGERS);
3. 增加配置参数destination.postgres.sameschame,设置导入GPDB的schema与mysql.database设置相同(需提前创建);
4. 增加配置参数mysql.getdbinfo,如果为true,则只读取MySQL的数据库统计信息,不执行数据迁移操作;
5. 修改逻辑为单独处理comment,增加try/exception/else防止execute()异常导致后续脚本执行被终止;
6. 优化脚本文件输出(通过destination.file指定):表名和字段去除双引号(不带双引号,会自动转换为小写)及转换为小写,脚本名称和格式;
.. attention::
README_CN.rst(本中文说明)非英文原版说明的翻译(详细请参考 README.rst),只是使用简述。_
Linux环境下安装说明:见README.rst
====================
Windows环境下安装和使用说明:
============================
1. 安装Python 2.7 和如下依赖:
-----------------------------
* `psycopg2 for Windows <http://www.stickpeople.com/projects/python/win-psycopg/>`_
* `MySQL-python for Windows <http://www.codegood.com/archives/129>`_
2. clone代码到本地,比如D:\python\py-mysql2pgsql,执行如下命令安装和测试:
-------------------------------------------------------------------------
::
> cd D:\python\py-mysql2pgsql
> python setup.py install
> cd D:\python\py-mysql2pgsql\bin
> python py-mysql2pgsql -h
3. 参照说明编写'.yml'文件,或者直接执行[4.]命令,如指定文件不存在则可创建指定名字的初始化配置文件,再作修改;
--------------------------------------------------------------------------------------------------
a .执行命令创建指定名字的初始化配置文件:
::
bin> python py-mysql2pgsql -v -f mysql2pgsql.yml
No configuration file found.
A new file has been initialized at: mysql2pgsql.yml
Please review the configuration and retry...
b .由于数据库结构差异:MySQL(**database**->table)与PostgreSQL(**database->schema**->table),配置database字段时,PostgreSQL需要通过 **冒号** 指定schema(**database:schema**),否则会迁移到public模式下;
c .迁移前需要在新库(PostgreSQL)创建冒号指定的模式(schema),否则会报不存在;
d .读取MySQL表结构,当注释有乱码时,可能导致报错,需要将报错的表注释更新正确,然后再迁移数据;
e .其他参数配置:
- destination.file: 指定输出的postgres脚本文件,如设置则只生成脚本,不执行数据迁移操作;
- destination.postgres.sameschame: true-导入GPDB的schema指定为mysql.database;
- mysql.getdbinfo: true-只读取MySQL的数据库统计信息,不执行数据迁移操作;
- only_tables: 指定迁移的table(必须换行减号加空格缩进列出表名),不指定则迁移全部;
- exclude_tables:指定排除的table(必须换行减号加空格缩进列出表名),不指定则不排除;
- supress_data: true-只迁移模式(包含表结构),默认false;
- supress_ddl: true-只迁移数据,默认false;如果只全量同步数据,同时force_truncate也应该需要为true;
- force_truncate: true-迁移数据前,对目标表执行truncate操作,默认false;
- timezone: true-转换时间,默认false;
- index_prefix: 指定索引前缀,默认为空;
- is_gpdb: true-GPDB的特殊性,需要忽略INDEXES(not PRIMARY KEY INDEXE), CONSTRAINTS, AND TRIGGERS,默认false;
f .使用drop+data(即supress_data: false;supress_ddl: false;),会删除引用的视图:
如不删除视图,则考虑使用其他方式处理【表结构变化】,然后使用truncate+only data同步数据;
g .使用truncate+only data(即force_truncate: true;supress_ddl: true;),可能会报错(如下):**事务提交逻辑更新(见 postgres_writer.py 函数 execute)**
File "/usr/lib/python2.7/site-packages/py_mysql2pgsql-0.1.6-py2.7.egg/mysql2pgsql/lib/postgres_db_writer.py", line 216, in write_contents
self.copy_from(f, '"%s"' % table.name, ['"%s"' % c['name'] for c in table.columns])
File "/usr/lib/python2.7/site-packages/py_mysql2pgsql-0.1.6-py2.7.egg/mysql2pgsql/lib/postgres_db_writer.py", line 121, in copy_from
columns=columns
psycopg2.InternalError: current transaction is aborted, commands ignored until end of transaction block
4. 执行命令迁移数据:
--------------------
::
> cd D:\python\py-mysql2pgsql\bin
> python py-mysql2pgsql -v -f mysql2pgsql.yml
同步方式说明(建议指定表,即:only_tables):
- 基本配置(其他配置见后面说明)
::
#基础配置
mysql:
...
#指定需要同步的库(模式),可以多个,用逗号分隔
database: vm_cs
compress: false
getdbinfo: false
destination:
postgres:
...
database: bizdata:test_new
sameschame: true
#指定需要同步的表
only_tables:
- table1
# 其他配置
supress_data: false
supress_ddl: false
force_truncate: false
timezone: false
is_gpdb: true
- 同步结构和数据(删表重建,加同步数据)
::
# 其他配置同'基本配置'
supress_data: false
supress_ddl: false
force_truncate: false
- 只同步结构(删表重建)
::
# 其他配置同'基本配置'
supress_data: true
supress_ddl: false
force_truncate: false
- 只同步数据(truncate+同步数据)
- 异常1:表结构(字段数量或者顺序)不一致, 需要重新同步表结构,或者手工处理表结构;
::
# 其他配置同'基本配置'
supress_data: false
supress_ddl: true
force_truncate: true
- 只统计源数据库的数据量
::
# 其他配置同'基本配置'
mysql:
...
getdbinfo: true
5. 数据库统计信息说明(输出到文件:_database_sync_info.txt):
--------
::
> ########################################
> ##TOTAL Database Rows:[迁移的总数据量]##
> ########################################
> ##Process Time:迁移数据执行时间 s.##
>
> DATABASE SATISTICS INFO:
> 数据库名(或模式):单个库总数据量|TOTAL
> 表名:单个表数据量
>
> test_db:8|TOTAL
> test_inc:6
> test_primary_error:2
>
> INDEXES, CONSTRAINTS, AND TRIGGERS DETAIL:
> 导入数据库名:导入模式名
> 操作信息(create/ignore): 表名|字段名(备注信息)
>
> mydb:test_db
> create index: test_inc|id|PRIMARY
> create index: test_primary_error|code|PRIMARY
> ignore index: test_primary_error|code
6. 注意:
--------
* 不支持MySQL空间数据类型(**Spatial Data Types**);
* 由于Greenplum Database(base on PSQL)对 **UNIQUE Index** 的特殊处理,迁移unique index可能会报错。介于GPDB特殊性,迁移时建议忽略除主键外的其他约束(主键,约束和触发器)。即 *不创建任何索引的情况下测试下性能,而后再做出正确的决定。* 详情如下:
* `Greenplum Database does not allow having both PRIMARY KEY and UNIQUE constraints <https://stackoverflow.com/questions/40987460/how-should-i-deal-with-my-unique-constraints-during-my-data-migration-from-postg>`_
* `EXCERPT:CREATE_INDEX <http://gpdb.docs.pivotal.io/4320/ref_guide/sql_commands/CREATE_INDEX.html>`_
::
In Greenplum Database, unique indexes are allowed only if the columns of the index key are the same as
(or a superset of) the Greenplum distribution key. On partitioned tables, a unique index is only supported
within an individual partition - not across all partitions
* **SHOW TABLE STATUS;** 结果说明:Rows-行数:对于非事务性表(如MyISAM),这个值是精确的;但对于事务性引擎(如InnoDB),这个值通常是估算的,与实际值相差可达40到50%。对于INFORMATION_SCHEMA中的表,Rows值为NULL。所以替换方案是使用 **SELECT COUNT(\*)** 获取准确的数据。详情如下:
* `why-is-innodbs-show-table-status-so-unreliable <https://stackoverflow.com/questions/8624408/why-is-innodbs-show-table-status-so-unreliable>`_
* `EXCERPT:INNODB-RESTRICTIONS <https://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html>`_
::
The official MySQL 5.1 documentation acknowledges that InnoDB does not give accurate statistics with SHOW
TABLE STATUS. Whereas MYISAM tables specifically keep an internal cache of meta-data such as number of rows
etc, the InnoDB engine stores both table data and indexes in */var/lib/mysql/ibdata**
Inconsistent table row numbers are reported by SHOW TABLE STATUS because InnoDB dynamically estimates the
'Rows' value by sampling a range of the table data (in */var/lib/mysql/ibdata**) and then extrapolates the
approximate number of rows.So much so that the InnoDB documentation acknowledges row number inaccuracy of
up to 50% when using SHOW TABLE STATUS.
So use SELECT COUNT(*) FROM TABLE_NAME.
| 31.192982 | 215 | 0.686727 |
62ccfce60fac28d753f96560a81cf5afd5e65594 | 10,514 | rst | reStructuredText | doc/TaskLibrary.rst | tschoonj/cgat-daisy | f85a2c82ca04f352aad00660cfc14a9aa6773168 | [
"MIT"
] | 1 | 2020-06-29T14:39:42.000Z | 2020-06-29T14:39:42.000Z | doc/TaskLibrary.rst | tschoonj/cgat-daisy | f85a2c82ca04f352aad00660cfc14a9aa6773168 | [
"MIT"
] | 1 | 2019-05-15T20:50:37.000Z | 2019-05-15T20:50:37.000Z | doc/TaskLibrary.rst | tschoonj/cgat-daisy | f85a2c82ca04f352aad00660cfc14a9aa6773168 | [
"MIT"
] | 1 | 2021-11-11T13:22:56.000Z | 2021-11-11T13:22:56.000Z | .. _tasklibrary:
================
Task Library
================
Tasks are python functors and are defined in the TaskLibrary
sub-package within the Benchmark distribution. Functors follow
the naming convention
``run_tool_xyz``
for a task running a tool called xyz
``run_metric_xyz``
for a metric running a tool called xyz
Tools and metrics are auto-discovered by the package, hence the
requirement of a naming convention.
Tools and metrics are organised in a class hierarchy that is rooted at
the class :class:`.Runner`. Immediately derived from these are
:class:`.ToolRunner` and :class:`.MetricRunner`. User defined tasks
should be derived from the latter these.
The base classes take care of interfacing with the benchmark system
such as collecting dependencies and runtime statistics. A tool- or
metric-runner should implement a run() method. The principal
difference between a tool and metric is the call interface for the
run() method.
A :term:`tool` expects two arguments outfile and params, which
are the output file name and a parameter dictionary. The input file names
are part of the parameter dictionary and can be referred to by
name. The expected names are listed in a class attribute, for
example::
    class run_tool_listdir(ToolRunner):

        expected = ["directory"]
        path = "ls"

        def run(self, outfile, params):
            return P.run("{params.path} -1 {params.directory} "
                         "> {outfile}".format(**locals()))
A :term:`metric` expects three arguments, outfile and params as
before, but also an input file name, for example::
    class run_metric_count(MetricRunner):

        path = "wc"

        def run(self, infile, outfile, params):
            return P.run("{params.path} -l {infile} "
                         "> {outfile}".format(**locals()))
The distinction exists because the inputs for tools are files outside
the workflow and static, while the inputs for a metric are files created
within the workflow and thus dynamically generated.
Both :term:`metric` and :term:`tool` should return the value of the
:meth:`.run` call, which contains runtime information such as the time
to execute the command and the host name.
Parameterisation of tasks
================================
A task can be parameterised by defining member variables
in the task runner::
    class run_metric_count(MetricRunner):

        path = "wc"
        counting_method = "lines"

        def run(self, infile, outfile, params):
            if params.counting_method == "lines":
                opt = "-l"
            elif params.counting_method == "chars":
                opt = "-c"
            else:
                raise ValueError("unknown counting method")
            return P.run("{params.path} {opt} {infile} "
                         "> {outfile}".format(**locals()))
Note how the method refers to the options through the ``params``
variable. This is necessary as the class attributes are shared between
all tasks, while the params contain the options specific to a
task. Using ``params`` permits setting task specific parameters for
a tool in the configuration file.
A parameter called ``options`` is always defined and can be used
directly. This is for generic command-line options. The example above
could thus simply be::
    class run_metric_count(MetricRunner):

        path = "wc"
        counting_method = "lines"

        def run(self, infile, outfile, params):
            return P.run("{params.path} {params.options} {infile} "
                         "> {outfile}".format(**locals()))
It is generally preferred to keep the number of explicit options to a
minimum and rather let the user set command line options for
the specific tool.
A use case where additional options are useful is when the command-line
statement takes a different shape depending on the options. For
example, a metric might offer pre-filters for computing the metric. In
this case it is important to remember that tasks are functors and thus
should not contain state information that is specific to a task.
Additional topics to be written:
* specifying multi-threaded execution
* pre-processing
* database interface
Data upload
===========
The function of a metric task is to run a metric on some data and
output the data in well-formed, tab-separated tables. The benchmark
system then takes care of uploading the data into the database. For
general notes about the database and how data are organized, see
:ref:`data_organization`.
Standard case
-------------
The simplest metric outputs a single table into ``outfile``. In the
example above::
    class run_metric_count(MetricRunner):

        path = "wc"
        counting_method = "lines"

        def run(self, infile, outfile, params):
            return P.run("{params.path} {params.options} {infile} "
                         "> {outfile}".format(**locals()))
``outfile`` will be created in the appropriate location and could look
like this::
>cat tool.dir/count.dir/count.tsv
word count
hello 1
world 1
This table on file will automatically be uploaded into a database
table called ``count``::
> SELECT * FROM count;
word count instance_id
hello 1 5
world 1 5
Note how the column :term:`instance_id` has been added to link the
results to a specific :term:`instance`.
Multiple outfiles
-----------------
Frequently, a metric might output multiple tables. To register these
into the system, define them as separate tablenames::
    class run_metric_count(MetricRunner):

        path = "wc"
        counting_method = "lines"
        tablenames = ["count_lines", "count_words"]

        def run(self, infile, outfile, params):
            retvals = P.run("{params.path} --count-words {params.options} {infile} "
                            "> {outfile}.count_words.tsv".format(**locals()))
            retvals.extend(P.run("{params.path} --count-lines {params.options} {infile} "
                                 "> {outfile}.count_lines.tsv".format(**locals())))
            return retvals
Note how the names of the tables (e.g. ``count_lines``) imply the names
of the output files
(:file:`tool.dir/count.dir/count.tsv.count_lines.tsv`).
.. note::
If the naming convention is not followed, existing tables will not
be picked up.
A missing output file is ignored. This accommodates metrics that create
multiple output files, some of which are optional.
Transforming data before upload
-------------------------------
Before uploading, the Benchmark system can apply some basic table transformations.
These are registered in the task function.
.. glossary::
upload_melted
melt the table before uploading. This is important for metrics
that output a variable number of columns, for example, if a tool
outputs one column per sample. The argument is a dictionary
mapping the table name to parameters in the `pandas melt
<http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html>`_
function. For example::
upload_melted = {
"benchmark_vcf_stats_format_unset_samples":
{"id_vars": ["FORMAT"],
"var_name": "sample",
"value_name": "unset"},
}
upload_transpose
transpose a table before uploading. The argument is a list of tables
to transpose::
upload_transpose = ["count_lines"]
upload_separate
upload each data point into a separate table. By default and design,
data from the same metric are stored in the same table. Using this option,
each run will create a separate table which is the name of the metric
suffixed with the :term:`instance_id`. Use sparingly, as a large number
of tables will clutter the database. The argument to this option is
a list of tables to upload separately::
upload_separate = ["count_lines"]
upload_normalize
normalize a column in a table. This is useful if the table contains
one or more columns with categorical data. To save disk space, all the
categorical data will be replaced by integer levels and the mapping
between level and values will be stored in a separate table. The argument
to this option is a dictionary of tables and a list of columns that
should be stored as factors::
upload_normalize = {"count_words": ["word"]}
In our example, this will create an additional table
``count_factors``. Note that factor values are only consistent
within an instance, but not across instances or
experiments. Multiple columns will be normalized together as a
combination of values.
upload_as_matrix
upload a table as a matrix. The argument is a dictionary of
tables. Optionally, a group key can be specified::
upload_as_matrix = {
"benchmark_vcf_stats_format_per_sample": "FORMAT",
}
This transformation assumes that the resulting data is uniform,
i.e. that all columns have the same type. Uploading as a matrix
is advisable for data that have data dependent labels in both
rows and columns. Melting such data will result in a massively
inflated data size that needs to be stored. The resulting table
will contain the fields ``instance_id``, ``rows``, ``columns``,
``data`` and ``dtype`` (data type), where rows and columns are
``,`` separated lists of values.
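Putting several of these options together, a metric might be configured as in the sketch below. This is a stand-alone illustration only: the stub class stands in for the real :class:`.MetricRunner` base class, and ``run_metric_wordstats`` is an invented metric, not part of the task library.

```python
class MetricRunner:
    """Stub standing in for daisy.tasks.MetricRunner (illustration only)."""
    pass


class run_metric_wordstats(MetricRunner):
    # two output tables, written by run() to
    # <outfile>.count_lines.tsv and <outfile>.count_words.tsv
    tablenames = ["count_lines", "count_words"]

    # store the categorical "word" column of count_words as integer
    # levels, with the level/value mapping kept in a separate factor table
    upload_normalize = {"count_words": ["word"]}

    # transpose count_lines before uploading
    upload_transpose = ["count_lines"]

    def run(self, infile, outfile, params):
        # a real metric would build and execute its command here via P.run()
        pass
```

The upload options are plain class attributes, so they are declared once per metric and shared by all of its instances.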
Contents of the task library
================================
The inheritance diagram of the TaskLibrary is below:
.. inheritance-diagram::
daisy.tasks.Runner
daisy.tasks.ToolRunner
daisy.tasks.MetricRunner
daisy.tasks.BAMTools
daisy.tasks.BAMMetrics
daisy.tasks.VariantCallers
daisy.tasks.VCFTools
daisy.tasks.VCFMetrics
Task library methods
------------------------------------------------
This section lists modules containing tool runners and metric runners.
.. automodule:: daisy.tasks.BAMTools
:undoc-members:
:members:
.. automodule:: daisy.tasks.BAMMetrics
:undoc-members:
:members:
.. automodule:: daisy.tasks.VCFMetrics
:undoc-members:
:members:
.. automodule:: daisy.tasks.VariantCallers
:undoc-members:
:members:
The task library engine
===========================
This section lists modules that are part of the task library engine
.. automodule:: daisy.tasks.Runner
:undoc-members:
:members:
.. automodule:: daisy.tasks.MetricRunner
:undoc-members:
:members:
.. automodule:: daisy.tasks.ToolRunner
:undoc-members:
:members:
| 32.251534 | 88 | 0.688225 |
4a4581932db9ebd8bcd3f9026e0aeaead855b181 | 2,671 | rst | reStructuredText | docs/index.rst | GEUZE/Hyperloop | 2cb62cc1da39e7e4b25e7b709d916d68c7b5671c | [
"Apache-2.0"
] | 21 | 2015-01-20T16:04:20.000Z | 2022-01-12T13:08:26.000Z | docs/index.rst | uwhl/Hyperloop | b00a1a6570e1c3d94b3e0ce95bad75892eb6caec | [
"Apache-2.0"
] | null | null | null | docs/index.rst | uwhl/Hyperloop | b00a1a6570e1c3d94b3e0ce95bad75892eb6caec | [
"Apache-2.0"
] | 11 | 2015-08-03T17:55:12.000Z | 2022-02-09T17:45:03.000Z | =======================
Introduction
=======================
This is the documentation for a hyperloop optimization framework, which can be found
in this `github repository`__.
.. __: https://github.com/OpenMDAO-Plugins/Hyperloop
The hyperloop concept is a proposed transportation system that could offer lower costs and
travel times relative to California's current high speed rail project. The design consists
of a passenger pod traveling in a tube under light vacuum at near sonic speeds. Propulsion
is provided via linear accelerators mounted on the tube itself and the pod rides on a set
of air bearings. The speed of the capsule is limited by the ability of air to escape around
the outside of the vehicle, therefore a compression system is used to draw air through the
vehicle. Compressing the air allows the capsule to reach higher speeds and also provides
the air bearings with the necessary pressure. Many of these different sub-systems interact
with each other and an effective hyperloop system will need to balance multiple trade-offs.
.. figure:: images/hyperloop.png
:align: center
:width: 800 px
:alt: hyperloop re-designed inlet
Calculated baseline hyperloop dimensions for a capsule speed of Mach 0.8, rendered in OpenCSM a parametric solid modeler.
We propose the design of the hyperloop should be taken from a systems perspective with
the dual objectives of minimizing ticket cost and minimizing travel time. In order to achieve
this goal we propose a top down design approach where the designs of all components
are optimized simultaneously with respect to the overall system goals.
An overarching framework is needed to orchestrate the interaction between models of
various subsystems and perform the necessary optimization. This code base contains a hyperloop
system model built by a handful of engineers and computer scientists as an `OpenMDAO.`__
plugin. The intention is to provide this code as a baseline for further public
contribution to support an open source hyperloop design. Interested parties should feel free
to modify the code as they see fit and improve the models in areas where they have expertise.
.. __: http://openmdao.org/
================
Contents
================
.. toctree::
:maxdepth: 2
io
usage
baseline
future
contribute
modeling
srcdocs
pkgdocs
The hyperloop github repository and installation instructions can be found here: `https://github.com/OpenMDAO-Plugins/Hyperloop`__.
.. __: https://github.com/OpenMDAO-Plugins/Hyperloop
The original hyperloop proposal can be found here: `Hyperloop-Alpha`__.
.. __: http://www.spacex.com/hyperloop
fastuuid
========
.. image:: https://travis-ci.com/thedrow/fastuuid.svg?branch=master
:target: https://travis-ci.com/thedrow/fastuuid
FastUUID is a library which provides CPython bindings to Rust's UUID library.
The provided API is exactly as Python's builtin UUID class.
It supports Python 3.5, 3.6 and 3.7.
Why?
----
It is much faster than Python's pure-python implementation and it is stricter
when parsing the hexadecimal representation of UUIDs.
If you need to generate a lot of random UUIDs we also provide the `uuid4_bulk()`
function which releases the GIL for the entire duration of the generation.
This allows other threads to run while the library generates UUIDs.
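Since the API mirrors the standard library's ``uuid`` module, porting code is a one-line import change. A minimal sketch, shown with the stdlib module so it runs anywhere; with fastuuid you would import the same names from ``fastuuid``, and the pure-Python ``uuid4_bulk`` stand-in below only illustrates the call shape (the real one generates the batch in Rust with the GIL released):

.. code-block:: python

    import uuid

    # Generate a single random (version 4) UUID; the call is identical in fastuuid.
    u = uuid.uuid4()
    assert u.version == 4

    # Round-trip through the canonical hex string, as both libraries support.
    assert uuid.UUID(str(u)) == u

    # Pure-Python stand-in for fastuuid's uuid4_bulk(n), for illustration only.
    def uuid4_bulk(n):
        return [uuid.uuid4() for _ in range(n)]

    batch = uuid4_bulk(1000)
    print(len(batch), len(set(batch)))  # prints "1000 1000"; collisions are astronomically unlikely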
Benchmarks
----------
=========== ========= ================= ======================= =============================== ================ ==================== ================= ======== =========================================
processor machine python compiler python implementation python implementation version python version python build release system cpu
=========== ========= ================= ======================= =============================== ================ ==================== ================= ======== =========================================
x86_64 x86_64 GCC 5.5.0 CPython 3.7.2 3.7.2 default 4.15.0-50-generic Linux Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
=========== ========= ================= ======================= =============================== ================ ==================== ================= ======== =========================================
======================================================= ====================== ====================== ====================== ====================== ====================== ====================== ========== ================== ====== ==========
name min max mean stddev median iqr outliers ops rounds iterations
======================================================= ====================== ====================== ====================== ====================== ====================== ====================== ========== ================== ====== ==========
tests/test_benchmarks.py::test_parse_bytes_fastuuid 8.770148269832134e-07 3.0054012313485146e-05 9.848993185755912e-07 6.654121944953314e-07 9.530049283057451e-07 2.6979250833392143e-08 515;8080 1015332.2082162144 149366 1
tests/test_benchmarks.py::test_parse_bytes_uuid 9.00006853044033e-07 2.4181994376704097e-05 1.0102117337399724e-06 6.361040394445994e-07 9.739887900650501e-07 3.899913281202316e-08 1130;10702 989891.4916557473 198020 1
tests/test_benchmarks.py::test_parse_bytes_le_fastuuid 9.00996383279562e-07 2.4662993382662535e-05 1.0116569599011118e-06 5.687526428398989e-07 9.840005077421665e-07 2.200249582529068e-08 703;9368 988477.3590622543 163052 1
tests/test_benchmarks.py::test_parse_bytes_le_uuid 1.348991645500064e-06 3.5200006095692515e-05 1.5184524591452776e-06 9.295692916442362e-07 1.448992406949401e-06 3.897002898156643e-08 1620;12511 658565.2346092485 170271 1
tests/test_benchmarks.py::test_parse_fields_fastuuid 9.819923434406519e-07 3.2625976018607616e-05 1.217285795660234e-06 1.0234898538816672e-06 1.087988493964076e-06 6.702612154185772e-08 3199;12487 821499.7690477591 143844 1
tests/test_benchmarks.py::test_parse_fields_uuid 1.1137977708131076e-06 0.000147809402551502 1.2054474234359692e-06 5.093104655522965e-07 1.144595444202423e-06 6.060581654310231e-08 2304;5896 829567.4954861335 167983 5
tests/test_benchmarks.py::test_parse_hex_fastuuid 9.870273061096668e-07 2.906599547713995e-05 1.11212962918218e-06 6.906885628642859e-07 1.0759977158159018e-06 3.0995579436421394e-08 577;8272 899175.7559191765 143288 1
tests/test_benchmarks.py::test_parse_hex_uuid 1.3360113371163607e-06 2.6262016035616398e-05 1.4448148991822913e-06 7.064083638385458e-07 1.3989920262247324e-06 2.9016518965363503e-08 679;4802 692130.1826039868 82156 1
tests/test_benchmarks.py::test_parse_int_uuid 5.448004230856896e-07 4.164349229540676e-06 6.099919819231937e-07 2.0401652680352933e-07 5.548994522541762e-07 4.430039552971725e-08 3607;3925 1639365.8107557097 87951 20
tests/test_benchmarks.py::test_parse_int_fastuuid 8.950009942054749e-07 4.946498665958643e-05 1.0105578493921953e-06 6.873330198387691e-07 9.739887900650501e-07 2.1012965589761734e-08 529;12534 989552.4542226401 176088 1
tests/test_benchmarks.py::test_fast_uuidv3 5.410998710431158e-07 3.5570512409321965e-06 5.971385425220447e-07 1.672736409563351e-07 5.526497261598707e-07 2.949964255094524e-08 4865;6332 1674653.248434526 83508 20
tests/test_benchmarks.py::test_uuidv3 3.6269775591790676e-06 4.193797940388322e-05 3.933511159797234e-06 1.4521217506191846e-06 3.782013664022088e-06 6.00120984017849e-08 548;4193 254225.79455743768 53582 1
tests/test_benchmarks.py::test_fast_uuidv4 1.47343598655425e-07 2.069187758024782e-06 1.6777362874701377e-07 7.169360028617447e-08 1.5453133528353646e-07 8.188180800061673e-09 6101;11550 5960412.297619802 198413 32
tests/test_benchmarks.py::test_uuidv4 2.275977749377489e-06 5.939402035437524e-05 2.5699563458422217e-06 1.316784132061215e-06 2.38200300373137e-06 1.309963408857584e-07 2068;5815 389111.667837409 85610 1
tests/test_benchmarks.py::test_fast_uuidv4_bulk_threads 0.0009843519947025925 0.007268004992511123 0.0014418828965801719 0.0007545185495019851 0.0012059269938617945 0.0003288870066171512 42;54 693.5375975204223 549 1
tests/test_benchmarks.py::test_fast_uuidv4_threads 0.0030693279986735433 0.008087011985480785 0.004009611603774935 0.000715605913448762 0.0038650799833703786 0.0006588477554032579 53;19 249.40071478707026 273 1
tests/test_benchmarks.py::test_uuidv4_threads 0.030999513022834435 0.06895541000994854 0.040025271589084616 0.009975862168373506 0.036475206492468715 0.008713199000339955 3;2 24.98421522947798 22 1
tests/test_benchmarks.py::test_fast_uuidv5 5.316498572938144e-07 4.090600123163313e-06 5.890041556925782e-07 1.8620985914996815e-07 5.419497028924525e-07 2.9799412004649576e-08 3998;6415 1697780.8905680121 88921 20
tests/test_benchmarks.py::test_uuidv5 3.7190038710832596e-06 5.8079982409253716e-05 4.403547300216035e-06 2.439066121654033e-06 3.910012310370803e-06 2.169981598854065e-07 2283;4139 227089.64655629804 57383 1
======================================================= ====================== ====================== ====================== ====================== ====================== ====================== ========== ================== ====== ==========
Run them yourself to verify.
What's Missing?
---------------
- UUIDv1 generation
- Pickle support
PRs are welcome.
| 113.363636 | 251 | 0.59289 |
e1ad345eea207a529cf44ac516aa19b1fefd1a88 | 7,915 | rst | reStructuredText | README.rst | audiolion/django-shibauth-rit | b2db5bb998b6efe8ac09b892d30370630075c816 | [
"MIT"
] | 4 | 2017-02-17T02:11:50.000Z | 2018-05-29T13:07:26.000Z | README.rst | audiolion/django-shibauth-rit | b2db5bb998b6efe8ac09b892d30370630075c816 | [
"MIT"
] | null | null | null | README.rst | audiolion/django-shibauth-rit | b2db5bb998b6efe8ac09b892d30370630075c816 | [
"MIT"
] | null | null | null | =============================
Django Shib Auth RIT
=============================
.. image:: https://badge.fury.io/py/django-shibauth-rit.svg
:target: https://badge.fury.io/py/django-shibauth-rit
.. image:: https://travis-ci.org/audiolion/django-shibauth-rit.svg?branch=master
:target: https://travis-ci.org/audiolion/django-shibauth-rit
.. image:: https://codecov.io/gh/audiolion/django-shibauth-rit/branch/master/graph/badge.svg
:target: https://codecov.io/gh/audiolion/django-shibauth-rit
Integrate Shibboleth Authentication with your RIT projects
Quickstart
----------
Install Django Shib Auth RIT::
pip install django-shibauth-rit
Add it to your `INSTALLED_APPS`:
.. code-block:: python
INSTALLED_APPS = (
...
'shibauth_rit',
...
)
Add the authentication backend:
.. code-block:: python
AUTHENTICATION_BACKENDS = [
'shibauth_rit.backends.ShibauthRitBackend',
...
]
Add the middleware to process requests:
.. code-block:: python
# use MIDDLEWARE_CLASSES on Django 1.8
MIDDLEWARE = (
...
'django.contrib.auth.middleware.AuthenticationMiddleware',
'shibauth_rit.middleware.ShibauthRitMiddleware',
...
)
Add Django Shib Auth RIT's URL patterns:
.. code-block:: python
urlpatterns = [
...
url(r'^', include('shibauth_rit.urls')),
...
]
Set the `LOGIN_URL` setting to the login handler of RIT's Shibboleth installation:
.. code-block:: python
LOGIN_URL = 'https://<your-site-root>/Shibboleth.sso/Login'
Map Shibboleth's return attributes to your user model:
.. code-block:: python
SHIBAUTH_ATTRIBUTE_MAP = {
'uid': (True, 'username'),
'mail': (False, 'email'),
}
Shibboleth returns a number of attributes after a successful authentication. According to RIT's
docs the current attributes returned are:
.. code-block:: text
uid - the user's RIT username
givenName - the user's given (first) name
sn -the user's surname (last/family name)
mail - the user's email address (note that this can be null)
ritEduMemberOfUid - groups the account is a member of (Ex: forklift-operators, vendingmach-admins, historyintegrator, etc.)
ritEduAffiliation - multi-valued attribute showing relationship to RIT (Ex: Student, Staff, StudentWorker, Adjunct, Retiree etc.)
Note: Additional attributes can be configured on a site-by-site basis. Please contact the ITS Service Desk with requests for additional attributes.
When you map attributes, you use a tuple of ``(Boolean, 'UserModelField')`` where ``Boolean`` indicates whether the field is ``REQUIRED``. This should match your
User model's requirements. If your User model is as follows:
.. code-block:: python
class User(AbstractBaseUser, PermissionsMixin):
USERNAME_FIELD = 'email'
EMAIL_FIELD = 'email'
email = models.EmailField(_('email address'), unique=True, blank=True, null=True)
    username = models.CharField(_('username'), unique=True, max_length=50)
name = models.CharField(_('Name of User'), blank=True, max_length=100)
Then ``username`` is a required attribute and should be ``'uid': (True, 'username')`` but email is not
required and should be ``'mail': (False, 'email')``.
Note: If email is a required field on your model, Shibboleth doesn't guarantee that `mail` will be populated, so you will need to handle that exception. You can do this by subclassing `ShibauthRitMiddleware` and overriding the ``handle_parse_exception()`` method. See `Subclassing ShibauthRitMiddleware`_ .
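As an illustration, the required/optional flags in the attribute map boil down to the following parsing logic. This is a framework-free sketch of the idea, not the package's actual code; the sample headers and the helper name ``parse_attributes`` are made up for the example:

.. code-block:: python

    ATTRIBUTE_MAP = {
        'uid': (True, 'username'),   # required
        'mail': (False, 'email'),    # optional; RIT may return no mail attribute
    }

    def parse_attributes(headers, attribute_map):
        """Map Shibboleth attributes onto user-model fields, collecting missing required ones."""
        shib_meta, missing = {}, []
        for attr, (required, field) in attribute_map.items():
            value = headers.get(attr)
            if value:
                shib_meta[field] = value
            elif required:
                missing.append(attr)
        return shib_meta, missing

    meta, missing = parse_attributes({'uid': 'abc1234'}, ATTRIBUTE_MAP)
    print(meta, missing)  # prints "{'username': 'abc1234'} []"

A real ``handle_parse_exception()`` override could raise ``ShibauthRitValidationException`` whenever such a ``missing`` list is non-empty.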
.htaccess Setup
---------------
This package requires your site to be hosted on RIT's servers. The .htaccess should look like this
.. code-block:: apache
# Ensure https is on. required for shibboleth auth
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST} [R,L]
# Two options, lazy loading where people do not need to authenticate to get to your site
<If "%{HTTPS} == 'on'">
SSLRequireSSL
AuthType shibboleth
Require shibboleth
ShibRequestSetting requireSession false
ShibRedirectToSSL 443
</If>
# Or no lazy loading, strict requirement of shib authentication before accesing site
<If "%{HTTPS} == 'on'">
SSLRequireSSL
AuthType shibboleth
ShibRequireSession On
require valid-user
# see https://www.rit.edu/webdev/authenticating-and-authorizing-rit-users for other require options
</If>
This sets up some stuff with the Apache webserver so when people go to ``https://<your-site-root>/Shibboleth.sso/Login`` it initiates the redirect to RIT's Shibboleth logon. Don't put a url route there, though I think Apache would always pick it up before it got to your code, might as well not mess with it.
Context Processors
------------------
There are two context processors included which allow you to place `{{ login_link }}` or `{{ logout_link }}` in your templates for routing users to the login or logout page. These are available as a convenience and are not required. To activate, add the following to your settings:
.. code-block:: python
TEMPLATES = [
{
...
'OPTIONS': {
'context_processors': [
...
'shibauth_rit.context_processors.login_link',
'shibauth_rit.context_processors.logout_link',
...
],
},
...
},
]
Subclassing ShibauthRitMiddleware
---------------------------------
ShibauthRitMiddleware has a few hooks that you can utilize to get customized behavior. To use these create a ``middleware.py`` file and add the following:
.. code-block:: python
from shibauth_rit.middleware import ShibauthRitMiddleware as middleware
from shibauth_rit.middleware import ShibauthRitValidationException
class ShibauthRitMiddleware(middleware):
def make_profile(self, user, shib_meta):
"""
This is here as a stub to allow subclassing of ShibauthRitMiddleware
to include a make_profile method that will create a Django user profile
from the Shib provided attributes. By default it does nothing.
"""
pass
def setup_session(self, request):
"""
If you want to add custom code to setup user sessions, you can extend this.
"""
pass
def handle_parse_exception(self, shib_meta):
"""
This is a stub method that can be subclassed to handle what should happen when a parse
exception occurs. If you raise ShibauthRitValidationException it will need to be caught
further up to prevent an internal server error (HTTP 500). An example of this would be if
you require an email address and RIT Shibboleth doesn't return one, what should you do?
"""
pass
Replace ``pass`` with any custom code you want to run. Then make sure to modify your ``MIDDLEWARE`` or ``MIDDLEWARE_CLASSES`` setting to include the path to your custom middleware in place of this package's.
.. code-block:: python
MIDDLEWARE = (
...
    'yourapp.middleware.ShibauthRitMiddleware',
...
)
Running Tests
-------------
To do a simple test run with your current config
.. code-block:: bash
$ python runtests.py
To comprehensively test the suite across versions of python and django
.. code-block:: bash
source <YOURVIRTUALENV>/bin/activate
(myenv) $ pip install tox
(myenv) $ tox
Credits
-------
Tools used in rendering this package:
* Cookiecutter_
* `cookiecutter-djangopackage`_
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`cookiecutter-djangopackage`: https://github.com/pydanny/cookiecutter-djangopackage
.. _`Subclassing ShibauthRitMiddleware`: #subclassing-shibauthritmiddleware
***************
How Can I Help?
***************
* :ref:`Try it out <Get Started>` on your own python project!
Let us know about what does and doesn't work.
* Participate (or just lurk) in the `gitter chat`_.
* `File an issue`_.
* `Ask a question`_ on github.
* Make a pull request. Please read the
:ref:`contributing guidelines <contributing>`.
* Help me evangelize: share with your friends and coworkers.
If you think it's a neat project, star `the repo`_. ★
* `Subscribe`_ to email updates (`or by RSS`_).
Send me feedback about those at ``pschanely@gmail.com``.
.. _gitter chat: https://gitter.im/Cross_Hair/Lobby
.. _File an issue: https://github.com/pschanely/CrossHair/issues
.. _Ask a question: https://github.com/pschanely/CrossHair/discussions/new?category=q-a
.. _the repo: https://github.com/pschanely/CrossHair
.. _Subscribe: http://eepurl.com/hGTLRH
.. _or by RSS: https://pschanely.github.io/feed.xml
.. _phase5_era:
Getting a development environment with Era
==========================================
Now that we have a CI/CD pipeline doing our work with respect to building, pushing and deploying our Fiesta container, let's bring in the database manipulation as well.
For this part of the workshop we are going to do the following:
- Check the registration the deployed MariaDB in Era
- Get the API calls for Clone the Production environment MariaDB database server if it doesn't exist
- Create a developer's version of the runapp.sh script
- If a clone of the production database doesn't exist, create a clone of the database
.. note::
Estimated time **45-60 minutes**
Check Era version
-----------------
As we are using Era 2.0, please make sure the Era version is 2.0.0.2; the API calls used in this part of the workshop depend on it. If it is not, please upgrade to this version. If you are already on this version, you can proceed.
Check MariaDB registration in Era
---------------------------------
The blueprint that has been deployed installs the VM, and also registers the MariaDB database server and the FiestaDB database with the Era instance you have running.
#. Open the Era instance in your cluster
#. Login using the username and password given
#. Click on **Dashboard -> Databases**
.. figure:: images/1.png
#. Click **Sources**. Your *Initials* **-FiestaDB** database should be registered and shown
.. figure:: images/2.png
Create a snapshot of the deployed MariaDB database
--------------------------------------------------
To be able to clone a Database and its Database Server we need to have a snapshot.
#. Open in your Era instance **Time Machines** from the dropdown menu
#. Select the radio button in front of your *Initials* **-MariaDB_TM** instance
#. Select **Actions -> Snapshot**
#. Type the name **First-Snapshot** and click the **Create** button
.. note::
Make 110% sure you have typed it in as mentioned in this step, otherwise the deployment of the Development container will not work later in the script!!!
.. figure:: images/2a.png
#. Click on **Operations** (via the drop down menu or by clicking in the top right hand corner)
#. Wait till the operation has finished (approx. 2 minutes)
Now that the snapshot is there we can proceed to the next step.
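Behind the Operations view, Era exposes the same progress information over ``GET /era/v0.9/operations/{id}``, which the dev script later in this module polls. The completion check reduces to two fields of the JSON reply; here is a sketch against sample payloads (field names as used by the Era v0.9 API in this workshop, where status ``5`` denotes success; the sample data is illustrative):

.. code-block:: python

    import json

    def operation_finished_ok(op_json):
        """True once an Era operation reports 100% complete with status 5 (success)."""
        op = json.loads(op_json)
        return op.get("percentageComplete") == "100" and op.get("status") == "5"

    in_progress = '{"percentageComplete": "40", "status": "1"}'
    done = '{"percentageComplete": "100", "status": "5"}'
    print(operation_finished_ok(in_progress), operation_finished_ok(done))  # prints "False True"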
Get the API to Clone the MariaDB database
-----------------------------------------
As we want the creation of the Fiesta dev environment to clone the production MariaDB server before we play with it, we need the corresponding Era API calls. This part of the module uses the Era UI to obtain them.
After we have the API calls we are going to use variables to set the correct values.
#. In your Era UI, click on **Time Machine**
#. Click the radio button in front of *Initials* **-FiestaDB_TM**
#. Click **Actions -> Snapshot** and call it **First-Snapshot**
#. Click on **Operations** (via the drop down menu or by clicking in the top right hand corner)
#. Wait till the snapshot operation has ended before moving forward
#. Return to the Time Machine, click the radio button in front of *Initials* **-FiestaDB_TM**
#. Click **Actions -> Create a Clone of MariaDB Instance**
.. figure:: images/3.png
#. Select the **First-Snapshot** as the snapshot to use and click **Next**
#. Provide the follow information in the fields
- **Database Server VM** - Create New Server
- **Database Server VM Name** - *Initials* -MariaDB_DEV_VM
- **Description** - (Optional) Dev clone from the *Initials* -FiestaDB
- **Compute Profile** - CUSTOM_EXTRA_SMALL
- **Network Profile** - Era_Managed_MariaDB
- For **Provide SSH Public Key Through**, select **Text** first and use the following key:
.. code-block:: SSH
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmhJS2RbHN0+Cz0ebCmpxBCT531ogxhxv8wHB+7Z1G0I77VnXfU+AA3x7u4gnjbZLeswrAyXk8Rn/wRMyJNAd7FTqrlJ0Imd4puWuE2c+pIlU8Bt8e6VSz2Pw6saBaECGc7BDDo0hPEeHbf0y0FEnY0eaG9MmWR+5SqlkepgRRKN8/ipHbi5AzsQudjZg29xra/NC/BHLAW/C+F0tE6/ghgtBKpRoj20x+7JlA/DJ/Ec3gU0AyYcvNWlhlR+qc83lXppeC1ie3eb9IDTVbCI/4dXHjdSbhTCRu0IwFIxPGK02BL5xOVTmxQyvCEOn5MSPI41YjJctUikFkMgOv2mlV root@centos
#. Click **Next**
#. Provide the following information:
- **Name** - *Initials*-FiestaDB_DEV
- **Description** - (Optional) Dev clone from the *Initials* -FiestaDB
- **New ROOT Password** - nutanix/4u
- **Database Parameter Profile** - DEFAULT_MARIADB_PARAMS
#. Then **DON'T CLICK THE CLONE BUTTON!!**, but click the **API Equivalent** button
.. figure:: images/4.png
#. Take a closer look at the curl command and especially at the JSON data being sent (left hand side of the screen)
#. The JSON data being sent to the Era server is full of installation-specific values:
- Era instance IP
- Era User Name
- Era Password
- Era ClusterUUID
- TimeMachineID
- SnapshotID
- vmName
- ComputeProfileID
- NetworkProfileID
- vm_name
- databaseParameterProfileID
#. Click the **Close** button and the **X** to close the Clone wizard.
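Each of those values is fetched the same way in the dev script later in this module: a ``curl`` GET against the Era v0.9 API, piped through ``jq`` to pick the ``id`` of the entity whose ``name`` matches. The jq filter ``.[] | select (.name==$name) .id`` amounts to this lookup, sketched in plain Python against an illustrative sample reply:

.. code-block:: python

    import json

    def lookup_id(response_text, name):
        """Return the id of the entity with the given name from an Era list reply."""
        for entity in json.loads(response_text):
            if entity.get("name") == name:
                return entity["id"]
        return None

    sample = '[{"name": "Era_Managed_MariaDB", "id": "net-123"}, {"name": "other", "id": "net-999"}]'
    print(lookup_id(sample, "Era_Managed_MariaDB"))  # prints "net-123"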
Now that we know how to get the API calls, we are going to change the deployment in our CI/CD pipeline so it runs these commands.
Changes for Drone
----------------
We need to tell drone to make a difference in the steps it needs to run.
#. In VC open the **.drone.yml** file
#. Copy and paste below content over the exiting content in the **.drone.yml** file
.. code-block:: yaml
kind: pipeline
name: default
clone:
skip_verify: true
steps:
- name: Build Image (Prod)
image: docker:latest
pull: if-not-exists
volumes:
- name: docker_sock
path: /var/run/docker.sock
commands:
- docker build -t fiesta_app:${DRONE_COMMIT_SHA:0:6} .
when:
branch:
- master
- name: Build Image (Dev)
image: docker:latest
pull: if-not-exists
volumes:
- name: docker_sock
path: /var/run/docker.sock
commands:
- docker build -t fiesta_app_dev:${DRONE_COMMIT_SHA:0:6} -f dockerfile-dev .
when:
branch:
- dev
- name: Test container (Prod)
image: fiesta_app:${DRONE_COMMIT_SHA:0:6}
pull: if-not-exists
environment:
USERNAME:
from_secret: dockerhub_username
PASSWORD:
from_secret: dockerhub_password
DB_SERVER:
from_secret: db_server_ip
DB_PASSWD:
from_secret: db_passwd
DB_USER:
from_secret: db_user
DB_TYPE:
from_secret: db_type
DB_NAME:
from_secret: db_name
commands:
- npm version
      - mysql -u$DB_USER -p$DB_PASSWD -h $DB_SERVER $DB_NAME -e "select * from Products;"
- if [ `echo $DB_PASSWD | grep "/" | wc -l` -gt 0 ]; then DB_PASSWD=$(echo "${DB_PASSWD//\//\\/}"); fi
- sed -i 's/REPLACE_DB_NAME/FiestaDB/g' /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_HOST_ADDRESS/$DB_SERVER/g" /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_DIALECT/$DB_TYPE/g" /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_USER_NAME/$DB_USER/g" /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_PASSWORD/$DB_PASSWD/g" /code/Fiesta/config/config.js
when:
branch:
- master
- name: Test container (Dev)
image: fiesta_app_dev:${DRONE_COMMIT_SHA:0:6}
pull: if-not-exists
environment:
USERNAME:
from_secret: dockerhub_username
PASSWORD:
from_secret: dockerhub_password
DB_SERVER:
from_secret: db_server_ip
DB_PASSWD:
from_secret: db_passwd
DB_USER:
from_secret: db_user
DB_TYPE:
from_secret: db_type
DB_NAME:
from_secret: db_name
commands:
- npm version
      - mysql -u$DB_USER -p$DB_PASSWD -h $DB_SERVER $DB_NAME -e "select * from Products;"
- if [ `echo $DB_PASSWD | grep "/" | wc -l` -gt 0 ]; then DB_PASSWD=$(echo "${DB_PASSWD//\//\\/}"); fi
- sed -i 's/REPLACE_DB_NAME/FiestaDB/g' /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_HOST_ADDRESS/$DB_SERVER/g" /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_DIALECT/$DB_TYPE/g" /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_USER_NAME/$DB_USER/g" /code/Fiesta/config/config.js
- sed -i "s/REPLACE_DB_PASSWORD/$DB_PASSWD/g" /code/Fiesta/config/config.js
when:
branch:
- dev
- name: Push to Dockerhub (Prod)
image: docker:latest
pull: if-not-exists
environment:
USERNAME:
from_secret: dockerhub_username
PASSWORD:
from_secret: dockerhub_password
volumes:
- name: docker_sock
path: /var/run/docker.sock
commands:
- docker login -u $USERNAME -p $PASSWORD
- docker image tag fiesta_app:${DRONE_COMMIT_SHA:0:6} $USERNAME/fiesta_app:latest
- docker image tag fiesta_app:${DRONE_COMMIT_SHA:0:6} $USERNAME/fiesta_app:${DRONE_COMMIT_SHA:0:6}
- docker push $USERNAME/fiesta_app:${DRONE_COMMIT_SHA:0:6}
- docker push $USERNAME/fiesta_app:latest
when:
branch:
- master
- name: Deploy Prod image
image: docker:latest
pull: if-not-exists
environment:
USERNAME:
from_secret: dockerhub_username
PASSWORD:
from_secret: dockerhub_password
DB_SERVER:
from_secret: db_server_ip
DB_PASSWD:
from_secret: db_passwd
DB_USER:
from_secret: db_user
DB_TYPE:
from_secret: db_type
DB_NAME:
from_secret: db_name
volumes:
- name: docker_sock
path: /var/run/docker.sock
commands:
- if [ `docker ps | grep fiesta_app | wc -l` -eq 1 ]; then echo "Stopping existing Docker Container...."; docker stop fiesta_app; else echo "Docker container has not been found..."; fi
- sleep 10
- docker run --name fiesta_app --rm -p 5000:3000 -d -e DB_SERVER=$DB_SERVER -e DB_USER=$DB_USER -e DB_TYPE=$DB_TYPE -e DB_PASSWD=$DB_PASSWD -e DB_NAME=$DB_NAME $USERNAME/fiesta_app:latest
when:
branch:
- master
- name: Deploy Dev image
image: docker:latest
pull: if-not-exists
environment:
USERNAME:
from_secret: dockerhub_username
PASSWORD:
from_secret: dockerhub_password
DB_SERVER:
from_secret: db_server_ip
DB_PASSWD:
from_secret: db_passwd
DB_USER:
from_secret: db_user
DB_TYPE:
from_secret: db_type
DB_NAME:
from_secret: db_name
ERA_IP:
from_secret: era_ip
ERA_USER:
from_secret: era_user
ERA_PASSWORD:
from_secret: era_password
INITIALS:
from_secret: initials
volumes:
- name: docker_sock
path: /var/run/docker.sock
commands:
- if [ `docker ps | grep fiesta_app_dev | wc -l` -eq 1 ]; then echo "Stopping existing Docker Container...."; docker stop fiesta_app_dev; else echo "Docker container has not been found..."; fi
- sleep 10
- docker run -d --rm --name fiesta_app_dev -p 5050:3000 -e DB_SERVER=$DB_SERVER -e DB_USER=$DB_USER -e DB_TYPE=$DB_TYPE -e DB_PASSWD=$DB_PASSWD -e DB_NAME=$DB_NAME -e initials=$INITIALS -e era_ip=$ERA_IP -e era_admin=$ERA_USER -e era_password=$ERA_PASSWORD fiesta_app_dev:${DRONE_COMMIT_SHA:0:6}
when:
branch:
- dev
volumes:
- name: docker_sock
host:
path: /var/run/docker.sock
The new **.drone.yml** file does a few things:
- Runs distinct steps based on the branch the push has been made on
- If the branch is dev, the steps differ from earlier runs as follows:
  - The built image is named **fiesta_app_dev**
  - A different dockerfile (**dockerfile-dev**) is used to build the image
  - The image is not pushed to Dockerhub
  - A container named **fiesta_app_dev** is started from the dev-built image on port **5050, not 5000**
#. Save, Commit and Push to Gitea.
#. This will fire a new build; since the push was made on the master branch, you should only see the steps marked **(Prod)**
.. figure:: images/7.png
Now that we know Drone is capable of selecting steps based on branches (in **.drone.yml** you see the **when: branch: - master/dev** conditions), we are going to use that.
Create a new branch in VC
-------------------------
As we are mimicking the full development of the application, we are going to create a new branch. This branch will be used to do a few things:
- Change the creation of the development container
- Run a different start script which will:
- Deploy a clone of the MariaDB server, if there is none
- Use the cloned MariaDB server and not the MariaDB production server for the development of our application
- Don't upload the container onto our DockerHub repo as it has no Production value
#. Open VC
#. Close all open files
#. Click in the bottom left corner on the text **master**
.. figure:: images/8.png
#. Then, in the message box that opens at the top of the screen, select **+ Create new branch...**
.. figure:: images/9.png
#. Type **dev** in the next message box and hit enter
This branch starts with all the same files the master branch (our original) had, but lets us develop our code independently.
Create development script version
---------------------------------
As we have seen in the previous steps, there are a lot of installation-dependent variables involved in cloning the MariaDB server you deployed with the blueprint.
To make your life easier we have already created the needed content for the files (apart from the Drone secrets, which we will set later).
#. Make sure you are in the **dev** branch.
.. figure:: images/10.png
#. Create a new file called **runapp-dev.sh**
#. Copy and paste the below content in the file
.. code-block:: bash
#!/bin/sh
# Install curl and jq package as we need it
apk add curl jq
# Function area
function waitloop {
op_answer="$1"
loop=$2
# Get the op_id from the task
op_id=$(echo $op_answer | jq '.operationId' | tr -d \")
# Checking on error. if we have received an error, show it and exit 1
if [[ -z $op_id ]]
then
echo "We have received an error message. The reply from the Era system has been "$op_answer" .."
exit 1
else
counter=1
# Checking routine to see that the registration in Era worked
while [[ $counter -le $loop ]]
do
ops_status=$(curl -k --silent https://${era_ip}/era/v0.9/operations/${op_id} -H 'Content-Type: application/json' --user $era_admin:$era_password | jq '.["percentageComplete"]' | tr -d \")
if [[ $ops_status == "100" ]]
then
ops_status=$(curl -k --silent https://${era_ip}/era/v0.9/operations/${op_id} -H 'Content-Type: application/json' --user $era_admin:$era_password | jq '.status' | tr -d \")
if [[ $ops_status == "5" ]]
then
        echo "Database and Database server have been registered in Era..."
break
else
echo "Database and Database server registration not correct. Please look at the Era GUI to find the reason..."
exit 1
fi
else
echo "Operation still in progress, it is at $ops_status %... Sleep for 30 seconds before retrying.. ($counter/$loop)"
sleep 30
fi
counter=$((counter+1))
done
if [[ $counter -ge $loop ]]
then
echo "We have tried for "$(expr $loop / 2)" minutes to register the MariaDB server and Database, but were not successful. Please look at the Era GUI to see if anything has happened..."
fi
fi
}
# Variables received as environment variables via the Drone secrets:
# era_ip, era_admin, era_password and initials
# Create VM-Name
vm_name_dev=$initials"-MariaDB_DEV-VM"
db_name_prod=$initials"-FiestaDB"
db_name_dev=$initials"-FiestaDB_DEV"
# Get the UUID of the Era server
era_uuid=$(curl -k --insecure --silent https://${era_ip}/era/v0.9/clusters -H 'Content-Type: application/json' --user $era_admin:$era_password | jq '.[].id' | tr -d \")
# Get the UUID of the network called Era_Managed_MariaDB
network_id=$(curl --silent -k "https://${era_ip}/era/v0.9/profiles?type=Network&name=Era_Managed_MariaDB" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq '.id' | tr -d \")
# Get the UUID for the ComputeProfile
compute_id=$(curl --silent -k "https://${era_ip}/era/v0.9/profiles?&type=Compute&name=CUSTOM_EXTRA_SMALL" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq '.id' | tr -d \")
# Get the UUID for the DatabaseParameter ID
db_param_id=$(curl --silent -k "https://${era_ip}/era/v0.9/profiles?engine=mariadb_database&name=DEFAULT_MARIADB_PARAMS" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq '.id' | tr -d \")
# Get the UUID of the timemachine
db_name_tm=$initials"-FiestaDB_TM"
tms_id=$(curl --silent -k "https://${era_ip}/era/v0.9/tms" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq --arg db_name_tm $db_name_tm '.[] | select (.name==$db_name_tm) .id' | tr -d \")
# Get the UUID of the First-Snapshot for the TMS we just found
snap_id=$(curl --silent -k "https://${era_ip}/era/v0.9/snapshots" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq --arg tms_id $tms_id '.[] | select (.timeMachineId==$tms_id) | select (.name=="First-Snapshot") .id' | tr -d \")
# Now that we have all the needed parameters we can check if there is a clone named INITIALS-FiestaDB_DEV
clone_id=$(curl --silent -k "https://${era_ip}/era/v0.9/clones" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq --arg db_name_dev $db_name_dev '.[] | select (.name==$db_name_dev) .id' | tr -d \")
# Check if there is a clone already. if not, start the clone process
if [[ -z $clone_id ]]
then
# Clone call of the MariaDB
opanswer=$(curl --silent -k -X POST \
"https://${era_ip}/era/v0.9/tms/$tms_id/clones" \
-H 'Content-Type: application/json' \
--user $era_admin:$era_password \
-d \
'{"name":"'$db_name_dev'","description":"Dev clone from the '$db_name_prod'","createDbserver":true,"clustered":false,"nxClusterId":"'$era_uuid'","sshPublicKey":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmhJS2RbHN0+Cz0ebCmpxBCT531ogxhxv8wHB+7Z1G0I77VnXfU+AA3x7u4gnjbZLeswrAyXk8Rn/wRMyJNAd7FTqrlJ0Imd4puWuE2c+pIlU8Bt8e6VSz2Pw6saBaECGc7BDDo0hPEeHbf0y0FEnY0eaG9MmWR+5SqlkepgRRKN8/ipHbi5AzsQudjZg29xra/NC/BHLAW/C+F0tE6/ghgtBKpRoj20x+7JlA/DJ/Ec3gU0AyYcvNWlhlR+qc83lXppeC1ie3eb9IDTVbCI/4dXHjdSbhTCRu0IwFIxPGK02BL5xOVTmxQyvCEOn5MSPI41YjJctUikFkMgOv2mlV root@centos","dbserverId":null,"dbserverClusterId":null, "dbserverLogicalClusterId":null,"timeMachineId":"'$tms_id'","snapshotId":"'$snap_id'", "userPitrTimestamp":null,"timeZone":"Europe/Amsterdam","latestSnapshot":false,"nodeCount":1,"nodes":[{"vmName":"'$vm_name_dev'", "computeProfileId":"'$compute_id'","networkProfileId":"'$network_id'","newDbServerTimeZone":null, "nxClusterId":"'$era_uuid'","properties":[]}],"actionArguments":[{"name":"vm_name","value":"'$vm_name_dev'"}, {"name":"dbserver_description","value":"Dev clone from the '$vm_name'"},{"name":"db_password","value":"nutanix/4u"}],"tags":[],"newDbServerTimeZone":"UTC","computeProfileId":"'$compute_id'","networkProfileId":"'$network_id'", "databaseParameterProfileId":"'$db_param_id'"}'
# Call the waitloop function
waitloop "$opanswer" 30
fi
# Let's get the IP address of the cloned database server
cloned_vm_ip=$(curl --silent -k "https://${era_ip}/era/v0.9/dbservers" -H 'Content-Type: application/json' --user $era_admin:$era_password | jq --arg clone_name $vm_name_dev '.[] | select (.name==$clone_name) .ipAddresses[0]' | tr -d \")
DB_SERVER=$cloned_vm_ip
# If there is a "/" in the password or username we need to change it otherwise sed goes haywire
if [ `echo $DB_PASSWD | grep "/" | wc -l` -gt 0 ]
then
DB_PASSWD1=$(echo "${DB_PASSWD//\//\\/}")
else
DB_PASSWD1=$DB_PASSWD
fi
if [ `echo $DB_USER | grep "/" | wc -l` -gt 0 ]
then
DB_USER1=$(echo "${DB_USER//\//\\/}")
else
DB_USER1=$DB_USER
fi
# Change the Fiesta configuration code so it works in the container
sed -i "s/REPLACE_DB_NAME/$DB_NAME/g" /code/Fiesta/config/config.js
sed -i "s/REPLACE_DB_HOST_ADDRESS/$DB_SERVER/g" /code/Fiesta/config/config.js
sed -i "s/REPLACE_DB_DIALECT/$DB_TYPE/g" /code/Fiesta/config/config.js
sed -i "s/REPLACE_DB_USER_NAME/$DB_USER1/g" /code/Fiesta/config/config.js
sed -i "s/REPLACE_DB_PASSWORD/$DB_PASSWD1/g" /code/Fiesta/config/config.js
# Run the NPM Application
cd /code/Fiesta
npm start
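The ``waitloop`` function at the top of this script implements a simple poll-until-done pattern against the Era operations endpoint: keep requesting the operation until ``percentageComplete`` reaches 100, then inspect the final ``status`` (where ``5`` means success). A minimal Python sketch of the same pattern — ``fetch_status`` here is a stand-in for the ``curl`` + ``jq`` pipeline, not part of the Era API:

```python
import time

def wait_for_operation(fetch_status, max_tries=30, delay=30):
    """Poll until the operation reports 100% complete; return its final status.

    fetch_status() should return a dict shaped like the JSON from
    GET /era/v0.9/operations/<op_id>, e.g.
    {"percentageComplete": "100", "status": "5"}.
    """
    for _attempt in range(max_tries):
        op = fetch_status()
        if op.get("percentageComplete") == "100":
            return op.get("status")   # "5" means the operation succeeded
        time.sleep(delay)             # the bash version also sleeps 30 seconds
    return None                       # gave up, mirroring the script's timeout branch
```

The same three outcomes as the bash version apply: success, failure reported by Era, or a timeout after ``max_tries`` polls.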
.. note::
This script will:
- Check if there is a clone of the *Initials* **-MariaDB_VM** server; if not, create one using the following naming:
- *Initials* **-MariaDB_DEV-VM** as the Database server
- *Initials* **-FiestaDB_DEV** as the name of the cloned Database
- *Initials* **-FiestaDB_DEV_TM** as the name of the Time Machine of the cloned Database
- Set the script to use the cloned database as its database server
- Run the rest as the normal production script deployed earlier
#. Save the file in VC **DON'T COMMIT AND PUSH TO GITEA!**
Create a new dockerfile
-----------------------
Now we need to make sure that the development container is using the newly created **runapp-dev.sh** file.
#. Create a new file called **dockerfile-dev**
#. Copy and paste the below content in the file
.. code-block:: docker
# This multi-stage dockerfile lets the container start faster, as runapp.sh doesn't have to run all the npm steps
# Grab the Alpine Linux OS image and name the container base
FROM alpine:3.11 as base
# Install needed packages
RUN apk add --no-cache --update nodejs npm git
# Create and set the working directory
RUN mkdir /code
WORKDIR /code
# Get the Fiesta Application in the container
RUN git clone https://github.com/sharonpamela/Fiesta.git /code/Fiesta
# Get ready to install and build the application
RUN cd /code/Fiesta && npm install
RUN cd /code/Fiesta/client && npm install
RUN cd /code/Fiesta/client && npm audit fix
RUN cd /code/Fiesta/client && npm fund
RUN cd /code/Fiesta/client && npm update
RUN cd /code/Fiesta/client && npm run build
# Grab the Alpine Linux OS image and name it Final_Image
FROM alpine:3.11 as Final_Image
# Install some needed packages
RUN apk add --no-cache --update nodejs npm mysql-client
# Install the npm package nodemon globally
RUN npm install -g nodemon
# Copy the earlier created application from the first step into the new container
COPY --from=base /code /code
# Copy the starting app, but dev version
COPY runapp-dev.sh /code/runapp.sh
RUN chmod +x /code/runapp.sh
WORKDIR /code
# Start the application
ENTRYPOINT [ "/code/runapp.sh"]
EXPOSE 3001 3000
As you can see, there is just one small change: where we copied **runapp.sh** in the earlier steps, we now copy **runapp-dev.sh** into the container as **runapp.sh**.
#. Save the file in VC **DON'T COMMIT AND PUSH TO GITEA!**
Add extra Drone secrets
-----------------------
As we need to tell drone where our Era instance is and what credentials are needed, we need to create these as secrets.
#. Open your Drone UI at **\http://<IP ADDRESS DOCKERVM>:8080**
#. Click on your **Repository -> SETTINGS**
#. Add the following secrets (Click **ADD SECRET** to save the secret):
- **era_ip** - <IP ADDRESS OF ERA>
- **era_user** - admin
- **era_password** - <ADMIN PASSWORD ERA>
- **initials** - <YOUR INITIALS>
.. note::
You should now have 11 secrets
.. figure:: images/11.png
Push your files to Gitea
------------------------
#. Open your VC
#. Commit and push all to your Gitea
#. Click **OK** on the message box you get as Gitea doesn't know YET about this branch
.. figure:: images/12.png
#. Open Drone UI to see the job running
.. figure:: images/13.png
#. Open a ssh session to your docker vm server and run ``docker logs --follow fiesta_app_dev``
#. You will see a step running mentioning ``Operation still in progress...``
.. figure:: images/14.png
#. Open your Era interface and you will see in **Operations** a **Clone Database** operation
.. figure:: images/15.png
#. Wait till the step is done (approx. 10 minutes)
#. Return to your ssh session to see the progress of the ``docker logs`` command.
#. Wait until you see the line ``On Your Network: http://172.17.0.7:3000``
#. Open the development version of the Fiesta Application at **\http://<IP ADDRESS DOCKERVM>:5050**
#. Goto **Products**
#. Add an extra product by clicking on the **Add New Product** button
#. Use the following values for the fields
- **Product Name (\*)** - Nutanix HQ JS Reception
- **Suggested Retail Price (\*)** - 10000
- **Product Image URL (optional)** - \https://images.squarespace-cdn.com/content/v1/5d31ebb829f8cc0001b2481b/1564761967972-SUOBVO463RDQ2GSY9JD1/ke17ZwdGBToddI8pDm48kGmScA6V2_DHTkmfhjdEzm97gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UZMI6X7yGUDybalAFUlJQFpALT4Jd0h1Jp53vKTUc5VLbka3MzgShcsnUbwZjk4-8w/Nutanix+%282%29.jpg?format=1500w
- **Product Comments (optional)** - Full reception including screens
#. Click the **Submit** button
#. Click the **OK** button
#. Scroll all the way down to see the new added item
#. Change the URL to the production application by changing the port number from **5050** to **5000**; the newly added item is NOT there.
Now that we have seen that we are working on two different databases, the development area is complete. Whatever we do here will have no impact on the production database!
.. let's roll the Development database back to the time we created the snapshot.
Refresh the development database
--------------------------------
#. Open your Era instance
#. Goto **Databases (drop down menu) -> Clones**
#. Click the radio button in front of your *Initials* **-FiestaDB_DEV** clone
#. Click the **Refresh** button
#. Select under **Snapshot** your **First-Snapshot**
.. figure:: images/16.png
#. Click **Refresh**
#. Click **Operations** to follow the process (approx. 5-7 minutes)
------
Takeaways
---------
- Ease of use for the deployment of a development environment using Era for database management
- Use of Calm to deploy a development environment that integrates with Era
- Setting up CI/CD together with Era is quite easy
- A CI/CD pipeline can maintain a clean distinction between Production and Development
Lesson 5 - Number of byes
=========================
Built-in function `int`
-----------------------
The built-in function `int` can be used to convert strings or floating
point numbers to integers. For floating point numbers it truncates toward zero (no rounding).
>>> int("10")
10
>>> int(5.123)
5
>>> int(5.9)
5
Number of byes in a tournament
------------------------------
We will try to write code to solve the following problem.
*In a tennis tournament, how many players need to be given byes?*
If the number of players who sign up for a tournament is not equal to
power of 2, some players need to be given byes and they automatically
move to second round. For example, if there are 21 people, 11 people
will need to be given byes. More details about this problem can be
found here:
http://draghuram.github.io/tournament/byes/2018/01/20/how-many-byes.html
Don't worry if the math on that page looks complicated. You don't need
that for writing code. You just need to know the solution which is
this:
*Find a power of 2 that is greater than number of players and subtract
number of players from that number.*
For example, if there are 21 people in the tournament, 32 is the
closest power of 2 that is greater than 21. So the number of byes
would be::
32 - 21 = 11
This is all you need to know to start writing code.
Solution
--------
To solve this problem, we need to learn a function called ``log`` in
the module ``math``. This function computes the *logarithm* of a
*number* with respect to a *base*. Here are some examples::
>>> import math
>>> math.log(16, 2)
4.0
>>> math.log(8, 2)
3.0
>>> math.log(32, 2)
5.0
Can you understand the return value of the ``log()`` function? Here is
another way of understanding it::
>>> 2 ** 3 # 2 to the power of 3
8
>>> math.log(8, 2)
3.0
>>> 2 ** 4 # 2 to the power of 4
16
>>> math.log(16, 2)
4.0
>>> 2 ** 5 # 2 to the power of 5
32
>>> math.log(32, 2)
5.0
As you can see, ``log()`` is essentially the inverse of the power
operator (``**``). Armed with this new function, let us proceed with the code
to solve our problem. Remember that this is the algorithm to solve our
"byes" problem:
*Find a power of 2 that is greater than number of players and subtract
number of players from that number.*
Let us assume that there are 21 players in a tournament. Our job is to
find the number of byes that need to be given.
>>> math.log(21, 2)
4.392317422778761
Note that we are getting a floating point number here because 21 is
not an exact power of 2 number. We need to convert this to an integer
first.
>>> int(4.392317422778761)
4
Note that 2 to the power of 4 will give us a power of 2 number that is
less than 21. But what we really want is a power of 2 that is greater
than 21. This is how we do it.
>>> 2 ** (4 + 1)
32
All we need to do now is to subtract number of players from this
number.
>>> byes = 32 - 21
There you go! We now have code to solve the problem when the number of
players is 21.
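One possible way to wrap the interactive session above into a reusable function (a sketch; the ``num_byes`` name is just a suggestion):

```python
import math

def num_byes(players):
    """Byes needed so the bracket is the next power of 2 at or above *players*."""
    exponent = int(math.log(players, 2))
    bracket = 2 ** exponent
    if bracket < players:            # not an exact power of 2: go one power higher
        bracket = 2 ** (exponent + 1)
    return bracket - players

print(num_byes(21))   # 11
print(num_byes(16))   # 0
```

Note that when the number of players is already an exact power of 2, ``bracket`` equals ``players`` and the function returns 0 — the bonus case from the assignment below falls out for free.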
Assignment
----------
Write a program that accepts number of players as input (integer) and
prints number of byes that need to be given. You need to take code
explained above and generalize it. Try to write the code using
functions.::
$ python3 num_byes.py 21
11
$ python3 num_byes.py 10
6
Bonus points if you can write the program so that when a power of 2 is
given as input, the output is 0 (because there is no need to give byes
in this case).::
$ python3 num_byes.py 16
0
$ python3 num_byes.py 128
0
Services for Google Cloud Automl v1beta1 API
============================================
.. automodule:: google.cloud.automl_v1beta1.services.auto_ml
:members:
:inherited-members:
.. automodule:: google.cloud.automl_v1beta1.services.prediction_service
:members:
:inherited-members:
.. automodule:: google.cloud.automl_v1beta1.services.tables
:members:
:inherited-members:
.. _screencasts:
Screen Casts
============
Part 1: Introduction to RPyC and the Classic Mode
-------------------------------------------------
.. raw:: html
<object width="425" height="344" id="_741" data="http://showmedo.com/static/flowplayer/flowplayer-3.1.5.swf" type="application/x-shockwave-flash"><param name="movie" value="http://showmedo.com/static/flowplayer/flowplayer-3.1.5.swf" />
<param name="allowfullscreen" value="true" />
<param name="allowscriptaccess" value="always" />
<param name="flashvars" value='config={"key":"#$824e5316466b69d76dc","logo":{"url":"http://showmedo.com/static/images/showmedo_logo_vp.png","fullscreenOnly":false,"top":20,"right":20,"opacity":0.5,"displayTime":0,"linkUrl":"http://showmedo.com"},"clip":{"baseUrl":"http://showmedo.com","autoPlay":false,"autoBuffering":true},"playlist":[{"url":"http://videos1.showmedo.com/ShowMeDos/2780000.flv","title":"Introduction to RPyC 3.0","baseUrl":"http://showmedo.com","autoPlay":false,"autoBuffering":true}],"plugins":{"controls":{"url":"http://showmedo.com/static/flowplayer/flowplayer.controls-3.1.5.swf","playlist":true}}}' />
</object>
* `Link <http://showmedo.com/videotutorials/video?name=2780000;fromSeriesID=278>`_
Part 2: Services
----------------
N/A
Part 3: Callbacks
-----------------
N/A
tensortrade.oms.wallets.ledger module
=====================================
.. automodule:: tensortrade.oms.wallets.ledger
:members:
:undoc-members:
:show-inheritance:
Test Runs API
=============
Test Run
--------
The Test Run (and related) API has some special handling that differs from the
standard APIs. This is because there is complex logic for submitting results,
and new runs with results can be submitted.
Please consider `MozTrap Connect`_ as a way to submit results for tests. You
can also check out `MozTrap Connect on github`_.
.. _MozTrap Connect on github: https://github.com/camd/moztrap-connect/
.. _MozTrap Connect: https://moztrap-connect.readthedocs.org/en/latest/index.html
.. http:get:: /api/v1/run
.. http:post:: /api/v1/run
:productversion: (optional) The ProductVersion ID to filter on.
:productversion__version: (optional) The ProductVersion ``name`` to filter
on. For example, if the Product and Version are ``Firefox 10`` then
the ``productversion__version`` would be ``10``.
:productversion__product__name: (optional) The Product ``name`` to filter on.
:status: (optional) The status of the run. One of ``active`` or ``draft``.
**Example request**:
.. sourcecode:: http
GET /api/v1/run/?format=json&productversion__version=10&case__suites__name=Sweet%20Suite
Run Case Versions
-----------------
.. http:get:: /api/v1/runcaseversion
Filtering
^^^^^^^^^
:run: The ``id`` of the run
:run__name: The ``name`` of the run
:caseversion: The ``id`` of the caseversion
:caseversion__name: The ``name`` of the caseversion
.. sourcecode:: http
        GET /api/v1/runcaseversion/?format=json&run__name=runfoo
Results
-------
.. http:patch:: /api/v1/result
**Example request**:
This endpoint is write only. The submitted result objects should
be formed like this:
.. sourcecode:: http
{
"objects": [
{
"case": "1",
"environment": "23",
"run_id": "1",
"status": "passed"
},
{
"case": "14",
"comment": "why u no make sense??",
"environment": "23",
"run_id": "1",
"status": "invalidated"
},
{
"bug": "http://www.deathvalleydogs.com",
"case": "326",
"comment": "why u no pass?",
"environment": "23",
"run_id": "1",
"status": "failed",
"stepnumber": 1
}
]
}
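Client code can assemble this body programmatically. A hedged sketch — ``build_result`` is a hypothetical helper, not part of MozTrap; only the payload shape comes from the example above:

```python
import json

def build_result(run_id, case, environment, status,
                 comment=None, bug=None, stepnumber=None):
    """Assemble one result object in the shape the /api/v1/result endpoint expects."""
    result = {"run_id": str(run_id), "case": str(case),
              "environment": str(environment), "status": status}
    if comment is not None:
        result["comment"] = comment
    if bug is not None:
        result["bug"] = bug
    if stepnumber is not None:
        result["stepnumber"] = stepnumber
    return result

payload = {"objects": [
    build_result(1, 1, 23, "passed"),
    build_result(1, 326, 23, "failed", comment="why u no pass?",
                 bug="http://www.deathvalleydogs.com", stepnumber=1),
]}
body = json.dumps(payload)  # send as the PATCH /api/v1/result request body
```

As noted above, consider MozTrap Connect for production use; this only illustrates the documented payload shape.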
.. #!/usr/bin/env python3
.. _`workbook_xlsx`:
XLSX or XLSM Workbook
----------------------
::
import logging
import pprint
import xml.etree.cElementTree as dom
import re
import zipfile
from collections import defaultdict
from stingray.workbook.base import Workbook
import stingray.sheet
import stingray.cell
.. py:module:: workbook.xlsx
.. py:class:: XLSX_Workbook
Extract sheets, rows and cells from an XLSX format file.
We're opening a ZIP archive and parsing the various XML documents
that we find therein.
The :py:class:`ElementTree` incremental parser provides
parse "events" for specific tags, allowing for lower-memory parsing of
the sometimes large XML documents.
See http://effbot.org/zone/element-iterparse.htm
The class as a whole defines some handy constants like XML namespaces
and a pattern for parsing Cell ID's to separate the letters from the numbers.
In addition to the superclass attributes, some additional unique
attributes are introduced here.
.. py:attribute:: zip_archive
A zip archive for this file.
.. py:attribute:: strings_dict
The strings in this workbook
::
class XLSX_Workbook( Workbook ):
"""ECMA Standard XLSX or XLSM documents.
Locate sheets and rows within a given sheet.
See http://www.ecma-international.org/publications/standards/Ecma-376.htm
"""
# Relevant subset of namespaces used
XLSX_NS = {
"main":"http://schemas.openxmlformats.org/spreadsheetml/2006/main",
"r":"http://schemas.openxmlformats.org/officeDocument/2006/relationships",
"rel":"http://schemas.openxmlformats.org/package/2006/relationships",
}
cell_id_pat = re.compile( r"(\D+)(\d+)" )
def __init__( self, name, file_object=None ):
"""Prepare the workbook for reading.
:param name: File name
:param file_object: Optional file-like object. If omitted, the named file is opened.
"""
super().__init__( name, file_object )
self.zip_archive= zipfile.ZipFile( file_object or name, "r" )
self._prepare()
The are two preparation steps required for reading these files. First, the
sheets must be located. This involves resolving internal rID numbers.
Second, the shared strings need to be loaded into memory.
::
def _prepare( self ):
self._locate_sheets()
self._get_shared_strings()
Locate all sheets involves building a :py:data:`name_to_id` mapping and and :py:data:`id_to_member` mapping. This allows is to map the
user-oriented name to an id and the id to the XLSX zipfile member.
::
def _locate_sheets( self ):
"""Locate the name to id mapping and the id to member mapping.
"""
# 1a. Open "workbook.xml" member.
workbook_zip= self.zip_archive.getinfo("xl/workbook.xml")
workbook_doc= dom.parse( self.zip_archive.open(workbook_zip) )
# 1b. Get a dict of sheet names and their rIdx values.
key_attr_id= 'name'
val_attr_id= dom.QName( self.XLSX_NS['r'], 'id' )
self.name_to_id = dict(
( s.attrib[key_attr_id], s.attrib[val_attr_id] )
for s in workbook_doc.findall("*/main:sheet", namespaces=self.XLSX_NS)
)
logging.debug( self.name_to_id )
# 2a. Open the "_rels/workbook.xml.rels" member
rels_zip= self.zip_archive.getinfo("xl/_rels/workbook.xml.rels")
rels_doc= dom.parse( self.zip_archive.open(rels_zip) )
# 2b. Get a dict of rIdx to Target member name
logging.debug( dom.tostring( rels_doc.getroot() ) )
key_attr_id= 'Id'
val_attr_id= 'Target'
self.id_to_member = dict(
( r.attrib[key_attr_id], r.attrib[val_attr_id] )
for r in rels_doc.findall("rel:Relationship", namespaces=self.XLSX_NS)
)
logging.debug( self.id_to_member )
Get Shared Strings walks a fine line. Ideally, we'd like to parse
the document and simply use ``itertext`` to gather all of the text
within a given string instance (:samp:`<si>`) tag. **However.**
In practice, these documents can be so huge that they don't fit
in memory comfortably. We rely on incremental parsing via the ``iterparse`` function.
::
def _get_shared_strings( self ):
"""Build ``strings_dict`` with all shared strings.
"""
self.strings_dict= defaultdict(str)
count= 0
text_tag= dom.QName( self.XLSX_NS['main'], "t" )
string_tag= dom.QName( self.XLSX_NS['main'], "si" )
# 1. Open the "xl/sharedStrings.xml" member
sharedStrings_zip= self.zip_archive.getinfo("xl/sharedStrings.xml")
for event, element in dom.iterparse(
self.zip_archive.open( sharedStrings_zip ), events=('end',) ):
logging.debug( event, element.tag )
if element.tag == text_tag:
self.strings_dict[ count ]+= element.text
elif element.tag == string_tag:
count += 1
element.clear()
logging.debug( self.strings_dict )
The shared strings may be too massive for in-memory incremental parsing.
We can create a temporary extract file to handle this case. Here's
the kind of code we might use.
.. parsed-literal::
with tempfile.TemporaryFile( ) as temp:
self.zip_archive.extract( sharedStrings_mbr, temp.filename )
for event, element in dom.iterparse( temp.filename ):
*process event and element*
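The incremental pattern used by :py:meth:`_get_shared_strings` can be exercised standalone on a tiny in-memory document (a sketch; note that ``xml.etree.cElementTree`` is a deprecated alias, so the sketch uses ``xml.etree.ElementTree``, and the toy XML omits the real spreadsheet namespaces):

```python
import io
import xml.etree.ElementTree as dom

xml_doc = b"<sst><si><t>Hello</t></si><si><t>World</t></si></sst>"

strings = {}
count = 0
for event, element in dom.iterparse(io.BytesIO(xml_doc), events=("end",)):
    if element.tag == "t":
        # accumulate text pieces for the current shared string
        strings[count] = strings.get(count, "") + element.text
    elif element.tag == "si":
        count += 1
        element.clear()   # free the finished subtree to keep memory flat

print(strings)  # {0: 'Hello', 1: 'World'}
```

The ``element.clear()`` call is what keeps memory bounded for very large documents.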
.. py:method:: XLSX_Workbook.sheets( )
Return the list of sheets for this workbook.
::
def sheets( self ):
return self.name_to_id.keys()
Translate a col-row pair from :samp:`({letter}, {number})` into a
:samp:`({row}, {col})` pair, where the column letters become a proper 0-based Python index.
::
@staticmethod
def make_row_col( col_row_pair ):
col, row = col_row_pair
cn = 0
for char in col_row_pair[0]:
cn = cn*26 + (ord(char)-ord("A")+1)
return int(row), cn-1
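Taken on its own, the conversion can be exercised like this (a standalone copy of the logic above, outside the class, together with the cell-ID pattern):

```python
import re

cell_id_pat = re.compile(r"(\D+)(\d+)")

def make_row_col(col_row_pair):
    # Spreadsheet columns are base-26 with A=1 ... Z=26, AA=27, ...
    col, row = col_row_pair
    cn = 0
    for char in col:
        cn = cn * 26 + (ord(char) - ord("A") + 1)
    return int(row), cn - 1

print(make_row_col(cell_id_pat.match("AA3").groups()))  # (3, 26)
```

Note the returned row keeps the sheet's 1-based number while the column is shifted to a 0-based index.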
We can build an eager :py:class:`sheet.Row` or a :py:class:`sheet.LazyRow` from the available data.
The eager :py:class:`sheet.Row` is built from :py:class:`cell.Cell` objects.
The :py:class:`sheet.LazyRow` delegates the creation
of :py:class:`cell.Cell` objects to :py:meth:`Workbook.row_get`.
This uses an incremental parser, also. There are four kinds of tags that
have to be located.
- :samp:`<row>{row}</row>`, end event. Finish (and yield) the row of cells.
Since XLSX is sparse, missing empty cells must be filled in.
- :samp:`<c t="{type}" r="{id}">{cell}</c>`.
- Start event for ``c``. Get the cell type and id. Empty the value accumulator.
- End event for ``c``. Save the accumulated value. This allows the cell to have
mixed content model.
- :samp:`<v>{value}</v>`, end event. Use the :py:meth:`cell` method to track down
enough information to build the Cell instance.
.. py:method:: XLSX_Workbook.rows_of( sheet )
Iterate through rows of the given sheet.
::
def rows_of( self, sheet ):
"""Iterator over rows as a list of Cells for a named worksheet."""
# 1. Map user name to member.
rId = self.name_to_id[sheet.name]
self.sheet_member_name = self.id_to_member[rId]
# 2. Open member.
sheet_zip= self.zip_archive.getinfo("xl/"+self.sheet_member_name)
self.row= {}
# 3. Assemble each row, allowing for missing cells.
row_tag= dom.QName(self.XLSX_NS['main'], "row")
cell_tag= dom.QName(self.XLSX_NS['main'], "c")
value_tag= dom.QName(self.XLSX_NS['main'], "v")
format_tag= dom.QName(self.XLSX_NS['main'], "f")
for event, element in dom.iterparse(
self.zip_archive.open(sheet_zip), events=('start','end') ):
logging.debug( element.tag, repr(element.text) )
if event=='end' and element.tag == row_tag:
# End of row: fill in missing cells
if self.row.keys():
data= stingray.sheet.Row( sheet, *(
self.row.get(i, stingray.cell.EmptyCell('', self))
for i in range(max(self.row.keys())+1) ) )
yield data
else:
yield stingray.sheet.Row( sheet )
self.row= {}
element.clear()
elif event=='end' and element.tag == cell_tag:
# End of cell: consolidate the final string
self.row[self.row_col[1]] = self.value
self.value= stingray.cell.EmptyCell( '', self )
elif event=='start' and element.tag == cell_tag:
# Start of cell: collect a string in pieces.
self.cell_type= element.attrib.get('t',None)
self.cell_id = element.attrib['r']
id_match = self.cell_id_pat.match( self.cell_id )
self.row_col = self.make_row_col( id_match.groups() )
self.value= stingray.cell.EmptyCell( '', self )
elif event=='end' and element.tag == value_tag:
# End of a value; what type was it?
self.value= self.cell( element )
elif event=='end' and element.tag == format_tag:
pass # A format string
else:
pass
logging.debug( "Ignoring", end="" ) # Numerous bits of structure exposed.
logging.debug( dom.tostring(element) )
.. py:method:: XLSX_Workbook.row_get( row, attribute )
Low-level get of a particular attribute from the given row.
::
def row_get( self, row, attribute ):
"""Create a Cell from the row's data."""
return row[attribute.position]
.. py:method:: XLSX_Workbook.cell( row, element )
Build a subclass of :py:class:`cell.Cell` from the current value tag content plus the
containing cell type information.
::
def cell( self, element ):
"""Create a proper :py:class:`cell.Cell` subclass from cell and value information."""
logging.debug( self.cell_type, self.cell_id, element.text )
if self.cell_type is None or self.cell_type == 'n':
try:
return stingray.cell.NumberCell( float(element.text), self )
except ValueError:
print( self.cell_id, element.attrib, element.text )
return None
elif self.cell_type == "s":
try:
# Shared String?
return stingray.cell.TextCell( self.strings_dict[int(element.text)], self )
except ValueError:
# Inline String?
logging.debug( self.cell_id, element.attrib, element.text )
return stingray.cell.TextCell( element.text, self )
except KeyError:
# Not a valid shared string identifier?
logging.debug( self.cell_id, element.attrib, element.text )
return stingray.cell.TextCell( element.text, self )
elif self.cell_type == "b":
return stingray.cell.BooleanCell( float(element.text), self )
elif self.cell_type == "d":
return stingray.cell.FloatDateCell( float(element.text), self )
elif self.cell_type == "e":
return stingray.cell.ErrorCell( element.text, self )
else:
# 'str' (formula), 'inlineStr' (string), 'e' (error)
print( self.cell_type, self.cell_id, element.attrib, element.text )
logging.debug( self.strings_dict.get(int(element.text)) )
return None
What !!!??? Page Rank 5
================================================================================
.. image:: http://3.bp.blogspot.com/-Z6KFvtK_P9Q/UGjy17Z0pMI/AAAAAAAAFoE/HErWvC4UEtU/s640/Google+PageRank+Checker+-+Check+Google+page+rank+instantly-092515.png
   :target: http://3.bp.blogspot.com/-Z6KFvtK_P9Q/UGjy17Z0pMI/AAAAAAAAFoE/HErWvC4UEtU/s1600/Google+PageRank+Checker+-+Check+Google+page+rank+instantly-092515.png
Source: `http://www.prchecker.info/check_page_rank.php`_
Today, while helping a client `查`_ (look up) the PageRank of his site, I checked my own as well — and to my utter bafflement discovered that this blog has a PR of 5. What does that mean? It means Google rates this site's inbound-link standing above `綠角`_, `MMDays`_, `MR JAMIE`_, `Mr.6`_, `Jserv`_, `Python 星球`_ and `Linux 星球`_, and on the same level as the sites of `xdite`_ and `清大彭明輝`_.
I can think of a few possible reasons:
1. Google got it wrong.
2. PR values ebb and flow like the tide; today just happens to be my high tide and everyone else's low.
3. PageRank simply doesn't "matter" anymore.
I just cannot work out how a site averaging no more than 150 visitors a day can have a PR of 5. Still, it gives me an excuse to wag my tail a little.
Today is 2012-10-01, so allow me this one small moment of vanity!
.. _http://www.prchecker.info/check_page_rank.php:
http://www.prchecker.info/check_page_rank.php
.. _綠角: http://greenhornfinancefootnote.blogspot.tw/
.. _MMDays: http://mmdays.com/
.. _MR JAMIE: http://mrjamie.cc/
.. _Mr.6: http://mr6.cc/
.. _Jserv: http://blog.linux.org.tw/~jserv/
.. _Python 星球: http://planet.python.org.tw/
.. _Linux 星球: http://planet.linux.org.tw/
.. _xdite: http://blog.xdite.net/
.. _清大彭明輝: http://mhperng.blogspot.tw/
.. author:: default
.. categories:: chinese
.. tags::
.. comments::
statsmodels.tsa.x13.x13\_arima\_select\_order
=============================================
.. currentmodule:: statsmodels.tsa.x13
.. autofunction:: x13_arima_select_order | 28.833333 | 45 | 0.595376 |
70dfabd09c932be8032c56b31564c636a1b80242 | 620 | rst | reStructuredText | typo3/sysext/core/Documentation/Changelog/7.4/Feature-20194-ConfigurationForDisplayingTheSaveViewButton.rst | dennned/jesus | f4f8575f81f3b95ffab830037264b4c0f0ae6421 | [
"PostgreSQL"
] | null | null | null | typo3/sysext/core/Documentation/Changelog/7.4/Feature-20194-ConfigurationForDisplayingTheSaveViewButton.rst | dennned/jesus | f4f8575f81f3b95ffab830037264b4c0f0ae6421 | [
"PostgreSQL"
] | null | null | null | typo3/sysext/core/Documentation/Changelog/7.4/Feature-20194-ConfigurationForDisplayingTheSaveViewButton.rst | dennned/jesus | f4f8575f81f3b95ffab830037264b4c0f0ae6421 | [
"PostgreSQL"
] | null | null | null | =======================================================================
Feature: #20194 - Configuration for displaying the "Save & View" button
=======================================================================
Description
===========
The "Save & View" button is configurable by TSConfig "TCEMAIN.preview.disableButtonForDokType" (CSV of "doktype" IDs) to
disable the button for custom page "doktypes". The default value is set in the PHP implementation: "254, 255, 199"
(Storage Folder, Recycler and Menu Seperator)
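A page TSConfig snippet using the option described above might look like the following; the additional ID ``116`` is a hypothetical custom doktype added for illustration:

```typoscript
# Keep the built-in defaults (254, 255, 199) and also hide the
# "Save & View" button for a custom doktype
TCEMAIN.preview.disableButtonForDokType = 254, 255, 199, 116
```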
Impact
======
The "Save & View" button is no longer displayed in folders and recycler pages.
| 36.470588 | 120 | 0.569355 |
bfb8d232a8d65924833d5f3a2aac45e6f7d21cfa | 677 | rst | reStructuredText | doc/command_line_overview.rst | l3atbc/psiturk | 85ffa74030ec2a5f4142e5bca63b2f8b8807d7c6 | [
"MIT"
] | 2 | 2016-07-27T12:33:07.000Z | 2017-02-25T08:24:53.000Z | doc/command_line_overview.rst | l3atbc/psiturk | 85ffa74030ec2a5f4142e5bca63b2f8b8807d7c6 | [
"MIT"
] | null | null | null | doc/command_line_overview.rst | l3atbc/psiturk | 85ffa74030ec2a5f4142e5bca63b2f8b8807d7c6 | [
"MIT"
] | 1 | 2018-07-27T06:39:46.000Z | 2018-07-27T06:39:46.000Z | Command-line Interface
======================
The **psiTurk shell** is a simple, interactive command line interface which
allows users to communicate with Amazon Mechanical Turk, psiturk.org, and their
own experiment servers.
.. toctree::
:maxdepth: 2
command_line/starting.rst
command_line/prompt.rst
command_line/amt_balance.rst
command_line/config.rst
command_line/db.rst
command_line/debug.rst
command_line/download_datafiles.rst
command_line/help.rst
command_line/hit.rst
command_line/psiturk_status.rst
command_line/quit.rst
command_line/server.rst
command_line/status.rst
command_line/mode.rst
command_line/worker.rst
| 25.074074 | 79 | 0.753323 |
b5fbf17f3281de12b168502f555fcfba5aa7abe8 | 259 | rst | reStructuredText | programming-language/cases/php/README.rst | wdv4758h/notes | 60fa483961245ec5bb264d3f28a885fb82a1c25e | [
"Unlicense"
] | 136 | 2015-06-15T13:26:40.000Z | 2022-03-03T07:47:31.000Z | programming-language/cases/php/README.rst | wdv4758h/notes | 60fa483961245ec5bb264d3f28a885fb82a1c25e | [
"Unlicense"
] | 82 | 2017-01-06T06:32:55.000Z | 2020-09-03T03:34:24.000Z | programming-language/cases/php/README.rst | wdv4758h/notes | 60fa483961245ec5bb264d3f28a885fb82a1c25e | [
"Unlicense"
] | 18 | 2015-12-04T04:02:44.000Z | 2022-02-24T03:48:57.000Z | ========================================
PHP
========================================
.. contents:: Table of Contents
References
========================================
* `References in PHP: An Indepth Look <https://derickrethans.nl/talks/phparch-php-variables-article.pdf>`_
| 19.923077 | 106 | 0.378378 |
dcf7d93a824973711b0e60dbbe2723d521daae69 | 407 | rst | reStructuredText | docs/source/reference/mdb.rst | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | 7 | 2022-01-21T09:15:45.000Z | 2022-02-15T09:31:58.000Z | docs/source/reference/mdb.rst | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | docs/source/reference/mdb.rst | Haiiliin/PyAbaqus | f20db6ebea19b73059fe875a53be370253381078 | [
"MIT"
] | null | null | null | =====================
Abaqus Model Database
=====================
Mdb commands are used to create and upgrade an Abaqus model database that stores models and analysis controls.
.. toctree::
:maxdepth: 1
:caption: Objects in Mdb
mdb/model
mdb/job
mdb/annotation
mdb/edit_mesh
Object features
---------------
.. autoclass:: abaqus.Mdb.Mdb.Mdb
:members:
:inherited-members:
| 16.958333 | 110 | 0.604423 |
12943e6188fad8d6cf19a196115f75c8298d388c | 2,836 | rst | reStructuredText | src/z3c/form/browser/file.rst | erral/z3c.form | 9d6d83c71d8b83752a0dde162ef358116a804564 | [
"ZPL-2.1"
] | 5 | 2015-09-02T15:03:34.000Z | 2018-05-09T04:12:36.000Z | src/z3c/form/browser/file.rst | erral/z3c.form | 9d6d83c71d8b83752a0dde162ef358116a804564 | [
"ZPL-2.1"
] | 65 | 2015-02-20T12:19:15.000Z | 2022-03-22T08:14:09.000Z | src/z3c/form/browser/file.rst | erral/z3c.form | 9d6d83c71d8b83752a0dde162ef358116a804564 | [
"ZPL-2.1"
] | 27 | 2015-02-17T19:32:14.000Z | 2020-07-21T05:42:03.000Z | File Widget
-----------
The file widget allows you to upload a new file to the server. The "file" type
of the "INPUT" element is described here:
http://www.w3.org/TR/1999/REC-html401-19991224/interact/forms.html#edef-INPUT
As for all widgets, the file widget must provide the new ``IWidget``
interface:
>>> from zope.interface.verify import verifyClass
>>> from z3c.form import interfaces
>>> from z3c.form.browser import file
>>> verifyClass(interfaces.IWidget, file.FileWidget)
True
The widget can be instantiated only using the request:
>>> from z3c.form.testing import TestRequest
>>> request = TestRequest()
>>> widget = file.FileWidget(request)
Before rendering the widget, one has to set the name and id of the widget:
>>> widget.id = 'widget.id'
>>> widget.name = 'widget.name'
We also need to register the template for the widget:
>>> import zope.component
>>> from zope.pagetemplate.interfaces import IPageTemplate
>>> from z3c.form.testing import getPath
>>> from z3c.form.widget import WidgetTemplateFactory
>>> zope.component.provideAdapter(
... WidgetTemplateFactory(getPath('file_input.pt'), 'text/html'),
... (None, None, None, None, interfaces.IFileWidget),
... IPageTemplate, name=interfaces.INPUT_MODE)
If we render the widget we get a simple input element:
>>> print(widget.render())
<input type="file" id="widget.id" name="widget.name"
class="file-widget" />
Let's now make sure that we can extract user-entered data from a widget:
>>> try:
... from StringIO import StringIO as BytesIO
... except ImportError:
... from io import BytesIO
>>> myfile = BytesIO(b'My file contents.')
>>> widget.request = TestRequest(form={'widget.name': myfile})
>>> widget.update()
>>> isinstance(widget.extract(), BytesIO)
True
If nothing is found in the request, the default is returned:
>>> widget.request = TestRequest()
>>> widget.update()
>>> widget.extract()
<NO_VALUE>
Let's also make sure that we can handle FileUpload objects coming from a file upload.
>>> from zope.publisher.browser import FileUpload
Let's define a FieldStorage stub:
>>> class FieldStorageStub:
... def __init__(self, file):
... self.file = file
... self.headers = {}
... self.filename = 'foo.bar'
Now build a FileUpload:
>>> myfile = BytesIO(b'File upload contents.')
>>> aFieldStorage = FieldStorageStub(myfile)
>>> myUpload = FileUpload(aFieldStorage)
>>> widget.request = TestRequest(form={'widget.name': myUpload})
>>> widget.update()
>>> widget.extract()
<zope.publisher.browser.FileUpload object at ...>
If we render them, we get a regular file upload widget:
>>> print(widget.render())
<input type="file" id="widget.id" name="widget.name"
class="file-widget" />
| 29.237113 | 78 | 0.682652 |
791cddba83e20537625d4ae6952b4137e3d80944 | 269 | rst | reStructuredText | docs/source/index.rst | demis001/raslpipe | 5df0e5f1787f47fee89cd611f558767cbacf1369 | [
"MIT"
] | 1 | 2016-06-23T13:57:57.000Z | 2016-06-23T13:57:57.000Z | docs/source/index.rst | demis001/raslpipe | 5df0e5f1787f47fee89cd611f558767cbacf1369 | [
"MIT"
] | 16 | 2016-03-28T20:33:34.000Z | 2017-01-04T20:16:55.000Z | docs/source/index.rst | demis001/raslpipe | 5df0e5f1787f47fee89cd611f558767cbacf1369 | [
"MIT"
] | null | null | null | temposeqcount
=============
Contents:
.. toctree::
:maxdepth: 2
install
help
analysis
scripts/index
stages/index
modules
.. only:: html
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 10.76 | 21 | 0.524164 |
2a7cac9838f8e77fd81a99b7d5ab7d3d08e0adc6 | 1,144 | rst | reStructuredText | docs/whatsnew/v0-0-6.rst | hannesfelipe/oemof-solph | a29802c73b9f3a1240a9ea6cec28f9d52bf1001c | [
"MIT"
] | 59 | 2020-04-01T12:02:37.000Z | 2022-03-26T06:31:06.000Z | docs/whatsnew/v0-0-6.rst | hannesfelipe/oemof-solph | a29802c73b9f3a1240a9ea6cec28f9d52bf1001c | [
"MIT"
] | 170 | 2020-03-31T12:04:26.000Z | 2022-03-31T15:41:04.000Z | docs/whatsnew/v0-0-6.rst | hannesfelipe/oemof-solph | a29802c73b9f3a1240a9ea6cec28f9d52bf1001c | [
"MIT"
] | 33 | 2020-04-28T11:17:09.000Z | 2022-03-14T21:25:08.000Z | v0.0.6 (April 29, 2016)
-----------------------
New features
^^^^^^^^^^^^^^^^^^^^
* It is now possible to choose whether or not the heat load profile generated
with the BDEW heat load profile method should only include space heating
or space heating and warm water combined.
(`Issue #130 <https://github.com/oemof/oemof-solph/issues/130>`_)
* Add possibility to change the order of the columns of a DataFrame subset. This is useful to change the order of stacked plots. (`Issue #148 <https://github.com/oemof/oemof-solph/pull/148>`_)
Documentation
^^^^^^^^^^^^^^^^^^^^
Testing
^^^^^^^^^^^^^^^^^^^^
* Fix constraint tests (`Issue #137 <https://github.com/oemof/oemof-solph/issues/137>`_)
Bug fixes
^^^^^^^^^^^^^^^^^^^^
* Use of wrong columns in generation of SF vector in BDEW heat load profile
generation (`Issue #129 <https://github.com/oemof/oemof-solph/issues/129>`_)
* Use of wrong temperature vector in generation of h vector in BDEW heat load
profile generation.
Other changes
^^^^^^^^^^^^^^^^^^^^
Contributors
^^^^^^^^^^^^^^^^^^^^
* Uwe Krien
* Stephan Günther
* Simon Hilpert
* Cord Kaldemeyer
* Birgit Schachler
| 27.238095 | 192 | 0.658217 |
730eb6e0f5dd34e204c6c0d09dec8855d8f9de64 | 811 | rst | reStructuredText | includes_config/includes_config_rb_supermarket.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | null | null | null | includes_config/includes_config_rb_supermarket.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | null | null | null | includes_config/includes_config_rb_supermarket.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | null | null | null | .. The contents of this file are included in multiple topics.
.. This file should not be changed in a way that hinders its ability to appear in multiple documentation sets.
The |supermarket rb| file contains all of the non-default configuration settings used by the |supermarket|. (The default settings are built-in to the |supermarket| configuration and should only be added to the |supermarket rb| file to apply non-default values.) These configuration settings are processed when the ``supermarket-ctl reconfigure`` command is run, such as immediately after setting up |supermarket| or after making a change to the underlying configuration settings after the server has been deployed. The |supermarket rb| file is a |ruby| file, which means that conditional statements can be used in the configuration file. | 202.75 | 637 | 0.802713 |
7717a454a744dea41c65dc08d614d88af9869295 | 387 | rst | reStructuredText | docs/light_lists.rst | paul-butcher/light_character | b25ebb2341302ad1775a26c38248f99932ee36c6 | [
"MIT"
] | null | null | null | docs/light_lists.rst | paul-butcher/light_character | b25ebb2341302ad1775a26c38248f99932ee36c6 | [
"MIT"
] | null | null | null | docs/light_lists.rst | paul-butcher/light_character | b25ebb2341302ad1775a26c38248f99932ee36c6 | [
"MIT"
] | null | null | null | Light Characteristics
=====================
You can make up your own, based on the information on
`Wikipedia <https://en.wikipedia.org/wiki/Light_characteristic>`_
or as described in the features.
Alternatively, you could look up some real examples from a light list.
The American `National Geospatial Intelligence Agency <https://msi.nga.mil/Publications/NGALOL>`_
is a good source.
| 32.25 | 97 | 0.751938 |
b495a5f2d5e8691e307314fd115d22803127bbdc | 313 | rst | reStructuredText | docs/source/reference/logging.rst | iamshnoo/optuna | d809aacd0384a17d06dcab03cf99d5c4c86abc92 | [
"MIT"
] | 1 | 2020-12-29T07:38:45.000Z | 2020-12-29T07:38:45.000Z | docs/source/reference/logging.rst | nabenabe0928/optuna | aa505125de8515518fe19ba227edf7a1d3f8ebda | [
"MIT"
] | 1 | 2021-06-25T15:45:42.000Z | 2021-06-25T15:45:42.000Z | docs/source/reference/logging.rst | nabenabe0928/optuna | aa505125de8515518fe19ba227edf7a1d3f8ebda | [
"MIT"
] | 2 | 2020-03-03T00:40:28.000Z | 2021-01-28T11:54:32.000Z | optuna.logging
==============
.. autosummary::
:toctree: generated/
:nosignatures:
optuna.logging.get_verbosity
optuna.logging.set_verbosity
optuna.logging.disable_default_handler
optuna.logging.enable_default_handler
optuna.logging.disable_propagation
optuna.logging.enable_propagation
| 22.357143 | 41 | 0.760383 |
8769e75545ce9b8a1ac4be9a96c490788ce5f5a7 | 2,992 | rst | reStructuredText | CHANGES.rst | jcushman/pyquery | 39ff94ee834c5041dd22b6dda45057d0ccecbb40 | [
"BSD-3-Clause"
] | null | null | null | CHANGES.rst | jcushman/pyquery | 39ff94ee834c5041dd22b6dda45057d0ccecbb40 | [
"BSD-3-Clause"
] | null | null | null | CHANGES.rst | jcushman/pyquery | 39ff94ee834c5041dd22b6dda45057d0ccecbb40 | [
"BSD-3-Clause"
] | null | null | null | 1.2.11 (unreleased)
-------------------
- Nothing changed yet.
1.2.10 (2016-01-05)
-------------------
- Fixed #118: implemented usage `lxml.etree.tostring` within `outer_html` method
- Fixed #117: Raise HTTP Error if HTTP status code is not equal to 200
- Fixed #112: make_links_absolute does not apply to form actions
- Fixed #98: contains act like jQuery
1.2.9 (2014-08-22)
------------------
- Support for keyword arguments in PyQuery custom functions
- Fixed #78: items must take care or the parent
- Fixed #65: PyQuery.make_links_absolute() no longer creates an 'href' attribute
  when it isn't there
- Fixed #19. ``is_()`` was broken.
- Fixed #9. ``.replaceWith(PyQuery element)`` raises error
- Remove official python3.2 support (mostly because of 3rd party semi-deps)
1.2.8 (2013-12-21)
------------------
- Fixed #22: Open by filename fails when file contains invalid xml
- Bug fix in .remove_class()
1.2.7 (2013-12-21)
------------------
- Use pep8 names for methods but keep an alias for the camel case method.
  E.g.: remove_attr and removeAttr both work.
  Fix #57
- .text() now returns an empty string instead of None if there is no text node.
  Fix #45
- Fixed #23: removeClass adds class attribute to elements which previously
lacked one
1.2.6 (2013-10-11)
------------------
- README_fixt.py was not include in the release. Fix #54.
1.2.5 (2013-10-10)
------------------
- cssselect compat. See https://github.com/SimonSapin/cssselect/pull/22
- tests improvements; they no longer require a network connection.
- fix #55
1.2.4
-----
- Moved to github. So a few files are renamed from .txt to .rst
- Added .xhtml_to_html() and .remove_namespaces()
- Use requests to fetch urls (if available)
- Use restkit's proxy instead of Paste (which will die with py3)
- Allow to open https urls
- python2.5 is no longer supported (may work, but tests are broken)
1.2.3
-----
- Allow to pass this in .filter() callback
- Add .contents() .items()
- Add tox.ini
- Bug fixes: fix #35 #55 #64 #66
1.2.2
-----
- Fix cssselectpatch to match the newer implementation of cssselect. Fixes issue #62, #52 and #59 (Haoyu Bai)
- Fix issue #37 (Caleb Burns)
1.2.1
-----
- Allow to use a custom css translator.
- Fix issue 44: case problem with xml documents
1.2
---
- PyQuery now use `cssselect <http://pypi.python.org/pypi/cssselect>`_. See issue 43.
- Fix issue 40: forward .html() extra arguments to ``lxml.etree.tostring``
1.1.1
-----
- Minor release. Include test file so you can run tests from the tarball.
1.1
---
- fix issues 30, 31, 32 - py3 improvements / webob 1.2+ support
1.0
---
- fix issues 24
0.7
---
- Python 3 compatible
- Add __unicode__ method
- Add root and encoding attribute
- fix issues 19, 20, 22, 23
0.6.1
------
- Move README.txt at package root
- Add CHANGES.txt and add it to long_description
0.6
----
- Added PyQuery.outerHtml
- Added PyQuery.fn
- Added PyQuery.map
- Change PyQuery.each behavior to reflect jQuery api
| 17.916168 | 109 | 0.668115 |
a2fdb9c7ae9387d654cec1d6f86330a5826e1ae6 | 154 | rst | reStructuredText | HISTORY.rst | RocketPunch-inc/django-haystack-elasticsearch | bcb8fc250f1e88fdbeefdf2d33442193fd474949 | [
"BSD-3-Clause"
] | null | null | null | HISTORY.rst | RocketPunch-inc/django-haystack-elasticsearch | bcb8fc250f1e88fdbeefdf2d33442193fd474949 | [
"BSD-3-Clause"
] | null | null | null | HISTORY.rst | RocketPunch-inc/django-haystack-elasticsearch | bcb8fc250f1e88fdbeefdf2d33442193fd474949 | [
"BSD-3-Clause"
] | null | null | null | =======
History
=======
0.1.1 (2021-05-10)
------------------
* Update six dependency.
0.1.0 (2016-12-29)
------------------
* First release on PyPI.
| 11 | 24 | 0.428571 |
f70ea723b7ff8737a838679810ba026eb574d915 | 504 | rst | reStructuredText | docs/source/api/topopt.filters.rst | arnavbansal2764/topopt | 74d8f17568a9d3349632e23840a9dc5b0d6c4d1f | [
"MIT"
] | 53 | 2020-04-14T10:13:04.000Z | 2022-02-24T03:16:57.000Z | docs/source/api/topopt.filters.rst | arnavbansal2764/topopt | 74d8f17568a9d3349632e23840a9dc5b0d6c4d1f | [
"MIT"
] | 5 | 2020-11-12T23:56:30.000Z | 2021-09-30T19:24:06.000Z | docs/source/api/topopt.filters.rst | arnavbansal2764/topopt | 74d8f17568a9d3349632e23840a9dc5b0d6c4d1f | [
"MIT"
] | 15 | 2020-02-12T01:32:07.000Z | 2022-02-20T02:44:55.000Z | Filters
=======
.. automodule:: topopt.filters
Base Filter
-----------
.. autoclass:: topopt.filters.Filter
:members:
:undoc-members:
:special-members: __init__
Density Based Filter
--------------------
.. autoclass:: topopt.filters.DensityBasedFilter
:members:
:undoc-members:
:special-members: __init__
Sensitivity Based Filter
------------------------
.. autoclass:: topopt.filters.SensitivityBasedFilter
:members:
:undoc-members:
:special-members: __init__
| 17.37931 | 52 | 0.625 |
6357fc40e29d1ef1e3eb19aaeac4060dfc625227 | 1,428 | rst | reStructuredText | docs/components/homeassistant.rst | Nekmo/then | 0bbc6b8c2b46170a6716f96e22382ae41d7e6ea3 | [
"MIT"
] | 25 | 2018-04-02T17:34:46.000Z | 2021-10-11T15:44:01.000Z | docs/components/homeassistant.rst | Nekmo/then | 0bbc6b8c2b46170a6716f96e22382ae41d7e6ea3 | [
"MIT"
] | 48 | 2018-08-01T09:02:13.000Z | 2021-03-12T00:41:06.000Z | docs/components/homeassistant.rst | Nekmo/then | 0bbc6b8c2b46170a6716f96e22382ae41d7e6ea3 | [
"MIT"
] | 1 | 2018-11-13T09:05:48.000Z | 2018-11-13T09:05:48.000Z | Home Assistant
##############
:Component Type: Event
:Requirements: None
Fire a Home Assistant event using the Home Assistant API.
Setup for users
===============
These instructions are for the users. For developer instructions, look below.
Prerequisites
-------------
* A Home Assistant installation ( https://www.home-assistant.io/getting-started/ ).
Config setup
------------
These are the **required** parameters:
* **url**: Home Assistant address (ip or domain) with or without protocol and port (by default ``http``
and ``8123``). Syntax: ``[<protocol>://]<server>[:<port>]``. For example: ``https://hassio.local:1234``.
These are the **optional** parameters:
* **access**: HomeAssistant password for API (``x-ha-access`` header).
* **timeout**: Connection timeout to send event.
Message setup
-------------
These are the **required** parameters:
* **event**: You can use any event name. Just use an event name in THEN and create an automation
  in Home Assistant that is triggered by your event.
More info about events in the homeassistant documentation:
* https://www.home-assistant.io/docs/configuration/events/
* https://www.home-assistant.io/docs/automation/trigger/
These are the **optional** parameters:
* **body**: Event data to send (JSON).
Instructions for developers
===========================
.. automodule:: then.components.homeassistant
:members: HomeAssistant,HomeAssistantMessage
:noindex:
| 23.8 | 106 | 0.682773 |
d8e04b6d9df23797b8466b00451e54d56ec57b09 | 135 | rst | reStructuredText | source/redmine/index.rst | pkimber/my-memory | 2ab4c924f1d2869e3c39de9c1af81094b368fb4a | [
"Apache-2.0"
] | null | null | null | source/redmine/index.rst | pkimber/my-memory | 2ab4c924f1d2869e3c39de9c1af81094b368fb4a | [
"Apache-2.0"
] | null | null | null | source/redmine/index.rst | pkimber/my-memory | 2ab4c924f1d2869e3c39de9c1af81094b368fb4a | [
"Apache-2.0"
] | null | null | null | Redmine
*******
Contents
.. toctree::
:maxdepth: 1
links
install
admin
attachments
issues
time
wiki-syntax
| 7.941176 | 15 | 0.585185 |
c662eb8dc6659c825f0901a70af67d540bb862ae | 18,191 | rst | reStructuredText | list/list07/sutra0439.rst | lxinning98/qldzjv | 91b0e16af932ffa74e330a50a23a4af7b5e3284a | [
"BSD-2-Clause"
] | 1 | 2018-07-17T08:37:38.000Z | 2018-07-17T08:37:38.000Z | list/list07/sutra0439.rst | lxinning98/qldzjv | 91b0e16af932ffa74e330a50a23a4af7b5e3284a | [
"BSD-2-Clause"
] | null | null | null | list/list07/sutra0439.rst | lxinning98/qldzjv | 91b0e16af932ffa74e330a50a23a4af7b5e3284a | [
"BSD-2-Clause"
] | null | null | null | 第0439部~大方广如来秘密藏经二卷
====================================
**失译人名附二秦录**
**大方广如来秘密藏经卷上**
如是我闻:一时,佛住王舍城祇阇崛山,与大比丘僧八千人俱。菩萨摩诃萨三万二千,众所知识,得陀罗尼,无碍辩才,得无生法忍,降伏魔怨,一切法中快得自在,善能种种神通变化,善知一切禅定三昧入出自在,为诸众生作不请友,永离盖缠,善能了知诸众生根,善知依止于了义法,净修六度到于彼岸,游戏五通教化众生心无厌倦,无量无边百千万亿那由他劫久修诸行,已曾供养无量诸佛,善为诸佛之所护持,护正法城不断佛种,常以圣德悦乐一切,转妙法轮,善能往来无边佛土,奉觐诸佛,大师子吼,治大法船,击大法鼓,吹大法蠡,善集一切福德庄严,相好严身,念慧坚进,善知惭愧,法喜自娱,具足成就大慈大悲,隐蔽日月所有光明,利衰毁誉称讥苦乐——是世八法所不能污,不高不下,善断爱恚,常与方便智慧相应,随众生根善开化之,救无救者,有所为作善观察之,身口意业无诸过患,善能集于定慧庄严,其心调柔犹如大龙,如大师子降伏外道,善能进趣大丈夫行离诸怖畏,善能决断诸众生疑,善能劝请无量诸佛转于法轮,善住大愿,永离二见,常勤度脱一切众生,善知垢净所起因缘,善修正念,不起声闻、缘觉之念,不舍一切智宝之心,其心清净犹如虚空,其身柔软,心无染污志意无坏,随所至处心无染著,妙音和软,有所言说显露易解,其言清白说无染法句,常观他德,勇猛无侣,志欲道场。其名曰:山刚菩萨、大山菩萨、持山岩菩萨、山积王菩萨、石山王菩萨、大进菩萨、信进菩萨、极进菩萨、喜手菩萨、宝印手菩萨、宝手菩萨、德手菩萨、灯手菩萨、常举手菩萨、常下手菩萨、常喜根菩萨、常思念菩萨、常勤菩萨、常观菩萨、法勇王菩萨、净宝光明威德王菩萨、摩尼光王菩萨、过诸盖菩萨、总持自在王菩萨、发心转法轮菩萨、法勇菩萨、净众生宝勇菩萨、道分味菩萨、捷辩菩萨、无碍辩菩萨、不动足进菩萨、金刚足进菩萨、金刚志菩萨、虚空藏菩萨、相好积严菩萨、坏魔网菩萨、胜志菩萨、导师菩萨、喜见菩萨,贤护等十六大士,弥勒等贤劫菩萨。兜率陀天曼陀罗华香等而为上首,他化自在天王等三万二千,如是天子,及余趣向于大乘者,三千大千世界之中释、梵、护世,欲界、色界、净居诸天,一切来集,恭敬供养礼拜如来。
尔时,世尊为于无量百千大众恭敬围绕而演说法。是时,东方去此佛土七十二亿刹,彼有佛土,名曰常出大法之音。其国有佛,号曰宝杖如来、应供、正遍知觉,今者现在。如是常出大法音国,一切江河池泉诸水,一切树林,一切众华,一切诸叶,一切华果,一切台观,常出法宝无上法音。彼土众生,常闻如是胜妙法音。是宝杖佛常出大法音国,有菩萨名无量志庄严王。是菩萨观宝杖佛已,犹如壮士屈伸臂顷,没是常出大法音国;一念之顷,而来至此娑婆世界。
时,无量志庄严王菩萨,化作八万四千宝台,妙宝所成,四方四柱,纵广正等,庄严极妙。一一宝台化作八万四千宝树,华果茂盛。一一树下皆悉化作宝师子座,众宝厕填,皆悉敷置百千妙衣。是诸座上皆见佛坐,形色相貌如释迦牟尼。是无量志庄严王菩萨现是化已,化虚空中化作宝盖,纵广正等百千由旬,垂悬缯彩,铃网庄饰。风吹铃网,出柔和微妙可爱软音,其音遍告三千大千佛之世界。时此三千大千世界,平坦如掌,生宝莲华供养如来。时无量志庄严王菩萨,以八万四千宝台而自围绕,来诣佛所。
是时,大众见是化已,得未曾有,而作是言:“如今所见此大士来庄严事相,必说大法!”及此三千大千世界诸庄严事,又上空中垂悬宝盖于如来上,一切天宫悉皆隐蔽。
是时,大德摩诃迦葉,承佛神力,从座而起,整衣服,偏袒右肩,右膝著地,向佛合掌,而说偈言:
“无垢净光从空出, 隐蔽释梵诸光明,
及蔽日月珠火光, 唯愿人尊说此相。
此空中现妙宝盖, 遍覆百千由旬地,
幢幡铃网以庄严, 世尊今将雨法雨。
铃网所出妙声音, 其音遍告此佛界,
有闻音者烦恼息, 为何利益说此事?
三千世界平等掌, 百千莲华从地出,
华香适意悦身心, 是何威德之所为?
东方遍放金色光, 八万四千妙宝台,
台内宝树师子座, 见如导师释师子。
导师此是何利事? 见此事者何增益?
此是何种欲佛知, 现此无量诸神变?”
尔时,佛告摩诃迦葉:“东方去此七十二亿佛土,有国名常出大法音。彼中有佛号曰宝杖,今者现在。彼有菩萨,名无量志庄严王,来至此土,见我礼拜咨受听法。为诸菩萨生大法欲,生大法力,集大法智,欲显常出大法音国所有功德,及宝杖佛所有功德。以此缘故,是无量志庄严王菩萨,而来至此娑婆世界,一日一夜所利众生,多于汝等满此三千大千世界诸大声闻法利众生。假令汝等数如稻麻竹苇甘蔗丛林,寿命一劫所利众生,犹尚不等!”
大德迦葉白言:“世尊,阎浮提人,若得闻是善丈夫名,尚得大利,况有信心复闻说法!”
时,无量志庄严王菩萨及诸宝台,住如来前,顶礼佛足。当礼佛时,令是三千大千世界六种震动,百千伎乐不鼓自鸣。一切大众礼如来足。
尔时,无量志庄严王菩萨绕佛三匝,及与八万四千宝台亦绕三匝。绕三匝已,向佛合掌,以偈赞佛:
“善能柔软微妙语, 无错无杂净无垢,
善名威德慧中胜, 我今稽首最胜仙。
多百千亿功德满, 施安隐乐灭百苦,
仁大悲喜等三界, 而演说法除尘垢。
十方诸佛叹仁德, 善逝恶时得菩提,
度恶众生无疲倦, 度一众生尚为难。
一切诸佛悉平等, 智慧通等号人尊,
成佛无等白净法, 示现卑劣调众生。
尊若悉示佛境界, 一切众生心迷乱,
大悲为利是等故, 修彼所行演说法。
人尊智胜众所乐, 常先和颜柔软语,
算数人天德无等, 是故欢喜顶礼尊。
一切智等诸众生, 尽诸法降降外道,
一切智见伏魔怨, 稽首百力降诸力。
常乐真实诚谛语, 善知如说如所行,
苦乐不动如山王, 我今稽首施世乐。”
尔时,无量志庄严王菩萨,偈赞佛已,而白佛言:“世尊,宝杖如来问讯世尊:少病少恼,起居轻利,安乐行不?世尊,我今欲少请问如来、应供、正遍觉。若佛听者,乃敢咨启。”
佛告无量志庄严王菩萨:“善男子,如来常听,随所有疑,恣汝所问。吾当随汝所问演说,悦可汝心。”
“如是,世尊,愿乐欲闻。”时,无量志庄严王菩萨白言:“世尊,我从先佛如来、应供、正遍觉闻,有法名如来秘密藏。若有菩萨住是秘藏,得无尽法,得无尽辩,见佛无尽,善能获得无尽神通,为诸众生作实依止。善哉!世尊,愿为演说如来秘密藏法。”
尔时,佛告无量志庄严王菩萨:“善哉!善哉!善男子,乃能问佛如是之法。善男子,汝已曾于恒河沙佛所,植诸善根,咨受请问。善男子,汝今谛听!善思念之,吾当少说如来密藏法。”
无量志庄严王菩萨,即白佛言:“如是,世尊,受教而听。”
佛言:“善男子,如来密藏法,谓一切智心。发是心已,坚固守护不退不舍,无有娆乱善好忆念,炽然劝导显示教诲,善根先首喜乐守护,常恒坚造应作之业:为是布施,为是持戒,为是忍辱,为是精进,为是禅定,为是方便。是心为柱,不怯不弱,不羸不坏,无有懒惰,不背不舍,顺向是心而觉了之。善业为首,质直无曲,正住端直,无幻无伪,作已无疑,未作者作,如所应作勤修行之,舍不正行,勤修正行。善男子,是名如来秘密藏法所入法门,所谓坚固一切智心,好坚守护,不弃舍之。善男子,何等一切智心坚固?善男子,一切智心坚固有四。何等四?不念余乘,不礼余天,不发余心,志意无转。是为四。”而说颂曰:
“不生念余乘, 礼佛不礼天,
不生余欲心, 不礼外凡夫。
修行是法时, 一切智心坚,
非魔及外道, 得便如毛发。
“善男子,复有四法,护一切智心。何等四?不为色醉,及财封醉,非眷属醉,及自在醉。是为四。”而说颂曰:
“非色财封醉, 眷属及自在,
色财封自在, 眷属不放逸。
观诸有为法, 皆悉是无常,
不放逸离慢, 守护菩提心。
斯行法功德, 趣菩提不退!
“善男子,复有四法,不退菩提心。何等四?集诸波罗蜜,亲近实菩萨,修集大悲心,以四摄法摄诸众生。是为四。”而说颂曰:
“常修六度无满足, 生闻闻已心柔软,
生于大欲离恶友, 亲近善友随所欲。
常修胜道近向者, 常修悲心住四摄,
常好坚住菩提心, 佛功德聚不难得。
“善男子,菩萨具足四法,不舍一切智心。何等四?信佛功德,修集佛智,见佛神通,不断佛种。是名为四。”而说颂曰:
“信解佛德已, 勤修集佛智,
见佛神通已, 勤守护佛种。
修行如是法, 不舍菩提心,
随所见诸佛, 倍生精进力。
“善男子,菩萨具足四法,终不娆乱菩提之心。何等四?给侍诸佛面前,从于如来闻法,常叹佛德,依止寂静缘念于佛。是名为四。”而说颂曰:
“给侍于如来, 好尊重恭敬,
若有所闻法, 闻已如说行。
常赞叹如来, 信敬爱乐之,
面闻胜法已, 智者依于义。
常赞叹功德, 调御世所有,
彼常勤依止, 正念于诸佛。
数数赞佛德, 常勤观己行,
常乐独静处, 思念于如来。
善摄如是法, 修行心不乱,
斯人有三昧, 不忘菩提心。
“善男子,菩萨具足四法,忆菩提心。何等四?我要当为一切众生良福田,我当说道,我当随趣如来所趣,我当实知诸众生行。是为四。”而说偈曰:
“我当为世胜福田, 趣邪道者示正路,
善逝所趣我当趣, 我当常知众生行。
菩萨大士念此德, 常念菩提胜道心,
彼当速疾成法王, 得神通智世无等。
“善男子,菩萨具足四法,念一切智心。何等四?专志念意是诸法本,当念法本,发一切智心是世宝塔,当念宝塔。是名为四。”而说颂曰:
“当专志念意, 极好专念意,
此是诸法本, 一切世间塔。
常念菩提心, 住意好善住,
此是十力本, 当为天世塔。
“善男子,菩萨具足四法,然一切智心。何等四?势力通集不失本行,满五根力,身心精进而无有我,勤行精进为利益他。是名为四。”而说颂曰:
“所演说四法, 炽然菩提心,
若炽然智慧, 得止息烦恼。
势力及通达, 如是勤精进,
安住服是已, 庄严无懈怠。
斯不失本誓, 善安住根力,
身心无疲倦, 勤进求实身。
住如是炽然, 增长菩提心,
彼智慧如是, 犹日月增长。
“善男子,菩萨有四法,劝菩提心。何等四?在大众中称扬赞叹菩提之心,令其开解菩提之心,善受教诲随顺师长发清净心,一切烦恼不得自在。是名为四。”而说颂曰:
“劝导唱道心, 先住此为本,
当有一切智, 是名知因者。
是一切智心, 清净常照明,
常住于是中, 世间所顶礼。
常出柔软语, 速疾受教诲,
咨问诸师长, 一切智胜心。
本性常清净, 守护菩提心,
白净离烦恼, 最胜不相违。
“善男子,菩萨有四法,显示菩提心。何等四?此是我住处,住是处已开示显说,知于是心有无量德,亦为他说如是之事。是名为四。”而说颂曰:
“善住于所住, 菩萨住是已,
称扬如是法, 菩提之妙心。
道心德无量, 发及称扬等,
称扬已便行, 称扬者所得。
“善男子,菩萨有四法,教修菩提心。何等四?谓不粗穬,言说柔软,无有粗涩,颜色和悦。是为四。”而说偈言:
“柔软解说义, 常无有粗穬,
和颜住是法, 彼教菩提心。
“善男子,菩萨有四法,菩提之心善根为首。何等四?成满相好开门大施,修净佛土行种种施,净于智慧常伏憍慢,满足智慧修集多闻。是名为四。”而说颂曰:
“常开门大施, 彼到相好岸,
善好种种施, 斯当有净土。
常无有憍慢, 恒求集佛智,
集闻无满足, 斯有利智慧。
如是胜妙相, 方便起道根,
是巧心所转, 集先诸功德。
“善男子,菩萨有四法常喜乐。何等四?喜乐见佛,见余菩萨胜精进者生于喜乐,作如是言:‘我当何时满足受记,受于无上菩提道记?我当何时诸众生前作诸佛事,于佛智慧生喜乐心?’是名为四。”而说颂曰:
“我当何时现见佛? 彼生喜乐欲见佛,
见余菩萨胜进者, 生喜欲修是精进。
我当何时满德聚, 得授胜记证菩提,
胜智某方作法王? 菩萨常生是喜欲。
我何时世作佛事, 得神通智到彼岸,
名闻普遍十方供? 菩萨常生此喜欲。
“善男子,菩萨有四法不喜。何等四?不喜称誉不实功德得诸利养,不喜得诸释、梵、护世、人天富乐,不喜一切声闻、缘觉,不喜一切外道所得胜供养事。是为四法不喜。”而说颂曰:
“不喜名称大利养, 于身命财亦如是,
不喜释梵及护世, 是诸邪有悉无常。
不喜声闻及缘觉, 唯除起何胜乘心,
不喜世禅及外道, 不喜身见及边见。
“善男子,菩萨有四法,护一切智心。何等四?如说如住,如作而说,于诸众生其心平等,生极欲心谓于善法。是名为四。”而说颂曰:
“如说如住如作说, 等心众生极欲道,
善住于是四胜法, 常护道心不忘失。
“善男子,菩萨有四法,是所应作。何等四?修集多闻,思念多闻,说于所闻,不退寂静。是名为四。”而说颂曰:
“斯常勤集于未闻, 是常修念思多闻,
是常勤说于多闻, 是常勤修为得禅。
“善男子,菩萨有二法,定一切智心而行布施。何等二?专意念定,舍不望果报。是为二。”而说颂曰:
“以欢喜心而施与, 施已生喜不望报,
一切悉舍向菩提, 定心施已证菩提。
“善男子,菩萨有二法,一切智为首,修持净戒。何等二?于诸众生无侵害心,毁戒者所生大悲心。是为二。”而说颂曰:
“不生毁害心, 等施上中下,
倍增生悲心, 于恶逆众生。
“善男子,菩萨有二法,一切智为首,修行忍辱。何等二?自舍己乐,施与他乐。是为二。”而说颂曰:
“不求于自乐, 常为利乐他,
斯有如是忍, 佛菩提为道。
“善男子,菩萨有二法,一切智为首,修行精进。何等二?菩提心为首,不舍诸众生。是为二。”而说颂曰:
“行一切白净, 上道心为首,
不见我众生, 精进无毁减。
“善男子,菩萨成就二法,一切智为首,修行禅定。何等二?方便入禅,本愿力出。是为二。”而说颂曰:
“勇健者常起, 智者行禅定,
降伏诸结使, 恒常欲得禅。
本愿力持出, 当为世导师,
斯有如是德, 获得于禅定。
“善男子,菩萨成就二法,一切智为首,有于智慧。何等二?自离诸见,为断一切众生见故修行智慧。是为二。”而说颂曰:
“彼离于诸见, 修利为众生,
有胜智现前, 智安隐行道。
“善男子,菩萨成就四法,有于方便。何等四?慈愍众生而为作救,大悲真实无有疲倦,喜乐于法生欢喜故,舍离烦恼无有怯弱。是名为四。”而说颂曰:
“修慈无嗔恚, 起悲无疲倦,
以法生欢喜, 舍烦恼无难。
“善男子,菩萨有四法无厌。何等四?多闻无厌,集德无满,阿练儿处无满,回向无满足。是名为四。”而说颂曰:
“求闻无满集福尔, 阿练儿处无满足,
福德回向无满足, 菩萨如是四无厌。
“善男子,菩萨有四法无足。何等四?是菩萨念过去佛作如是念:‘是诸佛等皆悉修集最胜菩提,我今云何而不修集?’;念未来佛,‘我亦入在是等数中’;念现在佛,念是佛时而作是念:‘此诸佛等现悉了知一切诸法。’是诸念中无有怯弱。是名为四。”而说颂曰:
“忆念过去佛, 无怯心增长,
彼佛得胜道, 我云何不得?
念未来善逝, 我在是数中,
无怯倍精进, 我定在是数。
念现在导师, 本行菩萨时,
我当除诸结, 证寂灭菩提。
解了一切法, 所住如所欲,
终不生怯心, 倍生好胜进。
“善男子,菩萨有四法,不退大乘。何等四?其心如地,其心如水,其心如火,其心如风。是名为四。”而说颂曰:
“其心如地水, 心亦如风火,
作不作同等, 不得道不退。
“善男子,菩萨有四法,解知无我。何等四?而是菩萨作如是念:‘诸众生界,我当悉知是等心行。诸众生界,我当悉知是等诸根而为说法。诸众生界,我当除断一切烦恼而为说法。无量佛智我等觉了,实非我身能觉此法,亦非我心。我诸善根能觉此法。’无有我者名为菩萨。是名为四。”而说颂曰:
“众生界诸心, 所行叵思议,
烦恼妄分别, 妄想生是非。
佛智亦如是, 无量叵思议,
非我之所能, 解了于佛智。
诸结使相违, 无色不可见,
我应悉除断, 显示解脱道。
“善男子,菩萨有四法,无有怯弱。何等四?愿诸善根,修方便慧,修信进念力,信无上道。是名为四。”而说颂曰:
“善喜悦充润, 慧方便众香,
信精进念力, 斯有解脱道。
如是四慧法, 持法无有厌,
为厌倦者依, 亦为世作救。
**大方广如来秘密藏经卷下**
“善男子,菩萨有四障法,应当觉知。何等四?毁谤正法,秘吝惜法,怀增上慢,修无色定。是名为四。”而说颂曰:
“菩提心有四, 说示名障碍,
菩萨应觉知, 应数数远离。
毁诽于正法, 多闻怀吝惜,
增上慢贡高, 不善起禅定。
是故护正法, 闻已广流布,
舍慢无贡高, 远离不禅定。
“善男子,菩萨有四法,所造速疾。何等四?所作以智,不以憍慢;所有善根回向菩提,不趣下乘;一切诸趣不生染著,若生染著,一向专为化于众生;昼夜三时常修三分,灭过恶业,未来不造。是名为四。”而说颂曰:
“所造以智不以慢, 回善上道非下众,
慧者不信于诸有, 发心为利诸众生。
昼日三时夜亦尔, 三分悔过灭先恶,
不造众恶集诸善, 慧者如是集善业。
“善男子,菩萨有四法极好。何等四?不自称举,不轻于他,远离诸恶,舍除诸慢。是名为四。”而说颂曰:
“不自称举不轻他, 所造诸恶悔不作,
不生憍慢及慢慢, 其心端直修善行。
“善男子,菩萨有二法,端直速疾。何等二?若有所问,如实而答;先所见事,无所覆藏。是为二。”而说颂曰:
“如问而演说, 不藏先所见,
宁舍于身命, 终不说妄语。
正直于是法, 是为贤善根,
彼得于质直, 疾觉胜菩提。
“善男子,菩萨有二法,无有谄伪。何等二?虽多获利,不欲叹德;不得利养,不自称举。是为二。”而说颂曰:
“虽多获利养, 不叹示己德,
大智所不欲, 是不谄者得。
设不得利养, 此是我本业,
不欲他有过, 勿令彼业熟。
“善男子,菩萨有二法,不望他报。何等二?我应当利一切众生,非诸众生而利于我;我当觉知而为菩提。是为二。”而说颂曰:
“我应利众生, 我荷担彼等,
我求无为道, 不观望他报。
我不求有为, 我求无为道,
我摄护世间, 不望报得道。
“善男子,菩萨有二法,作于不作。何等二?不知恩者而常供给,于知恩者作于重任。是为二。”而说颂曰:
“不知恩众生, 于彼不望报,
诸阴界入等, 皆为作菩提。
“善男子,菩萨有二法,是所应处。何等二?常值诸佛,亦常值遇菩萨乘者。是为二。”而说颂曰:
“二种所应处, 是处增名称,
得值诸如来, 菩萨所识知。
“善男子,菩萨有二法所不应修。何等二?不与愿行声闻乘者而共同止,不惊畏诸有独处宴默。是为二。”而说颂曰:
“不与修行者, 而共同止住,
不惊畏诸趣, 依止宴寂处。
“善男子,是名初入如来密藏根本句也。菩萨若入是初根本句,是菩萨能成就如来秘密藏法。”
世尊说入如来密藏初句法时,六万众生及天与人发于无上正真道心,十千菩萨得无生法忍,五百比丘不受诸法,永尽诸漏,心得解脱。时此三千大千世界六种震动,大光普照,人天伎乐不鼓自鸣。人、天、阿修罗等,同声三唱作如是言:“其有众生得闻于是如来密藏法,快得善利!若有书写、受持读诵、如说修行,是等众生皆当不失如是如来秘密藏法。”
尔时,无量志庄严王菩萨,闻是如来密藏法已,即作是念:“我今当以何等供具,供养如来、应供、正遍觉?”复作是念:“外物易舍,内事难舍,我今当以自身奉供如来世尊。”即升虚空,而说偈言:
“我今奉独觉, 以自身供养,
以此无上舍, 愿令如导师。
财供二足尊, 此事不为难,
云何为希有? 所谓身供养!
我今供无等, 自身奉遍眼,
为世人天供, 如大智师子。”
尔时,无量志庄严王菩萨,即便放身投如来上。当于尔时,以佛神力,未曾有华异华异色,甚为鲜净,极妙端严,散如来上。是菩萨身又不坠地亦不现空。此诸华等至佛身上,即复还踊住虚空中,成大华盖覆四天下。是华盖中,垂悬华贯,出大光明。是光明中现妙莲华,是莲华上有菩萨坐,如无量志庄严王。是菩萨等从华台起,顶礼佛足,同声请言:“唯愿世尊,说如来秘密藏法,无令断绝,及护如来密藏眷属。”
尔时,大德摩诃迦葉,生希有心,叹未曾有,白言:“世尊,是无量志庄严王菩萨,以身庄严供养如来。以身供养于如来已,现是菩萨诸庄严事。世尊,愿令一切诸众生等得于如是庄严之身!愿使如来常寿住世!世尊,我等今者快得大利,乃得见是善大丈夫闻其说法。”
尔时,佛告摩诃迦葉:“汝今见是无量志庄严王菩萨不?”
“已见,世尊。”
“迦葉,是善男子,于恒河沙等佛所,恒得咨请如是如来秘密藏法。贤劫诸佛所,亦当请问如是如来秘密藏法。”
尔时,大德摩诃迦葉,复白佛言:“善哉!世尊,唯愿敷演,说是如来秘密藏法,如此菩萨所启请者。”
尔时,世尊告大迦葉:“汝今善听如来密藏少许法分。何以故?若于一劫演说此法,不可穷尽。”
迦葉白言:“如是,世尊。”尔时,迦葉及诸大众受教而听。
佛言:“迦葉,于意云何?汝谓我行菩萨道时,所舍手足、头目耳鼻、皮肉、骨髓、血及妻子,略说乃至一切财物,处处遍恼于菩萨者,是诸众生不堕地狱、畜生、饿鬼及诸恶趣。何以故?本菩萨时志意净故,及大誓愿净戒聚故,于诸众生大悲纯至及坚忍故,以大慈故,大功德法故,牢强精进定向大乘故,自心净故,大愿丰饶故,不嬉自乐故。其有众生触娆菩萨毁骂之者,菩萨德故不堕恶道。迦葉,我今引喻以明斯义。迦葉,犹如病人,良医授药,而是病人毁骂是药及与良医,先毁已后乃服此药。迦葉,汝意云何?药以骂故不为药耶?病不除耶?”
“不也,世尊,虽复毁骂,不失药势而能除病。”
“如是,迦葉。菩萨如彼药及良医,虽不恭敬种种触恼,然是菩萨纯净志意无有缺减。迦葉,如大宝珠,众德所成,其性纯净除诸瑕秽,若有人天毁骂是宝而不恭敬。迦葉,于意云何?是大宝珠畏毁骂故失宝力耶?”
“不也,世尊。”
佛言:“迦葉,是净宝珠犹彼菩萨志意清净,一切众生虽不恭敬,所有功德无有折减。迦葉,如大油灯,假令人天而毁骂之,以毁骂故便闇冥耶?”
“不也,世尊。”
佛言:“迦葉,菩萨志意纯净如是,虽复触恼不失其性。迦葉,以是事故,当知众生虽有触娆于菩萨者,不堕恶道。何以故?由是菩萨本愿净故所愿皆成。”
尔时,大德摩诃迦葉白言:“世尊,如我解佛所说义趣,若于如来起不善业,是众生等亦复不畏堕于恶道。”
佛言:“如是,迦葉,若有众生于大悲如来,生信敬心解入进趣,若佛现在、若灭度后,若有奉施如来及塔,若幢幡盖、华鬘涂香及与末香,若宝、若衣及诸饮食,随于种种所有诸物,若取、若食、若自取、若教取。迦葉,我说是人无有所犯。迦葉,贫为最苦,不恭敬故,作劫夺故,无畏惧故,不信敬故,不解业故,不虑报故,以贪求故,难调伏故,贪嗔痴故,无惭愧故,凶横恶故。不思如来有大慈悲,不信如来多利众生,取如来塔物乃至一线,若自取、若使人取,我说是人不名少犯,我不说彼不堕恶道。迦葉,若有众生于如来物及佛塔物,若自取、若教人取,如来今者悉知是人、悉见是人当堕恶趣。又以此缘当得断结。何以故?是人心行为佛护故。迦葉,若于如来、若如来塔,生心缘念,乃至起于少许悔心。迦葉,是众生心自当改悔,以缘如来生悔心故,背弃生死一切之罪,结使微缓。迦葉,假有人天坠堕于地,堕大地已还依大地而得起住。如是,迦葉,是众生等于如来所,生不善故堕在恶道,堕恶道已还缘如来速得出离。云何名为缘于如来?于如来所生殷重心。”
尔时,大德迦葉白言:“世尊,是人以是恶贼之心,若能生心缘念如来,尚得大利,况净心者!”
佛言:“迦葉,如汝所言,若有众生起念如来、思忆如来、观缘如来,是等一切悉皆当得涅槃果证。”
大德迦葉白言:“世尊,如我解知佛所说义,宁于如来起不善业,非于外道邪见者所施作供养。何以故?若如来所起不善业当有悔心,究竟必得至于涅槃。随外道见,当堕地狱、饿鬼、畜生。”
佛言:“迦葉,如汝所言。迦葉,设有人天骂赤栴檀,以手打捶速撩弃地。迦葉,于意云何?如是人者有何等香?”
迦葉白言:“而是人者有栴檀香。”
“如是,迦葉。若有众生眼见耳闻及口宣说于如来者,当知是人有解脱香。迦葉,有人执把于粪污已,以诸伎乐一切众华而供养之,如是人者有何等香?”
迦葉白言:“世尊,是人唯有粪秽臭恶。”
“如是,迦葉。其有亲近恭敬供养诸外道者,当知是人亦复如是,有诸见畏、地狱畜生饿鬼等畏。迦葉,若善男子、善女人,信于如来有大慈悲,殷重敬信,除慢不憍,无有贪嗔及与愚痴,意志决定解知业报,质直无谄无有幻伪,于如来所得净信心,诸根无贪无有谄曲,志意不坏净信成就,信佛大悲多利众生,信佛本行,信于如来不舍一切诸众生等,有如是心,有如是意。设乏于食、病药所须,未得道果,未入正位;若得所须,能得道果,入于正位;若其不得,饥渴羸劣,不能修善,不得道果。是人若取如来佛物、衣服、饮食、病药所须,自服食之。迦葉,我不说是有恶道果。迦葉,是名如来秘密藏法,应当密持,善好守护,不应在彼见著者前开示演说,勿令是人重增所见。
“迦葉,云何为解?谓解如来说一切法。云何为缚?迦葉,所言缚者,所谓贪著。云何为解?谓不贪著,不分别二。迦葉,我今不说是无著者名之为犯。何以故?迦葉,羸劣烦恼从虚妄生。迦葉,若其不实,不以生故名之为实。迦葉,我今引喻为示不实妄想事故。迦葉,犹如人天持芥子火吹令增长,渐烧诸物成大火聚。如是,迦葉,愚小凡夫起少不正思惟妄念,坚著诸见随所妄想,随是诸处增长结使。迦葉,若有火聚如须弥山无有所依。迦葉,于意云何?而是火者,为当增长?为当渐灭?”
迦葉白言:“是火当灭,更不增长。”
佛言:“迦葉,不实妄想诸烦恼等,若更不起,若更不著,更不妄想,更不嬉乐,更不分别,此当渐灭而不增长。迦葉,以是事故,应当解知羸劣不实妄想烦恼是不真实。迦葉,犹如有人至毒家舍,竟不服毒,自生惊怖受大苦痛,发声大呼:‘我今遇毒!我今遇毒!’有善良医持不实药,令是病人除不实病,得离众苦。迦葉,于意云何?若是良医持于实药与是人者,是人活不?”
“不也,世尊。是人实不服食于毒,自生毒想,须不实药以疗治之。”
佛言:“如是,迦葉。诸小凡夫为于不实烦恼所恼,是故如来说不实法。”
尔时,迦葉白言:“世尊,如来说法不真实耶?”
佛言:“迦葉,汝所解说,为是真实?为不真实?”
迦葉白言:“我所解说无有真实。何以故?世尊,所有贪欲以不净对,嗔恚、慈对,痴、因缘对。世尊,若不净是实,则不能除不实贪欲,亦非贪欲生不净观。若愚痴是实,起愚痴已非因缘对,亦非因缘能除愚痴。是故,世尊,一切结使及断结法,二俱不实,无物无定无有成就。是故不实诸烦恼等,习近不实,便得除去。世尊,结使无去。何以故?若有除去则为有去,若已有去则便有来。是故,世尊,一切结使无去无来。是故知诸一切有为无来无去,名离烦恼。”
佛言:“迦葉,此如来密藏,说一切法本性清净。”
尔时,大德摩诃迦葉白言:“世尊,是十恶道如佛所说,其性无垢本性净耶?”
佛言:“如是,如是。迦葉,何以故?无有自在而犯于杀,无可亲信而犯于盗,非无主无护而犯邪淫,非为护他而犯妄语,非为调伏而犯恶口,非为破坏外道邪增而犯两舌,无随应器而犯绮语,无粗恶教而犯嗔恚,无有希望增上善根名之为贪,无有将护自在者意少不正言而犯邪见。迦葉,是十恶道若不坚著,我不说彼名之有过。迦葉,是十恶道若不坚著,名为不犯。如是,迦葉,一切烦恼若不坚著,我说无犯。迦葉,诸不著者,名曰离见。”
迦葉白言:“世尊,十恶业道何者最重?”
佛言:“迦葉,是十恶业道,杀及邪见,名为最重。迦葉,随在在处诸恶不善,若不坚住,若不坚执,若不坚著,一切我说名为不犯。迦葉,若少不善,若其坚住、坚执、坚著,一切我说名之为犯。迦葉,五无间罪,若不坚住、坚执、坚著生于见者,我不说彼名曰为犯,况复余小不善业道?迦葉,我不以不善法而得菩提,亦不以善法而得菩提。迦葉,若以不善得于菩提,诸小凡夫亦得菩提。若以善法得菩提者,一切被烧草木丛林应还生长。迦葉,我今问汝,如来云何得于菩提?”
迦葉白言:“佛是法本,世尊是眼,世尊是依,如世尊说当共奉行。”
佛言:“迦葉,解知烦恼从因缘生,名得菩提。迦葉,云何为解知从因缘所生烦恼?解知是无自性起法,是无生法,如是解知名得菩提。迦葉,但假名字名得菩提,而是菩提不以文字言说而得。若无文字,无言无说,无得菩提,是第一义。迦葉,如汝所问,十恶业道何者为重?迦葉,如人有父得缘觉道,子断父命,名杀中重。夺三宝物,名盗中重。若复有人,其母出家得罗汉道,共为不净,是淫中重。若以不实谤毁如来,是妄语中重。若两舌语坏贤圣僧,是两舌中重。若骂圣人,是恶口中重。言说坏乱求法之人,是绮语中重。若五逆初业,是嗔恚中重。若欲劫夺持净戒人物,是贪中重。邪见中重,谓之边见。迦葉,此十恶道,是为最重。迦葉,如来知是十恶业是为最重。
“迦葉,若有一人具是十恶。迦葉,是恶众生,若解知如来说因缘法,是中无有众生、寿命,无人、无丈夫、无我、无年少,无作业者,无受者、起者,无知者、见者,无福伽罗,无生无灭无行,是为尽法,无染无著,无善不善,本性清净,一切诸法本性常净解知信入。迦葉,我不说彼趣向恶道,无恶道果。何以故?迦葉,法无积聚,法无集无恼。迦葉,一切诸法生灭不住,因缘和合而得生起,起已还灭。迦葉,若心生灭,一切结使亦生已灭。若如是解,无犯犯处。迦葉,若犯有住,无有是处。迦葉,如百千岁极大闇室不燃灯明,是极闇室无门窗牖,乃至无有如针鼻孔,日月珠火所有光明无能得入。迦葉,若闇室中燃火灯明,是闇颇能作如是说:‘我百千岁住,今不应去’?”
迦葉白言:“不也,世尊。当燃灯时,是闇已去。”
佛言:“如是,迦葉,百千万劫所造业障,信如来语解知缘法,修观察行,修于定慧,观无我、无命、无人、无丈夫等,我说是人名为无犯、无处、无集。迦葉,以是事故,当知羸劣诸烦恼等,智慧灯照,势不能住。迦葉,是说如来密藏住处无上,大师子吼转净法轮,天人魔梵所不能转。迦葉,若有众生信是如来秘密藏法,如是受持,如是观察,彼当如是大师子吼。”
是时,大德阿难白言:“世尊,是无量志庄严王菩萨,自以其身供养如来,当以何身觉菩提道?”
时,华台中诸菩萨等,问阿难言:“于意云何?可以身觉于菩提耶?阿难,勿作斯观——当以身心觉于菩提!”
阿难报言:“诸善丈夫,若非身心觉于菩提,当用何等而觉菩提?”
诸菩萨言:“大德阿难,身之实性是菩提实性,菩提实性是心实性,心之实性即是一切法之实性;觉是一切诸实性故,名觉菩提。”
时,诸华台所有菩萨,顶礼佛足,说如是言:“世尊,我等若至此大地时,是无量志庄严王菩萨,乃当得成阿耨多罗三藐三菩提。”
是时,阿难白言:“世尊,是诸华台众菩萨等,几时当至于此大地?”
佛告阿难:“是诸菩萨,于下方界分恒河沙等诸佛如来所,咨受请问于是如来秘密藏法,闻已解义。”
阿难白言:“世尊,是无量志庄严王菩萨,几时当成阿耨多罗三藐三菩提?”
佛告阿难:“是贤劫中,千佛已出、当出。阿难,最后如来号名卢志。阿难,卢志如来、应、正遍觉,诸声闻众多先诸佛所有声闻僧。阿难,是卢志如来乃当授是无量志庄严王菩萨无上道记云:‘无量志庄严王菩萨,过九十八劫,当得成佛,号庄严王,亦于是界得无上道。’是庄严王如来,坐此地时,是华台中诸菩萨等尔乃至地,复当闻此如来密藏法。阿难,尔时是庄严王如来世界,名作无量功德庄严。阿难,一切欲界诸天宫殿等,彼庄严王佛国土中,一宝台耳!是娑婆界,尔时当名妙好色土。阿难,庄严王如来寿命百劫。佛灭度后,正法住世满足十劫,纯菩萨僧。”
说是庄严王如来记已,佛土华盖便没不现,无量志庄严王菩萨现佛前住。
是时,阿难白言:“世尊,护持此法令得久住,于阎浮提增广流布,令善丈夫能持如来密藏法者,成满功德,手得是法。”
尔时,世尊告阿难言:“假令四大变易其性,终不令是善丈夫等,不闻是法而取命终。阿难,若有书写、受持读诵,当知是人即是如来所持。阿难,若有人能右手执持恒沙佛界满中七宝,左手复持恒沙世界满中七宝,若昼三时、夜三时持用布施,是人不懈经恒沙劫。阿难,是布施功德,若有书写、受持读诵是经典者,所得功德复过于是!是故,阿难,汝今受持读诵是经,令诸法器普得闻知,是诸人等则为受持如来秘密藏法。”
佛说此经已,无量志庄严王菩萨,大德阿难,大德迦葉,一切大众,天、人、阿修罗等,闻佛所说,皆大欢喜。
| 30.88455 | 879 | 0.566819 |
d0aac188a721506df59ae27532c601010a0cb529 | 2,643 | rst | reStructuredText | doc/source/Installation.rst | brittonsmith/yt_astro_analysis | f5d255a89dc1e882866cbfabcc627c3af3ee6d62 | [
"BSD-3-Clause-Clear"
] | null | null | null | doc/source/Installation.rst | brittonsmith/yt_astro_analysis | f5d255a89dc1e882866cbfabcc627c3af3ee6d62 | [
"BSD-3-Clause-Clear"
] | null | null | null | doc/source/Installation.rst | brittonsmith/yt_astro_analysis | f5d255a89dc1e882866cbfabcc627c3af3ee6d62 | [
"BSD-3-Clause-Clear"
] | null | null | null | .. _installation:
Installation
============
The most straightforward way to install ``yt_astro_analysis`` is to
first `install yt <https://github.com/yt-project/yt#installation>`__.
This will take care of all ``yt_astro_analysis`` dependencies. After
that, ``yt_astro_analysis`` can be installed with pip:
.. code-block:: bash
$ pip install yt_astro_analysis
If you use ``conda`` to manage packages, you can install ``yt_astro_analysis``
from conda-forge:
.. code-block:: bash
$ conda install -c conda-forge yt_astro_analysis
Installing from source
----------------------
To install from source, it is still recommended to first install ``yt``
in the manner described above. Then, clone the git repository and install
like this:
.. code-block:: bash
$ git clone https://github.com/yt-project/yt_astro_analysis
$ cd yt_astro_analysis
$ pip install -e .
.. _installation-rockstar:
Installing with Rockstar support
--------------------------------
.. note:: As of ``yt_astro_analysis`` version 1.1, ``yt_astro_analysis``
runs with the most recent version of ``rockstar-galaxies``. Older
versions of ``rockstar`` will not work.
Rockstar support requires ``yt_astro_analysis`` to be installed from source.
Before that, the ``rockstar-galaxies`` code must also be installed from source
and the installation path then provided to ``yt_astro_analysis``. Two
recommended repositories exist for installing ``rockstar-galaxies``,
`this one <https://bitbucket.org/pbehroozi/rockstar-galaxies/>`__, by the
original author, Peter Behroozi, and
`this one <https://bitbucket.org/jwise77/rockstar-galaxies>`__, maintained by
John Wise.
.. warning:: If using `Peter Behroozi's repository
<https://bitbucket.org/pbehroozi/rockstar-galaxies/>`__, the following
command must be issued after loading the resulting halo catalog in ``yt``:
.. code-block:: python
>>> ds = yt.load(...)
>>> ds.parameters['format_revision'] = 2
To install ``rockstar-galaxies``, do the following:
.. code-block:: bash
$ git clone https://bitbucket.org/jwise77/rockstar-galaxies
$ cd rockstar-galaxies
$ make lib
Then, go into the ``yt_astro_analysis`` source directory and add a file called
"rockstar.cfg" with the path the ``rockstar-galaxies`` repo you just cloned.
Then, install ``yt_astro_analysis``.
.. code-block:: bash
$ cd yt_astro_analysis
$ echo <path_to_rockstar> > rockstar.cfg
$ pip install -e .
Finally, you'll need to make sure that the location of ``librockstar-galaxies.so``
is in your ``LD_LIBRARY_PATH``.
.. code-block:: bash
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_rockstar>
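As an optional sanity check (this helper is not part of ``yt_astro_analysis``, and ``ctypes.util.find_library`` only approximates the dynamic loader's search), you can ask Python whether the shared library is visible:

```python
import ctypes.util

# Hypothetical helper: ask Python's dynamic-library lookup whether it can
# locate the rockstar-galaxies shared library on this system.
def rockstar_lib_found(name="rockstar-galaxies"):
    """Return True if lib<name>.so (or equivalent) can be located."""
    return ctypes.util.find_library(name) is not None

if __name__ == "__main__":
    if rockstar_lib_found():
        print("librockstar-galaxies.so found")
    else:
        print("librockstar-galaxies.so not found; check LD_LIBRARY_PATH")
```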
| 30.732558 | 82 | 0.72115 |
0ae96cbb87075afee3729d18fac9cbe2e43ce570 | 339 | rst | reStructuredText | docs/source/workflow-execution-config.rst | flyteorg/flytectl | 173676860ced0befd0793b28548e9f7fada41e65 | [
"Apache-2.0"
] | 19 | 2021-03-31T13:20:45.000Z | 2022-03-29T06:31:10.000Z | docs/source/workflow-execution-config.rst | flyteorg/flytectl | 173676860ced0befd0793b28548e9f7fada41e65 | [
"Apache-2.0"
] | 229 | 2021-01-31T08:27:49.000Z | 2022-03-31T23:25:56.000Z | docs/source/workflow-execution-config.rst | flyteorg/flytectl | 173676860ced0befd0793b28548e9f7fada41e65 | [
"Apache-2.0"
] | 31 | 2021-02-16T04:41:40.000Z | 2022-02-25T12:27:20.000Z | Workflow execution config
-------------------------
It specifies the actions to be performed on the resource 'workflow-execution-config'.
.. toctree::
:maxdepth: 1
:caption: Workflow execution config
gen/flytectl_get_workflow-execution-config
gen/flytectl_update_workflow-execution-config
gen/flytectl_delete_workflow-execution-config
| 28.25 | 86 | 0.772861 |
2830500937a0b1c0c1e688ddcfabfe1a5dd3b3fa | 384 | rst | reStructuredText | docs/twindb_backup.cache.rst | denssk/backup | 292d5f1b1a3765ce0ea8d3cab8bd1ae0c583f72e | [
"Apache-2.0"
] | 69 | 2016-06-29T16:13:55.000Z | 2022-03-21T06:38:37.000Z | docs/twindb_backup.cache.rst | denssk/backup | 292d5f1b1a3765ce0ea8d3cab8bd1ae0c583f72e | [
"Apache-2.0"
] | 237 | 2016-09-28T02:12:34.000Z | 2022-03-25T13:32:23.000Z | docs/twindb_backup.cache.rst | denssk/backup | 292d5f1b1a3765ce0ea8d3cab8bd1ae0c583f72e | [
"Apache-2.0"
] | 45 | 2017-01-04T21:20:27.000Z | 2021-12-29T10:42:22.000Z | twindb\_backup\.cache package
=============================
Submodules
----------
twindb\_backup\.cache\.cache module
-----------------------------------
.. automodule:: twindb_backup.cache.cache
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: twindb_backup.cache
:members:
:undoc-members:
:show-inheritance:
| 16.695652 | 41 | 0.539063 |
e73daee8df44b84d54a1e72cee91410d920d06f2 | 56 | rst | reStructuredText | docs/API/Canvas/Canvas.rst | scottwittenburg/vcs | 5b9f17fb78f7ab186fc0132ab81ada043a7ba348 | [
"BSD-3-Clause"
] | 11 | 2018-10-10T03:14:33.000Z | 2022-01-05T14:18:15.000Z | docs/API/Canvas/Canvas.rst | scottwittenburg/vcs | 5b9f17fb78f7ab186fc0132ab81ada043a7ba348 | [
"BSD-3-Clause"
] | 196 | 2018-03-21T19:44:56.000Z | 2021-12-21T21:56:24.000Z | docs/API/Canvas/Canvas.rst | scottwittenburg/vcs | 5b9f17fb78f7ab186fc0132ab81ada043a7ba348 | [
"BSD-3-Clause"
] | 5 | 2019-12-09T21:54:45.000Z | 2022-03-20T04:22:14.000Z | Canvas
------
.. automodule:: vcs.Canvas
:members:
| 9.333333 | 26 | 0.571429 |
6f4fa83303b5a797ecc162d28324bec33b8f5c91 | 355 | rst | reStructuredText | docs/example_pymc_ammonia.rst | glangsto/pyspeckit | 346b24fb828d1d33c7891cdde7609723e51af34c | [
"MIT"
] | 79 | 2015-03-03T15:06:20.000Z | 2022-03-27T21:29:47.000Z | docs/example_pymc_ammonia.rst | glangsto/pyspeckit | 346b24fb828d1d33c7891cdde7609723e51af34c | [
"MIT"
] | 240 | 2015-01-04T02:59:12.000Z | 2021-11-13T15:11:14.000Z | docs/example_pymc_ammonia.rst | glangsto/pyspeckit | 346b24fb828d1d33c7891cdde7609723e51af34c | [
"MIT"
] | 68 | 2015-03-02T12:23:12.000Z | 2022-02-28T10:26:36.000Z | .. include:: <isogrk3.txt>
Ammonia Monte Carlo examples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This example shows the same process as in `<example_pymc>`_, but for
the ammonia model. Here we emphasize the degeneracy between
the excitation temperature and the total column density.
.. literalinclude:: ../examples/example_pymc_ammonia.py
:language: python
| 25.357143 | 68 | 0.707042 |
91f16894d4c79617502ec905d2f6441dc127d8ec | 1,871 | rst | reStructuredText | source/advanced/how_to_use_codegen.rst | MegEngine/doc | 22947b04aed5f534f15988d531d494fedaa36a28 | [
"Apache-2.0"
] | 43 | 2020-03-24T11:29:34.000Z | 2021-09-09T11:43:04.000Z | source/advanced/how_to_use_codegen.rst | MegEngine/Doc | 22947b04aed5f534f15988d531d494fedaa36a28 | [
"Apache-2.0"
] | 4 | 2020-05-07T14:25:46.000Z | 2020-07-12T02:59:37.000Z | source/advanced/how_to_use_codegen.rst | MegEngine/Doc | 22947b04aed5f534f15988d531d494fedaa36a28 | [
"Apache-2.0"
] | 19 | 2020-03-25T05:38:12.000Z | 2021-03-30T15:39:44.000Z | .. _how_to_use_codegen:
How to Use Codegen in MegEngine
===================================================
Typically, a model contains not only compute-bound operations but also memory-bound operations (such as Elemwise). MegEngine has a built-in codegen optimization mechanism which, at runtime, fuses multiple operations in the model and generates code that runs on the target machine, reducing memory accesses and thereby speeding up execution.
Enabling codegen
---------------------------------------
Passing ``symbolic=True, opt_level=3`` to the :class:`~.megengine.jit.tracing.trace`
interface enables MegEngine's codegen optimization.
Specifying the codegen backend
---------------------------------------
MegEngine's codegen currently integrates three backends: NVRTC, HALIDE, and MLIR. NVRTC and HALIDE can only be used on GPUs, while MLIR supports both GPUs and CPUs. The backends use different code-generation strategies, so their runtime efficiency varies as well.
The codegen backend can be changed by setting the following environment variable. For example, to use the NVRTC backend:
.. code-block:: bash
export MGB_JIT_BACKEND="NVRTC"
In an NVIDIA GPU environment this variable may be set to ``NVRTC``, ``HALIDE``, or ``MLIR``, with ``HALIDE`` as the default. On CPUs, only the ``MLIR`` backend is currently supported.
(Using the ``MLIR`` backend requires building MegEngine separately.)
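The same setting can also be made from Python before ``megengine`` is imported, so that it takes effect. The helper below is only an illustration; ``select_codegen_backend`` is not a MegEngine API:

```python
import os

# Illustrative helper (not a MegEngine API): validate and set the codegen
# backend before MegEngine is imported, so the setting takes effect.
VALID_BACKENDS = {"NVRTC", "HALIDE", "MLIR"}

def select_codegen_backend(name="HALIDE"):
    if name not in VALID_BACKENDS:
        raise ValueError("unknown codegen backend: %s" % name)
    os.environ["MGB_JIT_BACKEND"] = name
    return os.environ["MGB_JIT_BACKEND"]

select_codegen_backend("NVRTC")
```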
Using the MLIR backend of codegen
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since MLIR is disabled by default, MegEngine must be built and installed from source (see the GitHub homepage for details), replacing the cmake step with the following command:
.. code-block:: bash
cmake .. -DMGE_WITH_JIT=ON -DMGE_WITH_JIT_MLIR=ON -DMGE_WITH_HALIDE=OFF
Then set the following environment variable:
.. code-block:: bash
export MGB_JIT_BACKEND="MLIR"
Code example
---------------------------------------
.. code-block:: python
:linenos:
from megengine.jit import trace
import megengine.autodiff as ad
import megengine.optimizer as optim
if __name__ == '__main__':
gm = ad.GradManager().attach(model.parameters())
opt = optim.SGD(model.parameters(), lr=0.0125, momentum=0.9, weight_decay=1e-4,)
        # convert to a static graph via trace
        # (assumes "import megengine.functional as F"; model, image and label are defined elsewhere)
@trace(symbolic=True, opt_level=3)
def train():
with gm:
logits = model(image)
loss = F.loss.cross_entropy(logits, label)
gm.backward(loss)
opt.step()
opt.clear_grad()
return loss
loss = train()
loss.numpy()
| 26.352113 | 135 | 0.592731 |
bfe1439612c43cc5b545a10cca9c8421808ce1cc | 473 | rst | reStructuredText | cmake/share/cmake-3.3/Help/variable/CPACK_PACKAGING_INSTALL_PREFIX.rst | htfy96/htscheme | b44c9f9672f69d9b3c2eb1c80969bcfcfec9990f | [
"MIT"
] | 5 | 2015-07-07T01:30:37.000Z | 2020-08-14T10:45:01.000Z | cmake/share/cmake-3.3/Help/variable/CPACK_PACKAGING_INSTALL_PREFIX.rst | htfy96/htscheme | b44c9f9672f69d9b3c2eb1c80969bcfcfec9990f | [
"MIT"
] | null | null | null | cmake/share/cmake-3.3/Help/variable/CPACK_PACKAGING_INSTALL_PREFIX.rst | htfy96/htscheme | b44c9f9672f69d9b3c2eb1c80969bcfcfec9990f | [
"MIT"
] | null | null | null | CPACK_PACKAGING_INSTALL_PREFIX
------------------------------
The prefix used in the built package.
Each CPack generator has a default value (like /usr). This default
value may be overridden from the CMakeLists.txt or the cpack command
line by setting an alternative value.
e.g. set(CPACK_PACKAGING_INSTALL_PREFIX "/opt")
This serves a different purpose than CMAKE_INSTALL_PREFIX, which is used
when installing from the build tree without building a package.
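A minimal, illustrative ``CMakeLists.txt`` fragment (the project name and prefix are placeholders):

```cmake
# Override the packaging prefix before including CPack:
set(CPACK_PACKAGE_NAME "mytool")           # placeholder name
set(CPACK_PACKAGING_INSTALL_PREFIX "/opt")
include(CPack)
```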
| 33.785714 | 70 | 0.735729 |
51aa6703cad230a69514a9b82933408d2ef88a3d | 424 | rst | reStructuredText | docs/usage.rst | exolever/lib-exo-messages | da0736fc3b945acf1b9479fb387b36c09ab9ccb2 | [
"MIT"
] | null | null | null | docs/usage.rst | exolever/lib-exo-messages | da0736fc3b945acf1b9479fb387b36c09ab9ccb2 | [
"MIT"
] | null | null | null | docs/usage.rst | exolever/lib-exo-messages | da0736fc3b945acf1b9479fb387b36c09ab9ccb2 | [
"MIT"
] | null | null | null | =====
Usage
=====
To use exo_messages in a project, add it to your `INSTALLED_APPS`:
.. code-block:: python
INSTALLED_APPS = (
...
'exo_messages.apps.ExoMessagesConfig',
...
)
Add exo_messages's URL patterns:
.. code-block:: python
    from django.conf.urls import include, url

    from exo_messages import urls as exo_messages_urls
urlpatterns = [
...
url(r'^', include(exo_messages_urls)),
...
]
| 15.703704 | 66 | 0.57783 |
4b599e69cfb5fc4c460784d243ee659648ef11a7 | 6,649 | rst | reStructuredText | docs/about.rst | guilhemmarchand/nmon-for-splunk | bbd84d7bd1397abb07ebb05ac3a777d7dae0006a | [
"Apache-2.0"
] | 23 | 2015-02-27T20:32:59.000Z | 2021-09-05T05:41:07.000Z | docs/about.rst | guilhemmarchand/nmon-for-splunk | bbd84d7bd1397abb07ebb05ac3a777d7dae0006a | [
"Apache-2.0"
] | 124 | 2015-07-29T19:15:46.000Z | 2021-08-28T15:34:37.000Z | docs/about.rst | guilhemmarchand/nmon-for-splunk | bbd84d7bd1397abb07ebb05ac3a777d7dae0006a | [
"Apache-2.0"
] | 13 | 2015-03-09T22:52:40.000Z | 2022-02-18T20:37:00.000Z |
#########################################
About Nmon Performance monitor for Splunk
#########################################
* Author: Guilhem Marchand
* First release was published in early 2014
* Purposes:
The Nmon Performance application for Splunk implements the excellent and powerful nmon binary known as Nigel's performance monitor.
Originally developed for IBM AIX performance monitoring and analysis, it is now an Open source project that made it available to many other systems.
It is fully available for any Linux flavor and, thanks to the excellent work of Guy Deffaux, it is also available for Solaris 10/11 systems using the sarmon project.
The Nmon Performance monitor application for Splunk will generate performance and inventory data for your servers, and provides a rich number of monitors and tools to manage your AIX / Linux / Solaris systems.
.. image:: img/Octamis_Logo_v3_no_bg.png
:alt: Octamis_Logo_v3_no_bg.png
:align: right
:target: http://www.octamis.com
**Nmon Performance is now associated with Octamis to provide professional solutions for your business, and professional support for the Nmon Performance solution.**
*For more information:* :ref:`octamis_support`
---------------
Splunk versions
---------------
It is recommended to use Splunk 6.5.x or superior to run the latest core application release. (in distributed deployments, only search heads may have this requirement)
The last release can be downloaded from Splunk base: https://splunkbase.splunk.com/app/1753
**Compatibility matrix for core application:**
* **Current major release Version 1.9.x:** Splunk 6.5.x or superior are officially supported
Splunk 6.4 will globally perform as expected, but there might be some unwanted behaviors such as css issues as this Splunk version is not supported anymore by the core application.
**Stopped versions for older Splunk releases:**
* Last version compatible with Splunk 6.4.x with release 1.7.9 (Splunk certified): https://github.com/guilhemmarchand/nmon-for-splunk/releases
* Last version compatible with Splunk 6.2.x with release 1.6.15 (Splunk certified): https://github.com/guilhemmarchand/nmon-for-splunk/releases
* Last version compatible with Splunk 6.1.x, with release 1.4.902 (not Splunk certified): https://github.com/guilhemmarchand/nmon-for-splunk/blob/last_release_splunk_61x
**Compatibility matrix for TA-nmon addon:**
Consult the TA-nmon documentation: http://ta-nmon.readthedocs.io
* Both add-ons are compatible with any Splunk version 6.x (full instance or Universal Forwarder)
The TA-nmon add-on is designed to be deployed on full Splunk instances or Universal Forwarders, **it is only compatible with Splunk 6.x.**
The PA-nmon_light add-on is a minimal add-on designed to be installed on indexers (clusters or standalone), this package contains the default "nmon" index definition and parsing configuration. It excludes any kind of binaries, inputs or scripts, and does not collect nmon data.
---------------------
Index time operations
---------------------
The application performs index time operations; the PA-nmon_light add-on must be installed on the indexers in order for the application to operate normally.
If any Heavy forwarders act as intermediate forwarders between indexers and Universal Forwarders, the TA-nmon add-on must be deployed on the intermediate forwarders to successfully achieve index time extractions.
--------------
Index creation
--------------
**The Nmon core application does not create any index at installation time.**
An index called "nmon" must be created manually by Splunk administrators to use the default TA-nmon indexing parameters. (this can be tuned)
However, deploying the PA-nmon_light add-on automatically defines the default "nmon" index. (pre-configured for clusters replication)
Note: The application supports any index starting with the "nmon*" name, however the default index for the TA-nmon inputs is set to "nmon" index.
In distributed deployments using clusters of indexers, the PA-nmon add-on will automatically create the "nmon" replicated index.
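For reference, a minimal ``indexes.conf`` stanza defining the "nmon" index could look like the following. The paths shown are the usual Splunk defaults and are only an illustration; adapt them to your deployment:

```ini
[nmon]
homePath   = $SPLUNK_DB/nmon/db
coldPath   = $SPLUNK_DB/nmon/colddb
thawedPath = $SPLUNK_DB/nmon/thaweddb
```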
----------------------------
Summarization implementation
----------------------------
**Accelerated data models:**
Nmon for Splunk App intensively uses data model acceleration in almost every user interfaces, reports and dashboards.
**Splunk certification requirements prohibit the default activation of data models acceleration.**
**Since version 1.9.12, none of the data models are accelerated by default; it is your responsibility to decide if you wish to do so. Below are the recommended acceleration parameters:**
- metrics related data models accelerated over a period of 1 year
- non metrics data models accelerated over the last 30 days (Nmon config, Nmon processing)
Splunk Accelerated data models provide a great and efficient user experience.
**Accelerated reports:**
The following report(s) use report acceleration feature:
- Volume of Data indexed Today, accelerated for last 7 days
- Number of notable events in Data Processing or Data Collect since last 24 Hours, accelerated for last 24 hours
Please review the :ref:`large_scale_deployment` documentation.
------------------------------
About Nmon Performance Monitor
------------------------------
Nmon Performance Monitor for Splunk is provided in Open Source, you are totally free to use it for personal or professional use without any limitation,
and you are free to modify sources or participate in the development if you wish.
**Feedback and rating the application will be greatly appreciated.**
* Join the Google group: https://groups.google.com/d/forum/nmon-splunk-app
* App's Github page: https://github.com/guilhemmarchand/nmon-for-splunk
* Videos: https://www.youtube.com/channel/UCGWHd40x0A7wjk8qskyHQcQ
* Gallery: https://flic.kr/s/aHskFZcQBn
--------------------------------------------
Open source and licensed materials reference
--------------------------------------------
- css materials from http://www.w3schools.com
- d3 from Michael Bostock: https://bl.ocks.org
- various extensions and components from the Splunk 6.x Dashboard Examples application: https://splunkbase.splunk.com/app/1603
- dark.css from: http://www.brainfold.net/2016/04/splunk-dashboards-looks-more-beautiful.html
- Take the tour component from https://github.com/ftoulouse/splunk-components-collection
- hover.css from http://ianlunn.github.io/Hover
- free-to-use icons from www.iconfinder.com
- Javascript tips (inputs highlighting) from https://splunkbase.splunk.com/app/3171 - https://blog.octoinsight.com/splunk-dashboards-highlighting-required-inputs
| 48.889706 | 277 | 0.750338 |
63b4a03b9cbe3e56dcd12923d1f0e848ea2db488 | 83 | rst | reStructuredText | help/variable/YCM_USE_3RDPARTY.rst | ExternalRepositories/ycm | fbf543c5d909529b93d822813660b32a30c6868f | [
"BSD-3-Clause"
] | 39 | 2015-01-30T03:47:57.000Z | 2022-02-04T22:32:38.000Z | help/variable/YCM_USE_3RDPARTY.rst | diegoferigo/ycm-cmake-modules | a73e9167c4f16b1b842c780cb453290c25f2f1c5 | [
"BSD-3-Clause"
] | 299 | 2015-01-07T17:36:14.000Z | 2022-03-31T09:45:47.000Z | help/variable/YCM_USE_3RDPARTY.rst | diegoferigo/ycm-cmake-modules | a73e9167c4f16b1b842c780cb453290c25f2f1c5 | [
"BSD-3-Clause"
] | 24 | 2015-04-22T15:20:06.000Z | 2021-06-02T15:52:02.000Z | YCM_USE_3RDPARTY
----------------
Use modules downloaded from 3rd party projects.
| 16.6 | 47 | 0.674699 |
db24f3ac732ea13db1d8a4a18e6a556ae836531c | 5,343 | rst | reStructuredText | doc/install/apacheauth.rst | wiraqutra/photrackjp | e120cba2a5d5d30f99ad084c6521e61f09694ee6 | [
"BSD-3-Clause"
] | null | null | null | doc/install/apacheauth.rst | wiraqutra/photrackjp | e120cba2a5d5d30f99ad084c6521e61f09694ee6 | [
"BSD-3-Clause"
] | null | null | null | doc/install/apacheauth.rst | wiraqutra/photrackjp | e120cba2a5d5d30f99ad084c6521e61f09694ee6 | [
"BSD-3-Clause"
] | null | null | null | .. index::
pair: Apache; authentication
.. highlight:: apache
.. _install-apacheauth:
========================
Authentication on Apache
========================
.. index::
triple: Apache; basic; authentication
.. _install-apacheauth-basic:
Basic Authentication
====================
Create the htpasswd file using the program of the same name:
.. code-block:: bash
htpasswd -c trac.htpasswd $USERNAME
Then add the following to your VirtualHost::
<Location /trac/login>
AuthType Basic
AuthName "Trac Login"
AuthUserFile /path/to/trac.htpasswd
Require valid-user
</Location>
The ``AuthName`` can be set to whatever you like, and will be shown to the user
in the authentication dialog in their browser.
In a multiple environment setup, you can use the following to use the same
authentication on all environments::
<LocationMatch /trac/[^/]+/login>
AuthType Basic
AuthName "Trac Login"
AuthUserFile /path/to/htpasswd
Require valid-user
</LocationMatch>
.. seealso::
`Authentication, Authorization and Access Control <http://httpd.apache.org/docs/2.2/howto/auth.html>`_
Apache guide to setting up authentication.
`mod_auth_basic <http://httpd.apache.org/docs/2.2/mod/mod_auth_basic.html>`_
Documentation for mod_auth_basic.
.. index::
triple: Apache; digest; authentication
Digest Authentication
=====================
Create the htdigest file as with basic:
.. code-block:: bash
htdigest -c trac.htdigest realm $USERNAME
``realm`` needs to match the value of ``AuthName`` used in the configuration.
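For reference, each line of the htdigest file has the form ``user:realm:MD5(user:realm:password)``. The stdlib sketch below (not part of Trac or Apache) reproduces that format:

```python
import hashlib

def htdigest_line(user, realm, password):
    # htdigest stores user:realm:MD5("user:realm:password")
    ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return f"{user}:{realm}:{ha1}"

print(htdigest_line("alice", "realm", "secret"))
```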
Then add the following to your VirtualHost::
<Location /trac/login>
AuthType Digest
AuthName "realm"
AuthDigestFile /path/to/trac.htdigest
Require valid-user
</Location>
You can use the same ``LocationMatch`` as above for multiple environments.
.. seealso::
`mod_auth_digest <http://httpd.apache.org/docs/2.2/mod/mod_auth_digest.html>`_
Documentation for mod_auth_digest.
.. index::
triple: Apache; LDAP; authentication
LDAP Authentication
===================
You can use ``mod_authnz_ldap`` to authenticate against an LDAP directory.
Add the following to your VirtualHost::
<Location /trac/login>
AuthType Basic
AuthName "Trac Login"
AuthBasicProvider ldap
AuthLDAPURL "ldap://127.0.0.1/dc=example,dc=com?uid?sub?(objectClass=inetOrgPerson)"
AuthzLDAPAuthoritative Off
Require valid-user
</Location>
You can also require the user be a member of a certain LDAP group, instead of
just having a valid login::
Require ldap-group CN=Trac Users,CN=Users,DC=example,DC=com
.. index::
triple: Apache; Active Directory; authentication
Windows Active Directory
------------------------
You can use LDAP as a way to authenticate to an AD server.
Use the following as your LDAP URL::
AuthLDAPURL "ldap://directory.example.com:3268/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)"
You will also need to provide an account for Apache to use when checking
credentials. As this password will be listed in plaintext in the
config, you should be sure to use an account specifically for this task::
AuthLDAPBindDN ldap-auth-user@example.com
AuthLDAPBindPassword "password"
.. seealso::
`mod_authnz_ldap <http://httpd.apache.org/docs/2.2/mod/mod_authnz_ldap.html>`_
Documentation for mod_authnz_ldap.
`mod_ldap <http://httpd.apache.org/docs/2.2/mod/mod_ldap.html>`_
Documentation for mod_ldap, which provides connection pooling and a
shared cache.
`LdapPlugin <http://trac-hacks.org/wiki/LdapPlugin>`_
Store :ref:`Trac permissions <admin-permissions>` in LDAP.
.. index::
triple: Apache; SSPI; authentication
SSPI Authentication
===================
If you are using Apache on Windows, you can use mod_auth_sspi to provide
single-sign-on. Download the module `from its webpage`__ and then add the
following to your VirtualHost::
<Location /trac/login>
AuthType SSPI
AuthName "Trac Login"
SSPIAuth On
SSPIAuthoritative On
SSPIDomain MyLocalDomain
SSPIOfferBasic On
SSPIOmitDomain Off
SSPIBasicPreferred On
Require valid-user
</Location>
__ http://sourceforge.net/project/showfiles.php?group_id=162518
Using the above, usernames in Trac will be of the form ``DOMAIN\username``, so
you may have to re-add permissions and such. If you do not want the domain to
be part of the username, set ``SSPIOmitDomain On`` instead.
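The effect of ``SSPIOmitDomain`` on the username can be pictured with a small sketch (this is not mod_auth_sspi code, just an illustration of the transformation):

```python
def strip_domain(sspi_username):
    # "MYDOMAIN\\alice" -> "alice": what SSPIOmitDomain On does to the
    # username handed to Trac; names without a domain pass through.
    return sspi_username.split("\\", 1)[-1]

print(strip_domain("MYDOMAIN\\alice"))  # alice
```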
.. note::
   Version 1.0.2 and earlier of mod_auth_sspi do not support SSPIOmitDomain
   and have a bug in basic authentication. Version 1.0.3 or later is recommended.
.. seealso::
`mod_auth_sspi <http://mod-auth-sspi.sourceforge.net/>`_
Apache 2.x SSPI authentication module.
Some common problems with SSPI authentication
`#1055 <http://trac.edgewall.org/ticket/1055>`_,
`#1168 <http://trac.edgewall.org/ticket/1168>`_,
`#3338 <http://trac.edgewall.org/ticket/3338>`_
| 29.357143 | 108 | 0.660303 |
89dddcf352fcf099bdd1cd3a5a01e63c8ab0be8e | 1,084 | rst | reStructuredText | docs/recipes/heir-build-two-libs.rst | mwichmann/scons-cookbook | c5966b2b30c329c83c20d9f8b9debb18856a3a1f | [
"MIT"
] | 6 | 2020-06-08T00:28:23.000Z | 2022-02-09T12:14:33.000Z | docs/recipes/heir-build-two-libs.rst | mwichmann/scons-cookbook | c5966b2b30c329c83c20d9f8b9debb18856a3a1f | [
"MIT"
] | 5 | 2020-10-22T15:38:33.000Z | 2021-09-30T20:02:36.000Z | docs/recipes/heir-build-two-libs.rst | mwichmann/scons-cookbook | c5966b2b30c329c83c20d9f8b9debb18856a3a1f | [
"MIT"
] | 4 | 2020-06-08T00:28:30.000Z | 2022-03-23T17:33:01.000Z | Hierarchical Build of Two Libraries Linked With a Program
---------------------------------------------------------
``SConstruct``:
.. code:: python
env = Environment(LIBPATH=['#libA', '#libB'])
Export('env')
SConscript('libA/SConscript')
SConscript('libB/SConscript')
SConscript('Main/SConscript')
``libA/SConscript``:
.. code:: python
Import('env')
env.Library('a', Split('a1.c a2.c a3.c'))
``libB/SConscript``:
.. code:: python
Import('env')
env.Library('b', Split('b1.c b2.c b3.c'))
``Main/SConscript``:
.. code:: python
Import('env')
e = env.Clone(LIBS=['a', 'b'])
e.Program('foo', Split('m1.c m2.c m3.c'))
The ``#`` at the start of the ``LIBPATH`` directories specifies that they're
relative to the top-level directory, so they don't turn into ``Main/libA``
when they're used in ``Main/SConscript``.
Specifying only 'a' and 'b' for the library names allows ``scons`` to
attach the appropriate library prefix and suffix for the current
platform in creating the library filename (for example, ``liba.a`` on
POSIX systems, ``a.lib`` on Windows).
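The prefix/suffix mapping can be pictured with a tiny sketch. SCons itself derives these names from ``$LIBPREFIX`` and ``$LIBSUFFIX``; this helper is only an illustration:

```python
def library_filename(base, platform="posix"):
    # "a" -> "liba.a" on POSIX, "a.lib" on Windows
    prefix, suffix = ("lib", ".a") if platform == "posix" else ("", ".lib")
    return prefix + base + suffix

print(library_filename("a"))           # liba.a
print(library_filename("a", "win32"))  # a.lib
```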
| 24.088889 | 73 | 0.627306 |
d6012b41ba6ef3e92addc123e1d63eb10308580f | 108 | rst | reStructuredText | docs/api/generated/torchq.helper.evaluate.rst | xinetzone/torch-quantization | 256c58e455b961d70a01ef83899596ca220f14a9 | [
"Apache-2.0"
] | null | null | null | docs/api/generated/torchq.helper.evaluate.rst | xinetzone/torch-quantization | 256c58e455b961d70a01ef83899596ca220f14a9 | [
"Apache-2.0"
] | null | null | null | docs/api/generated/torchq.helper.evaluate.rst | xinetzone/torch-quantization | 256c58e455b961d70a01ef83899596ca220f14a9 | [
"Apache-2.0"
] | null | null | null | torchq.helper.evaluate
======================
.. currentmodule:: torchq.helper
.. autofunction:: evaluate | 18 | 32 | 0.601852 |
4b736dcaf0459efc86a85edabc1626b586a28b39 | 4,539 | rst | reStructuredText | doc/source/devref/xensmvolume.rst | CiscoSystems/cinder-old | eb9a58d3eaa10a8a661baf0c1ed534aeb0877dcf | [
"Apache-2.0"
] | null | null | null | doc/source/devref/xensmvolume.rst | CiscoSystems/cinder-old | eb9a58d3eaa10a8a661baf0c1ed534aeb0877dcf | [
"Apache-2.0"
] | null | null | null | doc/source/devref/xensmvolume.rst | CiscoSystems/cinder-old | eb9a58d3eaa10a8a661baf0c1ed534aeb0877dcf | [
"Apache-2.0"
] | null | null | null | Xen Storage Manager Volume Driver
=================================
The Xen Storage Manager (xensm) driver for Cinder-Volume is based on XenAPI Storage Manager. It not only provides basic storage functionality (such as volume creation and destruction) on a number of different storage back-ends, such as NetApp, NFS, etc., but also enables the use of more sophisticated storage back-ends for operations like cloning/snapshotting. To give an idea of the benefits of using XenAPI SM to provide back-end storage services, the list below shows some of the storage plugins already supported in XenServer/XCP:
- NFS VHD: SR plugin which stores disks as VHD files on a remote NFS filesystem
- Local VHD on LVM: SR plugin which represents disks as VHD disks on Logical Volumes within a locally-attached Volume Group
- HBA LUN-per-VDI driver: SR plugin which represents LUNs as VDIs sourced by hardware HBA adapters, e.g. hardware-based iSCSI or FC support
- NetApp: SR driver for mapping of LUNs to VDIs on a NETAPP server, providing use of fast snapshot and clone features on the filer
- LVHD over FC: SR plugin which represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN, e.g. hardware-based iSCSI or FC support
- iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI. Does not support creation of VDIs but accesses existing LUNs on a target.
- LVHD over iSCSI: SR plugin which represents disks as Logical Volumes within a Volume Group created on an iSCSI LUN
- EqualLogic: SR driver for mapping of LUNs to VDIs on a EQUALLOGIC array group, providing use of fast snapshot and clone features on the array
Glossary
=========
XenServer: Commercial, supported product from Citrix
Xen Cloud Platform (XCP): Open-source equivalent of XenServer (and the development project for the toolstack). Everything said about XenServer below applies equally to XCP
XenAPI: The management API exposed by XenServer and XCP
xapi: The primary daemon on XenServer and Xen Cloud Platform; the one that exposes the XenAPI
Design
=======
Definitions
-----------
Backend: A term for a particular storage backend. This could be iSCSI, NFS, Netapp etc.
Backend-config: All the parameters required to connect to a specific backend. For example, for NFS this would be the server, path, etc.
Flavor: This term is equivalent to volume "types". It is a user-friendly term to specify some notion of quality of service. For example, "gold" might mean that the volumes use a backend where backups are possible.
A flavor can be associated with multiple backends. The volume scheduler, with the help of the driver, will decide which backend will be used to create a volume of a particular flavor. Currently, the driver uses a simple "first-fit" policy, where the first backend that can successfully create this volume is the one that is used.
Operation
----------
Using the cinder-manage command detailed in the implementation, an admin can add flavors and backends.
One or more cinder-volume service instances will be deployed per availability zone. When an instance is started, it will create storage repositories (SRs) to connect to the backends available within that zone. All cinder-volume instances within a zone can see all the available backends. These instances are completely symmetric and hence should be able to service any create_volume request within the zone.
Commands
=========
A category called "sm" has been added to cinder-manage in the class StorageManagerCommands.
The following actions will be added:
- flavor_list
- flavor_create
- flavor_delete
- backend_list
- backend_add
- backend_remove
Usage:
------
cinder-manage sm flavor_create <label> <description>
cinder-manage sm flavor_delete <label>
cinder-manage sm backend_add <flavor label> <SR type> [config connection parameters]
Note: SR type and config connection parameters are in keeping with the Xen Command Line Interface. http://support.citrix.com/article/CTX124887
cinder-manage sm backend_delete <backend-id>
Examples:
---------
cinder-manage sm flavor_create gold "Not all that glitters"
cinder-manage sm flavor_delete gold
cinder-manage sm backend_add gold nfs name_label=toybox-renuka server=myserver serverpath=/local/scratch/myname
cinder-manage sm backend_remove 1
API Changes
===========
No API changes have been introduced so far. The existing euca-create-volume and euca-delete-volume commands (or the equivalent OpenStack API commands) should be used.
.. :changelog:
Release History
===============
2.0.1 (2017-03-13)
++++++++++++++++++
* Fix: 'None' already exists. Replacing values. (#2390)
* Convert network creates to use SDK (#2371)
* Convert PublicIP Create to use SDK (#2294)
* Convert VNet Create to use SDK (#2269)
2.0.0 (2017-02-27)
++++++++++++++++++
* GA release.
0.1.2rc2 (2017-02-22)
+++++++++++++++++++++
* Fix VPN connection create shared-key validator.
* Add delete confirmation for DNS record-set delete.
* Fix bug with local address prefixes.
* Documentation updates.
0.1.2rc1 (2017-02-17)
+++++++++++++++++++++
* DNS/Application-Gateway Fixes
* Show commands return empty string with exit code 0 for 404 responses (#2117)'
* DNS Zone Import/Export (#2040)
* Restructure DNS Commands (#2112)
0.1.1b2 (2017-01-30)
+++++++++++++++++++++
* Table output for 'network dns record-set list'.
* Prompt confirmation for 'network dns zone delete'.
* Support Python 3.6.
0.1.1b1 (2017-01-17)
+++++++++++++++++++++
**Breaking changes**
Renames --sku-name to --sku and removes the --sku-tier parameter. It is parsed from the SKU name.
For the application-gateway {subresource} list commands, changes the alias for the application gateway name from --name/-n to --gateway-name.
Renames vpn-gateway commands to vnet-gateway commands for consistency with the SDK, Powershell, and the VPN connection commands.
Adds 'name-or-id' logic to vpn-connection create so that you can specify the appropriate resource name instead of only the ID. Renames the related arguments to omit -id.
Removes --enable-bgp from the vnet-gateway create command.
* Improvements to ExpressRoute update commands
* RouteTable/Route command updates
* VPN connection fixes
* VNet Gateway Fixes and Enhancements
* Application Gateway Commands and Fixes
* DNS Fixes
* DNS Record Set Create Updates
* ExpressRoute peering client-side validation
0.1.0b11 (2016-12-12)
+++++++++++++++++++++
* Preview release.
=============
Release Notes
=============
v0.1.0 - August 2019
--------------------
- Public release
Language Features
################################################################################
.. _ts-visibility:

Visibility
********************************************************************************

Restricting the visibility of properties, methods, and types helps keep code decoupled. Therefore:

* Limit symbol visibility as much as possible.
* Consider converting private methods into standalone functions outside of any class in the same file, and moving private properties into a separate, internal class.
* In TypeScript, symbols are ``public`` by default, so never use the ``public`` modifier except when declaring a public, non-``readonly`` parameter property in a constructor.

.. code-block:: typescript

  class Foo {
    public bar = new Bar();  // Don't do this! The public modifier is unnecessary!

    constructor(public readonly baz: Baz) {}  // Don't do this! readonly already implies that baz is public by default, so the public modifier is unnecessary!
  }

.. code-block:: typescript

  class Foo {
    bar = new Bar();  // Do this! Omit the unnecessary public modifier!

    constructor(public baz: Baz) {}  // This is fine! A public, non-readonly parameter property may use the public modifier!
  }

On visibility, see also the :ref:`ts-export-visibility` section.
.. _ts-constructors:

Constructors
********************************************************************************

Constructor calls must use parentheses, even when no arguments are passed.

.. code-block:: typescript

  // Don't do this!
  const x = new Foo;

  // Do this!
  const x = new Foo();

It is unnecessary to provide an empty constructor, or one that simply delegates to its parent class. Under the ES2015 standard, the compiler provides a default constructor if none is explicitly declared for a class. However, a constructor with parameter properties, access modifiers, or parameter decorators must not be omitted, even if its body is empty.

.. code-block:: typescript

  // Don't do this! There is no need to declare an empty constructor!
  class UnnecessaryConstructor {
    constructor() {}
  }

.. code-block:: typescript

  // Don't do this! There is no need to declare a constructor that merely calls the base class constructor!
  class UnnecessaryConstructorOverride extends Base {
    constructor(value: number) {
      super(value);
    }
  }

.. code-block:: typescript

  // Do this! Let the compiler provide the default constructor!
  class DefaultConstructor {
  }

  // Do this! A constructor with parameter properties must not be omitted!
  class ParameterProperties {
    constructor(private myService) {}
  }

  // Do this! A constructor with parameter decorators must not be omitted!
  class ParameterDecorators {
    constructor(@SideEffectDecorator myService) {}
  }

  // Do this! A private constructor must not be omitted!
  class NoInstantiation {
    private constructor() {}
  }
.. _ts-class-members:

Class Members
********************************************************************************

.. _ts-no-private-fields:

``#private`` syntax
================================================================================

Do not use the ``#private`` private field (also known as private identifier) syntax to declare private members.

.. code-block:: typescript

  // Don't do this!
  class Clazz {
    #ident = 1;
  }

Instead, use TypeScript's access modifiers.

.. code-block:: typescript

  // Do this!
  class Clazz {
    private ident = 1;
  }

Why? Private field syntax causes size and performance problems when TypeScript compiles to JavaScript. It is also not supported by any standard before ES2015, so it restricts TypeScript to targeting ES2015 at the lowest. Moreover, it offers no clear advantage over access modifiers for static type checking and visibility checking.
.. _ts-use-readonly:

Use ``readonly``
================================================================================

Mark properties that are never assigned outside of the constructor with the ``readonly`` modifier. These properties do not need to be deeply immutable.
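A minimal sketch of this rule (the ``Clock`` class and its field are invented for this illustration):

.. code-block:: typescript

  class Clock {
    // Assigned only in the constructor and never reassigned afterwards,
    // so the field is marked readonly.
    private readonly startTime: number;

    constructor() {
      this.startTime = Date.now();
    }

    elapsed(): number {
      return Date.now() - this.startTime;
    }
  }
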
Parameter properties
================================================================================

Rather than explicitly initializing class members in the constructor, use TypeScript's `parameter property <https://www.typescriptlang.org/docs/handbook/classes.html#parameter-properties>`_ syntax.

.. code-block:: typescript

  // Don't do this! Too much repetitive code!
  class Foo {
    private readonly barService: BarService;

    constructor(barService: BarService) {
      this.barService = barService;
    }
  }

.. code-block:: typescript

  // Do this! Concise and clear!
  class Foo {
    constructor(private readonly barService: BarService) {}
  }

If documentation is needed for a parameter property, use the JSDoc ``@param`` tag; see the :ref:`ts-parameter-property-comments` section.
Field initialization
================================================================================

If a member is not a parameter property, initialize it where it is declared; this sometimes lets you omit the constructor entirely.

.. code-block:: typescript

  // Don't do this! There is no need to put the initialization in a separate constructor!
  class Foo {
    private readonly userList: string[];

    constructor() {
      this.userList = [];
    }
  }

.. code-block:: typescript

  // Do this! The constructor is omitted!
  class Foo {
    private readonly userList: string[] = [];
  }
.. _ts-properties-used-outside-of-class-lexical-scope:

Properties used outside of class lexical scope
================================================================================

Properties that are used outside of the lexical scope of their containing class, such as AngularJS controller properties used in a template, must not be marked ``private``, since they are clearly intended for external use.

Mark such properties ``public``, or ``protected`` where appropriate. For example, Angular and Polymer template properties should be ``public``, while AngularJS should use ``protected``.

Additionally, TypeScript code must not use the ``obj['foo']`` syntax to bypass the visibility restriction.

Why?

When a property is marked ``private``, you are declaring to both automated tooling and readers that access to the property is scoped to the class itself. For example, a tool that looks for unused code might flag a private property as unused, even though code in another file manages to bypass the visibility restriction and access it.

Although ``obj['foo']`` can bypass the TypeScript compiler's visibility check, such access may break when build rules change. It also violates the :ref:`ts-optimization-compatibility-for-property-access` rule discussed below.
.. _ts-getters-and-setters-accessors:

Getters and setters (accessors)
================================================================================

Getters and setters may be used within classes. The getter method must be a pure function (i.e., its result must be consistent and free of side effects). Accessors can also be used to hide complex internal implementation details.

.. code-block:: typescript

  class Foo {
    constructor(private readonly someService: SomeService) {}

    get someMember(): string {
      return this.someService.someVariable;
    }

    set someMember(newValue: string) {
      this.someService.someVariable = newValue;
    }
  }

If an accessor is used to hide a property inside the class, prefix or suffix the hidden property with a full word such as ``internal`` or ``wrapped``. Access these private properties through the accessors whenever possible. At least one of the getter/setter pair must be non-trivial; that is, do not use accessors that merely pass a property value through, and certainly do not rely on such accessors to hide a property. In that case, make the property ``public`` directly. For a property with a getter but no setter, consider simply making it ``readonly``.

.. code-block:: typescript

  class Foo {
    private wrappedBar = '';

    get bar() {
      return this.wrappedBar || 'bar';
    }

    set bar(wrapped: string) {
      this.wrappedBar = wrapped.trim();
    }
  }

.. code-block:: typescript

  class Bar {
    private barInternal = '';

    // Don't do this! Neither the getter nor the setter has any logic;
    // in this case the property bar should simply be made public.
    get bar() {
      return this.barInternal;
    }

    set bar(value: string) {
      this.barInternal = value;
    }
  }
.. _ts-primitive-types-wrapper-classes:

Primitive types and wrapper classes
********************************************************************************

In TypeScript, do not instantiate the wrapper classes for primitive types, such as ``String``, ``Boolean``, and ``Number``. Wrapper classes have many counter-intuitive behaviors; for example, ``new Boolean(false)`` evaluates to ``true`` in boolean expressions.

.. code-block:: typescript

  // Don't do this!
  const s = new String('hello');
  const b = new Boolean(false);
  const n = new Number(5);

.. code-block:: typescript

  // Do this!
  const s = 'hello';
  const b = false;
  const n = 5;
.. _ts-array-constructor:

Array constructor
********************************************************************************

In TypeScript, the ``Array()`` constructor must not be used, with or without ``new``. It has confusing and contradictory behaviors; for example:

.. code-block:: typescript

  // Don't do this! The same constructor behaves completely differently!
  const a = new Array(2);    // The argument 2 is treated as the array length, so this returns [undefined, undefined].
  const b = new Array(2, 3); // The arguments 2, 3 are treated as elements, so this now returns [2, 3].

Instead, use bracket notation to initialize arrays, or ``from`` to construct an array of a certain length:

.. code-block:: typescript

  const a = [2];
  const b = [2, 3];

  // Equivalent to Array(2):
  const c = [];
  c.length = 2;

  // Produces [0, 0, 0, 0, 0]
  Array.from<number>({length: 5}).fill(0);
.. _ts-type-coercion:

Type coercion
********************************************************************************

In TypeScript, you may use the ``String()`` and ``Boolean()`` functions (note: never with ``new``!), template strings, and the ``!!`` operator to coerce types.

.. code-block:: typescript

  const bool = Boolean(false);
  const str = String(aNumber);
  const bool2 = !!str;
  const str2 = `result: ${bool2}`;

Coercion to ``string`` via string concatenation is discouraged, as it leaves the two operands of the plus operator with different types.

Code must use ``Number()`` to convert other types to numbers, and must explicitly check the returned value for ``NaN`` wherever the conversion can fail.

.. tip::

  ``Number('')``, ``Number(' ')``, and ``Number('\t')`` return ``0`` instead of ``NaN``. ``Number('Infinity')`` and ``Number('-Infinity')`` return ``Infinity`` and ``-Infinity`` respectively. These cases may require special handling.

.. code-block:: typescript

  const aNumber = Number('123');
  if (isNaN(aNumber)) throw new Error(...);  // Handle NaN if the input string might not parse as a number.
  assertFinite(aNumber, ...);                // If the input string is guaranteed to be valid, add an assertion here instead.

Code must not use the unary plus operator ``+`` to coerce strings to numbers. Parsing numbers this way can fail and has surprising corner cases; besides, it tends to be a code smell, since ``+`` is very easy to miss in code reviews.

.. code-block:: typescript

  // Don't do this!
  const x = +y;

Likewise, code must not use ``parseInt`` or ``parseFloat``, except for parsing strings that represent numbers in a non-decimal base. Both functions ignore trailing characters in the string, which can mask situations that would otherwise be errors (e.g., parsing ``12 dwarves`` as ``12``).

.. code-block:: typescript

  const n = parseInt(someString, 10);  // Error-prone,
  const f = parseFloat(someString);    // with or without the radix.

Code that must parse non-decimal numbers must check that its input is valid before calling ``parseInt``.

.. code-block:: typescript

  if (!/^[a-fA-F0-9]+$/.test(someString)) throw new Error(...);
  // A hexadecimal number needs to be parsed here.
  // tslint:disable-next-line:ban
  const n = parseInt(someString, 16);  // parseInt is only allowed for non-decimal bases.

Use ``Number()`` followed by ``Math.floor`` or ``Math.trunc`` (where available) to parse integers.

.. code-block:: typescript

  let f = Number(someString);
  if (isNaN(f)) handleError();
  f = Math.floor(f);

Do not explicitly convert types to ``boolean`` in the condition of an ``if``, ``for``, or ``while`` statement, where an implicit conversion is already performed.

.. code-block:: typescript

  // Don't do this!
  const foo: MyInterface|null = ...;
  if (!!foo) {...}
  while (!!foo) {...}

.. code-block:: typescript

  // Do this!
  const foo: MyInterface|null = ...;
  if (foo) {...}
  while (foo) {...}

Finally, both explicit and implicit comparisons are fine.

.. code-block:: typescript

  // Explicitly comparing with 0 is fine!
  if (arr.length > 0) {...}

  // Relying on implicit type conversion is fine too!
  if (arr.length) {...}
.. _ts-variables:

Variables
********************************************************************************

Variables must be declared with ``const`` or ``let``. Use ``const`` whenever possible, unless the variable needs to be reassigned. Never use ``var``.

.. code-block:: typescript

  const foo = otherValue;  // Use const if foo never changes.
  let bar = someValue;     // Use let if bar is reassigned later.

As in most other programming languages, variables declared with ``const`` and ``let`` are block scoped. In contrast, variables declared with ``var`` in JavaScript are function scoped, which causes many hard-to-understand bugs, so ``var`` must not be used in TypeScript.

.. code-block:: typescript

  // Don't do this!
  var foo = someValue;

Finally, variables must be declared before they are used.
.. _ts-exceptions:

Exceptions
********************************************************************************

When instantiating an exception, use the ``new Error()`` syntax rather than calling the ``Error()`` function. While both create an exception instance, using ``new`` is more consistent in form with how other objects are instantiated throughout the code.

.. code-block:: typescript

  // Do this!
  throw new Error('Foo is not a valid bar.');

  // Don't do this!
  throw Error('Foo is not a valid bar.');
.. _ts-iterating-objects:

Object iteration
********************************************************************************

Iterating over objects with ``for (... in ...)`` is error-prone, because it also includes the properties an object inherits through its prototype chain. Therefore, bare ``for (... in ...)`` statements must not be used.

.. code-block:: typescript

  // Don't do this!
  for (const x in someObj) {
    // x could include properties someObj inherits from its prototype.
  }

When iterating over objects, either filter the object's properties with an ``if`` statement, or use ``for (... of Object.keys(...))``.

.. code-block:: typescript

  // Do this!
  for (const x in someObj) {
    if (!someObj.hasOwnProperty(x)) continue;
    // Now x is guaranteed to be a property defined on someObj itself.
  }

.. code-block:: typescript

  // Do this!
  for (const x of Object.keys(someObj)) { // Note: for _of_ syntax here!
    // Now x is guaranteed to be a property defined on someObj itself.
  }

.. code-block:: typescript

  // Do this!
  for (const [key, value] of Object.entries(someObj)) { // Note: for _of_ syntax here!
    // Now key is guaranteed to be a property defined on someObj itself.
  }
.. _ts-iterating-containers:

Container iteration
********************************************************************************

Do not use ``for (... in ...)`` to iterate over arrays. It is a counter-intuitive operation: it iterates over the array's indices rather than its elements (and coerces them to ``string``, too)!

.. code-block:: typescript

  // Don't do this!
  for (const x in someArray) {
    // Here x is the index of the array! (And a string, at that!)
  }

To iterate over an array, use ``for (... of someArr)`` or a traditional ``for`` loop.

.. code-block:: typescript

  // Do this!
  for (const x of someArr) {
    // Here x is an element of the array.
  }

.. code-block:: typescript

  // Do this!
  for (let i = 0; i < someArr.length; i++) {
    // If the index is needed, iterate over the indices; otherwise use a for/of loop.
    const x = someArr[i];
    // ...
  }

.. code-block:: typescript

  // Do this!
  for (const [i, x] of someArr.entries()) {
    // An alternative version of the example above.
  }

Do not use ``Array.prototype.forEach``, ``Set.prototype.forEach``, or ``Map.prototype.forEach``. These methods make code harder to debug and defeat some compiler checks (e.g., visibility checks).

.. code-block:: typescript

  // Don't do this!
  someArr.forEach((item, index) => {
    someFn(item, index);
  });

Why? Consider the following code:

.. code-block:: typescript

  let x: string|null = 'abc';
  myArray.forEach(() => { x.charAt(0); });

From a reader's point of view, there is nothing wrong with this code: ``x`` is never initialized to ``null`` and does not change before it is accessed. But the compiler does not know that the closure ``() => { x.charAt(0); }`` passed to ``.forEach()`` is invoked immediately. The compiler must therefore assume the closure might be invoked somewhere later in the code, by which point ``x`` may have been set to ``null``. Hence a compile error here. The equivalent ``for-of`` version of the loop has no such problem.

You can compare the two versions of this code `here <https://www.typescriptlang.org/play?#code/DYUwLgBAHgXBDOYBOBLAdgcwD5oK7GAgF4IByAQwCMBjUgbgCgBtAXQDoAzAeyQFFzqACwAUwgJTEAfBADeDCNDZDySAIJhhABjGMAvjoYNQkAJ5xEqTDnyESFGvQbckEYdS5pEEAPoQuHCFYJOQUTJUEVdS0DXQYgA>`_.

In practice, the more complex and counter-intuitive the code paths, the more likely control flow analysis runs into these problems.
.. _ts-using-the-spread-operator:

Spread operator
********************************************************************************

The spread operator, ``[...foo]`` and ``{...bar}``, is a convenient shorthand for copying arrays and objects. When using it, for the same key, the value that appears later takes precedence over the value that appears earlier.

.. code-block:: typescript

  const foo = {
    num: 1,
  };

  const foo2 = {
    ...foo,
    num: 5,
  };

  const foo3 = {
    num: 5,
    ...foo,
  }

  // In foo2, 1 appears first and 5 appears later.
  foo2.num === 5;

  // In foo3, 5 appears first and 1 appears later.
  foo3.num === 1;

When using the spread operator, the value being spread must match what is being created. That is, only objects may be spread when creating an object, and only iterables may be spread when creating an array.

Primitives, including ``null`` and ``undefined``, must not be spread.

.. code-block:: typescript

  // Don't do this!
  const foo = {num: 7};
  const bar = {num: 5, ...(shouldUseFoo && foo)}; // The spread operator may end up applied to undefined.

.. code-block:: typescript

  // Don't do this! This creates the object {0: 'a', 1: 'b', 2: 'c'} without a length property.
  const fooStrings = ['a', 'b', 'c'];
  const ids = {...fooStrings};

.. code-block:: typescript

  // Do this! Spread objects when creating objects.
  const foo = shouldUseFoo ? {num: 7} : {};
  const bar = {num: 5, ...foo};

  // Do this! Spread arrays when creating arrays.
  const fooStrings = ['a', 'b', 'c'];
  const ids = [...fooStrings, 'd', 'e'];
.. _ts-control-flow-statements-blocks:

Control flow statements / blocks
********************************************************************************

Multi-line control flow statements must use braces.

.. code-block:: typescript

  // Do this!
  for (let i = 0; i < x; i++) {
    doSomethingWith(i);
    andSomeMore();
  }

  if (x) {
    doSomethingWithALongMethodName(x);
  }

.. code-block:: typescript

  // Don't do this!
  if (x)
    x.doFoo();

  for (let i = 0; i < x; i++)
    doSomethingWithALongMethodName(i);

As an exception to this rule, ``if`` statements that fit on a single line may omit the braces.

.. code-block:: typescript

  // This is fine!
  if (x) x.doFoo();
.. _ts-switch-statements:

``switch`` statements
********************************************************************************

All ``switch`` statements must contain a ``default`` case group, even if it contains no code.

.. code-block:: typescript

  // Do this!
  switch (x) {
    case Y:
      doSomethingElse();
      break;
    default:
      // Do nothing.
  }

Non-empty case groups (``case ...``) may not fall through into the next case (the compiler checks this):

.. code-block:: typescript

  // Don't do this!
  switch (x) {
    case X:
      doSomething();
      // Fall-through is not allowed!
    case Y:
      // ...
  }

Empty case groups may do so:

.. code-block:: typescript

  // This is fine!
  switch (x) {
    case X:
    case Y:
      doSomething();
      break;
    default: // Do nothing.
  }
.. _ts-equality-checks:

Equality checks
********************************************************************************

Always use triple equals (``===``) and its counterpart (``!==``). Double equals performs type conversion during comparison, which easily leads to hard-to-understand errors, and also runs slower than triple equals on JavaScript virtual machines. See the `JavaScript equality table <https://dorey.github.io/JavaScript-Equality-Table/>`_.

.. code-block:: typescript

  // Don't do this!
  if (foo == 'bar' || baz != bam) {
    // The type conversion here leads to hard-to-understand behavior.
  }

.. code-block:: typescript

  // Do this!
  if (foo === 'bar' || baz !== bam) {
    // All good!
  }

**Exception**: comparisons with the ``null`` literal may use the ``==`` and ``!=`` operators, which cover both ``null`` and ``undefined``.

.. code-block:: typescript

  // This is fine!
  if (foo == null) {
    // This executes whether foo is null or undefined.
  }
.. _ts-function-declarations:

Function declarations
********************************************************************************

Use ``function foo() { ... }`` to declare named functions, including functions nested in other scopes, e.g., inside other functions.

Do not assign a function expression to a local variable (e.g., ``const x = function() {...};``). TypeScript already disallows rebinding functions, so declaring a function with ``const`` to prevent it from being overwritten is unnecessary.

**Exception**: if the function needs to access the ``this`` of the enclosing scope, use an arrow function assigned to a variable instead of a function declaration.

.. code-block:: typescript

  // Do this!
  function foo() { ... }

.. code-block:: typescript

  // Don't do this!
  // Given the function declaration above, this code does not compile:
  foo = () => 3;  // Error: invalid left-hand side of an assignment expression.

  // So declaring a function like this is unnecessary.
  const foo = function() { ... }

Note the difference between function declarations (``function foo() {}``) and the function expressions (``doSomethingWith(function() {});``) discussed below.

Top-level arrow functions may be used to explicitly declare that a function implements an interface.

.. code-block:: typescript

  interface SearchFunction {
    (source: string, subString: string): boolean;
  }

  const fooSearch: SearchFunction = (source, subString) => { ... };
.. _ts-function-expressions:

Function expressions
********************************************************************************

.. _ts-use-arrow-functions-in-expressions:

Use arrow functions in expressions
================================================================================

Do not use the pre-ES6 style of defining function expressions with the ``function`` keyword. Use arrow functions instead.

.. code-block:: typescript

  // Do this!
  bar(() => { this.doSomething(); })

.. code-block:: typescript

  // Don't do this!
  bar(function() { ... })

Only use the ``function`` keyword for function expressions that deliberately rebind ``this`` dynamically, though in general code should not rebind ``this``. Regular functions (as opposed to arrow functions and methods) should not access ``this``.
.. _ts-expression-bodies-vs-block-bodies:

Expression bodies vs. block bodies
================================================================================

When using arrow functions, choose between an expression body and a block body as appropriate for the situation.

.. code-block:: typescript

  // A top-level function declared with a function declaration.
  function someFunction() {
    // Arrow functions with block bodies, i.e. with => { }, are fine:
    const receipts = books.map((b: Book) => {
      const receipt = payMoney(b.price);
      recordTransaction(receipt);
      return receipt;
    });

    // Expression bodies are fine too, if the return value is used:
    const longThings = myValues.filter(v => v.length > 1000).map(v => String(v));

    function payMoney(amount: number) {
      // Function declarations are fine, but don't access `this` in them.
    }
  }

Only use an expression body if the return value of the function is actually used.

.. code-block:: typescript

  // Don't do this! Use a block body ({ ... }) if the return value is unused.
  myPromise.then(v => console.log(v));

.. code-block:: typescript

  // Do this! Use a block body.
  myPromise.then(v => {
    console.log(v);
  });

  // Do this! Even when the return value is used, a block body may improve readability.
  const transformed = [1, 2, 3].map(v => {
    const intermediate = someComplicatedExpr(v);
    const more = acrossManyLines(intermediate);
    return worthWrapping(more);
  });
.. _ts-rebinding-this:

Rebinding ``this``
================================================================================

Do not use ``this`` in function expressions unless they explicitly exist to rebind the ``this`` pointer. In most cases, the need to rebind ``this`` can be avoided by using arrow functions or explicit function parameters.

.. code-block:: typescript

  // Don't do this!
  function clickHandler() {
    // What exactly does `this` point to here?
    this.textContent = 'Hello';
  }

  // Don't do this! The `this` pointer is implicitly set to document.body.
  document.body.onclick = clickHandler;

.. code-block:: typescript

  // Do this! Reference the object explicitly inside an arrow function.
  document.body.onclick = () => { document.body.textContent = 'hello'; };

  // This is fine too! The function takes an explicit parameter.
  const setTextFn = (e: HTMLElement) => { e.textContent = 'hello'; };
  document.body.onclick = setTextFn.bind(null, document.body);
.. _ts-arrow-functions-as-properties:

Arrow functions as properties
================================================================================

In general, classes should not initialize any properties to arrow functions. An arrow function property requires the calling function to understand that the callee's ``this`` is already bound, which makes the meaning of ``this`` confusing and makes the corresponding calls and references look incorrect in form; that is, extra information is needed to confirm that such usage is correct. When calling instance methods, the arrow function form must be used (e.g., ``const handler = (x) => { this.listener(x); };``). Furthermore, holding or passing references to instance methods is not allowed (e.g., do not write ``const handler = this.listener; handler(x);``).

.. tip::

  In some special cases, e.g., when a function needs to be bound to a template, arrow function properties are useful and improve readability. Use your judgment to deviate from this rule in those situations. See also the discussion in the :ref:`ts-event-handlers` section.

.. code-block:: typescript

  // Don't do this!
  class DelayHandler {
    constructor() {
      // The problem: the `this` pointer is not preserved inside the callback,
      // so `this` in the callback is no longer the DelayHandler instance.
      setTimeout(this.patienceTracker, 5000);
    }
    private patienceTracker() {
      this.waitedPatiently = true;
    }
  }

.. code-block:: typescript

  // Don't do this! In general, arrow functions should not be used as properties.
  class DelayHandler {
    constructor() {
      // Don't do this! It looks as though someone forgot to bind the `this` pointer.
      setTimeout(this.patienceTracker, 5000);
    }
    private patienceTracker = () => {
      this.waitedPatiently = true;
    }
  }

.. code-block:: typescript

  // Do this! Handle the `this` pointer explicitly at the call site.
  class DelayHandler {
    constructor() {
      // Use anonymous functions where possible in this situation.
      setTimeout(() => {
        this.patienceTracker();
      }, 5000);
    }
    private patienceTracker() {
      this.waitedPatiently = true;
    }
  }
.. _ts-event-handlers:

Event handlers
================================================================================

Event handlers may use arrow functions when the handler does not need to be uninstalled, e.g., when the event is emitted by the class itself. If the handler must be uninstalled, use an arrow function property, because it automatically captures the ``this`` pointer correctly and provides a stable reference for uninstalling.

.. code-block:: typescript

  // Do this! Event handlers may be anonymous functions or arrow function properties.
  class Component {
    onAttached() {
      // The event is emitted by this class itself, so this handler never needs to be uninstalled.
      this.addEventListener('click', () => {
        this.listener();
      });
      // this.listener is a stable reference here, so it can be uninstalled later.
      window.addEventListener('onbeforeunload', this.listener);
    }
    onDetached() {
      // This event is emitted by window. If the handler is not uninstalled,
      // this.listener keeps a reference to `this` through the binding,
      // creating a memory leak.
      window.removeEventListener('onbeforeunload', this.listener);
    }
    // An arrow function property automatically binds `this` correctly.
    private listener = () => {
      confirm('Do you want to exit the page?');
    }
  }

Do not use ``bind`` in the expression that installs an event handler; it creates a temporary reference that can never be uninstalled.

.. code-block:: typescript

  // Don't do this! Calling bind on the handler creates a temporary reference that can never be uninstalled.
  class Component {
    onAttached() {
      // This creates a temporary reference that can never be uninstalled.
      window.addEventListener('onbeforeunload', this.listener.bind(this));
    }
    onDetached() {
      // This bind creates yet another reference, so this line effectively does nothing.
      window.removeEventListener('onbeforeunload', this.listener.bind(this));
    }
    private listener() {
      confirm('Do you want to exit the page?');
    }
  }
.. _ts-automatic-semicolon-insertion:

Automatic semicolon insertion
********************************************************************************

Do not rely on automatic semicolon insertion (ASI). Explicitly end every statement with a semicolon. This prevents bugs due to incorrect semicolon insertion and ensures compatibility with tools that have limited ASI support (e.g., clang-format).
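A minimal sketch of why this matters (the variable names are invented for illustration):

.. code-block:: typescript

  // With explicit semicolons, each statement is unambiguous:
  const first = 1;
  const pair = [first, 2];

  // Had the semicolon after `1` been omitted, a following line beginning
  // with `[` would have been parsed as an index expression on the literal —
  // `1[first, 2]` — and `pair` would silently become undefined in plain
  // JavaScript instead of being a two-element array.
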
.. _ts-ts-ignore:

``@ts-ignore``
********************************************************************************

Do not use ``@ts-ignore``. It superficially seems to be an easy way to "fix" a compile error, but in practice a compile error is often caused by a larger problem elsewhere, and the right thing to do is to address that problem directly.

For example, if you use ``@ts-ignore`` to silence a type error, it becomes hard to reason about what types the surrounding code ends up receiving. For many type-related errors, the :ref:`ts-any-type` section has useful advice on using ``any`` properly.
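A sketch of the preferred approach (``fetchCount`` is a hypothetical function invented for this example):

.. code-block:: typescript

  // A hypothetical helper that returns a numeric string from some external source.
  function fetchCount(): string {
    return '42';
  }

  // Don't do this:
  //   // @ts-ignore
  //   const n: number = fetchCount();

  // Do this: fix the actual type mismatch with an explicit, checked conversion.
  const n: number = Number(fetchCount());
  if (isNaN(n)) throw new Error('fetchCount() did not return a number');
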
.. _ts-type-and-non-nullability-assertions:

Type and non-nullability assertions
********************************************************************************

Type assertions (``x as SomeType``) and non-nullability assertions (``y!``) are unsafe. Both syntaxes only silence the compiler without adding any runtime check, so they can cause the program to crash at runtime.

Therefore, type and non-nullability assertions *should not* be used unless there is an obvious or explicit reason for them.

.. code-block:: typescript

  // Don't do this!
  (x as Foo).foo();

  y!.bar();

When you want to assert a type or a non-null condition, the best answer is to explicitly write a runtime check.

.. code-block:: typescript

  // Do this!
  // Assuming Foo is a class here.
  if (x instanceof Foo) {
    x.foo();
  }

  if (y) {
    y.bar();
  }

Sometimes, the surrounding context guarantees that an assertion must be safe. In those situations, you *should* add a comment explaining in detail why this unsafe behavior is acceptable:

.. code-block:: typescript

  // This is fine!
  // x is an instance of Foo, because...
  (x as Foo).foo();

  // y cannot be null, because...
  y!.bar();

If the reason for an assertion is obvious, the comment may be unnecessary. For example, generated proto code is always nullable, but often the context confirms that certain backend-provided fields must be non-null. Use your judgment in those situations.
.. _ts-type-assertions-syntax:

Type assertions syntax
================================================================================

Type assertions must use the ``as`` syntax rather than the angle bracket syntax; this enforces parentheses around the assertion when accessing members.

.. code-block:: typescript

  // Don't do this!
  const x = (<Foo>z).length;
  const y = <Foo>z.length;

.. code-block:: typescript

  // Do this!
  const x = (z as Foo).length;
.. _ts-type-assertions-and-object-literals:

Type assertions and object literals
================================================================================

Use type annotations (``: Foo``) instead of type assertions (``as Foo``) to specify the type of an object literal. The former allows programmers to detect bugs when the fields of the interface change later on.

.. code-block:: typescript

  interface Foo {
    bar: number;
    baz?: string;  // This field used to be named "bam" and was renamed to "baz".
  }

  const foo = {
    bar: 123,
    bam: 'abc',  // With a type assertion, this is not reported as an error after the rename!
  } as Foo;

  function func() {
    return {
      bar: 123,
      bam: 'abc',  // With a type assertion, this is not reported here either!
    } as Foo;
  }
.. _ts-member-property-declarations:

Member property declarations
================================================================================

Interface and class declarations must use ``;`` to separate individual member declarations.

.. code-block:: typescript

  // Do this!
  interface Foo {
    memberA: string;
    memberB: number;
  }

For consistency with classes, do not separate interface fields with ``,``.

.. code-block:: typescript

  // Don't do this!
  interface Foo {
    memberA: string,
    memberB: number,
  }

Inline object type declarations, however, must use ``,`` as the separator.

.. code-block:: typescript

  // Do this!
  type SomeTypeAlias = {
    memberA: string,
    memberB: number,
  };

  let someProperty: {memberC: string, memberD: number};
.. _ts-optimization-compatibility-for-property-access:

Optimization compatibility for property access
================================================================================

Do not mix bracket property access and dotted property access.

.. code-block:: typescript

  // Don't do this!
  // Choose one of the two forms to keep the entire program consistent.
  console.log(x['someField']);
  console.log(x.someField);

Code should be optimized as much as possible for future property renaming, and should declare corresponding fields for all properties of objects external to the program.

.. code-block:: typescript

  // Do this! Declare a corresponding interface.
  declare interface ServerInfoJson {
    appVersion: string;
    user: UserJson;
  }

  const data = JSON.parse(serverResponse) as ServerInfoJson;
  console.log(data.appVersion);  // Type-safe here, and safe to rename too!
.. _ts-optimization-compatibility-for-module-object-imports:

Optimization compatibility for module object imports
================================================================================

When importing a module object, access its properties directly instead of passing around a reference to the object itself, so that modules can be analyzed and optimized. Treating imported modules as namespaces is fine; see the :ref:`ts-module-versus-destructuring-imports` section.

.. code-block:: typescript

  // Do this!
  import {method1, method2} from 'utils';

  class A {
    readonly utils = {method1, method2};
  }

.. code-block:: typescript

  // Don't do this!
  import * as utils from 'utils';

  class A {
    readonly utils = utils;
  }
.. _ts-optimization-exception:

Exception
================================================================================

The optimization rules mentioned here apply to all web apps, but they need not be enforced for programs that only run on the server. Nevertheless, for the sake of code hygiene, declaring all types and avoiding a mix of the two property access forms is still strongly encouraged.
.. _ts-enums:

Enums
********************************************************************************

For enumerations, always use the ``enum`` keyword, but never ``const enum``. TypeScript enums are already immutable; ``const enum`` is a separate language feature whose purpose is to make enums invisible to JavaScript consumers.
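A minimal sketch (the ``Color`` enum is invented for this example):

.. code-block:: typescript

  // Do this: a plain enum declared with the `enum` keyword.
  enum Color {
    Red = 'red',
    Green = 'green',
  }

  const chosen: Color = Color.Red;

  // Don't do this:
  //   const enum Color { ... }
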
.. _ts-debugger-statements:

``debugger`` statements
********************************************************************************

``debugger`` statements must not be included in production code.

.. code-block:: typescript

  // Don't do this!
  function debugMe() {
    debugger;
  }
.. _ts-decorators:

Decorators
********************************************************************************

Decorators are syntax with an ``@`` prefix, e.g., ``@MyDecorator``.

Do not define new decorators. Only use the decorators defined by frameworks, for example:

* Angular (e.g., ``@Component``, ``@NgModule``, etc.)
* Polymer (e.g., ``@property``, etc.)

Why?

In general, decorators should be avoided. They are an experimental feature that is still at the proposal stage in the TC39 committee, and they currently have known bugs that cannot be fixed.

When using decorators, the decorator must immediately precede the symbol it decorates, with no empty lines in between.

.. code-block:: typescript

  /** The JSDoc comment goes before the decorator. */
  @Component({...})  // No empty line after the decorator.
  class MyComp {
    @Input() myField: string;  // A field decorator goes on the same line as the field...

    @Input()
    myOtherField: string;  // ...or immediately before it.
  }
Welcome to the documentation for NuSTAR's moving target planning
================================================================
Contents:
.. toctree::
:maxdepth: 2
io
lunar_planning
jupiter_planning
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
] | null | null | null | pvmysql package
===============
Submodules
----------
pvmysql.pvmysql module
----------------------
.. automodule:: pvmysql.pvmysql
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: pvmysql
:members:
:undoc-members:
:show-inheritance:
37ee4e6bedd09af0b33ee0578bf75433be3ca51e | 1,229 | rst | reStructuredText | installation.rst | exeal/boostjp-xpressive | cb6c8d02b0e8d6aa7df41f0399c0c4440079a4cf | [
"BSL-1.0"
] | null | null | null | installation.rst | exeal/boostjp-xpressive | cb6c8d02b0e8d6aa7df41f0399c0c4440079a4cf | [
"BSL-1.0"
] | null | null | null | installation.rst | exeal/boostjp-xpressive | cb6c8d02b0e8d6aa7df41f0399c0c4440079a4cf | [
"BSL-1.0"
] | null | null | null | xpressive のインストール
------------------------
xpressive の入手
^^^^^^^^^^^^^^^^
xpressive の入手方法は 2 つある。第 1 のより簡単な方法は Boost の最新版をダウンロードすることである。http://sf.net/projects/boost へ行き、“Download” リンクをたどるだけである。
2 番目の方法は Boost の Subversion リポジトリに直接アクセスすることである。http://svn.boost.org/trac/boost へ行き、そこにある匿名 Subversion アクセス方法に従うとよい。Boost Subversion にあるのは不安定版である。
xpressive を使ったビルド
^^^^^^^^^^^^^^^^^^^^^^^^
xpressive はヘッダのみのテンプレートライブラリであり、あなたのビルドスクリプトを書き直したり個別のライブラリファイルにリンクする必要はない。:code:`#include <boost/xpressive/xpressive.hpp>` とするだけでよい。使用するのが静的正規表現だけであれば、:file:`xpressive_static.hpp` だけをインクルードすることでコンパイル時間を短縮できる。同様に動的正規表現だけを使用するのであれば :file:`xpressive_dynamic.hpp` をインクルードするとよい。
静的正規表現とともに意味アクションやカスタム表明を使用したければ、\ :file:`regex_actions.hpp` も追加でインクルードする必要がある。
必要要件
^^^^^^^^
xpressive を使用するには Boost 1.34.1 以降が必要である。
サポートするコンパイラ
^^^^^^^^^^^^^^^^^^^^^^
* Visual C++ 7.1 以降
* GNU C++ 3.4 以降
* Intel for Linux 8.1 以降
* Intel for Windows 10 以降
* tru64cxx 71 以降
* MinGW 3.4 以降
* HP C/C++ A.06.14 以降
.. * QNX qcc 3.3 以降
.. * Metrowerks CodeWarrior 9.4 以降
Boost の\ `退行テスト結果のページ <http://beta.boost.org/development/tests/trunk/developer/xpressive.html>`_\にある最新テスト結果を参照するとよい。
.. note:: 質問、コメント、バグ報告は eric <at> boost-consulting <dot> com に送ってほしい。
| 28.581395 | 274 | 0.725793 |
0192dd6d14efdf1dd7f4aff7c1b45d43ef01f66a | 18,253 | rst | reStructuredText | README.rst | faragher/LibreSignage | 729a34189b0d015f8be42e4c90758eab32242827 | [
"BSD-3-Clause"
] | null | null | null | README.rst | faragher/LibreSignage | 729a34189b0d015f8be42e4c90758eab32242827 | [
"BSD-3-Clause"
] | null | null | null | README.rst | faragher/LibreSignage | 729a34189b0d015f8be42e4c90758eab32242827 | [
"BSD-3-Clause"
] | null | null | null | ######################################################
LibreSignage - An open source digital signage solution
######################################################
Table Of Contents
-----------------
`1. General`_
`2. Features`_
`3. Project goals`_
`4. Installation`_
`5. Default users`_
`6. How to install npm`_
`7. LibreSignage in GIT`_
`8. LibreSignage versioning`_
`9. FAQ`_
`10. Screenshots`_
`11. Make rules`_
`12. Documentation`_
`13. Third-party dependencies`_
`14. Build system dependencies`_
`15. License`_
1. General
----------
Digital Signage is everything from large-scale commercial billboards
to smaller advertisement displays, notice boards or digital restaurant
menus. The possibilities of digital signage are endless. If you need
to display periodically changing content to users on a wall-mounted
TV for example, digital signage is probably what you are looking for.
LibreSignage is a free and open source, lightweight and easy-to-use
digital signage solution for use in schools, cafés, restaurants and
shops among others. LibreSignage can be used to manage a network of
digital signage displays. Don't let the word network fool you though;
a network can be as small as one display on an office wall or as big
as 50+ displays spread throughout a larger building.
LibreSignage also includes multi-user support with password authentication
and configurable access control to specific features. If a school wants
to setup a digital notice board system for example, they might give
every teacher an account with slide editing permissions so that teachers
could manage the content on the internal digital signage network. This
way the teachers could inform students about important things such as
upcoming tests for example.
LibreSignage uses an HTTP web server to serve content to the individual
signage displays. This means that the displays only need to run a web
browser pointed to the central LibreSignage server to actually display
content. This approach has a few advantages.
1. It's simple - No specific hardware/software platform is required.
Any system with a fairly recent web browser works.
2. It's cheap - You don't necessarily need to buy lots of expensive
equipment to get started. Just dust off the old PC in the closet,
install an up-to-date OS like Linux on it, install a web browser,
hide the mouse pointer by default and connect the system to a
display. That's it. The only other thing you need is the server,
which in fact can run on the same system if needed.
3. It's reliable - The web infrastructure is already implemented and
well tested so why not use it.
4. It makes editing easy - Displaying content in a browser has the
advantage of making slide previewing very simple. You can either
use the 'Live Preview' in the editor or check the exact results
from the actual 'Display' page that's displayed on the clients too.
2. Features
-----------
* Web interface for editing slides and managing the LibreSignage instance.
* Many per slide settings like durations, transitions, etc.
* Special markup syntax for easily formatting slides.
* Live preview of the slide markup in the slide editor.
* Support for embedding remote or uploaded image and video files.
* Support for scheduling specific slides for a specific time-frame.
* Collaboration features with other users on the network.
* Separate slide queues for different sets of signage clients.
* Multi user support with configurable access control.
* User management features for admin users in the web interface.
* Configurable quota for the amount of slides a user can create.
* Rate limited API for reducing server load.
* Extensive documentation of features including docs for developers.
* Extensive configuration possibilities.
3. Project goals
----------------
* Create a lightweight alternative to other digital signage solutions.
* Create a system that's both easy to set up and easy to use.
* Write a well documented and modular API so that implementing new
user interfaces is simple.
* Avoid scope creep.
* Document all features.
* Keep it simple.
4. Installation
---------------
LibreSignage has currently only been tested on Linux based systems;
however, it should be possible to run it on other systems as well. Running
LibreSignage on other systems will require some manual configuration
though, since the build and installation systems won't work out of the
box. The only requirement for running a LibreSignage server instance is
the Apache web server with PHP support, which should be available on most
systems. Building LibreSignage on the other hand requires some additional
software.
LibreSignage is designed to be used with Apache 2.0 and the default
install system is programmed to use Apache's Virtual Host configuration
features.
In a nutshell, Virtual Hosts are a way of hosting multiple websites on
one server, which is ideal in the case of LibreSignage. Using Virtual
Hosts makes it really simple to host one or more LibreSignage instances
on a server and adding or removing instances is also rather easy. You
can look up more information about Virtual Hosts on the
`Apache website <https://httpd.apache.org/docs/2.4/vhosts/>`_.
Doing a basic install of LibreSignage is quite simple. The required steps
are listed below.
1. Install software needed for building LibreSignage. You will need the
following packages: ``git, apache2, php, php-gd, pandoc, npm, make``.
On Debian Stretch all other packages except *npm* can installed by
running ``sudo apt install git apache2 php php-gd pandoc make``.
Currently *npm* is only available in the Debian Sid repos and even
there the package is so old it doesn't work correctly. You can,
however, install npm manually. See `6. How to install NPM`_ for
more info.
2. Use ``cd`` to move to the directory where you want to download the
LibreSignage repository.
3. Run ``git clone https://github.com/eerotal/LibreSignage.git``.
The repository will be cloned into the directory *LibreSignage/*.
4. Run ``cd LibreSignage`` to move into the LibreSignage repository.
5. Install dependencies from NPM by running ``npm install``.
6. Run ``make configure``. This script asks you to enter the
following configuration values.
* Document root (default: /var/www)
* The document root to use.
* Server name (domain)
* The domain name to use for configuring apache2. If you
don't have a domain and you are just testing the system,
       you can either use 'localhost', your machine's LAN IP or
a testing domain you don't actually own. If you use a testing
domain, you can add that domain to your */etc/hosts* file.
See the end of this section for more info.
* Server name aliases
* Admin name
* Shown to users on the main page.
* Admin email
* Shown to users on the main page.
* Enable debugging (y/N)
* Whether to enable debugging. N is default.
This command generates an instance configuration file needed
for building LibreSignage. The file is saved in ``build/`` as
``<DOMAIN>.iconf`` where ``<DOMAIN>`` is the domain name you
specified.
7. Run ``make`` to build LibreSignage. You can use the ``-j<MAXJOBS>``
CLI option to specify a maximum number of parallel jobs to speed up
the building process. The usual recommended value for the max number
of jobs is one per CPU core, meaning that for eg. a quad core CPU you
should use -j4. See `11. Make rules`_ for more advanced options.
8. Finally, to install LibreSignage, run ``sudo make install`` and answer
the questions asked.
After this the LibreSignage instance is fully installed and ready to be
used via the web interface. If you specified a domain name you don't
actually own just for testing the install, you can add it to your
*/etc/hosts* file to be able to test the site using a normal browser.
This only applies on Linux based systems of course. For example, if you
specified the server name *example.com*, you could add the following
line to your */etc/hosts* file.
``example.com 127.0.0.1``
This will redirect all requests for *example.com* to *127.0.0.1*
(loopback), making it possible to access the site by connecting
to *example.com*.
5. Default users
----------------
The initial configured users and their groups and passwords are listed
below. It goes without saying that you should create new users and
change the passwords if you intend to use LibreSignage on a production
system.
=========== ======================== ==========
User Groups Password
=========== ======================== ==========
admin admin, editor, display admin
user editor, display user
display display display
=========== ======================== ==========
6. How to install npm
---------------------
If npm doesn't exist in the repos of your Linux distribution of choice,
is very outdated (like in the case of Debian) or you are not using a
Linux based distribution at all, you must install it manually. You can
follow the installation instructions for your OS on the
`node.js website <https://nodejs.org/en/download/package-manager/>`_.
There are other ways to install npm too. One alternative way to install
npm is described below. *Note that if you use this method to install
npm, you shouldn't update npm via it's own update mechanism
(running npm install npm) since that will install the new version into
a different directory. To update npm when it's installed this way,
you should just follow steps 1-3 again.*
1. Download the *node.js* binaries for your system from
https://nodejs.org/en/download/.
2. Extract the tarball with ``tar -xvf <name of tarball>``.
3. Create a new directory ``/opt/npm`` and copy the extracted
files into it.
4. Run ``ln -s /opt/npm/bin/npm /usr/local/bin/npm`` and
``ln -s /opt/npm/bin/npx /usr/local/bin/npx``. You need to
be root when running these commands so prefix them with ``sudo``
or log in as root first.
5. Run ``cd ~/`` to go back to your home directory and verify the
installation by running ``npm -v``. This should now print the
installed npm version.
7. LibreSignage in GIT
----------------------
LibreSignage uses the GIT version control system. The LibreSignage
repository contains multiple branches that all have some differences.
master
The master branch always contains the latest stable version of
LibreSignage with all the latest backported fixes. If you just
    want to use a fully functioning version of LibreSignage, clone
this branch. The actual LibreSignage release points are also marked
in the GIT tree as annotated tags. You can clone a release tag too
but note that the latest patch release doesn't necessarily contain
the latest backports if new fixes have just been backported to master.
v<MAJOR>.<MINOR>.<PATCH>
These branches are release branches. Development for a specific
LibreSignage version happens in the release branch for that specific
version. A new release branch is created every time either the major
    or the minor version number changes. New release branches aren't created
for patch releases. Release branches are often quite stable and they
generally already work, but they might still contain serious bugs from
time to time.
feature/*, bugfix/*, ...
Branches that start with a category and have the branch name after
a forward slash are development branches. You normally shouldn't
clone these because they are actively being worked on and even
commit history might be rewritten from time to time. These branches
aren't meant to be used by anyone else other than the developers
working on the branch.
8. LibreSignage versioning
--------------------------
Each LibreSignage release has a designated version number of the
form MAJOR.MINOR.PATCH.
* The PATCH version is incremented for each patch release. Patch
releases only contain fixes and never contain new features.
* The MINOR version is incremented for every release where
incrementing the MAJOR number is not justified. Minor releases
can contain new features and bugfixes etc.
* The MAJOR version number is only incremented for very big and
major releases.
The LibreSignage API also has its own version number that's just
an integer which is incremented every time a backwards incompatible
API change is made.
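Note that the three parts of a version number order numerically, part by part, not as strings — 1.2.10 is newer than 1.2.9. A minimal sketch of that ordering (illustrative only; this helper is not part of LibreSignage, which is a PHP/JavaScript project):

```python
def parse_version(version):
    """Split a MAJOR.MINOR.PATCH string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

# Tuples compare element by element, so the ordering matches the scheme:
newer = parse_version("1.2.10") > parse_version("1.2.9")      # True
major_wins = parse_version("2.0.0") > parse_version("1.9.9")  # True
```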
9. FAQ
------
Why doesn't LibreSignage use framework/library X?
To avoid bloat; LibreSignage is designed to be minimal and lightweight
and it only uses external libraries where they are actually needed.
Most UI frameworks for example are huge. LibreSignage does use
Bootstrap though, since it's a rather clean and simple framework.
Why doesn't LibreSignage have feature X?
You can suggest new features in the bug tracker. If you know a bit
about programming in PHP, JS, HTML and CSS, you can also implement
the feature yourself and create a pull request.
Is LibreSignage really free?
YES! In fact LibreSignage is not only free, it's also open source.
You can find information about the LibreSignage license in the
`15. License`_ section.
10. Screenshots
---------------
Open these images in a new tab to view the full resolution versions.
*Note that these screenshots are always the latest ones no matter what
branch or commit you are viewing.*
**LibreSignage Login**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/login.png
:width: 320 px
:height: 180 px
**LibreSignage Control Panel**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/control.png
:width: 320 px
:height: 180 px
**LibreSignage Editor**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/editor.png
:width: 320 px
:height: 180 px
**LibreSignage Media Uploader**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/media_uploader.png
:width: 320 px
:height: 180 px
**LibreSignage User Manager**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/user_manager.png
:width: 320 px
:height: 180 px
**LibreSignage User Settings**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/user_settings.png
:width: 320 px
:height: 180 px
**LibreSignage Display**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/display.png
:width: 320 px
:height: 180 px
**LibreSignage Documentation**
.. image:: http://etal.mbnet.fi/libresignage/v0.2.0/docs.png
:width: 320 px
:height: 180 px
11. Make rules
--------------
The following ``make`` rules are implemented in the makefile.
all
The default rule that builds the LibreSignage distribution.
install
Install LibreSignage. This copies the LibreSignage distribution files
into a virtual host directory in the configured document root.
utest
Run the LibreSignage unit testing scripts. Note that you must install
LibreSignage before running this rule.
clean
Clean files generated by building LibreSignage.
realclean
Same as *clean* but removes all generated files too. This rule
effectively resets the LibreSignage directory to how it was right
after cloning the repo.
LOC
Count the lines of code in LibreSignage.
LOD
Count the lines of documentation in LibreSignage. This target will
only work after building LibreSignage since the documentation lines
are counted from the docs in the dist/ directory. This way the
generated API endpoint docs can be taken into account too.
You can also pass some other arguments to the LibreSignage makefile.
INST=<config file> - (default: Last generated config.)
Manually specify a config file to use.
VERBOSE=<y/n> - (default: y)
Print verbose log output.
NOHTMLDOCS=<y/n> - (default: n)
Don't generate HTML documentation from the reStructuredText docs
or the API endpoint files. This setting can be used with make rules
that build files. Using it with eg. ``make install`` has no effect.
12. Documentation
-----------------
LibreSignage documentation is written in reStructuredText, which is
a plaintext format often used for writing technical documentation.
The reStructuredText syntax is also human-readable as-is, so you can
read the documentation files straight from the source tree. The docs
are located in the directory *src/doc/rst/*.
The reStructuredText files are also compiled into HTML when LibreSignage
is built and they can be accessed from the *Help* page of LibreSignage.
13. Third-party dependencies
----------------------------
Bootstrap (Library, MIT License)
Copyright (c) 2011-2016 Twitter, Inc.
JQuery (Library, MIT License)
Copyright JS Foundation and other contributors, https://js.foundation/
Popper.JS (Library, MIT License)
Copyright (C) 2016 Federico Zivolo and contributors
Ace (Library, 3-clause BSD License)
Copyright (c) 2010, Ajax.org B.V. All rights reserved.
Raleway (Font, SIL Open Font License 1.1)
Copyright (c) 2010, Matt McInerney (matt@pixelspread.com),
Copyright (c) 2011, Pablo Impallari (www.impallari.com|impallari@gmail.com),
Copyright (c) 2011, Rodrigo Fuenzalida (www.rfuenzalida.com|hello@rfuenzalida.com),
with Reserved Font Name Raleway
Montserrat (Font, SIL Open Font License 1.1)
Copyright 2011 The Montserrat Project Authors (https://github.com/JulietaUla/Montserrat)
Inconsolata (Font, SIL Open Font License 1.1)
Copyright 2006 The Inconsolata Project Authors (https://github.com/cyrealtype/Inconsolata)
Font-Awesome (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
Font Awesome Free 5.1.0 by @fontawesome - https://fontawesome.com
The full licenses for these third party libraries and resources can be
found in the file *src/doc/rst/LICENSES_EXT.rst* in the source
distribution.
14. Build system dependencies
-----------------------------
* SASS (https://sass-lang.com/)
* Browserify (http://browserify.org/)
* PostCSS (https://postcss.org/)
* Autoprefixer (https://github.com/postcss/autoprefixer)
15. License
-----------
LibreSignage is licensed under the BSD 3-clause license, which can be
found in the files *LICENSE.rst* and *src/doc/rst/LICENSE.rst* in the
source distribution. Third party libraries and resources are licensed
under their respective licenses. See `13. Third-party dependencies`_ for
more information.
Copyright Eero Talus 2018
User Interface Guide
====================
.. toctree::
:maxdepth: 2
link_tutorial.rst
merging
components.rst
custom_subsets.rst
spectrum.rst
slice.rst
dendro.rst
configuration.rst
zepid.base.RiskRatio
====================
.. currentmodule:: zepid.base
.. autoclass:: RiskRatio
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~RiskRatio.__init__
~RiskRatio.fit
~RiskRatio.plot
~RiskRatio.summary
.. _Connection_Strings:
*******************
Connection Strings
*******************
BrightstarDB makes use of connection strings for accessing both embedded and remote
BrightstarDB instances. The following section describes the different connection string
properties.
**Type** : allowed values **embedded**, **http**, **tcp**, and **namedpipe**. This indicates
the type of connection to create.
**StoresDirectory** : value is a file system path to the directory containing all BrightstarDB
data. Only valid for use with **Type** set to **embedded**.
**Endpoint** : a URI that points to the service endpoint for the specified remote service.
Valid for **http**, **tcp**, and **namedpipe**
**StoreName** : The name of a specific store to connect to.
The following are examples of connection strings. Property value pairs are separated by ';'
and property names are case insensitive.::
"type=http;endpoint=http://localhost:8090/brightstar;storename=test"
"type=tcp;endpoint=net.tcp://localhost:8095/brightstar;storename=test"
"type=namedpipe;endpoint=net.pipe://localhost/brightstar;storename=test"
"type=embedded;storesdirectory=c:\\brightstar;storename=test"
===============
Keystone tokens
===============
Tokens are used to authenticate and authorize your interactions with the
various OpenStack APIs. Tokens come in many flavors, representing various
authorization scopes and sources of identity. There are also several different
"token providers", each with their own user experience, performance, and
deployment characteristics.
Authorization scopes
--------------------
Tokens can express your authorization in different scopes. You likely have
different sets of roles, in different projects, and in different domains.
While tokens always express your identity, they may only ever express one set
of roles in one authorization scope at a time.
Each level of authorization scope is useful for certain types of operations in
certain OpenStack services, and are not interchangeable.
Unscoped tokens
~~~~~~~~~~~~~~~
An unscoped token contains neither a service catalog, any roles, a project
scope, nor a domain scope. Their primary use case is simply to prove your
identity to keystone at a later time (usually to generate scoped tokens),
without repeatedly presenting your original credentials.
The following conditions must be met to receive an unscoped token:
* You must not specify an authorization scope in your authentication request
(for example, on the command line with arguments such as
``--os-project-name`` or ``--os-domain-id``),
* Your identity must not have a "default project" associated with it that you
also have role assignments, and thus authorization, upon.
Project-scoped tokens
~~~~~~~~~~~~~~~~~~~~~
Project-scoped tokens are the bread and butter of OpenStack. They express your
authorization to operate in a specific tenancy of the cloud and are useful to
authenticate yourself when working with most other services.
They contain a service catalog, a set of roles, and details of the project upon
which you have authorization.
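The difference between the scopes is visible in the authentication request itself. As a sketch of the Identity v3 ``POST /v3/auth/tokens`` request body (the field layout follows the v3 "password" method; names such as ``demo`` are placeholders, not defaults): omitting the ``scope`` member asks for an unscoped token, while adding it asks for a project-scoped one.

```python
def auth_body(user, password, domain="Default", project=None):
    """Build a sketch of an Identity v3 authentication request body."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            }
        }
    }
    if project is not None:
        # The presence of a scope is what requests a project-scoped token.
        body["auth"]["scope"] = {
            "project": {"name": project, "domain": {"name": domain}}
        }
    return body

unscoped = auth_body("demo", "secret")
scoped = auth_body("demo", "secret", project="demo-project")
```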
Domain-scoped tokens
~~~~~~~~~~~~~~~~~~~~
Domain-scoped tokens also have limited use cases in OpenStack. They express
your authorization to operate a domain-level, above that of the user and
projects contained therein (typically as a domain-level administrator).
Depending on Keystone's configuration, they are useful for working with a
single domain in Keystone.
They contain a limited service catalog (only those services which do not
explicitly require per-project endpoints), a set of roles, and details of the
project upon which you have authorization.
They can also be used to work with domain-level concerns in other services,
such as to configure domain-wide quotas that apply to all users or projects in
a specific domain.
Token providers
---------------
The token type issued by keystone is configurable through the
``/etc/keystone/keystone.conf`` file. Currently, there are four supported
token types and they include ``UUID``, ``fernet``, ``PKI``, and ``PKIZ``.
UUID tokens
~~~~~~~~~~~
UUID was the first token type supported and is currently the default token
provider. UUID tokens are 32 bytes in length and must be persisted in a back
end. Clients must pass their UUID token to the Identity service in order to
validate it.
As mentioned above, UUID tokens must be persisted. By default, keystone
persists UUID tokens using a SQL backend. An unfortunate side-effect is that
the size of the database will grow over time regardless of the token's
expiration time. Expired UUID tokens can be pruned from the backend using
keystone's command line utility:
.. code-block:: bash
$ keystone-manage token_flush
We recommend invoking this command periodically using ``cron``.
.. NOTE::
It is not required to run this command at all if using Fernet tokens. Fernet
tokens are not persisted and do not contribute to database bloat.
Fernet tokens
~~~~~~~~~~~~~
The fernet token format was introduced in the OpenStack Kilo release. Unlike
the other token types mentioned in this document, fernet tokens do not need to
be persisted in a back end. ``AES256`` encryption is used to protect the
information stored in the token and integrity is verified with a ``SHA256
HMAC`` signature. Only the Identity service should have access to the keys used
to encrypt and decrypt fernet tokens. Like UUID tokens, fernet tokens must be
passed back to the Identity service in order to validate them. For more
information on the fernet token type, see the :doc:`identity-fernet-token-faq`.
PKI and PKIZ tokens
~~~~~~~~~~~~~~~~~~~
PKI tokens are signed documents that contain the authentication context, as
well as the service catalog. Depending on the size of the OpenStack deployment,
these tokens can be very long. The Identity service uses public/private key
pairs and certificates in order to create and validate PKI tokens.
The same concepts from PKI tokens apply to PKIZ tokens. The only difference
between the two is PKIZ tokens are compressed to help mitigate the size issues
of PKI. For more information on the certificate setup for PKI and PKIZ tokens,
see the :doc:`identity-certificates-for-pki`.
.. note::
PKI and PKIZ tokens are deprecated and not supported in Ocata.
FCL files
=========
.. doxygenclass:: cui::fcl::dataset
.. doxygenclass:: cui::fcl::dataset_v2
.. doxygenclass:: cui::fcl::dataset_factory
.. doxygennamespace:: cui::fcl::groups
.. doxygenclass:: cui::fcl::group_impl_factory
.. doxygenclass:: cui::fcl::t_import_feedback
.. doxygenclass:: cui::fcl::t_export_feedback
Steepest Ascent Examples
========================
These are examples for the :ref:`Hill-Climbing with Steepest Ascent <optimization-optimizers-steepestascent>` meta-heuristic.
Contents:
* :ref:`Normal Distribution Example <optimization-optimizers-steepestascent-normalexample>`
* :ref:`Needle in a Haystack Example <optimization-optimizers-steepestascent-needleinahaystack>`
* :ref:`Gaussian Convolution Noisy Example <optimization-optimizers-steepestascent-gaussianconvolution>`
* :ref:`Gaussian Convolution Normal Example <optimization-optimizers-steepestascent-gaussianconvolution-normal>`
* :ref:`Gaussian Convolution Sphere Example <optimization-optimizers-steepestascent-gaussianconvolution-sphere>`
.. _optimization-optimizers-steepestascent-normalexample:
A Normal Distribution Example
-----------------------------
The SteepestAscent climber with UniformConvolution is more aggressive than the hill-climber but still has a problem with local optima, so I'll just test it on the normal data here.
.. uml::
HillClimber o- XYSolution
HillClimber o- XYTweak
HillClimber o- UniformConvolution
HillClimber o- StopConditionIdeal
HillClimber o- NormalSimulation
.. currentmodule:: optimization.optimizers.steepestascent

.. autosummary::
   :toctree: api

   SteepestAscent

.. currentmodule:: optimization.datamappings.normalsimulation

.. autosummary::
   :toctree: api

   NormalSimulation

.. currentmodule:: optimization.components.stopcondition

.. autosummary::
   :toctree: api

   StopConditionIdeal

.. currentmodule:: optimization.components.convolutions

.. autosummary::
   :toctree: api

   UniformConvolution

.. currentmodule:: optimization.components.xysolution

.. autosummary::
   :toctree: api

   XYSolution
   XYTweak
::

    # python standard library
    from collections import OrderedDict

    # third party
    import numpy

    # this package
    from optimization.datamappings.normalsimulation import NormalSimulation
    from optimization.components.stopcondition import StopConditionIdeal
    from optimization.components.convolutions import (UniformConvolution,
                                                      GaussianConvolution)
    from optimization.components.xysolution import XYSolution, XYTweak
    from optimization.optimizers.steepestascent import SteepestAscent

::

    outcomes = OrderedDict()
    simulator = NormalSimulation(domain_start=-4,
                                 domain_end=4,
                                 steps=1000)

    stop = StopConditionIdeal(ideal_value=simulator.ideal_solution,
                              delta=0.0001,
                              time_limit=300)

    tweak = UniformConvolution(half_range=0.1,
                               lower_bound=simulator.domain_start,
                               upper_bound=simulator.domain_end)
    xytweak = XYTweak(tweak)

    # try a bad-case to start
    inputs = numpy.array([simulator.domain_start])
    candidate = XYSolution(inputs=inputs)

    climber = SteepestAscent(solution=candidate,
                             stop_condition=stop,
                             tweak=xytweak,
                             quality=simulator,
                             local_searches=4)
    outcomes['Uniform Normal'] = run_climber(climber)

::

    Solution: Inputs: [-0.01712284] Output: 0.398862340139
    Ideal: 0.398939082483
    Difference: -7.67423442817e-05
    Elapsed: 0.008633852005
    Quality Checks: 601
    Comparisons: 300.0
    Solutions: 58
    Solutions/Comparisons: 0.193333333333

The number of quality checks is related to the number of comparisons made between candidate solutions::

    if Quality(candidate) > Quality(solution):
        solution = candidate

Each comparison uses two checks and there is an initializing check before the optimization starts (to establish the first candidate as the best solution found so far). So the total number of comparisons is:
.. math::

   Comparisons \gets \frac{QualityChecks - 1}{2}

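As a quick sanity check of the formula, a tiny helper (mine, not part of the package) reproduces the 300 comparisons reported for the 601 quality checks above:

```python
def comparisons_from_checks(quality_checks):
    # One initializing check, then two quality checks per comparison.
    return (quality_checks - 1) / 2

print(comparisons_from_checks(601))  # -> 300.0, matching the run above
```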
.. figure:: figures/normal_steepest_ascent.svg
The x-axis in the figure shows the number of times a better solution was found and the y-axis is the value (quality) of the solution. In this case the solution is a point on the x-axis and the quality is the height of the curve for the solution.
.. figure:: figures/steepest_ascent_normal_data.svg
The blue-line in the figure is a plot of the data-set while the red line is the solution found by the hill-climber.
Changing the Parameters
-----------------------
The two main parameters that will affect the performance of the climber will be the number of local searches (``SteepestAscent.local_searches``) it does and the size of the random changes it makes (``UniformConvolution.half_range``). In this case we know that the input-data is actually one-dimensional and the distribution is unimodal so it might help to reduce the number of local searches and increase the half-range to see if it will climb faster.
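To make the role of these two parameters concrete, here is a minimal sketch of a single steepest-ascent iteration; the names and structure are my own illustration, not the package's actual ``SteepestAscent`` code:

```python
import random

def steepest_ascent_step(solution, quality, tweak, local_searches):
    # Sample several tweaks of the current solution and keep the best one.
    best = tweak(solution)
    for _ in range(local_searches - 1):
        candidate = tweak(solution)
        if quality(candidate) > quality(best):
            best = candidate
    # Only replace the current solution if the best tweak improves on it.
    if quality(best) > quality(solution):
        solution = best
    return solution

# Example: climb toward 0 with uniform tweaks of half-range 0.5.
quality = lambda x: -abs(x)
tweak = lambda x: x + random.uniform(-0.5, 0.5)
x = 5.0
for _ in range(100):
    x = steepest_ascent_step(x, quality, tweak, local_searches=4)
```

More local searches per iteration means each step is better informed but costs more quality checks; a larger tweak range covers more ground per step at the risk of overshooting.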
::

    candidate.output = None
    climber._solutions = None
    climber.solution = candidate
    stop._end_time = None

    climber.local_searches = 8
    tweak.half_range = 1
    run_climber(climber)

::

    Solution: Inputs: [-0.01748886] Output: 0.398862340139
    Ideal: 0.398939082483
    Difference: -7.67423442817e-05
    Elapsed: 0.00139498710632
    Quality Checks: 692
    Comparisons: 345.5
    Solutions: 5
    Solutions/Comparisons: 0.0144717800289

I tried different numbers of local searches, and the climber actually seems to take longer with either fewer or more of them. With too few, you run a greater risk of moving in the wrong direction; with too many, you multiply the number of searches made before checking the overall best solution, so more comparisons are inevitably used. The parameters have to be tuned to the data with a certain amount of trial and error.
.. _optimization-optimizers-steepestascent-needleinahaystack:
Needle In a Haystack
--------------------
Now a :ref:`Needle in a Haystack <optimization-simulations-needle-in-haystack>` case.
::

    simulator.reset()
    simulator.domain_start = -100
    simulator.domain_end = 150
    simulator.steps = 10000

    candidate.output = None
    climber._solutions = None
    climber.solution = candidate
    climber.emit = False

    stop._end_time = None
    stop.ideal_value = simulator.ideal_solution
    stop.delta = 0.001

    outcomes['Uniform Needle'] = run_climber(climber)

::

    Solution: Inputs: [ 0.03634837] Output: 0.398697954224
    Ideal: 0.398922329796
    Difference: -0.000224375571834
    Elapsed: 0.00327515602112
    Quality Checks: 127
    Comparisons: 63.0
    Solutions: 7
    Solutions/Comparisons: 0.111111111111

.. figure:: figures/needle_haystack_steepest_ascent.svg
.. figure:: figures/steepest_ascent_uniform_needle_haystack_data.svg
Since the solutions are randomly generated, the figures don't look exactly the same every time, but usually the solutions plot for the *needle in a haystack* case will show a large dip as the hill-climber accidentally overshoots the peak. Since we're using *Steepest Ascent Hill Climbing* it can usually find its way back, as long as the curve has information to guide it and the amount of randomization is small enough that it will eventually find the peak. In this case I'm actually cheating by using a normal curve, since it always has a slope leading to the peak. A true needle-in-a-haystack case would have flat ends, and this hill-climber has no real way to find that case except by chance.
.. '
.. _optimization-optimizers-steepestascent-gaussianconvolution:
Using Gaussian Convolution
--------------------------
The UniformConvolution used as the tweak tends to get stuck in local optima. You can make the half-range larger, but then it has a harder time finding an optimum as it approaches pure randomness. One way to improve the hill-climbers is to sample random values from a normal distribution. Since 68% of the points are within one standard deviation of the mean, 95% within two, 99% within three, and so on, most sampled points stay centered around the mean (0 for the standard normal distribution), and only occasionally will you get samples that are far from the mean.
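The idea can be sketched as a bounded Gaussian tweak; this is a minimal illustration, and the function name and resample-until-in-bounds strategy are my assumptions rather than the package's actual ``GaussianConvolution``:

```python
import random

def gaussian_tweak(value, sigma=1.0, lower=-100.0, upper=150.0):
    # Add normally distributed noise; resample until the result stays in bounds.
    candidate = value + random.gauss(0, sigma)
    while not (lower <= candidate <= upper):
        candidate = value + random.gauss(0, sigma)
    return candidate
```

Most tweaks land close to the current value, but the Gaussian's tails occasionally produce the large jump needed to escape a local optimum.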
As a comparison, I'll first use a data-set that has local optima. The UniformConvolution doesn't always find the solution (because it gets stuck at a local optimum), so I'm only going to run the Gaussian convolution version.
::

    # change the randomization
    tweak = GaussianConvolution(lower_bound=simulator.domain_start,
                                upper_bound=simulator.domain_end)
    tweaker = XYTweak(tweak)
    climber._solutions = None
    climber.tweak = tweaker

    # change the dataset
    simulator.functions = [lambda x: numpy.sin(x),
                           lambda x: numpy.cos(x)**2]
    simulator._range = None
    simulator.quality_checks = 0

    candidate.output = None
    simulator(candidate)
    climber.solution = candidate

    stop.ideal_value = simulator.ideal_solution
    stop._end_time = None

    # run the optimization
    outcomes['Gaussian Noise'] = run_climber(climber)

::

    Solution: Inputs: [ 0.43583544] Output: 1.60675092559
    Ideal: 1.60675092559
    Difference: 0.0
    Elapsed: 0.33668088913
    Quality Checks: 13340
    Comparisons: 6669.5
    Solutions: 8
    Solutions/Comparisons: 0.00119949021666

.. figure:: figures/gaussian_convolution_steepest_ascent_solutions.svg
.. figure:: figures/gaussian_convolution_steepest_ascent_dataplot.svg
.. _optimization-optimizers-steepestascent-gaussianconvolution-normal:
Gaussian Convolution Normal Example
-----------------------------------
To see how the two algorithms compare we can re-run the normal example using the GaussianConvolution.
::

    simulator.reset()
    simulator.domain_start = -4
    simulator.domain_end = 4
    simulator.steps = 1000

    candidate.output = None
    climber._solutions = None
    climber.solution = candidate

    stop._end_time = None
    stop.ideal_value = simulator.ideal_solution

    outcomes['Gaussian Normal'] = run_climber(climber)

::

    Solution: Inputs: [ 0.00265114] Output: 0.398939082483
    Ideal: 0.398939082483
    Difference: 0.0
    Elapsed: 0.00111103057861
    Quality Checks: 73
    Comparisons: 36.0
    Solutions: 4
    Solutions/Comparisons: 0.111111111111

.. figure:: figures/steepest_ascent_gaussian_convolution_normal_solutions.svg
.. figure:: figures/steepest_ascent_gaussian_convolution_normal_dataset.svg
.. _optimization-optimizers-steepestascent-gaussianconvolution-needle:
Gaussian Convolution Needle Example
-----------------------------------
::

    simulator.reset()
    simulator.domain_start = -100
    simulator.domain_end = 150
    simulator.steps = 10000

    candidate.output = None
    climber._solutions = None
    climber.solution = candidate

    stop._end_time = None
    stop.ideal_value = simulator.ideal_solution

    outcomes['Gaussian Needle'] = run_climber(climber)

::

    Solution: Inputs: [-0.03550444] Output: 0.398623190415
    Ideal: 0.398922329796
    Difference: -0.000299139380939
    Elapsed: 0.00325894355774
    Quality Checks: 127
    Comparisons: 63.0
    Solutions: 4
    Solutions/Comparisons: 0.0634920634921

.. figure:: figures/steepest_ascent_gaussian_convolution_needle_solutions.svg
.. figure:: figures/steepest_ascent_gaussian_convolution_needle_dataset.svg
.. _optimization-optimizers-steepestascent-gaussianconvolution-sphere:
Gaussian Convolution Sphere
---------------------------
This uses a 3-d spherical dataset.
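For reference, the quality function here is the conventional sphere benchmark (assuming ``SphereMapping`` uses the standard definition; the helper below is only an illustration), which over the :math:`\pm 5.12` domain is maximized at the corners:

```python
def sphere(inputs):
    # Conventional sphere benchmark: the sum of squared coordinates.
    return sum(x ** 2 for x in inputs)

print(round(sphere([5.12, -5.12]), 4))  # -> 52.4288
```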
::

    import numpy
    from optimization.datamappings.examples.functions import SphereMapping

    # for plotting only we want few steps
    plot_sphere = SphereMapping(steps=120)

    # for data we want more
    data_sphere = SphereMapping()

::

    # matplotlib is needed for the surface plot
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D

    output = 'figures/sphere_plot.svg'
    figure = plt.figure()
    axe = figure.add_subplot(111, projection='3d')

    X = numpy.linspace(-5.12, 5.12, 100)
    Y = numpy.linspace(-5.12, 5.12, 100)
    X, Y = numpy.meshgrid(X, Y)
    Z = X**2 + Y**2

    surface = axe.plot_wireframe(X,
                                 Y,
                                 Z,
                                 rstride=5, cstride=5)
    figure.savefig(output)

::

    # change the data source to the sphere mapping
    simulator = data_sphere.mapping
    climber.quality = simulator

    # change the limits of the tweak
    tweak.lower_bound = data_sphere.start
    tweak.upper_bound = data_sphere.stop

    # change the candidate to 2D
    candidate.inputs = numpy.array([0, 0])
    candidate.output = None
    climber._solutions = None
    climber.solution = candidate

    stop._end_time = None
    stop.ideal_value = simulator.ideal

    outcomes['Gaussian Sphere'] = run_climber(climber)

::

    Solution: Inputs: [ 5.12 -5.12] Output: 52.4288
    Ideal: 52.4288
    Difference: 0.0
    Elapsed: 0.00174617767334
    Quality Checks: 109
    Comparisons: 54.0
    Solutions: 6
    Solutions/Comparisons: 0.111111111111

.. figure:: figures/sphere_plot.svg
.. csv-table:: Run-Time Comparisons
   :header: ,Solutions, Comparisons, Solutions/Comparisons

   Uniform Normal,58.000,300.000,0.193
   Uniform Needle,7.000,63.000,0.111
   Gaussian Noise,8.000,6669.500,0.001
   Gaussian Normal,4.000,36.000,0.111
   Gaussian Needle,4.000,63.000,0.063
   Gaussian Sphere,6.000,54.000,0.111
.. Documentation/source/reference/aft-string/types/aft-maybe-int.rst (repo: Vavassor/AdString, license: CC0-1.0)

AftMaybeInt
===========
.. c:type:: AftMaybeInt

   An optional type representing either an :c:type:`int` or nothing.

   .. c:member:: bool valid

      True when its value is valid.

   .. c:member:: int value

      An :c:type:`int` that may be invalid.
.. README.rst (repo: randomir/qrbg-cpp, license: MIT)

Quantum Random Bit Generator Service
====================================
The work on `QRBG Service`_ has been motivated by scientific necessity
(primarily of local scientific community) of running various simulations, whose
results are often greatly affected by quality (distribution, nondeterminism,
entropy, etc.) of used random numbers. Since true random numbers are impossible
to generate with a finite state machine (such as today's computers), scientists
are forced to either use specialized expensive hardware number generators, or,
more frequently, to content themselves with suboptimal solutions (like
pseudo-random numbers generators).
The Service began as an attempt to fulfill the scientists' needs for quality
random numbers, but has now grown into a global (public) high-quality
random-number service.
Design requirements for our service [1]_ were:

- true randomness of data served (high per-bit-entropy of served data),
- high speed of data generation and serving,
- high availability of the service (including easy and transparent access to
  random data),
- great robustness of the service, and
- high security for users that require it.
So far, all these features, except the last one, have been implemented. And the
solution developed tops other currently available random number acquisition
methods (including existing Internet services) in at least one of the listed
categories.
To ensure high quality of the supplied random numbers (true randomness) and high
speed of serving, we have used a fast, non-deterministic, stand-alone hardware
number generator relying on photonic emission in semiconductors. The used
`Quantum Random Bit Generator`_ was previously developed at Rudjer Boskovic
Institute, in Laboratory for Stochastic Signals and Process Research.
QRBG C++ Client
---------------
The C++ interface to the QRBG service is provided through the ``QRBG`` class, as
defined in `src`_. The library can be compiled together with your source, or it
can be installed as a shared Linux library (``libqrbg``) and linked dynamically.
Installation
------------
The easiest way to install QRBG as a shared library (on 64-bit Linux
architectures) is from packages in ``dist/``. On Debian/Ubuntu::

    $ dpkg -i dist/libqrbg_0.4.0-1_amd64.deb

On RHEL/Fedora/CentOS/SUSE/Mandriva/SciLinux use the RPM package, e.g.::

    $ yum install dist/libqrbg-0.4.0-2.x86_64.rpm

On other configurations, you can install from source, in which case you should
have ``autotools`` and ``libtool``. To build and install, type::

    $ ./autogen.sh
    $ ./configure && make
    $ sudo make install

Example Usage
-------------
.. code-block:: c++

    #include "QRBG.h"

    int main() {
        QRBG rndService;
        rndService.defineServer("random.irb.hr", 1227);
        rndService.defineUser("username", "password");

        int a = rndService.getInt();
        double b = rndService.getDouble();
    }

For more examples, see `test`_.
Build
-----
You can embed the library, or link it with a system-wide shared lib. To
illustrate, we'll build ``example0.cpp``::
    $ cd test
    $ g++ -Wall -O6 example0.cpp ../src/*.cpp -I ../src -o example0.out

or, use ``make``::

    $ make linked

To link it with the shared library::

    $ g++ -Wall -O6 example0.cpp -l qrbgcpp -o example0.out

or::

    $ make shared

Copyright and License
---------------------
QRBG service C++ library is Copyright (c) 2007 Radomir Stevanovic and Rudjer
Boskovic Institute. Licensed under the MIT license. See the LICENSE file for
full details.
.. _`QRBG Service`: http://random.irb.hr/
.. _`Quantum Random Bit Generator`: http://qrbg.irb.hr/
.. [1] R. Stevanovic, G. Topic, K. Skala, M. Stipcevic and B.M. Rogina,
"Quantum Random Bit Generator Service for Monte Carlo and Other
Stochastic Simulations," in Lecture Notes in Computer Science, Vol. 4818,
I. Lirkov, S. Margenov, J. Wasniewski (Eds.), Springer-Verlag: Berlin
Heidelberg, 2008, pp. 508-515. (http://www.springer.com/computer/theoretical+computer+science/book/978-3-540-78825-6)
.. _`src`: https://github.com/randomir/qrbg-cpp/tree/master/src
.. _`test`: https://github.com/randomir/qrbg-cpp/tree/master/test
.. README.rst (repo: hanztura/simple-facebook-clone-python, license: MIT)

###############################
Simple Facebook Clone in Python
###############################
This project is part of a free guide on python programming for absolute beginners.
The goal of this project is to practice the concepts covered in that guide:

- Data Types and Structures
- Operations, Variables, Flow Control, and Loops
- Functions
- Modules, Files, and Exceptions
- Class
We will not be building a Facebook clone that is as beautiful or as complicated as the original Facebook.
What we will have is a simulation of basic Facebook features and actions. These include only the following:

- Sign up
- Log in
- Log out
- Show friends list
- Add a friend
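A minimal sketch of the kind of data structures and functions these features exercise (the names are illustrative, not the project's actual code):

```python
users = {}  # username -> {"password": str, "friends": set}

def sign_up(username, password):
    # Reject duplicate usernames.
    if username in users:
        return False
    users[username] = {"password": password, "friends": set()}
    return True

def log_in(username, password):
    return username in users and users[username]["password"] == password

def add_friend(a, b):
    # Friendship is mutual, so record it on both sides.
    users[a]["friends"].add(b)
    users[b]["friends"].add(a)
```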
In your terminal, run: ``python3 main.py``
.. README.rst (repo: ivco19/countries, license: BSD-3-Clause)

covid19_iate
============
Tools to analyze covid19 data:

- Construct the infection curve from a given model
- Predict the evolution of the infection curve using current data
- Compare the behaviour of the infection curve to those of other countries
.. image:: https://readthedocs.org/projects/ivcov19-countries/badge/?version=latest
   :target: https://ivcov19-countries.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

This repository is part of the `ARCOVID project <https://arcovid19.readthedocs.io/en/latest/>`_.
Authors
-------
- Dr. Juan B Cabral (CIFASIS-UNR, IATE-OAC-UNC).
- Sr. Mauricio Koraj (Liricus SRL.).
- Lic. Vanessa Daza (IATE-OAC-UNC, FaMAF-UNC).
- Dr. Mariano Dominguez (IATE-OAC-UNC, FaMAF-UNC).
- Dr. Marcelo Lares (IATE-OAC-UNC, FaMAF-UNC).
- Mgt. Nadia Luczywo (LIMI-FCEFyN-UNC, IED-FCE-UNC, FCA-IUA-UNDEF)
- Dr. Dante Paz (IATE-OAC-UNC, FaMAF-UNC).
- Dr. Rodrigo Quiroga (INFIQC-CFQ, FCQ-UNC).
- Dr. Martín de los Ríos (ICTP-SAIFR).
- Dr. Bruno Sanchez (Department of Physics, Duke University).
- Dr. Federico Stasyszyn (IATE-OAC, FaMAF-UNC).
**Affiliations:**

- `Centro Franco Argentino de Ciencias de la Información y de Sistemas (CIFASIS-UNR) <https://www.cifasis-conicet.gov.ar/>`_
- `Instituto de Astronomía Teórico y Experimental (IATE-OAC-UNC) <http://iate.oac.uncor.edu/>`_
- `Facultad de Matemática Física y Computación (FaMAF-UNC) <https://www.famaf.unc.edu.ar/>`_
- `Laboratorio de Ingeniería y Mantenimiento Industrial (LIMI-FCEFyN-UNC) <https://fcefyn.unc.edu.ar/facultad/secretarias/investigacion-y-posgrado/-investigacion/laboratorio-de-ingenieria-y-mantenimiento-industrial/>`_
- `Instituto De Estadística Y Demografía - Facultad de Ciencias Económicas (IED-FCE-UNC) <http://www.eco.unc.edu.ar/instituto-de-estadistica-y-demografia>`_
- `Department of Physics, Duke University <https://phy.duke.edu/>`_
- `Facultad de Ciencias de la Administración (FCA-IUA-UNDEF) <https://www.iua.edu.ar/>`_
- `Instituto de Investigaciones en Físico-Química de Córdoba (INFIQC-CONICET) <http://infiqc-fcq.psi.unc.edu.ar/>`_
- `Liricus SRL <http://www.liricus.com.ar/>`_
- ICTP South American Institute for Fundamental Research (ICTP-SAIFR)
.. blogger/2008/05/01/blog-post_21.rst (repo: hoamon/www-hoamon-info, license: BSD-3-Clause)

Hard to Believe, but You Still Have to Accept It
================================================================================
In the software field, age just isn't that important. The video is a talk given at Google; the topic is jQuery, and the speaker is a 12-year-old.
Old Comments in Blogger
--------------------------------------------------------------------------------
`魏藥 <http://www.blogger.com/profile/06111695002534492956>`_ at 2008-05-22T21:07:00.000+08:00:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So heroes really do come from the young ~^_^"

Besides jQuery, he is also an expert in Drupal, and he's very enthusiastic about the next GHOP ~@_@...
.. author:: default
.. categories:: chinese
.. tags:: google
.. comments::

.. README.rst (repo: yaraki0912/Project_1, license: MIT)

=================================
CHE 477 Langevin Dynamics Project
=================================
.. image:: https://img.shields.io/pypi/v/Project_1.svg
   :target: https://pypi.python.org/pypi/Project_1

.. image:: https://img.shields.io/travis/yaraki0912/Project_1.svg
   :target: https://travis-ci.org/yaraki0912/Project_1

.. image:: https://readthedocs.org/projects/Project-1/badge/?version=latest
   :target: https://Project-1.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

.. image:: https://pyup.io/repos/github/yaraki0912/Project_1/shield.svg
   :target: https://pyup.io/repos/github/yaraki0912/Project_1/
   :alt: Updates

.. image:: https://coveralls.io/repos/github/yaraki0912/Project_1/badge.svg?branch=master
   :target: https://coveralls.io/github/yaraki0912/Project_1?branch=master
* Free software: MIT license
* Documentation: https://Project-1.readthedocs.io.
Overview
--------
This is a one-dimensional Python implementation of a Langevin dynamics simulation. It uses Euler integration to calculate the position and velocity of a particle from the initial position, velocity, temperature, damping coefficient, time step, and total time the user inputs.
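The scheme can be illustrated with a short sketch (written under assumed conventions — reduced units with ``kB = 1`` and unit mass — and not the package's actual code):

```python
import math
import random

def langevin_euler_sketch(x0, v0, damping, temperature, dt, total_time,
                          mass=1.0, kB=1.0):
    """Euler integration of the Langevin equation (illustrative sketch).

    m*dv/dt = -damping*v + xi(t), where the random force xi is drawn each
    step from a normal distribution with variance 2*damping*kB*T/dt.
    """
    steps = int(total_time / dt)
    x, v = x0, v0
    trajectory = [(0.0, x, v)]
    for i in range(1, steps + 1):
        xi = math.sqrt(2 * damping * kB * temperature / dt) * random.gauss(0, 1)
        a = (-damping * v + xi) / mass
        v = v + a * dt
        x = x + v * dt
        trajectory.append((i * dt, x, v))
    return trajectory
```

With ``temperature=0`` the noise vanishes and the velocity simply decays at the damping rate, which is a quick way to sanity-check the integrator.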
Installation
------------
To install this project, run the following commands in your terminal::

    git clone git@github.com:yaraki0912/Project_1.git
    pip install Project_1

How to Use
----------
This simulator has a command-line interface where the user can input parameters using the command below (values are arbitrary)::

    py Project_1/Project_1.py --temperature 300 --total_time 1000 --time_step 0.1 --initial_position 0 --initial_velocity 0 --damping_coefficient 0.1

Outputs are a text file, a histogram, and a trajectory plot.
The result is a text file containing the position and velocity at each time step.
histogram.png plots how many times out of 100 runs the particle hits the wall at each time, for the given condition.
trajectory.png plots the path of the particle until it hits the wall; the graph represents one of the 100 runs performed above.
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
.. docs/source/correct_programs/common.rst (repo: mristin/python-by-contract-corpus, license: MIT)

******
Common
******
This is a module containing common functions shared among the different solutions.
While we tried to make solutions as stand-alone as possible, we could not help but encapsulate a couple of patterns to help readability.
.. automodule:: python_by_contract_corpus.common
   :special-members:
   :members:
   :exclude-members: __abstractmethods__, __module__, __annotations__
.. docs/source/Text Editor/Atom/Customize-Package/README.rst (repo: MacHu-GWU/Dev-Exp-Share, license: MIT)

Customize Hackable Packages
===========================
As GitHub's own code editor, Atom is filled with the spirit of open source.
From the start, Atom has supported third-party packages through a very open API. The source code and configuration files of most packages can be modified.

Here is how:

1. Open ``Settings``: press ``Ctrl/Command + Shift + P`` to bring up the Command Palette, then type ``Settings`` to open the Settings panel. Alternatively, open Preferences from the menu bar.
2. Go to ``Packages`` and find the package you want to customize.
3. Choose ``view code`` (it is not easy to find).

You can study the package author's source code on GitHub to work out how to customize it.
.. doc/anatomy.rst (repo: dweiss044/AFQ-Browser, license: BSD-3-Clause)

.. _anatomy:

anatomy.js
----------
.. js:autofunction:: afqb.three.initAndAnimate

.. js:autofunction:: afqb.three.buildthreeGui

.. js:autofunction:: afqb.three.init

.. js:autofunction:: afqb.three.onWindowResize

.. js:autofunction:: afqb.three.animate

.. js:autofunction:: afqb.three.brushOn3D

.. js:autofunction:: afqb.three.lightUpdate

.. js:autofunction:: afqb.three.makeInvisible

.. js:autofunction:: afqb.three.makeVisible

.. js:autofunction:: afqb.three.highlightBundle

.. js:autofunction:: afqb.three.mouseoutBundle

.. js:autofunction:: afqb.three.mouseoverBundle
.. api/sphinx/tempsource/dabo.ui.uiwx.dPemMixin.dPemMixin.rst (repo: EdLeafe/dabodoc, license: MIT)
.. include:: _static/headings.txt
.. module:: dabo.ui.uiwx.dPemMixin
.. _dabo.ui.uiwx.dPemMixin.dPemMixin:
============================================
|doc_title| **dPemMixin.dPemMixin** - class
============================================
Provides Property/Event/Method interfaces for dForms and dControls.
Subclasses can extend the property sheet by defining their own get/set
functions along with their own property() statements.
|hierarchy| Inheritance Diagram
===============================
Inheritance diagram for: **dPemMixin**
.. inheritance-diagram:: dPemMixin
|supclasses| Known Superclasses
===============================
* :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
|subclasses| Known Subclasses
=============================
* :ref:`dabo.ui.dControlMixinBase.dControlMixinBase`
* :ref:`dabo.ui.uiwx.dFormMixin.dFormMixin`
* :ref:`dabo.ui.uiwx.dMenu.dMenu`
* :ref:`dabo.ui.uiwx.dMenuBar.dMenuBar`
* :ref:`dabo.ui.uiwx.dMenuItem.dMenuItem`
* :ref:`dabo.ui.uiwx.dMenuItem.dSeparatorMenuItem`
* :ref:`dabo.ui.uiwx.dTimer.dTimer`
|API| Class API
===============
.. autoclass:: dabo.ui.uiwx.dPemMixin.dPemMixin
.. automethod:: dabo.ui.uiwx.dPemMixin.dPemMixin.__init__
|method_summary| Properties Summary
===================================
======================================== ========================
:ref:`Application <no-11623>` Read-only object reference to the Dabo Application object. (dApp).
:ref:`BackColor <no-11624>` Specifies the background color of the object. (str, 3-tuple, or wx.Colour)
:ref:`BaseClass <no-11625>` The base Dabo class of the object. Read-only. (class)
:ref:`BasePrefKey <no-11626>` Base key used when saving/restoring preferences (str)
:ref:`BorderColor <no-11627>` Specifies the color of the border drawn around the control, if any.
:ref:`BorderLineStyle <no-11628>` Style of line for the border drawn around the control.
:ref:`BorderStyle <no-11629>` Specifies the type of border for this window. (str).
:ref:`BorderWidth <no-11630>` Width of the border drawn around the control, if any. (int)
:ref:`Bottom <no-11631>` The position of the bottom side of the object. This is a
:ref:`Caption <no-11632>` The caption of the object. (str)
:ref:`Children <no-11633>` Returns a list of object references to the children of
:ref:`Class <no-11634>` The class the object is based on. Read-only. (class)
:ref:`ControllingSizer <no-11635>` Reference to the sizer that controls this control's layout. (dSizer)
:ref:`ControllingSizerItem <no-11636>` Reference to the sizer item that control's this control's layout.
:ref:`DroppedFileHandler <no-11637>` Reference to the object that will handle files dropped on this control.
:ref:`DroppedTextHandler <no-11638>` Reference to the object that will handle text dropped on this control.
:ref:`DynamicBackColor <no-11639>` Dynamically determine the value of the BackColor property.
:ref:`DynamicBorderColor <no-11640>` Dynamically determine the value of the BorderColor property.
:ref:`DynamicBorderLineStyle <no-11641>` Dynamically determine the value of the BorderLineStyle property.
:ref:`DynamicBorderStyle <no-11642>` Dynamically determine the value of the BorderStyle property.
:ref:`DynamicBorderWidth <no-11643>` Dynamically determine the value of the BorderWidth property.
:ref:`DynamicCaption <no-11644>` Dynamically determine the value of the Caption property.
:ref:`DynamicEnabled <no-11645>` Dynamically determine the value of the Enabled property.
:ref:`DynamicFont <no-11646>` Dynamically determine the value of the Font property.
:ref:`DynamicFontBold <no-11647>` Dynamically determine the value of the FontBold property.
:ref:`DynamicFontFace <no-11648>` Dynamically determine the value of the FontFace property.
:ref:`DynamicFontItalic <no-11649>` Dynamically determine the value of the FontItalic property.
:ref:`DynamicFontSize <no-11650>` Dynamically determine the value of the FontSize property.
:ref:`DynamicFontUnderline <no-11651>` Dynamically determine the value of the FontUnderline property.
:ref:`DynamicForeColor <no-11652>` Dynamically determine the value of the ForeColor property.
:ref:`DynamicHeight <no-11653>` Dynamically determine the value of the Height property.
:ref:`DynamicLeft <no-11654>` Dynamically determine the value of the Left property.
:ref:`DynamicMousePointer <no-11655>` Dynamically determine the value of the MousePointer property.
:ref:`DynamicPosition <no-11656>` Dynamically determine the value of the Position property.
:ref:`DynamicSize <no-11657>` Dynamically determine the value of the Size property.
:ref:`DynamicStatusText <no-11658>` Dynamically determine the value of the StatusText property.
:ref:`DynamicTag <no-11659>` Dynamically determine the value of the Tag property.
:ref:`DynamicToolTipText <no-11660>` Dynamically determine the value of the ToolTipText property.
:ref:`DynamicTop <no-11661>` Dynamically determine the value of the Top property.
:ref:`DynamicTransparency <no-11662>` Dynamically determine the value of the Transparency property.
:ref:`DynamicVisible <no-11663>` Dynamically determine the value of the Visible property.
:ref:`DynamicWidth <no-11664>` Dynamically determine the value of the Width property.
:ref:`Enabled <no-11665>` Specifies whether the object and children can get user input. (bool)
:ref:`Font <no-11666>`                   Specifies the font object for this control. (dFont)
:ref:`FontBold <no-11667>` Specifies if the font is bold-faced. (bool)
:ref:`FontDescription <no-11668>` Human-readable description of the current font settings. (str)
:ref:`FontFace <no-11669>` Specifies the font face. (str)
:ref:`FontInfo <no-11670>` Specifies the platform-native font info string. Read-only. (str)
:ref:`FontItalic <no-11671>`             Specifies whether the font is italicized. (bool)
:ref:`FontSize <no-11672>` Specifies the point size of the font. (int)
:ref:`FontUnderline <no-11673>` Specifies whether text is underlined. (bool)
:ref:`ForeColor <no-11674>` Specifies the foreground color of the object. (str, 3-tuple, or wx.Colour)
:ref:`Form <no-11675>` Object reference to the dForm containing the object. Read-only. (dForm).
:ref:`Height <no-11676>` Specifies the height of the object. (int)
:ref:`HelpContextText <no-11677>` Specifies the context-sensitive help text associated with this
:ref:`Hover <no-11678>` When True, Mouse Enter events fire the onHover method, and
:ref:`Left <no-11679>` Specifies the left position of the object. (int)
:ref:`LogEvents <no-11680>` Specifies which events to log. (list of strings)
:ref:`MaximumHeight <no-11681>` Maximum allowable height for the control in pixels. (int)
:ref:`MaximumSize <no-11682>` Maximum allowable size for the control in pixels. (2-tuple of int)
:ref:`MaximumWidth <no-11683>` Maximum allowable width for the control in pixels. (int)
:ref:`MinimumHeight <no-11684>` Minimum allowable height for the control in pixels. (int)
:ref:`MinimumSize <no-11685>` Minimum allowable size for the control in pixels. (2-tuple of int)
:ref:`MinimumWidth <no-11686>` Minimum allowable width for the control in pixels. (int)
:ref:`MousePointer <no-11687>` Specifies the shape of the mouse pointer when it enters this window. (obj)
:ref:`Name <no-11688>` Specifies the name of the object, which must be unique among siblings.
:ref:`NameBase <no-11689>` Specifies the base name of the object.
:ref:`Parent <no-11690>` The containing object. (obj)
:ref:`Position <no-11691>` The (x,y) position of the object. (tuple)
:ref:`PreferenceManager <no-11692>` Reference to the Preference Management object (dPref)
:ref:`RegID <no-11693>` A unique identifier used for referencing by other objects. (str)
:ref:`Right <no-11694>` The position of the right side of the object. This is a
:ref:`Size <no-11695>` The size of the object. (tuple)
:ref:`Sizer <no-11696>` The sizer for the object.
:ref:`StatusText <no-11697>` Specifies the text that displays in the form's status bar, if any.
:ref:`Tag <no-11698>` A property that user code can safely use for specific purposes.
:ref:`ToolTipText <no-11699>` Specifies the tooltip text associated with this window. (str)
:ref:`Top <no-11700>` The top position of the object. (int)
:ref:`Transparency <no-11701>` Transparency level of the control; ranges from 0 (transparent) to 255 (opaque).
:ref:`TransparencyDelay <no-11702>` Time in seconds to change transparency. Set it to zero to see instant changes.
:ref:`Visible <no-11703>` Specifies whether the object is visible at runtime. (bool)
:ref:`VisibleOnScreen <no-11704>` Specifies whether the object is physically visible at runtime. (bool)
:ref:`Width <no-11705>` The width of the object. (int)
:ref:`WindowHandle <no-11706>` The platform-specific handle for the window. Read-only. (long)
======================================== ========================
Properties
==========
.. _no-11624:
**BackColor**
Specifies the background color of the object. (str, 3-tuple, or wx.Colour)
-------
.. _no-11627:
**BorderColor**
Specifies the color of the border drawn around the control, if any.
Default='black' (str, 3-tuple, or wx.Colour)
-------
.. _no-11628:
**BorderLineStyle**
Style of line for the border drawn around the control.
Possible choices are: "Solid" (default), "Dash", "Dot", "DotDash", "DashDot".
-------
.. _no-11629:
**BorderStyle**
Specifies the type of border for this window. (str)
Possible choices are: "None", "Simple", "Sunken", "Raised".
-------
.. _no-11630:
**BorderWidth**
Width of the border drawn around the control, if any. (int)
Default=0 (no border)
-------
.. _no-11632:
**Caption**
The caption of the object. (str)
-------
.. _no-11635:
**ControllingSizer**
Reference to the sizer that controls this control's layout. (dSizer)
-------
.. _no-11636:
**ControllingSizerItem**
Reference to the sizer item that controls this control's layout.
This is useful for getting information about how the item is being
sized, and for changing those settings. (SizerItem)
-------
.. _no-11637:
**DroppedFileHandler**
Reference to the object that will handle files dropped on this control.
When files are dropped, a list of them will be passed to this object's
'processDroppedFiles()' method. Default=None (object or None)
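A minimal sketch of such a handler object may help. Only the ``processDroppedFiles()`` method name comes from the description above; the class name and stored list are illustrative:

```python
# Sketch of an object usable as a DroppedFileHandler. Per the
# description above, the only contract is a processDroppedFiles()
# method that receives the list of dropped file paths. The class
# name and 'received' attribute are illustrative, not part of dabo.

class FileDropLogger:
    def __init__(self):
        self.received = []

    def processDroppedFiles(self, filelist):
        # The control passes the dropped paths as a list of strings.
        self.received.extend(filelist)


handler = FileDropLogger()
# dabo would invoke this automatically on a drop; we simulate it here:
handler.processDroppedFiles(["/tmp/a.txt", "/tmp/b.txt"])
```

An object of the same shape, with a ``processDroppedText()`` method, would serve as a DroppedTextHandler (described next).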
-------
.. _no-11638:
**DroppedTextHandler**
Reference to the object that will handle text dropped on this control.
When text is dropped, that text will be passed to this object's
'processDroppedText()' method. Default=None (object or None)
-------
.. _no-11639:
**DynamicBackColor**
Dynamically determine the value of the BackColor property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
BackColor property. If DynamicBackColor is set to None (the default), BackColor
will not be dynamically evaluated.
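Since the two dozen Dynamic* properties that follow all share this pattern, a single sketch of the mechanism may help. This is a pure-Python illustration of the behavior described above, not dabo's actual internals; the tuple convention for passing optional arguments is an assumption:

```python
# Illustration of Dynamic* evaluation as described above: update()
# calls the stored function (with any optional arguments) and assigns
# the result to the target property. FakeControl is a stand-in class,
# not a dabo type.

class FakeControl:
    def __init__(self):
        self.BackColor = "white"
        self.DynamicBackColor = None  # callable, or (callable, arg1, ...)

    def update(self):
        dyn = self.DynamicBackColor
        if dyn is None:
            return  # default: BackColor is not dynamically evaluated
        if isinstance(dyn, tuple):
            func, args = dyn[0], dyn[1:]
        else:
            func, args = dyn, ()
        self.BackColor = func(*args)


ctl = FakeControl()
record = {"error": True}
# Turn the background red whenever the record is flagged as an error:
ctl.DynamicBackColor = (lambda rec: "red" if rec["error"] else "white", record)
ctl.update()
```

Every other Dynamic* property (DynamicEnabled, DynamicFontSize, DynamicVisible, and so on) works the same way against its own target property.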
-------
.. _no-11640:
**DynamicBorderColor**
Dynamically determine the value of the BorderColor property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
BorderColor property. If DynamicBorderColor is set to None (the default), BorderColor
will not be dynamically evaluated.
-------
.. _no-11641:
**DynamicBorderLineStyle**
Dynamically determine the value of the BorderLineStyle property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
BorderLineStyle property. If DynamicBorderLineStyle is set to None (the default), BorderLineStyle
will not be dynamically evaluated.
-------
.. _no-11642:
**DynamicBorderStyle**
Dynamically determine the value of the BorderStyle property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
BorderStyle property. If DynamicBorderStyle is set to None (the default), BorderStyle
will not be dynamically evaluated.
-------
.. _no-11643:
**DynamicBorderWidth**
Dynamically determine the value of the BorderWidth property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
BorderWidth property. If DynamicBorderWidth is set to None (the default), BorderWidth
will not be dynamically evaluated.
-------
.. _no-11644:
**DynamicCaption**
Dynamically determine the value of the Caption property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Caption property. If DynamicCaption is set to None (the default), Caption
will not be dynamically evaluated.
-------
.. _no-11645:
**DynamicEnabled**
Dynamically determine the value of the Enabled property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Enabled property. If DynamicEnabled is set to None (the default), Enabled
will not be dynamically evaluated.
-------
.. _no-11646:
**DynamicFont**
Dynamically determine the value of the Font property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Font property. If DynamicFont is set to None (the default), Font
will not be dynamically evaluated.
-------
.. _no-11647:
**DynamicFontBold**
Dynamically determine the value of the FontBold property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
FontBold property. If DynamicFontBold is set to None (the default), FontBold
will not be dynamically evaluated.
-------
.. _no-11648:
**DynamicFontFace**
Dynamically determine the value of the FontFace property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
FontFace property. If DynamicFontFace is set to None (the default), FontFace
will not be dynamically evaluated.
-------
.. _no-11649:
**DynamicFontItalic**
Dynamically determine the value of the FontItalic property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
FontItalic property. If DynamicFontItalic is set to None (the default), FontItalic
will not be dynamically evaluated.
-------
.. _no-11650:
**DynamicFontSize**
Dynamically determine the value of the FontSize property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
FontSize property. If DynamicFontSize is set to None (the default), FontSize
will not be dynamically evaluated.
-------
.. _no-11651:
**DynamicFontUnderline**
Dynamically determine the value of the FontUnderline property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
FontUnderline property. If DynamicFontUnderline is set to None (the default), FontUnderline
will not be dynamically evaluated.
-------
.. _no-11652:
**DynamicForeColor**
Dynamically determine the value of the ForeColor property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
ForeColor property. If DynamicForeColor is set to None (the default), ForeColor
will not be dynamically evaluated.
-------
.. _no-11653:
**DynamicHeight**
Dynamically determine the value of the Height property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Height property. If DynamicHeight is set to None (the default), Height
will not be dynamically evaluated.
-------
.. _no-11654:
**DynamicLeft**
Dynamically determine the value of the Left property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Left property. If DynamicLeft is set to None (the default), Left
will not be dynamically evaluated.
-------
.. _no-11655:
**DynamicMousePointer**
Dynamically determine the value of the MousePointer property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
MousePointer property. If DynamicMousePointer is set to None (the default), MousePointer
will not be dynamically evaluated.
-------
.. _no-11656:
**DynamicPosition**
Dynamically determine the value of the Position property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Position property. If DynamicPosition is set to None (the default), Position
will not be dynamically evaluated.
-------
.. _no-11657:
**DynamicSize**
Dynamically determine the value of the Size property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Size property. If DynamicSize is set to None (the default), Size
will not be dynamically evaluated.
-------
.. _no-11658:
**DynamicStatusText**
Dynamically determine the value of the StatusText property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
StatusText property. If DynamicStatusText is set to None (the default), StatusText
will not be dynamically evaluated.
-------
.. _no-11659:
**DynamicTag**
Dynamically determine the value of the Tag property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Tag property. If DynamicTag is set to None (the default), Tag
will not be dynamically evaluated.
-------
.. _no-11660:
**DynamicToolTipText**
Dynamically determine the value of the ToolTipText property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
ToolTipText property. If DynamicToolTipText is set to None (the default), ToolTipText
will not be dynamically evaluated.
-------
.. _no-11661:
**DynamicTop**
Dynamically determine the value of the Top property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Top property. If DynamicTop is set to None (the default), Top
will not be dynamically evaluated.
-------
.. _no-11662:
**DynamicTransparency**
Dynamically determine the value of the Transparency property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Transparency property. If DynamicTransparency is set to None (the default), Transparency
will not be dynamically evaluated.
-------
.. _no-11663:
**DynamicVisible**
Dynamically determine the value of the Visible property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Visible property. If DynamicVisible is set to None (the default), Visible
will not be dynamically evaluated.
-------
.. _no-11664:
**DynamicWidth**
Dynamically determine the value of the Width property.
Specify a function and optional arguments that will get called from the
update() method. The return value of the function will get set to the
Width property. If DynamicWidth is set to None (the default), Width
will not be dynamically evaluated.
-------
.. _no-11665:
**Enabled**
Specifies whether the object and children can get user input. (bool)
-------
.. _no-11666:
**Font**
Specifies the font object for this control. (dFont)
-------
.. _no-11667:
**FontBold**
Specifies if the font is bold-faced. (bool)
-------
.. _no-11668:
**FontDescription**
Human-readable description of the current font settings. (str)
-------
.. _no-11669:
**FontFace**
Specifies the font face. (str)
-------
.. _no-11670:
**FontInfo**
Specifies the platform-native font info string. Read-only. (str)
-------
.. _no-11671:
**FontItalic**
Specifies whether the font is italicized. (bool)
-------
.. _no-11672:
**FontSize**
Specifies the point size of the font. (int)
-------
.. _no-11673:
**FontUnderline**
Specifies whether text is underlined. (bool)
-------
.. _no-11674:
**ForeColor**
Specifies the foreground color of the object. (str, 3-tuple, or wx.Colour)
-------
.. _no-11676:
**Height**
Specifies the height of the object. (int)
-------
.. _no-11677:
**HelpContextText**
Specifies the context-sensitive help text associated with this
window. (str)
-------
.. _no-11678:
**Hover**
When True, Mouse Enter events fire the onHover method, and
MouseLeave events fire the endHover method (bool)
-------
.. _no-11679:
**Left**
Specifies the left position of the object. (int)
-------
.. _no-11681:
**MaximumHeight**
Maximum allowable height for the control in pixels. (int)
-------
.. _no-11682:
**MaximumSize**
Maximum allowable size for the control in pixels. (2-tuple of int)
-------
.. _no-11683:
**MaximumWidth**
Maximum allowable width for the control in pixels. (int)
-------
.. _no-11684:
**MinimumHeight**
Minimum allowable height for the control in pixels. (int)
-------
.. _no-11685:
**MinimumSize**
Minimum allowable size for the control in pixels. (2-tuple of int)
-------
.. _no-11686:
**MinimumWidth**
Minimum allowable width for the control in pixels. (int)
-------
.. _no-11687:
**MousePointer**
Specifies the shape of the mouse pointer when it enters this window. (obj)
-------
.. _no-11689:
**NameBase**
Specifies the base name of the object.
The base name specified will become the object's Name, unless another sibling
already has that name, in which case Dabo will find the next unique name by
adding integers to the end of the base name. For example, if your code says::

    self.NameBase = "txtAddress"

and there is already a sibling object with that name, your object will end up
with Name = "txtAddress1".
This property is write-only at runtime.
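The uniquing behavior can be sketched as follows. ``unique_name()`` is a hypothetical helper illustrating the rule above, not part of the dabo API:

```python
# Sketch of the NameBase fallback described above: use the base name
# if it is free among siblings, otherwise append the first integer
# that makes it unique.

def unique_name(base, sibling_names):
    if base not in sibling_names:
        return base
    i = 1
    while f"{base}{i}" in sibling_names:
        i += 1
    return f"{base}{i}"
```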
-------
.. _no-11691:
**Position**
The (x,y) position of the object. (tuple)
-------
.. _no-11693:
**RegID**
A unique identifier used for referencing by other objects. (str)
-------
.. _no-11695:
**Size**
The size of the object. (tuple)
-------
.. _no-11696:
**Sizer**
The sizer for the object.
-------
.. _no-11697:
**StatusText**
Specifies the text that displays in the form's status bar, if any.
The text will appear when the control gets the focus, or when the
mouse hovers over the control, and will clear when the control loses
the focus, or when the mouse is no longer hovering.
For forms, set StatusText whenever you want to display a message.
-------
.. _no-11698:
**Tag**
A property that user code can safely use for specific purposes.
-------
.. _no-11699:
**ToolTipText**
Specifies the tooltip text associated with this window. (str)
-------
.. _no-11700:
**Top**
The top position of the object. (int)
-------
.. _no-11701:
**Transparency**
Transparency level of the control; ranges from 0 (transparent) to 255 (opaque).
Default=0. Does not currently work on Gtk/Linux. (int)
-------
.. _no-11702:
**TransparencyDelay**
Time in seconds to change transparency. Set it to zero to see instant changes.
Default=0.25 (float)
-------
.. _no-11703:
**Visible**
Specifies whether the object is visible at runtime. (bool)
-------
.. _no-11704:
**VisibleOnScreen**
Specifies whether the object is physically visible at runtime. (bool)
The Visible property could return True even if the object isn't actually
shown on screen, due to a parent object or sizer being invisible.
The VisibleOnScreen property will return True only if the object and all
parents are visible.
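The distinction can be sketched in a few lines. ``Node`` and ``visible_on_screen()`` are stand-ins illustrating the rule above, not dabo types:

```python
# Sketch of Visible vs. VisibleOnScreen as described above: an object
# is only physically visible when it and every ancestor are Visible.

class Node:
    def __init__(self, parent=None, visible=True):
        self.Parent = parent
        self.Visible = visible


def visible_on_screen(obj):
    while obj is not None:
        if not obj.Visible:
            return False
        obj = obj.Parent
    return True


form = Node(visible=True)
panel = Node(parent=form, visible=False)   # hidden container
button = Node(parent=panel, visible=True)  # Visible, yet not on screen
```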
-------
.. _no-11705:
**Width**
The width of the object. (int)
-------
.. _no-11706:
**WindowHandle**
The platform-specific handle for the window. Read-only. (long)
-------
Properties - inherited
========================
.. _no-11623:
**Application**
Read-only object reference to the Dabo Application object. (dApp).
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11625:
**BaseClass**
The base Dabo class of the object. Read-only. (class)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11626:
**BasePrefKey**
Base key used when saving/restoring preferences (str)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11631:
**Bottom**
The position of the bottom side of the object. This is a
convenience property, and is equivalent to setting the Top property
to this value minus the Height of the control. (int)
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11633:
**Children**
Returns a list of object references to the children of
this object. Only applies to containers. Children will be None for
non-containers. (list or None)
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11634:
**Class**
The class the object is based on. Read-only. (class)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11675:
**Form**
Object reference to the dForm containing the object. Read-only. (dForm).
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11680:
**LogEvents**
Specifies which events to log. (list of strings)
If the first element is 'All', all events except the following listed events
will be logged.
Event logging is resource-intensive, so in addition to setting this LogEvents
property, you also need to make the following call:

>>> dabo.eventLogging = True
Inherited from: :ref:`dabo.dObject.dObject`
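The "All minus listed" semantics can be sketched as follows. ``should_log()`` is a hypothetical helper illustrating the rule above, not a dabo function:

```python
# Sketch of the LogEvents filtering rule described above: a leading
# "All" means "every event except the remaining entries"; otherwise
# only the listed events are logged.

def should_log(event_name, log_events):
    if not log_events:
        return False
    if log_events[0] == "All":
        return event_name not in log_events[1:]
    return event_name in log_events
```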
-------
.. _no-11688:
**Name**
Specifies the name of the object, which must be unique among siblings.
If the specified name isn't unique, an exception will be raised. See also
NameBase, which lets you set a base name and Dabo will automatically append
integers to make it unique.
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11690:
**Parent**
The containing object. (obj)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11692:
**PreferenceManager**
Reference to the Preference Management object (dPref)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11694:
**Right**
The position of the right side of the object. This is a
convenience property, and is equivalent to setting the Left property
to this value minus the Width of the control. (int)
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
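The convenience-property behavior described above can be sketched with a plain Python property; ``Box`` is a stand-in class, not a dabo type (Bottom works the same way against Top and Height):

```python
# Sketch of the Right convenience property: reading it yields
# Left + Width, and assigning a value sets Left to that value
# minus Width, so the right edge lands at the assigned position.

class Box:
    def __init__(self, left=0, width=0):
        self.Left = left
        self.Width = width

    @property
    def Right(self):
        return self.Left + self.Width

    @Right.setter
    def Right(self, value):
        self.Left = value - self.Width


box = Box(left=10, width=30)
box.Right = 100  # moves the control so its right edge sits at x=100
```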
-------
|method_summary| Events Summary
===============================
======================================== ========================
:ref:`BackgroundErased <no-11707>` Occurs when a window background has been erased and needs repainting.
:ref:`Create <no-11708>` Occurs after the control or form is created.
:ref:`Destroy <no-11709>` Occurs when the control or form is destroyed.
:ref:`FontPropertiesChanged <no-11710>` Occurs when the properties of a dFont have changed.
:ref:`GotFocus <no-11711>` Occurs when the control gets the focus.
:ref:`Idle <no-11712>` Occurs when the event loop has no active events to process.
:ref:`KeyChar <no-11713>` Occurs when a key is depressed and released on the
:ref:`KeyDown <no-11714>` Occurs when any key is depressed on the focused control or form.
:ref:`KeyEvent <no-11715>`
:ref:`KeyUp <no-11716>` Occurs when any key is released on the focused control or form.
:ref:`LostFocus <no-11717>` Occurs when the control loses the focus.
:ref:`MenuClose <no-11718>` Occurs when a menu has just been closed.
:ref:`MenuOpen <no-11719>` Occurs when a menu is about to be opened.
:ref:`MouseEnter <no-11720>` Occurs when the mouse pointer enters the form or control.
:ref:`MouseEvent <no-11721>`
:ref:`MouseLeave <no-11722>` Occurs when the mouse pointer leaves the form or control.
:ref:`MouseLeftClick <no-11723>` Occurs when the mouse's left button is depressed
:ref:`MouseLeftDoubleClick <no-11724>` Occurs when the mouse's left button is double-clicked on the control.
:ref:`MouseLeftDown <no-11725>` Occurs when the mouse's left button is depressed on the control.
:ref:`MouseLeftUp <no-11726>` Occurs when the mouse's left button is released on the control.
:ref:`MouseMiddleClick <no-11727>`       Occurs when the mouse's middle button is depressed
:ref:`MouseMiddleDoubleClick <no-11728>` Occurs when the mouse's middle button is double-clicked
:ref:`MouseMiddleDown <no-11729>` Occurs when the mouse's middle button is depressed on the control.
:ref:`MouseMiddleUp <no-11730>` Occurs when the mouse's middle button is released on the control.
:ref:`MouseMove <no-11731>` Occurs when the mouse moves in the control.
:ref:`MouseRightClick <no-11732>`        Occurs when the mouse's right button is depressed
:ref:`MouseRightDoubleClick <no-11733>` Occurs when the mouse's right button is double-clicked on the control.
:ref:`MouseRightDown <no-11734>` Occurs when the mouse's right button is depressed on the control.
:ref:`MouseRightUp <no-11735>` Occurs when the mouse's right button is released on the control.
:ref:`MouseWheel <no-11736>` Occurs when the user scrolls the mouse wheel.
:ref:`Move <no-11737>` Occurs when the control's position changes.
:ref:`Paint <no-11738>` Occurs when it is time to paint the control.
:ref:`Resize <no-11739>` Occurs when the control or form is resized.
:ref:`TreeBeginDrag <no-11740>` Occurs when a drag operation begins in a tree.
:ref:`TreeEndDrag <no-11741>` Occurs when a drag operation ends in a tree.
:ref:`Update <no-11742>` Occurs when a container wants its controls to update
======================================== ========================
Events
=======
.. _no-11707:
**BackgroundErased**
Occurs when a window background has been erased and needs repainting.
-------
.. _no-11708:
**Create**
Occurs after the control or form is created.
-------
.. _no-11709:
**Destroy**
Occurs when the control or form is destroyed.
-------
.. _no-11710:
**FontPropertiesChanged**
Occurs when the properties of a dFont have changed.
-------
.. _no-11711:
**GotFocus**
Occurs when the control gets the focus.
-------
.. _no-11712:
**Idle**
Occurs when the event loop has no active events to process.
This is a good place to put redraw or other such UI-intensive code, so that it
will only run when the application is otherwise not busy doing other (more
important) things.
-------
.. _no-11713:
**KeyChar**
Occurs when a key is depressed and released on the
focused control or form.
-------
.. _no-11714:
**KeyDown**
Occurs when any key is depressed on the focused control or form.
-------
.. _no-11715:
**KeyEvent**
-------
.. _no-11716:
**KeyUp**
Occurs when any key is released on the focused control or form.
-------
.. _no-11717:
**LostFocus**
Occurs when the control loses the focus.
-------
.. _no-11718:
**MenuClose**
Occurs when a menu has just been closed.
-------
.. _no-11719:
**MenuOpen**
Occurs when a menu is about to be opened.
-------
.. _no-11720:
**MouseEnter**
Occurs when the mouse pointer enters the form or control.
-------
.. _no-11721:
**MouseEvent**
-------
.. _no-11722:
**MouseLeave**
Occurs when the mouse pointer leaves the form or control.
-------
.. _no-11723:
**MouseLeftClick**
Occurs when the mouse's left button is depressed
and released on the control.
-------
.. _no-11724:
**MouseLeftDoubleClick**
Occurs when the mouse's left button is double-clicked on the control.
-------
.. _no-11725:
**MouseLeftDown**
Occurs when the mouse's left button is depressed on the control.
-------
.. _no-11726:
**MouseLeftUp**
Occurs when the mouse's left button is released on the control.
-------
.. _no-11727:
**MouseMiddleClick**
Occurs when the mouse's middle button is depressed
and released on the control.
-------
.. _no-11728:
**MouseMiddleDoubleClick**
Occurs when the mouse's middle button is double-clicked
on the control.
-------
.. _no-11729:
**MouseMiddleDown**
Occurs when the mouse's middle button is depressed on the control.
-------
.. _no-11730:
**MouseMiddleUp**
Occurs when the mouse's middle button is released on the control.
-------
.. _no-11731:
**MouseMove**
Occurs when the mouse moves in the control.
-------
.. _no-11732:
**MouseRightClick**
Occurs when the mouse's right button is depressed
and released on the control.
-------
.. _no-11733:
**MouseRightDoubleClick**
Occurs when the mouse's right button is double-clicked on the control.
-------
.. _no-11734:
**MouseRightDown**
Occurs when the mouse's right button is depressed on the control.
-------
.. _no-11735:
**MouseRightUp**
Occurs when the mouse's right button is released on the control.
-------
.. _no-11736:
**MouseWheel**
Occurs when the user scrolls the mouse wheel.
-------
.. _no-11737:
**Move**
Occurs when the control's position changes.
-------
.. _no-11738:
**Paint**
Occurs when it is time to paint the control.
-------
.. _no-11739:
**Resize**
Occurs when the control or form is resized.
-------
.. _no-11740:
**TreeBeginDrag**
Occurs when a drag operation begins in a tree.
-------
.. _no-11741:
**TreeEndDrag**
Occurs when a drag operation ends in a tree.
-------
.. _no-11742:
**Update**
Occurs when a container wants its controls to update
their properties.
-------
|method_summary| Methods Summary
================================
======================================= ========================
:ref:`absoluteCoordinates <no-11743>` Translates a position value for a control to absolute screen position.
:ref:`addObject <no-11744>` Instantiate object as a child of self.
:ref:`afterInit <no-11745>` Subclass hook. Called after the object's __init__ has run fully.
:ref:`afterInitAll <no-11746>`
:ref:`afterSetProperties <no-11747>`
:ref:`autoBindEvents <no-11748>` Automatically bind any on*() methods to the associated event.
:ref:`beforeInit <no-11749>` Subclass hook. Called before the object is fully instantiated.
:ref:`beforeSetProperties <no-11750>`
:ref:`bindEvent <no-11751>` Bind a dEvent to a callback function.
:ref:`bindEvents <no-11752>` Bind a sequence of [dEvent, callback] lists.
:ref:`bindKey <no-11753>` Bind a key combination such as "ctrl+c" to a callback function.
:ref:`bringToFront <no-11754>` Makes this object topmost
:ref:`clear <no-11755>` Clears the background of custom-drawn objects.
:ref:`clone <no-11756>` Create another object just like the passed object. It assumes that the
:ref:`containerCoordinates <no-11757>` Given a position relative to this control, return a position relative
:ref:`copy <no-11758>` Called by uiApp when the user requests a copy operation.
:ref:`cut <no-11759>` Called by uiApp when the user requests a cut operation.
:ref:`drawArc <no-11760>` Draws an arc (pie slice) of a circle centered around the specified point,
:ref:`drawBitmap <no-11761>` Draws a bitmap on the object at the specified position.
:ref:`drawCircle <no-11762>` Draws a circle of the specified radius around the specified point.
:ref:`drawEllipse <no-11763>` Draws an ellipse contained within the rectangular space defined by
:ref:`drawEllipticArc <no-11764>` Draws an arc (pie slice) of a ellipse contained by the specified
:ref:`drawGradient <no-11765>` Draws a horizontal or vertical gradient on the control. Default
:ref:`drawLine <no-11766>` Draws a line between (x1,y1) and (x2, y2).
:ref:`drawPolyLines <no-11767>` Draws a series of connected line segments defined by the specified points.
:ref:`drawPolygon <no-11768>` Draws a polygon defined by the specified points.
:ref:`drawRectangle <no-11769>` Draws a rectangle of the specified size beginning at the specified
:ref:`drawRoundedRectangle <no-11770>` Draws a rounded rectangle of the specified size beginning at the specified
:ref:`drawText <no-11771>` Draws text on the object at the specified position
:ref:`endHover <no-11772>`
:ref:`fitToSizer <no-11773>` Resize the control to fit the size required by its sizer.
:ref:`fontZoomIn <no-11774>` Zoom in on the font, by setting a higher point size.
:ref:`fontZoomNormal <no-11775>` Reset the font zoom back to zero.
:ref:`fontZoomOut <no-11776>` Zoom out on the font, by setting a lower point size.
:ref:`formCoordinates <no-11777>` Given a position relative to this control, return a position relative
:ref:`getAbsoluteName <no-11778>` Return the fully qualified name of the object.
:ref:`getCaptureBitmap <no-11779>` Return a bitmap snapshot of self as it appears in the UI at this moment.
:ref:`getContainingPage <no-11780>` Return the dPage or WizardPage that contains self.
:ref:`getDisplayLocker <no-11781>` Returns an object that locks the current display when created, and
:ref:`getMousePosition <no-11782>` Returns the current mouse position on the entire screen
:ref:`getPositionInSizer <no-11783>` Convenience method to let you call this directly on the object.
:ref:`getProperties <no-11784>` Returns a dictionary of property name/value pairs.
:ref:`getSizerProp <no-11785>` Gets the current setting for the given property from the object's
:ref:`getSizerProps <no-11786>` Returns a dict containing the object's sizer property info. The
:ref:`hide <no-11787>` Make the object invisible.
:ref:`initEvents <no-11788>` Hook for subclasses. User code should do custom event binding
:ref:`initProperties <no-11789>` Hook for subclasses. User subclasses should set properties
:ref:`isContainedBy <no-11790>` Returns True if the containership hierarchy for this control
:ref:`iterateCall <no-11791>` Call the given function on this object and all of its Children. If
:ref:`lockDisplay <no-11792>` Locks the visual updates to the control.
:ref:`moveTabOrderAfter <no-11793>` Moves this object's tab order after the passed obj.
:ref:`moveTabOrderBefore <no-11794>` Moves this object's tab order before the passed obj.
:ref:`objectCoordinates <no-11795>` Given a position relative to the form, return a position relative
:ref:`onHover <no-11796>`
:ref:`paste <no-11797>` Called by uiApp when the user requests a paste operation.
:ref:`posIsWithin <no-11798>`
:ref:`processDroppedFiles <no-11799>` Handler for files dropped on the control. Override in your
:ref:`processDroppedText <no-11800>` Handler for text dropped on the control. Override in your
:ref:`raiseEvent <no-11801>` Raise the passed Dabo event.
:ref:`reCreate <no-11802>` Abstract method: subclasses MUST override for UI-specifics.
:ref:`recreate <no-11803>` Recreate the object.
:ref:`redraw <no-11804>` Called when the object is (re)drawn.
:ref:`refresh <no-11805>` Repaints this control and all contained objects.
:ref:`relativeCoordinates <no-11806>` Translates an absolute screen position to position value for a control.
:ref:`release <no-11807>` Destroys the object.
:ref:`removeDrawnObject <no-11808>`
:ref:`sendToBack <no-11809>` Places this object behind all others.
:ref:`setAll <no-11810>` Set all child object properties to the passed value.
:ref:`setFocus <no-11811>` Sets focus to the object.
:ref:`setPositionInSizer <no-11812>` Convenience method to let you call this directly on the object.
:ref:`setProperties <no-11813>` Sets a group of properties on the object all at once.
:ref:`setPropertiesFromAtts <no-11814>` Sets a group of properties on the object all at once. This
:ref:`setSizerProp <no-11815>` Tells the object's ControllingSizer to adjust the requested property.
:ref:`setSizerProps <no-11816>` Convenience method for setting multiple sizer item properties at once. The
:ref:`show <no-11817>` Make the object visible.
:ref:`showContainingPage <no-11818>` If this object is inside of any paged control, it will force all containing
:ref:`showContextMenu <no-11819>` Display a context menu (popup) at the specified position.
:ref:`super <no-11820>` This method used to call superclass code, but it's been removed.
:ref:`unbindEvent <no-11821>` Remove a previously registered event binding.
:ref:`unbindKey <no-11822>` Unbind a previously bound key combination.
:ref:`unlockDisplay <no-11823>` Unlocks the visual updates to the control.
:ref:`unlockDisplayAll <no-11824>` Immediately unlocks the display, no matter how many previous
:ref:`update <no-11825>` Update the properties of this object and all contained objects.
======================================= ========================
Methods
=======
.. _no-11743:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.absoluteCoordinates(self, pos=None)
Translates a position value for a control to absolute screen position.
-------
.. _no-11744:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.addObject(self, classRef, Name=None, \*args, \**kwargs)
Instantiate object as a child of self.
The classRef argument must be a Dabo UI class definition (it must inherit
dPemMixin). Alternatively, it can be a saved class definition in XML format,
as created by the Class Designer.
The name argument, if passed, will be sent along to the object's
constructor, which will attempt to set its Name accordingly. If the name
argument is not passed (or None), the object will get a default Name as
defined in the object's class definition.
Additional positional and/or keyword arguments will be sent along to the
object's constructor.
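For example (a sketch; the button class, Name, and Caption below are
illustrative, not part of the API)::

    class MyForm(dabo.ui.dForm):
        def afterInit(self):
            # Create a child button; Name and Caption are passed through
            # to the constructor as described above.
            self.btnSave = self.addObject(dabo.ui.dButton,
                    Name="btnSave", Caption="Save")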
-------
.. _no-11746:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.afterInitAll(self)
-------
.. _no-11747:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.afterSetProperties(self)
-------
.. _no-11750:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.beforeSetProperties(self, properties)
-------
.. _no-11753:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.bindKey(self, keyCombo, callback, \**kwargs)
Bind a key combination such as "ctrl+c" to a callback function.
See dKeys.keyStrings for the valid string key codes.
See dKeys.modifierStrings for the valid modifier codes.
Examples::
    # When user presses <esc>, close the form:
    form.bindKey("esc", form.Close)

    # When user presses <ctrl><alt><w>, close the form:
    form.bindKey("ctrl+alt+w", form.Close)
-------
.. _no-11754:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.bringToFront(self)
Makes this object topmost.
-------
.. _no-11755:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.clear(self)
Clears the background of custom-drawn objects.
-------
.. _no-11756:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.clone(self, obj, name=None)
Create another object just like the passed object. It assumes that the
calling object will be the container of the newly created object.
-------
.. _no-11757:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.containerCoordinates(self, cnt, pos=None)
Given a position relative to this control, return a position relative
to the specified container. If no position is passed, returns the position
of this control relative to the container.
-------
.. _no-11758:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.copy(self)
Called by uiApp when the user requests a copy operation.
Return None (the default) and uiApp will try a default copy operation.
Return anything other than None and uiApp will assume that the copy
operation has been handled.
-------
.. _no-11759:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.cut(self)
Called by uiApp when the user requests a cut operation.
Return None (the default) and uiApp will try a default cut operation.
Return anything other than None and uiApp will assume that the cut
operation has been handled.
-------
.. _no-11760:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawArc(self, xPos, yPos, rad, startAngle, endAngle, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws an arc (pie slice) of a circle centered around the specified point,
starting from 'startAngle' degrees, and sweeping counter-clockwise
until 'endAngle' is reached.
See the 'drawCircle()' method above for more details.
-------
.. _no-11761:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawBitmap(self, bmp, x=0, y=0, mode=None, persist=True, transparent=True, visible=True, dc=None, useDefaults=False)
Draws a bitmap on the object at the specified position.
-------
.. _no-11762:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawCircle(self, xPos, yPos, rad, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a circle of the specified radius around the specified point.
You can set the color and thickness of the line, as well as the
color and hatching style of the fill. Normally, when persist=True,
the circle will be re-drawn on paint events, but if you pass False,
it will be drawn once only.
A drawing object is returned, or None if persist=False. You can
'remove' the drawing by setting the Visible property of the
returned object to False. You can also manipulate the position, size,
color, and fill by changing the various properties of the object.
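For example (a sketch assuming ``pnl`` is an existing drawable control)::

    # Persistent circle: redrawn automatically on paint events.
    circ = pnl.drawCircle(50, 50, 20, penColor="blue", fillColor="yellow")

    # Later, hide this one drawing without affecting any others:
    circ.Visible = False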
-------
.. _no-11763:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawEllipse(self, xPos, yPos, width, height, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws an ellipse contained within the rectangular space defined by
the position and size coordinates
See the 'drawCircle()' method above for more details.
-------
.. _no-11764:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawEllipticArc(self, xPos, yPos, width, height, startAngle, endAngle, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws an arc (pie slice) of an ellipse contained by the specified
dimensions, starting from 'startAngle' degrees, and sweeping
counter-clockwise until 'endAngle' is reached.
See the 'drawCircle()' method above for more details.
-------
.. _no-11765:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawGradient(self, orientation, x=0, y=0, width=None, height=None, color1=None, color2=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a horizontal or vertical gradient on the control. Default
is to cover the entire control, although you can specify positions.
The gradient is drawn with 'color1' as the top/left color, and 'color2'
as the bottom/right color.
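A sketch (assuming the common ``"v"``/``"h"`` orientation strings; check
the Dabo source to confirm the accepted values)::

    # Vertical gradient covering the entire control, white fading to blue:
    self.drawGradient("v", color1="white", color2="blue")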
-------
.. _no-11766:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawLine(self, x1, y1, x2, y2, penColor='black', penWidth=1, fillColor=None, lineStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a line between (x1,y1) and (x2, y2).
See the 'drawCircle()' method above for more details.
-------
.. _no-11767:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawPolyLines(self, points, penColor='black', penWidth=1, lineStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a series of connected line segments defined by the specified points.
The 'points' parameter should be a tuple of (x,y) pairs defining the shape. Lines
are drawn connecting the points sequentially, but a segment from the last
point to the first is not drawn, leaving an 'open' polygon. As a result, there is no
FillColor or HatchStyle defined for this.
See the 'drawCircle()' method above for more details.
-------
.. _no-11768:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawPolygon(self, points, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a polygon defined by the specified points.
The 'points' parameter should be a tuple of (x,y) pairs defining the
polygon.
See the 'drawCircle()' method above for more details.
-------
.. _no-11769:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawRectangle(self, xPos, yPos, width, height, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a rectangle of the specified size beginning at the specified
point.
See the 'drawCircle()' method above for more details.
-------
.. _no-11770:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawRoundedRectangle(self, xPos, yPos, width, height, radius, penColor='black', penWidth=1, fillColor=None, lineStyle=None, hatchStyle=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws a rounded rectangle of the specified size beginning at the specified
point, with the specified corner radius.
See the 'drawCircle()' method above for more details.
-------
.. _no-11771:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.drawText(self, text, x=0, y=0, angle=0, fontFace=None, fontSize=None, fontBold=None, fontItalic=None, fontUnderline=None, foreColor=None, backColor=None, mode=None, persist=True, visible=True, dc=None, useDefaults=False)
Draws text on the object at the specified position
using the specified characteristics. Any characteristics
not specified will be set to the system default.
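For example (a sketch; unspecified font characteristics fall back to the
system defaults)::

    # Red text drawn at a 45-degree angle:
    self.drawText("Hello", x=10, y=10, angle=45, foreColor="red")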
-------
.. _no-11772:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.endHover(self, evt=None)
-------
.. _no-11773:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.fitToSizer(self, extraWidth=0, extraHeight=0)
Resize the control to fit the size required by its sizer.
-------
.. _no-11777:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.formCoordinates(self, pos=None)
Given a position relative to this control, return a position relative
to the containing form. If no position is passed, returns the position
of this control relative to the form.
-------
.. _no-11779:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getCaptureBitmap(self)
Return a bitmap snapshot of self as it appears in the UI at this moment.
-------
.. _no-11780:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getContainingPage(self)
Return the dPage or WizardPage that contains self.
-------
.. _no-11781:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getDisplayLocker(self)
Returns an object that locks the current display when created, and
unlocks it when destroyed. This is generally safer than calling lockDisplay()
and unlockDisplay(), especially when used with callAfterInterval(), when
the unlockDisplay() calls may not all happen.
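A sketch of the intended pattern (``refreshAll`` is an illustrative method
name, not part of the API)::

    locker = self.getDisplayLocker()
    self.refreshAll()
    del locker    # display unlocks when the locker object is destroyed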
-------
.. _no-11782:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getMousePosition(self)
Returns the current mouse position on the entire screen
relative to this object.
-------
.. _no-11783:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getPositionInSizer(self)
Convenience method to let you call this directly on the object.
-------
.. _no-11785:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getSizerProp(self, prop)
Gets the current setting for the given property from the object's
ControllingSizer. Returns None if object is not in a sizer.
-------
.. _no-11786:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getSizerProps(self)
Returns a dict containing the object's sizer property info. The
keys are the property names, and the values are the current
values for those props.
-------
.. _no-11787:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.hide(self)
Make the object invisible.
-------
.. _no-11790:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.isContainedBy(self, obj)
Returns True if the containership hierarchy for this control
includes the passed object reference.
-------
.. _no-11792:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.lockDisplay(self)
Locks the visual updates to the control.
This can significantly improve performance when many items are being
updated at once.
IMPORTANT: you must call unlockDisplay() when you are done, or your
object will never draw. unlockDisplay() must be called once for every
time lockDisplay() is called in order to resume repainting of the
control. Alternatively, you can call lockDisplay() many times, and
then call unlockDisplayAll() once when you are done.
Note that lockDisplay currently doesn't do anything on GTK.
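A typical pairing (a sketch; ``addManyControls`` is an illustrative method
name)::

    self.lockDisplay()
    try:
        self.addManyControls()
    finally:
        # Ensure the matching unlock runs even if an exception is raised:
        self.unlockDisplay()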
-------
.. _no-11793:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.moveTabOrderAfter(self, obj)
Moves this object's tab order after the passed obj.
-------
.. _no-11794:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.moveTabOrderBefore(self, obj)
Moves this object's tab order before the passed obj.
-------
.. _no-11795:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.objectCoordinates(self, pos=None)
Given a position relative to the form, return a position relative
to this object. If no position is passed, returns the position
of this control relative to the form.
-------
.. _no-11796:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.onHover(self, evt=None)
-------
.. _no-11797:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.paste(self)
Called by uiApp when the user requests a paste operation.
Return None (the default) and uiApp will try a default paste operation.
Return anything other than None and uiApp will assume that the paste
operation has been handled.
-------
.. _no-11798:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.posIsWithin(self, xpos, ypos=None)
-------
.. _no-11799:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.processDroppedFiles(self, filelist, x, y)
Handler for files dropped on the control. Override in your
subclass/instance for your needs.
-------
.. _no-11800:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.processDroppedText(self, txt, x, y)
Handler for text dropped on the control. Override in your
subclass/instance for your needs.
-------
.. _no-11801:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.raiseEvent(self, eventClass, nativeEvent=None, \*args, \**kwargs)
Raise the passed Dabo event.
-------
.. _no-11803:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.recreate(self, child=None)
Recreate the object.
Warning: this is experimental and is known to cause hair loss.
-------
.. _no-11804:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.redraw(self, dc)
Called when the object is (re)drawn.
This is a user subclass hook, where you should put any drawing routines
to affect the object appearance.
-------
.. _no-11805:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.refresh(self, fromRefresh=False)
Repaints this control and all contained objects.
-------
.. _no-11806:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.relativeCoordinates(self, pos=None)
Translates an absolute screen position to position value for a control.
-------
.. _no-11807:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.release(self)
Destroys the object.
-------
.. _no-11808:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.removeDrawnObject(self, obj)
-------
.. _no-11809:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.sendToBack(self)
Places this object behind all others.
-------
.. _no-11810:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setAll(self, prop, val, recurse=True, filt=None, instancesOf=None)
Set all child object properties to the passed value.
No bad effects will happen if the property doesn't apply to a child - only
children with the property will have their property updated.
If 'recurse' is True, setAll() will be called on each child as well.
If 'filt' is not empty, only children that match the expression in 'filt'
will be affected. The expression will be evaluated assuming the child
object is prefixed to the expression. For example, if you want to only
affect objects that are instances of dButton, you'd call::
    form.setAll("FontBold", True, filt="BaseClass == dabo.ui.dButton")
If the instancesOf sequence is passed, the property will only be set if
the child object is an instance of one of the passed classes.
-------
.. _no-11811:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setFocus(self)
Sets focus to the object.
-------
.. _no-11812:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setPositionInSizer(self, pos)
Convenience method to let you call this directly on the object.
-------
.. _no-11815:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setSizerProp(self, prop, val)
Tells the object's ControllingSizer to adjust the requested property.
-------
.. _no-11816:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setSizerProps(self, propDict)
Convenience method for setting multiple sizer item properties at once. The
dict should have the property name as the key and the desired new value
as the associated value.
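For example (a sketch; ``Expand`` and ``Proportion`` are common sizer item
properties, but verify the names against the sizer you are using)::

    obj.setSizerProps({"Expand": True, "Proportion": 1})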
-------
.. _no-11817:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.show(self)
Make the object visible.
-------
.. _no-11818:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.showContainingPage(self)
If this object is inside of any paged control, it will force all containing
paged controls to switch to the page that contains this object.
-------
.. _no-11819:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.showContextMenu(self, menu, pos=None, release=True)
Display a context menu (popup) at the specified position.
If no position is specified, the menu will be displayed at the current
mouse position.
If release is True (the default), the menu will be released after the user
has dismissed it.
-------
.. _no-11822:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.unbindKey(self, keyCombo)
Unbind a previously bound key combination.
Fail silently if the key combination didn't exist already.
-------
.. _no-11823:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.unlockDisplay(self)
Unlocks the visual updates to the control.
Use in conjunction with lockDisplay(), when you are doing lots of things
that would result in lengthy screen updates.
Since lockDisplay() may be called several times on an object, calling
unlockDisplay() will "undo" one locking call. When all locks have been
removed, repainting of the display will resume.
-------
.. _no-11824:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.unlockDisplayAll(self)
Immediately unlocks the display, no matter how many previous
lockDisplay calls have been made. Useful in a callAfterInterval()
construction to avoid flicker.
-------
.. _no-11825:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.update(self)
Update the properties of this object and all contained objects.
-------
Methods - inherited
=====================
.. _no-11745:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.afterInit(self)
:noindex:
Subclass hook. Called after the object's __init__ has run fully.
Subclasses should place their __init__ code here in this hook, instead of
overriding __init__ directly, to avoid conflicting with base Dabo behavior.
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11748:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.autoBindEvents(self, force=True)
:noindex:
Automatically bind any on*() methods to the associated event.
User code only needs to define the callback, and Dabo will automatically
set up the event binding. This will satisfy lots of common cases where
you want an object or its parent to respond to the object's events.
To use this feature, just define a method on<EventName>(), or if you
want a parent container to respond to the event, make a method in the
parent on<EventName>_<object Name or RegID>().
For example::
    class MyButton(dabo.ui.dButton):
        def onHit(self, evt):
            print "Hit!"

    class MyPanel(dabo.ui.dPanel):
        def afterInit(self):
            self.addObject(MyButton, RegID="btn1")

        def onHit_btn1(self, evt):
            print "panel: button hit!"
When the button is pressed, you'll see both 'hit' messages because of
auto event binding.
If you want to bind your events explicitly, you can turn off auto event
binding by issuing::
    dabo.autoBindEvents = False
This feature is inspired by PythonCard.
Inherited from: :ref:`dabo.lib.eventMixin.EventMixin`
-------
.. _no-11749:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.beforeInit(self, \*args, \**kwargs)
:noindex:
Subclass hook. Called before the object is fully instantiated.
Usually, user code should override afterInit() instead, but there may be
cases where you need to set an attribute before the init stage is fully
underway.
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11751:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.bindEvent(self, eventClass, function, _auto=False)
:noindex:
Bind a dEvent to a callback function.
Inherited from: :ref:`dabo.lib.eventMixin.EventMixin`
-------
.. _no-11752:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.bindEvents(self, bindings)
:noindex:
Bind a sequence of [dEvent, callback] lists.
Inherited from: :ref:`dabo.lib.eventMixin.EventMixin`
-------
.. _no-11774:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.fontZoomIn(self, amt=1)
:noindex:
Zoom in on the font, by setting a higher point size.
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11775:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.fontZoomNormal(self)
:noindex:
Reset the font zoom back to zero.
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11776:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.fontZoomOut(self, amt=1)
:noindex:
Zoom out on the font, by setting a lower point size.
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11778:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getAbsoluteName(self)
:noindex:
Return the fully qualified name of the object.
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11784:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.getProperties(self, propertySequence=(), propsToSkip=(), ignoreErrors=False, \*propertyArguments)
:noindex:
Returns a dictionary of property name/value pairs.
If a sequence of properties is passed, just those property values
will be returned. Otherwise, all property values will be returned.
The sequence of properties can be a list, tuple, or plain string
positional arguments. For instance, all of the following are
equivalent::

    print self.getProperties("Caption", "FontInfo", "Form")
    print self.getProperties(["Caption", "FontInfo", "Form"])

    t = ("Caption", "FontInfo", "Form")
    print self.getProperties(t)
    print self.getProperties(*t)
An exception will be raised if any passed property names don't
exist, aren't actual properties, or are not readable (do not have
getter functions).
However, if an exception is raised from the property getter function,
the exception will get caught and used as the property value in the
returned property dictionary. This allows the property list to be
returned even if some properties can't be evaluated correctly by the
object yet.
Inherited from: :ref:`dabo.lib.propertyHelperMixin.PropertyHelperMixin`
-------
.. _no-11788:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.initEvents(self)
:noindex:
Hook for subclasses. User code should do custom event binding
here, such as::
    self.bindEvent(dEvents.GotFocus, self.customGotFocusHandler)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11789:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.initProperties(self)
:noindex:
Hook for subclasses. User subclasses should set properties
here, such as::
    self.Name = "MyTextBox"
    self.BackColor = (192, 192, 192)
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11791:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.iterateCall(self, funcName, \*args, \**kwargs)
:noindex:
Call the given function on this object and all of its Children. If
any object does not have the given function, no error is raised; it
is simply ignored.
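For example, to call ``update()`` on this object and every descendant that
defines it (a sketch)::

    self.iterateCall("update")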
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11802:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.reCreate(self, child=None)
:noindex:
Abstract method: subclasses MUST override for UI-specifics.
Inherited from: :ref:`dabo.ui.dPemMixinBase.dPemMixinBase`
-------
.. _no-11813:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setProperties(self, propDict={}, ignoreErrors=False, \**propKw)
:noindex:
Sets a group of properties on the object all at once.
You have the following options for sending the properties:
1) Property/Value pair dictionary
2) Keyword arguments
3) Both
The following examples all do the same thing::
    self.setProperties(FontBold=True, ForeColor="Red")
    self.setProperties({"FontBold": True, "ForeColor": "Red"})
    self.setProperties({"FontBold": True}, ForeColor="Red")
Inherited from: :ref:`dabo.lib.propertyHelperMixin.PropertyHelperMixin`
-------
.. _no-11814:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.setPropertiesFromAtts(self, propDict={}, ignoreExtra=True, context=None)
:noindex:
Sets a group of properties on the object all at once. This
is different from the regular setProperties() method because
it only accepts a dict containing prop:value pairs, and it
assumes that the value is always a string. It will convert
the value to the correct type for the property, and then set
the property to that converted value. If the value needs to be evaluated
in a specific namespace, pass that as the 'context' parameter.
Inherited from: :ref:`dabo.lib.propertyHelperMixin.PropertyHelperMixin`
-------
.. _no-11820:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.super(self, \*args, \**kwargs)
:noindex:
This method used to call superclass code, but it's been removed.
Inherited from: :ref:`dabo.dObject.dObject`
-------
.. _no-11821:
.. function:: dabo.ui.uiwx.dPemMixin.dPemMixin.unbindEvent(self, eventClass=None, function=None)
:noindex:
Remove a previously registered event binding.
Removes all registrations that exist for the given binding for this
object. If event is None, all bindings for the passed function are
removed. If function is None, all bindings for the passed event are
removed. If both event and function are None, all event bindings are
removed.
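For example (a sketch assuming ``dEvents.Hit`` and an ``onHit`` handler
bound earlier)::

    # Remove one specific binding:
    self.unbindEvent(dEvents.Hit, self.onHit)

    # Remove every handler bound to Hit:
    self.unbindEvent(dEvents.Hit)

    # Remove all event bindings on this object:
    self.unbindEvent()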
Inherited from: :ref:`dabo.lib.eventMixin.EventMixin`
-------
==============
OpenClimateGIS
==============
OpenClimateGIS is a Python-based web server that distributes climate model data
in geospatial (vector) formats.
------------
Dependencies
------------
* PostgreSQL_ - a client-server relational database
* PostGIS_ - adds geospatial functionality to PostgreSQL databases
* psycopg2_ - a Python interface for working with PostgreSQL databases
* numpy_ - a multi-dimensionsal data library for Python
* netcdf4-python_ - a Python interface for working with netCDF4 files
* pyKML_ - a library for manipulating KML documents
.. _PostgreSQL: http://www.postgresql.org/
.. _PostGIS: http://postgis.refractions.net/
.. _psycopg2: http://initd.org/psycopg/
.. _numpy: http://numpy.scipy.org/
.. _netcdf4-python: http://code.google.com/p/netcdf4-python/
.. _pyKML: http://pypi.python.org/pypi/pykml/
------------
Installation
------------
The following instructions describe how to install OpenClimateGIS on Ubuntu
10.04 running on Amazon's `Elastic Compute Cloud (EC2)`_, a component of
`Amazon Web Services (AWS)`_.
.. _Elastic Compute Cloud (EC2): http://aws.amazon.com/ec2/
.. _Amazon Web Services (AWS): http://aws.amazon.com/
~~~~~~~~~~~~~~~~~~~~~~~~
Creating an AWS Instance
~~~~~~~~~~~~~~~~~~~~~~~~
Although it is not required, installing OpenClimateGIS on AWS has the
benefit of an isolated instance with specific library versions.
Deploying on AWS also makes it possible to scale to multiple servers
in the future.
An EC2 instance can be created from within Python, using boto_, a Python
interface to Amazon Web Services. The following is an example script that
creates an EC2 instance and returns the Public DNS.
Note that this assumes that you have set the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables as described in the boto
documentation::

    from time import sleep
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()

    # start an instance of Ubuntu 10.04
    ami_ubuntu10_04 = conn.get_all_images(image_ids=['ami-3202f25b'])
    reservation = ami_ubuntu10_04[0].run(
        key_name='ec2-keypair',
        security_groups=['OCG_group'],
        instance_type='m1.large',
    )
    instance = reservation.instances[0]
    sleep(1)
    while instance.state != u'running':
        print("Instance state = {0}".format(instance.state))
        instance.update()
        sleep(10)
    print("Instance state = {0}".format(instance.state))

    # add a tag to name the instance
    instance.add_tag('Name', 'OpenClimateGIS')

    print("DNS=")
    print(instance.dns_name)

    exit()
Once you have configured the Security Group for the instance to allow access
on port 22 and created a public/private key pair (see `AWS Security
Credentials`_), you can connect to the instance using ssh::

    ssh -i ~/.ssh/aws_openclimategis/ec2-keypair.pem ubuntu@DNSNAME
.. _boto: http://code.google.com/p/boto/
.. _AWS Security Credentials: https://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key
~~~~~~~~~~~~~~~~~~~~~~~~~
Installing OpenClimateGIS
~~~~~~~~~~~~~~~~~~~~~~~~~
The dependencies for OpenClimateGIS are installed via a series of bash scripts.
The main install script (INSTALL.sh) calls numerous other scripts found in the
install_scripts directory. To download and run the installation script::

    # download the installation script
    sudo apt-get install curl
    curl -O https://raw.github.com/tylere/OpenClimateGIS/master/INSTALL.sh

    # run the installation script
    chmod u+x INSTALL.sh
    . INSTALL.sh >& ~/log_install.log
The OpenClimateGIS Django project requires a settings file (which includes
database passwords) in order to operate. This file should be placed at
``/etc/openclimategis/settings.ini``. First create the directory::

    sudo mkdir /etc/openclimategis

You can copy the settings file that you use for development to your AWS
server using the secure copy command (executed on your development machine)::

    sudo scp -i ~/.ssh/aws_openclimategis/ec2-keypair.pem /etc/openclimategis/settings.ini ubuntu@PUBLICDNS:

The scp command places the file in the ubuntu user's home directory. Move the
file to its destination using::

    sudo mv ~/settings.ini /etc/openclimategis/settings.ini
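The settings file uses standard INI syntax. As a rough illustration of how such a file is structured and read (the section and key names below are hypothetical, not the project's actual schema), it can be parsed with Python's ``configparser``:

```python
import configparser
import io

# Hypothetical settings.ini content; the real section and key names are
# defined by the OpenClimateGIS project and will differ.
SAMPLE = u"""\
[database]
name = openclimategis
user = ocg
password = secret
host = localhost
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(SAMPLE))

# a Django settings module could map these values into its DATABASES dict
print(parser.get("database", "user"))
```

The same parsing approach works whether the file comes from a string (as here) or from ``parser.read("/etc/openclimategis/settings.ini")`` on the server.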
~~~~~~~~~~~~~~~~~~~~
JavaScript Libraries
~~~~~~~~~~~~~~~~~~~~
The API Query Builder uses the ExtJS framework, the Google Maps API, and Wicket
to provide a user-friendly interface for constructing OpenClimateGIS queries. It
requires that ExtJS 4.x media files are installed and accessible. Currently, the
API Query Builder uses ExtJS 4.0.7 and version 3.7 of the Google Maps API. The
Google Maps API is loaded automatically with the Query Builder.
Download ExtJS: http://www.sencha.com/products/extjs/download

ExtJS 4.0.7 should be extracted into the following directory off of localhost
(or whatever the host name is)::

    localhost/static/extjs/4.0.7/

Resources should be accessible at the following locations, again using localhost
as an example::

    localhost/static/extjs/4.0.7/resources/css/ext-all-gray.css
    localhost/static/extjs/4.0.7/ext-debug.js

Fabric can also be used to install the ExtJS resources on AWS::

    fab apache2.make_local_copy_of_extjs

Download Wicket: http://github.com/arthur-e/Wicket

::

    git clone git@github.com:arthur-e/Wicket.git

The Wicket library (for WKT transforms) should be made accessible from::

    localhost/static/Wicket/
------------
Source Code
------------
The source code for OpenClimateGIS is available at::

    https://github.com/tylere/OpenClimateGIS
..
..
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
..
*************************
Content Delivery Networks
*************************
The vast majority of today's Internet traffic is media files (often video or audio) being sent from a single source (the *Content Provider*) to many thousands or even millions of destinations (the *Content Consumers*). :abbr:`CDN (Content Delivery Network)`\ s are the technology that make that one-to-many distribution efficient. A :abbr:`CDN (Content Delivery Network)` is a distributed system of servers for delivering content over HTTP(S). These servers are deployed in multiple locations with the goal of optimizing the delivery of content to the end users, while minimizing the traffic on the network. A :abbr:`CDN (Content Delivery Network)` typically consists of the following:

:term:`Cache Server`\ s
    The :dfn:`cache server` is a server that both proxies the requests and caches the results for reuse. Traffic Control uses `Apache Traffic Server <http://trafficserver.apache.org/>`_ to provide :term:`cache server`\ s.

Content Router
    A :dfn:`content router` ensures that the end user is connected to the optimal :term:`cache server` for the location of the end user and content availability. Traffic Control uses :ref:`tr-overview` as a :dfn:`content router`.

Health Protocol
    The :ref:`health-proto` monitors the usage of the :term:`cache server`\ s and tenants in the :abbr:`CDN (Content Delivery Network)`.

Configuration Management System
    In many cases a :abbr:`CDN (Content Delivery Network)` encompasses hundreds or even thousands of servers across a large geographic area. In such cases, manual configuration of servers becomes impractical, and so a central authority on configuration is used to automate the tasks as much as possible. :ref:`to-overview` is the Traffic Control configuration management system, which is interacted with via :ref:`tp-overview`.

Log File Analysis System
    Statistics and analysis are extremely important to the management and administration of a :abbr:`CDN (Content Delivery Network)`. Transaction logs and usage statistics for a Traffic Control :abbr:`CDN (Content Delivery Network)` are gathered into :ref:`ts-overview`.
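As a concrete (and heavily simplified) illustration of the content-routing idea, the sketch below picks the nearest healthy cache for a client by straight-line distance. It is an assumption-laden toy model with invented cache names and coordinates, not Traffic Router's actual selection logic, which involves coverage zones, DNS/HTTP routing, and the Health Protocol's state:

```python
import math

# Toy cache inventory; names and coordinates are invented for illustration.
CACHES = {
    "edge-east": {"lat": 40.7, "lon": -74.0, "healthy": True},
    "edge-west": {"lat": 37.8, "lon": -122.4, "healthy": False},
    "edge-mid": {"lat": 41.9, "lon": -87.6, "healthy": True},
}

def route(client_lat, client_lon):
    """Pick the nearest cache that the (toy) health protocol marks healthy."""
    healthy = {name: c for name, c in CACHES.items() if c["healthy"]}
    return min(
        healthy,
        key=lambda name: math.hypot(healthy[name]["lat"] - client_lat,
                                    healthy[name]["lon"] - client_lon),
    )

print(route(39.0, -77.0))
```

Note how an unhealthy cache (``edge-west`` here) is excluded entirely, which is the role the Health Protocol plays for the real content router.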
| 77.685714 | 685 | 0.762413 |
98a7877062167b9ab2fecbe9e9c390a7b0a25ab5 | 3,640 | rst | reStructuredText | docs/guide/reference/api/delete.rst | krux/pyes | dce9738da5f648c41f3de6d81b29d86199130644 | [
"BSD-3-Clause"
] | null | null | null | docs/guide/reference/api/delete.rst | krux/pyes | dce9738da5f648c41f3de6d81b29d86199130644 | [
"BSD-3-Clause"
] | null | null | null | docs/guide/reference/api/delete.rst | krux/pyes | dce9738da5f648c41f3de6d81b29d86199130644 | [
"BSD-3-Clause"
] | 1 | 2019-10-22T19:34:57.000Z | 2019-10-22T19:34:57.000Z | .. _es-guide-reference-api-delete:
======
Delete
======
The delete API allows you to delete a typed JSON document from a specific index based on its id. The following example deletes the JSON document from an index called twitter, under a type called tweet, with id 1:
.. code-block:: js

    $ curl -XDELETE 'http://localhost:9200/twitter/tweet/1'
The result of the above delete operation is:
.. code-block:: js

    {
        "ok" : true,
        "_index" : "twitter",
        "_type" : "tweet",
        "_id" : "1",
        "found" : true
    }
Versioning
==========
Each document indexed is versioned. When deleting a document, the **version** can be specified to make sure that the document being deleted is the one intended and that it has not changed in the meantime.
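The version check behaves like optimistic concurrency control: a delete carrying a stale version is rejected rather than silently removing a newer document. A minimal in-memory model of that semantics (an illustrative sketch, not the pyes or Elasticsearch API itself) might look like:

```python
class VersionConflict(Exception):
    """Raised when the supplied version does not match the stored one."""

# Toy store: document id mapped to the document and its current version.
store = {"1": {"doc": {"user": "kimchy"}, "version": 2}}

def delete(doc_id, version=None):
    entry = store.get(doc_id)
    if entry is None:
        return {"found": False}
    # With an explicit version, the delete succeeds only if it matches the
    # stored version; otherwise the document changed in the meantime.
    if version is not None and version != entry["version"]:
        raise VersionConflict(
            "expected {0}, got {1}".format(entry["version"], version))
    del store[doc_id]
    return {"found": True}
```

A delete with ``version=1`` against the stored version 2 raises the conflict, while ``version=2`` succeeds, mirroring the guarantee described above.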
Routing
=======
When a document was indexed with a custom routing value, that routing value must also be provided in order to delete the document. For example:
.. code-block:: js

    $ curl -XDELETE 'http://localhost:9200/twitter/tweet/1?routing=kimchy'
The above will delete a tweet with id 1, routed based on the user. Note that issuing a delete without the correct routing will cause the document not to be deleted.

Many times, the routing value is not known when deleting a document. In those cases, when the **_routing** mapping is specified as **required** and no routing value is provided, the delete will automatically be broadcast to all shards.
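Routing matters because the shard that holds a document is chosen by hashing the routing value (which defaults to the document id). The toy hash below illustrates the idea only; Elasticsearch's actual hash function and shard arithmetic are internal details that differ from this sketch:

```python
NUM_SHARDS = 5

def shard_for(routing_value, num_shards=NUM_SHARDS):
    # Simple deterministic string hash, for illustration only.
    h = 0
    for ch in routing_value:
        h = (h * 31 + ord(ch)) % 2 ** 32
    return h % num_shards

# Indexed with routing "kimchy", the document lives on one shard; a delete
# routed by the id "1" alone would be sent to a different shard and miss it.
print(shard_for("kimchy"), shard_for("1"))
```

This is why a delete issued without the original routing value never reaches the shard that actually holds the document, and why the required-routing case falls back to broadcasting to all shards.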
Parent
======
The **parent** parameter can be set, which will basically be the same as setting the routing parameter.
Automatic for the Index
=======================
The delete operation automatically creates an index if it has not been created before (check out the :ref:`create index API <es-guide-reference-api-admin-indices-create-index>` for manually creating an index), and also automatically creates a dynamic type mapping for the specific type if it has not been created before (check out the :ref:`put mapping <es-guide-reference-api-admin-indices-put-mapping>` API for manually creating type mapping).
Distributed
===========
The delete operation gets hashed into a specific shard id. It then gets redirected into the primary shard within that id group, and replicated (if needed) to shard replicas within that id group.
Replication Type
================
The replication of the operation can be done asynchronously to the replicas (the operation will return once it has been executed on the primary shard). The **replication** parameter can be set to **async** (defaults to **sync**) in order to enable it.
Write Consistency
=================
Controls whether the operation will be allowed to execute based on the number of active shards within that partition (replication group). The values allowed are **one**, **quorum**, and **all**. The parameter to set it is **consistency**, and it defaults to the node-level setting **action.write_consistency**, which in turn defaults to **quorum**.

For example, in an index with N shards and 2 replicas, there will have to be at least 2 active shards within the relevant partition (**quorum**) for the operation to succeed. With N shards and 1 replica, a single active shard is enough (in this case, **one** and **quorum** are the same).
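The required number of active shard copies for each consistency level can be read straight off the rules above. The helper below restates them in code; it is a sketch of the documented behavior, not Elasticsearch's implementation:

```python
def required_active_shards(consistency, number_of_replicas):
    """Active shard copies needed in a replication group before a write proceeds."""
    total = 1 + number_of_replicas  # the primary plus its replicas
    if consistency == "one":
        return 1
    if consistency == "all":
        return total
    # "quorum": a majority of copies, which only differs from "one"
    # when there is more than a single replica.
    if number_of_replicas > 1:
        return total // 2 + 1
    return 1

print(required_active_shards("quorum", 2))
```

With 2 replicas, a quorum requires 2 of the 3 copies; with 1 replica, quorum degenerates to 1, matching the example above.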
Refresh
=======
The **refresh** parameter can be set to **true** in order to refresh the relevant shard after the delete operation has occurred and make the change visible to search. Setting it to **true** should only be done after careful thought and verification that it does not cause a heavy load on the system (and slow down indexing).
API Reference
=============
APIs
----
.. autosummary::
   :toctree:

   google.cloud.trace_v1.gapic.trace_service_client
API types
~~~~~~~~~
.. autosummary::
   :toctree:

   google.cloud.trace_v1.gapic.enums
.. _banana-api-tg-methods-get_chat:
get_chat
========
.. cpp:namespace:: banana::api

.. cpp:function:: template <class Agent> \
                  api_result<chat_t, Agent&&> get_chat(Agent&& agent, get_chat_args_t args)

   ``agent`` is any object satisfying :ref:`agent concept <banana-api-banana-agents>`.

   Use this method to get up to date information about the chat (current name of the user for one-on-one conversations, current username of a user, group or channel, etc.). Returns a Chat object on success.

.. cpp:struct:: get_chat_args_t

   Arguments that should be passed to :cpp:func:`get_chat`.

   .. cpp:member:: variant_t<integer_t, string_t> chat_id

      Unique identifier for the target chat or username of the target supergroup or channel (in the format ``@channelusername``)