To send a notification to the user you just need to call one of the new methods defined on ``res.users``:

.. code-block:: python

    self.env.user.notify_success(message='My success message')

or

.. code-block:: python

    self.env.user.notify_danger(message='My danger message')

or

.. code-block:: python

    self.env.user.notify_warning(message='My warning message')

or

.. code-block:: python

    self.env.user.notify_info(message='My information message')

or

.. code-block:: python

    self.env.user.notify_default(message='My default message')

.. figure:: static/description/notifications_screenshot.png
   :scale: 80 %
   :alt: Sample notifications

You can test the behaviour of the notifications by installing this module in a demo database.
Access the users form through Settings -> Users & Companies. You'll see a tab called
"Test web notify", where you'll find two buttons that allow you to test the module.

.. figure:: static/description/test_notifications_demo.png
   :scale: 80 %
   :alt: Sample notifications
] | 1 | 2021-05-03T05:16:04.000Z | 2021-05-03T05:16:04.000Z | .. _list_of_categories:
List of Categories
==================

.. _sfw:

SFW (Safe For Work)
~~~~~~~~~~~~~~~~~~~

- waifu
- neko
- shinobu
- megumin
- bully
- cuddle
- cry
- hug
- awoo
- kiss
- lick
- pat
- smug
- bonk
- yeet
- smile
- wave
- highfive
- handhold
- nom
- bite
- glomp
- kill
- slap
- happy
- wink
- poke
- dance
- cringe
- blush

.. _nsfw:

NSFW (Not Safe For Work)
~~~~~~~~~~~~~~~~~~~~~~~~

- waifu
- neko
- trap
- blowjob
Cron-schedule-evaluator
#######################

Cron expression parser and evaluator for Java is a small exercise I worked out
for fun.

It is archived here for sharing purposes. I may expand on it later as many of
the other evaluators I found were either part of a larger scheduling library
(that one might not need) or were difficult to understand.

You should not use this code for anything serious, but if you need to implement
something yourself, it may help you.

Currently it only supports either fixed values or ``*`` flags, and those only with four
fields, i.e. minute, hour, day and month. The day-of-week field, ranges, fractions
and lists are not supported yet, but the code is such that it would be pretty
easy to add any of them.
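The supported subset is small enough to sketch in a few lines. The following Python snippet (not part of the Java code, purely an illustration of the semantics described above) matches a four-field expression of fixed values or ``*`` against a timestamp:

```python
from datetime import datetime


def matches(expr: str, when: datetime) -> bool:
    """Evaluate a 4-field cron expression: minute hour day month.

    Each field is either '*' or a fixed integer; ranges, lists,
    fractions and day-of-week are out of scope, as in the README.
    """
    fields = expr.split()
    if len(fields) != 4:
        raise ValueError("expected 'minute hour day month'")
    actual = (when.minute, when.hour, when.day, when.month)
    return all(f == "*" or int(f) == value
               for f, value in zip(fields, actual))


print(matches("30 14 * *", datetime(2024, 5, 1, 14, 30)))  # True
print(matches("30 14 * *", datetime(2024, 5, 1, 15, 30)))  # False
```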
Virtual Methods
---------------

A virtual method is no more than a compiler-controlled function pointer.
Each virtual method is recorded in the ``vtable``, which is a structure
of all the function pointers needed by a given class:

.. code-block:: cpp

    class Foo
    {
    public:
        virtual int GetLengthTimesTwo() const
        {
            return _length * 2;
        }

        void SetLength(size_t value)
        {
            _length = value;
        }

    private:
        int _length;
    };

    int main()
    {
        Foo foo;
        foo.SetLength(4);
        return foo.GetLengthTimesTwo();
    }

This becomes:

.. code-block:: llvm

    %Foo_vtable_type = type { i32(%Foo*)* }

    %Foo = type { %Foo_vtable_type*, i32 }

    define i32 @Foo_GetLengthTimesTwo(%Foo* %this) nounwind {
      %1 = getelementptr %Foo, %Foo* %this, i32 0, i32 1
      %2 = load i32, i32* %1
      %3 = mul i32 %2, 2
      ret i32 %3
    }

    @Foo_vtable_data = global %Foo_vtable_type {
      i32(%Foo*)* @Foo_GetLengthTimesTwo
    }

    define void @Foo_Create_Default(%Foo* %this) nounwind {
      %1 = getelementptr %Foo, %Foo* %this, i32 0, i32 0
      store %Foo_vtable_type* @Foo_vtable_data, %Foo_vtable_type** %1
      %2 = getelementptr %Foo, %Foo* %this, i32 0, i32 1
      store i32 0, i32* %2
      ret void
    }

    define void @Foo_SetLength(%Foo* %this, i32 %value) nounwind {
      %1 = getelementptr %Foo, %Foo* %this, i32 0, i32 1
      store i32 %value, i32* %1
      ret void
    }

    define i32 @main(i32 %argc, i8** %argv) nounwind {
      %foo = alloca %Foo
      call void @Foo_Create_Default(%Foo* %foo)
      call void @Foo_SetLength(%Foo* %foo, i32 4)
      %1 = getelementptr %Foo, %Foo* %foo, i32 0, i32 0
      %2 = load %Foo_vtable_type*, %Foo_vtable_type** %1
      %3 = getelementptr %Foo_vtable_type, %Foo_vtable_type* %2, i32 0, i32 0
      %4 = load i32(%Foo*)*, i32(%Foo*)** %3
      %5 = call i32 %4(%Foo* %foo)
      ret i32 %5
    }

Please notice that some C++ compilers store ``_vtable`` at a negative
offset into the structure so that things like
``memset(this, 0, sizeof(*this))`` work, even though such commands
should always be avoided in an OOP context.
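The dispatch sequence in ``@main`` (load the vtable pointer, index into it, load the function pointer, call through it) can be imitated in any language with first-class functions. A rough Python analogy of the same mechanism, purely illustrative and not generated code:

```python
def foo_get_length_times_two(this):
    # Body of Foo::GetLengthTimesTwo, taking an explicit 'this'.
    return this["_length"] * 2


# The vtable is a single record of function pointers shared by
# every instance of the class.
FOO_VTABLE = {"GetLengthTimesTwo": foo_get_length_times_two}


def foo_create_default():
    # Mirrors @Foo_Create_Default: install the vtable pointer and
    # zero-initialize the field.
    return {"vtable": FOO_VTABLE, "_length": 0}


foo = foo_create_default()
foo["_length"] = 4                           # Foo_SetLength(foo, 4)
method = foo["vtable"]["GetLengthTimesTwo"]  # load through the vtable
print(method(foo))  # 8
```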
ImportFile module
=================

.. automodule:: ImportFile
   :members:
   :undoc-members:
   :show-inheritance:
.. _developer_tools:

=================
Developer Tools
=================

.. htmlonly::

   :Release: |version|
   :Date: |today|

.. toctree::
   :maxdepth: 2

   tricked_out_emacs
   virtualenv-tutor
:desc: Customize the components and parameters of Rasa's Machine Learning based
      Natural Language Understanding pipeline

.. _components:

Components
==========

.. edit-link::

This is a reference of the configuration options for every built-in
component in Rasa Open Source. If you want to build a custom component, check
out :ref:`custom-nlu-components`.

.. contents::
   :local:

Word Vector Sources
-------------------

The following components load pre-trained models that are needed if you want to use pre-trained
word vectors in your pipeline.

.. _MitieNLP:
MitieNLP
~~~~~~~~
:Short: MITIE initializer
:Outputs: Nothing
:Requires: Nothing
:Description:
Initializes MITIE structures. Every MITIE component relies on this,
hence this should be put at the beginning
of every pipeline that uses any MITIE components.
:Configuration:
The MITIE library needs a language model file, that **must** be specified in
the configuration:
.. code-block:: yaml
pipeline:
- name: "MitieNLP"
# language model to load
model: "data/total_word_feature_extractor.dat"
For more information where to get that file from, head over to
:ref:`installing MITIE <install-mitie>`.
.. _SpacyNLP:
SpacyNLP
~~~~~~~~
:Short: spaCy language initializer
:Outputs: Nothing
:Requires: Nothing
:Description:
Initializes spaCy structures. Every spaCy component relies on this, hence this should be put at the beginning
of every pipeline that uses any spaCy components.
:Configuration:
You need to specify the language model to use.
By default the language configured in the pipeline will be used as the language model name.
If the spaCy model to be used has a name that is different from the language tag (``"en"``, ``"de"``, etc.),
the model name can be specified using the configuration variable ``model``.
The name will be passed to ``spacy.load(name)``.
.. code-block:: yaml
pipeline:
- name: "SpacyNLP"
# language model to load
model: "en_core_web_md"
# when retrieving word vectors, this will decide if the casing
# of the word is relevant. E.g. `hello` and `Hello` will
# retrieve the same vector, if set to `False`. For some
# applications and models it makes sense to differentiate
# between these two words, therefore setting this to `True`.
case_sensitive: False
For more information on how to download the spaCy models, head over to
:ref:`installing SpaCy <install-spacy>`.
.. _HFTransformersNLP:
HFTransformersNLP
~~~~~~~~~~~~~~~~~
:Short: HuggingFace's Transformers based pre-trained language model initializer
:Outputs: Nothing
:Requires: Nothing
:Description:
Initializes specified pre-trained language model from HuggingFace's `Transformers library
<https://huggingface.co/transformers/>`__. The component applies language model specific tokenization and
featurization to compute sequence and sentence level representations for each example in the training data.
Include :ref:`LanguageModelTokenizer` and :ref:`LanguageModelFeaturizer` to utilize the output of this
component for downstream NLU models.
.. note:: To use ``HFTransformersNLP`` component, install Rasa Open Source with ``pip install rasa[transformers]``.
:Configuration:
You should specify what language model to load via the parameter ``model_name``. See the below table for the
available language models.
Additionally, you can also specify the architecture variation of the chosen language model by specifying the
parameter ``model_weights``.
The full list of supported architectures can be found
`here <https://huggingface.co/transformers/pretrained_models.html>`__.
If left empty, it uses the default model architecture that original Transformers library loads (see table below).
.. code-block:: none
+----------------+--------------+-------------------------+
| Language Model | Parameter | Default value for |
| | "model_name" | "model_weights" |
+----------------+--------------+-------------------------+
| BERT | bert | bert-base-uncased |
+----------------+--------------+-------------------------+
| GPT | gpt | openai-gpt |
+----------------+--------------+-------------------------+
| GPT-2 | gpt2 | gpt2 |
+----------------+--------------+-------------------------+
| XLNet | xlnet | xlnet-base-cased |
+----------------+--------------+-------------------------+
| DistilBERT | distilbert | distilbert-base-uncased |
+----------------+--------------+-------------------------+
| RoBERTa | roberta | roberta-base |
+----------------+--------------+-------------------------+
The following configuration loads the language model BERT:
.. code-block:: yaml
pipeline:
- name: HFTransformersNLP
# Name of the language model to use
model_name: "bert"
# Pre-Trained weights to be loaded
model_weights: "bert-base-uncased"
# An optional path to a specific directory to download and cache the pre-trained model weights.
# The `default` cache_dir is the same as https://huggingface.co/transformers/serialization.html#cache-directory .
cache_dir: null

.. _tokenizers:

Tokenizers
----------

Tokenizers split text into tokens.
If you want to split intents into multiple labels, e.g. for predicting multiple intents or for
modeling hierarchical intent structure, use the following flags with any tokenizer:

- ``intent_tokenization_flag`` indicates whether to tokenize intent labels or not. Set it to ``True``, so that intent
  labels are tokenized.
- ``intent_split_symbol`` sets the delimiter string to split the intent labels; the default is underscore
  (``_``).
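For illustration, with intent tokenization enabled, splitting a composite intent label on the configured symbol amounts to no more than:

```python
def split_intent(label: str, split_symbol: str = "_") -> list:
    # With intent_tokenization_flag enabled, a composite label such as
    # "feedback_positive" yields one token per sub-label.
    return label.split(split_symbol)


print(split_intent("feedback_positive"))  # ['feedback', 'positive']
print(split_intent("ask+weather", "+"))   # ['ask', 'weather']
```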
.. _WhitespaceTokenizer:
WhitespaceTokenizer
~~~~~~~~~~~~~~~~~~~
:Short: Tokenizer using whitespaces as a separator
:Outputs: ``tokens`` for user messages, responses (if present), and intents (if specified)
:Requires: Nothing
:Description:
Creates a token for every whitespace separated character sequence.
:Configuration:
Make the tokenizer case insensitive by adding the ``case_sensitive: False`` option, the
default being ``case_sensitive: True``.
.. code-block:: yaml
pipeline:
- name: "WhitespaceTokenizer"
# Flag to check whether to split intents
"intent_tokenization_flag": False
# Symbol on which intent should be split
"intent_split_symbol": "_"
# Text will be tokenized with case sensitive as default
"case_sensitive": True
JiebaTokenizer
~~~~~~~~~~~~~~
:Short: Tokenizer using Jieba for Chinese language
:Outputs: ``tokens`` for user messages, responses (if present), and intents (if specified)
:Requires: Nothing
:Description:
Creates tokens using the Jieba tokenizer specifically for Chinese
language. It will only work for the Chinese language.
.. note::
To use ``JiebaTokenizer`` you need to install Jieba with ``pip install jieba``.
:Configuration:
User's custom dictionary files can be auto loaded by specifying the files' directory path via ``dictionary_path``.
If the ``dictionary_path`` is ``None`` (the default), then no custom dictionary will be used.
.. code-block:: yaml
pipeline:
- name: "JiebaTokenizer"
dictionary_path: "path/to/custom/dictionary/dir"
# Flag to check whether to split intents
"intent_tokenization_flag": False
# Symbol on which intent should be split
"intent_split_symbol": "_"
MitieTokenizer
~~~~~~~~~~~~~~
:Short: Tokenizer using MITIE
:Outputs: ``tokens`` for user messages, responses (if present), and intents (if specified)
:Requires: :ref:`MitieNLP`
:Description: Creates tokens using the MITIE tokenizer.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "MitieTokenizer"
# Flag to check whether to split intents
"intent_tokenization_flag": False
# Symbol on which intent should be split
"intent_split_symbol": "_"
SpacyTokenizer
~~~~~~~~~~~~~~
:Short: Tokenizer using spaCy
:Outputs: ``tokens`` for user messages, responses (if present), and intents (if specified)
:Requires: :ref:`SpacyNLP`
:Description:
Creates tokens using the spaCy tokenizer.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "SpacyTokenizer"
# Flag to check whether to split intents
"intent_tokenization_flag": False
# Symbol on which intent should be split
"intent_split_symbol": "_"
.. _ConveRTTokenizer:
ConveRTTokenizer
~~~~~~~~~~~~~~~~
:Short: Tokenizer using `ConveRT <https://github.com/PolyAI-LDN/polyai-models#convert>`__ model.
:Outputs: ``tokens`` for user messages, responses (if present), and intents (if specified)
:Requires: Nothing
:Description:
Creates tokens using the ConveRT tokenizer. Must be used whenever the :ref:`ConveRTFeaturizer` is used.
.. note::
Since ``ConveRT`` model is trained only on an English corpus of conversations, this tokenizer should only
be used if your training data is in English language.
.. note::
To use ``ConveRTTokenizer``, install Rasa Open Source with ``pip install rasa[convert]``.
:Configuration:
Make the tokenizer case insensitive by adding the ``case_sensitive: False`` option, the
default being ``case_sensitive: True``.
.. code-block:: yaml
pipeline:
- name: "ConveRTTokenizer"
# Flag to check whether to split intents
"intent_tokenization_flag": False
# Symbol on which intent should be split
"intent_split_symbol": "_"
# Text will be tokenized with case sensitive as default
"case_sensitive": True
.. _LanguageModelTokenizer:
LanguageModelTokenizer
~~~~~~~~~~~~~~~~~~~~~~
:Short: Tokenizer from pre-trained language models
:Outputs: ``tokens`` for user messages, responses (if present), and intents (if specified)
:Requires: :ref:`HFTransformersNLP`
:Description:
Creates tokens using the pre-trained language model specified in upstream :ref:`HFTransformersNLP` component.
Must be used whenever the :ref:`LanguageModelFeaturizer` is used.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "LanguageModelTokenizer"
# Flag to check whether to split intents
"intent_tokenization_flag": False
# Symbol on which intent should be split
"intent_split_symbol": "_"

.. _text-featurizers:

Text Featurizers
----------------

Text featurizers are divided into two different categories: sparse featurizers and dense featurizers.
Sparse featurizers return feature vectors with a lot of missing values, e.g. zeros.
As those feature vectors would normally take up a lot of memory, we store them as sparse features.
Sparse features only store the values that are non zero and their positions in the vector.
Thus, we save a lot of memory and are able to train on larger datasets.

All featurizers can return two different kinds of features: sequence features and sentence features.
The sequence features are a matrix of size ``(number-of-tokens x feature-dimension)``.
The matrix contains a feature vector for every token in the sequence.
This allows us to train sequence models.
The sentence features are represented by a matrix of size ``(1 x feature-dimension)``.
It contains the feature vector for the complete utterance.
The sentence features can be used in any bag-of-words model.
The corresponding classifier can therefore decide what kind of features to use.

.. note:: The ``feature-dimension`` for sequence and sentence features does not have to be the same.
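As an illustration with made-up numbers (5 tokens, feature dimension 3), the two kinds of features differ only in their first dimension:

```python
import numpy as np

tokens = ["my", "name", "is", "rasa", "bot"]
feature_dim = 3

# One feature vector per token: (number-of-tokens x feature-dimension).
sequence_features = np.random.rand(len(tokens), feature_dim)

# One feature vector for the whole utterance: (1 x feature-dimension);
# here derived by mean pooling, as a dense featurizer might do.
sentence_features = sequence_features.mean(axis=0, keepdims=True)

print(sequence_features.shape)  # (5, 3)
print(sentence_features.shape)  # (1, 3)
```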
.. _MitieFeaturizer:
MitieFeaturizer
~~~~~~~~~~~~~~~
:Short:
Creates a vector representation of user message and response (if specified) using the MITIE featurizer.
:Outputs: ``dense_features`` for user messages and responses
:Requires: :ref:`MitieNLP`
:Type: Dense featurizer
:Description:
Creates features for entity extraction, intent classification, and response classification using the MITIE
featurizer.
.. note::
NOT used by the ``MitieIntentClassifier`` component. But can be used by any component later in the pipeline
that makes use of ``dense_features``.
:Configuration:
The sentence vector, i.e. the vector of the complete utterance, can be calculated in two different ways, either via
mean or via max pooling. You can specify the pooling method in your configuration file with the option ``pooling``.
The default pooling method is set to ``mean``.
.. code-block:: yaml
pipeline:
- name: "MitieFeaturizer"
# Specify what pooling operation should be used to calculate the vector of
# the complete utterance. Available options: 'mean' and 'max'.
"pooling": "mean"
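The difference between the two pooling options can be sketched with plain NumPy (illustrative only; the featurizer performs this internally on its own token vectors):

```python
import numpy as np

# Dummy token vectors for a 3-token utterance, feature dimension 4.
token_vectors = np.array([
    [0.0, 1.0, 2.0, 3.0],
    [4.0, 0.0, 1.0, 2.0],
    [2.0, 2.0, 0.0, 1.0],
])

mean_pooled = token_vectors.mean(axis=0)  # "pooling": "mean"
max_pooled = token_vectors.max(axis=0)    # "pooling": "max"

print(mean_pooled)  # [2. 1. 1. 2.]
print(max_pooled)   # [4. 2. 2. 3.]
```

Either way the result is a single vector for the complete utterance, which is what the sentence features consist of.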
.. _SpacyFeaturizer:
SpacyFeaturizer
~~~~~~~~~~~~~~~
:Short:
Creates a vector representation of user message and response (if specified) using the spaCy featurizer.
:Outputs: ``dense_features`` for user messages and responses
:Requires: :ref:`SpacyNLP`
:Type: Dense featurizer
:Description:
Creates features for entity extraction, intent classification, and response classification using the spaCy
featurizer.
:Configuration:
The sentence vector, i.e. the vector of the complete utterance, can be calculated in two different ways, either via
mean or via max pooling. You can specify the pooling method in your configuration file with the option ``pooling``.
The default pooling method is set to ``mean``.
.. code-block:: yaml
pipeline:
- name: "SpacyFeaturizer"
# Specify what pooling operation should be used to calculate the vector of
# the complete utterance. Available options: 'mean' and 'max'.
"pooling": "mean"
.. _ConveRTFeaturizer:
ConveRTFeaturizer
~~~~~~~~~~~~~~~~~
:Short:
Creates a vector representation of user message and response (if specified) using
`ConveRT <https://github.com/PolyAI-LDN/polyai-models>`__ model.
:Outputs: ``dense_features`` for user messages and responses
:Requires: :ref:`ConveRTTokenizer`
:Type: Dense featurizer
:Description:
Creates features for entity extraction, intent classification, and response selection.
It uses the `default signature <https://github.com/PolyAI-LDN/polyai-models#tfhub-signatures>`_ to compute vector
representations of input text.
.. note::
Since ``ConveRT`` model is trained only on an English corpus of conversations, this featurizer should only
be used if your training data is in English language.
.. note::
To use ``ConveRTTokenizer``, install Rasa Open Source with ``pip install rasa[convert]``.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "ConveRTFeaturizer"
.. _LanguageModelFeaturizer:
LanguageModelFeaturizer
~~~~~~~~~~~~~~~~~~~~~~~
:Short:
Creates a vector representation of user message and response (if specified) using a pre-trained language model.
:Outputs: ``dense_features`` for user messages and responses
:Requires: :ref:`HFTransformersNLP` and :ref:`LanguageModelTokenizer`
:Type: Dense featurizer
:Description:
Creates features for entity extraction, intent classification, and response selection.
Uses the pre-trained language model specified in upstream :ref:`HFTransformersNLP` component to compute vector
representations of input text.
.. note::
Please make sure that you use a language model which is pre-trained on the same language corpus as that of your
training data.
:Configuration:
Include :ref:`HFTransformersNLP` and :ref:`LanguageModelTokenizer` components before this component. Use
:ref:`LanguageModelTokenizer` to ensure tokens are correctly set for all components throughout the pipeline.
.. code-block:: yaml
pipeline:
- name: "LanguageModelFeaturizer"
.. _RegexFeaturizer:
RegexFeaturizer
~~~~~~~~~~~~~~~
:Short: Creates a vector representation of user message using regular expressions.
:Outputs: ``sparse_features`` for user messages and ``tokens.pattern``
:Requires: ``tokens``
:Type: Sparse featurizer
:Description:
Creates features for entity extraction and intent classification.
During training the ``RegexFeaturizer`` creates a list of regular expressions defined in the training
data format.
For each regex, a feature will be set marking whether this expression was found in the user message or not.
All features will later be fed into an intent classifier / entity extractor to simplify classification (assuming
the classifier has learned during the training phase, that this set feature indicates a certain intent / entity).
Regex features for entity extraction are currently only supported by the :ref:`CRFEntityExtractor` and the
:ref:`diet-classifier` components!
:Configuration:
.. code-block:: yaml
pipeline:
- name: "RegexFeaturizer"
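Conceptually, each regular expression from the training data contributes one binary feature per message. A simplified sketch (the patterns below are hypothetical, not taken from any real training data):

```python
import re

# Hypothetical patterns as they might be defined in training data.
patterns = {"zipcode": r"\b\d{5}\b", "greet": r"\bhey\b"}


def regex_features(message: str) -> dict:
    # One flag per pattern: was the expression found in the message?
    return {name: bool(re.search(pattern, message))
            for name, pattern in patterns.items()}


print(regex_features("hey, my zipcode is 12345"))
# {'zipcode': True, 'greet': True}
```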
.. _CountVectorsFeaturizer:
CountVectorsFeaturizer
~~~~~~~~~~~~~~~~~~~~~~
:Short: Creates bag-of-words representation of user messages, intents, and responses.
:Outputs: ``sparse_features`` for user messages, intents, and responses
:Requires: ``tokens``
:Type: Sparse featurizer
:Description:
Creates features for intent classification and response selection.
Creates bag-of-words representation of user message, intent, and response using
`sklearn's CountVectorizer <https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html>`_.
All tokens which consist only of digits (e.g. 123 and 99 but not a123d) will be assigned to the same feature.
:Configuration:
See `sklearn's CountVectorizer docs <https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html>`_
for detailed description of the configuration parameters.
This featurizer can be configured to use word or character n-grams, using the ``analyzer`` configuration parameter.
By default ``analyzer`` is set to ``word`` so word token counts are used as features.
If you want to use character n-grams, set ``analyzer`` to ``char`` or ``char_wb``.
The lower and upper boundaries of the n-grams can be configured via the parameters ``min_ngram`` and ``max_ngram``.
By default both of them are set to ``1``.
.. note::
Option ``char_wb`` creates character n-grams only from text inside word boundaries;
n-grams at the edges of words are padded with space.
This option can be used to create `Subword Semantic Hashing <https://arxiv.org/abs/1810.07150>`_.
.. note::
For character n-grams do not forget to increase ``min_ngram`` and ``max_ngram`` parameters.
Otherwise the vocabulary will contain only single letters.
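A simplified sketch of what the ``char_wb`` analyzer produces for a single word (the word is padded with a space on each side before sliding the n-gram windows):

```python
def char_wb_ngrams(word: str, min_n: int, max_n: int) -> list:
    # Pad the word with spaces, then slide windows of each size,
    # mimicking the effect of analyzer: "char_wb" on one word.
    padded = " " + word + " "
    grams = []
    for n in range(min_n, max_n + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams


print(char_wb_ngrams("hi", 2, 3))
# [' h', 'hi', 'i ', ' hi', 'hi ']
```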
Handling Out-Of-Vocabulary (OOV) words:
.. note:: Enabled only if ``analyzer`` is ``word``.
Since the training is performed on limited vocabulary data, it cannot be guaranteed that during prediction
an algorithm will not encounter an unknown word (a word that was not seen during training).
In order to teach an algorithm how to treat unknown words, some words in training data can be substituted
by generic word ``OOV_token``.
In this case during prediction all unknown words will be treated as this generic word ``OOV_token``.
For example, one might create a separate intent ``outofscope`` in the training data containing messages with
a different number of ``OOV_token`` s and maybe some additional general words.
Then an algorithm will likely classify a message with unknown words as this intent ``outofscope``.
You can either set the ``OOV_token`` or a list of words ``OOV_words``:
- ``OOV_token`` set a keyword for unseen words; if training data contains ``OOV_token`` as words in some
messages, during prediction the words that were not seen during training will be substituted with
provided ``OOV_token``; if ``OOV_token=None`` (default behavior) words that were not seen during
training will be ignored during prediction time;
- ``OOV_words`` set a list of words to be treated as ``OOV_token`` during training; if a list of words
that should be treated as Out-Of-Vocabulary is known, it can be set to ``OOV_words`` instead of manually
changing it in training data or using custom preprocessor.
.. note::
This featurizer creates a bag-of-words representation by **counting** words,
so the number of ``OOV_token`` in the sentence might be important.
.. note::
Providing ``OOV_words`` is optional, training data can contain ``OOV_token`` input manually or by custom
additional preprocessor.
Unseen words will be substituted with ``OOV_token`` **only** if this token is present in the training
data or ``OOV_words`` list is provided.
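The substitution itself is straightforward; a simplified sketch (not the component's actual code):

```python
def replace_oov(tokens, oov_token="_oov_", vocabulary=None, oov_words=()):
    # Training time: words listed in OOV_words become the OOV token.
    # Prediction time: any word outside the learned vocabulary does.
    out = []
    for token in tokens:
        unknown = (token in oov_words or
                   (vocabulary is not None and token not in vocabulary))
        out.append(oov_token if unknown else token)
    return out


# Training time: map designated words to the generic token.
print(replace_oov(["how", "do", "frobnicate"], oov_words={"frobnicate"}))
# ['how', 'do', '_oov_']

# Prediction time: unseen words fall back to the same token.
vocab = {"how", "do", "_oov_"}
print(replace_oov(["how", "to", "yeet"], vocabulary=vocab))
# ['how', '_oov_', '_oov_']
```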
If you want to share the vocabulary between user messages and intents, you need to set the option
``use_shared_vocab`` to ``True``. In that case a common vocabulary set between tokens in intents and user messages
is built.
.. code-block:: yaml
pipeline:
- name: "CountVectorsFeaturizer"
# Analyzer to use, either 'word', 'char', or 'char_wb'
"analyzer": "word"
# Set the lower and upper boundaries for the n-grams
"min_ngram": 1
"max_ngram": 1
# Set the out-of-vocabulary token
"OOV_token": "_oov_"
# Whether to use a shared vocab
"use_shared_vocab": False
.. container:: toggle
.. container:: header
The above configuration parameters are the ones you should configure to fit your model to your data.
However, additional parameters exist that can be adapted.
.. code-block:: none
+-------------------+-------------------------+--------------------------------------------------------------+
| Parameter | Default Value | Description |
+===================+=========================+==============================================================+
| use_shared_vocab | False | If set to 'True' a common vocabulary is used for labels |
| | | and user message. |
+-------------------+-------------------------+--------------------------------------------------------------+
| analyzer | word | Whether the features should be made of word n-gram or |
| | | character n-grams. Option ‘char_wb’ creates character |
| | | n-grams only from text inside word boundaries; |
| | | n-grams at the edges of words are padded with space. |
| | | Valid values: 'word', 'char', 'char_wb'. |
+-------------------+-------------------------+--------------------------------------------------------------+
| token_pattern | r"(?u)\b\w\w+\b" | Regular expression used to detect tokens. |
| | | Only used if 'analyzer' is set to 'word'. |
+-------------------+-------------------------+--------------------------------------------------------------+
| strip_accents | None | Remove accents during the pre-processing step. |
| | | Valid values: 'ascii', 'unicode', 'None'. |
+-------------------+-------------------------+--------------------------------------------------------------+
| stop_words | None | A list of stop words to use. |
| | | Valid values: 'english' (uses an internal list of |
| | | English stop words), a list of custom stop words, or |
| | | 'None'. |
+-------------------+-------------------------+--------------------------------------------------------------+
| min_df | 1 | When building the vocabulary ignore terms that have a |
| | | document frequency strictly lower than the given threshold. |
+-------------------+-------------------------+--------------------------------------------------------------+
| max_df | 1 | When building the vocabulary ignore terms that have a |
| | | document frequency strictly higher than the given threshold |
| | | (corpus-specific stop words). |
+-------------------+-------------------------+--------------------------------------------------------------+
| min_ngram | 1 | The lower boundary of the range of n-values for different |
| | | word n-grams or char n-grams to be extracted. |
+-------------------+-------------------------+--------------------------------------------------------------+
| max_ngram | 1 | The upper boundary of the range of n-values for different |
| | | word n-grams or char n-grams to be extracted. |
+-------------------+-------------------------+--------------------------------------------------------------+
| max_features | None | If not 'None', build a vocabulary that only consider the top |
| | | max_features ordered by term frequency across the corpus. |
+-------------------+-------------------------+--------------------------------------------------------------+
| lowercase | True | Convert all characters to lowercase before tokenizing. |
+-------------------+-------------------------+--------------------------------------------------------------+
| OOV_token | None | Keyword for unseen words. |
+-------------------+-------------------------+--------------------------------------------------------------+
| OOV_words | [] | List of words to be treated as 'OOV_token' during training. |
+-------------------+-------------------------+--------------------------------------------------------------+
| alias | CountVectorFeaturizer | Alias name of featurizer. |
+-------------------+-------------------------+--------------------------------------------------------------+
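The difference between the ``word`` and ``char_wb`` analyzers is easiest to see on a small example. The following stdlib-only sketch illustrates the idea behind ``char_wb`` (an illustration only, not sklearn's actual implementation): each word is padded with a space before character n-grams are extracted, so no n-gram ever spans a word boundary.

```python
def char_wb_ngrams(text, n):
    """Sketch of the 'char_wb' analyzer: character n-grams are built per
    word, with a single space padding each word boundary, so no n-gram
    ever spans two words."""
    grams = []
    for word in text.lower().split():
        padded = " " + word + " "
        # words shorter than n are skipped here for simplicity
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

print(char_wb_ngrams("Hello bot", 3))
```

Note how the output contains boundary-padded grams such as ``" he"`` and ``"ot "``, but never a gram crossing the space between the two words.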
.. _LexicalSyntacticFeaturizer:
LexicalSyntacticFeaturizer
~~~~~~~~~~~~~~~~~~~~~~~~~~
:Short: Creates lexical and syntactic features for a user message to support entity extraction.
:Outputs: ``sparse_features`` for user messages
:Requires: ``tokens``
:Type: Sparse featurizer
:Description:
Creates features for entity extraction.
Moves with a sliding window over every token in the user message and creates features according to the
configuration (see below). As a default configuration is present, you don't need to specify a configuration.
:Configuration:
You can configure what kind of lexical and syntactic features the featurizer should extract.
The following features are available:
.. code-block:: none
============== ==========================================================================================
Feature Name Description
============== ==========================================================================================
BOS Checks if the token is at the beginning of the sentence.
EOS Checks if the token is at the end of the sentence.
low Checks if the token is lower case.
upper Checks if the token is upper case.
title Checks if the token starts with an uppercase character and all remaining characters are
lowercased.
digit Checks if the token contains just digits.
prefix5 Take the first five characters of the token.
prefix2 Take the first two characters of the token.
suffix5 Take the last five characters of the token.
suffix3 Take the last three characters of the token.
suffix2 Take the last two characters of the token.
suffix1 Take the last character of the token.
pos Take the Part-of-Speech tag of the token (``SpacyTokenizer`` required).
pos2 Take the first two characters of the Part-of-Speech tag of the token
(``SpacyTokenizer`` required).
============== ==========================================================================================
As the featurizer is moving over the tokens in a user message with a sliding window, you can define features for
previous tokens, the current token, and the next tokens in the sliding window.
You define the features as a [before, token, after] array.
If you want to define features for the token before, the current token, and the token after,
your features configuration would look like this:
.. code-block:: yaml
pipeline:
- name: LexicalSyntacticFeaturizer
"features": [
["low", "title", "upper"],
["BOS", "EOS", "low", "upper", "title", "digit"],
["low", "title", "upper"],
]
This configuration is also the default configuration.
.. note:: If you want to make use of ``pos`` or ``pos2`` you need to add ``SpacyTokenizer`` to your pipeline.
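The sliding-window mechanism described above can be sketched as follows (a simplified, stdlib-only illustration of the idea; the real featurizer supports more feature names and emits sparse matrices):

```python
# Default window: features for the previous, current, and next token.
WINDOW = [["low", "title", "upper"],
          ["BOS", "EOS", "low", "upper", "title", "digit"],
          ["low", "title", "upper"]]

def feature(token, name, position, n_tokens):
    """Compute a single boolean feature for one token."""
    checks = {
        "BOS": position == 0,
        "EOS": position == n_tokens - 1,
        "low": token.islower(),
        "upper": token.isupper(),
        "title": token.istitle(),
        "digit": token.isdigit(),
    }
    return checks[name]

def featurize(tokens):
    """Slide the window over the tokens and collect the configured features."""
    rows = []
    for i in range(len(tokens)):
        row = {}
        for offset, names in zip((-1, 0, 1), WINDOW):
            j = i + offset
            if 0 <= j < len(tokens):  # window positions outside the message are skipped
                for name in names:
                    row[f"{offset}:{name}"] = feature(tokens[j], name, j, len(tokens))
        rows.append(row)
    return rows

print(featurize(["Book", "a", "table"])[0])
```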
Intent Classifiers
------------------
Intent classifiers assign one of the intents defined in the domain file to incoming user messages.
MitieIntentClassifier
~~~~~~~~~~~~~~~~~~~~~
:Short:
MITIE intent classifier (using a
`text categorizer <https://github.com/mit-nlp/MITIE/blob/master/examples/python/text_categorizer_pure_model.py>`_)
:Outputs: ``intent``
:Requires: ``tokens`` for user message and :ref:`MitieNLP`
:Output-Example:
.. code-block:: json
{
"intent": {"name": "greet", "confidence": 0.98343}
}
:Description:
This classifier uses MITIE to perform intent classification. The underlying classifier
is using a multi-class linear SVM with a sparse linear kernel (see
`MITIE trainer code <https://github.com/mit-nlp/MITIE/blob/master/mitielib/src/text_categorizer_trainer.cpp#L222>`_).
.. note:: This classifier does not rely on any featurizer as it extracts features on its own.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "MitieIntentClassifier"
SklearnIntentClassifier
~~~~~~~~~~~~~~~~~~~~~~~
:Short: Sklearn intent classifier
:Outputs: ``intent`` and ``intent_ranking``
:Requires: ``dense_features`` for user messages
:Output-Example:
.. code-block:: json
{
"intent": {"name": "greet", "confidence": 0.78343},
"intent_ranking": [
{
"confidence": 0.1485910906220309,
"name": "goodbye"
},
{
"confidence": 0.08161531595656784,
"name": "restaurant_search"
}
]
}
:Description:
The sklearn intent classifier trains a linear SVM which gets optimized using a grid search. It also provides
rankings of the labels that did not "win". The ``SklearnIntentClassifier`` needs to be preceded by a dense
featurizer in the pipeline. This dense featurizer creates the features used for the classification.
For more information about the algorithm itself, take a look at the
`GridSearchCV <https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html>`__
documentation.
:Configuration:
During the training of the SVM a hyperparameter search is run to find the best parameter set.
In the configuration you can specify the parameters that will get tried.
.. code-block:: yaml
pipeline:
- name: "SklearnIntentClassifier"
# Specifies the list of regularization values to
# cross-validate over for C-SVM.
# This is used with the ``kernel`` hyperparameter in GridSearchCV.
C: [1, 2, 5, 10, 20, 100]
# Specifies the kernel to use with C-SVM.
# This is used with the ``C`` hyperparameter in GridSearchCV.
kernels: ["linear"]
# Gamma parameter of the C-SVM.
"gamma": [0.1]
# We try to find a good number of cross folds to use during
# intent training, this specifies the max number of folds.
"max_cross_validation_folds": 5
# Scoring function used for evaluating the hyper parameters.
# This can be a name or a function.
"scoring_function": "f1_weighted"
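Conceptually, the hyperparameter search evaluates every combination of the configured values and keeps the best-scoring one. The following stdlib-only sketch illustrates this; the ``score`` callback here is a hypothetical stand-in for the cross-validated ``f1_weighted`` score that the real search computes.

```python
from itertools import product

def grid_search(param_grid, score):
    """Evaluate every parameter combination and keep the best-scoring one."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, values))
        current = score(params)
        if current > best_score:
            best_params, best_score = params, current
    return best_params, best_score

grid = {"C": [1, 2, 5, 10, 20, 100], "kernel": ["linear"]}
# Toy stand-in score; the real search uses cross-validated f1_weighted.
best, best_score = grid_search(grid, lambda p: -abs(p["C"] - 10))
print(best)  # {'C': 10, 'kernel': 'linear'}
```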
.. _keyword_intent_classifier:
KeywordIntentClassifier
~~~~~~~~~~~~~~~~~~~~~~~
:Short: Simple keyword matching intent classifier, intended for small, short-term projects.
:Outputs: ``intent``
:Requires: Nothing
:Output-Example:
.. code-block:: json
{
"intent": {"name": "greet", "confidence": 1.0}
}
:Description:
This classifier works by searching a message for keywords.
The matching is case sensitive by default and searches only for exact matches of the keyword-string in the user
message.
The keywords for an intent are the examples of that intent in the NLU training data.
This means the entire example is the keyword, not the individual words in the example.
.. note:: This classifier is intended only for small projects or to get started. If
you have only a small amount of NLU training data, take a look at the recommended pipelines in
:ref:`choosing-a-pipeline`.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "KeywordIntentClassifier"
case_sensitive: True
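The matching logic can be sketched in a few lines (a simplified illustration, not the actual implementation; ``training_examples`` is a hypothetical mapping from intent names to example messages):

```python
def keyword_intent(message, training_examples, case_sensitive=True):
    """Sketch of keyword matching: every full training example acts as a
    keyword, and an intent matches if one of its examples occurs as a
    substring of the message."""
    for intent, examples in training_examples.items():
        for example in examples:
            haystack = message if case_sensitive else message.lower()
            needle = example if case_sensitive else example.lower()
            if needle in haystack:
                return {"name": intent, "confidence": 1.0}
    return None  # no keyword found

examples = {"greet": ["hey", "hello"], "goodbye": ["bye"]}
print(keyword_intent("hello there", examples))  # {'name': 'greet', 'confidence': 1.0}
```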
DIETClassifier
~~~~~~~~~~~~~~
:Short: Dual Intent Entity Transformer (DIET) used for intent classification and entity extraction
:Description:
You can find the detailed description of the :ref:`diet-classifier` under the section
`Combined Entity Extractors and Intent Classifiers`.
Entity Extractors
-----------------
Entity extractors extract entities, such as person names or locations, from the user message.
MitieEntityExtractor
~~~~~~~~~~~~~~~~~~~~
:Short: MITIE entity extraction (using a `MITIE NER trainer <https://github.com/mit-nlp/MITIE/blob/master/mitielib/src/ner_trainer.cpp>`_)
:Outputs: ``entities``
:Requires: :ref:`MitieNLP` and ``tokens``
:Output-Example:
.. code-block:: json
{
"entities": [{
"value": "New York City",
"start": 20,
"end": 33,
"confidence": null,
"entity": "city",
"extractor": "MitieEntityExtractor"
}]
}
:Description:
``MitieEntityExtractor`` uses MITIE entity extraction to find entities in a message. The underlying classifier
uses a multi-class linear SVM with a sparse linear kernel and custom features.
The MITIE component does not provide entity confidence values.
.. note:: This entity extractor does not rely on any featurizer as it extracts features on its own.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "MitieEntityExtractor"
.. _SpacyEntityExtractor:
SpacyEntityExtractor
~~~~~~~~~~~~~~~~~~~~
:Short: spaCy entity extraction
:Outputs: ``entities``
:Requires: :ref:`SpacyNLP`
:Output-Example:
.. code-block:: json
{
"entities": [{
"value": "New York City",
"start": 20,
"end": 33,
"confidence": null,
"entity": "city",
"extractor": "SpacyEntityExtractor"
}]
}
:Description:
Using spaCy this component predicts the entities of a message. spaCy uses a statistical BILOU transition model.
As of now, this component can only use spaCy's built-in entity extraction models and cannot be retrained.
This extractor does not provide any confidence scores.
:Configuration:
Configure which dimensions, i.e. entity types, the spaCy component
should extract. A full list of available dimensions can be found in
the `spaCy documentation <https://spacy.io/api/annotation#section-named-entities>`_.
Leaving the dimensions option unspecified will extract all available dimensions.
.. code-block:: yaml
pipeline:
- name: "SpacyEntityExtractor"
# dimensions to extract
dimensions: ["PERSON", "LOC", "ORG", "PRODUCT"]
EntitySynonymMapper
~~~~~~~~~~~~~~~~~~~
:Short: Maps synonymous entity values to the same value.
:Outputs: Modifies existing entities that previous entity extraction components found.
:Requires: Nothing
:Description:
If the training data contains defined synonyms, this component will make sure that detected entity values will
be mapped to the same value. For example, if your training data contains the following examples:
.. code-block:: json
[
{
"text": "I moved to New York City",
"intent": "inform_relocation",
"entities": [{
"value": "nyc",
"start": 11,
"end": 24,
"entity": "city",
}]
},
{
"text": "I got a new flat in NYC.",
"intent": "inform_relocation",
"entities": [{
"value": "nyc",
"start": 20,
"end": 23,
"entity": "city",
}]
}
]
This component will allow you to map the entities ``New York City`` and ``NYC`` to ``nyc``. The entity
extraction will return ``nyc`` even though the message contains ``NYC``. When this component changes an
existing entity, it appends itself to the processor list of this entity.
:Configuration:
.. code-block:: yaml
pipeline:
- name: "EntitySynonymMapper"
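The mapping step can be sketched as follows (a simplified illustration; the real component loads the synonym table from the trained model):

```python
def map_synonyms(entities, synonyms):
    """Sketch of synonym mapping: replace extracted entity values by their
    canonical value and record this component as a processor."""
    for entity in entities:
        value = str(entity["value"]).lower()
        if value in synonyms and synonyms[value] != entity["value"]:
            entity["value"] = synonyms[value]
            entity.setdefault("processors", []).append("EntitySynonymMapper")
    return entities

# Synonym table as it could be learned from the training data above.
synonyms = {"new york city": "nyc", "nyc": "nyc"}
entities = [{"value": "New York City", "entity": "city"}]
print(map_synonyms(entities, synonyms))
# [{'value': 'nyc', 'entity': 'city', 'processors': ['EntitySynonymMapper']}]
```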
.. _CRFEntityExtractor:
CRFEntityExtractor
~~~~~~~~~~~~~~~~~~
:Short: Conditional random field (CRF) entity extraction
:Outputs: ``entities``
:Requires: ``tokens`` and ``dense_features`` (optional)
:Output-Example:
.. code-block:: json
{
"entities": [{
"value": "New York City",
"start": 20,
"end": 33,
"entity": "city",
"confidence": 0.874,
"extractor": "CRFEntityExtractor"
}]
}
:Description:
This component implements a conditional random fields (CRF) to do named entity recognition.
CRFs can be thought of as an undirected Markov chain where the time steps are words
and the states are entity classes. Features of the words (capitalization, POS tagging,
etc.) give probabilities to certain entity classes, as do transitions between
neighbouring entity tags: the most likely set of tags is then calculated and returned.
:Configuration:
``CRFEntityExtractor`` has a list of default features to use.
However, you can overwrite the default configuration.
The following features are available:
.. code-block:: none
============== ==========================================================================================
Feature Name Description
============== ==========================================================================================
low Checks if the token is lower case.
upper Checks if the token is upper case.
title Checks if the token starts with an uppercase character and all remaining characters are
lowercased.
digit Checks if the token contains just digits.
prefix5 Take the first five characters of the token.
prefix2 Take the first two characters of the token.
suffix5 Take the last five characters of the token.
suffix3 Take the last three characters of the token.
suffix2 Take the last two characters of the token.
suffix1 Take the last character of the token.
pos Take the Part-of-Speech tag of the token (``SpacyTokenizer`` required).
pos2 Take the first two characters of the Part-of-Speech tag of the token
(``SpacyTokenizer`` required).
pattern Take the patterns defined by ``RegexFeaturizer``.
bias Add an additional "bias" feature to the list of features.
============== ==========================================================================================
As the featurizer is moving over the tokens in a user message with a sliding window, you can define features for
previous tokens, the current token, and the next tokens in the sliding window.
You define the features as a [before, token, after] array.
Additionally, you can set a flag to determine whether to use the BILOU tagging schema or not.
- ``BILOU_flag`` determines whether to use BILOU tagging or not. Default ``True``.
.. code-block:: yaml
pipeline:
- name: "CRFEntityExtractor"
# BILOU_flag determines whether to use BILOU tagging or not.
"BILOU_flag": True
# features to extract in the sliding window
"features": [
["low", "title", "upper"],
[
"bias",
"low",
"prefix5",
"prefix2",
"suffix5",
"suffix3",
"suffix2",
"upper",
"title",
"digit",
"pattern",
],
["low", "title", "upper"],
]
# The maximum number of iterations for optimization algorithms.
"max_iterations": 50
# weight of the L1 regularization
"L1_c": 0.1
# weight of the L2 regularization
"L2_c": 0.1
# Name of dense featurizers to use.
# If list is empty all available dense features are used.
"featurizers": []
.. note::
If POS features are used (``pos`` or ``pos2``), you need to have ``SpacyTokenizer`` in your pipeline.
.. note::
If ``pattern`` features are used, you need to have ``RegexFeaturizer`` in your pipeline.
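When ``BILOU_flag`` is ``True``, entity annotations are converted to per-token tags before training. The following sketch illustrates the scheme; the entity spans are given as hypothetical ``(start, end, label)`` token-index tuples with the end index exclusive.

```python
def bilou_tags(tokens, entity_spans):
    """Sketch of the BILOU tagging scheme: Beginning / Inside / Last /
    Unit tokens of an entity, Outside for everything else."""
    tags = ["O"] * len(tokens)
    for start, end, label in entity_spans:  # token indices, end exclusive
        if end - start == 1:
            tags[start] = f"U-{label}"
        else:
            tags[start] = f"B-{label}"
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{label}"
            tags[end - 1] = f"L-{label}"
    return tags

tokens = ["I", "moved", "to", "New", "York", "City"]
print(bilou_tags(tokens, [(3, 6, "city")]))
# ['O', 'O', 'O', 'B-city', 'I-city', 'L-city']
```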
.. _DucklingHTTPExtractor:
DucklingHTTPExtractor
~~~~~~~~~~~~~~~~~~~~~
:Short: Duckling lets you extract common entities like dates,
amounts of money, distances, and others in a number of languages.
:Outputs: ``entities``
:Requires: Nothing
:Output-Example:
.. code-block:: json
{
"entities": [{
"end": 53,
"entity": "time",
"start": 48,
"value": "2017-04-10T00:00:00.000+02:00",
"confidence": 1.0,
"extractor": "DucklingHTTPExtractor"
}]
}
:Description:
To use this component you need to run a duckling server. The easiest
option is to spin up a docker container using
``docker run -p 8000:8000 rasa/duckling``.
Alternatively, you can `install duckling directly on your
machine <https://github.com/facebook/duckling#quickstart>`_ and start the server.
Duckling can recognize dates, numbers, distances, and other structured entities
and normalize them.
Please be aware that duckling tries to extract as many entity types as possible without
providing a ranking. For example, if you specify both ``number`` and ``time`` as dimensions
for the duckling component, the component will extract two entities: ``10`` as a number and
``in 10 minutes`` as a time from the text ``I will be there in 10 minutes``. In such a
situation, your application would have to decide which entity type is the correct one.
The extractor will always return a confidence of ``1.0``, as it is a
rule-based system.
:Configuration:
Configure which dimensions, i.e. entity types, the duckling component
should extract. A full list of available dimensions can be found in
the `duckling documentation <https://duckling.wit.ai/>`_.
Leaving the dimensions option unspecified will extract all available dimensions.
.. code-block:: yaml
pipeline:
- name: "DucklingHTTPExtractor"
# url of the running duckling server
url: "http://localhost:8000"
# dimensions to extract
dimensions: ["time", "number", "amount-of-money", "distance"]
# allows you to configure the locale, by default the language is
# used
locale: "de_DE"
# if not set the default timezone of Duckling is going to be used
# needed to calculate dates from relative expressions like "tomorrow"
timezone: "Europe/Berlin"
# Timeout for receiving response from http url of the running duckling server
# if not set the default timeout of duckling http url is set to 3 seconds.
timeout : 3
DIETClassifier
~~~~~~~~~~~~~~
:Short: Dual Intent Entity Transformer (DIET) used for intent classification and entity extraction
:Description:
You can find the detailed description of the :ref:`diet-classifier` under the section
`Combined Entity Extractors and Intent Classifiers`.
Selectors
----------
Selectors predict a bot response from a set of candidate responses.
.. _response-selector:
ResponseSelector
~~~~~~~~~~~~~~~~
:Short: Response Selector
:Outputs: A dictionary keyed by ``direct_response_intent``, with a value containing ``response`` and ``ranking``
:Requires: ``dense_features`` and/or ``sparse_features`` for user messages and response
:Output-Example:
.. code-block:: json
{
"response_selector": {
"faq": {
"response": {"confidence": 0.7356462617, "name": "Supports 3.5, 3.6 and 3.7, recommended version is 3.6"},
"ranking": [
{"confidence": 0.7356462617, "name": "Supports 3.5, 3.6 and 3.7, recommended version is 3.6"},
{"confidence": 0.2134543431, "name": "You can ask me about how to get started"}
]
}
}
}
:Description:
The Response Selector component can be used to build a response retrieval model that directly predicts a bot response from
a set of candidate responses. The prediction of this model is used by :ref:`retrieval-actions`.
It embeds user inputs and response labels into the same space and follows the exact same
neural network architecture and optimization as the :ref:`diet-classifier`.
.. note:: If during prediction time a message contains **only** words unseen during training
and no Out-Of-Vocabulary preprocessor was used, an empty response ``None`` is predicted with confidence
``0.0``. This might happen if you only use the :ref:`CountVectorsFeaturizer` with a ``word`` analyzer
as featurizer. If you use the ``char_wb`` analyzer, you should always get a response with a confidence
value ``> 0.0``.
:Configuration:
The algorithm includes almost all the hyperparameters that :ref:`diet-classifier` uses.
If you want to adapt your model, start by modifying the following parameters:
- ``epochs``:
This parameter sets the number of times the algorithm will see the training data (default: ``300``).
One ``epoch`` equals one forward pass and one backward pass over all the training examples.
Sometimes the model needs more epochs to properly learn.
Sometimes more epochs don't influence the performance.
The lower the number of epochs the faster the model is trained.
- ``hidden_layers_sizes``:
This parameter allows you to define the number of feed forward layers and their output
dimensions for user messages and intents (default: ``text: [256, 128], label: [256, 128]``).
Every entry in the list corresponds to a feed forward layer.
For example, if you set ``text: [256, 128]``, we will add two feed forward layers in front of
the transformer. The vectors of the input tokens (coming from the user message) will be passed on to those
layers. The first layer will have an output dimension of 256 and the second layer will have an output
dimension of 128. If an empty list is used, no feed forward layer will be
added.
Make sure to use only positive integer values. Usually, powers of two are used.
Also, it is common practice to use decreasing values in the list: each value is
smaller than or equal to the one before.
- ``embedding_dimension``:
This parameter defines the output dimension of the embedding layers used inside the model (default: ``20``).
We are using multiple embeddings layers inside the model architecture.
For example, the vector of the complete utterance and the intent is passed on to an embedding layer before
they are compared and the loss is calculated.
- ``number_of_transformer_layers``:
This parameter sets the number of transformer layers to use (default: ``0``).
The number of transformer layers corresponds to the transformer blocks to use for the model.
- ``transformer_size``:
This parameter sets the number of units in the transformer (default: ``None``).
The vectors coming out of the transformers will have the given ``transformer_size``.
- ``weight_sparsity``:
This parameter defines the fraction of kernel weights that are set to 0 for all feed forward layers
in the model (default: ``0.8``). The value should be between 0 and 1. If you set ``weight_sparsity``
to 0, no kernel weights will be set to 0, the layer acts as a standard feed forward layer. You should not
set ``weight_sparsity`` to 1 as this would result in all kernel weights being 0, i.e. the model is not able
to learn.
|
In addition, the component can also be configured to train a response selector for a particular retrieval intent.
The parameter ``retrieval_intent`` sets the name of the intent for which this response selector model is trained.
Default is ``None``, i.e. the model is trained for all retrieval intents.
|
.. container:: toggle
.. container:: header
The above configuration parameters are the ones you should configure to fit your model to your data.
However, additional parameters exist that can be adapted.
.. code-block:: none
+---------------------------------+-------------------+--------------------------------------------------------------+
| Parameter | Default Value | Description |
+=================================+===================+==============================================================+
| hidden_layers_sizes | text: [256, 128] | Hidden layer sizes for layers before the embedding layers |
| | label: [256, 128] | for user messages and labels. The number of hidden layers is |
|                                 |                   | equal to the length of the corresponding list.               |
+---------------------------------+-------------------+--------------------------------------------------------------+
| share_hidden_layers | False | Whether to share the hidden layer weights between user |
| | | messages and labels. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| transformer_size | None | Number of units in transformer. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| number_of_transformer_layers | 0 | Number of transformer layers. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| number_of_attention_heads | 4 | Number of attention heads in transformer. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| use_key_relative_attention | False | If 'True' use key relative embeddings in attention. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| use_value_relative_attention | False | If 'True' use value relative embeddings in attention. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| max_relative_position | None | Maximum position for relative embeddings. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| unidirectional_encoder | False | Use a unidirectional or bidirectional encoder. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| batch_size | [64, 256] | Initial and final value for batch sizes. |
| | | Batch size will be linearly increased for each epoch. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| batch_strategy | "balanced" | Strategy used when creating batches. |
| | | Can be either 'sequence' or 'balanced'. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| epochs | 300 | Number of epochs to train. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| random_seed | None | Set random seed to any 'int' to get reproducible results. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| learning_rate | 0.001 | Initial learning rate for the optimizer. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| embedding_dimension | 20 | Dimension size of embedding vectors. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| dense_dimension | text: 512 | Dense dimension for sparse features to use if no dense |
| | label: 512 | features are present. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| concat_dimension | text: 512 | Concat dimension for sequence and sentence features. |
| | label: 512 | |
+---------------------------------+-------------------+--------------------------------------------------------------+
| number_of_negative_examples | 20 | The number of incorrect labels. The algorithm will minimize |
| | | their similarity to the user input during training. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| similarity_type | "auto" | Type of similarity measure to use, either 'auto' or 'cosine' |
| | | or 'inner'. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| loss_type | "softmax" | The type of the loss function, either 'softmax' or 'margin'. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| ranking_length | 10 | Number of top actions to normalize scores for loss type |
| | | 'softmax'. Set to 0 to turn off normalization. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| maximum_positive_similarity | 0.8 | Indicates how similar the algorithm should try to make |
| | | embedding vectors for correct labels. |
| | | Should be 0.0 < ... < 1.0 for 'cosine' similarity type. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| maximum_negative_similarity | -0.4 | Maximum negative similarity for incorrect labels. |
| | | Should be -1.0 < ... < 1.0 for 'cosine' similarity type. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| use_maximum_negative_similarity | True | If 'True' the algorithm only minimizes maximum similarity |
| | | over incorrect intent labels, used only if 'loss_type' is |
| | | set to 'margin'. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| scale_loss | True | Scale loss inverse proportionally to confidence of correct |
| | | prediction. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| regularization_constant | 0.002 | The scale of regularization. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| negative_margin_scale | 0.8 | The scale of how important is to minimize the maximum |
| | | similarity between embeddings of different labels. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| weight_sparsity | 0.8 | Sparsity of the weights in dense layers. |
| | | Value should be between 0 and 1. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| drop_rate | 0.2 | Dropout rate for encoder. Value should be between 0 and 1. |
| | | The higher the value the higher the regularization effect. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| drop_rate_attention | 0.0 | Dropout rate for attention. Value should be between 0 and 1. |
| | | The higher the value the higher the regularization effect. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| use_sparse_input_dropout | False | If 'True' apply dropout to sparse input tensors. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| use_dense_input_dropout | False | If 'True' apply dropout to dense input tensors. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| evaluate_every_number_of_epochs | 20 | How often to calculate validation accuracy. |
| | | Set to '-1' to evaluate just once at the end of training. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| evaluate_on_number_of_examples | 0 | How many examples to use for hold out validation set. |
| | | Large values may hurt performance, e.g. model accuracy. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| use_masked_language_model | False | If 'True' random tokens of the input message will be masked |
| | | and the model should predict those tokens. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| retrieval_intent | None | Name of the intent for which this response selector model is |
| | | trained. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| tensorboard_log_directory | None | If you want to use tensorboard to visualize training |
| | | metrics, set this option to a valid output directory. You |
| | | can view the training metrics after training in tensorboard |
| | | via 'tensorboard --logdir <path-to-given-directory>'. |
+---------------------------------+-------------------+--------------------------------------------------------------+
| tensorboard_log_level | "epoch" | Define when training metrics for tensorboard should be |
| | | logged. Either after every epoch ("epoch") or for every |
| | | training step ("minibatch"). |
+---------------------------------+-------------------+--------------------------------------------------------------+
| featurizers | [] | List of featurizer names (alias names). Only features |
| | | coming from the listed names are used. If list is empty |
| | | all available features are used. |
+---------------------------------+-------------------+--------------------------------------------------------------+
.. note:: For ``cosine`` similarity ``maximum_positive_similarity`` and ``maximum_negative_similarity`` should
be between ``-1`` and ``1``.
.. note:: There is an option to use linearly increasing batch size. The idea comes from
`<https://arxiv.org/abs/1711.00489>`_.
In order to do it pass a list to ``batch_size``, e.g. ``"batch_size": [64, 256]`` (default behavior).
If constant ``batch_size`` is required, pass an ``int``, e.g. ``"batch_size": 64``.
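A rough Python sketch of the linearly increasing batch-size schedule described in the note above (the helper function is hypothetical, for illustration only — it is not the library's actual implementation):

```python
# Sketch of a linearly increasing batch-size schedule: interpolate
# between the first and last configured values over the epochs.
def batch_size_for_epoch(epoch, epochs, batch_size=(64, 256)):
    if isinstance(batch_size, int):        # constant batch size
        return batch_size
    low, high = batch_size
    if epochs <= 1:
        return low
    fraction = epoch / (epochs - 1)        # 0.0 on first epoch, 1.0 on last
    return int(low + fraction * (high - low))

sizes = [batch_size_for_epoch(e, 300) for e in range(300)]
```

With the default ``[64, 256]`` the schedule starts at 64 and grows monotonically to 256 by the final epoch; passing an ``int`` keeps it constant.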
.. note:: Parameter ``maximum_negative_similarity`` is set to a negative value to mimic the original
starspace algorithm in the case ``maximum_negative_similarity = maximum_positive_similarity``
and ``use_maximum_negative_similarity = False``.
See `starspace paper <https://arxiv.org/abs/1709.03856>`_ for details.
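A hedged numpy sketch of the ``margin`` loss shape the note above refers to, simplified from the starspace idea. The parameter names mirror the table; the function itself is illustrative, not the library's actual code:

```python
import numpy as np

def margin_loss(sim_pos, sim_negs, maximum_positive_similarity=0.8,
                maximum_negative_similarity=-0.4,
                use_maximum_negative_similarity=True):
    # Pull the similarity with the correct label up towards mu_pos ...
    loss = max(0.0, maximum_positive_similarity - sim_pos)
    # ... and push similarities with incorrect labels below -mu_neg.
    if use_maximum_negative_similarity:
        # only the highest-scoring (hardest) negative is penalized
        loss += max(0.0, maximum_negative_similarity + np.max(sim_negs))
    else:
        loss += np.sum(np.maximum(0.0, maximum_negative_similarity + sim_negs))
    return loss
```

With the defaults, the loss is zero once the correct label scores above 0.8 and every incorrect label scores below 0.4.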
Combined Entity Extractors and Intent Classifiers
-------------------------------------------------

.. _diet-classifier:

DIETClassifier
~~~~~~~~~~~~~~
:Short: Dual Intent and Entity Transformer (DIET) used for intent classification and entity extraction
:Outputs: ``entities``, ``intent`` and ``intent_ranking``
:Requires: ``dense_features`` and/or ``sparse_features`` for user message and optionally the intent
:Output-Example:
.. code-block:: json

    {
        "intent": {"name": "greet", "confidence": 0.8343},
        "intent_ranking": [
            {
                "confidence": 0.385910906220309,
                "name": "goodbye"
            },
            {
                "confidence": 0.28161531595656784,
                "name": "restaurant_search"
            }
        ],
        "entities": [{
            "end": 53,
            "entity": "time",
            "start": 48,
            "value": "2017-04-10T00:00:00.000+02:00",
            "confidence": 1.0,
            "extractor": "DIETClassifier"
        }]
    }
:Description:
DIET (Dual Intent and Entity Transformer) is a multi-task architecture for intent classification and entity
recognition. The architecture is based on a transformer which is shared for both tasks.
A sequence of entity labels is predicted through a Conditional Random Field (CRF) tagging layer on top of the
transformer output sequence corresponding to the input sequence of tokens.
For the intent labels the transformer output for the complete utterance and intent labels are embedded into a
single semantic vector space. We use the dot-product loss to maximize the similarity with the target label and
minimize similarities with negative samples.
If you want to learn more about the model, please take a look at our
`videos <https://www.youtube.com/playlist?list=PL75e0qA87dlG-za8eLI6t0_Pbxafk-cxb>`__ where we explain the model
architecture in detail.
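The dot-product loss described above can be illustrated with a small numpy sketch. The names and shapes here are hypothetical, chosen for clarity — this is not DIET's actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)
utterance_emb = rng.normal(size=8)        # embedded complete utterance
label_embs = rng.normal(size=(5, 8))      # target label + 4 negative samples

# Dot-product similarity of the utterance to every candidate label;
# index 0 is the correct (target) label.
sims = label_embs @ utterance_emb

# Softmax-style loss: maximize similarity with the target while
# pushing down the similarities of the negative samples.
loss = -sims[0] + np.log(np.sum(np.exp(sims)))
```

The loss is the negative log-probability of the correct label under a softmax over all candidate similarities, so it is zero only when the target dominates every negative sample.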
.. note:: If during prediction time a message contains **only** words unseen during training
and no Out-Of-Vocabulary preprocessor was used, an empty intent ``None`` is predicted with confidence
``0.0``. This might happen if you only use the :ref:`CountVectorsFeaturizer` with a ``word`` analyzer
as featurizer. If you use the ``char_wb`` analyzer, you should always get an intent with a confidence
value ``> 0.0``.
:Configuration:
If you want to use the ``DIETClassifier`` just for intent classification, set ``entity_recognition`` to ``False``.
If you want to do only entity recognition, set ``intent_classification`` to ``False``.
By default ``DIETClassifier`` does both, i.e. ``entity_recognition`` and ``intent_classification`` are set to
``True``.
You can define a number of hyperparameters to adapt the model.
If you want to adapt your model, start by modifying the following parameters:
- ``epochs``:
This parameter sets the number of times the algorithm will see the training data (default: ``300``).
One ``epoch`` equals one forward pass and one backward pass over all the training examples.
Sometimes the model needs more epochs to properly learn.
Sometimes more epochs don't influence the performance.
The lower the number of epochs the faster the model is trained.
- ``hidden_layers_sizes``:
This parameter allows you to define the number of feed forward layers and their output
dimensions for user messages and intents (default: ``text: [], label: []``).
Every entry in the list corresponds to a feed forward layer.
For example, if you set ``text: [256, 128]``, we will add two feed forward layers in front of
the transformer. The vectors of the input tokens (coming from the user message) will be passed on to those
layers. The first layer will have an output dimension of 256 and the second layer will have an output
dimension of 128. If an empty list is used (default behavior), no feed forward layer will be
added.
Make sure to use only positive integer values. Usually, powers of two are used.
Also, it is common practice to have decreasing values in the list: each value is smaller than
or equal to the value before it.
- ``embedding_dimension``:
This parameter defines the output dimension of the embedding layers used inside the model (default: ``20``).
We are using multiple embedding layers inside the model architecture.
For example, the vector of the complete utterance and the intent is passed on to an embedding layer before
they are compared and the loss is calculated.
- ``number_of_transformer_layers``:
This parameter sets the number of transformer layers to use (default: ``2``).
The number of transformer layers corresponds to the transformer blocks to use for the model.
- ``transformer_size``:
This parameter sets the number of units in the transformer (default: ``256``).
The vectors coming out of the transformers will have the given ``transformer_size``.
- ``weight_sparsity``:
This parameter defines the fraction of kernel weights that are set to 0 for all feed forward layers
in the model (default: ``0.8``). The value should be between 0 and 1. If you set ``weight_sparsity``
to 0, no kernel weights will be set to 0, the layer acts as a standard feed forward layer. You should not
set ``weight_sparsity`` to 1 as this would result in all kernel weights being 0, i.e. the model is not able
to learn.
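As a conceptual numpy illustration of ``weight_sparsity``: a fixed fraction of kernel weights is zeroed out. How the mask is chosen here is an assumption made for the sketch, not the actual implementation:

```python
import numpy as np

def sparsify_kernel(kernel, weight_sparsity=0.8, seed=0):
    rng = np.random.default_rng(seed)
    # keep roughly (1 - weight_sparsity) of the weights, zero the rest
    keep = rng.random(kernel.shape) >= weight_sparsity
    return kernel * keep

kernel = np.random.default_rng(1).normal(size=(128, 64))
sparse_kernel = sparsify_kernel(kernel)
fraction_zero = np.mean(sparse_kernel == 0.0)   # close to weight_sparsity
```

Setting ``weight_sparsity`` to 0 leaves the kernel untouched, while a value of 1 would zero every weight, which is why the extremes are discouraged above.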
.. container:: toggle
.. container:: header
The above configuration parameters are the ones you should configure to fit your model to your data.
However, additional parameters exist that can be adapted.
.. code-block:: none
+---------------------------------+------------------+--------------------------------------------------------------+
| Parameter | Default Value | Description |
+=================================+==================+==============================================================+
| hidden_layers_sizes | text: [] | Hidden layer sizes for layers before the embedding layers |
| | label: [] | for user messages and labels. The number of hidden layers is |
|                                 |                  | equal to the length of the corresponding list.               |
+---------------------------------+------------------+--------------------------------------------------------------+
| share_hidden_layers | False | Whether to share the hidden layer weights between user |
| | | messages and labels. |
+---------------------------------+------------------+--------------------------------------------------------------+
| transformer_size | 256 | Number of units in transformer. |
+---------------------------------+------------------+--------------------------------------------------------------+
| number_of_transformer_layers | 2 | Number of transformer layers. |
+---------------------------------+------------------+--------------------------------------------------------------+
| number_of_attention_heads | 4 | Number of attention heads in transformer. |
+---------------------------------+------------------+--------------------------------------------------------------+
| use_key_relative_attention | False | If 'True' use key relative embeddings in attention. |
+---------------------------------+------------------+--------------------------------------------------------------+
| use_value_relative_attention | False | If 'True' use value relative embeddings in attention. |
+---------------------------------+------------------+--------------------------------------------------------------+
| max_relative_position | None | Maximum position for relative embeddings. |
+---------------------------------+------------------+--------------------------------------------------------------+
| unidirectional_encoder | False | Use a unidirectional or bidirectional encoder. |
+---------------------------------+------------------+--------------------------------------------------------------+
| batch_size | [64, 256] | Initial and final value for batch sizes. |
| | | Batch size will be linearly increased for each epoch. |
+---------------------------------+------------------+--------------------------------------------------------------+
| batch_strategy | "balanced" | Strategy used when creating batches. |
| | | Can be either 'sequence' or 'balanced'. |
+---------------------------------+------------------+--------------------------------------------------------------+
| epochs | 300 | Number of epochs to train. |
+---------------------------------+------------------+--------------------------------------------------------------+
| random_seed | None | Set random seed to any 'int' to get reproducible results. |
+---------------------------------+------------------+--------------------------------------------------------------+
| learning_rate | 0.001 | Initial learning rate for the optimizer. |
+---------------------------------+------------------+--------------------------------------------------------------+
| embedding_dimension | 20 | Dimension size of embedding vectors. |
+---------------------------------+------------------+--------------------------------------------------------------+
| dense_dimension | text: 512 | Dense dimension for sparse features to use if no dense |
| | label: 20 | features are present. |
+---------------------------------+------------------+--------------------------------------------------------------+
| concat_dimension | text: 512 | Concat dimension for sequence and sentence features. |
| | label: 20 | |
+---------------------------------+------------------+--------------------------------------------------------------+
| number_of_negative_examples | 20 | The number of incorrect labels. The algorithm will minimize |
| | | their similarity to the user input during training. |
+---------------------------------+------------------+--------------------------------------------------------------+
| similarity_type | "auto" | Type of similarity measure to use, either 'auto' or 'cosine' |
| | | or 'inner'. |
+---------------------------------+------------------+--------------------------------------------------------------+
| loss_type | "softmax" | The type of the loss function, either 'softmax' or 'margin'. |
+---------------------------------+------------------+--------------------------------------------------------------+
| ranking_length | 10 | Number of top actions to normalize scores for loss type |
| | | 'softmax'. Set to 0 to turn off normalization. |
+---------------------------------+------------------+--------------------------------------------------------------+
| maximum_positive_similarity | 0.8 | Indicates how similar the algorithm should try to make |
| | | embedding vectors for correct labels. |
| | | Should be 0.0 < ... < 1.0 for 'cosine' similarity type. |
+---------------------------------+------------------+--------------------------------------------------------------+
| maximum_negative_similarity | -0.4 | Maximum negative similarity for incorrect labels. |
| | | Should be -1.0 < ... < 1.0 for 'cosine' similarity type. |
+---------------------------------+------------------+--------------------------------------------------------------+
| use_maximum_negative_similarity | True | If 'True' the algorithm only minimizes maximum similarity |
| | | over incorrect intent labels, used only if 'loss_type' is |
| | | set to 'margin'. |
+---------------------------------+------------------+--------------------------------------------------------------+
| scale_loss | False | Scale loss inverse proportionally to confidence of correct |
| | | prediction. |
+---------------------------------+------------------+--------------------------------------------------------------+
| regularization_constant | 0.002 | The scale of regularization. |
+---------------------------------+------------------+--------------------------------------------------------------+
| negative_margin_scale | 0.8 | The scale of how important it is to minimize the maximum |
| | | similarity between embeddings of different labels. |
+---------------------------------+------------------+--------------------------------------------------------------+
| weight_sparsity | 0.8 | Sparsity of the weights in dense layers. |
| | | Value should be between 0 and 1. |
+---------------------------------+------------------+--------------------------------------------------------------+
| drop_rate | 0.2 | Dropout rate for encoder. Value should be between 0 and 1. |
| | | The higher the value the higher the regularization effect. |
+---------------------------------+------------------+--------------------------------------------------------------+
| drop_rate_attention | 0.0 | Dropout rate for attention. Value should be between 0 and 1. |
| | | The higher the value the higher the regularization effect. |
+---------------------------------+------------------+--------------------------------------------------------------+
| use_sparse_input_dropout | True | If 'True' apply dropout to sparse input tensors. |
+---------------------------------+------------------+--------------------------------------------------------------+
| use_dense_input_dropout | True | If 'True' apply dropout to dense input tensors. |
+---------------------------------+------------------+--------------------------------------------------------------+
| evaluate_every_number_of_epochs | 20 | How often to calculate validation accuracy. |
| | | Set to '-1' to evaluate just once at the end of training. |
+---------------------------------+------------------+--------------------------------------------------------------+
| evaluate_on_number_of_examples | 0 | How many examples to use for hold out validation set. |
| | | Large values may hurt performance, e.g. model accuracy. |
+---------------------------------+------------------+--------------------------------------------------------------+
| intent_classification | True | If 'True' intent classification is trained and intents are |
| | | predicted. |
+---------------------------------+------------------+--------------------------------------------------------------+
| entity_recognition | True | If 'True' entity recognition is trained and entities are |
| | | extracted. |
+---------------------------------+------------------+--------------------------------------------------------------+
| use_masked_language_model | False | If 'True' random tokens of the input message will be masked |
| | | and the model has to predict those tokens. It acts like a |
| | | regularizer and should help to learn a better contextual |
| | | representation of the input. |
+---------------------------------+------------------+--------------------------------------------------------------+
| tensorboard_log_directory | None | If you want to use tensorboard to visualize training |
| | | metrics, set this option to a valid output directory. You |
| | | can view the training metrics after training in tensorboard |
| | | via 'tensorboard --logdir <path-to-given-directory>'. |
+---------------------------------+------------------+--------------------------------------------------------------+
| tensorboard_log_level | "epoch" | Define when training metrics for tensorboard should be |
| | | logged. Either after every epoch ('epoch') or for every |
| | | training step ('minibatch'). |
+---------------------------------+------------------+--------------------------------------------------------------+
| featurizers | [] | List of featurizer names (alias names). Only features |
| | | coming from the listed names are used. If list is empty |
| | | all available features are used. |
+---------------------------------+------------------+--------------------------------------------------------------+
.. note:: For ``cosine`` similarity ``maximum_positive_similarity`` and ``maximum_negative_similarity`` should
be between ``-1`` and ``1``.
.. note:: There is an option to use linearly increasing batch size. The idea comes from
`<https://arxiv.org/abs/1711.00489>`_.
In order to do it pass a list to ``batch_size``, e.g. ``"batch_size": [64, 256]`` (default behavior).
If constant ``batch_size`` is required, pass an ``int``, e.g. ``"batch_size": 64``.
.. note:: Parameter ``maximum_negative_similarity`` is set to a negative value to mimic the original
starspace algorithm in the case ``maximum_negative_similarity = maximum_positive_similarity``
and ``use_maximum_negative_similarity = False``.
See `starspace paper <https://arxiv.org/abs/1709.03856>`_ for details.
.. source: docs/modules/ae_automation.dal.oracle.conan.rst (arrayexpress/ae_auto)

conan Package
=============

:mod:`conan` Package
--------------------

.. automodule:: ae_automation.dal.oracle.conan
    :members:
    :undoc-members:
    :show-inheritance:

:mod:`conan_tasks` Module
-------------------------

.. automodule:: ae_automation.dal.oracle.conan.conan_tasks
    :members:
    :undoc-members:
    :show-inheritance:

:mod:`conan_transaction` Module
-------------------------------

.. automodule:: ae_automation.dal.oracle.conan.conan_transaction
    :members:
    :undoc-members:
    :show-inheritance:

:mod:`conan_users` Module
-------------------------

.. automodule:: ae_automation.dal.oracle.conan.conan_users
    :members:
    :undoc-members:
    :show-inheritance:
.. source: docs/overview.rst (kashopi/lymph)

Overview
========

Terms
~~~~~

.. glossary::

    service interface
        A collection of rpc methods and event listeners that are exposed by a service container.
        Interfaces are implemented as subclasses of :class:`lymph.Interface`.

    service container
        A service container manages rpc and event connections, service discovery, logging, and configuration
        for one or more service interfaces. There is one container per service instance.
        Containers are :class:`ServiceContainer <lymph.core.container.ServiceContainer>` objects.

    service instance
        A single process that runs a service container.
        It is usually created from the commandline with :ref:`lymph instance <cli-lymph-instance>`.
        Each instance is assigned a unique identifier called the *instance identity*.
        Instances are described by :class:`ServiceInstance <lymph.core.services.ServiceInstance>` objects.

    service
        The set of all service instances that expose a common service interface is called a service.
        Though uncommon, instances may be part of more than one service.
        Services are described by :class:`Service <lymph.core.services.Service>` objects.

    node
        A process monitor that runs service instances. You'd typically run one per machine.
        A node is started from the commandline with :ref:`lymph node <cli-lymph-node>`.
.. source: code/contributors.rst (f5devcentral/f5-tls-automation)

Contributions
=============
Amazing contributions_ from:
(Alphabetical)
- Jon Calalang (jmcalalang_)
- Aaron Laws
- Michael OLeary (mikeoleary_)
- Vladimir Bojkovic (vbojko_)
We welcome all feedback; please open an Issue_ describing what's going on.
Cheers,
The Team
.. _contributions: https://github.com/f5devcentral/f5-tls-automation/graphs/contributors
.. _Issue: https://github.com/f5devcentral/f5-tls-automation/issues
.. _jmcalalang: https://www.github.com/jmcalalang
.. _mikeoleary: https://github.com/mikeoleary
.. _vbojko: https://github.com/vbojko
.. source: README.rst (tylernorth/public-transit)

###################
Public Transit API
###################
Implements functionality in:
- `NextBus XML Feed <http://www.nextbus.com/xmlFeedDocs/NextBusXMLFeed.pdf>`_
- `BART API <http://api.bart.gov/docs/overview/index.aspx>`_
- `AC Transit API <https://www.actransit.org/data-api-resource-center>`_
=======
Install
=======
.. code::

    git clone https://github.com/tnoff/public-transit.git
    pip install public-transit/
====================
Command Line Scripts
====================
There will be 4 command line scripts installed:
- actransit
- bart
- nextbus
- trip-planner
Actransit, Bart, and Nextbus are used for actions specific to their APIs.
Trip planner is a wrapper I created to track common routes and easily display them.
All of the CLIs have man pages that detail their use.
=========
API Usage
=========
You can use the Python API for actransit, bart, and nextbus data as well.
---------
Actransit
---------
From the help page.
.. code::

    >>> import transit
    >>> help(transit.actransit)
----
Bart
----
From the help page.
.. code::

    >>> import transit
    >>> help(transit.bart)
-------
Nextbus
-------
From the help page.
.. code::

    >>> import transit
    >>> help(transit.nextbus)
============
Trip Planner
============
Trip planner was a small tool I wrote after realizing that 99% of the time I use these APIs, I'm
looking up the same stops/routes. Trip planner will let you create routes that will be stored
in a database, so they can be easily retrieved and used.
Here's a brief example of how it's used::
    $ trip-planner leg-create bart mont --destinations frmt
    {
        "stop_id": "mont",
        "stop_title": "Montgomery St.",
        "agency": "bart",
        "stop_tag": null,
        "includes": [
            "frmt"
        ]
    }
    $ trip-planner trip-show 2
    Agency bart
    Stop | Destination | Times (Seconds)
    --------------------------------------------------------------------------------
    Concord | SF Airport | 2640
    ================================================================================
================================================================================
The CLI for Trip planner has a man page that can explain more of the functionality.
One note: The 'destinations' specified when creating a leg correspond to:
- The last station of the bart route, such as "DUBL" (Dublin/Pleasanton) or "FRMT" (Fremont)
- The route you will board at the nextbus stop, such as the "M" line on sf-muni.
=====
Tests
=====
Tests require extra pip modules to be installed; they are listed in the ``tests/requirements.txt`` file.
.. source: docs_source/index.rst (wilsonify/sampyl)

.. Sampyl documentation master file, created by
   sphinx-quickstart on Thu Aug 6 23:09:13 2015.

Sampyl: MCMC samplers in Python
===============================

Release v\ |version|

Sampyl is a Python library implementing Markov Chain Monte Carlo (MCMC) samplers
in Python. It's designed for use in Bayesian parameter estimation and provides a collection of distribution log-likelihoods for use in constructing models.

Our goal with Sampyl is to allow users to define models completely with Python and
common packages like Numpy. Other MCMC packages require learning new syntax and
semantics, while all that is really needed is a function that calculates :math:`\log{P(X)}`
for the sampling distribution.

Sampyl allows the user to define a model any way they want; all that is required
is a function that calculates log P(X). This function can be written completely
in Python, written in C/C++ and wrapped with Python, or anything else a user can
think of. For samplers that require the gradient of P(X), such as :ref:`NUTS <nuts>`,
Sampyl can calculate the gradients automatically with autograd_.

.. _autograd: https://github.com/HIPS/autograd/
To show you how simple this can be, let's sample from a 2D correlated normal distribution. ::
    # To use automatic gradient calculations, use numpy (np) provided
    # by autograd through Sampyl
    import sampyl as smp
    from sampyl import np
    import seaborn

    icov = np.linalg.inv(np.array([[1., .8], [.8, 1.]]))

    def logp(x, y):
        d = np.array([x, y])
        return -.5 * np.dot(np.dot(d, icov), d)

    start = {'x': 1., 'y': 1.}
    nuts = smp.NUTS(logp, start)
    chain = nuts.sample(1000)

    seaborn.jointplot(chain.x, chain.y, stat_func=None)
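For samplers that need the gradient, like NUTS in the example above, autograd computes it automatically. As a sanity check, here is the same gradient written out by hand in plain numpy and verified with a finite difference (an illustration only, not Sampyl's internals):

```python
import numpy as np

icov = np.linalg.inv(np.array([[1., .8], [.8, 1.]]))

def logp(d):
    return -.5 * d @ icov @ d

def grad_logp(d):
    # gradient of the quadratic form -0.5 * d' C^-1 d is -C^-1 d
    return -icov @ d

# verify against a central finite difference
d = np.array([1.0, -0.5])
eps = 1e-6
numeric = np.array([(logp(d + eps * e) - logp(d - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
assert np.allclose(numeric, grad_logp(d), atol=1e-5)
```

This is exactly the kind of derivative autograd produces for you, which is why defining ``logp`` with autograd-compatible numpy calls is the only requirement.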
.. image:: _static/normal_example.png
   :align: center

Start here
----------

.. toctree::
   :maxdepth: 2

   introduction
   tutorial

Examples
--------

.. toctree::
   :maxdepth: 2

   examples

API
---

.. toctree::
   :maxdepth: 2

   distributions
   model
   samplers
   state

Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. source: readme.rst (Godley/MusIc-Parser)

=======================
MuseParse: Music Parser
=======================
.. image:: https://travis-ci.org/Godley/MuseParse.svg?branch=master
    :target: https://travis-ci.org/Godley/MuseParse

.. image:: https://codeclimate.com/github/Godley/MuseParse/badges/gpa.svg
    :target: https://codeclimate.com/github/Godley/MuseParse
    :alt: Code Climate

.. image:: https://codeclimate.com/github/Godley/MuseParse/badges/coverage.svg
    :target: https://codeclimate.com/github/Godley/MuseParse/coverage
    :alt: Test Coverage

.. image:: https://codeclimate.com/github/Godley/MuseParse/badges/issue_count.svg
    :target: https://codeclimate.com/github/Godley/MuseParse
    :alt: Issue Count
Repository for a python music parser. This works with MusicXML as the input format, which is parsed into a tree of objects in memory representing the piece. This can optionally be output to lilypond, which produces a PDF, or perused for your own uses. All classes are intentionally loosely coupled, so if you would like to put in another input or output format, please do suggest it in issues and, if you want, work on it yourself. For now, MusicXML is a fairly standard format.
Written for python 3 only, python 2.7 support may come later but I'm not intending on doing that unless everything else is done.
Tested against Mac OSX Yosemite, GNU / Linux Ubuntu 14.04 Desktop and Windows 8.1 64 bit.
Originally written as part of my Final Year Project (or dissertation project) at university. I earned 93% on this along with an application of this section, so you'd hope it was good.
============
Installation
============
The current version is on pypi, so to get it you can just run:
.. code-block:: bash

    pip3 install MuseParse
Otherwise clone this repo and run these commands from inside the main folder:
.. code-block:: bash

    python3 setup.py build
    python3 setup.py install
To use the lilypond rendering portion, you will need to install lilypond from http://lilypond.org.
Please note, Linux users, that whilst lilypond is on apt-get, this library expects version 1.18, whilst currently apt-get only has 1.14, so I would advise downloading from the website rather than using apt-get.
============
Usage
============
****************
Setting up
****************
To aid the process of setting up lilypond, a helper is provided which does the environment variable set up so that you can run lilypond from commandline without modifying the variables yourself. The following code provides an example:
.. code-block:: python

    from MuseParse.classes.Output.helpers import setupLilypondClean as setupLilypond
    import os

    default_path_to_lily = 'path/to/lilypond/install/bin'
    setupLilypond(default_path_to_lily)
    os.system('lilypond')
Assuming you provided the right path, you should see the default help text printed to STDOUT after ``os.system`` is run. Various assumed paths for different operating systems are provided on the `lilypond install instructions page`_.

.. _lilypond install instructions page: http://lilypond.org/download.html
****************
Parsing music
****************
You can parse music from an xml file using the following code:
.. code-block:: python

    from MuseParse.classes.Input import MxmlParser

    parser = MxmlParser.MxmlParser()
    object_hierarchy = parser.parse(filename)
This will return a hierarchy of objects - please view the docs(link below) for more information on the objects in this hierarchy.
********************
Outputting to PDF
********************
To send it to lilypond:
.. code-block:: python

    from MuseParse.classes.Output import LilypondOutput

    render_obj = LilypondOutput.LilypondRenderer(object_hierarchy, filename)
    render_obj.run()
To provide the lilypond runner class with your own lilypond script (see the http://lilypond.org installation page for more information on this):
.. code-block:: python

    from MuseParse.classes.Output import LilypondOutput

    render_obj = LilypondOutput.LilypondRenderer(
        object_hierarchy, filename, lyscript="path/to/script")
    render_obj.run()
2 example scripts, 1 for OSX and 1 for Windows 8.1, are provided in ``MuseParse/demo/lilypond_scripts``. If no script is provided it will use the default for that platform. Linux users do not need to provide a script in any circumstance, so long as lilypond is already installed.
Demo python scripts of things you could do with this are located in ``MuseParse/demo``.
=============
Documentation
=============

Please see `MuseParse @ docs.charlottegodley.co.uk`_
for the documentation of each class in this library, and do let me know if it could be improved or submit a pull request.

.. _MuseParse @ docs.charlottegodley.co.uk: http://docs.charlottegodley.co.uk/MuseParse
.. source: docs/source/readme.rst (harlantwood/js-bigchaindb-driver2)

BigchainDB JavaScript Driver
============================
.. image:: https://img.shields.io/npm/v/bigchaindb-driver.svg
   :target: https://www.npmjs.com/package/bigchaindb-driver

.. image:: https://codecov.io/gh/bigchaindb/js-bigchaindb-driver/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/bigchaindb/js-bigchaindb-driver

.. image:: https://img.shields.io/badge/js-ascribe-39BA91.svg
   :target: https://github.com/ascribe/javascript

.. image:: https://travis-ci.org/bigchaindb/js-bigchaindb-driver.svg?branch=master
   :target: https://travis-ci.org/bigchaindb/js-bigchaindb-driver

.. image:: https://badges.greenkeeper.io/bigchaindb/js-bigchaindb-driver.svg
   :target: https://greenkeeper.io/
Features
--------
* Support for preparing, fulfilling, and sending transactions to a BigchainDB
  node.
* Retrieval of transactions by id.
* Getting status of a transaction by id.
Compatibility Matrix
--------------------
+-----------------------+----------------------------------+
| **BigchainDB Server** | **BigchainDB Javascript Driver** |
+=======================+==================================+
| ``0.10``              | ``0.1.x``                        |
+-----------------------+----------------------------------+
| ``1.0``               | ``0.3.x``                        |
+-----------------------+----------------------------------+
| ``1.3``               | ``3.x.x``                        |
+-----------------------+----------------------------------+
| ``2.0``               | ``4.x.x``                        |
+-----------------------+----------------------------------+
Older versions
--------------------
Versions 4.x.x
~~~~~~~~~~~~~~
As part of the changes in the BigchainDB 2.0 server, some endpoints were
modified. To be consistent with them, the JS driver no longer has the
`pollStatusAndFetchTransaction()` method; instead, there are three
different ways of posting a transaction.
- `async` using the `postTransaction`: the response will return immediately and not wait to see if the transaction is valid.
- `sync` using the `postTransactionSync`: the response will return after the transaction is validated.
- `commit` using the `postTransactionCommit`: the response will return after the transaction is committed to a block.
By default, the docs use `postTransactionCommit`, as it is a way of being
sure that the transaction is validated and committed to a block, so there
will not be any issue if you try to transfer the asset immediately.
Versions 3.2.x
~~~~~~~~~~~~~~
For versions below 3.2, a transfer transaction looked like:
.. code-block:: js

    const createTransfer = BigchainDB.Transaction.makeTransferTransaction(
        txCreated,
        metadata, [BigchainDB.Transaction.makeOutput(
            BigchainDB.Transaction.makeEd25519Condition(alice.publicKey))],
        0
    )

    const signedTransfer = BigchainDB.Transaction.signTransaction(createTransfer,
        keypair.privateKey)
To upgrade and make this compatible with the new driver version, the
transaction should now be:
.. code-block:: js

    const createTransfer = BigchainDB.Transaction.makeTransferTransaction(
        [{ tx: txCreated, output_index: 0 }],
        [BigchainDB.Transaction.makeOutput(
            BigchainDB.Transaction.makeEd25519Condition(alice.publicKey))],
        metaData
    )

    const signedTransfer = BigchainDB.Transaction.signTransaction(createTransfer,
        keypair.privateKey)
The upgrade makes it possible to create transfer transactions that spend
outputs belonging to different transactions. For instance, it is now possible
to create a transfer transaction spending two outputs from two different
create transactions:
.. code-block:: js

    const createTransfer = BigchainDB.Transaction.makeTransferTransaction(
        [{ tx: txCreated1, output_index: 0 },
         { tx: txCreated2, output_index: 0 }],
        [BigchainDB.Transaction.makeOutput(
            BigchainDB.Transaction.makeEd25519Condition(alice.publicKey))],
        metaData
    )

    const signedTransfer = BigchainDB.Transaction.signTransaction(createTransfer,
        keypair.privateKey)
Text Blocks
=======================
This tutorial is an introduction to the drag-and-drop text blocks of the
board, covering how to use them along with basic example programs.

*TurnipBit text blocks*
.. toctree::
   :maxdepth: 1

   text/new.rst
   text/str.rst
   text/str1.rst
   text/len.rst
   text/notlen.rst
   text/text.find.rst
   text/text0.rst
   text/text1.rst
   text/upper.rst
   text/strip.rst
   text/print.rst
4. Digital skills of citizens
====================================
*Include everyone, leave no one behind*
.. toctree::
   :maxdepth: 3
   :caption: Table of contents

   competenze-digitali-dei-cittadini/la-situazione-attuale-3.rst
   competenze-digitali-dei-cittadini/iniziative-in-corso-3.rst
   competenze-digitali-dei-cittadini/priorità-e-linee-di-intervento-3.rst
   competenze-digitali-dei-cittadini/impatto-e-indicatori-3.rst
   competenze-digitali-dei-cittadini/quadro-dinsieme-3.rst
Dashboard
#########
Description
===========
The dashboard is a web-based project that integrates all of the ROS nodes and gives us a centralized operation center.
You can subscribe to any ROS topic to see what is being sent on it, and you can also publish information to topics.
Its main goal is to allow us to verify that the whole DEVINE system works in harmony.
It can also be used to demo the project.
Usage
=====
Once the project is installed on your machine, you can simply launch the dashboard like so:
.. code-block:: bash

   $ roslaunch devine devine.launch launch_all:=false dashboard:=true
The process will listen and update whenever there is a change in the code.
Manual installation
===================
.. code-block:: bash

   $ sudo npm i -g webpack
   $ npm install
   $ pip3 install -r requirements.txt
   $ sudo apt-get install ros-kinetic-rosbridge-server
Adding a view
=============
Create an html layout for your view, e.g. `views/myview.html`, or reuse one similar to yours.
`include` it in `views/index.html`, keep the class attributes `uk-width-expand` and `command-view`, and change the name attribute.
.. code-block:: html

   <div class="uk-width-expand command-view" name="myview" hidden>
     {% include 'myview.html' %}
   </div>
Add it to the menu with a class attribute matching the name you used previously.
.. code-block:: html

   <li class="command-myview command-menu">My view</li>
Code your view in its own file (`src/myview.js`) and import it in `src/app.js`.
==================================================
Tutorial - Connector for DBLP
==================================================
.. toctree::
:maxdepth: 2
Overview
========
Connector is a component in the DataPrep library that aims to simplify the data access by providing a standard API set.
The goal is to help users skip the complex API configuration. In this tutorial, we demonstrate how to use the connector library with DBLP.
Preprocessing
================
If you haven't installed DataPrep, run the command ``pip install dataprep`` or execute the following cell.

::

    !pip install dataprep
Download and store the configuration files in DataPrep
================================================================
The configuration files are used to construct the parameters and initial setup for the API. The available configuration files can be manually downloaded here: `Configuration Files
<https://github.com/sfu-db/DataConnectorConfigs>`_ or automatically downloaded at usage.
To automatically download at usage, click on the clipboard button, ensuring you are cloning with HTTPS. Go into your terminal and find an appropriate location to store the configuration files.
When you have decided on a location, enter the command ``git clone https://github.com/sfu-db/DataConnectorConfigs.git``. This will clone the git repository to the desired location; as a suggestion, store it with the DataPrep folder.
From here you can proceed with the next steps.
.. image:: ../../_static/images/tutorial/dc_git.png
   :align: center
   :width: 1000
   :height: 500

.. image:: ../../_static/images/tutorial/dc_git_clone.png
   :align: center
   :width: 725
   :height: 125

Below, the configuration files are stored with the DataPrep folder.

.. image:: ../../_static/images/tutorial/Config_destination.png
   :align: center
   :width: 586
   :height: 132
Connector.info
------------------
| The info method gives information and guidelines on using the connector. There are 4 sections in the response and they are table, parameters, example and schema.
|
| a. Table - The table(s) being accessed.
| b. Parameters - Identifies which parameters can be used to call the method. For DBLP, the required parameter is **q**.
| c. Examples - Shows how you can call the methods in the Connector class.
| d. Schema - Names and data types of attributes in the response.
::

    from dataprep.connector import connect, info

    info('./DataConnectorConfigs/DBLP')
.. image:: ../../_static/images/tutorial/dc_dblp_info.png
   :align: center
   :width: 437
   :height: 536
After a connector object has been initialized (see how below), info can also be called using the object::

    dc.info()
Parameters
**********************
| A parameter is a piece of information you supply to a query right as you run it. The parameters for DBLP's publication query can either be required or optional. The required parameter is **q** while the optional parameters are **h** and **f**. The parameters are described below.
|
| a. **q** - Required - The query string to search for find author profiles, conferences, journals, or individual publications in the database.
| b. **h** - Optional - Maximum number of search results (hits) to return.
| c. **f** - Optional - The first hit in the numbered sequence of search results (starting with 0) to return. In combination with the h parameter, this parameter can be used for pagination of search results.
There are additional parameters for querying DBLP. If you are interested in reading up on the other
available parameters and setting up your own config files, please read this `DBLP link
<https://dblp.uni-trier.de/faq/13501473>`_ and this `Configuration Files link
<https://github.com/sfu-db/DataConnectorConfigs>`_.
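For intuition about what these parameters do, the sketch below builds a DBLP search URL by hand with only the standard library. The endpoint and the ``format`` field are assumptions based on DBLP's public search API, not something DataPrep requires:

```python
from urllib.parse import urlencode


def dblp_publication_url(q, h=None, f=None, fmt="json"):
    """Build a DBLP publication-search URL from the q/h/f parameters.

    The endpoint is an assumption based on DBLP's public API docs.
    """
    params = {"q": q, "format": fmt}
    if h is not None:
        params["h"] = h  # maximum number of hits to return
    if f is not None:
        params["f"] = f  # zero-based index of the first hit (pagination)
    return "https://dblp.org/search/publ/api?" + urlencode(params)


print(dblp_publication_url("lee", h=30, f=0))
```

DataPrep's config files describe exactly this kind of URL construction, so the connector can build the request for you.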
Initialize connector
=============================
To initialize, run the following code.
::

    dc = connect("./DataConnectorConfigs/DBLP")
Connector.query
------------------
The query method downloads the website data. The parameters must meet the requirements indicated in connector.info for the operation to run.
When the data is received from the server, it will be in either JSON or XML format. The connector reformats the data into a pandas DataFrame for the convenience of downstream operations.
As an example, let's try to get the data from the "publication" table, providing the query search for "lee".
::

    dc.query("publication", q="lee")
.. image:: ../../_static/images/tutorial/dc_dblp_query.png
   :align: center
   :width: 1000
   :height: 500
From the query results, you can see how easy it is to download the publication data from DBLP into a pandas DataFrame.
Now that you have an understanding of how connector operates, you can easily accomplish the task with two lines of code.
::

    dc = connect(...)
    dc.query(...)
Pagination
===================
| Another feature available in the config files is pagination. Pagination is the process of dividing a document into discrete pages, breaking the content into pages and allowing visitors to switch between them. In the connector, it lets you request a chosen total number of search results.
|
| To use pagination, you need to include **_count** in your query. The **_count** parameter represents the number of records a user would like to return, which can be larger than the maximum number of records a single API call returns. Users can also fetch multiple pages of records by using parameters like limit and offset; however, this requires users to understand how pagination works across different website APIs.
|
::

    dc.query("publication", q="lee", _count=200)
.. image:: ../../_static/images/tutorial/dc_dblp_pagination.png
   :align: center
   :width: 1000
   :height: 500
Pagination does not work concurrently with the **h** parameter in a query; you need to select either **h** or **_count**.
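Conceptually, a large ``_count`` amounts to repeated requests that advance the first-hit offset, the way **f** and **h** combine. A minimal, library-independent sketch of that loop (``fetch_page`` is a hypothetical callback standing in for a single API call):

```python
def paginate(fetch_page, total, page_size):
    """Collect up to `total` records by calling fetch_page(first, count) repeatedly.

    fetch_page(first, count) should return a list of at most `count` records,
    starting at zero-based offset `first` (mirroring DBLP's f/h parameters).
    """
    records = []
    first = 0
    while len(records) < total:
        batch = fetch_page(first, min(page_size, total - len(records)))
        if not batch:  # no more results available on the server
            break
        records.extend(batch)
        first += len(batch)
    return records
```

The connector hides this loop behind ``_count``, which is why ``_count`` and ``h`` cannot be combined.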
All publications of one specific author
=========================================================
| In the query, **q** is a generic search parameter that finds author profiles, conferences, journals, or individual publications in the database. As a parameter, **q** is not great when trying to find specific authors and their work. To solve this issue, you can query by the author's first and last name.
|
| To fetch all publications of one specific author, you need to include **first_name="______"**, **last_name="______"** in your query.
::

    dc.query("publication", first_name="Jeff", last_name="Hawkins")
.. image:: ../../_static/images/tutorial/dc_dblp_author.png
   :align: center
   :width: 1000
   :height: 500
That's all for now.
===================
Please visit the other tutorials that are available if you are interested in setting up a different connector.
If you are interested in writing your own configuration file or modifying an existing one, refer to the `Configuration Files
<https://github.com/sfu-db/DataConnectorConfigs>`_.
Troubleshooting
===============
This page is an attempt to keep track of common errors and instructions for how to fix them. If you encounter a bug not listed below, `fork ares on bitbucket <https://bitbucket.org/mirochaj/ares/fork>`_ and issue a pull request to contribute your patch, if you have one. Otherwise, shoot me an email and I can try to help. It would be useful if you can send me the dictionary of parameters for a particular calculation. For example, if you ran a global 21-cm calculation via
::

    import ares

    pars = {'parameter_1': 1e6, 'parameter_2': 2} # or whatever

    sim = ares.simulations.Global21cm(**pars)
    sim.run()
and you get weird or erroneous results, pickle the parameters:
::

    import pickle

    f = open('problematic_model.pkl', 'wb')
    pickle.dump(pars, f)
    f.close()
and send them to me. Thanks!
.. note :: If you've got a set of problematic models that you encountered
   while running a model grid or some such thing, check out the section
   on "problem realizations" in :doc:`example_grid_analysis`.
Plots not showing up
--------------------
If when running some *ARES* script the program runs to completion without errors but does not produce a figure, it may be due to your matplotlib settings. Most test scripts use ``draw`` to ultimately produce the figure because it is non-blocking and thus allows you to continue tinkering with the output if you'd like. One of two things is going on:
* You invoked the script with the standard Python interpreter (i.e., **not** iPython). Try running it with iPython, which will spit you back into an interactive session once the script is done, and thus keep the plot window open.
* Alternatively, your default ``matplotlib`` settings may have caused this. Check out your ``matplotlibrc`` file (in ``$HOME/.matplotlibrc``) and make sure ``interactive : True``.
Future versions of *ARES* may use blocking commands to ensure that plot windows don't disappear immediately. Email me if you have strong opinions about this.
``IOError: No such file or directory``
--------------------------------------
There are a few different places in the code that will attempt to read-in lookup tables of various sorts. If you get any error that suggests a required input file has not been found, you should:
- Make sure you have set the ``$ARES`` environment variable. See the :doc:`install` page for instructions.
- Make sure the required file is where it should be, i.e., nested under ``$ARES/input``.
In the event that a required file is missing, something has gone wrong. Run ``python remote.py fresh`` to download new copies of all files.
``LinAlgError: singular matrix``
--------------------------------
This is known to occur in ``ares.physics.Hydrogen`` when using ``scipy.interpolate.interp1d`` to compute the collisional coupling coefficients for spin-exchange. It is due to a bug in LAPACK version 3.4.2 (see `this thread <https://github.com/scipy/scipy/issues/3868>`_). One solution is to install a newer version of LAPACK. Alternatively, you could use linear interpolation, instead of a spline, by passing ``interp_cc='linear'`` as a keyword argument to whatever class you're instantiating, or more permanently by adding ``interp_cc='linear'`` to your custom defaults file (see :doc:`params` section for instructions).
21-cm Extrema-Finding Not Working
---------------------------------
If the derivative of the signal is noisy (due to numerical artifacts, for example) then the extrema-finding can fail. If you can visually see three extrema in the global 21-cm signal but they are either absent or crazy in ``ares.simulations.Global21cm.turning_points``, then this might be going on. Try setting the ``smooth_derivative`` parameter to a value of 0.1 or 0.2. This parameter will smooth the derivative with a boxcar of width :math:`\Delta z=` ``smooth_derivative`` before performing the extrema finding. Let me know if this happens (and under what circumstances), as it would be better to eliminate numerical artifacts than to smooth them out after the fact.
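For intuition, the smoothing is just a moving average (boxcar) applied to the derivative before the extrema search. A small illustrative NumPy sketch (in *ARES* itself you only set ``smooth_derivative``, whose width is specified in redshift, not in samples):

```python
import numpy as np


def boxcar_smooth(y, width):
    """Smooth a 1-D signal with a boxcar (moving average) of `width` samples."""
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")


# A smoothed derivative has fewer spurious sign changes, so fewer
# numerical artifacts get picked up as extrema.
```

This is only a sketch of the idea; eliminating the numerical artifacts at their source is still preferable to smoothing them away.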
``AttributeError: No attribute blobs.``
---------------------------------------
This is a bit of a red herring. If you're running an MCMC fit and saving 2-D blobs, which always require you to pass the name of the function, this error occurs if you supply a function that does not exist. Check for typos and/or that the function exists where it should.
``TypeError: __init__() got an unexpected keyword argument 'assume_sorted'``
----------------------------------------------------------------------------
Turns out this parameter didn't exist prior to scipy version 0.14. If you update to scipy version >= 0.14, you should be set. If you're worried that upgrading scipy might break other codes of yours, you can also simply navigate to ``ares/physics/Hydrogen.py`` and delete each occurrence of ``assume_sorted=True``, which should have no real effect (except for perhaps a very slight slowdown).
``Failed to interpret file '<some-file>.npz' as a pickle``
----------------------------------------------------------
This is a strange one, which might arise due to differences in the Python and/or pickle version used to read/write lookup tables *ARES* uses. First, try to download new lookup tables via: ::
    python remote.py fresh
If that doesn't magically fix it, please email me and I'll do what I can to help!
``ERROR: Cannot generate halo mass function``
---------------------------------------------
This error generally occurs because lookup tables for the halo mass function are not being found, and when that happens, *ARES* tries to make new tables. This process is slow and so is not recommended! Instead you should check that (i) you have correctly set the $ARES environment variable and (ii) that you have run the ``remote.py`` script (see :doc:`install`), which downloads the default HMF lookup table. If you have recently pulled changes, you may need to re-run ``remote.py`` since, e.g., the default HMF parameters may have been changed and corresponding tables may have been updated on the web. To save time, you can specify that you only want new HMF tables by executing ``python remote.py fresh hmf``.
General Mysteriousness
----------------------
- If you're running *ARES* from within an iPython (or Jupyter) notebook, be wary of initializing class instances in one notebook cell and modifying attributes in a separate cell. If you re-run the the second cell *without* re-running the first cell, this can cause problems because changes to attributes will not automatically propagate back up to any parent classes (should they exist). This is known to happen (at least) when using the ``ModelGrid`` and ``ModelSamples`` classes in the inference sub-module.
odap.Aerodynamics.scale\_height
===============================
.. currentmodule:: odap.Aerodynamics
.. autofunction:: scale_height
==============
qualpay-python
==============
Python_ bindings for Qualpay_.
:Author: `Derek Payton`_
:Version: 1.0.0
:License: `MIT`_
:Source: `github.com/dmpayton/qualpay-python <https://github.com/dmpayton/qualpay-python>`_
:Docs: `qualpay-python.readthedocs.org <https://qualpay-python.readthedocs.org/>`_
Contents:
.. toctree::
   :maxdepth: 2

   manual/getting-started
   manual/gateway
   manual/cards
   manual/contributing
   manual/ref
:download:`Payment Gateway Specification v1.2 </_static/Payment_Gateway_Specification_V1.2.pdf>`
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _Python: https://www.python.org/
.. _Qualpay: https://www.qualpay.com/
.. _Derek Payton: http://dmpayton.com
.. _MIT: https://github.com/dmpayton/qualpay-python/blob/master/LICENSE
GitHub issues for BPO users
===========================
Here are some frequently asked questions about how to do things in
GitHub issues that you used to be able to do on `bpo`_.
Before you ask your own question, make sure you read :doc:`tracker`
and :doc:`triaging` (specifically including :doc:`gh-labels`) as those
pages include a lot of introductory material.
How to format my comments nicely?
---------------------------------
There is a wonderful `beginner guide to writing and formatting on GitHub
<https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github>`_.
Highly recommended.
One pro-tip we can sell you right here is that if you want to paste
some longer log as a comment, attach a file instead (see how below).
If you still insist on pasting it in your comment, do it like this::
<details>
<summary>This is the summary text, click me to expand</summary>
Here goes the long, long text.
It will be collapsed by default!
</details>
How to attach files to an issue?
--------------------------------
Drag them into the comment field, wait until the file uploads, and GitHub
will automatically put a link to your file in your comment text.
How to link to file paths in the repository when writing comments?
------------------------------------------------------------------
Use Markdown links. If you link to the default GitHub path, the file
will link to the latest current version on the given branch.
You can get a permanent link to a given revision of a given file by
`pressing "y" <https://docs.github.com/en/repositories/working-with-files/using-files/getting-permanent-links-to-files>`_.
How to do advanced searches?
----------------------------
Use the `GitHub search syntax`_ or the interactive `advanced search`_ form
that generates search queries for you.
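For example, a filter like the following (illustrative; label names vary by repository) finds open issues carrying a given label, newest first::

    is:issue is:open label:bug sort:created-desc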
Where is the "nosy list"?
-------------------------
Subscribe another person to the issue by tagging them in the comment with
``@username``.
If you want to subscribe yourself to an issue, click the *🔔 Subscribe*
button in the sidebar.
Similarly, if you were tagged by somebody else but
decided this issue is not for you, you might click the *🔕 Unsubscribe*
button in the sidebar.
There is no exact equivalent of the "nosy list" feature, so to preserve
this information during the transfer, we list the previous members of
this list in the first message on the migrated issue.
How to add issue dependencies?
------------------------------
Add a checkbox list like this in the issue description::
    - [x] #739
    - [ ] https://github.com/octo-org/octo-repo/issues/740
    - [ ] Add delight to the experience when all tasks are complete :tada:
then those will become sub-tasks on the given issue. Moreover, GitHub will
automatically mark a task as complete if the other referenced issue is
closed. More details in the `official GitHub documentation
<https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists>`_.
What on Earth is a "mannequin"?
-------------------------------
For issues migrated to GitHub from `bpo`_ where the authors or commenters
are not core developers, we opted not to link to their GitHub accounts
directly. Users not in the `python organization on GitHub
<https://github.com/orgs/python/people>`_ might not like comments to
appear under their name from an automated import. Others never linked GitHub on
`bpo`_ in the first place so linking their account, if any, would be impossible.
In those cases a "mannequin" account is present to help follow the conversation
that happened in the issue. In case the user did share their GitHub account
name in their `bpo`_ profile, we use that. Otherwise, their classic `bpo`_
username is used instead.
Where did the "Resolution" field go?
------------------------------------
Based on historical data we found it not being used very often.
Where did the "Low", "High", and "Critical" priorities go?
----------------------------------------------------------
Based on historical data we found those not being used very often.
How to find a random issue?
---------------------------
This is not supported by GitHub.
Where are regression labels?
----------------------------
We rarely updated this information and it turned out not to be
particularly useful outside of the change log.
.. _bpo: https://bugs.python.org/
.. _GitHub search syntax: https://docs.github.com/en/search-github/getting-started-with-searching-on-github/understanding-the-search-syntax
.. _advanced search: https://github.com/search/advanced
Django Category
===============
**Simple category app providing category and tag models.**
.. image:: https://travis-ci.org/praekelt/django-category.svg
   :target: https://travis-ci.org/praekelt/django-category
   :alt: Travis

.. image:: https://coveralls.io/repos/github/praekelt/django-category/badge.svg?branch=develop
   :target: https://coveralls.io/github/praekelt/django-category?branch=develop
   :alt: Coveralls

.. image:: https://badge.fury.io/py/django-category.svg
   :target: https://badge.fury.io/py/django-category
   :alt: Release
.. contents:: Contents
   :depth: 5
Requirements
------------
#. Python 2.7, 3.5-3.7
#. Django 1.11, 2.0, 2.1
Installation
------------
#. Install or add ``django-category`` to your Python path.
#. Add ``category`` to your ``INSTALLED_APPS`` setting.
#. This package uses django's internal sites framework. Add ``django.contrib.sites`` to your ``INSTALLED_APPS``
   setting and include the required ``SITE_ID = 1`` (or similar). The official docs can be found here:
   https://docs.djangoproject.com/en/2.1/ref/contrib/sites/.
#. Optional: ``django-object-tools`` provides a category tree view. See https://github.com/praekelt/django-object-tools
   for installation instructions.
Usage
-----
Enable categorization and/or tagging on a model by creating ``ManyToMany`` fields to the models provided by ``django-category``, for example::
from django import models
class MyModel(models.Model):
categories = models.ManyToManyField(
'category.Category',
help_text='Categorize this item.'
)
tags = models.ManyToManyField(
'category.Tag',
help_text='Tag this item.'
)
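For illustration, here are a couple of hypothetical queries against the ``MyModel`` definition above (they assume a configured Django project and the default related managers; the slug and tag values are made up)::

    # Items categorized under the category whose slug is "news":
    news_items = MyModel.objects.filter(categories__slug="news")

    # Items carrying either of two tags; distinct() avoids duplicate rows
    # when an item matches more than one tag:
    tagged_items = MyModel.objects.filter(
        tags__title__in=["python", "django"]
    ).distinct()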
Models
------
class Category
~~~~~~~~~~~~~~
Category model to be used for categorization of content. Categories are high level constructs to be used for grouping and organizing content, thus creating a site's table of contents.
Category.title
++++++++++++++
Short descriptive title for the category to be used for display.
Category.subtitle
+++++++++++++++++
Some titles may be identical and cause confusion in the admin UI. A subtitle helps to distinguish between them.
Category.slug
+++++++++++++
Short descriptive unique name to be used in urls.
Category.parent
+++++++++++++++
Optional parent to allow nesting of categories.
Category.sites
++++++++++++++
Limits category scope to selected sites.
class Tag
~~~~~~~~~
Tag model to be used for tagging content. Tags are to be used to describe your content in more detail, in essence providing keywords associated with your content. Tags can also be seen as micro-categorization of a site's content.
Tag.title
+++++++++
Short descriptive name for the tag to be used for display.
Tag.slug
++++++++
Short descriptive unique name to be used in urls.
Tag.categories
++++++++++++++
Categories to which the tag belongs.
Authors
=======
Praekelt Foundation
-------------------
* Shaun Sephton
* Jonathan Bydendyk
* Hedley Roos
Changelog
=========
next
----
#. String representation for Python 3.
2.0.1
-----
#. Django 2.1 support. The minimum supported Django version is now 1.11.
#. Added coveralls
2.0.0
-----
#. Django 2 support. The minimum supported Django version is now 1.10.
1.11.0
------
#. Compatibility for Python 3.5 and Django 1.11.
1.9
---
#. Actual unit tests.
#. Compatibility from Django 1.6 to 1.9.
0.1.3
-----
#. __unicode__ method now returns a sensible result.
0.1.2
-----
#. Fix tree view.
0.1.1
-----
#. Added sites and subtitle fields.
0.1
---
#. Dependency cleanup.
0.0.6
-----
#. Added get_absolute_url on Category
0.0.5
-----
#. Use prepopulate_fields for admin interface
#. Parent category field added
#. South migration path created
#. Tree view of categories and tags
0.0.4 (2011-08-24)
------------------
#. Docs, testrunner.
| 22.552941 | 229 | 0.67397 |
8d59b285e5310a88eca259e98edb3bfb15c42f6f | 1,389 | rst | reStructuredText | doc/source/simple_network_sim.rst | magicicada/simple_network_sim | f7d31bb97052951658a5954ecba2ffe8fc3f2aa7 | [
"BSD-2-Clause"
] | 1 | 2020-05-23T16:01:59.000Z | 2020-05-23T16:01:59.000Z | doc/source/simple_network_sim.rst | magicicada/simple_network_sim | f7d31bb97052951658a5954ecba2ffe8fc3f2aa7 | [
"BSD-2-Clause"
] | 3 | 2020-06-01T20:02:34.000Z | 2021-05-04T13:09:33.000Z | doc/source/simple_network_sim.rst | magicicada/simple_network_sim | f7d31bb97052951658a5954ecba2ffe8fc3f2aa7 | [
"BSD-2-Clause"
] | 1 | 2020-04-18T15:03:39.000Z | 2020-04-18T15:03:39.000Z | simple\_network\_sim package
============================
.. automodule:: simple_network_sim
:members:
:undoc-members:
:show-inheritance:
Submodules
----------
simple\_network\_sim.common module
----------------------------------
.. automodule:: simple_network_sim.common
:members:
:undoc-members:
:show-inheritance:
simple\_network\_sim.generateSampleNodeLocationFile module
----------------------------------------------------------
.. automodule:: simple_network_sim.generateSampleNodeLocationFile
:members:
:undoc-members:
:show-inheritance:
simple\_network\_sim.loaders module
-----------------------------------
.. automodule:: simple_network_sim.loaders
:members:
:undoc-members:
:show-inheritance:
simple\_network\_sim.network\_of\_individuals module
----------------------------------------------------
.. automodule:: simple_network_sim.network_of_individuals
:members:
:undoc-members:
:show-inheritance:
simple\_network\_sim.network\_of\_populations module
----------------------------------------------------
.. automodule:: simple_network_sim.network_of_populations
:members:
:undoc-members:
:show-inheritance:
simple\_network\_sim.sampleUseOfModel module
--------------------------------------------
.. automodule:: simple_network_sim.sampleUseOfModel
:members:
:undoc-members:
:show-inheritance:
| 23.15 | 65 | 0.589633 |
eb23f0f01892b28396224c8c26005e9630242a0a | 1,689 | rst | reStructuredText | docs/index.rst | dnidever/chronos | dc1c1b5b81f7969ec52ca7e685cb5bd08fe5fe97 | [
"MIT"
] | null | null | null | docs/index.rst | dnidever/chronos | dc1c1b5b81f7969ec52ca7e685cb5bd08fe5fe97 | [
"MIT"
] | null | null | null | docs/index.rst | dnidever/chronos | dc1c1b5b81f7969ec52ca7e685cb5bd08fe5fe97 | [
"MIT"
] | null | null | null | .. chronos documentation master file, created by
sphinx-quickstart on Tue Feb 16 13:03:42 2021.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
*******
Chronos
*******
Introduction
============
|Chronos| [#f1]_ is software to automatically fit isochrones to cluster photometry.
.. toctree::
:maxdepth: 1
install
gettingstarted
modules
Description
===========
|Chronos| has a number of modules to perform photometry.
|Chronos| can be called from Python directly, or the command-line script `chronos` can be used.
Examples
========
.. toctree::
:maxdepth: 1
examples
chronos
=======
Here are the various input arguments for command-line script `chronos`::
usage: chronos [-h] [--outfile OUTFILE] [--figfile FIGFILE] [-d OUTDIR]
[-l] [-p] [-v] [-t]
files [files ...]
Run Chronos on a catalog
positional arguments:
files Catalog FITS files or list
optional arguments:
-h, --help show this help message and exit
--outfile OUTFILE Output filename
--figfile FIGFILE Figure filename
-d OUTDIR, --outdir OUTDIR
Output directory
-l, --list Input is a list of FITS files
-p, --plot Save the plots
-v, --verbose Verbose output
-t, --timestamp Add timestamp to Verbose output
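For instance, a typical run on a catalog file, saving the output and a figure with verbose logging, might look like this (file names are illustrative)::

    chronos cluster_phot.fits --outfile cluster_iso.fits --figfile cluster_iso.png -p -v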
*****
Index
*****
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. rubric:: Footnotes
.. [#f1] `Chronos <https://en.wikipedia.org/wiki/Chronos>`_, is the personification of time in pre-Socratic philosophy and later literature.
| 23.136986 | 140 | 0.619301 |
e50e976a0435ae97ac2387632053bd5651b5779e | 285 | rst | reStructuredText | docs/tutorial.rst | timo/zasim | 54d8eb329af73700bf0df2be6e753e309e9d8191 | [
"BSD-3-Clause"
] | 2 | 2017-05-15T12:24:57.000Z | 2018-03-09T10:25:45.000Z | docs/tutorial.rst | timo/zasim | 54d8eb329af73700bf0df2be6e753e309e9d8191 | [
"BSD-3-Clause"
] | null | null | null | docs/tutorial.rst | timo/zasim | 54d8eb329af73700bf0df2be6e753e309e9d8191 | [
"BSD-3-Clause"
] | null | null | null | Tutorial section
================
.. toctree::
tutorial/installation
tutorial/invocation
tutorial/coding_simple_ca
tutorial/custom_stepfunc
tutorial/custom_computation
tutorial/debug_cagen
tutorial/simulator_without_cagen
tutorial/using_zasim_in_gui
| 19 | 36 | 0.733333 |
9c0756a68b0254170bb325fbf8270a387d4e8c3a | 333 | rst | reStructuredText | includes_server_rbac/includes_server_rbac_permissions.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | null | null | null | includes_server_rbac/includes_server_rbac_permissions.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | null | null | null | includes_server_rbac/includes_server_rbac_permissions.rst | nathenharvey/chef-docs | 21aa14a43cc0c81db14eb107071f0f7245945df8 | [
"CC-BY-3.0"
] | null | null | null | .. The contents of this file are included in multiple topics.
.. This file should not be changed in a way that hinders its ability to appear in multiple documentation sets.
Permissions are used in the |chef server| to define how users and groups can interact with objects on the server. Permissions are configured per-organization. | 66.6 | 158 | 0.798799 |
750f6570c4b9406169e4e6342aa3e14690adf855 | 4,152 | rst | reStructuredText | docs/usage.rst | suriyan/geo_sampling | 75bc018f37ea9583bdf3cf7fba4565b403aece40 | [
"MIT"
] | null | null | null | docs/usage.rst | suriyan/geo_sampling | 75bc018f37ea9583bdf3cf7fba4565b403aece40 | [
"MIT"
] | null | null | null | docs/usage.rst | suriyan/geo_sampling | 75bc018f37ea9583bdf3cf7fba4565b403aece40 | [
"MIT"
] | null | null | null | Usage
#####
geo_roads
---------
Get all the roads in a specific region from OpenStreetMap.
::
usage: geo_roads.py [-h] [-c COUNTRY] [-l {1,2,3,4}] [-n NAME]
[-t TYPES [TYPES ...]] [-o OUTPUT] [-d DISTANCE]
[--no-header] [--plot]
Geo roads data
optional arguments:
-h, --help show this help message and exit
-c COUNTRY, --country COUNTRY
Select country
-l {1,2,3,4}, --level {1,2,3,4}
Select administrative level
-n NAME, --name NAME Select region name
-t TYPES [TYPES ...], --types TYPES [TYPES ...]
Select road types (list)
-o OUTPUT, --output OUTPUT
Output file name
-d DISTANCE, --distance DISTANCE
Distance in meters to split
--no-header Output without header at the first row
--plot Plot the output
Output File Format
******************
#. *segment_id* - Unique ID (record number)
#. *osm_id* - ID from Open Street Map data
#. *osm_name* - Name from Open Street Map data (road name)
#. *osm_type* - Type from Open Street Map data (road type)
#. *start_lat* and *start_long* - Line segment start position (lat/long)
#. *end_lat* and *end_long* - Line segment end position (lat/long)
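Since each row carries the start and end positions of a segment, its length can be recovered with a standard great-circle (haversine) formula. A minimal sketch, assuming the column names above are used as the CSV header (the sample row is fabricated):

```python
import csv
import math
from io import StringIO

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/long points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical one-row extract in the column layout listed above.
sample = StringIO(
    "segment_id,osm_id,osm_name,osm_type,start_lat,start_long,end_lat,end_long\n"
    "1,100,Main Road,primary,0.0,0.0,0.0,1.0\n"
)

lengths = [
    haversine_m(float(r["start_lat"]), float(r["start_long"]),
                float(r["end_lat"]), float(r["end_long"]))
    for r in csv.DictReader(sample)
]
print(round(lengths[0]))  # one degree of longitude at the equator: ~111195 m
```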
Examples
********
To get a list of all the country names:
::
geo_roads
To get a list of all boundary names of Thailand at a specific administrative level:
::
geo_roads -c Thailand -l 1
In this case, all boundary names (77 provinces) at the 1st `administrative divisions level <https://en.wikipedia.org/wiki/Table_of_administrative_divisions_by_country>`_ of Thailand will be listed.
To get road data for the ``Trang`` province (only the road types `trunk`, `primary`, `secondary` and `tertiary`):
::
geo_roads -c Thailand -l 1 -n Trang -t trunk primary secondary tertiary --plot
The default output file will be saved as ``output.csv``, and all road segments will be plotted if *--plot* is specified
.. image:: _images/tha_trang.png
To run the script for ``Delhi of India`` and to save the output as ``delhi-roads.csv``:
::
geo_roads -c India -l 1 -n "NCT of Delhi" -o delhi-roads.csv --plot
.. image:: _images/delhi.png
By default, all road types are included in the output if `--types, -t` is not specified.
sample_roads
------------
Randomly sample a specific number of road segments of all roads or specific road types.
::
usage: sample_roads.py [-h] [-n SAMPLES] [-t TYPES [TYPES ...]]
[-o OUTPUT] [--no-header] [--plot]
input
Random sample road segments
positional arguments:
input Road segments input file
optional arguments:
-h, --help show this help message and exit
-n SAMPLES, --n-samples SAMPLES
Number of random samples
-t TYPES [TYPES ...], --types TYPES [TYPES ...]
Select road types (list)
-o OUTPUT, --output OUTPUT
Sample output file name
--no-header Output without header at the first row
--plot Plot the output
Examples
********
To get a random sample of 1,000 road segments of road types `primary`, `secondary`, `tertiary` and `trunk`:
::
sample_roads -n 1000 -t primary secondary tertiary trunk -o delhi-roads-s1000.csv delhi-roads.csv
.. image:: _images/delhi_sampling1000.png
To get specific road types for Rhode Island in US:
::
geo_roads -c "United States" -l 1 -n "Rhode Island" -t trunk primary secondary tertiary road -o rhode-island-roads.csv --plot
.. image:: _images/rhode_island.png
And then get a random sample of 1,000:
::
sample_roads -n 1000 -o rhode-island-s1000.csv --plot rhode-island-roads.csv
.. image:: _images/rhode_island_sampling1000.png
To get a specific region at 3rd adm. level (Tambon) of Thailand (e.g. "Tambon Sattahip, Amphoe Sattahip, Chon Buri, Thailand"):
::
geo_roads -c Thailand -l 3 -n "Chon Buri+Sattahip+Sattahip" -o sattahip-roads.csv --plot
.. image:: _images/sattahip.png
| 26.113208 | 197 | 0.626204 |
f47ec2982d2d51b7b3061264fa5b5cdc6d20d66b | 13,085 | rst | reStructuredText | docs/source/CSUI-Storage.rst | wowshakhov/cloudstack-ui | 3715031bed2a137019b520c6ee759cdcf08c60b2 | [
"Apache-2.0"
] | null | null | null | docs/source/CSUI-Storage.rst | wowshakhov/cloudstack-ui | 3715031bed2a137019b520c6ee759cdcf08c60b2 | [
"Apache-2.0"
] | null | null | null | docs/source/CSUI-Storage.rst | wowshakhov/cloudstack-ui | 3715031bed2a137019b520c6ee759cdcf08c60b2 | [
"Apache-2.0"
] | null | null | null | .. _Storage:
Storage
----------
.. Contents::
In the *Virtual Machines* -> *Storage* section, you can create and manage drives for virtual machines. Here you can add new disks, create templates and snapshots of a volume, view the list of snapshots for each volume.
.. _static/Storage_VolumeManagement.png
Drive list
~~~~~~~~~~~~
.. note:: If you have just started working with CloudStack and you do not have virtual machines yet, you have no disks in the list. Once you create a VM, a root disk is created for it automatically. Creation of an additional disk consumes resources and incurs costs. Please make sure you definitely need an additional data disk.
Here you can find a list of disks existing for your user.
.. figure:: _static/Storage_List2.png
Domain Administrator can see disks of all accounts in the domain.
.. figure:: _static/Storage_List_Admin4.png
Disks can be viewed as a list or as a grid of cards. Switch the view by clicking a view icon |view icon|/|box icon| in the upper-right corner.
Filtering of Drives
""""""""""""""""""""""""""
Root disks are visually distinguished from data disks in the list. There is an option to display only spare disks which allows saving user's time in certain cases.
As in all lists, there is the filtering tool for selecting drives by zones and/or types. You also can apply the search tool selecting a drive by its name or a part of the name.
.. figure:: _static/Storage_FilterAndSearch3.png
To distinguish drives in the list more easily, you can group them by zones and/or types, as in the figure below:
.. figure:: _static/Storage_Grouping2.png
Domain Administrators can see the list of drives of all accounts in the domain. Filtering by accounts is available to Administrators.
.. figure:: _static/Storage_FilterAndSearch_Admin.png
For each drive in the list the following information is presented:
- Drive name,
- Size,
- State - Ready or Allocated.
The Actions button |actions icon| is available to the right. It expands the list of actions for a disk. See the information on actions in the :ref:`Actions_on_Disks` section below.
Create New Volume
~~~~~~~~~~~~~~~~~~~
In the *Storage* section you can create new volumes. Please note that if you aim to create a virtual machine, we do not recommend starting by adding new disks to the system. You can go straight to the *Virtual Machines* section and create a VM. A root disk will be created for the VM automatically.
.. _static/CreateVMwithRD.png
If necessary, you can create a data disk and attach it to your VM. By clicking the "Create" button |create icon| in the bottom-right corner you will open a creation form. Please make sure you definitely need an additional disk, as it consumes resources and incurs costs. If you do not have any disks yet, a dialog box will ask you whether you really want to create a drive when you click "Create". Confirm the creation by clicking "CONTINUE":
.. figure:: _static/AdditionalDiskNotification.png
A creation form will appear.
.. figure:: _static/Storage_Create3.png
To create a new volume fill in the fields:
.. note:: Required fields are marked with an asterisk (*).
- Name * - Enter a name of the volume.
- Zone * - Select a zone from the drop-down list.
- Disk offering * - Select from the list of available offerings opening it in a modal window by clicking "SELECT". The list of available disk offerings is determined in the `configuration file <https://github.com/bwsw/cloudstack-ui/blob/master/config-guide.md#service-offering-availability>`_ by Administrator.
In the modal window you can see the name and short description for each disk offering and a radio-button to select any option.
.. figure:: _static/Storage_Create_Select1.png
For each disk offering you can expand detailed information by clicking the arrow icon or the whole line in the list. In the section that appears, you will see a range of parameters. The following parameters are shown by default:
- Bandwidth (MB/s): Read/Write rates;
- IOPS: Read/Write rates and Min/Max values;
- Storage type;
- Provisioning type;
- Creation date.
Use the scrolling tool to view them all.
More parameters can be added via the `configuration file <https://github.com/bwsw/cloudstack-ui/blob/master/config-guide.md#disk-offering-parameters>`_ by an Administrator.
.. figure:: _static/Storage_Create_Select_Expand.png
Select a disk offering in the list and click "SELECT".
.. figure:: _static/Storage_Create_SelectDO.png
If the selected disk offering has a custom disk size (it is set by Administrator), you can change the disk size moving the slider to the volume size you wish or entering a value into the number field.
.. figure:: _static/Storage_Create_ResizeDisk1.png
Click "CREATE" to save the settings and create the new volume. You will see the drive appears in the list.
.. figure:: _static/Storage_Created1.png
Click "CANCEL" to drop all the settings. The drive will not be created then.
.. _Storage_Info:
Volume Details Sidebar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By clicking a disk in the list you can access the information on the volume.
.. figure:: _static/Storage_Info3.png
At the right sidebar you can find two tabs:
1. Volume tab - Provides the information on the disk volume:
- General information - Presents disk size, date and time of creation, the storage type (shared, local).
- Description - Allows entering a short description to the drive. Click at the Description card and enter a short description in the text block.
.. figure:: _static/Storage_Description2.png
Click "Save" to save the description. Description will be saved to volume `tags <https://github.com/bwsw/cloudstack-ui/wiki/Tags>`_.
You can edit the description by clicking the "Edit" button |edit icon| in the tab.
.. figure:: _static/Storage_DescriptionEdit2.png
- Disk offering - Presents the information on the disk offering chosen at disk creation.
2. Snapshots tab - Allows creating disk snapshots. Snapshots can be taken for disks with the "Ready" status only.
Click the "Add" button |create icon| and enter in the dialog box:
- Name - Define a name for the snapshot. It is auto-generated in the format ``<date>-<time>``. But you can specify any name you wish.
- Description - Add a description of the snapshot to know what it contains.
Then click "Create" and see the snapshot has appeared in the list.
.. figure:: _static/Storage_CreateSnapshot2.png
Every snapshot is saved in a separate card. There you will see the name and time of the snapshot.
For each snapshot the list of actions is available. Find more information on snapshot actions in the :ref:`Actions_on_Snapshot_Volume` section below.
.. _Actions_on_Snapshot_Volume:
Snapshots Action Box
""""""""""""""""""""""""""""
.. note:: For a newly taken snapshot all actions except "Delete" are disabled until the snapshot is backed up to the Secondary Storage that may take some time. Once it is backed up, a full range of actions is available to a user.
Likewise the Virtual Machine information tab, the same actions are available for a snapshot:
- **Create a template** - Allows creating a template from the snapshot. This template can be used for VM creation.
Fill in the form to register a new template:
.. note:: Required fields are marked with an asterisk (*).
- Name * - Enter a name of the new template.
- Description * - Provide a short description of the template.
- OS type * - Select an OS type from the drop-down list.
- Group - Select a group from the drop-down list.
- Password enabled - Tick this option if the template has the password change script installed. That means the VM created on the base of this template will be accessed by a password, and this password can be reset.
- Dynamically scalable - Tick this option if the template contains XS/VM Ware tools to support the dynamic scaling of VM CPU/memory.
Click "SHOW ADDITIONAL FIELDS" to expand the list of optional settings. It allows creating a template that requires HVM.
Once all fields are filled in click "Create" to create the new template.
.. figure:: _static/Storage_CreateTemplate2.png
- **Create Volume** - Allows creating a volume from the snapshot.
Type a name for a new volume into the Name field in the modal window. Click “Create” to register a new volume.
.. figure:: _static/Storage_SnapshotActions_CreateVolume1.png
Click “Cancel” to cancel the volume creation.
- **Revert Volume To Snapshot** - Allows turning the volume back to the state of the snapshot.
In the dialog box confirm your action. Please, note, the virtual machine the volume is assigned to will be rebooted.
.. figure:: _static/Storage_SnapshotActions_Revert1.png
- **Delete** - Allows deleting the snapshot. Click “Delete” in the Action box and confirm your action in the modal window. The snapshot will be deleted. Click “Cancel” to cancel the deletion.
.. Find the detailed description in the :ref:`Actions_on_Snapshots` section.
.. _Actions_on_Disks:
Volume Action Box
~~~~~~~~~~~~~~~~~~~
Action on drives are available under the Actions button |actions icon|.
The following actions are available on disk:
For root disks:
- Take a snapshot;
- Set up snapshot schedule;
- Resize the disk.
For data disks:
- Take a snapshot;
- Set up snapshot schedule;
- Detach;
- Resize the disk;
- Delete.
.. figure:: _static/Storage_Actions.png
**Take a snapshot**
You can take a snapshot of the disk to preserve the data volumes. Snapshots can be taken for disks with the "Ready" status only.
Click "Take a snapshot" in the disk Actions list and in the dialog window enter the following information:
.. note:: Required fields are marked with an asterisk (*).
- Name of the snapshot * - Define a name for the snapshot. It is auto-generated in the format ``<date>-<time>``. But you can specify any name you wish.
- Description - Add a description of the snapshot to know what it contains.
All snapshots are saved in the list of snapshots. For a snapshot you can:
- Create a template;
- Delete the snapshot.
See the :ref:`Actions_on_Snapshot_Volume` section for more information.
**Set up snapshot schedule**
This action is available for disks with the "Ready" status only.
You can schedule the regular snapshotting by clicking "Set up snapshot schedule" in the Actions list.
In the window that appears, set up the schedule for recurring snapshots:
- Select the frequency of snapshotting - hourly, daily, weekly, monthly;
- Select a minute (for hourly scheduling), the time (for daily scheduling), the day of week (for weekly scheduling) or the day of month (for monthly scheduling) when the snapshotting is to be done;
- Select the timezone according to which the snapshotting is to be done at the specified time;
- Set the number of snapshots to be made.
Click "+" to save the schedule. You can add more than one schedule, but only one per type (hourly, daily, weekly, monthly).
.. figure:: _static/Storage_ScheduleSnapshotting1.png
**Resize the disk**
.. note:: This action is available to root disks as well as data disks created on the base of disk offerings with a custom disk size. Disk offerings with custom disk size can be created by Root Administrators only.
You can change the disk size by selecting the "Resize the disk" option in the Actions list. Note that the disk size can only be increased.
In the window that appears, set a new size and click "RESIZE" to save the edits.
.. figure:: _static/Storage_ResizeDisk2.png
Click "Cancel" to drop the size changes.
**Attach/Detach**
This action can be applied to data disks. It allows attaching/detaching the data disk to/from the virtual machine.
Click "Attach" in the Actions list and in the dialog window select a virtual machine to attach the disk to. Click "ATTACH" to perform the attachment.
.. figure:: _static/Storage_AttachDisk1.png
An attached disk can be detached. Click "Detach" in the Actions list and confirm your action in the dialog window. The data disk will be detached from the virtual machine.
**Delete**
This action can be applied to data disks. It allows deleting a data disk from the system.
Click "Delete" in the Actions list and confirm your action in the dialog window.
If a volume has snapshots the system will ask you if you want to delete them as well. Click "YES" to delete the snapshots of the volume. Click "NO" to keep them.
The data disk will be deleted from the system.
.. |bell icon| image:: _static/bell_icon.png
.. |refresh icon| image:: _static/refresh_icon.png
.. |view icon| image:: _static/view_list_icon.png
.. |view box icon| image:: _static/box_icon.png
.. |view| image:: _static/view_icon.png
.. |actions icon| image:: _static/actions_icon.png
.. |edit icon| image:: _static/edit_icon.png
.. |box icon| image:: _static/box_icon.png
.. |create icon| image:: _static/create_icon.png
.. |copy icon| image:: _static/copy_icon.png
.. |color picker| image:: _static/color-picker_icon.png
.. |adv icon| image:: _static/adv_icon.png
| 44.206081 | 443 | 0.753687 |
f58f068afeeb5dc165722aa88cb97ee3abc815eb | 73 | rst | reStructuredText | sphinxdocs/stability.rst | NymanRobin/crl-interactivesessions | 2c1df279cde6c006d1741bed386ebbe2e5faf8ec | [
"BSD-3-Clause"
] | 2 | 2019-04-10T11:13:55.000Z | 2019-05-04T17:46:23.000Z | sphinxdocs/stability.rst | NymanRobin/crl-interactivesessions | 2c1df279cde6c006d1741bed386ebbe2e5faf8ec | [
"BSD-3-Clause"
] | 39 | 2019-03-04T14:20:24.000Z | 2021-12-03T17:14:19.000Z | sphinxdocs/stability.rst | NymanRobin/crl-interactivesessions | 2c1df279cde6c006d1741bed386ebbe2e5faf8ec | [
"BSD-3-Clause"
] | 5 | 2019-03-04T14:20:58.000Z | 2020-01-22T19:11:00.000Z | .. Copyright (C) 2019, Nokia
.. include:: ../stability-tests/README.rst
| 18.25 | 42 | 0.671233 |
f11f4ba63302f277de52f7ae228e36935e69ae35 | 613 | rst | reStructuredText | docs/source/_autosummary/bluebird.rst | Stoick01/bluebird | a6ab5fcbf42da24ef8268ba6bc110b9eadd9a2ac | [
"MIT"
] | 1 | 2020-08-04T10:44:51.000Z | 2020-08-04T10:44:51.000Z | docs/source/_autosummary/bluebird.rst | Stoick01/bluebird | a6ab5fcbf42da24ef8268ba6bc110b9eadd9a2ac | [
"MIT"
] | 3 | 2021-06-02T03:33:48.000Z | 2022-03-12T01:00:23.000Z | docs/source/_autosummary/bluebird.rst | Stoick01/bluebird | a6ab5fcbf42da24ef8268ba6bc110b9eadd9a2ac | [
"MIT"
] | null | null | null | bluebird
========
.. automodule:: bluebird
.. rubric:: Modules
.. autosummary::
:toctree:
:template: custom-module-template.rst
:recursive:
bluebird.activations
bluebird.data
bluebird.dataloader
bluebird.datasets
bluebird.exceptions
bluebird.layers
bluebird.loss
bluebird.metrics
bluebird.nn
bluebird.optimizers
bluebird.progress_tracker
bluebird.regularizators
bluebird.tensor
bluebird.utils
bluebird.weight_initializers
| 12.770833 | 57 | 0.569331 |
333c1607c6c79f581c32d05208b76cc962466828 | 591 | rst | reStructuredText | docs/source/services.rst | astandre/cb-compose-engine-ms | ed4141f57dcb544743fd17fe62001d573ae1efc9 | [
"MIT"
] | null | null | null | docs/source/services.rst | astandre/cb-compose-engine-ms | ed4141f57dcb544743fd17fe62001d573ae1efc9 | [
"MIT"
] | null | null | null | docs/source/services.rst | astandre/cb-compose-engine-ms | ed4141f57dcb544743fd17fe62001d573ae1efc9 | [
"MIT"
] | null | null | null | Services
=========
Here we present all the services used to communicate with the other microservices of the system.
By default, the address of every other microservice is defined in ``settings.ini``.
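As an illustration only — the actual section and key names depend on the project's configuration and are hypothetical here — such a file could look like::

    ; Hypothetical settings.ini layout; names and addresses are illustrative.
    [SERVICES]
    nlu_address = http://localhost:5001
    context_address = http://localhost:5002
    knowledge_address = http://localhost:5003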
These are the main methods of this class:
.. autofunction:: services.discover_intent
.. autofunction:: services.discover_entities
.. autofunction:: services.get_requirements
.. autofunction:: services.get_options
.. autofunction:: services.get_answer
.. autofunction:: services.find_in_context
.. autofunction:: services.get_agent_data
.. autofunction:: services.get_intent_rq
| 22.730769 | 96 | 0.783418 |
8c10e21a69c4f0688c4868bb4a8af3cf31fd6e14 | 5,224 | rst | reStructuredText | source/configuration/modules/imhiredis.rst | inahga/rsyslog-doc | b63fc9a7169766e14adb6e64f78a69e3c16d8eaa | [
"Apache-2.0"
] | 77 | 2015-02-04T11:56:46.000Z | 2022-03-11T18:07:07.000Z | source/configuration/modules/imhiredis.rst | inahga/rsyslog-doc | b63fc9a7169766e14adb6e64f78a69e3c16d8eaa | [
"Apache-2.0"
] | 412 | 2015-01-11T13:18:16.000Z | 2022-03-30T22:23:20.000Z | source/configuration/modules/imhiredis.rst | inahga/rsyslog-doc | b63fc9a7169766e14adb6e64f78a69e3c16d8eaa | [
"Apache-2.0"
] | 263 | 2015-01-13T11:44:50.000Z | 2022-03-07T11:13:34.000Z |
*****************************
Imhiredis: Redis input plugin
*****************************
==================== =====================================
**Module Name:** **imhiredis**
**Author:** Jeremie Jourdin <jeremie.jourdin@advens.fr>
==================== =====================================
Purpose
=======
Imhiredis is an input module reading arbitrary entries from Redis.
It uses the `hiredis library <https://github.com/redis/hiredis.git>`_ to query Redis instances using 2 modes:
- **queues**, using `LIST <https://redis.io/commands#list>`_ commands
- **channels**, using `SUBSCRIBE <https://redis.io/commands#pubsub>`_ commands
.. _imhiredis_queue_mode:
Queue mode
----------
The **queue mode** uses Redis LISTs to push/pop messages to/from lists. It allows simple and efficient use of Redis as a queueing system, providing both LIFO and FIFO methods.
This mode should be preferred if the user wants to use Redis as a caching system, with one (or many) Rsyslog instances POP'ing out entries.
.. Warning::
This mode was configured to provide optimal performances while not straining Redis, but as imhiredis has to poll the instance some trade-offs had to be made:
- imhiredis POPs entries in batches of 10 to improve performance (this is currently not configurable)
- when no entries are left in the list, the module sleeps for 1 second before checking the list again. This means messages might be delayed by as much as 1 second between a push to the list and a pop by imhiredis (entries will still be POP'ed out as fast as possible while the list is not empty)
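The polling behaviour described above can be modelled in a few lines. This is an illustrative sketch only — a plain ``deque`` stands in for the Redis LIST, and the real module's C implementation differs:

```python
from collections import deque

BATCH_SIZE = 10  # imhiredis pops entries in batches of 10

def poll_once(queue, use_lpop=False):
    """One polling pass of the queue mode; `queue` stands in for a Redis LIST."""
    batch = []
    while queue and len(batch) < BATCH_SIZE:
        # RPOP by default; LPOP when the 'uselpop' parameter is enabled.
        batch.append(queue.popleft() if use_lpop else queue.pop())
    # An empty batch means the list was drained: the real module now
    # sleeps for 1 second before polling the list again.
    return batch

q = deque("message-%d" % i for i in range(23))
drained = []
while True:
    got = poll_once(q)
    if not got:
        break  # in the real module: sleep 1 s, then poll again
    drained.extend(got)
print(len(drained))  # 23
```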
.. _imhiredis_channel_mode:
Channel mode
------------
The **subscribe** mode uses Redis PUB/SUB system to listen to messages published to Redis' channels. It allows performant use of Redis as a message broker.
This mode should be preferred when using Redis as a message broker, with zero, one or many subscribers listening to new messages.
.. Warning::
This mode shouldn't be used if messages must be reliably processed, as messages published while no imhiredis instance is listening will be lost.
Master/Replica
--------------
This module is able to automatically connect to the master instance of a master/replica(s) cluster. Given a valid connection entry point (either the current master or a valid replica), Imhiredis redirects to the master node on startup and whenever the states of the nodes change.
Configuration Parameters
========================
.. note::
Parameter names are case-insensitive
Input Parameters
----------------
.. _imhiredis_mode:
mode
^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"word", "subscribe", "yes", "none"
Defines the mode to use for the module.
Should be either "**subscribe**" (:ref:`imhiredis_channel_mode`), or "**queue**" (:ref:`imhiredis_queue_mode`) (case-sensitive).
.. _imhiredis_key:
key
^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"word", "none", "yes", "none"
Defines either the name of the list to use (for :ref:`imhiredis_queue_mode`) or the channel to listen to (for :ref:`imhiredis_channel_mode`).
.. _imhiredis_socketPath:
socketPath
^^^^^^^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"word", "no", "if no :ref:`imhiredis_server` provided", "none"
Defines the socket to use when trying to connect to Redis. Will be ignored if both :ref:`imhiredis_server` and :ref:`imhiredis_socketPath` are given.
.. _imhiredis_server:
server
^^^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"ip", "127.0.0.1", "if no :ref:`imhiredis_socketPath` provided", "none"
The Redis server's IP to connect to.
.. _imhiredis_port:
port
^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"number", "6379", "no", "none"
The Redis server's port to use when connecting via IP.
.. _imhiredis_password:
password
^^^^^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"word", "none", "no", "none"
The password to use when connecting to a Redis node, if necessary.
.. _imhiredis_uselpop:
uselpop
^^^^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"boolean", "no", "no", "none"
When using the :ref:`imhiredis_queue_mode`, defines whether imhiredis should use an LPOP instruction instead of an RPOP (the default).
Has no influence on the :ref:`imhiredis_channel_mode` and will be ignored if set with this mode.
ruleset
^^^^^^^
.. csv-table::
:header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
:widths: auto
:class: parameter-table
"word", "none", "no", "none"
Assign messages from this input to a specific Rsyslog ruleset.
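Putting these parameters together, a minimal queue-mode input might be configured as follows (the list key and ruleset name below are illustrative, not defaults):

```
module(load="imhiredis")
input(type="imhiredis"
      mode="queue"
      key="rsyslog_queue"
      server="127.0.0.1"
      port="6379"
      ruleset="redisRuleset")
```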
| 28.391304 | 296 | 0.678216 |
3dd19718d12c2c65df291049d18614c4edf2ce4e | 1,190 | rst | reStructuredText | doc/source/DEVELOP.rst | vishalbelsare/abcpy | 72d0d31ae3fa531b69ea3fef39c96af6628ee76f | [
"BSD-3-Clause-Clear"
] | 89 | 2017-02-23T23:34:52.000Z | 2022-03-25T20:35:17.000Z | doc/source/DEVELOP.rst | vishalbelsare/abcpy | 72d0d31ae3fa531b69ea3fef39c96af6628ee76f | [
"BSD-3-Clause-Clear"
] | 35 | 2017-03-31T13:24:52.000Z | 2022-01-09T11:31:38.000Z | doc/source/DEVELOP.rst | vishalbelsare/abcpy | 72d0d31ae3fa531b69ea3fef39c96af6628ee76f | [
"BSD-3-Clause-Clear"
] | 32 | 2017-03-22T06:27:43.000Z | 2021-09-17T15:50:42.000Z | Branching Scheme
================
We use the branching strategy described in this `blog post <http://nvie.com/posts/a-successful-git-branching-model>`_.
Deploy a new Release
====================
This documentation is mainly intended for the main developers. The deployment of
new releases is automated using Travis CI. However, there are still a few manual
steps required in order to deploy a new release. Assume we want to deploy the
new version `M.m.b`:
1. Create a release branch `release-M.m.b`
2. Adapt `VERSION` file in the repos root directory: `echo M.m.b > VERSION`
3. Adapt `README.md` file: adapt links to correct version of `User Documentation` and `Reference`
4. Adapt `doc/source/installation.rst` file: to install correct version of ABCpy
5. Merge all desired feature branches into the release branch
6. Create a pull/merge request: release branch -> master
After a successful merge:
7. Create tag vM.m.b (`git tag vM.m.b`)
8. Retag tag `stable` to the current version
9. Push the tag (`git push --tags`)
10. Create a release in GitHub
The new tag on master will signal Travis to deploy a new package to Pypi while
the GitHub release is just for user documentation.
| 38.387097 | 118 | 0.740336 |
b62eee3c5ee494b90881d25f3d18b694236cbfd5 | 87 | rst | reStructuredText | Misc/NEWS.d/next/Library/2017-08-30-20-27-00.bpo-31292.dKIaZb.rst | praleena/newpython | cb0748d3939c31168ab5d3b80e3677494497d5e3 | [
"CNRI-Python-GPL-Compatible"
] | 24 | 2016-05-09T12:15:47.000Z | 2020-06-23T11:56:01.000Z | Misc/NEWS.d/next/Library/2017-08-30-20-27-00.bpo-31292.dKIaZb.rst | praleena/newpython | cb0748d3939c31168ab5d3b80e3677494497d5e3 | [
"CNRI-Python-GPL-Compatible"
] | 4 | 2022-03-30T01:50:22.000Z | 2022-03-30T01:50:28.000Z | Misc/NEWS.d/next/Library/2017-08-30-20-27-00.bpo-31292.dKIaZb.rst | praleena/newpython | cb0748d3939c31168ab5d3b80e3677494497d5e3 | [
"CNRI-Python-GPL-Compatible"
] | 14 | 2016-11-01T16:02:43.000Z | 2021-06-20T19:25:03.000Z | Fix ``setup.py check --restructuredtext`` for
files containing ``include`` directives.
| 29 | 45 | 0.758621 |
a1cc0e1d9ae65767ae25b5afa34512390b4b0724 | 2,082 | rst | reStructuredText | doc/source/design/soc/hbirdv2.rst | riscv-mcu/hbird-sdk | 6327529ae6c46dc37372361cbf125252dfed0886 | [
"Apache-2.0"
] | 65 | 2020-10-21T09:36:54.000Z | 2022-03-30T07:03:00.000Z | hbird-sdk/doc/source/design/soc/hbirdv2.rst | OpenEDF/e203_hbirdv2 | 352812b7b157b36fd47c6f33db929247c53c1b07 | [
"Apache-2.0"
] | 4 | 2020-11-22T19:05:58.000Z | 2021-11-03T05:11:31.000Z | hbird-sdk/doc/source/design/soc/hbirdv2.rst | OpenEDF/e203_hbirdv2 | 352812b7b157b36fd47c6f33db929247c53c1b07 | [
"Apache-2.0"
] | 21 | 2020-08-06T09:14:37.000Z | 2022-03-26T11:25:35.000Z | .. _design_soc_hbirdv2:
HummingBird SoC V2
==================
HummingBird SoC V2 is an evaluation FPGA SoC based on HummingBird RISC-V Core
for customer to evaluate HummingBird Process Core.
To get the up to date documentation about this SoC, please click:
* `HummingBird SoC V2 online documentation`_
* `HummingBird SoC V2 project source code`_
.. _design_soc_hbirdv2_overview:
Overview
--------
To easy user to evaluate HummingBird RISC-V Processor Core, the prototype
SoC (called Hummingbird SoC) is provided for evaluation purpose.
This prototype SoC includes:
* Processor Core, it can be RISC-V Core.
* On-Chip SRAMs for instruction and data.
* The SoC buses.
* The basic peripherals, such as UART, GPIO, SPI, I2C, etc.
With this prototype SoC, user can run simulations, map it into the FPGA board,
and run with real embedded application examples.
The SoC diagram can be checked as below :ref:`figure_design_soc_hbirdv2_1`
.. _figure_design_soc_hbirdv2_1:
.. figure:: /asserts/images/hbirdv2_soc_diagram.jpg
:width: 80 %
:align: center
:alt: HummingBird V2 SoC Diagram
HummingBird V2 SoC Diagram
If you want to learn more about this evaluation SoC,
please click `HummingBird SoC V2 online documentation`_.
.. _design_soc_hbirdv2_boards:
Supported Boards
----------------
In the HummingBird SDK, we support the following boards based on the **HummingBird** SoC:
* :ref:`design_board_ddr200t`
* :ref:`design_board_mcu200t`
.. _design_soc_hbirdv2_usage:
Usage
-----
If you want to use this **HummingBird** SoC in the HummingBird SDK, you need to set the
:ref:`develop_buildsystem_var_soc` Makefile variable to ``hbirdv2``.
.. code-block:: shell
# Choose SoC to be hbird
# the following command will build application
# using default hbird SoC based board
# defined in Build System and application Makefile
make SOC=hbirdv2 all
.. _Nuclei: https://nucleisys.com/
.. _HummingBird SoC V2 online documentation: https://doc.nucleisys.com/hbirdv2
.. _HummingBird SoC V2 project source code: https://github.com/riscv-mcu/e203_hbirdv2
| 27.394737 | 86 | 0.747839 |
e758aa3fcb3d32061d3140a69032d8e2819b8bad | 406 | rst | reStructuredText | docs/source/index.rst | ken-mathenge/health_research_portal | e7e5ac8109c002a2d666c27ad076bbe040e00e5f | [
"MIT"
] | 1 | 2020-01-21T10:27:35.000Z | 2020-01-21T10:27:35.000Z | docs/source/index.rst | ken-mathenge/health_research_portal | e7e5ac8109c002a2d666c27ad076bbe040e00e5f | [
"MIT"
] | 13 | 2020-03-23T09:25:15.000Z | 2020-07-14T12:41:14.000Z | docs/source/index.rst | KennethMathenge/health_research_portal | e7e5ac8109c002a2d666c27ad076bbe040e00e5f | [
"MIT"
] | null | null | null | .. Health Research portal documentation master file, created by
sphinx-quickstart on Thu Jan 23 11:32:42 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Health Research portal's documentation!
==================================================
.. toctree::
:maxdepth: 3
:caption: Contents:
project_report | 31.230769 | 76 | 0.652709 |
2c1491c4e1cb5aa8ceee8bee375e8169041a981c | 52 | rst | reStructuredText | docs/source/reference/backbone.rst | tarikaltuncu/distanceclosure | 15a4663d2697f20c08e34a20a72676f881b66f13 | [
"MIT"
] | 9 | 2016-02-12T22:09:47.000Z | 2022-02-17T17:02:37.000Z | docs/source/reference/backbone.rst | tarikaltuncu/distanceclosure | 15a4663d2697f20c08e34a20a72676f881b66f13 | [
"MIT"
] | 3 | 2022-01-13T15:09:35.000Z | 2022-02-14T13:50:28.000Z | docs/source/reference/backbone.rst | tarikaltuncu/distanceclosure | 15a4663d2697f20c08e34a20a72676f881b66f13 | [
"MIT"
] | 3 | 2017-10-27T16:42:41.000Z | 2022-01-20T08:54:51.000Z |
.. automodule:: distanceclosure.backbone
:members: | 17.333333 | 40 | 0.769231 |
eb6565665ed35d1baa8b45bf08799338bf1b5ba2 | 4,536 | rst | reStructuredText | docs/tickets/124.rst | khchine5/book | b6272d33d49d12335d25cf0a2660f7996680b1d1 | [
"BSD-2-Clause"
] | 1 | 2018-01-12T14:09:58.000Z | 2018-01-12T14:09:58.000Z | docs/tickets/124.rst | khchine5/book | b6272d33d49d12335d25cf0a2660f7996680b1d1 | [
"BSD-2-Clause"
] | 4 | 2018-02-06T19:53:10.000Z | 2019-08-01T21:47:44.000Z | docs/tickets/124.rst | khchine5/book | b6272d33d49d12335d25cf0a2660f7996680b1d1 | [
"BSD-2-Clause"
] | null | null | null | :state: closed
:module: lino_welfare
#124 [closed] : Changements Châtelet Septembre 2014
===================================================
.. currentlanguage:: fr
Propositions de changement par :ref:`welcht` en septembre 2014.
DONE:
- impossible de mettre le mot de passe d'un nouvel utilisateur
--> OK
- Rendez-vous aujourd'hui:
Avoir la colonne « Résumé » dans le module.
Avoir la colonne « LOCAL »
Faire des colonnes moins larges.
--> OK
- Onglet "Situation familiale":
Mettre finalement composition de ménage au-dessus de Liens de
parenté.
--> OK
- Onglet "Interventants", panneau "Contacts":
Supprimer "Type de contact client".
Renommer "Remarques" en "Coordonnées".
- Page d'accueil : Placer la colonne « Bénéficiaire » entre « Quand »
et « Résumé »
- Onglet "Ateliers" : corriger mercrediT en mercredi (une fois qu’on a
cliqué sur un des ateliers)
- Peux-tu aussi retirer la case "personne" et mettre "auteur" dans la
liste des ateliers.
--> OK sauf que je propose "Instructeur" au lieu de "Auteur".
- Connaissances de langue : Renommer la colonne « Parlé » en «
Expression orale », « Ecrit » en « Expression écrite », etc.
- Niveaux de connaissance de langue: Très bien, Bien, Moyen, Faible,
Très faible
- Compétences professionnelles:
Renommer «Propriété» en «Fonction» (et reprendre les métiers
déjà encodés dans Fonctions)
- Freins: Nouvelle colonne "Détecté par" avec un menu déroulant pour
sélectionner un agent.
- Situation familiale: je propose l'approche suivant.
- nous convenons qu'il faut **encoder les ménages précédents**
éventuels pour pouvoir encoder les **enfants provenant de ces
ménages**. Notons qu'il est conseillé mais pas nécessaire
d'encoder le partenaire d'un ménage.
- Si un bénéficiaire est membre de plusieurs ménages, on peut
spécifier manuellement le "ménage primaire" en cliquant sur le
petit carré.
- Composition de ménage: pour ajouter un membre, on peut *soit*
sélectionner un bénéficiaire (et alors les 4 champs pour le nom,
le prénom, la date de naissance et le sexe deviennent
lecture-seule), *soit* remplir ces quatre champs. Si on les
remplit *tous*, alors Lino crée automatiquement un bénéficiaire.
- Pour les membres de ménage qui sont liés à un bénéficiaire, Lino
génère automatiquement les liens de parenté à partir des données
dans la compositions de ménage. Càd tous les membres de type
"parent" (càd le chef de famille et le partenaire) deviennent
père (mère) de tous les enfants.
- Onglet "Personne", panneau "Rendez-vous": Renommer « Utilisateur
responsable » en « Agent traitant » --> OK
- Onglet "Personne", panneau "Rendez-vous": Avoir la possibilité de
choisir « Pas venu » et « Pas excusé » en plus de « Recevoir » et
« Quitter » --> OK sauf que je les appelle "Excusé" et "Absent"
- Supprimer le panneau "Ateliers d'insertion sociale" (ces ateliers
sont transférés dans "Savoirs de base" (merge IntegEnrolments
into BasicEnrolments)
- Les PIIS ont maintenant leur onglet à eux seuls.
DISCUSS
- «EnrolmentsByPupil» : Renommer en "Orientations internes en attente"
--> Quel est le but de ce panneau? Pourquoi est-il dans le premier
onglet? Est-ce qu'il vous faut le panneau
:class:`ml.courses.SuggestedCoursesByPupil`?
- Dans configuration, je ne trouve pas les emplacements pour modifier
les Etats civils.
Oui, certaines listes ne sont pas prévus pour etre "directement"
modifiables. Dis-moi ce que tu voudrais changer.
- Peux-tu aussi changer le mot PARTICIPANTS en BENEFICIAIRES (afin que
l’on ait la liste de nos bénéficiaires). Ça va nous servir pour
compter le nombre de personnes présentes et garder un historique.
--> Mais le meme système pourrait servir pour des réunions internes,
(gestion des présences pour des personnes qui ne sont pas des
bénéficiaires)
- Alerte mail quand ajout nouvel intervenant.
- Vocabulaire: dans les "Ateliers" ("courses") nous avons deux
"catégories" (CourseAreas):
- basic : "Ateliers"
- job : "Modules de détermination d'un projet socioprofessionnel"
Je propose de les appeler p.ex. "Ateliers ouverts" et "Ateliers
modulaires"
- Formations : se peut-il que les Eupenois encodent PIIS de type
formation pour ce que vous voulez mettre dans "Formations"?
- Regarder `changes.Changes` et réfléchir s'il vous le faut.
TODO:
- Site visit (Luc and Mathieu)
Pages referring to this:
.. refstothis::
| 33.6 | 70 | 0.719356 |
6a2e590c12b68567620d408c435ff1fc15e13c24 | 170 | rst | reStructuredText | docs/functions/gs_quant.markets.index.Index.visualise_tree.rst | rtsscy/gs-quant | b86e1ddad2ea9551479607ad001f43dfead366e5 | [
"Apache-2.0"
] | 4 | 2021-05-11T14:35:53.000Z | 2022-03-14T03:52:34.000Z | docs/functions/gs_quant.markets.index.Index.visualise_tree.rst | rtsscy/gs-quant | b86e1ddad2ea9551479607ad001f43dfead366e5 | [
"Apache-2.0"
] | null | null | null | docs/functions/gs_quant.markets.index.Index.visualise_tree.rst | rtsscy/gs-quant | b86e1ddad2ea9551479607ad001f43dfead366e5 | [
"Apache-2.0"
] | null | null | null | gs\_quant.markets.index.Index.visualise_tree
============================================
.. currentmodule:: gs_quant.markets.index
.. automethod:: Index.visualise_tree | 28.333333 | 44 | 0.594118 |
c9b71e4fde26c554c4b956875558be002c6ad898 | 775 | rst | reStructuredText | docs/eog/eog-in-practice/cw21/feasibility-stage/feasibility-stage.rst | softwaresaved/event-organisation-guide | c92979ec2882c33cebd0f736101f91659f3c3375 | [
"CC-BY-4.0"
] | 4 | 2019-11-07T18:42:08.000Z | 2021-12-03T23:56:16.000Z | docs/eog/eog-in-practice/cw21/feasibility-stage/feasibility-stage.rst | softwaresaved/event-organisation-guide | c92979ec2882c33cebd0f736101f91659f3c3375 | [
"CC-BY-4.0"
] | 85 | 2019-01-18T17:05:14.000Z | 2022-03-07T10:29:45.000Z | docs/eog/eog-in-practice/cw21/feasibility-stage/feasibility-stage.rst | softwaresaved/event-organisation-guide | c92979ec2882c33cebd0f736101f91659f3c3375 | [
"CC-BY-4.0"
] | 6 | 2019-07-24T10:45:49.000Z | 2020-07-30T14:16:24.000Z | .. _cw21-feasibility-stage:
CW21 Feasibility Stage
========================
During the `Feasibility Stage <https://event-organisation-guide.readthedocs.io/en/latest/eog/feasibility-stage.html>`_ , the event idea is being explored more thoroughly.
At this stage, various things are needed before a formal sign-off and progression to the Event Project Stage (i.e. before the Institute/main stakeholder agrees to take on the staff effort, financial risk and opportunities afforded by running the event).
The following sections include the information from the Feasibility Stage which is needed for evaluation by stakeholders.
.. toctree::
:maxdepth: 1
:caption: Sections:
fs-goals-and-objectives
fs-audience
fs-date
fs-venue
fs-outputs-and-outcomes
| 36.904762 | 253 | 0.752258 |
352ac3633f115ff8f6582a30bc0a5eb06e91326c | 577 | rst | reStructuredText | includes_cookbooks/includes_cookbooks_attribute_file_methods_accessor.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | 1 | 2020-02-02T21:57:47.000Z | 2020-02-02T21:57:47.000Z | includes_cookbooks/includes_cookbooks_attribute_file_methods_accessor.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | null | null | null | includes_cookbooks/includes_cookbooks_attribute_file_methods_accessor.rst | trinitronx/chef-docs | 948d76fc0c0cffe17ed6b010274dd626f53584c2 | [
"CC-BY-3.0"
] | null | null | null | .. The contents of this file are included in multiple topics.
.. This file should not be changed in a way that hinders its ability to appear in multiple documentation sets.
Attribute accessor methods are automatically created and the method invocation can be used interchangeably with the keys. For example:
.. code-block:: ruby
default.apache.dir = "/etc/apache2"
default.apache.listen_ports = [ "80","443" ]
Which style to use is a matter of preference for how attributes are loaded from recipes; the same choice applies when retrieving the value of an attribute.
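A minimal sketch of how such interchangeable key/accessor access can be implemented (this is an illustration using plain Ruby ``method_missing``, not Chef's actual attribute implementation):

```ruby
# Toy nested mash whose keys are also reachable as accessor methods.
class TinyMash
  def initialize
    @data = {}
  end

  def [](key)
    @data[key.to_s]
  end

  def []=(key, value)
    @data[key.to_s] = value
  end

  # foo.bar reads the "bar" key, foo.bar = x writes it;
  # missing intermediate levels are created on demand.
  def method_missing(name, *args)
    key = name.to_s
    if key.end_with?("=")
      self[key.chomp("=")] = args.first
    else
      self[key] ||= TinyMash.new
    end
  end

  def respond_to_missing?(*_args)
    true
  end
end

default = TinyMash.new
default.apache.dir = "/etc/apache2"
default.apache.listen_ports = ["80", "443"]
puts default["apache"]["dir"]  # => /etc/apache2
```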
f57befe832919f9b70de35d7f2d32ea3b01f5d77 | 2,338 | rst | reStructuredText | source/javascript/jquery/plugin/scrollable-fixed-header-table.rst | pkimber/my-memory | 2ab4c924f1d2869e3c39de9c1af81094b368fb4a | [
"Apache-2.0"
] | null | null | null | source/javascript/jquery/plugin/scrollable-fixed-header-table.rst | pkimber/my-memory | 2ab4c924f1d2869e3c39de9c1af81094b368fb4a | [
"Apache-2.0"
] | null | null | null | source/javascript/jquery/plugin/scrollable-fixed-header-table.rst | pkimber/my-memory | 2ab4c924f1d2869e3c39de9c1af81094b368fb4a | [
"Apache-2.0"
] | null | null | null | Scrollable Fixed Header Table
*****************************
This plug-in works with :doc:`tablesorter`.
Links
=====
- `Scrollable Fixed Header Table`_
Install
=======
::
cd ~/repo/temp/
svn checkout http://jquery-sfht.googlecode.com/svn/trunk/ jquery-sfht
cd my/site/static/js/
mkdir jquery-sfht/
cd jquery-sfht/
cp ~/repo/temp/jquery-sfht/javascripts/jquery.cookie.pack.js .
cp ~/repo/temp/jquery-sfht/javascripts/jquery.dimensions.min.js .
cp ~/repo/temp/jquery-sfht/javascripts/jquery.scrollableFixedHeaderTable.js .
cd my/site/static/css/
mkdir jquery-sfht/
cd jquery-sfht/
cp -R ~/repo/temp/jquery-sfht/css/* .
Note: We don't actually want to copy everything in the ``css`` folder, as it
will include ``svn`` meta-data.
=====
.. code-block:: html
<link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}css/common/jquery-sfht/themes/blue/style.css">
<link rel="stylesheet" type="text/css" href="{{ STATIC_URL }}css/common/jquery-sfht/scrollableFixedHeaderTable.css">
<script type="text/javascript" src="{{ STATIC_URL }}js/common/jquery-1.6.4.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}js/common/jquery.tablesorter.js"></script>
<script type="text/javascript" src="{{ STATIC_URL }}js/common/jquery-sfht/jquery.cookie.pack.js" ></script>
<script type="text/javascript" src="{{ STATIC_URL }}js/common/jquery-sfht/jquery.dimensions.min.js" ></script>
<script type="text/javascript" src="{{ STATIC_URL }}js/common/jquery-sfht/jquery.scrollableFixedHeaderTable.js" ></script>
To initialise the table:
.. code-block:: javascript
$('#myTable').scrollableFixedHeaderTable(500, 200);
$('#myTable').tablesorter().bind('sortEnd', function(){
var $cloneTH = $('.sfhtHeader thead th');
var $trueTH = $('.sfhtData thead th');
$cloneTH.each(function(index){
$(this).attr('class', $($trueTH[index]).attr('class'));
});
});
$('.sfhtHeader thead th').each(function(index){
var $cloneTH = $(this);
var $trueTH = $($('.sfhtData thead th')[index]);
$cloneTH.attr('class', $trueTH.attr('class'));
$cloneTH.click(function(){
$trueTH.click();
});
});
.. _`Scrollable Fixed Header Table`: http://jeromebulanadi.wordpress.com/2010/03/22/scrollable-fixed-header-table-a-jquery-plugin/
| 31.173333 | 130 | 0.666809 |
401f9cfedddad484b4a761458f7fd398d7429074 | 963 | rst | reStructuredText | README.rst | autocorr/tcal-polynomial-fitting | 405e4dd8d8b722d0cc9c8774bb390bf8d662e9a3 | [
"MIT"
] | null | null | null | README.rst | autocorr/tcal-polynomial-fitting | 405e4dd8d8b722d0cc9c8774bb390bf8d662e9a3 | [
"MIT"
] | null | null | null | README.rst | autocorr/tcal-polynomial-fitting | 405e4dd8d8b722d0cc9c8774bb390bf8d662e9a3 | [
"MIT"
] | null | null | null | Calibrator monitoring polynomials
=================================
Compute time and frequency polynomials from the VLA calibrator monitoring
program. This module is to be run with Python v3.
Getting started
---------------
First, clone or download this repository and run
.. code-block:: bash
pip install --user --requirement requirements.txt
Then add the module directory to your ``PYTHONPATH``.
To generate the plots, call:
.. code-block:: python
from tcal_poly import (core, plotting)
f_df = core.aggregate_flux_files()
w_df = core.read_weather()
plotting.plot_all_light_curves(f_df, bands=core.BANDS)
plotting.plot_all_seds_rel(f_df)
    plotting.plot_all_weather_light_curves(
            f_df, w_df, fields=plotting.FSCALE_FIELDS, bands=core.BANDS,
    )
License
-------
The pipeline is authored by Brian Svoboda. The code and documentation is
released under the MIT License. A copy of the license is supplied in the
LICENSE file.
| 25.342105 | 73 | 0.721703 |
2bad5c9e2c27d5d5d49ff10ae563e80a8237986d | 2,257 | rst | reStructuredText | doc/build/dialects/mssql.rst | Dreamsorcerer/sqlalchemy | 153671df9d4cd7f2cdb3e14e6221f529269885d9 | [
"MIT"
] | 5,383 | 2018-11-27T07:34:03.000Z | 2022-03-31T19:40:59.000Z | doc/build/dialects/mssql.rst | Dreamsorcerer/sqlalchemy | 153671df9d4cd7f2cdb3e14e6221f529269885d9 | [
"MIT"
] | 2,719 | 2018-11-27T07:55:01.000Z | 2022-03-31T22:09:44.000Z | doc/build/dialects/mssql.rst | Dreamsorcerer/sqlalchemy | 153671df9d4cd7f2cdb3e14e6221f529269885d9 | [
"MIT"
] | 998 | 2018-11-28T09:34:38.000Z | 2022-03-30T19:04:11.000Z | .. _mssql_toplevel:
Microsoft SQL Server
====================
.. automodule:: sqlalchemy.dialects.mssql.base
SQL Server SQL Constructs
-------------------------
.. currentmodule:: sqlalchemy.dialects.mssql
.. autofunction:: try_cast
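As a quick illustration, ``try_cast()`` renders SQL Server's ``TRY_CAST`` expression when compiled with the mssql dialect (the column name and type below are illustrative; no database connection is needed to compile the expression):

```python
from sqlalchemy import Numeric, column
from sqlalchemy.dialects import mssql
from sqlalchemy.dialects.mssql import try_cast

# TRY_CAST returns NULL instead of raising an error when the cast fails.
expr = try_cast(column("product_price"), Numeric(10, 4))
print(expr.compile(dialect=mssql.dialect()))
```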
SQL Server Data Types
---------------------
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be
valid with SQL server are importable from the top level dialect, whether
they originate from :mod:`sqlalchemy.types` or from the local dialect::
from sqlalchemy.dialects.mssql import \
BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \
DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, JSON, MONEY, \
NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \
SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR
Types which are specific to SQL Server, or have SQL Server-specific
construction arguments, are as follows:
.. currentmodule:: sqlalchemy.dialects.mssql
.. autoclass:: BIT
:members: __init__
.. autoclass:: CHAR
:members: __init__
.. autoclass:: DATETIME2
:members: __init__
.. autoclass:: DATETIMEOFFSET
:members: __init__
.. autoclass:: IMAGE
:members: __init__
.. autoclass:: JSON
:members: __init__
.. autoclass:: MONEY
:members: __init__
.. autoclass:: NCHAR
:members: __init__
.. autoclass:: NTEXT
:members: __init__
.. autoclass:: NVARCHAR
:members: __init__
.. autoclass:: REAL
:members: __init__
.. autoclass:: ROWVERSION
:members: __init__
.. autoclass:: SMALLDATETIME
:members: __init__
.. autoclass:: SMALLMONEY
:members: __init__
.. autoclass:: SQL_VARIANT
:members: __init__
.. autoclass:: TEXT
:members: __init__
.. autoclass:: TIME
:members: __init__
.. autoclass:: TIMESTAMP
:members: __init__
.. autoclass:: TINYINT
:members: __init__
.. autoclass:: UNIQUEIDENTIFIER
:members: __init__
.. autoclass:: VARCHAR
:members: __init__
.. autoclass:: XML
:members: __init__
PyODBC
------
.. automodule:: sqlalchemy.dialects.mssql.pyodbc
mxODBC
------
.. automodule:: sqlalchemy.dialects.mssql.mxodbc
pymssql
-------
.. automodule:: sqlalchemy.dialects.mssql.pymssql
| 17.229008 | 73 | 0.681436 |
b6ab9f07e06fec1d9c49a03939e260365d2396e7 | 777 | rst | reStructuredText | source/lessons/L5/exercise-5.rst | gmwwho/site | ab6f990aa745b614ddc0d62f273d686b032e102c | [
"MIT"
] | 140 | 2019-10-22T18:09:12.000Z | 2022-03-30T16:03:39.000Z | source/lessons/L5/exercise-5.rst | gmwwho/site | ab6f990aa745b614ddc0d62f273d686b032e102c | [
"MIT"
] | 11 | 2019-10-23T08:37:05.000Z | 2021-03-29T14:53:38.000Z | source/lessons/L5/exercise-5.rst | gmwwho/site | ab6f990aa745b614ddc0d62f273d686b032e102c | [
"MIT"
] | 152 | 2019-10-25T16:34:43.000Z | 2022-03-14T08:24:38.000Z | Exercise 5
==========
.. image:: https://img.shields.io/badge/launch-CSC%20notebook-blue.svg
:target: https://notebooks.csc.fi/#/blueprint/d189695c52ad4c0d89ef72572e81b16c
.. admonition:: Start your assignment
You can start working on your copy of Exercise 5 by `accepting the GitHub Classroom assignment <https://classroom.github.com/a/Dx1aj7nT>`__.
**Exercise 5 is due by Thursday the 9th of December at 5pm** (day before the next practical session).
You can also take a look at the open course copy of `Exercise 5 in the course GitHub repository <https://github.com/AutoGIS-2021/Exercise-5>`__ (does not require logging in).
Note that you should not try to make changes to this copy of the exercise, but rather only to the copy available via GitHub Classroom.
| 45.705882 | 174 | 0.758044 |
ccd707796f3c9f56b42496da7f16a4c330efb4dc | 11,982 | rst | reStructuredText | syntax_option_type.rst | exeal/boostjp-regex | 240ca818fb0bb6c9ca86d03799039436ed895e03 | [
"BSL-1.0"
] | null | null | null | syntax_option_type.rst | exeal/boostjp-regex | 240ca818fb0bb6c9ca86d03799039436ed895e03 | [
"BSL-1.0"
] | null | null | null | syntax_option_type.rst | exeal/boostjp-regex | 240ca818fb0bb6c9ca86d03799039436ed895e03 | [
"BSL-1.0"
] | null | null | null | .. Copyright 2006-2007 John Maddock.
.. Distributed under the Boost Software License, Version 1.0.
.. (See accompanying file LICENSE_1_0.txt or copy at
.. http://www.boost.org/LICENSE_1_0.txt).
syntax_option_type
==================
.. contents::
:depth: 1
:local:
.. cpp:type:: implementation_specific_bitmask_type syntax_option_type
:cpp:type:`syntax_option_type` 型は実装固有のビットマスク型で、正規表現文字列の解釈方法を制御する。利便性のために、ここに挙げる定数はすべて :cpp:class:`basic_regex` テンプレートクラスのスコープにも複製していることに注意していただきたい。
.. _ref.syntax_option_type.syntax_option_type_synopsis:
syntax_option_type の概要
-------------------------
::
namespace std{ namespace regex_constants{
typedef implementation_specific_bitmask_type syntax_option_type;
   // The following flags are standardized:
static const syntax_option_type normal;
static const syntax_option_type ECMAScript = normal;
static const syntax_option_type JavaScript = normal;
static const syntax_option_type JScript = normal;
static const syntax_option_type perl = normal;
static const syntax_option_type basic;
static const syntax_option_type sed = basic;
static const syntax_option_type extended;
static const syntax_option_type awk;
static const syntax_option_type grep;
static const syntax_option_type egrep;
static const syntax_option_type icase;
static const syntax_option_type nosubs;
static const syntax_option_type optimize;
static const syntax_option_type collate;
//
   // The remaining options are specific to Boost.Regex:
//
   // Options common to Perl and POSIX regular expressions:
static const syntax_option_type newline_alt;
static const syntax_option_type no_except;
static const syntax_option_type save_subexpression_location;
   // Options specific to Perl regular expressions:
static const syntax_option_type no_mod_m;
static const syntax_option_type no_mod_s;
static const syntax_option_type mod_s;
static const syntax_option_type mod_x;
static const syntax_option_type no_empty_expressions;
   // Options specific to POSIX extended regular expressions:
static const syntax_option_type no_escape_in_lists;
static const syntax_option_type no_bk_refs;
   // Options for POSIX basic regular expressions:
static const syntax_option_type no_escape_in_lists;
static const syntax_option_type no_char_classes;
static const syntax_option_type no_intervals;
static const syntax_option_type bk_plus_qm;
static const syntax_option_type bk_vbar;
} // namespace regex_constants
} // namespace std
.. _ref.syntax_option_type.syntax_option_type_overview:
syntax_option_type overview
---------------------------
The type :cpp:type:`syntax_option_type` is an implementation-defined bitmask type (see the C++ standard, 17.3.2.1.2). Setting its elements has the effects listed in the tables below; a valid value of type :cpp:type:`syntax_option_type` must always have exactly one of the elements :cpp:var:`!normal`, :cpp:var:`!basic`, :cpp:var:`!extended`, :cpp:var:`!awk`, :cpp:var:`!grep`, :cpp:var:`!egrep`, :cpp:var:`!sed`, :cpp:var:`!literal` or :cpp:var:`!perl` set.
For convenience, note that all the constants listed here are also duplicated within the scope of the class template :cpp:class:`basic_regex`; so the following code, ::
boost::regex_constants::constant_name
can also be written as ::
boost::regex::constant_name
or as ::
boost::wregex::constant_name
all of which mean the same thing.
.. _ref.syntax_option_type.syntax_option_type_perl:
Options for Perl regular expressions
------------------------------------
Exactly one of the following must always be set for Perl regular expressions:
.. list-table::
:header-rows: 1
   * - Element
     - Standardized
     - Effect if set
   * - :cpp:var:`!ECMAScript`
     - Yes
     - Specifies that the grammar recognized by the regular expression engine follows its normal semantics: the same as that given in ECMA-262, ECMAScript Language Specification, Chapter 15 part 10, RegExp (Regular Expression) Objects (FWD.1).
       This is functionally equivalent to the :doc:`Perl regular expression syntax <syntax_perl>`.
       In this mode, Boost.Regex also supports the Perl-compatible :regexp:`(?…)` extensions.
   * - :cpp:var:`!perl`
     - No
     - As above.
   * - :cpp:var:`!normal`
     - No
     - As above.
   * - :cpp:var:`!JavaScript`
     - No
     - As above.
   * - :cpp:var:`!JScript`
     - No
     - As above.
The following options may also be combined when using Perl-style regular expressions:
.. list-table::
:header-rows: 1
   * - Element
     - Standardized
     - Effect if set
   * - :cpp:var:`!icase`
     - Yes
     - Specifies that matching of regular expressions against a character container sequence shall be performed without regard to case.
   * - :cpp:var:`!nosubs`
     - Yes
     - Specifies that when a regular expression is matched against a character container sequence, no sub-expression matches are to be stored in the supplied :cpp:class:`match_results` structure.
   * - :cpp:var:`!optimize`
     - Yes
     - Specifies that the regular expression engine should pay more attention to the speed with which regular expressions are matched, and less to the speed with which regular expression objects are constructed. Otherwise it has no detectable effect on the program output. This currently has no effect in Boost.Regex.
   * - :cpp:var:`!collate`
     - Yes
     - Specifies that character ranges of the form :regexp:`[a-b]` should be locale sensitive.
   * - :cpp:var:`!newline_alt`
     - No
     - Specifies that the :regexp:`\\n` character has the same effect as the alternation operator :regexp:`|`, so that newline-separated lists behave as lists of alternatives.
   * - :cpp:var:`!no_except`
     - No
     - Prevents :cpp:class:`basic_regex` from throwing an exception when an invalid expression is encountered.
   * - :cpp:var:`!no_mod_m`
     - No
     - Normally Boost.Regex behaves as if the Perl m-modifier is on, so the assertions :regexp:`^` and :regexp:`$` match immediately after and immediately before embedded newlines respectively; setting this flag is equivalent to prefixing the expression with :regexp:`(?-m)`.
   * - :cpp:var:`!no_mod_s`
     - No
     - Normally whether Boost.Regex matches :regexp:`.` against a newline character is determined by the match flag :cpp:var:`!match_dot_not_newline`. Setting this flag is equivalent to prefixing the expression with :regexp:`(?-s)`, so :regexp:`.` never matches a newline character regardless of whether :cpp:var:`!match_dot_not_newline` is set in the match flags.
   * - :cpp:var:`!mod_s`
     - No
     - Normally whether Boost.Regex matches :regexp:`.` against a newline character is determined by the match flag :cpp:var:`!match_dot_not_newline`. Setting this flag is equivalent to prefixing the expression with :regexp:`(?s)`, so :regexp:`.` always matches a newline character regardless of whether :cpp:var:`!match_dot_not_newline` is set in the match flags.
   * - :cpp:var:`!mod_x`
     - No
     - Turns on the Perl x-modifier: unescaped whitespace in the expression is ignored.
   * - :cpp:var:`!no_empty_expressions`
     - No
     - Prohibits empty sub-expressions and alternatives.
   * - :cpp:var:`!save_subexpression_location`
     - No
     - Makes the locations of individual sub-expressions within the **original regular expression string** accessible via the :cpp:func:`~basic_regex::subexpression()` member function of :cpp:class:`!basic_regex`.
.. _ref.syntax_option_type.syntax_option_type_extended:
Options for POSIX extended regular expressions
----------------------------------------------
Exactly one of the following must always be set for :doc:`POSIX extended regular expressions <syntax_extended>`:
.. list-table::
:header-rows: 1
   * - Element
     - Standardized
     - Effect if set
   * - :cpp:var:`!extended`
     - Yes
     - Specifies that the grammar recognized by the regular expression engine is the same as that used by POSIX extended regular expressions in IEEE Std 1003.1-2001, Portable Operating System Interface (POSIX), Base Definitions and Headers, Section 9, Regular Expressions (FWD.1).
       Refer to the :doc:`POSIX extended regular expression guide <syntax_extended>` for more information.
       In addition, some Perl-style escape sequences are supported (the POSIX standard specifies that only "special" characters may be escaped; the result of using any other escape sequence is undefined).
   * - :cpp:var:`!egrep`
     - Yes
     - Specifies that the grammar recognized by the regular expression engine is the same as that used by the POSIX utility grep when given the :option:`!-E` option, in IEEE Std 1003.1-2001, Portable Operating System Interface (POSIX), Shells and Utilities, Section 4, Utilities, grep (FWD.1).
       That is to say, the same as the :doc:`POSIX extended syntax <syntax_extended>`, but with the newline character acting as an alternation character in addition to :regexp:`|`.
   * - :cpp:var:`!awk`
     - Yes
     - Specifies that the grammar recognized by the regular expression engine is that of the POSIX utility :program:`awk` in IEEE Std 1003.1-2001, Portable Operating System Interface (POSIX), Shells and Utilities, Section 4, awk (FWD.1).
       That is to say, the same as the :doc:`POSIX extended syntax <syntax_extended>`, but with escape sequences permitted inside character classes.
       In addition, some Perl-style escape sequences are supported (in fact the :program:`awk` syntax only requires :regexp:`\\a`, :regexp:`\\b`, :regexp:`\\t`, :regexp:`\\v`, :regexp:`\\f`, :regexp:`\\n` and :regexp:`\\r` to be recognised; the behaviour of all other Perl-style escape sequences is undefined by the POSIX standard, but Boost.Regex does in fact recognise them).
The following options can be combined when using POSIX extended regular expressions.

.. list-table::
   :header-rows: 1

   * - Element
     - Standard?
     - Effect when set
   * - :cpp:var:`!icase`
     - Yes
     - Specifies that regular expression matching against a character container sequence is case-insensitive.
   * - :cpp:var:`!nosubs`
     - Yes
     - Specifies that when a character container sequence is matched against the regular expression, no subexpression matches are stored in the supplied :cpp:class:`match_results` structure.
   * - :cpp:var:`!optimize`
     - Yes
     - Instructs the regular expression engine to pay more attention to the speed of regular expression matching than to the speed of constructing the regular expression object. Leaving it unset has no detectable effect on program output. Boost.Regex currently does nothing with this flag.
   * - :cpp:var:`!collate`
     - Yes
     - Specifies that character ranges of the form :regexp:`[a-b]` are locale-sensitive. This bit is on by default for POSIX extended regular expressions, but it can be turned off so that ranges are compared by code point only.
   * - :cpp:var:`!newline_alt`
     - No
     - Specifies that a :regexp:`\\n` character has the same effect as the alternation operator :regexp:`|`, so that a newline-separated list behaves as a list of alternatives.
   * - :cpp:var:`!no_escape_in_lists`
     - No
     - When set, the escape character is treated as an ordinary character inside a list, so :regexp:`[\\b]` matches either :regex-input:`\\` or :regex-input:`b`. This bit is on by default for POSIX extended regular expressions, but it can be turned off so that escapes are honored inside lists.
   * - :cpp:var:`!no_bk_refs`
     - No
     - When set, backreferences are disabled. This bit is on by default for POSIX extended regular expressions, but it can be turned off to enable backreferences.
   * - :cpp:var:`!no_except`
     - No
     - Prevents :cpp:class:`basic_regex` from throwing an exception when an invalid expression is encountered.
   * - :cpp:var:`!save_subexpression_location`
     - No
     - Makes the locations of individual subexpressions **within the original regular expression string** accessible through the :cpp:func:`~basic_regex::subexpression()` member function of :cpp:class:`!basic_regex`.
.. _ref.syntax_option_type.syntax_option_type_basic:

Options for POSIX Basic Regular Expressions
-------------------------------------------

Exactly one of the following must be set when using POSIX basic regular expressions.

.. list-table::
   :header-rows: 1

   * - Element
     - Standard?
     - Effect when set
   * - :cpp:var:`!basic`
     - Yes
     - Specifies that the regular expression engine follows the same grammar used by :doc:`POSIX basic regular expressions <syntax_basic>` in IEEE Std 1003.1-2001, Portable Operating System Interface (POSIX), Base Definitions and Headers, Section 9, Regular Expressions (FWD.1).
   * - :cpp:var:`!sed`
     - No
     - As above.
   * - :cpp:var:`!grep`
     - Yes
     - Specifies that the regular expression engine follows the same grammar used by the POSIX :program:`grep` utility in IEEE Std 1003.1-2001, Portable Operating System Interface (POSIX), Shells and Utilities, Section 4, Utilities, grep (FWD.1).
       That is, the same as the :doc:`POSIX basic syntax <syntax_basic>`, except that a newline character acts as an alternation character; the expression is treated as a newline-separated list of alternatives.
   * - :cpp:var:`!emacs`
     - No
     - Specifies that the grammar used is a superset of the :doc:`POSIX basic syntax <syntax_basic>` as used by the emacs program.
The following options can be combined when using POSIX basic regular expressions.

.. list-table::
   :header-rows: 1

   * - Element
     - Standard?
     - Effect when set
   * - :cpp:var:`!icase`
     - Yes
     - Specifies that regular expression matching against a character container sequence is case-insensitive.
   * - :cpp:var:`!nosubs`
     - Yes
     - Specifies that when a character container sequence is matched against the regular expression, no subexpression matches are stored in the supplied :cpp:class:`match_results` structure.
   * - :cpp:var:`!optimize`
     - Yes
     - Instructs the regular expression engine to pay more attention to the speed of regular expression matching than to the speed of constructing the regular expression object. Leaving it unset has no detectable effect on program output. Boost.Regex currently does nothing with this flag.
   * - :cpp:var:`!collate`
     - Yes
     - Specifies that character ranges of the form :regexp:`[a-b]` are locale-sensitive. This bit is on by default for :doc:`POSIX basic regular expressions <syntax_basic>`, but it can be turned off so that ranges are compared by code point only.
   * - :cpp:var:`!newline_alt`
     - Yes
     - Specifies that a :regexp:`\\n` character has the same effect as the alternation operator :regexp:`|`, so that a newline-separated list behaves as a list of alternatives. With the :cpp:var:`!grep` option this bit is always on.
   * - :cpp:var:`!no_char_classes`
     - No
     - When set, character classes such as :regexp:`[[:alnum:]]` are not allowed.
   * - :cpp:var:`!no_escape_in_lists`
     - No
     - When set, the escape character is treated as an ordinary character inside a list, so :regexp:`[\\b]` matches either :regex-input:`\\` or :regex-input:`b`. This bit is on by default for :doc:`POSIX basic regular expressions <syntax_basic>`, but it can be turned off so that escapes are honored inside lists.
   * - :cpp:var:`!no_intervals`
     - No
     - When set, bounded repeats such as :regexp:`{2,3}` are not allowed.
   * - :cpp:var:`!bk_plus_qm`
     - No
     - When set, :regexp:`\\?` acts as a zero-or-one repeat operator and :regexp:`\\+` acts as a one-or-more repeat operator.
   * - :cpp:var:`!bk_vbar`
     - No
     - When set, :regexp:`\\|` acts as the alternation operator.
   * - :cpp:var:`!no_except`
     - No
     - Prevents :cpp:class:`basic_regex` from throwing an exception when an invalid expression is encountered.
   * - :cpp:var:`!save_subexpression_location`
     - No
     - Makes the locations of individual subexpressions **within the original regular expression string** accessible through the :cpp:func:`~basic_regex::subexpression()` member function of :cpp:class:`!basic_regex`.
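The ``newline_alt`` behavior described above, treating a newline-separated list as a list of alternatives, can be emulated in most regex engines by joining the lines with ``|``. A stdlib sketch using Python's ``re`` (not Boost.Regex), purely as an illustration:

```python
import re

words = "cat\ndog\nbird"  # a newline-separated list of alternatives

# Emulate newline_alt: turn each line into one branch of an alternation,
# escaping the fragments so they are matched literally.
pattern = re.compile("|".join(map(re.escape, words.splitlines())))

print([bool(pattern.fullmatch(w)) for w in ("dog", "fish")])  # [True, False]
```

With Boost.Regex itself, setting ``newline_alt`` (or using ``grep``/``egrep``) makes the engine perform this interpretation directly on the original string.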
.. _ref.syntax_option_type.syntax_option_type_literal:
直値文字列のオプション
----------------------
直値文字列では、以下のいずれか 1 つを必ず設定しなければならない。
.. list-table::
:header-rows: 1
* - 要素
- 標準か
- 設定した場合の効果
* - :cpp:var:`!literal`
- ○
- 文字列を直値として扱う(特殊文字が存在しない)。
:cpp:var:`!literal` フラグを使用する場合は、以下のオプションを組み合わせることができる。
.. list-table::
:header-rows: 1
* - 要素
- 標準か
- 設定した場合の効果
* - :cpp:var:`!icase`
- ○
- 文字コンテナシーケンスに対する正規表現マッチにおいて、大文字小文字を区別しないことを指定する。
* - :cpp:var:`!optimize`
- ○
- 正規表現エンジンに対し、正規表現オブジェクトの構築速度よりも正規表現マッチの速度についてより多くの注意を払うように指定する。設定しない場合でもプログラムの出力に検出可能な効果はない。Boost.Regex では現時点では何も起こらない。
| 33.752113 | 327 | 0.686864 |
0404615912f39363f410267e8cb1618799e97f59 | 136 | rst | reStructuredText | docs/ref/index.rst | yawd/yawd-elfinder | 955d39c8194ee61f1e24f5cd5e4530bb0e6e9b3c | [
"BSD-3-Clause"
] | 12 | 2015-03-26T13:06:11.000Z | 2019-04-30T18:30:39.000Z | docs/ref/index.rst | ppetrid/yawd-elfinder | 955d39c8194ee61f1e24f5cd5e4530bb0e6e9b3c | [
"BSD-3-Clause"
] | 2 | 2016-02-14T23:53:28.000Z | 2016-12-09T21:15:14.000Z | docs/ref/index.rst | ppetrid/yawd-elfinder | 955d39c8194ee61f1e24f5cd5e4530bb0e6e9b3c | [
"BSD-3-Clause"
] | 24 | 2015-03-25T11:03:01.000Z | 2018-12-04T10:14:11.000Z | *********
Reference
*********
.. toctree::
:maxdepth: 1
settings
other-settings
fields
connector
drivers
utils | 10.461538 | 17 | 0.536765 |
f5e373bf4876b72df54b1351971fb33071159b1a | 373 | rst | reStructuredText | docs/source/vbr.utils.rst | a2cps/python-vbr | 9d5d4480386d0530450d59157e0da6937320f928 | [
"BSD-3-Clause"
] | 1 | 2021-05-26T19:08:29.000Z | 2021-05-26T19:08:29.000Z | docs/source/vbr.utils.rst | a2cps/python-vbr | 9d5d4480386d0530450d59157e0da6937320f928 | [
"BSD-3-Clause"
] | 7 | 2021-05-04T13:12:39.000Z | 2022-03-09T21:04:33.000Z | docs/source/vbr.utils.rst | a2cps/python-vbr | 9d5d4480386d0530450d59157e0da6937320f928 | [
"BSD-3-Clause"
] | 2 | 2021-04-20T14:46:52.000Z | 2021-06-07T20:28:28.000Z | vbr.utils package
=================
.. automodule:: vbr.utils
:members:
:undoc-members:
:show-inheritance:
Subpackages
-----------
.. toctree::
:maxdepth: 2
vbr.utils.helpers
vbr.utils.redcaptasks
Submodules
----------
vbr.utils.time module
---------------------
.. automodule:: vbr.utils.time
:members:
:undoc-members:
:show-inheritance:
| 13.321429 | 30 | 0.568365 |
247cb27da0e12f858956508cca5b852afbb8d704 | 1,020 | rst | reStructuredText | docs/locale/en/source/getting-started/firmware-upload.rst | asvin-io/documentation | dc7d648d9f9a77f0fc45fb0d1940b8e22d5c4769 | [
"Apache-2.0"
] | 1 | 2020-09-15T08:08:53.000Z | 2020-09-15T08:08:53.000Z | docs/locale/en/source/getting-started/firmware-upload.rst | asvin-io/documentation | dc7d648d9f9a77f0fc45fb0d1940b8e22d5c4769 | [
"Apache-2.0"
] | null | null | null | docs/locale/en/source/getting-started/firmware-upload.rst | asvin-io/documentation | dc7d648d9f9a77f0fc45fb0d1940b8e22d5c4769 | [
"Apache-2.0"
] | 2 | 2020-09-15T08:08:36.000Z | 2020-09-24T09:51:45.000Z | Firmware Upload
===============
If you have completed the device registration process, you should have a device in your account. Next, you need firmware for the device. This can
be achieved in two steps. First, create a firmware group by clicking the create new file group button under the Firmware menu. Once you have a firmware
group with the desired name, click the show button or the group name. It will show you the list of firmware in the group, which is of course empty for now.
Second, create a simple firmware file and click the Upload new file button to upload it. The steps are shown in the video below.
.. raw:: html
<video width="710" autoplay muted loop>
<source src="../_static/videos/firmware-upload.m4v" type="video/mp4">
Your browser does not support the video tag.
</video>
On completion of the task you will have a firmware group with firmware in it. The firmware metadata will be stored on the distributed ledger, and the
firmware itself will be saved on the private distributed network powered by the IPFS protocol.
79b2df49322ec3e750b7fa0481e849a24d1e9c0e | 6,501 | rst | reStructuredText | docs/source/container/Level3/index.rst | makotow/NetAppDigitalTransformationLab | 7b832bc1660a0cbeaf5da340bf6d2767838fe7c9 | [
"MIT"
] | 5 | 2018-07-04T01:34:40.000Z | 2020-02-14T20:52:20.000Z | docs/source/container/Level3/index.rst | makotow/NetAppDigitalTransformationLab | 7b832bc1660a0cbeaf5da340bf6d2767838fe7c9 | [
"MIT"
] | 8 | 2018-08-27T12:48:50.000Z | 2019-09-17T15:36:53.000Z | docs/source/container/Level3/index.rst | makotow/NetAppDigitalTransformationLab | 7b832bc1660a0cbeaf5da340bf6d2767838fe7c9 | [
"MIT"
] | 3 | 2018-07-19T04:43:27.000Z | 2020-10-16T05:35:01.000Z | ==============================================================
Level 3: CI/CDパイプラインを構築
==============================================================
目的・ゴール: コンテナ化したアプリケーションのCICDを実現する
=============================================================
アプリケーションをコンテナ化したら、常にリリース可能な状態、自動でデプロイメントを出来る仕組みをつくるのが迅速な開発をするために必要になります。
そのためのCI/CDパイプラインを作成するのがこのレベルの目標です。
以下の図はこのレベルでCICDパイプラインを実現するためのツールを表したものになります。
実現するためには様々なツールが存在します。以下のツールはあくまで1例と捉えてください。
.. image:: resources/cicd_pipeline.png
登場しているツールの以下のように分類でき、それぞれ代表的なものをキーワードとして上げます。
- SCM: Git, GitHub, GitLab
- CICD: Jenkins, JenkinsX, Spinnaker, GitLab Runner
- アーティファクト管理: JFrog
- Image Registry: Harbor, DockerRegistry, GitLab
- Package管理: Helm
本ラボでは Level1, Level2 で行ったオペレーションをベースにCI/CDパイプラインを構築します。
Gitにソースがコミットされたら自動でテスト・ビルドを実現するためのツール(Jenkins)をkubernetes上へデプロイ、及び外部公開をします。
そして、Jenkinsがデプロイできたら実際にアプリケーションの変更を行い自動でデプロイするところまでを目指します。
流れ
=============================================================
#. Jenkins をインストールする
#. Jenkins 内部でジョブを定義する。
#. あるアクションをトリガーにビルド、テストを自動実行する。
#. 自動でk8sクラスタにデプロイメントできるようにする。
CI/CDパイプラインの定義
=============================================================
このラボでのCI/CDパイプラインの定義は以下を想定しています。
* アプリケーションビルド
* コンテナイメージのビルド
* レジストリへコンテナイメージのpush
* テスト実行
* k8sへアプリケーションデプロイ
GitはGitLabを共有で準備していますが、使いなれているサービス(GitHub等)があればそちらを使って頂いても構いません。
まずは、Jenkinsをkubernetes上にデプロイしてみましょう。
Git自体も併せてデプロイしてみたいということであればGitLabをデプロイすることをおすすめします。
GitLabを使えばコンテナのCI/CDパイプライン、構成管理、イメージレジストリを兼ねて使用することができます。
Jenkinsのデプロイ方法について
=============================================================
CI/CDパイプラインを実現するためのツールとしてJenkinsが非常に有名であることは周知の事実です。
このラボではJenkinsを使用しCI/CDを実現します。
まずは、各自Jenkinsをデプロイします。
方法としては3つ存在します。
#. Helm Chartでデプロイする方法 (手軽にインストールしたい人向け)
#. Level1,2と同じようにyamlファイルを作成し、デプロイする方法(仕組みをより深く知りたい人向け)
#. Kubernetes用にCI/CDを提供するJenkins Xをデプロイする方法(新しい物を使いたい人向け)
今回は最初のHelmでデプロイするバージョンを記載しました。
好みのもの、挑戦したい内容に沿って選択してください。
オリジナルでyamlファイルを作成する場合は以下のサイトが参考になります。
https://cloud.google.com/solutions/jenkins-on-kubernetes-engine
Helmを使ってJenkinsをデプロイ
=============================================================
.. include:: jenkins-install-with-helm.rst
Helm以外でJenkinsをデプロイした場合
=============================================================
本セクションに記載してあることはオプションです。
必要に応じて実施してください。
外部にアプリケーションを公開する方法として ``Ingress`` があります。
Helmを使ってJenkinsをインストー時にvalues.yamlで設定を行うことでIngressが作成されます。
それ以外の手法を取った場合は、kubernetesクラスタ外のネットワークからアクセスできるようにIngressを作成しアクセスする方法があります。
Ingressの導入についてはLevel4 運用編の :doc:`../Level4/ingress/ingress` にまとめました。
Jenkinsの設定をする
=============================================================
.. include:: jenkins-configuration.rst
Jenkins Pipelineの作成
=============================================================
* テスト実行
* アプリケーションビルド
* コンテナイメージのビルド
* レジストリへコンテナイメージのpush
* アプリケーションデプロイ
上記のようなパイプラインを作成にはJenkins pipeline機能が活用できます。
- https://jenkins.io/doc/book/pipeline/
- https://github.com/jenkinsci/kubernetes-plugin/blob/master/README.md
ここではテンプレートを準備しました、上記の様なパイプラインを実装してみましょう。
Jenkins ではパイプラインを構築するために2つの記述方法があります。
- Declarative pipeline syntax https://jenkins.io/doc/book/pipeline/#declarative-pipeline-fundamentals
- Scripted pipeline syntax https://jenkins.io/doc/book/pipeline/#scripted-pipeline-fundamentals
それぞれの違いついてはこちら。
- https://jenkins.io/doc/book/pipeline/#declarative-versus-scripted-pipeline-syntax
.. literalinclude:: resources/jenkins/jenkinsfile
:language: groovy
:caption: Jenkins pipelineのフォーマット
.. literalinclude:: resources/jenkins/KubernetesPod.yaml
:language: yaml
:caption: Jenkins pipelineをkubernetesで動作させるコンテナのテンプレートを定義
Jenkins pipeline の作成が完了したら任意のGitリポジトリにpushします。
以降のJenkins Pipelineの実行にJenkinsfileを使用します。
アプリケーションの変更を検知してデプロイメント可能にする
=============================================================
CI/CDのパイプラインを作成したら実際にアプリケーションの変更をトリガー(ソースコードの変更、Gitリポジトリへのpush等)としてk8sへアプリケーションをデプロイします。
ポリシーとして大きく2つに別れます、参考までに以下に記載いたします。
* デプロイ可能な状態までにし、最後のデプロイメントは人が実施する(クリックするだけ)
* デプロイメントまでを完全自動化する
実際にkubernetes環境へのデプロイができたかの確認とアプリケーションが稼働しているかを確認します。
今回はサンプルとしてJenkinsのBlueOcean pluginを使用してPipelineを作成します。
.. image:: resources/jenkins_blueocean.png
BlueOcean plugin を使用するとウィザード形式でPipelineを作成することができます。
各入力値については以下のURLにてどのような形式で入力されるかの記載があります。
- https://jenkins.io/doc/book/blueocean/creating-pipelines/
コンテナをCI/CDする方法 Helmを使ってみる
=============================================================
コンテナのCI/CDではいくつか方法があります。
ここではコンテナをCI/CDするために必要な検討事項を記載するとともに
個別のアプリケーションデプロイメントからHelm Chartを使ったデプロイメントに変更します。
作成したコンテナをHelm Chartを使ってデプロイするようにします。
Helm Chartの開発ガイドは以下のURLを確認ください。
- https://docs.helm.sh/chart_template_guide/#the-chart-template-developer-s-guide
他にも以下のようなCI/CDを行いやすくする構成管理・パッケージマネジメントのツールが存在しています。
- Kustomize
- Draft
- GitKube
- Skaffold
Evolving deployment further
=============================================================

As your CI/CD process matures, you end up in a state where you can always release.
Once there, you need ways to deploy to production quickly and to minimize downtime.
These practices and ideas are not new, but container technology and the Kubernetes scheduler make them far easier to realize than in traditional environments.
We introduce them here under the keywords Blue/Green deployment and Canary release.

Both can be realized with the features of Istio, the service mesh that appears in :doc:`../Level4/index` and :doc:`../Level5/index`.
NetApp Kubernetes Service also provides everything from deploying Kubernetes clusters to visually operating Istio-based routing; a dedicated section is provided in :doc:`../Level4/stack-management/index`.

.. tip::

   "CD" is used with two different meanings; tell them apart from context, or confirm which one is meant.

   * Continuous Delivery: automate everything up to producing something that is always deployable; the final deployment to production is performed manually.
   * Continuous Deployment: automate everything up to and including deployment to production.

Blue/Green deployment
-------------------------------------------------------------

Traditionally, in most cases one deployed to a single environment and rolled back when something went wrong. As a further evolution, you keep an environment you can always switch back to, enabling rapid rollback:
both the new and the old version stay deployed, and a router switches between them.

Many organizations arrived at this style of operation on their own; it was described under the name Blue-Green deployment in 2010.

- https://martinfowler.com/bliki/BlueGreenDeployment.html

There is no single definitive implementation of Blue/Green; realization methods and switch-over timing vary, and it exists as a practice.
Because two environments are maintained and switched over at some point, topics such as the DB migration strategy do need consideration.

Canary
-------------------------------------------------------------

A canary release is a deployment style closely related to Blue/Green deployment.
Where Blue/Green prepares a mechanism to switch back to the old version at any moment, a canary release deploys the new version while routing a chosen ratio of traffic between the new and old versions.

Here, multiple versions of the application coexist in a single environment rather than in two environments, so how DB data is handled needs to be thought through.
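The weighted traffic split at the heart of a canary release can be sketched in a few lines. This is a toy stdlib-only router; the 10% canary weight and the version names are arbitrary assumptions, and a real split would be done by the mesh (e.g. Istio), not application code:

```python
import random

def pick_version(rng, canary_weight=0.1):
    """Route one request: canary_weight of traffic goes to the new version."""
    return "v2-canary" if rng.random() < canary_weight else "v1-stable"

rng = random.Random(42)  # seeded so the split is reproducible
hits = [pick_version(rng) for _ in range(1000)]

# Roughly 10% of the 1000 requests land on the canary version.
print(hits.count("v2-canary"))
```

Gradually raising ``canary_weight`` toward 1.0 while watching error rates is the essence of a canary rollout.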
Summary
=============================================================

In this lab we took on building a CI/CD pipeline for a containerized application.

You learned how to use Helm, which is needed to install the Jenkins/GitLab used to build the pipeline.

We built a simple pipeline in this lab; the processing inside the pipeline can be extended well beyond what you implemented here, with all sorts of additional steps.

This completes Level 3.
3136e33368d497ba81234016a045182b928ba386 | 92 | rst | reStructuredText | docs/source/_autosummary/panstamps.utKit.rst | djones1040/panstamps | b9e67b4dc168846ddb36e4b5f143c136660a0535 | [
"MIT"
] | null | null | null | docs/source/_autosummary/panstamps.utKit.rst | djones1040/panstamps | b9e67b4dc168846ddb36e4b5f143c136660a0535 | [
"MIT"
] | null | null | null | docs/source/_autosummary/panstamps.utKit.rst | djones1040/panstamps | b9e67b4dc168846ddb36e4b5f143c136660a0535 | [
"MIT"
] | null | null | null | panstamps.utKit (*module*)
===============
.. automodule:: panstamps.utKit
:members:
| 11.5 | 31 | 0.554348 |
f732fee755ed5da2f37e300e3336a31ef3fe75d3 | 74 | rst | reStructuredText | Misc/NEWS.d/next/Core and Builtins/2017-10-06-02-10-48.bpo-31708.66CCVU.rst | vyas45/cpython | 02e82a0596121e7b6fcd1142b60c744e8e254d41 | [
"PSF-2.0"
] | null | null | null | Misc/NEWS.d/next/Core and Builtins/2017-10-06-02-10-48.bpo-31708.66CCVU.rst | vyas45/cpython | 02e82a0596121e7b6fcd1142b60c744e8e254d41 | [
"PSF-2.0"
] | null | null | null | Misc/NEWS.d/next/Core and Builtins/2017-10-06-02-10-48.bpo-31708.66CCVU.rst | vyas45/cpython | 02e82a0596121e7b6fcd1142b60c744e8e254d41 | [
"PSF-2.0"
] | null | null | null | Allow use of asynchronous generator expressions in synchronous functions.
| 37 | 73 | 0.864865 |
731e4f0cbac875aa6234d1e2b51f2ba9277eae4c | 587 | rst | reStructuredText | doc/source/transformations/Histogram.rst | gnafit/gna | c1a58dac11783342c97a2da1b19c97b85bce0394 | [
"MIT"
] | 5 | 2019-10-14T01:06:57.000Z | 2021-02-02T16:33:06.000Z | doc/source/transformations/Histogram.rst | gnafit/gna | c1a58dac11783342c97a2da1b19c97b85bce0394 | [
"MIT"
] | null | null | null | doc/source/transformations/Histogram.rst | gnafit/gna | c1a58dac11783342c97a2da1b19c97b85bce0394 | [
"MIT"
] | null | null | null | .. _Histogram:
Histogram
~~~~~~~~~
Description
^^^^^^^^^^^
'Static' transformation. Represents the histogram.
See also ``HistEdges`` and ``Rebin`` transformations.
Arguments
^^^^^^^^^
* ``size_t`` — number of bins :math:`n`
* ``double*`` — array with bin edges of size :math:`n+1`
* ``double*`` — array with bin heights of size :math:`n`
In Python ``Histogram`` instance may be constructed from two numpy arrays:
.. code-block:: ipython
from gna.constructors import Histogram
h = Histogram(edges, data)
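The ``edges`` and ``data`` arrays above can be produced, for example, with NumPy. Only the array construction is shown here (assuming NumPy is available); ``Histogram`` itself still comes from ``gna.constructors`` as in the snippet above:

```python
import numpy as np

# Bin a sample into 4 equal-width bins; np.histogram returns the bin
# heights and the n+1 bin edges expected by Histogram(edges, data).
sample = np.array([0.1, 0.2, 0.4, 0.5, 0.9])
data, edges = np.histogram(sample, bins=4, range=(0.0, 1.0))

print(edges)  # 5 edges for 4 bins: [0.  0.25 0.5  0.75 1.  ]
print(data)   # heights per bin: [2 1 1 1]
```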
Outputs
^^^^^^^
1) ``hist.hist`` — static array of kind Histogram.
| 18.935484 | 74 | 0.654174 |
2aa41470be382c052eb0db441816bd8875751b5b | 6,009 | rst | reStructuredText | docs/Chapter4/Design.rst | onap/vnfrqts-requirements | 6a0388cd6e07f9d002cb21fbd0e46f98767ee442 | [
"Apache-2.0",
"CC-BY-4.0"
] | 3 | 2018-08-13T12:10:14.000Z | 2020-04-30T17:36:56.000Z | docs/Chapter4/Design.rst | onap/vnfrqts-requirements | 6a0388cd6e07f9d002cb21fbd0e46f98767ee442 | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | docs/Chapter4/Design.rst | onap/vnfrqts-requirements | 6a0388cd6e07f9d002cb21fbd0e46f98767ee442 | [
"Apache-2.0",
"CC-BY-4.0"
] | 1 | 2021-10-15T15:00:04.000Z | 2021-10-15T15:00:04.000Z | .. Modifications Copyright © 2017-2018 AT&T Intellectual Property.
.. Licensed under the Creative Commons License, Attribution 4.0 Intl.
(the "License"); you may not use this documentation except in compliance
with the License. You may obtain a copy of the License at
.. https://creativecommons.org/licenses/by/4.0/
.. Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
VNF Design
----------
Services are composed of VNFs and common components and are designed to
be agnostic of the location to leverage capacity where it exists in the
Network Cloud. VNFs can be instantiated in any location that meets the
performance and latency requirements of the service.
A key design principle for virtualizing services is decomposition of
network functions using NFV concepts into granular VNFs. This enables
instantiating and customizing only essential functions as needed for the
service, thereby making service delivery more nimble. It provides
flexibility of sizing and scaling and also provides flexibility with
packaging and deploying VNFs as needed for the service. It enables
grouping functions in a common cloud data center to minimize
inter-component latency. The VNFs should be designed with a goal of
being modular and reusable to enable using best-in-breed vendors.
Section 4.1 VNF Design in *VNF Guidelines* describes
the overall guidelines for designing VNFs from VNF Components (VNFCs).
Below are more detailed requirements for composing VNFs.
VNF Design Requirements
.. req::
:id: R-58421
:target: VNF
:keyword: SHOULD
The VNF **SHOULD** be decomposed into granular re-usable VNFCs.
.. req::
:id: R-82223
:target: VNF
:keyword: MUST
The VNF **MUST** be decomposed if the functions have
significantly different scaling characteristics (e.g., signaling
versus media functions, control versus data plane functions).
.. req::
:id: R-16496
:target: VNF
:keyword: MUST
The VNF **MUST** enable instantiating only the functionality that
is needed for the decomposed VNF (e.g., if transcoding is not needed it
should not be instantiated).
.. req::
:id: R-02360
:target: VNF
:keyword: MUST
The VNFC **MUST** be designed as a standalone, executable process.
.. req::
:id: R-34484
:target: VNF
:keyword: SHOULD
The VNF **SHOULD** create a single component VNF for VNFCs
that can be used by other VNFs.
.. req::
:id: R-23035
:target: VNF
:keyword: MUST
The VNF **MUST** be designed to scale horizontally (more
instances of a VNF or VNFC) and not vertically (moving the existing
instances to larger VMs or increasing the resources within a VM)
to achieve effective utilization of cloud resources.
.. req::
:id: R-30650
:target: VNF
:keyword: MUST
The VNF **MUST** utilize cloud provided infrastructure and
VNFs (e.g., virtualized Local Load Balancer) as part of the VNF so
that the cloud can manage and provide a consistent service resiliency
and methods across all VNFs.
.. req::
:id: R-12709
:target: VNF
:keyword: SHOULD
The VNFC **SHOULD** be independently deployed, configured,
upgraded, scaled, monitored, and administered by ONAP.
.. req::
:id: R-37692
:target: VNF
:keyword: MUST
The VNFC **MUST** provide API versioning to allow for
independent upgrades of VNFC.
.. req::
:id: R-86585
:target: VNF
:keyword: SHOULD
The VNFC **SHOULD** minimize the use of state within
a VNFC to facilitate the movement of traffic from one instance
to another.
.. req::
:id: R-65134
:target: VNF
:keyword: SHOULD
The VNF **SHOULD** maintain state in a geographically
redundant datastore that may, in fact, be its own VNFC.
.. req::
:id: R-75850
:target: VNF
:keyword: SHOULD
The VNF **SHOULD** decouple persistent data from the VNFC
and keep it in its own datastore that can be reached by all instances
of the VNFC requiring the data.
.. req::
:id: R-88199
:target: VNF
:keyword: MUST
The VNF **MUST** utilize a persistent datastore service that
can meet the data performance/latency requirements. (For example:
Datastore service could be a VNFC in VNF or a DBaaS in the Cloud
execution environment)
.. req::
:id: R-99656
:target: VNF
:keyword: MUST
The VNF **MUST** NOT terminate stable sessions if a VNFC
instance fails.
.. req::
:id: R-84473
:target: VNF
:keyword: MUST
The VNF **MUST** enable DPDK in the guest OS for VNF's requiring
high packets/sec performance. High packet throughput is defined as greater
than 500K packets/sec.
.. req::
:id: R-54430
:target: VNF
:keyword: MUST
The VNF **MUST** use the NCSP's supported library and compute
flavor that supports DPDK to optimize network efficiency if using DPDK. [#4.1.1]_
.. req::
:id: R-18864
:target: VNF
:keyword: MUST NOT
The VNF **MUST NOT** use technologies that bypass virtualization
layers (such as SR-IOV) unless approved by the NCSP (e.g., if necessary
to meet functional or performance requirements).
.. req::
:id: R-64768
:target: VNF
:keyword: MUST
The VNF **MUST** limit the size of application data packets
to no larger than 9000 bytes for SDN network-based tunneling when
guest data packets are transported between tunnel endpoints that
support guest logical networks.
.. req::
:id: R-74481
:target: VNF
:keyword: MUST NOT
The VNF **MUST NOT** require the use of a dynamic routing
protocol unless necessary to meet functional requirements.
.. [#4.1.1]
Refer to NCSP’s Network Cloud specification
| 28.889423 | 85 | 0.70461 |
6de815fe76ef3806249d075ade45f0144b7b700a | 7,421 | rst | reStructuredText | hc-venv/lib/python3.6/site-packages/croniter-0.3.20.dist-info/DESCRIPTION.rst | niti15/heroku | 21233761b3fc3113ce463c52af2ca6290d13e057 | [
"BSD-3-Clause"
] | null | null | null | hc-venv/lib/python3.6/site-packages/croniter-0.3.20.dist-info/DESCRIPTION.rst | niti15/heroku | 21233761b3fc3113ce463c52af2ca6290d13e057 | [
"BSD-3-Clause"
] | 1 | 2020-06-05T19:35:10.000Z | 2020-06-05T19:35:10.000Z | hc-venv/lib/python3.6/site-packages/croniter-0.3.20.dist-info/DESCRIPTION.rst | cogzidel/yourhealthchecks | a17315b523beef2e56e657227103212082aa84a7 | [
"BSD-3-Clause"
] | 1 | 2018-11-12T03:59:00.000Z | 2018-11-12T03:59:00.000Z | Introduction
============
.. contents::
croniter provides iteration for the datetime object with a cron like format.
::
_ _
___ _ __ ___ _ __ (_) |_ ___ _ __
/ __| '__/ _ \| '_ \| | __/ _ \ '__|
| (__| | | (_) | | | | | || __/ |
\___|_| \___/|_| |_|_|\__\___|_|
Website: https://github.com/kiorky/croniter
Travis badge
=============
.. image:: https://travis-ci.org/kiorky/croniter.png
:target: http://travis-ci.org/kiorky/croniter
Usage
============
A simple example::
>>> from croniter import croniter
>>> from datetime import datetime
>>> base = datetime(2010, 1, 25, 4, 46)
>>> iter = croniter('*/5 * * * *', base) # every 5 minutes
>>> print iter.get_next(datetime) # 2010-01-25 04:50:00
>>> print iter.get_next(datetime) # 2010-01-25 04:55:00
>>> print iter.get_next(datetime) # 2010-01-25 05:00:00
>>>
>>> iter = croniter('2 4 * * mon,fri', base) # 04:02 on every Monday and Friday
>>> print iter.get_next(datetime) # 2010-01-26 04:02:00
>>> print iter.get_next(datetime) # 2010-01-30 04:02:00
>>> print iter.get_next(datetime) # 2010-02-02 04:02:00
>>>
>>> iter = croniter('2 4 1 * wed', base) # 04:02 on every Wednesday OR on 1st day of month
>>> print iter.get_next(datetime) # 2010-01-27 04:02:00
>>> print iter.get_next(datetime) # 2010-02-01 04:02:00
>>> print iter.get_next(datetime) # 2010-02-03 04:02:00
>>>
>>> iter = croniter('2 4 1 * wed', base, day_or=False) # 04:02 on every 1st day of the month if it is a Wednesday
>>> print iter.get_next(datetime) # 2010-09-01 04:02:00
>>> print iter.get_next(datetime) # 2010-12-01 04:02:00
>>> print iter.get_next(datetime) # 2011-06-01 04:02:00
>>> iter = croniter('0 0 * * sat#1,sun#2', base)
>>> print iter.get_next(datetime) # datetime.datetime(2010, 2, 6, 0, 0)
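As a stdlib-only illustration of what the ``*/5 * * * *`` schedule above computes, the next firing time is simply the next multiple of five minutes after the base time; ``next_every_5_minutes`` is a hypothetical helper written for this sketch, not part of croniter:

```python
from datetime import datetime, timedelta

def next_every_5_minutes(base):
    """Next time matching the cron schedule */5 * * * * strictly after base."""
    # Drop seconds/microseconds, then step forward to the next 5-minute mark.
    t = base.replace(second=0, microsecond=0)
    return t + timedelta(minutes=5 - t.minute % 5)

print(next_every_5_minutes(datetime(2010, 1, 25, 4, 46)))  # 2010-01-25 04:50:00
```

croniter generalizes this idea to every cron field, for both ``get_next`` and ``get_prev``.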
All you need to know is how to use the constructor and the ``get_next``
method, the signature of these methods are listed below::
>>> def __init__(self, cron_format, start_time=time.time(), day_or=True)
croniter iterates along with ``cron_format`` from ``start_time``.
``cron_format`` is **min hour day month day_of_week**, you can refer to
http://en.wikipedia.org/wiki/Cron for more details. The ``day_or``
switch is used to control how croniter handles **day** and **day_of_week**
entries. Default option is the cron behaviour, which connects those
values using **OR**. If the switch is set to False, the values are connected
using **AND**. This behaves like fcron and enables you to e.g. define a job that
executes each 2nd friday of a month by setting the days of month and the
weekday.
::
>>> def get_next(self, ret_type=float)
get_next calculates the next value according to the cron expression and
returns an object of type ``ret_type``. ``ret_type`` should be a ``float`` or a
``datetime`` object.
Supported added for ``get_prev`` method. (>= 0.2.0)::
>>> base = datetime(2010, 8, 25)
>>> itr = croniter('0 0 1 * *', base)
>>> print itr.get_prev(datetime) # 2010-08-01 00:00:00
>>> print itr.get_prev(datetime) # 2010-07-01 00:00:00
>>> print itr.get_prev(datetime) # 2010-06-01 00:00:00
You can validate your crons using ``is_valid`` class method. (>= 0.3.18)::
>>> croniter.is_valid('0 0 1 * *') # True
>>> croniter.is_valid('0 wrong_value 1 * *') # False
About DST
=========
Be sure to init your croniter instance with a TZ aware datetime for this to work!::

    >>> import pytz
    >>> tz = pytz.timezone("Europe/Paris")  # any tz database zone works here
    >>> local_date = tz.localize(datetime(2017, 3, 26))
    >>> val = croniter('0 0 * * *', local_date).get_next(datetime)
Develop this package
====================
::
git clone https://github.com/kiorky/croniter.git
cd croniter
virtualenv --no-site-packages venv
. venv/bin/activate
pip install --upgrade -r requirements/test.txt
py.test src
Make a new release
====================
We use zest.fullreleaser, a great release infrastructure.
Do and follow these instructions
::
. venv/bin/activate
pip install --upgrade -r requirements/release.txt
fullrelease
Contributors
===============
Thanks to all who have contributed to this project!
If you have contributed and your name is not listed below please let me know.
- mrmachine
- Hinnack
- shazow
- kiorky
- jlsandell
- mag009
- djmitche
- GreatCombinator
- chris-baynes
- ipartola
- yuzawa-san
Changelog
==============
0.3.20 (2017-11-06)
-------------------
- More DST fixes
[Kevin Rose <kbrose@github>]
0.3.19 (2017-08-31)
-------------------
- fix #87: backward dst changes
[kiorky]
0.3.18 (2017-08-31)
-------------------
- Add is valid method, refactor errors
[otherpirate, Mauro Murari <mauro_murari@hotmail.com>]
0.3.17 (2017-05-22)
-------------------
- DOW occurence sharp style support.
[kiorky, Kengo Seki <sekikn@apache.org>]
0.3.16 (2017-03-15)
-------------------
- Better test suite [mrcrilly@github]
- DST support [kiorky]
0.3.15 (2017-02-16)
-------------------
- fix bug around multiple conditions and range_val in
_get_prev_nearest_diff.
[abeja-yuki@github]
0.3.14 (2017-01-25)
-------------------
- issue #69: added day_or option to change behavior when day-of-month and
day-of-week is given
[Andreas Vogl <a.vogl@hackner-security.com>]
0.3.13 (2016-11-01)
-------------------
- `Real fix for #34 <https://github.com/taichino/croniter/pull/73>`_
[kiorky@github]
- `Modernize test infra <https://github.com/taichino/croniter/pull/72>`_
[kiorky@github]
- `Release as a universal wheel <https://github.com/kiorky/croniter/pull/16>`_
[adamchainz@github]
- `Raise ValueError on negative numbers <https://github.com/taichino/croniter/pull/63>`_
[josegonzalez@github]
- `Compare types using "issubclass" instead of exact match <https://github.com/taichino/croniter/pull/70>`_
[darkk@github]
- `Implement step cron with a variable base <https://github.com/taichino/croniter/pull/60>`_
[josegonzalez@github]
0.3.12 (2016-03-10)
-------------------
- support setting ret_type in __init__ [Brent Tubbs <brent.tubbs@gmail.com>]
0.3.11 (2016-01-13)
-------------------
- Bug fix: The get_prev API crashed when last day of month token was used. Some
essential logic was missing.
[Iddo Aviram <iddo.aviram@similarweb.com>]
0.3.10 (2015-11-29)
-------------------
- The fuctionality of 'l' as day of month was broken, since the month variable
was not properly updated
[Iddo Aviram <iddo.aviram@similarweb.com>]
0.3.9 (2015-11-19)
------------------
- Don't use datetime functions python 2.6 doesn't support
[petervtzand]
0.3.8 (2015-06-23)
------------------
- Truncate microseconds by setting to 0
[Corey Wright]
0.3.7 (2015-06-01)
------------------
- converting sun in range sun-thu transforms to int 0 which is
recognized as empty string; the solution was to convert sun to string "0"
0.3.6 (2015-05-29)
------------------
- Fix default behavior when no start_time given
Default value for `start_time` parameter is calculated at module init time rather than call time.
- Fix timezone support and stop depending on the system time zone
0.3.5 (2014-08-01)
------------------
- support for 'l' (last day of month)
0.3.4 (2014-01-30)
------------------
- Python 3 compat
- QA Release
0.3.3 (2012-09-29)
------------------
- proper packaging

.. source: Misc/NEWS.d/2.7.10.rst (cemeyer/tauthon)

.. bpo: 22931
.. date: 9589
.. nonce: 4CuWYD
.. release date: 2015-05-23
.. section: Library
Allow '[' and ']' in cookie values.
| 16.25 | 35 | 0.630769 |
68ea4d039acb2c039457d7cd492b5bc04b91889b | 1,124 | rst | reStructuredText | docs/reference/natural.rst | disco-lang/discrete-lang | 34eac429d0f033a2ba81d96ef67bb4e1381000a2 | [
"BSD-3-Clause"
] | null | null | null | docs/reference/natural.rst | disco-lang/discrete-lang | 34eac429d0f033a2ba81d96ef67bb4e1381000a2 | [
"BSD-3-Clause"
] | null | null | null | docs/reference/natural.rst | disco-lang/discrete-lang | 34eac429d0f033a2ba81d96ef67bb4e1381000a2 | [
"BSD-3-Clause"
] | null | null | null | Natural numbers
===============
The type of *natural numbers* is written ``N``, ``ℕ``, ``Nat``, or
``Natural`` (Disco always prints it as ``ℕ``, but you can use any of
these names when writing code). The natural numbers include the
counting numbers 0, 1, 2, 3, 4, 5, ...
:doc:`Adding <addition>` or :doc:`multiplying
<multiplication>` two natural numbers yields another natural number:
::
Disco> :type 2 + 3
5 : ℕ
Disco> :type 2 * 3
6 : ℕ
Natural numbers cannot be directly :doc:`subtracted <subtraction>` or
:doc:`divided <division>`. However, ``N`` is a :doc:`subtype` of all
the other numeric types, so using subtraction or division with natural
numbers will cause them to be automatically converted into a
different type like :doc:`integers <integer>` or :doc:`rationals
<rational>`:
::
Disco> :type 2 - 3
2 - 3 : ℤ
Disco> :type 2 / 3
2 / 3 : 𝔽
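A rough Python analogy (not Disco code) may help: Python integers are closed under subtraction, so the result stays an ``int``, while division always widens to ``float`` — loosely mirroring how Disco promotes ``ℕ`` to ``ℤ`` or ``𝔽``:

```python
# Subtraction of ints stays in int (like N being promoted to Z, still an
# integer-like type), while division widens the type (like N to F).
difference = 2 - 3
quotient = 2 / 3

print(type(difference).__name__)  # int
print(type(quotient).__name__)    # float
```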
Note that some mathematicians use the phrase "natural numbers" to mean
the set of positive numbers 1, 2, 3, ..., that is, they do not include
zero. However, in the context of computer science, "natural numbers"
almost always includes zero.
| 30.378378 | 70 | 0.682384 |
e076bd613d865e1013ed02928be4623f30ce75b2 | 2,438 | rst | reStructuredText | docs/cookbook/logging/index.rst | ericadeckl/opensphere | fa22a53665f48fb53237187b0142700cb65f1cb6 | [
"Apache-2.0"
] | 1 | 2020-04-29T23:19:09.000Z | 2020-04-29T23:19:09.000Z | docs/cookbook/logging/index.rst | briedinger/opensphere | b6a39abf8f88a16d7308c6f6f67878f50d1e2c78 | [
"Apache-2.0"
] | null | null | null | docs/cookbook/logging/index.rst | briedinger/opensphere | b6a39abf8f88a16d7308c6f6f67878f50d1e2c78 | [
"Apache-2.0"
] | null | null | null | Logging
=======
Problem
-------
Your plugin needs to log different types of information to support debugging or usage metrics.
Solution
--------
Use the OpenSphere logging framework. There are three parts to enable this - adding the logger, using the logger, and adding the applicable :code:`goog.require` entries.
.. literalinclude:: src/cookbook-logging.js
:caption: Logging Cookbook example - requires
:linenos:
:lines: 3-5
:language: javascript
.. literalinclude:: src/cookbook-logging.js
:caption: Logging Cookbook example - adding the logger
:linenos:
:lines: 22-28
:language: javascript
.. literalinclude:: src/cookbook-logging.js
:caption: Logging Cookbook example - using the logger
:linenos:
:lines: 41-44
:language: javascript
Discussion
----------
The OpenSphere logging framework is mostly based on the `Closure logging library functions <https://google.github.io/closure-library/api/goog.log.html>`_. The code above shows the two required argument form (which allows logging at error, warning, info and fine levels) as well as the three required argument :code:`goog.log.log` form (which allows specifying of the log level for more options). Historically OpenSphere has only used the two required argument form (roughly half of the logging using :code:`goog.log.error`, with :code:`goog.log.warning`, :code:`goog.log.info` and :code:`goog.log.fine` sharing the other half reasonably evenly).
.. tip:: If you do need the :code:`goog.log.log` form, use :code:`goog.debug.Logger.Level` instead of :code:`goog.log.Level` to specify the level, in order to avoid logging that works in a debug environment and throws exceptions in a production (minified / compiled) environment.
Having added logging support to your plugin, you can access the logs from within OpenSphere via the Support menu, using the View Logs entry:
.. image:: images/SupportMenuViewLog.png
Each logger that has been added will appear in the Options menu. The entries for our logger appear as:
.. image:: images/SetupLogLevels.png
Note that there are entries for :code:`plugin.cookbook_logging` and :code:`plugin.cookbook_logging.CookbookLogging`. This makes it easy to configure the appropriate logging levels for your plugin, and for lower level namespaces as needed.
Full code
---------
.. literalinclude:: src/cookbook-logging.js
:caption: Logging Cookbook example - Full code
:linenos:
:language: javascript
| 42.034483 | 645 | 0.757588 |
43a5d84252a85e3a6824ce23ca8fef48be624fcb | 9,605 | rst | reStructuredText | classes/es/class_gltfnode.rst | Rindbee/godot-docs-l10n | 7d250e8e2af9d33a9089ee2110e57a4749a5dd95 | [
"CC-BY-3.0"
] | 3 | 2018-03-28T14:31:07.000Z | 2018-04-02T14:01:52.000Z | classes/es/class_gltfnode.rst | Rindbee/godot-docs-l10n | 7d250e8e2af9d33a9089ee2110e57a4749a5dd95 | [
"CC-BY-3.0"
] | null | null | null | classes/es/class_gltfnode.rst | Rindbee/godot-docs-l10n | 7d250e8e2af9d33a9089ee2110e57a4749a5dd95 | [
"CC-BY-3.0"
] | null | null | null | :github_url: hide
.. Generated automatically by doc/tools/make_rst.py in Godot's source tree.
.. DO NOT EDIT THIS FILE, but the GLTFNode.xml source instead.
.. The source is found in doc/classes or modules/<name>/doc_classes.
.. _class_GLTFNode:
GLTFNode
========
**Inherits:** :ref:`Resource<class_Resource>` **<** :ref:`Reference<class_Reference>` **<** :ref:`Object<class_Object>`
Properties
----------
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`camera<class_GLTFNode_property_camera>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`PoolIntArray<class_PoolIntArray>` | :ref:`children<class_GLTFNode_property_children>` | ``PoolIntArray( )`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`height<class_GLTFNode_property_height>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`bool<class_bool>` | :ref:`joint<class_GLTFNode_property_joint>` | ``false`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`light<class_GLTFNode_property_light>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`mesh<class_GLTFNode_property_mesh>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`parent<class_GLTFNode_property_parent>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`Quat<class_Quat>` | :ref:`rotation<class_GLTFNode_property_rotation>` | ``Quat( 0, 0, 0, 1 )`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`Vector3<class_Vector3>` | :ref:`scale<class_GLTFNode_property_scale>` | ``Vector3( 1, 1, 1 )`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`skeleton<class_GLTFNode_property_skeleton>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`int<class_int>` | :ref:`skin<class_GLTFNode_property_skin>` | ``-1`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`Vector3<class_Vector3>` | :ref:`translation<class_GLTFNode_property_translation>` | ``Vector3( 0, 0, 0 )`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
| :ref:`Transform<class_Transform>` | :ref:`xform<class_GLTFNode_property_xform>` | ``Transform( 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0 )`` |
+-----------------------------------------+---------------------------------------------------------+-----------------------------------------------------+
Property Descriptions
---------------------
.. _class_GLTFNode_property_camera:
- :ref:`int<class_int>` **camera**
+-----------+-------------------+
| *Default* | ``-1`` |
+-----------+-------------------+
| *Setter* | set_camera(value) |
+-----------+-------------------+
| *Getter* | get_camera() |
+-----------+-------------------+
----
.. _class_GLTFNode_property_children:
- :ref:`PoolIntArray<class_PoolIntArray>` **children**
+-----------+----------------------+
| *Default* | ``PoolIntArray( )`` |
+-----------+----------------------+
| *Setter* | set_children(value) |
+-----------+----------------------+
| *Getter* | get_children() |
+-----------+----------------------+
----
.. _class_GLTFNode_property_height:
- :ref:`int<class_int>` **height**
+-----------+-------------------+
| *Default* | ``-1`` |
+-----------+-------------------+
| *Setter* | set_height(value) |
+-----------+-------------------+
| *Getter* | get_height() |
+-----------+-------------------+
----
.. _class_GLTFNode_property_joint:
- :ref:`bool<class_bool>` **joint**
+-----------+------------------+
| *Default* | ``false`` |
+-----------+------------------+
| *Setter* | set_joint(value) |
+-----------+------------------+
| *Getter* | get_joint() |
+-----------+------------------+
----
.. _class_GLTFNode_property_light:
- :ref:`int<class_int>` **light**
+-----------+------------------+
| *Default* | ``-1`` |
+-----------+------------------+
| *Setter* | set_light(value) |
+-----------+------------------+
| *Getter* | get_light() |
+-----------+------------------+
----
.. _class_GLTFNode_property_mesh:
- :ref:`int<class_int>` **mesh**
+-----------+-----------------+
| *Default* | ``-1`` |
+-----------+-----------------+
| *Setter* | set_mesh(value) |
+-----------+-----------------+
| *Getter* | get_mesh() |
+-----------+-----------------+
----
.. _class_GLTFNode_property_parent:
- :ref:`int<class_int>` **parent**
+-----------+-------------------+
| *Default* | ``-1`` |
+-----------+-------------------+
| *Setter* | set_parent(value) |
+-----------+-------------------+
| *Getter* | get_parent() |
+-----------+-------------------+
----
.. _class_GLTFNode_property_rotation:
- :ref:`Quat<class_Quat>` **rotation**
+-----------+------------------------+
| *Default* | ``Quat( 0, 0, 0, 1 )`` |
+-----------+------------------------+
| *Setter* | set_rotation(value) |
+-----------+------------------------+
| *Getter* | get_rotation() |
+-----------+------------------------+
----
.. _class_GLTFNode_property_scale:
- :ref:`Vector3<class_Vector3>` **scale**
+-----------+------------------------+
| *Default* | ``Vector3( 1, 1, 1 )`` |
+-----------+------------------------+
| *Setter* | set_scale(value) |
+-----------+------------------------+
| *Getter* | get_scale() |
+-----------+------------------------+
----
.. _class_GLTFNode_property_skeleton:
- :ref:`int<class_int>` **skeleton**
+-----------+---------------------+
| *Default* | ``-1`` |
+-----------+---------------------+
| *Setter* | set_skeleton(value) |
+-----------+---------------------+
| *Getter* | get_skeleton() |
+-----------+---------------------+
----
.. _class_GLTFNode_property_skin:
- :ref:`int<class_int>` **skin**
+-----------+-----------------+
| *Default* | ``-1`` |
+-----------+-----------------+
| *Setter* | set_skin(value) |
+-----------+-----------------+
| *Getter* | get_skin() |
+-----------+-----------------+
----
.. _class_GLTFNode_property_translation:
- :ref:`Vector3<class_Vector3>` **translation**
+-----------+------------------------+
| *Default* | ``Vector3( 0, 0, 0 )`` |
+-----------+------------------------+
| *Setter* | set_translation(value) |
+-----------+------------------------+
| *Getter* | get_translation() |
+-----------+------------------------+
----
.. _class_GLTFNode_property_xform:
- :ref:`Transform<class_Transform>` **xform**
+-----------+-----------------------------------------------------+
| *Default* | ``Transform( 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0 )`` |
+-----------+-----------------------------------------------------+
| *Setter* | set_xform(value) |
+-----------+-----------------------------------------------------+
| *Getter* | get_xform() |
+-----------+-----------------------------------------------------+
.. |virtual| replace:: :abbr:`virtual (This method should typically be overridden by the user to have any effect.)`
.. |const| replace:: :abbr:`const (This method has no side effects. It doesn't modify any of the instance's member variables.)`
.. |vararg| replace:: :abbr:`vararg (This method accepts any number of arguments after the ones described here.)`

.. source: docs/api.rst (banagale/drf-turbo)

*************
API Reference
*************
Serializer
==========
.. currentmodule:: drf_turbo
.. autoclass:: BaseSerializer
:members:
.. autoclass:: Serializer
:show-inheritance:
:inherited-members:
:members:
.. autoclass:: ModelSerializer
:show-inheritance:
:inherited-members:
:members:
Fields
======
.. autoclass:: Field
:members:
.. autoclass:: drf_turbo.StrField
:members:
.. autoclass:: drf_turbo.EmailField
:members:
.. autoclass:: drf_turbo.URLField
:members:
.. autoclass:: drf_turbo.RegexField
:members:
.. autoclass:: drf_turbo.IPField
:members:
.. autoclass:: drf_turbo.UUIDField
:members:
.. autoclass:: drf_turbo.PasswordField
:members:
.. autoclass:: drf_turbo.SlugField
:members:
.. autoclass:: IntField
:members:
.. autoclass:: FloatField
:members:
.. autoclass:: DecimalField
:members:
.. autoclass:: BoolField
:members:
.. autoclass:: ChoiceField
:members:
.. autoclass:: MultipleChoiceField
:members:
.. autoclass:: DateTimeField
:members:
.. autoclass:: DateField
:members:
.. autoclass:: TimeField
:members:
.. autoclass:: FileField
:members:
.. autoclass:: ArrayField
:members:
.. autoclass:: DictField
:members:
.. autoclass:: JSONField
:members:
.. autoclass:: RelatedField
:members:
.. autoclass:: ManyRelatedField
:members:
.. autoclass:: ConstantField
:members:
.. autoclass:: RecursiveField
:members:
.. autoclass:: MethodField
:members:
| 14.679245 | 38 | 0.633033 |
61786452227e2e5df31259b71c1b9c4ca69f81d4 | 91 | rst | reStructuredText | docs/api/openomics.set_cache_dir.rst | JonnyTran/open-omics | ef5db2dc2fdf486ee5e9fa4e0cf5be61b4531232 | [
"MIT"
] | 12 | 2021-01-14T19:33:48.000Z | 2022-01-06T16:13:03.000Z | docs/api/openomics.set_cache_dir.rst | JonnyTran/open-omics | ef5db2dc2fdf486ee5e9fa4e0cf5be61b4531232 | [
"MIT"
] | 13 | 2020-12-31T20:38:11.000Z | 2021-11-24T06:21:12.000Z | docs/api/openomics.set_cache_dir.rst | JonnyTran/open-omics | ef5db2dc2fdf486ee5e9fa4e0cf5be61b4531232 | [
"MIT"
] | 7 | 2021-02-08T13:42:01.000Z | 2021-10-21T21:37:14.000Z | set_cache_dir
=============
.. currentmodule:: openomics
.. autofunction:: set_cache_dir

.. source: docs/content/image/drawingImages.rst (andyclymer/drawbot)

Drawing Images
==============
.. autofunction:: drawBot.image(path, (x, y), alpha=1, pageNumber=None)

.. source: docs/packages/pkg/jsoncpp.rst (Costallat/hunter)

.. spelling::
jsoncpp
.. index:: json ; jsoncpp
.. _pkg.jsoncpp:
jsoncpp
=======
.. |hunter| image:: https://img.shields.io/badge/hunter-v0.17.19-blue.svg
:target: https://github.com/cpp-pm/hunter/releases/tag/v0.17.19
:alt: Hunter v0.17.19
- `Official <https://github.com/open-source-parsers/jsoncpp>`__
- `Example <https://github.com/cpp-pm/hunter/blob/master/examples/jsoncpp/CMakeLists.txt>`__
- Available since |hunter|
.. code-block:: cmake
hunter_add_package(jsoncpp)
find_package(jsoncpp CONFIG REQUIRED)
target_link_libraries(... jsoncpp_lib_static)

.. source: docs/getting_started.rst (Miksus/red-base)

.. _tutorial:
Tutorial
========
This section covers basic tutorials of
Red Bird.
Installation
------------
Install the package:
.. code-block:: console
pip install redbird
See `PyPI for Red Bird releases <https://pypi.org/project/redbird/>`_.
Configuring Repository
----------------------
The full list of built-in repositories, with examples, is found in the
:ref:`repository section <repositories>`. Below is a simple example that
configures an in-memory repository.
.. code-block:: python
from redbird.ext import MemoryRepo
repo = MemoryRepo()
By default, the items are manipulated as dictionaries. You may also create a
Pydantic model in order to have better data validation and control over
the structure of the items:
.. code-block:: python
from pydantic import BaseModel
class Car(BaseModel):
registration_number: str
color: str
value: float
from redbird.ext import MemoryRepo
repo = MemoryRepo(model=Car)
See more about configuring repositories :ref:`here <repositories>`.
Usage Examples
--------------
Create operation:
.. code-block:: python
# If you use dict as model
repo.add({"registration_number": "123-456-789", "color": "red"})
# If you Pydantic model:
repo.add(Car(registration_number="111-222-333", color="red"))
repo.add(Car(registration_number="444-555-666", color="blue"))
Get operation:
.. code-block:: python
# One item
repo["123-456-789"]
# Multiple items
repo.filter_by(color="red").all()
Update operation:
.. code-block:: python
# One item
repo["123-456-789"] = {"condition": "good"}
# Multiple items
repo.filter_by(color="blue").update(color="green")
Delete operation:
.. code-block:: python
# One item
del repo["123-456-789"]
# Multiple items
repo.filter_by(color="red").delete()
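The CRUD operations above follow the generic repository pattern. A minimal stand-alone sketch of an in-memory repository with a similar interface (hypothetical — a toy illustration, not Red Bird's actual implementation):

```python
class MemoryRepoSketch:
    """Toy in-memory repository storing dict items keyed by a chosen id field."""

    def __init__(self, id_field):
        self.id_field = id_field
        self.items = {}

    def add(self, item):
        # Create: index the item by its id field.
        self.items[item[self.id_field]] = item

    def filter_by(self, **conds):
        # Read: return all items whose fields match every condition.
        return [
            item for item in self.items.values()
            if all(item.get(k) == v for k, v in conds.items())
        ]

    def update(self, conds, **changes):
        # Update: apply changes to every matching item.
        for item in self.filter_by(**conds):
            item.update(changes)

    def delete(self, **conds):
        # Delete: remove every matching item.
        for item in self.filter_by(**conds):
            del self.items[item[self.id_field]]
```

The point of the pattern is that application code talks only to this small interface, so the backing store (memory, SQL, MongoDB, ...) can be swapped without touching the callers.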

.. source: docs/usage.rst (simodalla/pympa-organizations)

=====
Usage
=====
To use Pympa Organizations in a project, add it to your `INSTALLED_APPS`:
.. code-block:: python
INSTALLED_APPS = (
...
'paorganizations.apps.PaorganizationsConfig',
...
)
Add Pympa Organizations's URL patterns:
.. code-block:: python
from paorganizations import urls as paorganizations_urls
urlpatterns = [
...
url(r'^', include(paorganizations_urls)),
...
]

.. source: docs/source/rationale.rst (slorquet/elffile2)

===========
Rationale
===========
If you need access to object files other than ELF format then you
probably want to look at the `GNU project <http://gnu.org>`_'s BFD
library which is distributed with `GDB
<http://www.gnu.org/software/gdb>`_ and the `binutils
<http://www.gnu.org/software/binutils>`_. It is the only attempt at
producing a covering library for multiple object file formats of which
the author is aware.
.. note:: K Richard Pixley, the author of *elffile* was also one of
the original authors of BFD.
As software architecture goes, BFD is not a very good design in the
sense that using BFD also requires an intimate understanding of the
BFD internals. BFD based ports to new formats are typically difficult
and more time consuming than simple readers for those formats would
be. BFD doesn't cover or hide information in the way one might expect
from a traditional library but rather offers a sort of development
kit, a basis from which to write new formats. The two primary reasons
to use BFD are:
* In support of a port of the GNU toolchain, that is, `GCC <http://www.gnu.org/software/gcc>`_,
the `binutils <http://www.gnu.org/software/binutils>`_,
and `GDB <http://www.gnu.org/software/gdb>`_.
* As a means of translating between multiple formats.
Luckily, most formats aside from ELF have now conveniently faded away
with the notable exception of `MACH-o
<http://en.wikipedia.org/wiki/Mach-O>`_ on `Mac Os X
<http://en.wikipedia.org/wiki/Mac_OS_X>`_, and some of the alternate
representations like `S-Record format
<http://en.wikipedia.org/wiki/SREC_%28file_format%29>`_ which may
still be used on in-circuit emulators, logic analysers, and PROM
programmers. This means that reading and writing ELF alone will solve
a majority of needs at a lighter weight than something as ambitious as
BFD.
Other python based ELF readers depend on the venerable libelf
interface which was originally distributed with `UNIX™ SysVr4
<http://en.wikipedia.org/wiki/System_V_Release_4>`_. There are
several free implementations of this reference library available
including `one <http://wiki.freebsd.org/LibElf>`_ from `FreeBSD
<http://www.freebsd.org>`_, `a very popular implementation by Michael
Riepe <http://www.mr511.de/software/english.html>`_, and one that
accompanies the `Fedora <http://fedoraproject.org>`_ hosted `elfutils
<https://fedorahosted.org/elfutils>`_.
The primary benefit for using a reference library of this sort is that
changes to the underlying format can happen at the libelf level and be
hidden from upper level applications. However, the elf format has
been quite stable over the last 15 years or so and has largely
replaced all other formats for both UNIX™ and UNIX-like operating
system families, (Linux, BSD), as well as most cross development
systems hosted on these systems. When changes have occurred they have
primarily been as extensions to the format for new processors, new
operating systems, and new facilities, each of which require
concomitant changes in higher level code as well.
The requirement for libelf isn't particularly difficult to address but
using it in a python library requires writing a python extension for
the libelf-to-python interface. This makes configuration and
installation somewhat more difficult for python users. In particular,
I wasn't able to get any of the available python and libelf based
readers to work on any handy system within a few hours.
More, the paradigm presented by libelf isn't exactly "pythonic". Most
python based applications are likely to use a different internal
format anyway so the utility of using libelf becomes questionable.
Your author also posits that the python extention necessary to
interface with any libelf impementation, (much less one which can work
with multiple installations), is more work to create and maintain than
a pure python library which reads elf format itself. That's the
gamble he's making by writing this library.
| 50.75641 | 95 | 0.785047 |
308e97f474693faf9dbaeaa9570382c0d5fe823b | 1,147 | rst | reStructuredText | docs/source/general/how_to_guides/vitis/gcc_optimization.rst | ultrazohm/ultrazohm_sw | 9f6d9e401319186bdce2d1b24d368c54a9dfa8ee | [
"Apache-2.0"
] | 3 | 2021-11-01T05:50:58.000Z | 2022-03-22T20:10:20.000Z | docs/source/general/how_to_guides/vitis/gcc_optimization.rst | ultrazohm/ultrazohm_sw | 9f6d9e401319186bdce2d1b24d368c54a9dfa8ee | [
"Apache-2.0"
] | null | null | null | docs/source/general/how_to_guides/vitis/gcc_optimization.rst | ultrazohm/ultrazohm_sw | 9f6d9e401319186bdce2d1b24d368c54a9dfa8ee | [
"Apache-2.0"
] | 1 | 2022-03-16T16:16:58.000Z | 2022-03-16T16:16:58.000Z | ===================================
Optimization Levels of the Compiler
===================================
* You can tell the compiler to use different levels of optimization.
* UltraZohm default for R5 is -O2
* UltraZohm default for A53 is -O3
* It is recommended to keep these options as they are.
* `Introduction to optimization levels <https://www.linuxtopia.org/online_books/an_introduction_to_gcc/gccintro_49.html>`_
.. warning:: If the compiler optimization is changed for debugging to -O1 or -O0, the timing of the program, interaction between processors as well as the PL changes. This might hide race conditions, prevent the bug that is searched from triggering, or increase the run time of the ISR over the allowed timing budget.
**Step-by-step**
^^^^^^^^^^^^^^^^^^
Open the project properties
.. image:: ./images_problems/include_math_lib1.png
:height: 400
Change the optimization level by following the steps:
1. C/C++ build -> Settings
2. ARM R5 gcc compiler -> Optimization
3. Optimization Level -> pull down menu to chose the desired level
4. Apply and Close
.. image:: ./images_problems/gcc_optimization_level.png
| 38.233333 | 319 | 0.713165 |
f671d9bd3fff403369c296daa26d5120caa4b552 | 253 | rst | reStructuredText | docs/api.rst | artemrizhov/django-mail-templated | 1b428e7b6e02a5cf775bc83d6f5fd8c5f56d7932 | [
"MIT"
] | 105 | 2015-01-01T00:36:49.000Z | 2021-07-31T22:47:55.000Z | docs/api.rst | artemrizhov/django-mail-templated | 1b428e7b6e02a5cf775bc83d6f5fd8c5f56d7932 | [
"MIT"
] | 30 | 2015-02-15T22:26:18.000Z | 2021-09-30T05:08:46.000Z | docs/api.rst | artemrizhov/django-mail-templated | 1b428e7b6e02a5cf775bc83d6f5fd8c5f56d7932 | [
"MIT"
] | 19 | 2015-07-16T19:22:51.000Z | 2021-07-31T22:46:04.000Z | API Reference
=============
.. automodule:: mail_templated
send_mail()
-----------
.. autofunction:: mail_templated.send_mail
EmailMessage
------------
.. autoclass:: mail_templated.EmailMessage
:special-members: __init__
:inherited-members:
| 14.882353 | 42 | 0.652174 |
c072a9c35a4289f53546d649d040e1c301df7b50 | 3,923 | rst | reStructuredText | Documentation/platforms/xtensa/esp32/boards/esp32-wrover-kit/index.rst | alvin1991/incubator-nuttx | b4fe0422624cfdc5a1925696f6ca7191a6d45326 | [
"Apache-2.0"
] | 201 | 2015-01-23T06:06:31.000Z | 2022-01-28T22:25:51.000Z | Documentation/platforms/xtensa/esp32/boards/esp32-wrover-kit/index.rst | alvin1991/incubator-nuttx | b4fe0422624cfdc5a1925696f6ca7191a6d45326 | [
"Apache-2.0"
] | 126 | 2015-01-02T12:54:29.000Z | 2022-02-15T15:01:00.000Z | Documentation/platforms/xtensa/esp32/boards/esp32-wrover-kit/index.rst | alvin1991/incubator-nuttx | b4fe0422624cfdc5a1925696f6ca7191a6d45326 | [
"Apache-2.0"
] | 380 | 2015-01-08T10:40:04.000Z | 2022-03-19T06:59:50.000Z | ==============
ESP-WROVER-KIT
==============
The `ESP-WROVER-KIT <https://docs.espressif.com/projects/esp-idf/en/latest/esp32/hw-reference/esp32/get-started-wrover-kit.html>`_ is a development board for the ESP32 SoC from Espressif, based on a ESP32-WROVER-B module.
.. list-table::
:align: center
* - .. figure:: esp-wrover-kit-v4.1-layout-back.png
:align: center
ESP-WROVER-KIT board layout - front
- .. figure:: esp-wrover-kit-v4.1-layout-front.png
:align: center
ESP-WROVER-KIT board layout - back
Features
========
- ESP32-WROVER-B module
- LCD screen
- MicroSD card slot
Another distinguishing feature is the embedded FTDI FT2232HL chip, an
advanced multi-interface USB bridge. This chip makes it possible to use JTAG
for direct debugging of the ESP32 through the USB interface without a
separate JTAG debugger. ESP-WROVER-KIT makes development convenient, easy,
and cost-effective.
Most of the ESP32 I/O pins are broken out to the board’s pin headers for easy access.
Serial Console
==============
UART0 is, by default, the serial console. It connects to the on-board
FT2232HL converter and is available on the USB connector USB CON8 (J5).
It will show up as /dev/ttyUSB[n] where [n] will probably be 1, since
the first interface ([n] == 0) is dedicated to the USB-to-JTAG interface.
Buttons and LEDs
================
Buttons
-------
There are two buttons labeled Boot and EN. The EN button is not available
to software. It pulls the chip enable line that doubles as a reset line.
The BOOT button is connected to IO0. On reset it is used as a strapping
pin to determine whether the chip boots normally or into the serial
bootloader. After reset, however, the BOOT button can be used for software
input.
LEDs
----
There are several on-board LEDs that indicate the presence of power and
USB activity.
There is an RGB LED available for software.
Pin Mapping
===========
===== ========================= ==========
Pin Signal Notes
===== ========================= ==========
0 RGB LED Red / BOOT Button
2 RGB LED Green
4 RGB LED Blue
5 LCD Backlight
18 LCD Reset
19 LCD Clock
21 LCD D/C
22 LCD CS
23 LCD MOSI
25 LCD MISO
===== ========================= ==========
Configurations
==============
nsh
---
Basic NuttShell configuration (console enabled in UART0, exposed via
USB connection by means of FT2232HL converter, at 115200 bps).
wapi
----
Enables Wi-Fi support.
gpio
----
This is a test for the GPIO driver. It includes the 3 LEDs and one arbitrary
GPIO. For this example, GPIO22 was used.
At the nsh, we can turn LEDs on and off with the following::
nsh> gpio -o 1 /dev/gpout0
nsh> gpio -o 0 /dev/gpout1
We can use the interrupt pin to send a signal when the interrupt fires::
nsh> gpio -w 14 /dev/gpint3
The pin is configured as a rising-edge interrupt, so after issuing the
above command, connect it to 3.3V.
spiflash
--------
This config tests the external SPI flash that comes with an ESP32 module,
connected through SPI1.
By default a SmartFS file system is selected.
Once booted you can use the following commands to mount the file system::
mksmartfs /dev/smart0
mount -t smartfs /dev/smart0 /mnt
Note that `mksmartfs` is only needed the first time.
nx
--
This config adds a set of tests using the graphic examples at `apps/examples/nx`.
This configuration illustrates the use of the LCD with the lower performance
SPI interface.
lvgl
----
This is a demonstration of the LVGL graphics library running on the NuttX LCD
driver. You can find LVGL here::
https://www.lvgl.io/
https://github.com/lvgl/lvgl
This configuration uses the LVGL demonstration at `apps/examples/lvgldemo`.
External devices
=================
BMP180
------
When using the BMP180 (enabled via ``CONFIG_SENSORS_BMP180``), it is expected that this device is wired to the I2C0 bus.
.. currentmodule:: uxarray
Internal API
============
This page lists the UXarray internal API functions that have already been
implemented. You can also check the draft `UXarray API
<https://github.com/UXARRAY/uxarray/blob/main/docs/user_api/uxarray_api.md>`_
documentation to see the tentative full API, and let us know if you have any feedback!
Grid Methods
------------
.. autosummary::
:nosignatures:
:toctree: ./generated/
grid.Grid.__init__
grid.Grid.__init_ds_var_names__
grid.Grid.__from_file__
grid.Grid.__from_vert__
Grid Helper Modules
--------------------
.. autosummary::
:nosignatures:
:toctree: ./generated/
_exodus._read_exodus
_exodus._write_exodus
_exodus._get_element_type
_ugrid._write_ugrid
_ugrid._read_ugrid
Python Client for eAPI
======================
The Python library for Arista's eAPI command API implementation provides a
client API for working with eAPI and communicating with EOS nodes. The Python
library can be used to communicate with EOS either locally (on-box) or remotely
(off-box). It uses a standard INI-style configuration file to specify one or
more nodes and connection properties.
The pyeapi library also provides an API layer for building native Python
objects to interact with the destination nodes. The API layer is a convenient
implementation for working with the EOS configuration and is extensible for
developing custom implementations.
This library is freely provided to the open source community for building
robust applications using Arista EOS. Support is provided as best effort
through Github issues.
## Requirements
* Arista EOS 4.12 or later
* Arista eAPI enabled for at least one transport (see Official EOS Config Guide
at arista.com for details)
* Python 2.7
# Getting Started
In order to use pyeapi, the EOS command API must be enabled using ``management
api http-commands`` configuration mode. This library supports eAPI calls over
both HTTP and UNIX Domain Sockets. Once the command API is enabled on the
destination node, create a configuration file with the node properties.
**Note:** The default search path for the conf file is ``~/.eapi.conf``
followed by ``/mnt/flash/eapi.conf``. This can be overridden by setting
``EAPI_CONF=<path to conf file>`` in your environment.
## Example eapi.conf File
Below is an example of an eAPI conf file. The conf file can contain more than
one node. Each node section must be prefaced by **connection:\<name\>** where
\<name\> is the name of the connection.
The following configuration options are available for defining node entries:
* **host** - The IP address or FQDN of the remote device. If the host
parameter is omitted then the connection name is used
* **username** - The eAPI username to use for authentication (only required for
http or https connections)
* **password** - The eAPI password to use for authentication (only required for
http or https connections)
* **enablepwd** - The enable mode password if required by the destination node
* **transport** - Configures the type of transport connection to use. The
default value is _https_. Valid values are:
* socket (available in EOS 4.14.5 or later)
* http_local (available in EOS 4.14.5 or later)
* http
* https
* **port** - Configures the port to use for the eAPI connection. A default
port is used if this parameter is absent, based on the transport setting
using the following values:
* transport: http, default port: 80
* transport: https, default port: 443
* transport: http_local, default port: 8080
* transport: socket, default port: n/a
_Note:_ See the EOS User Manual found at arista.com for more details on
configuring eAPI values.
All configuration values are optional.
```
[connection:veos01]
username: eapi
password: password
transport: http
[connection:veos02]
transport: http
[connection:veos03]
transport: socket
[connection:veos04]
host: 172.16.10.1
username: eapi
password: password
enablepwd: itsasecret
port: 1234
transport: https
[connection:localhost]
transport: http_local
```
The above example shows different ways to define EOS node connections. All
configuration options will attempt to use default values if not explicitly
defined. If the host parameter is not set for a given entry, then the
connection name will be used as the host address.
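Because the conf file is standard INI syntax, the host-defaulting rule described above can be sketched with Python's ``configparser``. This is an illustration of the file format only, not pyeapi's actual loader, which applies additional logic; the connection names and values are made up:

```python
import configparser
import io

# A minimal eapi.conf-style document (example values only).
EAPI_CONF = """
[connection:veos02]
transport: http

[connection:veos04]
host: 172.16.10.1
transport: https
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(EAPI_CONF))

def node_settings(name):
    """Resolve one connection entry, falling back to the connection
    name for 'host' when that option is omitted."""
    section = "connection:%s" % name
    host = parser.get(section, "host", fallback=name)
    transport = parser.get(section, "transport", fallback="https")
    return {"host": host, "transport": transport}

print(node_settings("veos02"))  # host falls back to the connection name
print(node_settings("veos04"))  # host comes from the file
```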
### Configuring \[connection:localhost]
The pyeapi library automatically installs a single default configuration entry
for connecting to localhost using the socket transport. If you are using the
pyeapi library locally on an EOS node, simply enable the command API to use
sockets and no further configuration is needed for pyeapi to function. If you
specify an entry in a conf file with the name ``[connection:localhost]``, the
values in the conf file will overwrite the default.
## Using pyeapi
The Python client for eAPI was designed to be easy to use and implement for
writing tools and applications that interface with the Arista EOS management
plane.
### Creating a connection and sending commands
Once EOS is configured properly and the config file created, getting started
with a connection to EOS is simple. Below demonstrates a basic connection
using pyeapi. For more examples, please see the examples folder.
```
# start by importing the library
import pyeapi
# create a node object by specifying the node to work with
node = pyeapi.connect_to('veos01')
# send one or more commands to the node
node.enable('show hostname')
[{'command': 'show hostname', 'result': {u'hostname': u'veos01', u'fqdn':
u'veos01.arista.com'}, 'encoding': 'json'}]
# use the config method to send configuration commands
node.config('hostname veos01')
[{}]
# multiple commands can be sent by using a list (works for both enable or
config)
node.config(['interface Ethernet1', 'description foo'])
[{}, {}]
# return the running or startup configuration from the node (output omitted for
brevity)
node.running_config
node.startup_config
```
### Using the API
The pyeapi library provides both a client for sending and receiving commands over
eAPI as well as an API for working directly with EOS resources. The API is
designed to be easy and straightforward to use yet also extensible. Below is
an example of working with the ``vlans`` API:
```
# create a connection to the node
import pyeapi
node = pyeapi.connect_to('veos01')
# get the instance of the API (in this case vlans)
vlans = node.api('vlans')
# return all vlans from the node
vlans.getall()
{'1': {'state': 'active', 'name': 'default', 'vlan_id': 1, 'trunk_groups': []},
'10': {'state': 'active', 'name': 'VLAN0010', 'vlan_id': 10, 'trunk_groups':
[]}}
# return a specific vlan from the node
vlans.get(1)
{'state': 'active', 'name': 'default', 'vlan_id': 1, 'trunk_groups': []}
# add a new vlan to the node
vlans.create(100)
True
# set the new vlan name
vlans.set_name(100, 'foo')
True
```
All API implementations developed by Arista EOS+ CS are found in the pyeapi/api
folder. See the examples folder for additional examples.
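To give a feel for the pattern these API modules follow, here is a toy, self-contained sketch (not actual pyeapi source) of a vlans-style resource class. It turns high-level calls into the CLI commands that would otherwise be sent by hand through ``node.config()``; the class and attribute names are invented for illustration:

```python
class ToyVlans:
    """Illustration only: records the commands a vlans-style
    resource would pass to node.config()."""

    def __init__(self):
        self.sent = []  # stand-in for the eAPI connection

    def _configure(self, *commands):
        # A real implementation would send these to the node here.
        self.sent.append(list(commands))
        return True

    def create(self, vlan_id):
        if not 1 <= vlan_id <= 4094:
            raise ValueError("vlan_id must be in 1..4094")
        return self._configure("vlan %d" % vlan_id)

    def set_name(self, vlan_id, name):
        return self._configure("vlan %d" % vlan_id, "name %s" % name)


vlans = ToyVlans()
vlans.create(100)
vlans.set_name(100, "foo")
print(vlans.sent)  # [['vlan 100'], ['vlan 100', 'name foo']]
```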
# Installation
The source code for pyeapi is provided on Github at
http://github.com/arista-eosplus/pyeapi. All current development is done in
the develop branch. Stable released versions are tagged in the master branch
and uploaded to PyPi.
* To install the latest stable version of pyeapi, simply run ``pip install
pyeapi`` (or ``pip install --upgrade pyeapi``)
* To install the latest development version from Github, simply clone the
develop branch and run ``python setup.py install``
# Testing
The pyeapi library provides both unit tests and system tests. The unit tests
can be run without an EOS node. To run the system tests, you will need to
update the ``dut.conf`` file found in test/fixtures.
* To run the unit tests, simply run ``make unittest`` from the root of the
pyeapi source folder
* To run the system tests, simply run ``make systest`` from the root of the
pyeapi source folder
* To run all tests, use ``make tests`` from the root of the pyeapi source
folder
# Contributing
Contributing pull requests are gladly welcomed for this repository. Please
note that all contributions that modify the library behavior require
corresponding test cases otherwise the pull request will be rejected.
# License
New BSD, See [LICENSE](LICENSE) file
.. demo::
<button>Click me!</button>
=======================================================
CasperJS documentation modified by ZXC on May 15th.
=======================================================
CasperJS_ is a navigation scripting & testing utility for the PhantomJS_ (WebKit) and SlimerJS_ (Gecko) headless browsers, written in Javascript.
.. figure:: _static/images/casperjs-logo.png
:align: right
.. toctree::
:maxdepth: 2
01/index
.. _CasperJS: http://casperjs.org/
.. _PhantomJS: http://phantomjs.org/
.. _SlimerJS: http://slimerjs.org/
============================================================
iol-api: a Python library for the Invertir Online API
============================================================

.. image:: https://img.shields.io/pypi/v/iol_api.svg
   :target: https://pypi.python.org/pypi/iol_api

.. image:: https://readthedocs.org/projects/iol-api/badge/?version=latest
   :target: https://iol-api.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

What is iol-api
---------------

iol-api is an unofficial library for consuming data from the `Invertir Online API <https://api.invertironline.com>`_.

The library is designed to work asynchronously using aiohttp.

How to use it
-------------

You need an Invertir Online account with API usage enabled.

Then, install iol-api:

.. code-block:: console

    pip install iol-api

An example of how to use the library:

.. code-block:: python

    import asyncio

    from iol_api import IOLClient
    from iol_api.constants import Mercado

    async def main():
        iol_client = IOLClient('user@email.com', 'password')
        data = await iol_client.get_titulo('SUPV', Mercado.BCBA)
        print(data)

    asyncio.run(main())

iol-api returns a dictionary of native Python objects, converting any date into a ``datetime`` object.

**Disclaimer:** *iol-api is an unofficial library. It is in no way endorsed
by or associated with INVERTIR ONLINE or any affiliated organization.
Make sure you read and understand the terms of service of the underlying API
before using this package. The authors accept no responsibility for damages
that may arise from the use of this package. See the LICENSE file for more
details.*
IModelValidator Interface
=========================
Validates a model value.
Namespace
:dn:ns:`Microsoft.AspNetCore.Mvc.ModelBinding.Validation`
Assemblies
* Microsoft.AspNetCore.Mvc.Abstractions
----
.. contents::
:local:
Syntax
------
.. code-block:: csharp
public interface IModelValidator
.. dn:interface:: Microsoft.AspNetCore.Mvc.ModelBinding.Validation.IModelValidator
:hidden:
.. dn:interface:: Microsoft.AspNetCore.Mvc.ModelBinding.Validation.IModelValidator
Methods
-------
.. dn:interface:: Microsoft.AspNetCore.Mvc.ModelBinding.Validation.IModelValidator
:noindex:
:hidden:
.. dn:method:: Microsoft.AspNetCore.Mvc.ModelBinding.Validation.IModelValidator.Validate(Microsoft.AspNetCore.Mvc.ModelBinding.Validation.ModelValidationContext)
Validates the model value.
:param context: The :any:`Microsoft.AspNetCore.Mvc.ModelBinding.Validation.ModelValidationContext`\.
:type context: Microsoft.AspNetCore.Mvc.ModelBinding.Validation.ModelValidationContext
:rtype: System.Collections.Generic.IEnumerable<System.Collections.Generic.IEnumerable`1>{Microsoft.AspNetCore.Mvc.ModelBinding.Validation.ModelValidationResult<Microsoft.AspNetCore.Mvc.ModelBinding.Validation.ModelValidationResult>}
:return:
A list of :any:`Microsoft.AspNetCore.Mvc.ModelBinding.Validation.ModelValidationResult` indicating the results of validating the model value.
.. code-block:: csharp
IEnumerable<ModelValidationResult> Validate(ModelValidationContext context)
mxnet.ndarray.sparse.RowSparseNDArray.tanh
==========================================
.. currentmodule:: mxnet.ndarray.sparse
.. automethod:: RowSparseNDArray.tanh
=========
Packaging
=========
In this section I will talk about how to create a simple python package
that can be installed using ``python setup.py install``. These are the
basics of sharing your package with other users. In order to get your
package to install with ``pip`` you will need to complete the steps in
this guide and :doc:`pypi`. The reason is that this guide only shows
how to let someone install your package if they have the package
directory on their machine.
This guide was taken from several resources:
- `setup.py reference documentation <https://setuptools.readthedocs.io/en/latest/setuptools.html>`_
- `pypi sample project <https://github.com/pypa/sampleproject>`_
- `kennethreitz setup.py <https://github.com/kennethreitz/setup.py>`_
- `pypi supports markdown <https://dustingram.com/articles/2018/03/16/markdown-descriptions-on-pypi>`_
Is anyone else troubled by the fact that so many links are necessary
for simple python package development?!
Overview of a typical package::

    README.md
    CHANGELOG.md
    LICENSE.md
    setup.py
    <package>/
        __init__.py
--------
setup.py
--------
The most important file is the ``setup.py`` file. All required and
optional fields are given ``<required>`` and ``<optional>``
respectively.
.. code-block:: python
from setuptools import setup, find_packages
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='<required>',
version='<required>',
description='<required>',
long_description=long_description,
long_description_content_type="text/markdown",
url='<optional>',
author='<optional>',
author_email='<optional>',
license='<optional>',
classifiers=[
# Trove classifiers
# Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython'
],
keywords='<optional>',
packages=find_packages(exclude=['docs', 'tests']),
# setuptools > 38.6.0 needed for markdown README.md
setup_requires=['setuptools>=38.6.0'],
)
While the `setuptools docs
<https://setuptools.readthedocs.io/en/latest/setuptools.html>`_ detail
each option, I still needed some of the keywords explained in plainer
English. This is not an exhaustive list, so make sure to reference the
setuptools docs.
name
the name of package on pypi and when listed in pip. This is not
the name of the package that you import via python. The name of the
import will always be the name of the package directory for example
``pypkgtemp``.
version
make sure that the version numbers when pushing to pypi are unique. Also best to
follow `semantic versioning <https://semver.org/>`_.
description
keep it short and describe your package
long_description
make sure that you have created a README.md file in
the project directory. Why use a README.md instead of README.rst?
It's simple, Github, Bitbucket, Gitlab, etc. all will display a
README.md as the homepage.
url
link to git repo url
author
give yourself credit!
author_email
nobody should really use this address to contact you about the package
license
need help choosing a license? use `choosealicense <https://choosealicense.com/>`_
classifiers
one day it would be nice to know why they are important. A list of available `tags <https://pypi.python.org/pypi?%3Aaction=list_classifiers>`_.
keywords
will help with searching for package on pypi
packages
which packages to include in python packaging. using
``find_packages`` is very helpful.
setup_requires
list of packages required for setup. Note that versioning uses `environment markers <https://www.python.org/dev/peps/pep-0508/#environment-markers>`_.
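As a small illustration of the semantic-versioning convention mentioned under ``version``, here is a sketch of a version check. The regex is a simplification of the full grammar at semver.org, and the function name is made up for this example:

```python
import re

# MAJOR.MINOR.PATCH with an optional pre-release suffix such as rc1.
# Simplified: the real semver grammar also allows build metadata, etc.
_VERSION_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:(a|b|rc)(\d+))?$")

def parse_version(version):
    """Return a sortable (major, minor, patch) tuple, or raise ValueError."""
    match = _VERSION_RE.match(version)
    if match is None:
        raise ValueError("not a semantic version: %r" % version)
    return tuple(int(part) for part in match.groups()[:3])

assert parse_version("1.2.3") == (1, 2, 3)
# Tuples compare numerically, so 0.10.0 correctly sorts after 0.9.9.
assert parse_version("0.10.0") > parse_version("0.9.9")
```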
----------
LICENSE.md
----------
If you do not include a license, your work is copyrighted by default and
cannot legally be used by others. This is why it is so important to give
your work a license. A great resource for this is `choosealicense.com <https://choosealicense.com>`_.
---------
README.md
---------
A README is the first document someone sees when they visit your
project, so make it an inviting document with an overview of everything the
programmer needs.
------------
CHANGELOG.md
------------
A changelog is something that I did not really adopt in my projects
until I started forgetting what I had done in the past week. A git log
is not designed for this! Some great advice can be found in `Keep a
CHANGELOG <https://keepachangelog.com/en/0.3.0/>`_. Their motto is
"Don't let your friends dump git logs into CHANGELOGs™".
At this point you have a simple python package setup! Obviously the
readme, changelog, and license are all optional but HIGHLY
recommended. Next we will share our package with the whole world
through continuous deployment (:doc:`pypi`).
Prerequisites on Linux
======================
Contents
--------

.. contents::
   :local:
g++ 4.6.x
---------
The reason such a cutting-edge compiler is required is C++11. This was done primarily to reduce the required pieces of Boost. Installation may vary from platform to platform.
Ubuntu 12.04
^^^^^^^^^^^^
.. code-block:: bash
$ sudo apt-get install g++
Fedora18
^^^^^^^^
Use the package manager to install gcc and gcc-c++, or:
.. code-block:: bash
$ sudo yum install gcc gcc-c++
This will install or update the compilers, or indicate that they are already installed and at the latest version.
GNU autotools
-------------
Most developers already have this installed, but here are some platform specific hints.
Ubuntu 12.04
^^^^^^^^^^^^
.. code-block:: bash
$ sudo apt-get install autoconf libtool
Fedora18
^^^^^^^^
Use the package manager to install autoconf, m4, libtool and automake, or:
.. code-block:: bash
$ sudo yum install autoconf m4 libtool automake
Boost
-----
Most Linux distros have packages for at least Boost 1.50.0.
.. code-block:: bash
$ sudo apt-get install libboost1.50-all-dev
Fedora18
^^^^^^^^
Use the package manager to install boost, or:
.. code-block:: bash
$ sudo yum install boost
If you want to install the latest and greatest you need to build from source, but it's pretty easy:
.. code-block:: bash
$ wget http://sourceforge.net/projects/boost/files/boost/1.52.0/boost_1_52_0.tar.gz/download -O boost.tar.gz
$ tar -xvf boost.tar.gz
$ cd boost_1_52_0
$ ./bootstrap.sh
$ sudo ./b2 install --prefix=/usr --with-system --with-test
If you just want to go ahead and install ALL of Boost in case you want it for something else in the future, just leave off the ``--with-`` statements. The prefix given may vary for another Linux distro.
Hi, NumpyDL
===========
NumpyDL is a simple deep learning library based on pure Python/Numpy. NumpyDL
is a work in progress, input is welcome. The project is on
`GitHub <https://github.com/oujago/NumpyDL>`_.
The main features of NumpyDL are as follows:
1. *Pure* in Numpy
2. *Native* to Python
3. *Automatic differentiations* are basically supported
4. *Commonly used models* are provided: MLP, RNNs, LSTMs and CNNs
5. *API* similar to the ``Keras`` library
6. *Examples* for several AI tasks
7. *Application* for a toy chatbot
API References
==============
If you are looking for information on a specific function, class or
method, this part of the documentation is for you.
.. toctree::
:maxdepth: 2
api_references/layers
api_references/activations
api_references/initializations
api_references/objectives
api_references/optimizers
api_references/model
api_references/utils
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _GitHub: https://github.com/oujago/NumpyDL
====================================================
Copy-On-Write String
====================================================
Copy-On-Write string implementation according to `nim-lang/RFCs#221 <https://github.com/nim-lang/RFCs/issues/221>`_
.. image:: https://travis-ci.com/innogames/serveradmin.svg?branch=master
:target: https://travis-ci.com/innogames/serveradmin
:alt: Continuous Integration Status
.. image:: https://readthedocs.org/projects/serveradmin/badge/?version=latest
:target: https://serveradmin.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
Serveradmin
===========
Serveradmin is the central server database management system of InnoGames. It
has an HTTP web interface and an HTTP JSON API. Check out `the documentation
<https://serveradmin.readthedocs.io/en/latest/>`_ or watch `this FOSDEM 19
talk <https://archive.org/details/youtube-nWuisFTIgME>`_ for a deep dive into how
InnoGames works with serveradmin.
License
-------
The project is released under the MIT License. The MIT License is registered
with and approved by the `Open Source Initiative <https://opensource.org/licenses/MIT>`_.
VK bot usage guide
========================
---------------------------------------
How to encrypt your data in photos
---------------------------------------
In order to encrypt your data in photos you need to:
1) Say "Привет" ("Hello")
2) Say "Зашифровать в фото" ("Encrypt into photo")
3) In ONE message send the photo as a document and write the text to hide
4) You'll get an encrypted photo with your text hidden in it
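The bot's internals are not documented here, but the classic technique behind hiding text in an image is least-significant-bit (LSB) embedding. The sketch below shows the idea on a raw byte buffer; the function names are made up for illustration, and the real bot may work differently:

```python
def embed(pixels: bytes, message: bytes) -> bytes:
    """Hide `message` in the least significant bits of `pixels`.
    Requires len(pixels) >= 8 * len(message)."""
    # Flatten the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(pixels: bytes, length: int) -> bytes:
    """Recover `length` hidden bytes from the low bits of `pixels`."""
    bits = [p & 1 for p in pixels[: 8 * length]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, 8 * length, 8)
    )

cover = bytes(range(256))   # stand-in for image pixel data
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

Changing only the lowest bit of each byte leaves the image visually unchanged, which is why the carrier must be sent as a document: lossy re-encoding would destroy the hidden bits.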
---------------------------------------
How to decrypt your photos
---------------------------------------
In order to decrypt your photos you need to:
1) Say "Расшифровать фото" ("Decrypt photo")
2) Send an encrypted photo as a document
3) You'll get a response from the bot with the hidden text
---------------------------------------
How to encrypt your data in audio
---------------------------------------
In order to encrypt your data in audio you need to:
1) Say "Привет" ("Hello")
2) Say "Зашифровать в голосовом сообщении" ("Encrypt into a voice message")
3) Record and send an audio message
4) Either send a text message OR a file to hide; it should be smaller than the suggested size
5) You'll get an encrypted audio with your data hidden in it, plus the number of bytes you have to remember
---------------------------------------
How to decrypt your audio
---------------------------------------
In order to decrypt your audio you need to:
1) Say "Расшифровать аудио" ("Decrypt audio")
2) Send an encrypted audio as a .md file with a message ".FILE_FORMAT:BYTES_AMOUNT" if you encrypted a file, or just the number of bytes if it is text
3) You'll get a response from the bot with the hidden text or file
.. hazmat::
Asymmetric Utilities
====================
.. currentmodule:: cryptography.hazmat.primitives.asymmetric.utils
.. function:: decode_rfc6979_signature(signature)
Takes in :rfc:`6979` signatures generated by the DSA/ECDSA signers and
returns a tuple ``(r, s)``.
:param bytes signature: The signature to decode.
:returns: The decoded tuple ``(r, s)``.
:raises ValueError: Raised if the signature is malformed.
.. function:: encode_rfc6979_signature(r, s)
Creates an :rfc:`6979` byte string from raw signature values.
:param int r: The raw signature value ``r``.
:param int s: The raw signature value ``s``.
:return bytes: The encoded signature.
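For intuition about the byte format these helpers produce: the encoding is a DER ``SEQUENCE`` of two ``INTEGER`` values. The sketch below re-implements a round trip in pure Python for illustration only; it supports short-form lengths (signature bodies under 128 bytes), and real code should always use the library functions above:

```python
def _der_int(n):
    # Minimal big-endian encoding; a leading 0x00 keeps the value
    # non-negative when the high bit is set.
    body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body
    return bytes([0x02, len(body)]) + body  # 0x02 tags an INTEGER

def encode_signature(r, s):
    body = _der_int(r) + _der_int(s)
    if len(body) > 127:
        raise ValueError("long-form lengths not handled in this sketch")
    return bytes([0x30, len(body)]) + body  # 0x30 tags a SEQUENCE

def decode_signature(sig):
    if len(sig) < 2 or sig[0] != 0x30 or sig[1] != len(sig) - 2:
        raise ValueError("malformed signature")
    def read_int(off):
        if sig[off] != 0x02:
            raise ValueError("expected INTEGER")
        end = off + 2 + sig[off + 1]
        return int.from_bytes(sig[off + 2:end], "big"), end
    r, off = read_int(2)
    s, off = read_int(off)
    return (r, s)

sig = encode_signature(1234567890, 987654321)
assert decode_signature(sig) == (1234567890, 987654321)
```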
.. _flytectl_update_cluster-resource-attribute:
flytectl update cluster-resource-attribute
------------------------------------------
Updates matchable resources of cluster attributes
Synopsis
~~~~~~~~
Updates cluster resource attributes for a given project and domain combination, or additionally for a specific workflow name.
Updating the cluster resource attributes is only available from a generated file. See the get section for generating this file.
Here the update command takes its input for cluster resource attributes from the config file cra.yaml.
Example content of cra.yaml:
.. code-block:: yaml
domain: development
project: flytectldemo
attributes:
foo: "bar"
buzz: "lightyear"
::
flytectl update cluster-resource-attribute --attrFile cra.yaml
Updating cluster resource attributes for a project, domain, and workflow combination takes precedence over any other
resource attribute defined at the project and domain level.
This also completely overwrites any existing custom attributes for that project, domain, and workflow combination.
If an attribute is already set, it is preferable to get it, generate an attribute file, and then update that file with the new values.
Refer to the get cluster-resource-attribute section on how to generate this file.
The following updates the cluster resource attributes for the workflow core.control_flow.run_merge_sort.merge_sort in the flytectldemo project, development domain:
.. code-block:: yaml
domain: development
project: flytectldemo
workflow: core.control_flow.run_merge_sort.merge_sort
attributes:
foo: "bar"
buzz: "lightyear"
::
flytectl update cluster-resource-attribute --attrFile cra.yaml
Usage
::
flytectl update cluster-resource-attribute [flags]
Options
~~~~~~~
::
--attrFile string attribute file name to be used for updating attribute for the resource type.
-h, --help help for cluster-resource-attribute
Options inherited from parent commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
-c, --config string config file (default is $HOME/.flyte/config.yaml)
-d, --domain string Specifies the Flyte project's domain.
-o, --output string Specifies the output type - supported formats [TABLE JSON YAML DOT DOTURL]. NOTE: dot, doturl are only supported for Workflow (default "TABLE")
-p, --project string Specifies the Flyte project.
SEE ALSO
~~~~~~~~
* :doc:`flytectl_update` - Used for updating flyte resources eg: project.
.. source: doc/source/contributor/dev-quickstart.rst | repo: GURUIFENG9139/rocky-mogan | license: Apache-2.0
.. _dev-quickstart:
=====================
Developer Quick-Start
=====================
This is a quick walkthrough to get you started developing code for Mogan.
This assumes you are already familiar with submitting code reviews to
an OpenStack project.
The gate currently runs the unit tests under Python 2.7, Python 3.4
and Python 3.5. It is strongly encouraged to run the unit tests locally prior
to submitting a patch.
.. note::
Do not run unit tests on the same environment as devstack due to
conflicting configuration with system dependencies.
.. note::
This document is compatible with Python (3.5), Ubuntu (16.04) and Fedora (23).
When referring to different versions of Python and OS distributions, this
is explicitly stated.
.. seealso::
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Preparing Development System
============================
System Prerequisites
--------------------
The following packages cover the prerequisites for a local development
environment on most current distributions. Instructions for getting set up with
non-default versions of Python and on older distributions are included below as
well.
- Ubuntu/Debian::
sudo apt-get install build-essential python-dev libssl-dev python-pip libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev git git-review libffi-dev gettext ipmitool psmisc graphviz libjpeg-dev xinetd tftpd tftp
- Fedora 21/RHEL7/CentOS7::
sudo yum install python-devel openssl-devel python-pip mysql-devel libxml2-devel libxslt-devel postgresql-devel git git-review libffi-devel gettext ipmitool psmisc graphviz gcc libjpeg-turbo-devel
If using RHEL and yum reports "No package python-pip available" and "No
package git-review available", use the EPEL software repository.
Instructions can be found at `<https://fedoraproject.org/wiki/EPEL/FAQ#howtouse>`_.
- Fedora 22 or higher::
sudo dnf install python-devel openssl-devel python-pip mysql-devel libxml2-devel libxslt-devel postgresql-devel git git-review libffi-devel gettext ipmitool psmisc graphviz gcc libjpeg-turbo-devel
Additionally, if using Fedora 23, ``redhat-rpm-config`` package should be
installed so that development virtualenv can be built successfully.
- openSUSE/SLE 12::
sudo zypper install git git-review libffi-devel libmysqlclient-devel libopenssl-devel libxml2-devel libxslt-devel postgresql-devel python-devel python-nose python-pip gettext-runtime psmisc
Graphviz is only needed for generating the state machine diagram. To install it
on openSUSE or SLE 12, see
`<https://software.opensuse.org/download.html?project=graphics&package=graphviz>`_.
(Optional) Installing Py34 requirements
---------------------------------------
If you need Python 3.4, follow the instructions above to install prerequisites
and *additionally* install the following packages:
- On Ubuntu 14.x/Debian::
sudo apt-get install python3-dev
- On Ubuntu 16.04::
wget https://www.python.org/ftp/python/3.4.4/Python-3.4.4.tgz
sudo tar xzf Python-3.4.4.tgz
cd Python-3.4.4
sudo ./configure
sudo make altinstall
# This will install Python 3.4 without replacing 3.5. To check if 3.4 was installed properly
run this command:
python3.4 -V
- On Fedora 21/RHEL7/CentOS7::
sudo yum install python3-devel
- On Fedora 22 and higher::
sudo dnf install python3-devel
(Optional) Installing Py35 requirements
---------------------------------------
If you need Python 3.5 support on an older distro that does not already have
it, follow the instructions for installing prerequisites above and
*additionally* run the following commands.
- On Ubuntu 14.04::
wget https://www.python.org/ftp/python/3.5.2/Python-3.5.2.tgz
sudo tar xzf Python-3.5.2.tgz
cd Python-3.5.2
sudo ./configure
sudo make altinstall
# This will install Python 3.5 without replacing 3.4. To check if 3.5 was installed properly
run this command:
python3.5 -V
- On Fedora 23::
sudo dnf install dnf-plugins-core
sudo dnf copr enable mstuchli/Python3.5
dnf install python35-python3
Python Prerequisites
--------------------
If your distro has at least tox 1.8, use similar command to install
``python-tox`` package. Otherwise install this on all distros::
sudo pip install -U tox
You may need to explicitly upgrade virtualenv if you've installed the one
from your OS distribution and it is too old (tox will complain). You can
upgrade it individually, if you need to::
sudo pip install -U virtualenv
Running Unit Tests Locally
==========================
If you haven't already, pull the Mogan source code directly from git::
# from your home or source directory
cd ~
git clone https://git.openstack.org/openstack/mogan
cd mogan
Running Unit and Style Tests
----------------------------
All unit tests should be run using tox. To run Mogan's entire test suite::
# to run the py27, py34, py35 unit tests, and the style tests
tox
To run a specific test or tests, use the "-e" option followed by the tox target
name. For example::
# run the unit tests under py27 and also run the pep8 tests
tox -epy27 -epep8
.. note::
If tests are run under py27 and then run under py34 or py35 the following error may occur::
db type could not be determined
ERROR: InvocationError: '/home/ubuntu/mogan/.tox/py35/bin/ostestr'
To overcome this error remove the file `.testrepository/times.dbm`
and then run the py34 or py35 test.
You may pass options to the test programs using positional arguments.
To run a specific unit test, this passes the -r option and desired test
(regex string) to `os-testr <https://pypi.org/project/os-testr>`_::
# run a specific test for Python 2.7
tox -epy27 -- -r test_name
Debugging unit tests
--------------------
In order to break into the debugger from a unit test we need to insert
a breaking point to the code:
.. code-block:: python
import pdb; pdb.set_trace()
Then run ``tox`` with the debug environment as one of the following::
tox -e debug
tox -e debug test_file_name
tox -e debug test_file_name.TestClass
tox -e debug test_file_name.TestClass.test_name
For more information see the `oslotest documentation
<https://docs.openstack.org/oslotest/latest/user/features.html#debugging-with-oslo-debug-helper>`_.
Additional Tox Targets
----------------------
There are several additional tox targets not included in the default list, such
as the target which builds the documentation site. See the ``tox.ini`` file
for a complete listing of tox targets. These can be run directly by specifying
the target name::
# generate the documentation pages locally
tox -edocs
# generate the sample configuration file
tox -egenconfig
Deploying Mogan with DevStack
=============================
DevStack may be configured to deploy Mogan. It is easy to develop Mogan
with the devstack environment. Mogan depends on Ironic, Neutron, and Glance
to create and schedule virtual machines that simulate bare metal servers.
It is highly recommended to deploy on an expendable virtual machine and not
on your personal work station. Deploying Mogan with DevStack requires a
machine running Ubuntu 14.04 (or later) or Fedora 20 (or later). Make sure
your machine is fully up to date and has the latest packages installed before
beginning this process.
.. seealso::
https://docs.openstack.org/devstack/latest/
Devstack will no longer create the user 'stack' with the desired
permissions, but does provide a script to perform the task::
git clone https://git.openstack.org/openstack-dev/devstack.git devstack
sudo ./devstack/tools/create-stack-user.sh
Switch to the stack user and clone DevStack::
sudo su - stack
git clone https://git.openstack.org/openstack-dev/devstack.git devstack
Create devstack/local.conf with minimal settings required to enable Mogan::
cd devstack
cat >local.conf <<END
[[local|localrc]]
# Credentials
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
SWIFT_HASH=password
SWIFT_TEMPURL_KEY=password
# Enable Ironic plugin
enable_plugin ironic https://git.openstack.org/openstack/ironic
# Enable Mogan plugin
enable_plugin mogan https://git.openstack.org/openstack/mogan
ENABLED_SERVICES=g-api,g-reg,q-agt,q-dhcp,q-l3,q-svc,key,mysql,rabbit,ir-api,ir-cond,s-account,s-container,s-object,s-proxy,tempest
# Swift temp URL's are required for agent_* drivers.
SWIFT_ENABLE_TEMPURLS=True
# Set resource_classes for nodes to use placement service
IRONIC_USE_RESOURCE_CLASSES=True
# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_VM_SSH_PORT=22
IRONIC_BAREMETAL_BASIC_OPS=True
# Enable Ironic drivers.
IRONIC_ENABLED_DRIVERS=fake,agent_ipmitool,pxe_ipmitool
# Change this to alter the default driver for nodes created by devstack.
# This driver should be in the enabled list above.
IRONIC_DEPLOY_DRIVER=agent_ipmitool
# Using Ironic agent deploy driver by default, so don't use whole disk
# image in tempest.
IRONIC_TEMPEST_WHOLE_DISK_IMAGE=False
# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1280
IRONIC_VM_SPECS_DISK=10
# To build your own IPA ramdisk from source, set this to True
IRONIC_BUILD_DEPLOY_RAMDISK=False
# Log all output to files
LOGFILE=$HOME/devstack.log
LOGDIR=$HOME/logs
IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs
END
If you want to use multi-tenant networking in Ironic, configure
local.conf as follows::
cd devstack
cat >local.conf <<END
[[local|localrc]]
PIP_UPGRADE=True
# Credentials
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
SWIFT_HASH=password
SWIFT_TEMPURL_KEY=password
# Enable Ironic plugin
enable_plugin ironic https://git.openstack.org/openstack/ironic
# Enable Mogan plugin
enable_plugin mogan https://git.openstack.org/openstack/mogan
# Install networking-generic-switch Neutron ML2 driver that interacts with OVS
enable_plugin networking-generic-switch https://git.openstack.org/openstack/networking-generic-switch
ENABLED_SERVICES=g-api,g-reg,q-agt,q-dhcp,q-l3,q-svc,key,mysql,rabbit,ir-api,ir-cond,s-account,s-container,s-object,s-proxy,tempest
# Swift temp URL's are required for agent_* drivers.
SWIFT_ENABLE_TEMPURLS=True
# Add link local info when registering Ironic node
IRONIC_USE_LINK_LOCAL=True
IRONIC_ENABLED_NETWORK_INTERFACES=neutron, flat
IRONIC_NETWORK_INTERFACE=neutron
#Networking configuration
OVS_PHYSICAL_BRIDGE=brbm
PHYSICAL_NETWORK=mynetwork
IRONIC_PROVISION_NETWORK_NAME=ironic-provision
IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24
IRONIC_PROVISION_SUBNET_GATEWAY=10.0.5.1
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
Q_ML2_TENANT_NETWORK_TYPE=vlan
TENANT_VLAN_RANGE=100:150
Q_USE_PROVIDERNET_FOR_PUBLIC=False
# Set resource_classes for nodes to use placement service
IRONIC_USE_RESOURCE_CLASSES=True
# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_VM_SSH_PORT=22
IRONIC_BAREMETAL_BASIC_OPS=True
# Enable Ironic drivers.
IRONIC_ENABLED_DRIVERS=fake,agent_ipmitool,pxe_ipmitool
# Change this to alter the default driver for nodes created by devstack.
# This driver should be in the enabled list above.
IRONIC_DEPLOY_DRIVER=agent_ipmitool
# Using Ironic agent deploy driver by default, so don't use whole disk
# image in tempest.
IRONIC_TEMPEST_WHOLE_DISK_IMAGE=False
# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1280
IRONIC_VM_SPECS_DISK=10
# To build your own IPA ramdisk from source, set this to True
IRONIC_BUILD_DEPLOY_RAMDISK=False
# Log all output to files
LOGFILE=$HOME/devstack.log
LOGDIR=$HOME/logs
LOG_COLOR=True
IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs
END
.. note::
Git protocol requires access to port 9418, which is not a standard port that
corporate firewalls always allow. If you are behind a firewall or on a proxy that
blocks Git protocol, modify the ``enable_plugin`` line to use ``https://`` instead
of ``git://`` and add ``GIT_BASE=https://git.openstack.org`` to the credentials::
GIT_BASE=https://git.openstack.org
# Enable Mogan plugin
enable_plugin mogan https://git.openstack.org/openstack/mogan
Run stack.sh::
./stack.sh
Source credentials, and spawn a server as the ``demo`` user::
source ~/devstack/openrc
# query the image id of the default cirros image
image=$(openstack image show $DEFAULT_IMAGE_NAME -f value -c id)
# query the private network id
net=$(openstack network show private -f value -c id)
# spawn a server
openstack baremetalcompute server create --flavor $MOGAN_DEFAULT_FLAVOR --nic net-id=$net --image $image test
Building developer documentation
================================
If you would like to build the documentation locally, eg. to test your
documentation changes before uploading them for review, run these
commands to build the documentation set:
- On your local machine::
# activate your development virtualenv
source .tox/venv/bin/activate
# build the docs
tox -edocs
#Now use your browser to open the top-level index.html located at:
mogan/doc/build/html/index.html
- On a remote machine::
# Go to the directory that contains the docs
cd ~/mogan/doc/source/
# Build the docs
tox -edocs
# Change directory to the newly built HTML files
cd ~/mogan/doc/build/html/
# Create a server using python on port 8000
python -m SimpleHTTPServer 8000
#Now use your browser to open the top-level index.html located at:
http://host_ip:8000
.. source: README.rst | repo: jinzo/django-pluggable-filebrowser | license: BSD-3-Clause
Django Pluggable FileBrowser
============================
**Media-Management with theme support**.
The Django Pluggable FileBrowser is an extension to the `Django <http://www.djangoproject.com>`_ administration interface in order to:
* browse directories on your server and upload/delete/edit/rename files.
* include images/documents to your models/database using the ``FileBrowseField``.
* select images/documents with TinyMCE.
Requirements
------------
Django Pluggable FileBrowser 3.5 requires
* Django (1.4/1.5/1.6) (http://www.djangoproject.com)
* Pillow (https://github.com/python-imaging/Pillow)
Differences from upstream
-------------------------
Django Pluggable Filebrowser is a fork of `Django Filebrowser <https://github.com/sehmaschine/django-filebrowser>`_ with the aim of making the admin interfaces and upload frontends selectable and easily changeable.
Currently only the stock Django admin interface and Grappelli (2.4, 2.5) are supported out of the box, but adding your own interfaces is straightforward.
Further plans include support for pluggable upload frontends and django-xadmin support.
The project can be used as a drop in replacement for Django Filebrowser.
Installation
------------
Stable::

    pip install django-pluggable-filebrowser

Development::

    pip install -e git+git@github.com:jinzo/django-pluggable-filebrowser.git#egg=django-pluggable-filebrowser
Documentation
-------------
Build it from the sources.
Translation
-----------
You can help with translating the upstream project at:
https://www.transifex.com/projects/p/django-filebrowser/
Releases
--------
* FileBrowser 3.5.7 (Development Version, not yet released, see Branch Stable/3.5.x)
* FileBrowser 3.5.6 (April 16th, 2014): Compatible with Django 1.4/1.5/1.6
Older versions are available at GitHub, but are not supported anymore.
.. source: docs/index.rst | repo: zonca/iris_pipeline | license: BSD-3-Clause
***************************
iris_pipeline Documentation
***************************
The IRIS Data Reduction System is based on the ``stpipe`` package released by Space Telescope
for the James Webb Space Telescope.
With ``stpipe`` we can configure each step of a pipeline through one or more text based .INI style files,
then we provide one input FITS file or a set of multiple inputs defined in JSON (named `Associations <https://jwst-pipeline.readthedocs.io/en/latest/jwst/associations/overview.html>`_).
Custom analysis steps and pipelines for IRIS are defined as classes in the current repository, ``iris_pipeline``.
Then execute the pipeline from the command line using the ``tmtrun`` executable or
directly through the Python library.
The pipeline also dynamically interfaces with the ``CRDS``, the Calibration References Data System,
to retrieve the best calibration datasets given the metadata in the headers of the input FITS files.
The ``CRDS`` client can also load data from a local cache, so for now we do not have an actual
``CRDS`` server and we rely only on a local cache.
The ``CRDS`` is not under our control, the Thirty Meter Telescope will deliver a database system
to replace the ``CRDS`` and we can adapt our code to that in the future.
Getting Started
===============
.. toctree::
:maxdepth: 2
getting-started
Example run
===========
.. toctree::
:maxdepth: 2
example-run
Design
======
.. toctree::
:maxdepth: 2
design
Calibration and CRDS
====================
.. toctree::
:maxdepth: 2
calibration-database
Algorithms
==========
.. toctree::
:maxdepth: 1
available-steps
algorithms
Subarrays
=========
.. toctree::
:maxdepth: 1
subarrays
Reference/API
=============
.. automodapi:: iris_pipeline
.. source: docs/source/api/grid/deploy/heroku_node/index.rst | repo: H4LL/PyGrid | license: Apache-2.0
:mod:`grid.deploy.heroku_node`
==============================
.. py:module:: grid.deploy.heroku_node
Module Contents
---------------
.. py:class:: HerokuNodeDeployment(grid_name: str, verbose=True, check_deps=True, app_type: str = 'websocket', dev_user: str = 'OpenMined', branch: set = 'dev', env_vars={})
Bases: :class:`grid.deploy.BaseDeployment`
An abstraction of the Heroku grid node deployment process. The purpose of this class is to set all configuration needed to deploy a grid node application on the Heroku platform.
.. method:: deploy(self)
Method to deploy Grid Node app on heroku platform.
.. method:: __run_heroku_commands(self)
Add a set of commands/logs used to deploy grid node app on heroku platform.
.. method:: __check_heroku_dependencies(self)
Check specific dependencies to perform grid node deploy on heroku platform.
.. source: docs/source/_architecture/_data_services/_database/_views/view_foreign_flavors.rst | repo: hep-gc/cloud-scheduler-2 | license: Apache-2.0
.. File generated by /opt/cloudscheduler/utilities/schema_doc - DO NOT EDIT
..
.. To modify the contents of this file:
.. 1. edit the template file ".../cloudscheduler/docs/schema_doc/views/view_foreign_flavors.yaml"
.. 2. run the utility ".../cloudscheduler/utilities/schema_doc"
..
Database View: view_foreign_flavors
===================================
This view was created for testing purposes, but the management of foreign
VMs has changed since the creation of the view. It is probably
no longer required and should be deprecated.
Columns:
^^^^^^^^
* **group_name** (String(32)):
* **cloud_name** (String(32)):
* **authurl** (String(128)):
* **region** (String(32)):
* **project** (String(128)):
* **flavor_id** (String(128)):
* **count** (Integer):
* **name** (String(128)):
* **cores** (Integer):
* **ram** (Float):
.. source: docs/best-practices/Contributing-to-Hibernate.rst | repo: apidae-tourisme/owsi-core-parent-apidae | license: Apache-2.0
Contributing to Hibernate
=========================
Resources
---------
* `Full contribution procedure <https://github.com/hibernate/hibernate-orm/wiki/Contributing-Code>`_
* `How to develop using Eclipse <https://developer.jboss.org/wiki/ContributingToHibernateUsingEclipse>`_ (see below for more concrete explanations)
Developing
----------
Hibernate uses Gradle. This means some pain if you haven't had to work with it in Eclipse, ever.
In order to build using gradle:
* Check that your default JRE is recent enough (tested with JRE8 on Hibernate 5.0, it should work)
* Generate the Eclipse `.project` files: `./gradlew clean eclipse --refresh-dependencies`
* Install the Gradle Eclipse plugin from this update site: `http://dist.springsource.com/release/TOOLS/gradle`
* Import the projects **as standard Eclipse projects** (Gradle import seems to mess things up, at least with Eclipse 4.3)
* Pray that everything builds right. I personally couldn't make every project compile, but what I had to work on did, so...
Testing
-------
Running tests locally
~~~~~~~~~~~~~~~~~~~~~
Launch your test this way (example for a test in hibernate-core):
.. code-block:: bash
./gradlew :hibernate-core:test --tests 'MyTestClassName'
Running tests locally, with database vendor dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If your test relies on a specific database vendor, you'll need to do the following in order to run it locally (examples for PostgreSQL):
* Specify the Dialect to use with the following option `-Dhibernate.dialect=org.hibernate.dialect.PostgreSQL9Dialect`
* Specify JDBC information: `-Dhibernate.connection.url=...`, `-Dhibernate.connection.username=...`, `-Dhibernate.connection.password=...`, `-Dhibernate.connection.driver_class=...`
* Provide the vendor-specific driver jar. I couldn't find a way to do it other than changing the `hibernate-core/hibernate-core.gradle` file and adding this line in the `dependencies` block: `testCompile( 'org.postgresql:postgresql:9.4-1200-jdbc41' )`
You'll end up launching your test this way (example for a test in hibernate-core):
.. code-block:: bash
./gradlew -Dhibernate.dialect=org.hibernate.dialect.PostgreSQL9Dialect -Dhibernate.connection.url=jdbc:postgresql://localhost:5432/hibernate_test -Dhibernate.connection.username=hibernate -Dhibernate.connection.password=hibernate -Dhibernate.connection.driver_class=org.postgresql.Driver :hibernate-core:test --tests 'MyTestClassName'
.. source: docs/security_mapping/components/auditd/response_to_audit_processing_failures_audit_storage_capacity/control.rst | repo: trevor-vaughan/simp-doc | license: Apache-2.0
Response To Audit Processing Failures - Audit Storage Capacity
--------------------------------------------------------------
Auditd has been configured to handle audit failures or potential failures due to
storage capacity. Those settings include:
- Send a warning to syslog when there is less than 75Mb of space on the audit partition (space_left).
- Suspend the audit daemon when there is less than 50Mb of space left on the audit partition (admin_space_left).
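The settings described above correspond to standard options in ``/etc/audit/auditd.conf``. The fragment below is a minimal illustrative sketch (the option names are the standard auditd ones, but the exact file generated by SIMP may differ):

```ini
# Warn via syslog when less than 75 MB remains on the audit partition
space_left = 75
space_left_action = syslog

# Suspend the audit daemon when less than 50 MB remains
admin_space_left = 50
admin_space_left_action = suspend
```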
References: :ref:`AU-5 (1)`
.. source: doc/complex_systems.rst | repo: ComplexNetTSP/CooperativeNetworking | license: MIT
complex_systems Package
=======================
:mod:`complex_systems` Package
------------------------------
.. automodule:: complex_systems.__init__
:members:
:undoc-members:
:show-inheritance:
:mod:`dygraph` Module
---------------------
.. automodule:: complex_systems.dygraph
:members:
:undoc-members:
:show-inheritance:
Subpackages
-----------
.. toctree::
complex_systems.mobility
complex_systems.spatial
.. source: source/docs/comprehensions/set_comprehension.rst | repo: LarryBrin/Python-Reference | license: MIT
====================
{} set comprehension
====================
Description
===========
Returns a set based on existing iterables.
Syntax
======
**{expression(variable) for variable in input_set [predicate][, …]}**
*expression*
Optional. An output expression producing members of the new set from members of the input set that satisfy the predicate expression.
*variable*
Required. Variable representing members of an input set.
*input_set*
Required. Represents the input set.
*predicate*
Optional. Expression acting as a filter on members of the input set.
*[, …]*
Optional. Another nested comprehension.
Return Value
============
**set**
Time Complexity
===============
#TODO
Example 1
=========
>>> {s for s in [1, 2, 1, 0]}
set([0, 1, 2])
>>> {s**2 for s in [1, 2, 1, 0]}
set([0, 1, 4])
>>> {s**2 for s in range(10)}
set([0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
Example 2
=========
>>> {s for s in [1, 2, 3] if s % 2}
set([1, 3])
Example 3
=========
>>> {(m, n) for n in range(2) for m in range(3, 5)}
set([(3, 0), (3, 1), (4, 0), (4, 1)])
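A set comprehension is shorthand for building a set with an explicit loop. The sketch below (written with Python 3 set literals, whereas the reprs above use the older ``set([...])`` style) shows how the expression and predicate parts of the syntax map onto loop statements:

```python
numbers = [1, 2, 1, 0, 3]

# {s**2 for s in numbers if s % 2} unrolls to:
result = set()
for s in numbers:          # for variable in input_set
    if s % 2:              # [predicate]
        result.add(s**2)   # expression(variable)

assert result == {s**2 for s in numbers if s % 2}
assert result == {1, 9}
```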
See Also
========
#TODO
86fb4684114496ce8abad50cf4a95ae09d0fe9ef | 91 | rst | reStructuredText | doc/source/api/optimizer.rst | adrenadine33/graphvite | 34fc203f96ff13095073c605ecfcae32213e7f6a | [
"Apache-2.0"
] | 1,067 | 2019-07-16T21:02:12.000Z | 2022-03-30T10:51:55.000Z | doc/source/api/optimizer.rst | adrenadine33/graphvite | 34fc203f96ff13095073c605ecfcae32213e7f6a | [
"Apache-2.0"
] | 93 | 2019-08-06T16:28:48.000Z | 2022-03-30T13:53:21.000Z | doc/source/api/optimizer.rst | adrenadine33/graphvite | 34fc203f96ff13095073c605ecfcae32213e7f6a | [
"Apache-2.0"
] | 152 | 2019-08-05T14:57:03.000Z | 2022-03-31T08:13:39.000Z | graphvite.optimizer
===================
.. automodule:: graphvite.optimizer
:members:
.. currentmodule:: bacon
Key code reference
------------------
.. autoclass:: Keys
:members:
:undoc-members:
:noindex:
Welcome
-------
Welcome to Quantopian. In this tutorial, we introduce Quantopian, the
problems it aims to solve, and the tools it provides to help you solve
those problems. At the end of this lesson, you should have a high level
understanding of what you can do with Quantopian.
The focus of the tutorial is to get you started, not to make you an
expert Quantopian user. If you already feel comfortable with the basics
of Quantopian, there are other resources to help you learn more about
Quantopian’s tools:

- `Documentation <https://factset.quantopian.com/docs/index>`__
- `Pipeline Tutorial <https://factset.quantopian.com/tutorials/pipeline>`__
- `Alphalens Tutorial <https://factset.quantopian.com/tutorials/alphalens>`__
All you need to get started on this tutorial is some basic
`Python <https://docs.python.org/3.5/>`__ programming skills.
Note: You are currently viewing this tutorial lesson in the Quantopian
**Research** environment. Research is a hosted Jupyter notebook
environment that allows you to interactively run Python code. Research
comes with a mix of proprietary and open-source Python libraries
pre-installed. To learn more about Research, see the
`documentation <https://factset.quantopian.com/docs/user-guide/environments/research>`__.
You can follow along with the code in this notebook by cloning it. Each
cell of code (grey boxes) can be run by pressing Shift + Enter. **This
tutorial notebook is read-only**. If you want to make changes to the
notebook, create a new notebook and copy the code from this tutorial.
What is Quantopian?
-------------------
Quantopian is a cloud-based software platform that allows you to
research cross-sectional factors in developed and emerging equity
markets around the world using Python. Quantopian makes it easy to
iterate on ideas by supplying a fast, uniform API on top of all sorts of
`financial
data <https://factset.quantopian.com/docs/data-reference/overview>`__.
Additionally, Quantopian provides tools to help you `upload your own
financial
datasets <https://factset.quantopian.com/docs/user-guide/tools/self-serve>`__,
analyze the efficacy of your factors, and download your work into a
local environment so that you can integrate it with other systems.
Typically, researching cross-sectional equity factors involves the
following steps:

1. Define a universe of assets.
2. Define a factor over the universe.
3. Test the factor.
4. Export factor data for integration with another system or application.
On Quantopian, steps 1 and 2 are achieved using `the Pipeline
API <https://factset.quantopian.com/docs/user-guide/tools/pipeline>`__,
step 3 is done using a tool called
`Alphalens <https://factset.quantopian.com/docs/user-guide/tools/alphalens>`__,
and step 4 is done using a tool called
`Aqueduct <https://factset.quantopian.com/docs/user-guide/tools/aqueduct>`__.
The rest of this tutorial will give a brief walkthrough of an end-to-end
factor research workflow on Quantopian.
Research Environment
~~~~~~~~~~~~~~~~~~~~
The code in this tutorial can be run in Quantopian’s **Research**
environment (this notebook is currently running in Research). Research
is a hosted
`Jupyter <https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html>`__
notebook environment that allows you to interactively run Python code.
Research comes with a mix of proprietary and open-source Python
libraries pre-installed. To learn more about Research, see the
`documentation <https://factset.quantopian.com/docs/user-guide/environments/research>`__.
Press **Shift+Enter** to run each cell of code (grey boxes).
Step 1 - Define a universe of assets.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first step to researching a cross-sectional equity factor is to
select a “universe” of equities over which our factor will be defined.
In this context, a universe represents the set of equities we want to
consider when performing computations later. On Quantopian, defining a
universe is done using `the Pipeline
API <https://factset.quantopian.com/docs/user-guide/tools/pipeline>`__.
Later on, we will use the same API to compute factors over the equities
in this universe.
The Pipeline API provides a uniform interface to several `built-in
datasets <https://factset.quantopian.com/docs/data-reference/overview>`__,
as well as any `custom
datasets <https://factset.quantopian.com/custom-datasets>`__ that we
upload to our account. Pipeline makes it easy to define computations or
expressions using built-in and custom data. For example, the following
code snippet imports two built-in datasets, `FactSet
Fundamentals <https://factset.quantopian.com/docs/data-reference/factset_fundamentals>`__
and `FactSet Equity
Metadata <https://factset.quantopian.com/docs/data-reference/equity_metadata>`__,
and uses them to define an equity universe.
.. code:: ipython3
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.factset import Fundamentals, EquityMetadata
is_share = EquityMetadata.security_type.latest.eq('SHARE')
is_primary = EquityMetadata.is_primary.latest
primary_shares = (is_share & is_primary)
market_cap = Fundamentals.mkt_val.latest
universe = market_cap.top(1000, mask=primary_shares)
The above example defines a universe to be the top 1000 primary issue
common stocks ranked by market cap. Universes can be defined using any
of the data available on Quantopian. Additionally, you can upload your
own data, such as index constituents or another custom universe to the
platform using the Self-Serve Data tool. To learn more about uploading a
custom dataset, see the `Self-Serve Data
documentation <https://factset.quantopian.com/docs/user-guide/tools/self-serve>`__.
For now, we will stick with the universe definition above.
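Conceptually, ``market_cap.top(1000, mask=primary_shares)`` means "filter to primary shares, then keep the N largest by market cap". A plain-Python sketch of that selection logic, using toy data and hypothetical tickers (not the Quantopian API):

```python
# (ticker, market cap, is primary share) -- made-up values for illustration.
stocks = [
    ('AAA', 50e9, True),
    ('BBB', 70e9, False),  # excluded: not a primary share
    ('CCC', 30e9, True),
    ('DDD', 10e9, True),
]

def top_universe(stocks, n):
    # Filter first (the `mask`), then rank by market cap and take the top n.
    primary = [s for s in stocks if s[2]]
    primary.sort(key=lambda s: s[1], reverse=True)
    return [ticker for ticker, _, _ in primary[:n]]

print(top_universe(stocks, 2))  # ['AAA', 'CCC']
```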
Step 2 - Define a factor.
~~~~~~~~~~~~~~~~~~~~~~~~~
After defining a universe, the next step is to define a factor for
testing. On Quantopian, a factor is a computation that produces
numerical values at a regular frequency for all assets in a universe.
Similar to step 1, we will use `the Pipeline
API <https://factset.quantopian.com/docs/user-guide/tools/pipeline>`__
to define factors. In addition to providing a fast, uniform API on top
of pre-integrated and custom datasets, Pipeline also provides a set of
built-in
`classes <https://factset.quantopian.com/docs/api-reference/pipeline-api-reference#built-in-factors>`__
and
`methods <https://factset.quantopian.com/docs/api-reference/pipeline-api-reference#methods-that-create-factors>`__
that can be used to quickly define factors. For example, the following
code snippet defines a momentum factor using fast and slow moving
average computations.
.. code:: ipython3
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import EquityPricing
from quantopian.pipeline.factors import SimpleMovingAverage
# 1-month (21 trading day) moving average factor.
fast_ma = SimpleMovingAverage(inputs=[EquityPricing.close], window_length=21)
# 6-month (126 trading day) moving average factor.
slow_ma = SimpleMovingAverage(inputs=[EquityPricing.close], window_length=126)
# Divide fast_ma by slow_ma to get momentum factor and z-score.
momentum = fast_ma / slow_ma
momentum_factor = momentum.zscore()
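For a single equity, the factor above boils down to a short moving average divided by a long moving average (the z-score then standardizes that ratio across the universe each day). A toy, pure-Python illustration with made-up prices:

```python
prices = [10.0, 10.5, 11.0, 10.8, 11.2, 11.5]

def sma(series, window):
    # Simple moving average over the trailing `window` observations.
    return sum(series[-window:]) / window

# Short-window average relative to long-window average: a value above 1
# suggests recent prices sit above the longer-term trend.
momentum = sma(prices, 2) / sma(prices, 4)
print(round(momentum, 4))  # 1.0202
```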
Now that we defined a universe and a factor, we can choose a market and
time period and simulate the factor. One of the defining features of the
Pipeline API is that it allows us to define universes and factors using
high level terms, without having to worry about common data engineering
problems like
`adjustments <https://factset.quantopian.com/docs/data-reference/overview#corporate-action-adjustments>`__,
`point-in-time
data <https://factset.quantopian.com/docs/data-reference/overview#point-in-time-data>`__,
`symbol
mapping <https://factset.quantopian.com/docs/data-reference/overview#asset-identifiers>`__,
delistings, and data alignment. Pipeline does all of that work behind
the scenes and allows us to focus our time on building and testing
factors.
The code below creates a Pipeline instance that adds our factor as a
column and screens down to equities in our universe. The pipeline is then
run over the US equities market from 2016 to 2019.
.. code:: ipython3
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import EquityPricing
from quantopian.pipeline.data.factset import Fundamentals, EquityMetadata
from quantopian.pipeline.domain import US_EQUITIES, ES_EQUITIES
from quantopian.pipeline.factors import SimpleMovingAverage
is_share = EquityMetadata.security_type.latest.eq('SHARE')
is_primary = EquityMetadata.is_primary.latest
primary_shares = (is_share & is_primary)
market_cap = Fundamentals.mkt_val.latest
universe = market_cap.top(1000, mask=primary_shares)
# 1-month moving average factor.
fast_ma = SimpleMovingAverage(inputs=[EquityPricing.close], window_length=21)
# 6-month moving average factor.
slow_ma = SimpleMovingAverage(inputs=[EquityPricing.close], window_length=126)
# Divide fast_ma by slow_ma to get momentum factor and z-score.
momentum = fast_ma / slow_ma
momentum_factor = momentum.zscore()
# Create a US equities pipeline with our momentum factor, screening down to our universe.
pipe = Pipeline(
columns={
'momentum_factor': momentum_factor,
},
screen=momentum_factor.percentile_between(50, 100, mask=universe),
domain=US_EQUITIES,
)
# Run the pipeline from 2016 to 2019 and display the first few rows of output.
from quantopian.research import run_pipeline
factor_data = run_pipeline(pipe, '2016-01-01', '2019-01-01')
print("Result contains {} rows of output.".format(len(factor_data)))
factor_data.head()
.. parsed-literal::
.. raw:: html
<b>Pipeline Execution Time:</b> 8.43 Seconds
.. parsed-literal::
Result contains 376888 rows of output.
.. raw:: html
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th></th>
<th>momentum_factor</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="5" valign="top">2016-01-04 00:00:00+00:00</th>
<th>Equity(67 [ADSK])</th>
<td>1.211037</td>
</tr>
<tr>
<th>Equity(76 [TAP])</th>
<td>1.252325</td>
</tr>
<tr>
<th>Equity(114 [ADBE])</th>
<td>0.816440</td>
</tr>
<tr>
<th>Equity(161 [AEP])</th>
<td>0.407423</td>
</tr>
<tr>
<th>Equity(185 [AFL])</th>
<td>0.288431</td>
</tr>
</tbody>
</table>
</div>
Running the above code in Research will produce a pandas dataframe,
stored in the variable ``factor_data``, and display the first few rows
of its output. The dataframe contains a momentum factor value per equity
per day, for each equity in our universe, based on the definition we
provided. Now that we have a momentum value for each equity in our
universe, and each day between 2016 and 2019, we can test to see if our
factor is predictive.
Step 3 - Test the factor.
~~~~~~~~~~~~~~~~~~~~~~~~~
The next step is to test the predictiveness of the factor we defined in
step 2. To determine whether our factor is predictive, we load returns
data with a Pipeline and then feed the factor and returns data into
`Alphalens <https://factset.quantopian.com/docs/user-guide/tools/alphalens>`__.
The following code cell loads the 1-day trailing returns for equities in
our universe, shifts them back, and formats the data for use in
Alphalens.
.. code:: ipython3
from quantopian.pipeline.factors import Returns
# Create and run a Pipeline to get day-over-day returns.
returns_pipe = Pipeline(
columns={
'1D': Returns(window_length=2),
},
domain=US_EQUITIES,
)
returns_data = run_pipeline(returns_pipe, '2016-01-01', '2019-02-01')
# Import alphalens and pandas.
import alphalens as al
import pandas as pd
# Shift the returns so that we can compare our factor data to forward returns.
shifted_returns = al.utils.backshift_returns_series(returns_data['1D'], 2)
# Merge the factor and returns data.
al_returns = pd.DataFrame(
data=shifted_returns,
index=factor_data.index,
columns=['1D'],
)
al_returns.index.levels[0].name = "date"
al_returns.index.levels[1].name = "asset"
# Format the factor and returns data so that we can run it through Alphalens.
al_data = al.utils.get_clean_factor(
factor_data['momentum_factor'],
al_returns,
quantiles=5,
bins=None,
)
.. parsed-literal::
.. raw:: html
<b>Pipeline Execution Time:</b> 1.78 Seconds
.. parsed-literal::
Dropped 0.3% entries from factor data: 0.3% in forward returns computation and 0.0% in binning phase (set max_loss=0 to see potentially suppressed Exceptions).
max_loss is 35.0%, not exceeded: OK!
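The "backshift" in the cell above implements a simple idea: pair day ``t``'s factor value with the return realized *after* day ``t``. A minimal stdlib sketch of that alignment, using hypothetical return numbers and a one-period shift for clarity (the cell above shifts by two periods):

```python
# Daily returns indexed by day: returns[t] is the day-t return.
returns = [0.010, -0.020, 0.030, 0.015]

# Shift back by one day so forward_returns[t] is the return that follows day t.
forward_returns = returns[1:] + [None]

for day, fwd in enumerate(forward_returns):
    print(day, fwd)
```

The last day has no forward return, which is why Alphalens drops a small fraction of rows at the end of the sample.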
Then, we can create a factor tearsheet to analyze our momentum factor.
.. code:: ipython3
from alphalens.tears import create_full_tear_sheet
create_full_tear_sheet(al_data)
.. parsed-literal::
Quantiles Statistics
.. raw:: html
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>min</th>
<th>max</th>
<th>mean</th>
<th>std</th>
<th>count</th>
<th>count %</th>
</tr>
<tr>
<th>factor_quantile</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>1</th>
<td>-6.142222</td>
<td>-0.168676</td>
<td>-0.725764</td>
<td>0.447399</td>
<td>150971</td>
<td>20.048870</td>
</tr>
<tr>
<th>2</th>
<td>-0.447661</td>
<td>0.162500</td>
<td>-0.138120</td>
<td>0.118217</td>
<td>150297</td>
<td>19.959363</td>
</tr>
<tr>
<th>3</th>
<td>-0.186003</td>
<td>0.421041</td>
<td>0.144362</td>
<td>0.109462</td>
<td>150587</td>
<td>19.997875</td>
</tr>
<tr>
<th>4</th>
<td>0.036037</td>
<td>0.749339</td>
<td>0.418450</td>
<td>0.117453</td>
<td>150296</td>
<td>19.959231</td>
</tr>
<tr>
<th>5</th>
<td>0.334028</td>
<td>8.979527</td>
<td>0.965140</td>
<td>0.466055</td>
<td>150864</td>
<td>20.034661</td>
</tr>
</tbody>
</table>
</div>
.. parsed-literal::
Returns Analysis
.. raw:: html
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1D</th>
</tr>
</thead>
<tbody>
<tr>
<th>Ann. alpha</th>
<td>0.012</td>
</tr>
<tr>
<th>beta</th>
<td>-0.110</td>
</tr>
<tr>
<th>Mean Period Wise Return Top Quantile (bps)</th>
<td>0.350</td>
</tr>
<tr>
<th>Mean Period Wise Return Bottom Quantile (bps)</th>
<td>-0.533</td>
</tr>
<tr>
<th>Mean Period Wise Spread (bps)</th>
<td>0.882</td>
</tr>
</tbody>
</table>
</div>
.. parsed-literal::
/venvs/py35/lib/python3.5/site-packages/alphalens/tears.py:275: UserWarning: 'freq' not set in factor_data index: assuming business day
UserWarning,
.. parsed-literal::
<matplotlib.figure.Figure at 0x7f64f2a88898>
.. image:: notebook_files/notebook_9_6.png
.. parsed-literal::
Information Analysis
.. raw:: html
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1D</th>
</tr>
</thead>
<tbody>
<tr>
<th>IC Mean</th>
<td>0.007</td>
</tr>
<tr>
<th>IC Std.</th>
<td>0.173</td>
</tr>
<tr>
<th>Risk-Adjusted IC</th>
<td>0.039</td>
</tr>
<tr>
<th>t-stat(IC)</th>
<td>1.066</td>
</tr>
<tr>
<th>p-value(IC)</th>
<td>0.287</td>
</tr>
<tr>
<th>IC Skew</th>
<td>-0.311</td>
</tr>
<tr>
<th>IC Kurtosis</th>
<td>0.256</td>
</tr>
</tbody>
</table>
</div>
.. parsed-literal::
/venvs/py35/lib/python3.5/site-packages/statsmodels/nonparametric/kdetools.py:20: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
y = X[:m/2+1] + np.r_[0,X[m/2+1:],0]*1j
.. image:: notebook_files/notebook_9_10.png
.. parsed-literal::
/venvs/py35/lib/python3.5/site-packages/alphalens/utils.py:912: UserWarning: Skipping return periods that aren't exact multiples of days.
+ " of days."
.. parsed-literal::
Turnover Analysis
.. raw:: html
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1D</th>
</tr>
</thead>
<tbody>
<tr>
<th>Quantile 1 Mean Turnover</th>
<td>0.022</td>
</tr>
<tr>
<th>Quantile 2 Mean Turnover</th>
<td>0.050</td>
</tr>
<tr>
<th>Quantile 3 Mean Turnover</th>
<td>0.058</td>
</tr>
<tr>
<th>Quantile 4 Mean Turnover</th>
<td>0.051</td>
</tr>
<tr>
<th>Quantile 5 Mean Turnover</th>
<td>0.023</td>
</tr>
</tbody>
</table>
</div>
.. raw:: html
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>1D</th>
</tr>
</thead>
<tbody>
<tr>
<th>Mean Factor Rank Autocorrelation</th>
<td>0.999</td>
</tr>
</tbody>
</table>
</div>
.. image:: notebook_files/notebook_9_15.png
The Alphalens tearsheet offers insight into the predictive ability of a
factor.
To learn more about Alphalens, check out the
`documentation <https://factset.quantopian.com/docs/user-guide/tools/alphalens>`__.
Step 4 - Download Results Locally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When we have a factor that we like, the next step is often to download
the factor data so we can integrate it with another system. On
Quantopian, downloading pipeline results to a local environment is done
using
`Aqueduct <https://factset.quantopian.com/docs/user-guide/tools/aqueduct>`__.
Aqueduct is an HTTP API that enables remote execution of pipelines, and
makes it possible to download results to a local environment.
Quantopian accounts do not have access to Aqueduct by default. It is an
additional feature to which you will need to request access. If you
would like to learn more about adding Aqueduct to your Quantopian
account, please contact us at feedback@quantopian.com.
Recap & Next Steps
~~~~~~~~~~~~~~~~~~
In this tutorial, we introduced Quantopian and walked through an example
factor research workflow using Pipeline, Alphalens, and Aqueduct.
Quantopian has a rich set of
`documentation <https://factset.quantopian.com/docs/index>`__ and
`tutorials <https://factset.quantopian.com/tutorials>`__ on these tools
and others. We recommend starting with the tutorials or the `User
Guide <https://factset.quantopian.com/docs/user-guide/overview>`__
section of the documentation if you would like to grow your
understanding of Quantopian.
If you would like to learn more about `Quantopian’s enterprise
offering <https://factset.quantopian.com/home>`__, please contact us at
enterprise@quantopian.com.
.. index::
single: Pass-By-Reference Method Parameter Behaviour
Preserving Pass-By-Reference Method Parameter Behaviour
=======================================================
A PHP class method may accept parameters by reference. In this case, changes
made to the parameter (a reference to the original variable passed to the
method) are reflected in the original variable. An example:
.. code-block:: php
class Foo
{
public function bar(&$a)
{
$a++;
}
}
$baz = 1;
$foo = new Foo;
$foo->bar($baz);
echo $baz; // will echo the integer 2
In the example above, the variable $baz is passed by reference to
``Foo::bar()`` (notice the ``&`` symbol in front of the parameter?). Any
change ``bar()`` makes to the parameter reference is reflected in the original
variable, ``$baz``.
Mockery handles references correctly for all methods where it can analyse
the parameter (using ``Reflection``) to see if it is passed by reference. To
mock how a reference is manipulated by the class method, we can use a closure
argument matcher to manipulate it, i.e. ``\Mockery::on()`` - see the
:ref:`argument-validation-complex-argument-validation` chapter.
There is an exception for internal PHP classes where Mockery cannot analyse
method parameters using ``Reflection`` (a limitation in PHP). To work around
this, we can explicitly declare method parameters for an internal class using
``\Mockery\Configuration::setInternalClassMethodParamMap()``.
Here's an example using ``MongoCollection::insert()``. ``MongoCollection`` is
an internal class offered by the mongo extension from PECL. Its ``insert()``
method accepts an array of data as the first parameter, and an optional
options array as the second parameter. The original data array is updated
in place (``insert()`` takes its first parameter by reference) to include a
new ``_id`` field. We can mock this behaviour using a configured parameter map (to
tell Mockery to expect a pass by reference parameter) and a ``Closure``
attached to the expected method parameter to be updated.
Here's a PHPUnit unit test verifying that this pass-by-reference behaviour is
preserved:
.. code-block:: php
public function testCanOverrideExpectedParametersOfInternalPHPClassesToPreserveRefs()
{
\Mockery::getConfiguration()->setInternalClassMethodParamMap(
'MongoCollection',
'insert',
array('&$data', '$options = array()')
);
$m = \Mockery::mock('MongoCollection');
$m->shouldReceive('insert')->with(
\Mockery::on(function(&$data) {
if (!is_array($data)) return false;
$data['_id'] = 123;
return true;
}),
\Mockery::any()
);
$data = array('a'=>1,'b'=>2);
$m->insert($data);
$this->assertTrue(isset($data['_id']));
$this->assertEquals(123, $data['_id']);
\Mockery::resetContainer();
}
Protected Methods
-----------------
When dealing with protected methods, and trying to preserve pass by reference
behavior for them, a different approach is required.
.. code-block:: php
class Model
{
public function test(&$data)
{
return $this->doTest($data);
}
protected function doTest(&$data)
{
$data['something'] = 'wrong';
return $this;
}
}
class Test extends \PHPUnit\Framework\TestCase
{
public function testModel()
{
$mock = \Mockery::mock('Model[test]')->shouldAllowMockingProtectedMethods();
$mock->shouldReceive('test')
->with(\Mockery::on(function(&$data) {
$data['something'] = 'wrong';
return true;
}));
$data = array('foo' => 'bar');
$mock->test($data);
$this->assertTrue(isset($data['something']));
$this->assertEquals('wrong', $data['something']);
}
}
This is quite an edge case, so we need to change the original code a little bit,
by creating a public method that will call our protected method, and then mock
that, instead of the protected method. This new public method will act as a
proxy to our protected method.
###################
Apa itu SIM - IJASA
###################
SIM - IJASA adalah Sistem informasi Bantuan Logistik Bencana.
SIM - IJASA adalah pengembangan dari IJasa yang berfungsi sebagai wadah atau situs untuk menerima donasi bantuan logistik yang akan disalurkan ke lokasi bencana tertentu.
*******************
Core Information
*******************
1. Codeigniter / PHP
2. JQuery
3. Ajax
4. Javascript
5. HTML
6. CSS
.. ===============LICENSE_START=======================================================
.. Acumos CC-BY-4.0
.. ===================================================================================
.. Copyright (C) 2017-2018 AT&T Intellectual Property. All rights reserved.
.. ===================================================================================
.. This Acumos documentation file is distributed by AT&T
.. under the Creative Commons Attribution 4.0 International License (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://creativecommons.org/licenses/by/4.0
..
.. This file is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
.. ===============LICENSE_END=========================================================
==============================================
Acumos H2O Model Runner Python Developer Guide
==============================================
This predictor runs predictions for H2O POJO (plain, non-compiled Java source) as well as MOJO (compiled jar) models. This service has a dependency on model-management to download the models. AsyncPredictions and status methods are yet to be implemented in this version. All the model runners follow a similar design pattern in that they expose the same 3 endpoints: asyncpredictions, syncpredictions, and status.
Running this predictor on Windows requires changing the classpath argument as follows; however, the service is assumed to be running on a *nix machine.
h2opredictordevelopment/predictor/h2o/wrapper.py
From
classpath_arg = '.;./' + jar_file
To
classpath_arg = '' + jar_file
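The difference is just the JVM classpath convention on each operating system. A hypothetical helper (not part of the actual wrapper code) that picks the right form automatically:

```python
import os

def build_classpath(jar_file):
    """Return a JVM classpath argument for the H2O genmodel jar.

    Windows uses ';' as the classpath separator and lists the current
    directory explicitly; on *nix the jar path alone is enough.
    """
    if os.name == 'nt':
        return '.;./' + jar_file
    return jar_file

print(build_classpath('h2o-genmodel.jar'))
```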
The main class to start this service is /h2o-model-runner/microservice_flask.py
The command line interface gives options for running the application. Pass ``help`` for a list of available options.

.. code:: bash

    $ python microservice_flask.py help
    usage: microservice_flask.py [-h] [--host HOST] [--settings SETTINGS] [--port PORT]
By default, with no additional arguments, the Swagger interface should be available at: http://localhost:8061/v2/
Sample model creation
=====================
This R script can generate both POJO and MOJO models. The sample below uses the iris dataset, which may be found online or taken from the copy built into R.
.. code:: bash
$ library(h2o)
$ h2o.init()
$
$ iris.hex <- h2o.importFile("iris.csv")
$ iris.gbm <- h2o.gbm(y="species", training_frame=iris.hex, model_id="irisgbm")
$ h2o.download_pojo(model = iris.gbm, path="/home/project/h2o", get_jar = TRUE)
$ h2o.download_mojo(model=iris.gbm, path="/home/project/h2o", get_genmodel_jar=TRUE)
Testing
=======
The only prerequisites for running the tests are Python and tox. It is recommended to use a virtual environment for running any Python application. If using a virtual environment, make sure to run ``pip install tox`` to install tox.
We use a combination of 'tox', 'pytest', and 'flake8' to test
'h2o-model-runner'. Code which is not PEP8 compliant (aside from E501) will be
considered a failing test. You can use tools like 'autopep8' to
"clean" your code as follows:
.. code:: bash
$ pip install autopep8
$ cd h2o-model-runner
$ autopep8 -r --in-place --ignore E501 acumo_h2o-model-runner/ test/
Run tox directly:
.. code:: bash
$ cd h2o-model-runner
$ tox
You can also specify certain tox environments to test:
.. code:: bash
$ tox -e py34 # only test against Python 3.4
$ tox -e flake8 # only lint code
And finally, you can run pytest directly in your environment *(recommended starting place)*:
.. code:: bash
$ pytest
$ pytest -s # verbose output
Relative Entropy of Gaussian Distributions
------------------------------------------
.. admonition:: To Do
write out the computation of the relative entropy between two gaussian
distributions
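As a head start on that derivation, here is the standard closed form it should arrive at (stated without proof): for two univariate Gaussians with means and variances :math:`(\mu_1, \sigma_1^2)` and :math:`(\mu_2, \sigma_2^2)`, the relative entropy (KL divergence) is

```latex
D_{\mathrm{KL}}\left(\mathcal{N}(\mu_1,\sigma_1^2)\,\middle\|\,\mathcal{N}(\mu_2,\sigma_2^2)\right)
  = \log\frac{\sigma_2}{\sigma_1}
  + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2}
  - \frac{1}{2}
```

It is non-negative, equals zero only when the two distributions coincide, and is asymmetric in its arguments.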
.. image:: /_static/entropy_of_biased_coin.png