Changelog
=========
Version 0.4.1 (2018-04-09)
--------------------------
- Updated BigchainDB to 1.3.0
- Updated db-driver to 0.4.1
- Updated networkx to 2.1
- Updated prov to 1.5.2
Version 0.4.0 (2017-06-29)
--------------------------
- Updated bigchaindb components to 1.0.0rc1
Version 0.3.1 (2017-04-07)
--------------------------
- Added travis-ci support
- Updated documentation
Version 0.3.0 (2017-04-01)
--------------------------
- Support for the role-based concept
Version 0.2.0 (2017-03-05)
--------------------------
- Support for the graph-based concept
- Added unit tests
Version 0.1.0 (2017-02-21)
--------------------------
- Support for the document-based concept
- Added basic unit tests
- Integration of Gitlab-CI

Objective Functions
=================================
Objective functions are the measures that training attempts to optimize. They are defined in a user's model definition prototext file. The available objective functions are listed below.
.. toctree::
:maxdepth: 2
loss_functions
weight_regularization
.. autodoxygenindex::
:project: obj_functions
template_helpers.context_processors
===================================
.. automodule:: template_helpers.context_processors
:members:
:undoc-members:
:show-inheritance:
Plugin development
******************
Here's a simple plugin example that gets the list of loaded modules from the
process we want to analyze:
.. literalinclude:: examples/pptopcontrib.listmodules/__init__.py
:language: python
Pretty easy, isn't it? And here's the result:
.. image:: examples/pptopcontrib.listmodules/list_modules.png
:width: 768px
File location
=============
ppTOP searches for plugins both in the default Python library directories and
in *~/.pptop/lib*. Custom plugins should be named *pptopcontrib.pluginname*,
so let's put all the code of our example into
*~/.pptop/lib/pptopcontrib.listmodules/__init__.py*
Plugin debugging
================
Assume that the Python program we want to test our plugin on is already
started and that its PID file is stored in */var/run/myprog.pid*
Launch ppTOP as:
.. code:: shell
pptop -o listmodules -d listmodules --log /tmp/debug.log /var/run/myprog.pid
Both ppTOP and the injected plugins will write all messages to
*/tmp/debug.log*, and your plugin is automatically selected to be displayed
first.

The parameter *"-o listmodules"* tells ppTOP to load the plugin even if it
isn't present in the configuration file.
Also, you can import two logging methods from *pptop.logger*:
.. code:: python
from pptop.logger import log, log_traceback
try:
# ... do something and log result
result = somefunc()
log(result)
except:
log_traceback()
Local part
==========
Each plugin must define at least the class **Plugin**, which our example
already has. The local plugin part is executed inside the ppTOP program.
Class definition
----------------
.. code:: python
__version__ = '0.0.1'
from pptop.plugin import GenericPlugin, palette
class Plugin(GenericPlugin):
'''
list_modules plugin: list modules
'''
default_interval = 1
#...
The variable *__version__* should always be present in a custom plugin module.
If the module wants to use colors, it's better to use the prepared colors from
*pptop.palette*.

The class *Plugin* should have help documentation inside; it is displayed when
the user presses the *F1* key.
on_load and on_unload
---------------------
The methods *self.on_load* and *self.on_unload* are called when the plugin is
loaded or unloaded. The first method should usually be defined: it initializes
the plugin, sets its title, description, etc.
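As a rough sketch (the attribute values below are illustrative; a real plugin
would subclass *GenericPlugin* and inherit defaults for these fields),
*on_load* typically just fills in the plugin metadata:

.. code:: python

    class Plugin:  # in a real plugin: class Plugin(GenericPlugin)

        def on_load(self, **kwargs):
            # called once when the plugin is loaded: set title, etc.
            self.title = 'List modules'
            self.short_name = 'Lmods'
            self.description = 'list modules loaded by the analyzed process'

        def on_unload(self, **kwargs):
            # called when the plugin is unloaded: release any resources
            pass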
Class variables
---------------
.. code:: python
self.data = [] # contains loaded data
self.data_lock = threading.Lock() # should be locked when accessing data
self.dtd = [] # data to be displayed (after sorting and filtering)
self.msg = '' # title message (reserved)
self.name = mod.__name__.rsplit('.', 1)[-1] # plugin name(id)
self.title = self.name.capitalize().replace('_', ' ') # title
self.short_name = self.name[:6].capitalize() # short name (bottom bar)
self.description = '' # plugin description
self.window = None # working window
self.status_line = None # status line, if requested (curses object)
self.shift = 0 # current vertical shifting
self.hshift = 0 # current horizontal shifting
self.cursor = 0 # current selected element in dtd
self.config = {} # plugin configuration
self.filter = '' # current filter
self.sorting_col = None # current sorting column
self.sorting_rev = True # current sorting direction
self.sorting_enabled = True # is sorting enabled
self.cursor_enabled = True # is cursor enabled
self.selectable = False # show item selector arrow
self.background = False # shouldn't be stopped when switched
self.background_loader = False # for heavy plugins - load data in bg
self.need_status_line = False # reserve status line
self.append_data = False # default load_data method will append data
self.data_records_max = None # max data records
self.inputs = {} # key - hot key, value - input value
self.key_code = None # last key pressed, for custom key event handling
self.key_event = None # last key event
self.injected = False # is plugin injected
Making executor async
---------------------
By default, the plugin method *self.run* is called in a separate thread. To
keep your plugin async, define:
.. code:: python
async def run(self, *args, **kwargs):
super().run(*args, **kwargs)
Data flow
---------
* when the plugin is started, it continuously runs the *self.run* method until
  stopped. This method is also triggered when the user presses a key, but by
  default it doesn't reload data unless the SPACE key is pressed.
* to load data, the plugin calls the *self.load_data* method, which asks the
  ppTOP core to obtain data from the injected part and then stores it in the
  *self.data* variable. By default, data should always be a *list* object. If
  your plugin doesn't have an injected part, you should always override this
  method and fill *self.data* manually. When filling, always use *with
  self.data_lock:*.
* before the data is stored, the method *self.process_data(data)* is called,
  which should either process the data object in-place or return a new list
  object to store. At this step, if the default table-based rendering is used,
  the data should be converted to a list of dictionaries (preferably ordered
  dictionaries, to keep column ordering in the table).
.. warning::
   If your data contains mixed types (e.g. in our example, version can be a
   string, an integer, or a tuple), the field should always be converted to a
   string, otherwise sorting errors may occur.
* after, the method *self.handle_sorting_event()* is called, which processes
  key events and changes the sorting columns/direction if required.
* the loaded data is kept untouched and the plugin starts working with the
  *self.dtd* (data-to-be-displayed) object. This object is set when 3
  generator methods are called on *self.data*:
* **self.sort_dtd(dtd)** sorts data
* **self.format_dtd(dtd)** formats data (e.g. convert numbers to strings,
limiting digits after comma)
* **self.filter_dtd(dtd)** applies filter on the formatted data
If you want to override any of these methods (most probably
*self.format_dtd(dtd)*), don't forget that it should return a list generator,
not a list object itself.
* the *self.handle_key_event(event=self.key_event, key=self.key_code, dtd)*
  method is called to process custom keyboard events.
* *self.handle_pager_event(dtd)* method is called to process paging/scrolling
events.
* *self.render(dtd)* method is called to display data.
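To make the flow concrete, here is a self-contained sketch (independent of
ppTOP, so it can be run as-is; a real plugin would subclass *GenericPlugin*)
of *process_data* and *format_dtd* for the module-list example:

.. code:: python

    from collections import OrderedDict

    class Plugin:  # in a real plugin: class Plugin(GenericPlugin)

        def process_data(self, data):
            # convert raw (name, version) tuples into ordered dicts so the
            # default table renderer keeps column order; the mixed-type
            # version field is forced to str to avoid sorting errors
            return [OrderedDict(name=name, version=str(version))
                    for name, version in data]

        def format_dtd(self, dtd):
            # must return a generator, not a list object
            for row in dtd:
                formatted = row.copy()
                formatted['version'] = formatted['version'] or '-'
                yield formatted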
Displaying data
---------------
The method *self.render(dtd)* calls *self.render_table* to display a table. If
you need to display anything more complex, e.g. a tree, you should completely
override it.
Otherwise, it is probably enough to override the methods
*render_status_line()*, *get_table_row_color(self, element, raw)* (colorize
the specified row according to element values) and/or
*format_table_row(self, element=None, raw=None)* (add additional formatting to
the raw table row).
You may also define function *self.get_table_col_color(self, element, key,
value)*. In this case, row colors are ignored and each column is colorized
independently.
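A per-column colorization hook could look like this (a sketch: the strings
returned here are placeholders for real colors taken from *pptop.palette*):

.. code:: python

    class Plugin:  # in a real plugin: class Plugin(GenericPlugin)

        def get_table_col_color(self, element, key, value):
            # when this method is defined, row colors are ignored and each
            # column is colorized independently; the element dict gives
            # access to the other fields of the same row
            if key == 'version' and value in ('', '-'):
                return 'GREY'
            if element.get('name') == 'builtins':
                return 'YELLOW'
            return 'DEFAULT'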
Input values
------------
Allowing user to input values is very easy:
Just define variable, e.g. "a" in *self.inputs*:
.. code:: python
def on_load(self):
# ....
self.inputs['a'] = None
And when the user presses the "a" key, ppTOP automatically asks them to enter
a value for the "a" variable.
You may customize the initial variable value by overriding the method
*self.get_input(var)* (by default it returns the value from *self.inputs*),
customize the input prompt by overriding *self.get_input_prompt(var)*, and
then handle the entered value with *self.handle_input(var, value, prev_value)*
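Putting the three hooks together, value handling might be sketched like this
(self-contained; the integer-validation logic is purely illustrative):

.. code:: python

    class Plugin:  # in a real plugin: class Plugin(GenericPlugin)

        def on_load(self, **kwargs):
            self.inputs = {'a': None}  # pressing "a" asks for a value

        def get_input_prompt(self, var):
            # customize the prompt displayed to the user
            return 'enter value for "{}": '.format(var)

        def handle_input(self, var, value, prev_value):
            # validate the entered value; keep the previous one on bad input
            try:
                self.inputs[var] = int(value)
            except (TypeError, ValueError):
                self.inputs[var] = prev_value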
All class methods
-----------------
.. automodule:: pptop.plugin
.. autoclass:: GenericPlugin
:members:
Worker methods
--------------
All plugins are based on *neotasker.BackgroundIntervalWorker*; see the `neotasker
<https://neotasker.readthedocs.io/en/latest/workers.html#backgroundintervalworker>`_
library documentation for more details.
Injected part
=============
Primary function
----------------
A module function called *injection* is automatically injected into the
analyzed process and executed when the plugin loads new data, or when you
manually call the *self.injection_command* function.
.. code:: python
def injection(**kwargs):
import sys
result = []
for i, mod in sys.modules.items():
if i != 'builtins':
try:
version = mod.__version__
except:
version = ''
try:
author = mod.__author__
except:
author = ''
try:
license = mod.__license__
except:
license = ''
result.append((i, version, author, license))
return result
Function arguments:
* when the function is called to collect data, the kwargs are empty
* when you call the function with *self.injection_command(param1=1, param2=2)*,
  it will get the arguments you've specified.
There are several important rules about this part:
* *injection* is launched with empty globals, so it should import all
required modules manually.
* *injection* should not return any complex objects that are heavy to transfer
  or cannot be unpickled by ppTOP. Practical example: sending a *LogRecord*
  object is fine, but for a more complex object it's better to serialize it to
  a dict (use the built-in object *__dict__* attribute or do it yourself).
* *injection* should not perform too many module calls, as this could affect
  function profiling statistics. The best way is to implement most of the
  functions locally, rather than import them.
* on the other hand, *injection* should not perform any heavy calculations or
  data transformation, as the ppTOP communication protocol is synchronous and
  only one remote command is allowed at a time.
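The serialization rule can be illustrated with a small self-contained sketch:
instead of returning a complex object, flatten it into a plain dict of
picklable values first (the *LogRecord* built here is just a demo payload):

.. code:: python

    def injection(**kwargs):
        # injection runs with empty globals, so all imports are local
        import logging
        rec = logging.LogRecord(
            'demo', logging.INFO, 'demo.py', 1, 'hello', None, None)
        # serialize the complex object into a plain, picklable dict
        # instead of returning the LogRecord instance itself
        return {key: str(value) for key, value in rec.__dict__.items()}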
On load and on unload
---------------------
If your plugin needs to prepare the remote process before calling the
*injection* function, you may define in the plugin module:
.. code:: python
def injection_load(**kwargs):
# ...
which is automatically executed once, when the plugin is injected or
re-injected. The function kwargs are provided by the local code part via the
method *Plugin.get_injection_load_params* (which should return a dict) and are
empty by default.
If this function starts any long-running tasks (e.g. launches a profiling
module), they should be stopped when the plugin is unloaded. For this you
should define the function:
.. code:: python
def injection_unload(**kwargs):
# ...
The function kwargs are empty and reserved for future use.
Talking to each other
---------------------
Functions are launched with the same globals, so you can either define global
variables (which is not recommended) or exchange data via the *g* namespace:
.. code:: python
def injection_load(**kwargs):
g.loaded = True
def injection(**kwargs):
try:
loaded = g.loaded
except:
loaded = False
if not loaded:
raise RuntimeError('ppTOP forgot to load me! Help!')
Welcome to Not Quite Paradise 2's documentation!
===============================================================
Last updated on |today| for version |version|.
**API**
.. toctree::
:maxdepth: 1
api/core/_core
api/scenes/_scenes
api/ui_elements/ui_elements
**Developer Notes**
.. toctree::
:maxdepth: 1
developer_notes/_developer_notes
jwst_EERS
=========
Um, don't use this for science.
FITS file generated from https://www.flickr.com/photos/nasawebbtelescope/51942047253/sizes/o/ and roughly aligned using a few GAIA stars.
.. image:: jwst_ds9.png
| 21.8 | 137 | 0.738532 |
d6994b38f587d662c0f8db92056aecfb66b0625d | 4,467 | rst | reStructuredText | README.rst | phcerdan/ITKModuleTemplate | 842564fcfa12506d37202c7b12090d22846f1a78 | [
"Apache-2.0"
] | null | null | null | README.rst | phcerdan/ITKModuleTemplate | 842564fcfa12506d37202c7b12090d22846f1a78 | [
"Apache-2.0"
] | null | null | null | README.rst | phcerdan/ITKModuleTemplate | 842564fcfa12506d37202c7b12090d22846f1a78 | [
"Apache-2.0"
] | null | null | null | ITKModuleTemplate
=================
.. |CircleCI| image:: https://circleci.com/gh/InsightSoftwareConsortium/ITKModuleTemplate.svg?style=shield
:target: https://circleci.com/gh/InsightSoftwareConsortium/ITKModuleTemplate
.. |TravisCI| image:: https://travis-ci.org/InsightSoftwareConsortium/ITKModuleTemplate.svg?branch=master
:target: https://travis-ci.org/InsightSoftwareConsortium/ITKModuleTemplate
.. |AppVeyor| image:: https://img.shields.io/appveyor/ci/itkrobot/itkmoduletemplate.svg
:target: https://ci.appveyor.com/project/itkrobot/itkmoduletemplate
=========== =========== ===========
Linux macOS Windows
=========== =========== ===========
|CircleCI| |TravisCI| |AppVeyor|
=========== =========== ===========
This is a module for the `Insight Toolkit (ITK) <http://itk.org>`_ for
segmentation and registration. It is designed to work with the ITK modular
system.
This module is a template to be used as a starting point for a new ITK module.
Getting Started
---------------
The following will get an external module started in a new repository::
python -m pip install cookiecutter
python -m cookiecutter gh:InsightSoftwareConsortium/ITKModuleTemplate
# Fill in the information requested at the prompts
Reasonable defaults will be provided for all of the parameters. The parameters are:
*full_name*
Your full name.
*email*
Your email.
*github_username*
Your GitHub username.
*project_name*
This is a name for the project, which is *ITK* followed by the
module name, by convention. Examples include *ITKIsotropicWavelets* or
*ITKBoneMorphometry*.
*module_name*
This is the name of the module. Since this is an external module, it does
not start with the *ITK* prefix. It is in CamelCase, by convention. Examples
include *IsotropicWavelets* and *BoneMorphometry*.
*filter_name*
The skeleton of an ``itk::ImageToImageFilter`` will be created by default.
Optionally specify this value, if you will be adding an
``itk::ImageToImageFilter`` to your module.
*python_package_name*
This is the name of the Python package that will be created from the module.
By convention, this is *itk-<project_name in lower case>*. For example,
*itk-isotropicwavelets* or *itk-bonemorphometry*.
*download_url*
This is the download url added to the Python package metadata. This can be
the GitHub repository URL.
*project_short_description*
A short description to use in the project README, module Doxygen
documentation, and Python package documentation.
*project_long_description*
A long description to use in the project README, module Doxygen
documentation, and Python package documentation.
The output of the cookiecutter is a buildable ITK external module with example
classes. Remove or replace the classes with your new classes. Push your new
module to GitHub, and enable builds on `CircleCI <https://circleci.com/>`_,
`TravisCI <https://travis-ci.org/>`_, and `AppVeyor
<https://www.appveyor.com/>`_.
Documentation on `how to populate the module
<https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch9.html#x50-1430009>`_
can be found in the `ITK Software Guide <https://itk.org/ITKSoftwareGuide/html/>`_.
Remote Module
-------------
After an `Insight Journal <http://www.insight-journal.org/>`_ article has been
submitted, the module can be included in ITK as a `remote module
<http://www.itk.org/Wiki/ITK/Policy_and_Procedures_for_Adding_Remote_Modules>`_.
Add a file in "ITK/Modules/Remote" called "YourModule.remote.cmake"; for this
module it would be "ExternalExample.remote.cmake", with the following
contents::
itk_fetch_module(MyModule
"A description of the a module."
GIT_REPOSITORY http://github.com/myuser/ITKMyModule.git
GIT_TAG abcdef012345
)
Python Packages
---------------
Continuous integration service configurations are included to build Python
packages for Linux, macOS, and Windows. These packages can be `downloaded
<https://itkpythonpackage.readthedocs.io/en/latest/Build_ITK_Module_Python_packages.html#github-automated-ci-package-builds>`_
and `uploaded to the Python Package Index (PyPI)
<https://itkpythonpackage.readthedocs.io/en/latest/Build_ITK_Module_Python_packages.html#upload-the-packages-to-pypi>`_.
License
-------
This software is distributed under the Apache 2.0 license. Please see
the *LICENSE* file for details.
Authors
-------
* Bradley Lowekamp
* Matt McCormick
* Jean-Baptiste VIMORT
| 34.361538 | 126 | 0.747705 |
dcea38ef893d73401be236312d2202190265a6bc | 1,511 | rst | reStructuredText | docs/tutorials/contributing/strategy/docstrings.rst | danilobellini/Axelrod | 2c9212553e06095c24adcb82a5979279cbdf45fb | [
"MIT"
] | 596 | 2015-03-30T17:34:14.000Z | 2022-03-21T19:32:38.000Z | docs/tutorials/contributing/strategy/docstrings.rst | danilobellini/Axelrod | 2c9212553e06095c24adcb82a5979279cbdf45fb | [
"MIT"
] | 1,018 | 2015-03-30T14:57:33.000Z | 2022-03-14T14:57:48.000Z | docs/tutorials/contributing/strategy/docstrings.rst | danilobellini/Axelrod | 2c9212553e06095c24adcb82a5979279cbdf45fb | [
"MIT"
] | 263 | 2015-03-31T10:26:28.000Z | 2022-03-29T09:26:02.000Z | Writing docstrings
==================
The project takes pride in its documentation of the strategies
and the corresponding bibliography. A docstring is a string
that describes a method, module, or class. Docstrings help
the user understand how a strategy works and where it
comes from. A docstring must be written in the following way::
"""This is a docstring.
It can be written over multiple lines.
"""
Sections
--------
The Sections of the docstring are:
1. **Working of the strategy**
A brief summary of how the strategy works, e.g.::
class TitForTat(Player):
"""
A player starts by cooperating and then mimics the
previous action of the opponent.
"""
2. **Bibliography/Source of the strategy**
A section that mentions the source of the strategy
or the paper from which the strategy was taken.
The section must start with the Names section.
For example::
class TitForTat(Player):
"""
A player starts by cooperating and then mimics the
previous action of the opponent.
Names:
- Rapoport's strategy: [Axelrod1980]_
- TitForTat: [Axelrod1980]_
"""
Here, the information written under the Names section
gives the source of the TitForTat strategy.
:code:`[Axelrod1980]_` corresponds to the bibliographic item in
:code:`docs/reference/bibliography.rst`. If you are using a source
that is not in the bibliography, please add it.
.. ******************************************************************************
.. * Copyright 2019-2020 Intel Corporation
.. *
.. * Licensed under the Apache License, Version 2.0 (the "License");
.. * you may not use this file except in compliance with the License.
.. * You may obtain a copy of the License at
.. *
.. * http://www.apache.org/licenses/LICENSE-2.0
.. *
.. * Unless required by applicable law or agreed to in writing, software
.. * distributed under the License is distributed on an "AS IS" BASIS,
.. * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. * See the License for the specific language governing permissions and
.. * limitations under the License.
.. *******************************************************************************/
.. |dpcpp_comp| replace:: Intel\ |reg|\ oneAPI DPC++/C++ Compiler
.. _dpcpp_comp: https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-compiler.html
.. _before_you_begin:
Before You Begin
~~~~~~~~~~~~~~~~
|short_name| is located in :file:`<install_dir>/daal` directory where :file:`<install_dir>`
is the directory in which Intel\ |reg|\ oneAPI Toolkit was installed.
The current version of |short_name| with
DPC++ is available for Linux\* and Windows\* 64-bit operating systems. The
prebuilt |short_name| libraries can be found in the :file:`<install_dir>/daal/<version>/redist`
directory.
The dependencies needed to build examples with DPC++ extensions API are:
- C/C++ Compiler with C++11 support (or C++14 support on Windows\*)
- |dpcpp_comp|_ 2019 August release or later (for DPC++ support)
- OpenCL™ runtime 1.2 or later (to run the DPC++ runtime)
- GNU\* Make on Linux\*, nmake on Windows\*
Core Module Documentation
==========================
.. automodule:: neuralprophet.utils
    :members:

Lab 2.3 - Deploy Hello-World Using ConfigMap w/ AS3
===================================================
Just like the previous lab we'll deploy the f5-hello-world docker container.
But instead of using the Route resource we'll use ConfigMap.
App Deployment
--------------
On **okd-master1** we will create all the required files:
#. Create a file called ``deployment-hello-world.yaml``
.. tip:: Use the file in ~/agilitydocs/docs/class2/openshift
.. literalinclude:: ../openshift/deployment-hello-world.yaml
:language: yaml
:caption: deployment-hello-world.yaml
:linenos:
:emphasize-lines: 2,7,20
#. Create a file called ``f5-hello-world-service-cluster.yaml``
.. tip:: Use the file in ~/agilitydocs/docs/class2/openshift
.. literalinclude:: ../openshift/clusterip-service-hello-world.yaml
:language: yaml
:caption: clusterip-service-hello-world.yaml
:linenos:
:emphasize-lines: 2,8-10,17
#. Create a file called ``configmap-hello-world.yaml``
.. tip:: Use the file in ~/agilitydocs/docs/class2/openshift
.. literalinclude:: ../openshift/configmap-hello-world.yaml
:language: yaml
:caption: configmap-hello-world.yaml
:linenos:
:emphasize-lines: 2,5,7,8,27,30
#. We can now launch our application:
.. code-block:: bash
oc create -f deployment-hello-world.yaml
oc create -f clusterip-service-hello-world.yaml
oc create -f configmap-hello-world.yaml
.. image:: ../images/f5-container-connector-launch-app.png
#. To check the status of our deployment, you can run the following commands:
.. code-block:: bash
oc get pods -o wide
.. image:: ../images/f5-okd-hello-world-pods.png
.. code-block:: bash
oc describe svc f5-hello-world
.. image:: ../images/f5-okd-check-app-definition.png
.. attention:: To understand and test the new app, pay attention to the
   **Endpoints value**: this shows our 2 instances (defined as replicas in
   our deployment file) and the overlay network IP assigned to each pod.
#. Now that we have deployed our application successfully, we can check the
   configuration on bigip1. Switch back to the open management session in
   Firefox.
.. warning:: Don't forget to select the proper partition. Previously we
   checked the "okd" partition. In this case we need to look at the "AS3"
   partition. This partition was auto-created by AS3 and named after the
   tenant, which happens to be "AS3".
GoTo: :menuselection:`Local Traffic --> Virtual Servers`
Here you can see a new Virtual Server, "serviceMain" was created,
listening on 10.1.1.4:80 in partition "AS3".
.. image:: ../images/f5-container-connector-check-app-bigipconfig-as3.png
#. Check the Pools to see a new pool and the associated pool members.
GoTo: :menuselection:`Local Traffic --> Pools` and select the
"web_pool" pool. Click the Members tab.
.. image:: ../images/f5-container-connector-check-app-web-pool-as3.png
.. note:: You can see that the pool members IP addresses are assigned from
the overlay network (**ClusterIP mode**)
#. Access your web application via firefox on the jumpbox.
.. note:: Select the "Hello, World" shortcut or type http://10.1.1.4 in the
URL field.
.. image:: ../images/f5-container-connector-access-app.png
#. Hit Refresh many times and go back to your **BIG-IP** UI.
Goto: :menuselection:`Local Traffic --> Pools --> Pool list -->
"web_pool" --> Statistics` to see that traffic is distributed as expected.
.. image:: ../images/f5-okd-check-app-bigip-stats-clusterip.png
.. note:: Why is all the traffic directed to one pool member? The answer can
   be found by inspecting the "serviceMain" virtual server in the
   management GUI.
#. Scale the f5-hello-world app
.. code-block:: bash
oc scale --replicas=10 deployment/f5-hello-world-web -n default
#. Check that the pods were created
.. code-block:: bash
oc get pods
.. image:: ../images/f5-hello-world-pods-scale10-2.png
#. Check the pool was updated on bigip1. GoTo: :menuselection:`Local Traffic
--> Pools` and select the "web_pool" pool. Click the Members tab.
.. image:: ../images/f5-hello-world-pool-scale10-clusterip.png
.. attention:: Now we show 10 pool members. In Module 1 the number stayed at
   3 and didn't change. Why?
#. Remove Hello-World from BIG-IP.
.. attention:: In older versions of AS3 a "blank AS3 declaration" was
required to completely remove the application/declaration from BIG-IP. In
AS3 v2.20 and newer this is no longer a requirement.
.. code-block:: bash
oc delete -f configmap-hello-world.yaml
oc delete -f clusterip-service-hello-world.yaml
oc delete -f deployment-hello-world.yaml
.. note:: Be sure to verify the virtual server and "AS3" partition were
removed from BIG-IP.
.. attention:: This concludes **Class 2 - CIS and OpenShift**. Feel free to
experiment with any of the settings. The lab will be destroyed at the end of
the class/day.
===============
Pykarbon Module
===============
For the full rundown, read `the rest of the docs <https://pykarbon.readthedocs.io/en/latest/>`_.
-----------
What is it?
-----------
The Pykarbon module provides a set of tools for interfacing with the hardware devices on
OnLogic's 'Karbon' series industrial PCs. These interfaces include the onboard CAN bus,
Digital IO, and a few other hardware devices.
The goal of this package is to provide a simple, but powerful, base platform that will allow
users to quickly and easily integrate a Karbon into their own application.
*The tools in this package are designed to work with specific hardware;
they will not work for more generalized projects.*
----------------
How do I use it?
----------------
*You will need to install Python 3 prior to following this guide.*
Getting started with pykarbon takes only a few minutes:
.. role:: bash(code)
:language: bash
- Open up a terminal, and run :bash:`pip install pykarbon`
+ On some systems you may need to run as admin, or use the :bash:`--user` flag
- Launch a python shell with :bash:`python`
  + Linux users usually do not have write access to serial ports; grant your user permanent access with :bash:`sudo usermod -a -G dialout $USER` (log out and back in for it to take effect) or use :bash:`sudo python`
.. role:: python(code)
:language: python
- Import pykarbon with :python:`import pykarbon.pykarbon as pk`
- And finally create a control object using :python:`dev = pk.Karbon()`
If all went well, you should now be ready to control a variety of systems, but for now, let's just print out some
configuration information:
- :python:`dev.show_info()`
And close our session:
- :python:`dev.close()`
-------------------
What else can I do?
-------------------
Pykarbon offers a number of tools for automating and using Karbon series hardware interfaces. These include:
- CAN and DIO background data monitoring
- Exporting logged data to .csv
- Registering and making function calls based on these bus events:
+ CAN data IDs
+ Digital Input Events
+ DIO Bus States (Allows partial states)
- Automated can message response to registered IDs
- Automated setting of Digital Output states
- Automatic CAN baudrate detection
- Updating user configuration information:
+ Ignition sense enable/disable
+ Power timing configurations
+ Low battery shutdown voltage
+ Etc.
- Firmware update
Additionally, as Pykarbon's CAN and Terminal sessions must connect to device serial ports, functionality has been added
to allow running these sessions using a context manager:
.. code-block:: python
import pykarbon.pykarbon as pk
import pykarbon.can as pkc
with pk.Karbon() as dev:
dev.show_info()
with pkc.Session() as dev:
dev.write(0x123, 0x11223344)
-------------------------------
A Simple Example: Pykarbon.Core
-------------------------------
.. code-block:: python
import pykarbon.core as pkcore
# Set up interfaces:
can = pkcore.Can()
term = pkcore.Terminal()
# Claim the serial ports for use:
can.claim()
term.claim()
# Configure the can baudrate, and view that config
term.command('set can-baudrate 800')
print("\nRead User Configuration:")
term.print_command('config')
# Write a message, and then listen for and print responses
can.send(0x123, 0x11223344)
print("\nMonitoring CAN Bus, Press CTRL+C to Stop!")
can.sniff() # Will block until you exit with ctrl+c
# Close the ports!
can.release()
term.release()
Tiltfile API Reference
======================
.. automodule:: api
:members:
.. automodule:: modules.os
:members:
.. automodule:: modules.os.path
:members:
.. automodule:: modules.config
:members:
.. automodule:: modules.shlex
:members:
.. automodule:: modules.sys
:members:
.. automodule:: modules.v1alpha1
:members:
Rafi
====
A tiny route dispatcher for `Google Cloud Functions`_.
.. code-block:: python
app = rafi.App("demo_app")
@app.route("/hello/<name>")
def index(name):
return f"hello {name}"
In your `Google Cloud Function`__ set **Function to execute** to `app`.
.. _Google Cloud Functions: https://cloud.google.com/functions/
__ `Google Cloud Functions`_
Arenon-qt: Qt5 GUI for Arenon
===============================
Linux
-------
https://github.com/Arenoncoin/Source/blob/master/doc/build-unix.md
Windows
--------
https://github.com/Arenoncoin/Source/blob/master/doc/build-msw.md
Mac OS X
--------
https://github.com/Arenoncoin/Source/blob/master/doc/build-osx.md
.. Copyright SAS Institute
.. currentmodule:: swat.cas.table
.. _indexing:
***************************
Indexing and Data Selection
***************************
Indexing of :class:`CASTable` objects works much in the same way as they do
in :class:`pandas.DataFrame` objects. You can select one or more columns based on
column names or indexes, and you can select slices of columns. However, data
selection does have some limitations. CAS tables can be distributed across
a grid of computers and they do not have a specified order. Because of this,
indexing based on a row index is not possible at this time. However, it is
possible to apply `where` clauses to the table parameters to filter rows
based on data values.
There are a few properties that allow indexing a :class:`CASTable` object in
various ways. These properties work just like their :class:`pandas.DataFrame`
counterparts (with the limitations described above).
====================== ====================================================
Property / Method Description
====================== ====================================================
o[`columns`] Subset table based on column names
o.loc[:, `columns`] Subset table based on column names
o.iloc[:, `columns`] Subset table based on column indexes
o.ix[:, `columns`] Subset table based on mixed column names and indexes
o.xs(`column`, axis=1) Select a cross-section of the table
o[`boolean-column`] Filter data rows based on boolean column values
o.query('`expr`') Apply a filter to the data values
====================== ====================================================
.. ipython:: python
:suppress:
import os
import swat
hostname = os.environ['CASHOST']
port = os.environ['CASPORT']
username = os.environ.get('CASUSER', None)
password = os.environ.get('CASPASSWORD', None)
conn = swat.CAS(hostname, port, username, password)
The Basics
----------
Just as with :class:`pandas.DataFrame` objects, :class:`CASTable` objects implement
Python's ``__getitem__`` method to allow indexing using ``[ ]``. This allows
you to subset the columns that are visible in the table.
.. ipython:: python
tbl = conn.read_csv('https://raw.githubusercontent.com/'
'sassoftware/sas-viya-programming/master/data/cars.csv')
tbl.head()
Here we are selecting a single column from the table. This will return a
:class:`CASColumn` object.
.. ipython:: python
tbl['Make'].head()
Selecting multiple columns returns a new :class:`CASTable` object.
.. ipython:: python
tbl[['Make', 'Model', 'Horsepower']].head()
You can also access individual columns using attribute syntax.
.. ipython:: python
tbl.Make.head()
Caution should be used when using attribute syntax because it depends on the
fact that there are no existing attributes, methods, or CAS actions with that
same name on the :class:`CASTable`. It also requires that the column name
be a valid Python identifier. Since CAS actions can be added dynamically,
attribute access should generally only be used in interactive programming.
For programs that will be reused, it is safer to use the ``[ ]`` syntax.
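The underlying hazard is ordinary Python attribute lookup, not anything CAS-specific: attribute access finds existing methods and attributes before falling back to column lookup, while ``[ ]`` always reaches the column. A toy, connection-free sketch (the ``Table`` class below is hypothetical, purely for illustration):

```python
class Table:
    """Toy stand-in for a table-like object that exposes columns two ways."""

    def __init__(self, columns):
        self._columns = columns

    def __getitem__(self, name):
        # Indexing always reaches the column data.
        return self._columns[name]

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so real
        # methods and attributes win over same-named columns.
        return self._columns[name]

    def summary(self):
        return "table summary method"


tbl = Table({"Make": ["Acura"], "summary": ["shadowed column"]})

print(tbl.Make)        # ['Acura']           -- no collision, works fine
print(tbl["summary"])  # ['shadowed column'] -- indexing reaches the column
print(tbl.summary())   # 'table summary method' -- the method shadows the column
```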
Selecting by Name
-----------------
The ``loc`` property is used to select columns based on the column names.
Column names can be specified as a string, a list of strings, or a slice.
If a string is given, a :class:`CASColumn` is returned. If a list of strings
or a slice is specified, a :class:`CASTable` is returned.
A single string selects a column. Since row selection is not supported at
this time, this is equivalent to ``tbl.loc['Make']``.
.. ipython:: python
tbl.loc[:, 'Make'].head()
Using a list of strings selects those columns and returns a new :class:`CASTable`
object. Again, this is equivalent to ``tbl[['Make', 'Model']]``.
.. ipython:: python
tbl.loc[:, ['Make', 'Model']].head()
Slicing using column names allows you to select a range of columns.
.. ipython:: python
tbl.loc[:, 'Model':'Invoice'].head()
You can even specify a step size.
.. ipython:: python
tbl.loc[:, 'Model':'Invoice':2].head()
Note that when using column names in slices, both endpoints are included
in the slice. This is not the same behavior as with numeric indexes, but it is
consistent with the way that slicing works in :class:`pandas.DataFrame`
objects.
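The contrast with positional slicing is the standard Python convention, where the stop index is excluded. A plain-list reminder, using the first few cars columns as sample labels:

```python
# Column labels as they appear at the start of the cars table.
cols = ["Make", "Model", "Type", "Origin", "DriveTrain", "MSRP", "Invoice"]

# Positional slicing (the convention .iloc follows): the stop index is EXCLUDED.
print(cols[1:5])             # ['Model', 'Type', 'Origin', 'DriveTrain']

# Label-based slicing with .loc instead includes BOTH endpoints, so
# tbl.loc[:, 'Model':'Invoice'] covers every column from Model through Invoice:
start, stop = cols.index("Model"), cols.index("Invoice")
print(cols[start:stop + 1])  # ['Model', 'Type', 'Origin', 'DriveTrain', 'MSRP', 'Invoice']
```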
Selecting by Position
---------------------
The ``iloc`` property is used to select columns based on column indices.
Just like with ``loc``, the column indices can be specified as a single
integer, a list of integers, or a slice.
.. ipython:: python
tbl.iloc[:, 1].head()
Using a list of integers returns a new :class:`CASTable` object.
.. ipython:: python
tbl.iloc[:, [1, 5, 3]].head()
Of course, ranges work here as well, with or without a step size.
.. ipython:: python
tbl.iloc[:, 2:6].head()
.. ipython:: python
tbl.iloc[:, 6:2:-2].head()
Mixing Names and Position
-------------------------
The ``ix`` property works just like the ``loc`` and ``iloc`` properties
except that it takes a mix of column names and indexes.
.. ipython:: python
tbl.ix[:, 'Model'].head()
tbl.ix[:, 3].head()
.. ipython:: python
tbl.ix[:, ['Model', 4, 3]].head()
.. ipython:: python
tbl.ix[:, 'Model':6:2].head()
Selecting a Cross Section
-------------------------
The ``xs`` method currently only supports column selection (i.e., axis=1).
It is primarily here for future development.
.. ipython:: python
tbl.xs('Model', axis=1).head()
Boolean Indexing
----------------
It is possible to use a :class:`CASColumn` as a way to select rows in a
CAS table. The :class:`CASColumn` should contain values that are valid
booleans to CAS (typically integer values where 0 is `false` and non-zero
is `true`).
Here is a basic example that selects all cars with an MSRP value over 80,000.
.. ipython:: python
tbl[tbl.MSRP > 80000].head()
Conditions can be combined with ``|`` for `or`, ``&`` for `and`, and ``~``
for `not`. However, due to operator precedence in Python, you must put
your comparison operations in parentheses before combining them with
these operators.
.. ipython:: python
tbl[(tbl.MSRP > 80000) & (tbl.Horsepower > 400)].head()
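The precedence pitfall comes from Python itself, not from SWAT: ``&`` binds more tightly than comparison operators such as ``>``. A connection-free illustration with plain integers:

```python
a, b = 5, 3

# Without parentheses, '&' is evaluated before the comparisons:
# a > 4 & b > 2  is parsed as  a > (4 & b) > 2,  i.e.  5 > 0 > 2.
unparenthesized = a > 4 & b > 2    # chained comparison -> False

# Parenthesizing each comparison first gives the intended logic:
parenthesized = (a > 4) & (b > 2)  # True & True -> True

print(unparenthesized, parenthesized)  # False True
```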
Since each mask of a :class:`CASTable` object returns a new :class:`CASTable`
object, you can split operations across multiple steps.
.. ipython:: python
expensive = tbl[tbl.MSRP > 80000]
expensive[expensive.Horsepower > 400].head()
.. warning::
You can only use columns from within the same CAS table in boolean
operations. If you want to combine operations across tables, you
should create a view that contains all of the data, then use
the filtering features outlined above on that view.
The ``query`` Method
--------------------
Rather than using the boolean data selection described above, you can
write a CAS `where` expression and apply it to a :class:`CASTable` object
directly using the :meth:`CASTable.query` method. This can often result
in more readable code when using longer expressions.
.. ipython:: python
tbl.query('MSRP > 80000 and Horsepower > 400').head()
Of course, queries can be combined across multiple steps as well.
.. ipython:: python
expensive = tbl.query('MSRP > 80000')
expensive.query('Horsepower > 400').head()
.. ipython:: python
:suppress:
conn.close()
========
Overview
========
.. image:: https://ydataai.github.io/pandas-profiling/docs/assets/logo_header.png
:alt: Pandas Profiling Logo Header
.. image:: https://github.com/ydataai/pandas-profiling/actions/workflows/tests.yml/badge.svg?branch=master
:alt: Build Status
:target: https://github.com/ydataai/pandas-profiling/actions/workflows/tests.yml
.. image:: https://codecov.io/gh/ydataai/pandas-profiling/branch/master/graph/badge.svg?token=gMptB4YUnF
:alt: Code Coverage
:target: https://codecov.io/gh/ydataai/pandas-profiling
.. image:: https://img.shields.io/github/release/pandas-profiling/pandas-profiling.svg
:alt: Release Version
:target: https://github.com/ydataai/pandas-profiling/releases
.. image:: https://img.shields.io/pypi/pyversions/pandas-profiling
:alt: Python Version
:target: https://pypi.org/project/pandas-profiling/
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:alt: Code style: black
:target: https://github.com/python/black
``pandas-profiling`` generates profile reports from a pandas ``DataFrame``.
The pandas ``df.describe()`` function is handy yet a little basic for exploratory data analysis. ``pandas_profiling`` extends pandas DataFrame with ``df.profile_report()``,
which automatically generates a standardized univariate and multivariate report for data understanding.
For each column, the following information (whenever relevant for the column type) is presented in an interactive HTML report:
* **Type inference**: detect the types of columns in a ``DataFrame``
* **Essentials**: type, unique values, indication of missing values
* **Quantile statistics**: minimum value, Q1, median, Q3, maximum, range, interquartile range
* **Descriptive statistics**: mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
* **Most frequent and extreme values**
* **Histograms:** categorical and numerical
* **Correlations**: high correlation warnings, based on different correlation metrics (Spearman, Pearson, Kendall, Cramér's V, Phik)
* **Missing values**: through counts, matrix, heatmap and dendrograms
* **Duplicate rows**: list of the most common duplicated rows
* **Text analysis**: most common categories (uppercase, lowercase, separator), scripts (Latin, Cyrillic) and blocks (ASCII, Cyrillic)
* **File and Image analysis**: file sizes, creation dates, dimensions, indication of truncated images and existence of EXIF metadata
The report contains three additional sections:
* **Overview**: mostly global details about the dataset (number of records, number of variables, overall missingness and duplicates, memory footprint)
* **Alerts**: a comprehensive and automatic list of potential data quality issues (high correlation, skewness, uniformity, zeros, missing values, constant values, among others)
* **Reproduction**: technical details about the analysis (time, version and configuration)
The package can be used via code but also directly as a CLI utility. The generated interactive report can be consumed and shared as regular HTML or embedded in an interactive way inside Jupyter Notebooks.
.. NOTE::
**⚡ Looking for a Spark backend to profile large datasets?**
    While not yet finished, a Spark backend is in development. Progress can be tracked `here <https://github.com/ydataai/pandas-profiling/projects/3>`_. Testing and contributions are welcome!

.. _Ocean Masternodes:
.. automodule:: defichain.ocean
:noindex:
Masternodes
-----------
.. autoclass:: Masternodes
:members:
========
Usage
========
1. Include the following in your ``INSTALLED_APPS`` settings: ::
'popcorn',
2. Add this to your ``settings.py`` (If you do not already have it): ::
TEMPLATE_CONTEXT_PROCESSORS = (
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.contrib.messages.context_processors.messages",
"django.core.context_processors.request",
"popcorn.context_processors.admin_media_prefix",
)
POPCORN_MODELS = ('auth.Group', 'auth.Permission')
3. Add the following to your ``base.html`` template: ::
<script src="{{ ADMIN_MEDIA_PREFIX }}js/admin/RelatedObjectLookups.js"></script>
4. We will create a view for ``auth.User`` and use the ``get_popcorn_urls`` utility function to generate popcorn views and URLs: ::
urlpatterns = patterns('',
url(r'^$', CreateView.as_view(model=User, success_url='.'), name='auth_user_create'),
url(r'^admin/', include(admin.site.urls)),
)
urlpatterns += get_popcorn_urls()
5. Render your forms like so: ::
<form method="POST" action="{{ request.get_full_path }}">
{% csrf_token %}
{% include 'popcorn/form.html' %}
<button type="submit">Submit</button>
<a href="../">Cancel</a>
</form>
Requirements
============
Within the documentation of public contracts concerning the award of design,
restyling, development and/or management of websites and digital services,
Administrations MUST include the following wording:

"The supplier in charge of developing the web service must comply with the
indications set out in the design guidelines for PA web services (Linee
guida di design per i servizi web della PA)".
Accessibility
-------------

- **Purpose**: make the content, structure and behavior of IT tools
  accessible to all users, in accordance with legal requirements.
- **Required actions**:

  - a statement of adherence to the guidelines on the accessibility of IT
    tools issued by AGID MUST be included in tender contracts for website
    development.

- **Regulatory references**: [LG_ACCESS]
Reliability and security
------------------------

- **Purpose**: develop secure digital services, in compliance with the GDPR
  and the Italian Privacy Code (Codice privacy).
- **Required actions**:

  - the protection of personal data MUST be ensured by design and by
    default when developing a website or digital service, in compliance
    with Art. 25 of the GDPR;
  - at least the basic security level established by the "Misure minime di
    sicurezza ICT per le pubbliche amministrazioni" (minimum ICT security
    measures for public administrations) MUST be met, where a higher level
    is not specifically required by that document;
  - technical and organizational measures MUST be put in place to ensure a
    level of security appropriate to the risk, in compliance with Art. 32
    of the GDPR and in line with the accountability principle under
    Art. 5(2) of the GDPR;
  - each site MUST publish the personal data processing notice and request
    consent where necessary, including with reference to the use of
    cookies;
  - personal data processing activities MUST be entered in the Record of
    processing activities, and any web service suppliers that process
    personal data on behalf of the data controller MUST be appointed as
    data processors pursuant to Art. 28 of the GDPR.

- **Regulatory references**: [GDPR], [Codice privacy],
  [Directive (EU) 2016/1148 and D.Lgs. 65/2018]
Ease of consultation and use
----------------------------

- **Purpose**: understand the user needs the service intends to address;
  build usable user interfaces, reducing development time and costs, and
  observe how users interact with the service, in order to improve it
  continuously.
- **Required actions**:

  - the objectives, the target users of the service and the actors involved
    in the design process MUST be explicitly defined;
  - the functions (user stories) of the service MUST be mapped and a
    prototype created and tested to validate the design solution and its
    usability;
  - the information architecture SHOULD be based on the results of user
    research;
  - interviews and usability tests MUST be conducted to understand whether
    existing digital services, or services being designed, meet users'
    needs;
  - standard Public Administration ontologies and controlled vocabularies
    MUST be used;
  - language and an organization of web content appropriate to the target
    user MUST be used;
  - published content MUST be made easily findable by search engines, both
    external and internal to the site.

- **Regulatory references**: [LG_ACCESS]
- **Supporting tools**: [Des_IT]
Service monitoring
------------------

- **Purpose**: analyze and improve the user experience of digital services
  by collecting usage data.
- **Required actions**:

  - administrations SHOULD join the `Web Analytics Italia
    <https://webanalytics.italia.it/>`_ (WAI) platform, an open source
    solution for collecting, analyzing and sharing traffic and user
    behavior data, dedicated to the websites of Italian public
    administrations;
  - users MUST be able to easily report to the administration their level
    of satisfaction and any difficulties encountered with the quality of
    online information and services;
  - analysis and validation of user feedback on perceived quality MUST be
    carried out;
  - evolutionary maintenance of online services SHOULD be carried out,
    using the main quantitative and qualitative testing and research
    methodologies.

- **Regulatory references**: [Art. 7(3) of the CAD]
- **Supporting tools**: [Des_IT]
Mandatory content
-----------------

- **Purpose**: ensure the presence of the content that is mandatory for the
  Public Administration.
- **Required actions**:

  - without prejudice to compliance with current legislation on
    administrative transparency and personal data protection, all Public
    Administration websites MUST publish the following content:

    - the link to the accessibility statement, in the footer of the
      website;
    - suitably aggregated information derived from the statistical
      monitoring activated on the individual site;

  - each page MUST display the date on which its content was last updated
    or verified.

- **Regulatory references**: [D.lgs. 33/2013], [LG_ACCESS], [CAD],
  [Codice privacy]
User interface
--------------

- **Purpose**: provide interfaces that are easy to use on every type of
  device.
- **Required actions**:

  - coherent and consistent interfaces MUST be built;
  - interfaces optimized for different devices MUST be built.

- **Regulatory references**: [LG_ACCESS]
- **Supporting tools**: [Des_IT]
Integration of enabling platforms
---------------------------------

- **Purpose**: provide a user experience common to the various online
  procedures.
- **Required actions**:

  - access to PA digital services MUST be guaranteed through the
    authentication systems provided for by the CAD;
  - users MUST be able to make online payments through the
    `pagoPA <https://www.pagopa.gov.it/>`_ platform.

- **Regulatory references**: [Art. 5(1); Art. 62; Art. 64(2-quater);
  Art. 64-bis(1-bis) of the CAD], [EU eIDAS Regulation]
.. forum_italia::
:topic_id: 23512
:scope: document
vcp + absinthe:
Bridge tool to connect `VCP <https://github.com/voidpp/vcp>`_ and `absinthe <https://github.com/voidpp/absinthe>`_.
Provides an interface to search for files in the folders defined by VCP's projects and open them with `absinthe-client <https://github.com/voidpp/absinthe-client>`_.
frites.core.gccmi\_model\_nd\_cdnd
==================================
.. currentmodule:: frites.core
.. autofunction:: gccmi_model_nd_cdnd
.. _sphx_glr_backreferences_frites.core.gccmi_model_nd_cdnd:
.. minigallery:: frites.core.gccmi_model_nd_cdnd
    :add-heading:

===========
XML Writers
===========
xsData ships with multiple writers, based on lxml and native Python, whose
performance may vary in some cases. The output of all of them is consistent,
with a few exceptions when handling mixed content with ``pretty_print=True``.
.. currentmodule:: xsdata.formats.dataclass.serializers.writers
.. autosummary::
:toctree: reference
:template: dataclass.rst
:nosignatures:
LxmlEventWriter
XmlEventWriter
.. currentmodule:: xsdata.formats.dataclass.serializers.mixins
.. autosummary::
:toctree: reference
:template: dataclass.rst
:nosignatures:
XmlWriter
**Benchmarks**
.. code-block::
Name (time in ms) Min Max Mean Median
------------------------------------------------------------------------------------------------------------
LxmlEventWriter-100 12.3876 (1.0) 14.0231 (1.02) 12.7359 (1.00) 12.6545 (1.00)
XmlEventWriter-100 12.4709 (1.01) 13.7136 (1.0) 12.7122 (1.0) 12.6516 (1.0)
LxmlEventWriter-1000 121.3230 (9.79) 127.0393 (9.26) 123.3760 (9.71) 122.5806 (9.69)
XmlEventWriter-1000 122.6532 (9.90) 125.3476 (9.14) 124.2966 (9.78) 124.6594 (9.85)
LxmlEventWriter-10000 1,223.8570 (98.80) 1,234.0158 (89.98) 1,230.9853 (96.84) 1,232.9678 (97.46)
XmlEventWriter-10000 1,228.0192 (99.13) 1,235.6687 (90.11) 1,232.3008 (96.94) 1,233.0478 (97.46)
--------------------------------------------------------------------------------------------------------------
| 40.047619 | 114 | 0.508918 |
9cc79072df926cdf2fddf61fd8642da36de31962 | 362 | rst | reStructuredText | docs/source/installation/custom_import_functions.rst | gmc-norr/GMS-uploader | f0681d239588a98fa75d6b3737b29169b099df15 | [
"MIT"
] | 2 | 2021-09-23T08:38:36.000Z | 2021-11-07T12:09:33.000Z | docs/source/installation/custom_import_functions.rst | genomic-medicine-sweden/GMS-uploader | f0681d239588a98fa75d6b3737b29169b099df15 | [
"MIT"
] | null | null | null | docs/source/installation/custom_import_functions.rst | genomic-medicine-sweden/GMS-uploader | f0681d239588a98fa75d6b3737b29169b099df15 | [
"MIT"
] | null | null | null | FX - creating custom importers
++++++++++++++++++++++++++++++
Custom import functions (FX) are provided via a plugin system.
FX importers are created as Python modules and should be placed under ``<GMS-uploader install path>/fx``.
A good starting point for creating a new importer is to look closely at the ``analytix`` importer that is already present in the fx modules folder.
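For orientation, a minimal importer module might look like the sketch below. The module path ``fx/my_importer.py``, the function name and the tab-separated layout are assumptions for illustration only — the real plugin interface should be taken from the bundled ``analytix`` importer.

```python
# fx/my_importer.py -- hypothetical skeleton of a custom FX importer.
# The function name and file layout are assumptions; study the bundled
# ``analytix`` importer for the real GMS-uploader interface.

def import_metadata(path):
    """Read a tab-separated metadata file and return one dict per row."""
    rows = []
    with open(path) as fh:
        header = fh.readline().rstrip("\n").split("\t")
        for line in fh:
            if not line.strip():
                continue  # skip blank lines
            values = line.rstrip("\n").split("\t")
            rows.append(dict(zip(header, values)))
    return rows
```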
| 36.2 | 135 | 0.720994 |
902ac4038f87cb535c1f8020ebaabe362a04d78a | 1,733 | rst | reStructuredText | docs/changelog.rst | sansanichfb/confluent-cli | f7ca0f44b0fcfc5e6ef0c72bdc54ef796d1412ba | [
"Apache-2.0"
] | null | null | null | docs/changelog.rst | sansanichfb/confluent-cli | f7ca0f44b0fcfc5e6ef0c72bdc54ef796d1412ba | [
"Apache-2.0"
] | null | null | null | docs/changelog.rst | sansanichfb/confluent-cli | f7ca0f44b0fcfc5e6ef0c72bdc54ef796d1412ba | [
"Apache-2.0"
] | null | null | null | .. _confluent_cli_changelog:
Changelog
=========
Version 4.1.0
-------------
Confluent CLI
~~~~~~~~~~~~~~
* `PR-71 <https://github.com/confluentinc/confluent-cli/pull/71>`_ - PLAT-15: Confluent CLI needs to set ksql-server state.dir
* `PR-69 <https://github.com/confluentinc/confluent-cli/pull/69>`_ - Changed the KSQL server port to 8088.
* `PR-65 <https://github.com/confluentinc/confluent-cli/pull/65>`_ - PLAT-1: Unified location for service logs under CONFLUENT_CURRENT
* `PR-66 <https://github.com/confluentinc/confluent-cli/pull/66>`_ - Renamed ksql service to ksql-server.
* `PR-64 <https://github.com/confluentinc/confluent-cli/pull/64>`_ - Update ksql properties file name
* `PR-60 <https://github.com/confluentinc/confluent-cli/pull/60>`_ - CP-502: Add confluent control center
* `PR-61 <https://github.com/confluentinc/confluent-cli/pull/61>`_ - Route stderr to same logfile as stdout
* `PR-58 <https://github.com/confluentinc/confluent-cli/pull/58>`_ - Added KSQL server service to Confluent CLI.
* `PR-55 <https://github.com/confluentinc/confluent-cli/pull/55>`_ - CP-503: Add version command
Version 4.0.0
-------------
Confluent CLI
~~~~~~~~~~~~~~
* `PR-51 <https://github.com/confluentinc/confluent-cli/pull/51>`_ - HOTFIX: Set absolute plugin.path if needed.
* `PR-41 <https://github.com/confluentinc/confluent-cli/pull/41>`_ - CLIENTS-523 : Integrate SR ACL CLI with Confluent CLI
Version 3.3.1
-------------
Confluent CLI
~~~~~~~~~~~~~~
* `PR-44 <https://github.com/confluentinc/confluent-cli/pull/44>`_ - Alphabetize help commands
* `PR-45 <https://github.com/confluentinc/confluent-cli/pull/45>`_ - CC-1129: Fix top on Linux
Version 3.3.0
-------------
Confluent CLI
~~~~~~~~~~~~~~
Initial Version
| 36.87234 | 134 | 0.693595 |
22376d3acfb0dfb6f490677f8d2b53c955f3dcf4 | 80 | rst | reStructuredText | tests/test_build_js/source/docs/autoclass_deprecated.rst | acivgin1/sphinx-js | 9c8afe4e2ea46c53916ae8278747722c84b52ced | [
"MIT"
] | 151 | 2017-03-16T22:17:36.000Z | 2020-07-02T03:01:29.000Z | tests/test_build_js/source/docs/autoclass_deprecated.rst | acivgin1/sphinx-js | 9c8afe4e2ea46c53916ae8278747722c84b52ced | [
"MIT"
] | 97 | 2018-12-04T09:40:33.000Z | 2022-03-31T10:30:37.000Z | tests/test_build_js/source/docs/autoclass_deprecated.rst | acivgin1/sphinx-js | 9c8afe4e2ea46c53916ae8278747722c84b52ced | [
"MIT"
] | 37 | 2018-11-26T15:36:05.000Z | 2022-02-05T00:32:37.000Z | .. js:autoclass:: DeprecatedClass
.. js:autoclass:: DeprecatedExplanatoryClass
| 20 | 44 | 0.7875 |
815bf036df067c43b111a54226ef3a111c27c64f | 8,176 | rst | reStructuredText | pcraster/pcraster-4.2.0/pcraster-4.2.0/documentation/pcraster_manual/sphinx/app_col2map.rst | quanpands/wflow | b454a55e4a63556eaac3fbabd97f8a0b80901e5a | [
"MIT"
] | null | null | null | pcraster/pcraster-4.2.0/pcraster-4.2.0/documentation/pcraster_manual/sphinx/app_col2map.rst | quanpands/wflow | b454a55e4a63556eaac3fbabd97f8a0b80901e5a | [
"MIT"
] | null | null | null | pcraster/pcraster-4.2.0/pcraster-4.2.0/documentation/pcraster_manual/sphinx/app_col2map.rst | quanpands/wflow | b454a55e4a63556eaac3fbabd97f8a0b80901e5a | [
"MIT"
] | null | null | null |
.. index::
single: col2map
.. _col2map:
*******
col2map
*******
.. topic:: col2map
Converts from column file format to PCRaster map format
::
col2map [options] columnfile PCRresult
columnfile
asciifile
PCRresult
spatial
specified by data type option; if data type option is not set: data type of :emphasis:`PCRclone`
Options
=======
:literal:`--clone`:emphasis:`PCRclone`
:emphasis:`PCRclone` is taken as clonemap. If you have set a global clonemap as :ref:`global option <GOClone>`, you don't need to set clone in the command line: the clonemap you have set as global option is taken as clonemap. If you have not set a global clonemap or if you want to use a different clonemap than the global clonemap, you must specify the clonemap in the command line with the clone option.
:literal:`--unittrue` or :literal:`--unitcell`
:literal:`--unittrue`
coordinates in columnfile are interpreted as real distance (default)
:literal:`--unitcell`
coordinates in columnfile are interpreted as distance in number of cell lengths
-B, -N, -O, -S, -D and -L
This data type option specifies the data type which is assigned to PCRresult (respectively boolean, nominal, ordinal, scalar, directional, ldd). If the option is not set, PCRresult is assigned the data type of :emphasis:`PCRclone` or the global clone. The data in columnfile must be in the domain of the data type which is assigned to PCRresult. For a description of these domains see the description of the different :ref:`data types <secdatbasemaptype>`.
if option -D is set: --degrees or --radians
:literal:`--degrees`
values on columnfile are interpreted as degrees (default)
:literal:`--radians`
values on columnfile are interpreted as radians
-m :emphasis:`nodatavalue`
:emphasis:`nodatavalue` is the value in :emphasis:`columnfile` which is converted to a missing value on PCRresult. It can be one ascii character (letters, figures, symbols) or a string of ascii characters. For instance: -m -99.89 or -m j5w. By default, if this option is not set, 1e31 is recognized as a missing value.
-s :emphasis:`separator`
By default, whitespace (one or more tabs, spaces) is recognized as separator between the values of a row in the columnfile. If the values are separated by a different separator, you can specify it with the option. The :emphasis:`separator` can be one of the ascii characters (always one). In that case, col2map recognizes the specified separator with or without whitespace as separator. For instance, if the values in columnfile are separated by a ; character followed by 5 spaces, specify -s ; in the command line (you do not need to specify the whitespace characters).
columnnumbers
-x :emphasis:`columnnumberx`
is the column number of the x coordinate in columnfile (default 1)
-y :emphasis:`columnnumbery`
:emphasis:`columnnumbery` is the column number of the y coordinate in columnfile (default 2)
-v :emphasis:`columnnumberv`
:emphasis:`columnnumberv` is the column number of the cell values in columnfile (default 3)
Each cell on PCRresult is assigned the cell value on columnfile which has x,y coordinates that define a point in that cell; for assignment of values in columnfile which have x,y coordinates at the edges of cells on PCRresult, the following options are used:
:literal:`--coorcentre`, :literal:`--coorul` or :literal:`--coorlr`
:literal:`--coorcentre` (default) or :literal:`--coorul`
values in columnfile that have x,y coordinates at the upper and left margins of a cell come into that cell, values at the bottom and right margins come into neighbouring cells. So, cell values with x, y coordinates at vertexes of cells come into the cell at the lower right side of the vertex.
:literal:`--coorlr`
values in columnfile that have x, y coordinates at the bottom and right margins of a cell come into that cell, values at the upper and left margins come into neighbouring cells. So, cell values with x, y coordinates at vertexes of cells come into the cell at the upper left side of the vertex.
Options to specify which value is assigned if two or more values in
columnfile are found which all come into the same cell on PCRresult:
-a, -h, -l, -H, -M, -t
-a
average value of the values found within the cell is assigned (default for scalar and directional data; for directional data and assignment of records without a direction, see notes)
-h
highest score: most occurring value found for the cell is assigned; if two values are found the same (largest) number of times, the highest value of these values is assigned, this is called a majority conflict (default for boolean, nominal, ordinal and ldd data)
-l
lowest score: least occurring value found for the cell is assigned (option for nominal, ordinal, boolean, ldd data); if two values are found the same (smallest) number of times, the smallest value of these values is assigned, this is called a minority conflict.
-H
highest value found for the cell is assigned (option for scalar or ordinal data)
-M
lowest value found for the cell is assigned (option for scalar or ordinal data)
-t
total (sum) of the columnfile values is assigned (option for scalar data)
Operation
=========
The columnfile is converted to PCRresult, which is an expression in PCRaster map format with the location attributes of :emphasis:`PCRclone`. The columnfile must be in the format described in :ref:`secdatbasepointform`.
For each cell on PCRresult the operator searches in columnfile for records that have x,y co-ordinates that come into that cell on PCRresult. If one single record is found, the value of this record is assigned to the cell, if several records are found, the value which is assigned is specified by the option (-a, -h, -l, -H or -M). A cell on PCRresult without a value on columnfile that falls into the cell is assigned a missing value on PCRresult.
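To make the assignment rules concrete, here is a toy Python sketch of this binning step. The function name and sample records are made up for illustration, and it ignores the clone map's real georeferencing, data types and missing-value handling:

```python
from collections import defaultdict

def col2grid(records, cell_size, agg="a"):
    """Toy sketch of col2map's value assignment: records are (x, y, value).

    Points are binned into square cells; a point on the upper/left edge of
    a cell belongs to that cell (the --coorcentre/--coorul convention).
    Real col2map also honours the clone map's georeferencing and data type.
    """
    cells = defaultdict(list)
    for x, y, v in records:
        cells[(int(x // cell_size), int(y // cell_size))].append(v)
    out = {}
    for ij, vals in cells.items():
        if agg == "a":      # average of all values in the cell (option -a)
            out[ij] = sum(vals) / len(vals)
        elif agg == "H":    # highest value found in the cell (option -H)
            out[ij] = max(vals)
        elif agg == "M":    # lowest value found in the cell (option -M)
            out[ij] = min(vals)
        elif agg == "t":    # total (sum) of the values (option -t)
            out[ij] = sum(vals)
    return out
```

For example, with a cell size of 10, two records at (30.5, 40.5) fall into the same cell and are averaged under the default -a option, while a lone record keeps its own value.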
Notes
=====
Directional data: If the option -a (average, default) is set, and both records
without a direction (value -1) and records with a direction come into a cell
(a so called direction conflict), the records without a direction are
discarded and the cell value is computed from the records containing a
direction only. Thus a cell is assigned a no direction value (value -1) only
if all records for that cell don't have a direction.
Using col2map to generate a PCRresult of data type ldd is quite risky: it will probably result in an ldd which is unsound. If you do want to create a PCRresult of data type ldd, use the operator lddrepair afterwards. This operator will modify the ldd in such a way that it will be sound; see the operator lddrepair.
Group
=====
This operation belongs to the group of Creation of PCRaster maps
See Also
========
:ref:`asc2map`
Examples
========
#. ::
col2map --clone mapclone.map -S -m mv -v 4 ColFile1.txt Result1.map
==================================================== =========================================== ============================================
`ColFile1.txt` `Result1.map` `mapclone.map`
.. literalinclude:: ../examples/col2map_ColFile1.txt .. image:: ../examples/col2map_Result1.png .. image:: ../examples/mapattr_mapclone.png
==================================================== =========================================== ============================================
#. ::
col2map --clone mapclone.map -O -m mv -x 2 -y 3 -v 6 --coorlr -H ColFile2.txt Result2.map
==================================================== =========================================== ============================================
`ColFile2.txt` `Result2.map` `mapclone.map`
.. literalinclude:: ../examples/col2map_ColFile2.txt .. image:: ../examples/col2map_Result2.png .. image:: ../examples/mapattr_mapclone.png
==================================================== =========================================== ============================================
| 44.677596 | 573 | 0.679183 |
c0d700501c7bac3f33985be5a25cb1bb180f4f55 | 1,058 | rst | reStructuredText | docs/deserts/scones.rst | huln24/recipes | df6aff0984601b04e528d33fe132471961793a11 | [
"BSD-3-Clause"
] | 1 | 2021-12-19T18:00:29.000Z | 2021-12-19T18:00:29.000Z | docs/deserts/scones.rst | huln24/recipes | df6aff0984601b04e528d33fe132471961793a11 | [
"BSD-3-Clause"
] | 1 | 2021-12-19T18:08:33.000Z | 2021-12-19T18:09:43.000Z | docs/deserts/scones.rst | huln24/recipes | df6aff0984601b04e528d33fe132471961793a11 | [
"BSD-3-Clause"
] | 1 | 2021-12-19T18:00:32.000Z | 2021-12-19T18:00:32.000Z | Scones
======
Ingredients
~~~~~~~~~~~
* 250 g flour (plus extra for the work surface)
* 40 g butter
* 15 cl milk
* 1.5 tablespoons sugar
* 1 packet baking powder
* Salt
Instructions
~~~~~~~~~~~~
#. Preheat the oven to 220°C (thermostat 7-8).
#. In a fairly large bowl, mix the flour, baking powder and butter (cut into pieces) by hand.
#. Add the sugar and salt, then the milk.
   Work the dough until it is supple (add flour if it is too sticky).
#. On a floured work surface, roll out the dough to a thickness of 2 cm.
#. Using a glass or a cookie cutter, cut out discs of dough
   about 5-6 cm in diameter.
#. Place the discs on the oven tray (lined beforehand with baking paper)
   and bake for 12 to 15 minutes depending on thickness
   (they should be nicely golden and puffed up).
#. Eat them straight out of the oven, filled with butter, jam or whipped cream.
| 37.785714 | 106 | 0.661626 |
2bd7a19005d958c71c7817dc6e28f8ddded7dd18 | 242 | rst | reStructuredText | doc/source/errorcodemsg.rst | haolp/python-zvm-sdk | 784b60b6528b57eb3fe9f795af439a25e20843b9 | [
"Apache-2.0"
] | 15 | 2019-08-14T20:15:17.000Z | 2020-09-28T01:09:48.000Z | doc/source/errorcodemsg.rst | haolp/python-zvm-sdk | 784b60b6528b57eb3fe9f795af439a25e20843b9 | [
"Apache-2.0"
] | 179 | 2019-09-05T03:53:10.000Z | 2020-10-09T07:21:40.000Z | doc/source/errorcodemsg.rst | haolp/python-zvm-sdk | 784b60b6528b57eb3fe9f795af439a25e20843b9 | [
"Apache-2.0"
] | 50 | 2019-06-14T16:51:53.000Z | 2020-08-27T05:36:38.000Z | Error Codes and Messages
************************
This is a reference to Feilong API error codes
and messages.
.. csv-table::
:header: "overallRC", "modID", "rc", "rs", "errmsg"
:delim: ;
:widths: 20,5,5,5,65
:file: errcode.csv
| 20.166667 | 54 | 0.578512 |
8f584105ab286b57484cb47a17c1a23adf994ac8 | 314 | rst | reStructuredText | docs/about.rst | chbndrhnns/exif | 65aa2d8bcdecf79d34752390310222a9bd5d5bb3 | [
"MIT"
] | null | null | null | docs/about.rst | chbndrhnns/exif | 65aa2d8bcdecf79d34752390310222a9bd5d5bb3 | [
"MIT"
] | null | null | null | docs/about.rst | chbndrhnns/exif | 65aa2d8bcdecf79d34752390310222a9bd5d5bb3 | [
"MIT"
] | null | null | null | #####
About
#####
************
Contributors
************
+ Tyler N. Thieding (Primary Author)
+ ArgiesDario (``delete_all()`` Method)
+ RKrahl (``setup.py`` Tweaks)
***********
Development
***********
:Repository: https://www.gitlab.com/TNThieding/exif
*******
License
*******
.. literalinclude:: ../LICENSE
| 13.083333 | 51 | 0.541401 |
4ac59b3a11f3edaef065f52f425fcb5659399fa7 | 134 | rst | reStructuredText | hakyll_generator/about.rst | vkorepanov/vkorepanov.github.io | ad7cfb12bd7798b7132ab5d0a247b31a2ef73a7a | [
"MIT"
] | 2 | 2017-09-15T06:48:32.000Z | 2017-10-19T11:24:00.000Z | hakyll_generator/about.rst | vkorepanov/vkorepanov.github.io | ad7cfb12bd7798b7132ab5d0a247b31a2ef73a7a | [
"MIT"
] | null | null | null | hakyll_generator/about.rst | vkorepanov/vkorepanov.github.io | ad7cfb12bd7798b7132ab5d0a247b31a2ef73a7a | [
"MIT"
] | null | null | null | ---
title: About me
---
My CV is available online `here </cv_ru.html>`_.
The CV in PDF format can be downloaded via this `link </cv_ru.pdf>`_.
| 16.75 | 61 | 0.686567 |
bf2b160883c505fdacd116a497141934ca04eaff | 19,416 | rst | reStructuredText | doc/manual.rst | db3108/michi-c2 | d2a4cb80b3151d27fe6e4f16ff89ca69191d7e32 | [
"MIT"
] | 25 | 2015-05-19T06:04:21.000Z | 2021-04-20T22:58:19.000Z | doc/manual.rst | db3108/michi-c2 | d2a4cb80b3151d27fe6e4f16ff89ca69191d7e32 | [
"MIT"
] | 16 | 2015-06-10T17:49:29.000Z | 2021-11-16T23:03:00.000Z | doc/manual.rst | db3108/michi-c2 | d2a4cb80b3151d27fe6e4f16ff89ca69191d7e32 | [
"MIT"
] | 13 | 2015-05-29T01:25:36.000Z | 2021-11-16T22:45:07.000Z | =======================
User Manual for michi-c
=======================
Denis Blumstein
Revision : 1.4.2
1. Introduction
***************
Michi-c is a *Go gtp engine*. This means that
- it is a program that plays Go, the ancient asian game.
- it takes its inputs and provides its outputs in text mode using the
so-called *Go Text Protocol* (gtp) [1].
Put simply, this means that the program reads commands like
*play black C3*
which tells him to put a black stone on intersection C3
*genmove white*
which tells him to answer its move in the current position
It is perfectly possible to play games with michi-c that way.
But it becomes fastidious very soon.
Luckily Markus Enzenberger has built a wonderful program named *gogui* [2]
that allows us to play interactively with michi using the mouse and a graphical
display of a board (and much more as we will see in the next section).
So this manual is for you if you need information
- on how to use michi-c with gogui (section 2)
- on the behaviour of the gtp commands understood by michi-c (section 3)
- the command line arguments (section 4)
**References**
[1] Gtp Go Text Protocol : http://www.lysator.liu.se/~gunnar/gtp/
[2] Gogui : http://gogui.sourceforge.net/
[3] Smart Game Format : http://www.red-bean.com/sgf
2. Playing with michi-c through gogui
*************************************
**Prerequisite** we suppose that gogui has been installed on your system and
is working well (tested by playing manually a few moves for example)
2.1 Making gogui aware of michi-c
---------------------------------
The first thing is to add michi-c to the list of programs known by gogui.
This is done by selecting "New Program" in the "Program" menu
as shown in figure 1 below (See the gogui documentation chapter 4.)
Then a windows pops up with two text fields to fill (fig.2).
In the *command* field enter
/absolute/path/to/michi gtp
This will run michi-c with the default parameters. In section 4 you will see
how these parameters can be changed at initialization time.
I recommend that you enter the absolute path where the michi executable is.
In the *working directory* field enter any existing directory you like.
This is where the michi log file will go.
If you get a new window as shown in fig. 3 when you click OK, then
it seems that you are done.
.. figure:: img/img1.png
:scale: 75 %
:align: center
fig1. Definition of an New Program
..
.. figure:: img/img2.png
:scale: 75 %
:align: center
fig2. Define the command to run michi
..
.. figure:: img/img3.png
:scale: 75 %
:align: center
fig3. The new program gogui will add to its list
..
What happens is that gogui has began to talk to michi.
It has asked for its name and its version.
The fact that the window has appeared shows that michi has answered correctly.
Before you validate the introduction of michi-c in the list of programs known
by gogui (by clicking OK), you have the opportunity to change the label
which, by default is the name of the program. This can be useful to have many
versions of the same program but with different parameter settings.
(it is always possible to change it afterwards, using the "Edit Program" entry
in the Program menu).
michi-c is now usable in gogui and you can begin to play with it.
But before playing, I **strongly** recommend that you install the large patterns
provided by Petr Baudis at http://pachi.or.cz/michi-pat/ (see INSTALL.md).
If you don't do this michi-c will be much weaker and playing experience will be
much worse.
**Once the files are unpacked, patterns.prob and patterns.spat must be
placed in the working directory**
(or links to them created in the working directory).
You play just by clicking on the intersection you want to play and as soon as
you have played the program begins to "think" and will reply after a few
seconds (the computer color has been set to white at initialization and can
be changed in the menu *Game*, see fig.4).
.. figure:: img/img5.png
:scale: 75 %
:align: center
fig4. How to change the computer color
..
**Troubleshooting** If you have not been as lucky as described in the above
paragraphs, the best advice I could give is to verify that the *command* field
is correct. On Linux or MacOS X the easy way to do it is to try to execute
this command in a terminal.
Then type **name** and the program should answer::
= michi-c
and wait for another command. You can then try **help** to obtain a list of all
available commands. **quit** will leave michi.
Normally, if you had trouble running michi in gogui, something should also go
wrong in this sequence. This should give you enough feedback,
either from the shell (if you made an error in the path) or from michi, for you
to understand what the problem is and to correct it.
If you are still in trouble, maybe it is time to read section 4,
"Troubleshooting", in the INSTALL file.
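The same handshake can also be scripted, for example from Python. This is only a sketch: the binary path is an assumption, and the parser simply follows the GTP convention that answers start with ``=`` (success) or ``?`` (failure) and end with a blank line.

```python
import subprocess

def parse_gtp_response(raw):
    """Split a raw GTP answer ('= result' or '? error') into (ok, text)."""
    raw = raw.strip()
    if raw.startswith("="):
        return True, raw[1:].strip()
    if raw.startswith("?"):
        return False, raw[1:].strip()
    raise ValueError("not a GTP response: %r" % raw)

def handshake(binary="./michi"):
    """Ask the engine for its name and version over a pipe, then quit."""
    proc = subprocess.Popen([binary, "gtp"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, universal_newlines=True)
    for cmd in ("name", "version", "quit"):
        proc.stdin.write(cmd + "\n")
        proc.stdin.flush()
        answer = []
        while True:                     # a GTP answer ends with a blank line
            line = proc.stdout.readline()
            if line.strip() == "":
                break
            answer.append(line)
        ok, text = parse_gtp_response("".join(answer))
        print(cmd, "->", text)
    proc.wait()
```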
2.2 What does michi-c think of the position?
--------------------------------------------
After you have played some moves with michi, you may want to know what is its
estimate of the position. You can obtain this information from commands
accessible in the *Analyze commands* window (as in fig6. below).
.. figure:: img/img6.png
:scale: 75 %
:align: center
fig6. Analyze commands
..
If this window is not visible, you can obtain it by selecting the
"Analyze commands" entry in the Tool Menu.
.. figure:: img/img7.png
:scale: 75 %
:align: center
fig7. Make the Analyze Commands window visible
..
Any of the analyze commands can be run either by double clicking on it or by
selecting a command and clicking on the *Run* button.
It is also possible to have a command executed automatically after each move.
The first three commands are for changing parameters that control the behavior
of michi-c. They will be described in next subsection (2.4).
The other commands give meaningful answer only just after michi-c has played
its move.
*Status Dead*
marks the stones estimated as dead by michi-c
(not always accurate as you will notice).
*Principal variation*
shows the best continuation (next 5 moves) as estimated by MCTS.
.. figure:: img/img14.png
:scale: 75 %
:align: center
fig8. Best sequence
..
*Best moves*
shows the winrate of the 5 best moves (i.e. the most visited by MCTS)
.. figure:: img/img13.png
:scale: 75 %
:align: center
fig9. Best moves
..
*Owner_Map*
represents for each intersection the percentage of white or black possession
at the end of the playouts (big black square = 100 % black possession,
big white square = 100 % white possession, nothing = around 50 %)
.. figure:: img/img11.png
:scale: 75 %
:align: center
fig10. Owner Map
..
*Visit Count*
represents the number of visits for each move in the root node of MCTS.
The size of the square is maximum for the most visited move and the
surface of each square is proportional to the visit count (discretized in
steps of 10 %, so 0 %, 1 %, ..., 5 % look the same, 6 %, 7 %, ..., 15 %
look the same, etc.)
.. figure:: img/img12.png
:scale: 75 %
:align: center
fig11. Visit count
..
*Histogram of scores*
This is a primitive representation of the histogram of the playout scores.
A nicer graphical rendering (e.g. a proper Python plot) is planned for a future version.
.. figure:: img/img15.png
:scale: 75 %
:align: center
fig12. Primitive view of histogram of playouts scores
..
2.3 Live graphics
-----------------
All the graphics commands (marked with a gfx prefix in the *Analyze commands*
window) can also be updated at regular intervals
during the search, providing an animation that can be fun to watch.
This is done by setting the *Live gfx* and *Live gfx interval* parameters in
the *General Parameters* settings window, as seen in figure 12 below.
2.4 Changing the michi-c parameters
-----------------------------------
By running one of the three commands *General Parameters*,
*Tree Policy Parameters* or *Random Policy Parameters* you will get a new
window (respectively shown in fig.12, fig.13 and fig.14) that will allow you
to change the parameters at any moment when michi is not thinking.
The modification process is natural and does not need any explanation.
It takes place after you click on OK.
There has not been any particular thought about the order of the parameters
and it could certainly be improved.
.. figure:: img/img8.png
:scale: 75 %
:align: center
fig12. General Parameters settings
..
Definitions::
use_dynamic_komi
dynamic komi is used in the current version of michi-c only for
handicap games (linear version). It can be enabled or disabled.
komi_per_handicap_stone
this value multiplied by the number of handicap stones will be the
delta komi at the beginning of the game
play_until_the_end
when checked this option disallows early passing (useful on cgos)
random_seed
random seed. Should normally be a positive integer
-1 generates a true random seed that will be different each time michi-c is restarted
REPORT_PERIOD
number of playouts between each report by michi on the standard output
note: useful only if verbosity > 0
verbosity
0, 1 or 2 : control the verbosity of michi on the standard output
RESIGN_THRES
winrate threshold (in [0.0, 1.0]).
When the winrate drops below this threshold, michi-c resigns
FASTPLAY20_THRES
if at 20% playouts winrate is > FASTPLAY20_THRES, stop reading
FASTPLAY5_THRES
if at 5% playouts winrate is > FASTPLAY5_THRES, stop reading
Live_gfx
None, best_moves, owner_map, principal_variation or visit_count
if different from None, gogui will display at regular intervals the
same graphics as in figures 8 to 11.
Live_gfx_interval
the interval (number of playouts) between live graphics refresh
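The early-stopping behaviour controlled by the two FASTPLAY thresholds can be sketched as follows. This is a simplification with hypothetical names; the actual code may combine it with other stopping conditions:

```python
def should_stop_early(playouts_done, playouts_total, winrate,
                      fastplay20_thres=0.80, fastplay5_thres=0.95):
    """Stop searching once the position already looks decided."""
    progress = playouts_done / playouts_total
    if progress >= 0.20 and winrate > fastplay20_thres:
        return True   # 20 % of the budget spent, clearly decided
    if progress >= 0.05 and winrate > fastplay5_thres:
        return True   # only 5 % spent, but overwhelmingly decided
    return False
```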
.. figure:: img/img9.png
:scale: 75 %
:align: center
fig13. Tree Policy Parameters settings
..
Definitions::
N_SIMS
Number of simulations per move (search).
Note: michi-c use a different value when playing with time limited constraints
RAVE_EQUIV
number of visits which makes the weight of RAVE simulations and real simulations equal
EXPAND_VISITS
number of visits before a node is expanded
PRIOR_EVEN
should be even. This represents a 0.5 prior
PRIOR_SELFATARI
NEGATIVE prior if the move is a self-atari
PRIOR_CAPTURE_ONE
prior if the move captures one stone
PRIOR_CAPTURE_MANY
prior if the move captures many stones
PRIOR_PAT3
prior if the move matches a 3x3 pattern
PRIOR_LARGEPATTERN
multiplier for the large patterns probability
note: most moves have relatively small probability
PRIOR_CFG[]
prior for moves in cfg distance 1, 2, 3
PRIOR_EMPTYAREA
prior for moves in empty area.
Negative for move on the first and second lines
Positive for move on the third and fourth lines
.. figure:: img/img10.png
:scale: 75 %
:align: center
fig14. Random Policy Parameters settings
..
Definitions::
PROB_HEURISTIC_CAPTURE (0.90)
probability of heuristic capture suggestions being taken in playout
PROB_HEURISTIC_PAT3 (0.95)
probability of heuristic 3x3 pattern suggestions being taken in playout
PROB_SSAREJECT (0.90)
probability of rejecting suggested self-atari in playout
PROB_RSAREJECT (0.50)
probability of rejecting random self-atari in playout
this is lower than above to allow nakade
3. Reference of michi-c gtp commands
************************************
The list of all gtp commands understood by michi-c (version 1.4) is
described in the following sections.
3.1 Standard gtp commands
-------------------------
The following standard gtp commands are implemented.
Please refer to [1] for the specification of each command.
*protocol_version,
name,
version,
known_command,
list_commands,
quit,
boardsize,
clear_board,
komi,
play,
genmove,
set_free_handicap,
loadsgf,
time_left,
time_settings,
final_score,
final_status_list,
undo*
The standard is implemented except for
*time_settings*
only absolute time setting is implemented yet
*loadsgf*
michi-c can only read simple SGF files, i.e. files with no
variations nor game collections (see [3]),
but this is not carefully checked, so
expect crashes if you try to push these limits.
These limitations will be removed in a future release.
3.2 Gogui specific commands (or extensions used by gogui)
---------------------------------------------------------
Please refer to [2] for the specification of each command.
*gogui-analyze_commands,
gogui-play_sequence,
gogui-setup,
gogui-setup_player,
gg-undo,
kgs-genmove_cleanup*
3.3 Commands to get or set parameters
-------------------------------------
Maybe the most important commands to know from a user point of view are the
three commands *param_general*, *param_playout* and *param_tree* that allow us
to change the parameters controlling the behavior of michi-c during the game.
If you give no argument to the command it will simply print the current value
of all the parameters it controls, else you must give two arguments : the name
of a parameter and the value you want to give to it.
If the given name is not known to the command, it is ignored and the command
behaves as if it was called without argument.
The names recognized by these commands are those described in section 2.4.
*param_general*::
Function: Read or write internal parameters of michi-c (General)
Arguments: name value or nothing
Fails: No value (if only a name was given)
Returns: A string formatted for gogui analyze command of param type
*param_playout*::
Function: Read or write internal parameters of michi-c (Playout Policy)
Arguments: name value or nothing
Fails: No value (if only a name was given)
Returns: A string formatted for gogui analyze command of param type
*param_tree*::
Function: Read or write internal parameters of michi-c (Tree Policy)
Arguments: name value or nothing
Fails: No value (if only a name was given)
Returns: A string formatted for gogui analyze command of param type
3.4 Rest of michi-c specific commands
-------------------------------------
*best_moves*::
Function: Build a list of the 5 best moves (5 couples: point winrate)
Arguments: none
Fails: never
Returns: A string formatted for gogui analyze command of pspair type
*cputime*
This is a command used by the gogui tool gogui-twogtp.
It returns a time in seconds, whose origin is unspecified.
*debug <subcmd>*
This command is only used for debugging purposes and regression testing.
People that need it are able to read the code and I believe it is not
necessary to make this manual longer in order to describe it.
*help*
This is a synonym for list_commands
*owner_map*::
Function: Compute a value in [-1,1] for each point: -1 = 100 % white, 1 = 100 % black
Arguments: none
Fails: never
Returns: A string formatted for gogui analyze command of gfx/INFLUENCE type
*principal_variation*::
Function: Compute the best sequence (5 moves) from the current position
Arguments: none
Fails: never
Returns: A string formatted for gogui analyze command of gfx VAR type
*score_histogram*::
Function: Build histogram of playout scores
Arguments: none
Fails: never
Returns: A string formatted for gogui analyze command of hstring type
*visit_count*::
Function: Compute a value in [0,1] for each point: 0 = never, 1= most visited
Arguments: none
Fails: never
Returns: A string formatted for gogui analyze command of gfx/INFLUENCE type
4. Michi-c command line arguments
*********************************
When michi is run from the command line without any parameter or as::
$ ./michi -h
it will write a simple usage message::
usage: michi mode [config.gtp]
where mode =
* gtp play gtp commands read from standard input
* mcbenchmark run a series of playouts (number set in config.gtp)
* mcdebug run a series of playouts (verbose, nb of sims as above)
* tsdebug run a series of tree searches
* defaults write a template of config file on stdout (defaults values)
* selfplay run a sequence of self play games
and
* config.gtp an optional file containing gtp commands
The most commonly used mode values are **gtp** and **defaults**.
Mode **gtp** launches the gtp loop that will be ended by sending the
command *quit* to michi.
Mode **defaults** prints on the standard output the current default value
of every modifiable parameter in michi and leaves immediately::
param_general use_dynamic_komi 0
param_general komi_per_handicap_stone 7.0
param_general play_until_the_end 0
param_general random_seed 1
param_general REPORT_PERIOD 200000
param_general verbosity 2
param_general RESIGN_THRES 0.20
param_general FASTPLAY20_THRES 0.80
param_general FASTPLAY5_THRES 0.95
param_general Live_gfx None
param_general Live_gfx_interval 1000
param_tree N_SIMS 2000
param_tree RAVE_EQUIV 3500
param_tree EXPAND_VISITS 8
param_tree PRIOR_EVEN 10
param_tree PRIOR_SELFATARI 10
param_tree PRIOR_CAPTURE_ONE 15
param_tree PRIOR_CAPTURE_MANY 30
param_tree PRIOR_PAT3 10
param_tree PRIOR_LARGEPATTERN 100
param_tree PRIOR_CFG[0] 24
param_tree PRIOR_CFG[1] 22
param_tree PRIOR_CFG[2] 8
param_tree PRIOR_EMPTYAREA 10
param_playout PROB_HEURISTIC_CAPTURE 0.90
param_playout PROB_HEURISTIC_PAT3 0.95
param_playout PROB_SSAREJECT 0.90
param_playout PROB_RSAREJECT 0.50
This list is interesting in itself.
In addition, by redirecting it to a file, you obtain a configuration file that
you can use as the optional parameter (config.gtp).
When michi is used in **gtp** mode with this second argument, all the gtp
commands placed in the config.gtp file will be executed at initialization.
This feature can be used to modify the default parameters or, for example, to
load a given position from an SGF file.
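Putting the two modes together, a typical session could look like the following
(this assumes the michi executable has been built and sits in the current
directory)::

   $ ./michi defaults > config.gtp    # write all default parameters to a file
   $ ./michi gtp config.gtp           # start the gtp loop with the edited file

Between the two commands you would edit config.gtp to change the parameters of
interest.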
The other four modes **mcbenchmark**, **mcdebug**, **tsdebug** and **selfplay**
are, as their names suggest, useful mainly for debugging and benchmarking.
:func:`unittest.mock.patch.dict` used as a decorator with string target
resolves the target during function call instead of during decorator
construction. Patch by Karthikeyan Singaravelan.
Get Rendered Tabular Text as str
================================================================
.. include:: dumps.txt
Migrating from Angular 1 to Angular 2
=====================================
Beginners Guide to OpenMx
=========================
This document is a basic introduction to OpenMx. It assumes that the reader has installed the R statistical programming language [http://www.r-project.org/] and the OpenMx library for R [http://openmx.psyc.virginia.edu]. Detailed introductions to R can be found on the internet. We recommend [http://faculty.washington.edu/tlumley/Rcourse] for a short course, but a Google search for 'introduction to R' provides many options.
The OpenMx scripts for the examples in this guide are available in the following files:
* http://openmx.psyc.virginia.edu/repoview/1/trunk/demo/OneFactorPathDemo.R
* http://openmx.psyc.virginia.edu/repoview/1/trunk/demo/OneFactorMatrixDemo.R
* http://www.vipbg.vcu.edu/OpenMx/html/NewBeginnersGuide.R
.. _BasicIntroduction:
Basic Introduction
------------------
The main function of OpenMx is to design a statistical model which can be used to test a hypothesis with a set of data. The way to do this is to use one of the functions written in the R language as part of the OpenMx package. The function to create an Mx model is (you guessed it) ``mxModel()``. Note that i) R is case sensitive and ii) the OpenMx package must be loaded before any of the OpenMx functions are used (this only has to be done once in an R session).
.. cssclass:: input
..
.. code-block:: r
require(OpenMx)
We start by building a model with the ``mxModel()`` function, which takes a number of arguments. We explain each of these below. Usually we save the result of applying the ``mxModel()`` function as an R object, here called *myModel*.
.. cssclass:: input
..
.. code-block:: r
myModel <- mxModel()
This R object has a special class named MxModel. We might read the above line 'myModel gets mxModel left paren right paren'. In this case we have provided no arguments in the parentheses, but R has still created an empty MxModel object *myModel*. Obviously this model needs arguments to do anything useful, but just to get the idea of the process of 'Model Building - Model Fitting' here is how we would go about fitting this model. The *myModel* object becomes the argument of the OpenMx function ``mxRun()`` to fit the model. The result is stored in a new R object, *myModelRun* of the same class as the previous object but updated with the results from the model fitting.
.. cssclass:: input
..
.. code-block:: r
myModelRun <- mxRun(myModel)
A model typically contains data, free and/or fixed parameters, and some function to be applied to the data according to the model. Models can be built in a variety of ways, thanks to the flexibility that OpenMx inherited from classic **Mx** - for those of you who are familiar with that software - and has further expanded.
One possibility for building models is to create all the elements as separate R objects, which are then combined by calling them up in the ``mxModel()`` statement. We will refer to this approach as the **piecewise style**.
A second type is to start by creating an ``mxModel()`` with one element, then take this model as the basis for the next model, to which you add another element, and so on. This also provides a good way to make sure that each element is syntactically correct before adding anything new. We will call this approach the **iterative/recursive/stepwise style**.
Another approach more closely resembles the traditional classic **Mx** approach, where you specify all the elements of the model at once, all as arguments of the ``mxModel()``, separated by commas. While this approach is often more compact and works well for scripts that run successfully, it is not the recommended approach for debugging a new script. We will refer to this approach as the **classic style**.
A First mxModel
----------------
To introduce the model fitting process in OpenMx, we will present the basics of several OpenMx functions which can be used to write a simple model and view its contents.
Matrix Creation
^^^^^^^^^^^^^^^
Although ``mxModel()`` can have a range of arguments, we will start with the simplest one. Models are fitted to data, which must be in numeric format (for continuous data) or factor format (for ordinal data). Here we consider continuous data. Numbers (data/parameter estimates) are typically put into matrices, except for fixed constants. The function created to put numbers into matrices is (unsurprisingly) ``mxMatrix()``. Here we start with a basic matrix call and make use of only some of its possible arguments. All arguments are separated by commas. To make it clear and explicit, we will include the names of the arguments, although that is optional if the arguments are included in the default order.
.. cssclass:: input
..
.. code-block:: r
myAmatrix <- mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix")
The above call to the ``mxMatrix()`` function has five arguments. The ``type`` and ``name`` arguments are alphanumeric and therefore their values are in quotes. The ``nrow``, ``ncol`` and ``values`` arguments are numeric, and refer respectively to the number of rows of the matrix, the number of columns, and the value for the (in this case single) element of the matrix.
Matrix Contents
^^^^^^^^^^^^^^^
Once you have run/executed this statement in R, a new R object has been created, namely *myAmatrix*. When you view its contents, you'll notice it has a special class of object, made by OpenMx, called an MxMatrix object. This object has a number of attributes, all of which are listed when you call up the object.
.. cssclass:: output
..
.. code-block:: r
> myAmatrix
FullMatrix 'Amatrix'
$labels: No labels assigned.
$values
[,1]
[1,] 4
$free: No free parameters.
$lbound: No lower bounds assigned.
$ubound: No upper bounds assigned.
Most of these attributes start with the ``$`` symbol. The contents of a particular attribute can be displayed by typing the name of the R object followed by the ``$`` symbol and the name of the attribute, for example here we're displaying the values of the matrix *myAmatrix*
.. cssclass:: output
..
.. code-block:: r
> myAmatrix$values
[,1]
[1,] 4
Note that the attribute ``name`` is part of the header of the output but is not displayed as an ``$`` attribute. However, it does exist as one and can be seen by typing
.. cssclass:: output
..
.. code-block:: r
> myAmatrix$name
[1] "Amatrix"
Wait a minute, this is confusing. The matrix has a name, here "Amatrix", and the R object to represent the matrix has a name, here "myAmatrix". Remember that when you call up *myAmatrix* you get the contents of the entire MxMatrix R object. When you call up "Amatrix", you get
.. cssclass:: output
..
.. code-block:: r
Error: object 'Amatrix' not found
unless you had previously created another R object with that same name. Why do we need two names? The matrix name (here, "Amatrix") is used within OpenMx when performing an operation on this matrix using algebra (see below) or manipulating/using the matrix in any way within a model. When you want to manipulate/use/view the matrix outside of OpenMx, or build a model by building each of the elements as R objects in the 'piecewise' approach, you use the R object name (here, *myAmatrix*). Let's clarify this with an example.
Model Creation
^^^^^^^^^^^^^^
First, we will build a model *myModel1* with just one matrix. Obviously that is not very useful but it does serve to introduce the sequence of creating a model and running it.
.. cssclass:: input
..
.. code-block:: r
myModel1 <- mxModel( mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix") )
Model Execution
^^^^^^^^^^^^^^^^
The ``mxRun()`` function will run a model through the optimizer. The return value of this function is an identical MxModel object, with all the free parameters - in case there are any - in the elements of the matrices of the model assigned to their final values.
.. cssclass:: input
..
.. code-block:: r
myModel1Run <- mxRun(myModel1)
Model Contents
^^^^^^^^^^^^^^
Note that we have saved the result of applying ``mxRun()`` to *myModel1* into a new R object, called *myModel1Run* which is of the same class as *myModel1* but with values updated after fitting the model. Note that the MxModel is automatically given a name 'untitled2' as we did not specify a ``name`` argument for the ``mxModel()`` function.
.. cssclass:: output
..
.. code-block:: r
> myModel1Run
MxModel 'untitled2'
type : default
$matrices : 'Amatrix'
$algebras :
$constraints :
$intervals :
$latentVars : none
$manifestVars : none
$data : NULL
$submodels :
$expectation : NULL
$fitfunction : NULL
$compute : NULL
$independent : FALSE
$options :
$output : TRUE
As you can see from viewing the contents of the new object, the current model only uses two of the arguments, namely ``$matrices`` and ``$output``. Given the matrix was specified within the mxModel, we can explore its arguments by extending the level of detail as follows.
.. cssclass:: output
..
.. code-block:: r
> myModel1Run$matrices
$Amatrix
FullMatrix 'Amatrix'
$labels: No labels assigned.
$values
[,1]
[1,] 4
$free: No free parameters.
$lbound: No lower bounds assigned.
$ubound: No upper bounds assigned.
This lists all the matrices within the MxModel *myModel1Run*; in the current case there is only one. If we want to display just a specific argument of that matrix, we first add a dollar sign ``$``, followed by the name of the matrix, and another ``$`` prior to the required argument. Thus both arguments within an object and specific elements of the same argument type are preceded by the ``$`` symbol.
.. cssclass:: output
..
.. code-block:: r
> myModel1Run$matrices$Amatrix$values
[,1]
[1,] 4
It is also possible to omit the ``$matrices`` part and use the more succinct ``myModel1Run$Amatrix$values``.
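Both forms can be checked from the R prompt; they retrieve the same matrix values:

.. cssclass:: input
..

.. code-block:: r

    myModel1Run$matrices$Amatrix$values    # full form
    myModel1Run$Amatrix$values             # succinct form, same 1 x 1 matrix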
Similarly, we can inspect the output, which also includes the matrices in ``$matrices`` but only displays their values. Furthermore, the output lists algebras (``$algebras``), model expectations (``$expectations``), the status of the optimization (``$status``), the number of evaluations (``$evaluations``), the OpenMx version (``$mxVersion``), and a series of time measures, of which the CPU time might be the most useful (``$cpuTime``).
.. cssclass:: output
..
.. code-block:: r
> myModel1Run$output
$matrices
$matrices$untitled2.Amatrix
[,1]
[1,] 4
....
$mxVersion
[1] "999.0.0-3297"
$frontendTime
Time difference of 0.05656791 secs
$backendTime
Time difference of 0.003615141 secs
$independentTime
Time difference of 3.385544e-05 secs
$wallTime
Time difference of 0.0602169 secs
$timestamp
[1] "2014-04-10 09:53:37 EDT"
$cpuTime
Time difference of 0.0602169 secs
Alternative
^^^^^^^^^^^
Now let's go back to the model *myModel1* for a minute. We specified the matrix "Amatrix" within the model. Given we had previously saved the "Amatrix" in the *myAmatrix* object, we could have just used the R object as the argument of the model as follows. Here we're adding one additional element to the ``MxModel()`` object, namely the ``name`` argument
.. cssclass:: input
..
.. code-block:: r
myModel2 <- mxModel(myAmatrix, name="model2")
myModel2Run <- mxRun(myModel2)
You can verify for yourself that the contents of *myModel2* are identical to those of *myModel1*; the same applies to *myModel1Run* and *myModel2Run*, and as a result to the matrix contained in the model. The value of the matrix element is still 4, both in the original model and in the fitted model, as we did not manipulate the matrix in any way. We refer to this alternative style of coding as **piecewise**.
Algebra Creation
^^^^^^^^^^^^^^^^
Now, let's take it one step further and use OpenMx to evaluate some matrix algebra. It will come as a bit of a shock to learn that the OpenMx function to specify an algebra is called ``mxAlgebra()``. Its main argument is the ``expression``, in other words the matrix algebra formula you want to evaluate. In this case, we're simply adding 1 to the value of the matrix element, providing the name "Bmatrix" for the result, and saving the new object as *myBmatrix*. Note that the matrix we are manipulating is "Amatrix", the name given to the matrix within OpenMx.
.. cssclass:: input
..
.. code-block:: r
myBmatrix <- mxAlgebra(expression=Amatrix+1, name="Bmatrix")
Algebra Contents
^^^^^^^^^^^^^^^^
We can view the contents of this new object. Notice that the result has not yet been computed, as we have not run the model yet.
.. cssclass:: output
..
.. code-block:: r
> myBmatrix
mxAlgebra 'Bmatrix'
$formula: Amatrix + 1
$result: (not yet computed) <0 x 0 matrix>
dimnames: NULL
Built Model
^^^^^^^^^^^
Now we can combine the two statements - one defining the matrix, and the other defining the algebra - in one model, simply by separating them by a comma, and run it to see the result of the operation.
.. cssclass:: input
..
.. code-block:: r
myModel3 <- mxModel(myAmatrix, myBmatrix, name="model3")
myModel3Run <- mxRun(myModel3)
First of all, let us view *myModel3* and more specifically the values of the matrices within that model. Note that the ``$matrices`` lists one matrix, "Amatrix", and that the ``$algebras`` lists another, "Bmatrix". To view values of matrices created with the ``mxMatrix()`` function, the argument is ``$values``; for matrices created with the ``mxAlgebra()`` function, the argument is ``$result``. Note that when viewing a specific matrix, you can omit the ``$matrices`` or the ``$algebras`` arguments.
.. cssclass:: output
..
.. code-block:: r
> myModel3
MxModel 'model3'
type : default
$matrices : 'Amatrix'
$algebras : 'Bmatrix'
$constraints :
$intervals :
$latentVars : none
$manifestVars : none
$data : NULL
$submodels :
$expectation : NULL
$fitfunction : NULL
$compute : NULL
$independent : FALSE
$options :
$output : FALSE
.. cssclass:: output
..
.. code-block:: r
> myModel3$Amatrix$values
[,1]
[1,] 4
.. cssclass:: output
..
.. code-block:: r
> myModel3$Bmatrix$result
<0 x 0 matrix>
Fitted Model
^^^^^^^^^^^^
Given we're looking at the model *myModel3* before it is run, results of algebra have not been computed yet. Let us see how things change after running the model and viewing *myModel3Run*.
.. cssclass:: output
..
.. code-block:: r
> myModel3Run
MxModel 'model3'
type : default
$matrices : 'Amatrix'
$algebras : 'Bmatrix'
$constraints :
$intervals :
$latentVars : none
$manifestVars : none
$data : NULL
$submodels :
$expectation : NULL
$fitfunction : NULL
$compute : NULL
$independent : FALSE
$options :
$output : TRUE
.. cssclass:: output
..
.. code-block:: r
> myModel3Run$Amatrix$values
[,1]
[1,] 4
.. cssclass:: output
..
.. code-block:: r
> myModel3Run$Bmatrix$result
[,1]
[1,] 5
You will notice that the structure of the MxModel objects is identical, and that the value of the "Amatrix" has not changed, as it was a fixed element. However, the value of the "Bmatrix" is now the result of the operation on the "Amatrix". Note that we're here looking at the "Bmatrix" within the MxModel object *myModel3Run*. Please verify that the original *myAmatrix* and *myBmatrix* objects remain unchanged. The ``mxModel()`` function call has made its own internal copies of these objects, and it is only these internal copies that are being manipulated. In computer science terms, this is referred to as *pass by value*.
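The last point is easy to check from the R prompt: the saved objects that were passed to ``mxModel()`` are unaffected by running the model.

.. cssclass:: input
..

.. code-block:: r

    myAmatrix$values    # still 4
    myBmatrix$result    # still a (not yet computed) empty matrix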
Pass By Value
^^^^^^^^^^^^^
Let us insert a mini-lecture on the R programming language. Our experience has found that this exercise will greatly increase your understanding of the OpenMx language.
As this is such a crucial concept in R (unlike many other programming languages), let us look at it in a simple R example. We will start by assigning the value 4 to the object *avariable*, and then display it. If we then add 1 to this object, and display it again, notice that the value of *avariable* has not changed.
.. cssclass:: output
..
.. code-block:: r
> avariable <- 4
> avariable
[1] 4
> avariable +1
[1] 5
> avariable
[1] 4
Now we introduce a function, as OpenMx is a collection of purposely built functions. The function takes a single argument (the object *number*), adds one to the argument *number* and assigns the result to *number*, and then returns the incremented number back to the user. This function is given the name ``addone()``. We then apply the function to the object *avariable* and display *avariable* again. Two objects are thus defined: the value assigned to *addone* is a function, while the value assigned to *avariable* is the number 4.
.. cssclass:: output
..
.. code-block:: r
> addone <- function(number) {
number <- number + 1
return(number)
}
> addone(avariable)
[1] 5
> avariable
[1] 4
Note that it may be prudent to use the ``print()`` function to display the results back to the user. When R is run from a script rather than interactively, results will not be displayed unless the function ``print()`` is used as shown below.
.. cssclass:: output
..
.. code-block:: r
> print(addone(avariable))
[1] 5
> print(avariable)
[1] 4
What is the result of executing this code? Try it. The correct results are 5 and 4. But why is the object *avariable* still 4, even after the ``addone()`` function was called? The answer to this question is that R uses pass-by-value function call semantics.
In order to understand pass-by-value semantics, we must understand the difference between *objects* and *values*. The *objects* declared in this example are *addone*, *avariable*, and *number*. The *values* refer to the things that are stored by the *objects*. In programming languages that use pass-by-value semantics, at the beginning of a function call it is the *values* of the argument list that are passed to the function.
The object *avariable* cannot be modified by the function ``addone()``. If I wanted to update the value stored in the object, I would have needed to replace the expression as follows:
.. cssclass:: output
..
.. code-block:: r
> print(avariable <- addone(avariable))
[1] 5
> print(avariable)
[1] 5
Try it. The updated example prints out 5 and 5. The lesson from this exercise is that the only way to update an object in a function call is to capture the result of the function call [#f1]_. This lesson is sooo important that we'll repeat it:
*the only way to update an object in a function call is to capture the result of the function call.*
R has several built-in types of values that you are familiar with: numerics, integers, booleans, characters, lists, vectors, and matrices. In addition, R supports S4 object values to facilitate object-oriented programming. Most of the functions in the OpenMx library return S4 object values. You must always remember that R does not discriminate between built-in types and S4 object types in its call semantics. Both built-in types and S4 object types are passed by value in R (unlike many other languages).
.. rubric:: Footnotes
.. [#f1] There are a few exceptions to this rule, but you can be assured such trickery is not used in the OpenMx library.
Styles
------
In the beginning of the introduction, we discussed three styles of writing OpenMx code: the piecewise, stepwise and classic styles. Let's take the most recent model and show how it can be written in these three styles.
Piecewise Style
^^^^^^^^^^^^^^^
The style we used in *myModel3* is the piecewise style. We repeat the different statements here for clarity
.. cssclass:: input
..
.. code-block:: r
myAmatrix <- mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix")
myBmatrix <- mxAlgebra(expression=Amatrix+1, name="Bmatrix")
myModel3 <- mxModel(myAmatrix, myBmatrix, name="model3")
myModel3Run <- mxRun(myModel3)
Each argument of the ``mxModel()`` statement is defined separately first as independent R objects which are then combined in one model statement.
Stepwise Style
^^^^^^^^^^^^^^^
For the stepwise style, we start with an ``mxModel()`` with just one argument, as we originally did with the "Amatrix" in *myModel1*, as repeated below. We could run this model to make sure it's syntactically correct.
.. cssclass:: input
..
.. code-block:: r
myModel1 <- mxModel( mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix") )
myModel1Run <- mxRun(myModel1)
Then we would build a new model starting from the first model. To do this, we invoke a special feature of the first argument of an ``mxModel()``. If it is the name of a saved MxModel object, for example *myModel1*, the arguments of that model would be automatically included in the new model. These arguments can be changed (or not) and new arguments can be added. Thus, in our example, where we want to keep the "Amatrix" and add the "Bmatrix", our second model would look like this.
.. cssclass:: input
..
.. code-block:: r
myModel4 <- mxModel(myModel1,
mxAlgebra(expression=Amatrix+1, name="Bmatrix"),
name="model4"
)
myModel4Run <- mxRun(myModel4)
Note that we call it "model4" by adding a ``name`` argument to the ``mxModel()``, so as not to overwrite our previous "model1".
Classic Style
^^^^^^^^^^^^^
The final style may be reminiscent of classic Mx. Here we build all the arguments explicitly within one ``mxModel()``. As a result, only one R object is created before the model is run with ``mxRun()``. This style is more compact than the others but harder to debug.
.. cssclass:: input
..
.. code-block:: r
myModel5 <- mxModel(
mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix"),
mxAlgebra(expression=Amatrix+1, name="Bmatrix"),
name="model5"
)
myModel5Run <- mxRun(myModel5)
You may have seen an alternative version with the first argument in quotes. In that case, that argument refers to the name of the model and not to a previously defined model. Thus, the following specification is identical to the previous one. Note also that it is not necessary to add the names of the arguments, as long as the arguments are listed in their default order, which can easily be verified with the standard way to get help about a function (in this case ``?mxMatrix``).
.. cssclass:: input
..
.. code-block:: r
myModel5 <- mxModel("model5",
mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix"),
mxAlgebra(expression=Amatrix+1, name="Bmatrix")
)
myModel5run <- mxRun(myModel5)
Note that all arguments are separated by commas. In this case, we've also placed the arguments on separate lines, but that is only for clarity. No comma is needed after the last argument! If you accidentally put one in, you get the generic error message *'argument is missing, with no default'*, meaning that R expected another argument and doesn't know what it should be. The closing parenthesis on the following line ends the ``mxModel()`` statement.
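As a sketch, the following deliberately broken call (shown commented out so the script still runs) reproduces that message; the stray comma after the ``mxMatrix()`` argument makes R expect one more argument:

.. cssclass:: input
..

.. code-block:: r

    # mxModel("model5",
    #     mxMatrix(type="Full", nrow=1, ncol=1, values=4, name="Amatrix"),   # <- stray comma
    # )
    # R stops with: argument is missing, with no default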
Data functions
--------------
Most models will be fitted to data, not just a single number. We will briefly introduce how to read data that are pre-packaged with the OpenMx library as well as reading in your own data. All standard R utilities can be used here. The critical part is to run an OpenMx model on these data, thus another OpenMx function ``mxData()`` is needed.
Reading Data
^^^^^^^^^^^^
The ``data`` function can be used to read sample data that has been pre-packaged into the OpenMx library. One such sample data set is called "demoOneFactor".
.. cssclass:: input
..
.. code-block:: r
data(demoOneFactor)
In order to read your own data, you will most likely use the ``read.table``, ``read.csv`` or ``read.delim`` functions, or other specialized functions available from CRAN to read from third-party sources. We recommend installing the package **psych**, which provides succinct descriptive statistics with the ``describe()`` function.
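For example, your own data stored in a comma-separated file could be read in as follows (the file name here is hypothetical):

.. cssclass:: input
..

.. code-block:: r

    # 'myData.csv' is a hypothetical file with variable names in its first row
    myData <- read.csv("myData.csv", header=TRUE)
    head(myData)    # inspect the first few rows

The ``describe()`` call shown next summarizes the packaged example data.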
.. cssclass:: input
..
.. code-block:: r
require(psych)
describe(demoOneFactor)
The output of this function is shown below.
.. cssclass:: output
..
.. code-block:: r
var n mean sd median trimmed mad min max range skew kurtosis se
x1 1 500 -0.04 0.45 -0.03 -0.04 0.46 -1.54 1.22 2.77 -0.05 0.01 0.02
x2 2 500 -0.05 0.54 -0.03 -0.04 0.55 -2.17 1.72 3.89 -0.14 0.05 0.02
x3 3 500 -0.06 0.61 -0.03 -0.05 0.58 -2.29 1.83 4.12 -0.17 0.23 0.03
x4 4 500 -0.06 0.73 -0.08 -0.05 0.75 -2.48 2.45 4.93 -0.08 0.05 0.03
x5 5 500 -0.08 0.82 -0.08 -0.07 0.89 -2.62 2.18 4.80 -0.10 -0.23 0.04
Now that the data are accessible in R, we need to make them readable into our OpenMx model.
Data Source
^^^^^^^^^^^
A ``mxData()`` function is used to construct a data source for the model. OpenMx can handle fitting models to summary statistics and to raw data.
The most commonly used **summary statistics** are covariance matrices, means and correlation matrices; information on the variances is lost/unavailable with correlation matrices, so these are usually not recommended.
These days, the standard approach for model fitting applications is to use **raw data**, which is simply a data table or rectangular file with columns representing variables and rows representing subjects. The primary benefit of this approach is that it handles datasets with missing values very conveniently and appropriately.
Covariance Matrix
^^^^^^^^^^^^^^^^^
We will start with an example using summary data, specifying a covariance matrix generated from the data frame with the R function ``cov()``. In addition to reading in the actual covariance matrix as the first (``observed``) argument, we specify the ``type`` (one of "cov", "cor", "sscp" and "raw") and the number of observations (``numObs``).
.. cssclass:: input
..
.. code-block:: r
exampleDataCov <- mxData(observed=cov(demoOneFactor), type="cov", numObs=500)
We can view what *exampleDataCov* looks like for OpenMx.
.. cssclass:: output
..
.. code-block:: r
> exampleDataCov
MxData 'data'
type : 'cov'
numObs : '500'
Data Frame or Matrix :
x1 x2 x3 x4 x5
x1 0.1985443 0.1999953 0.2311884 0.2783865 0.3155943
x2 0.1999953 0.2916950 0.2924566 0.3515298 0.4019234
x3 0.2311884 0.2924566 0.3740354 0.4061291 0.4573587
x4 0.2783865 0.3515298 0.4061291 0.5332788 0.5610769
x5 0.3155943 0.4019234 0.4573587 0.5610769 0.6703023
Means : NA
Acov : NA
Thresholds : NA
Some models may include predictions for the mean(s). We could add an additional ``means`` argument to the ``mxData`` statement to read in the means as well.
.. cssclass:: input
..
.. code-block:: r
exampleDataCovMeans <- mxData(observed=cov(demoOneFactor),
means=colMeans(demoOneFactor), type="cov", numObs=500)
The output for *exampleDataCovMeans* would have the following extra lines.
.. cssclass:: output
..
.. code-block:: r
....
Means :
x1 x2 x3 x4 x5
[1,] -0.04007841 -0.04583873 -0.05588236 -0.05581416 -0.07555022
Raw Data
^^^^^^^^
Note that for most real life examples, raw data are the preferred option, except in cases where complete data are available on all variables included in the analyses. In that situation, using summary statistics is faster. To change the current example to use raw data, we would read in the data explicitly and specify the ``type`` as "raw". The ``numObs`` is no longer required as the sample size is counted automatically.
.. cssclass:: input
..
.. code-block:: r
exampleDataRaw <- mxData(observed=demoOneFactor, type="raw")
Printing this MxData object would result in listing the whole data set. We show just the first few lines here:
.. cssclass:: output
..
.. code-block:: r
> exampleDataRaw
MxData 'data'
type : 'raw'
numObs : '500'
Data Frame or Matrix :
x1 x2 x3 x4 x5
1 -1.086832e-01 -0.4669377298 -0.177839881 -0.080931127 -0.070650263
2 -1.464765e-01 -0.2782619339 -0.273882553 -0.154120074 0.092717293
3 -6.399140e-01 -0.9295294042 -1.407963429 -1.588974090 -1.993461644
4 2.150340e-02 -0.2552252972 0.097330513 -0.117444884 -0.380906486
5 ....
The data to be used for our example are now ready in either **covariance matrix** or **raw data** format.
Model functions
---------------
We introduce here several new features by building a basic factor model to real data. A useful tool to represent such a model is drawing a path diagram which is mathematically equivalent to equations describing the model. If you're not familiar with the method of path analysis, we suggest you read one of the key reference books [LI1986]_.
.. [LI1986] Li, C.C. (1986). Path Analysis - A Primer. The Boxwood Press, Pacific Grove, CA.
Briefly, squares are used for observed variables; latent variables are drawn in circles. One-headed arrows are drawn to represent causal relationships. Correlations between variables are represented with two-headed arrows. Double-headed paths are also used for variances of variables. Below is a figure of a one factor model with five indicators (x1..x5). We have added a value of 1.0 to the variance of the latent variable **G** as a fixed value. All the other paths in the models are considered free parameters and are to be estimated.
.. image:: graph/OneFactorModel.png
:height: 2in
Variables
^^^^^^^^^
To specify this path diagram in OpenMx, we need to indicate which variables are observed or manifest and which are latent. The ``mxModel()`` arguments ``manifestVars`` and ``latentVars`` both take a vector of variable names. In this case the manifest variables are "x1", "x2", "x3", "x4", "x5" and the latent variable is "G". The R function ``c()`` is used to build the vectors.
.. cssclass:: input
..
.. code-block:: r
manifests <- c("x1","x2","x3","x4","x5")
latents <- c("G")
manifestVars = manifests
latentVars = latents
This could be written more succinctly as follows.
.. cssclass:: input
..
.. code-block:: r
manifestVars = names(demoOneFactor)
latentVars = c("G")
because the R ``names()`` function call returns the vector of names that we want (the observed variables in the data frame "demoOneFactor").
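As a small base R illustration (using a made-up two-column data frame rather than *demoOneFactor*), ``names()`` returns the vector of column names:

```r
# names() returns the column names of a data frame; the data frame here is
# invented for illustration -- in the text it would be demoOneFactor.
df <- data.frame(x1 = c(1, 2, 3), x2 = c(4, 5, 6))
names(df)    # "x1" "x2"
```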
Path Creation
^^^^^^^^^^^^^
Paths are created using the ``mxPath()`` function. Multiple paths can be created with a single invocation of the ``mxPath()`` function.
- The ``from`` argument specifies the path sources, and the ``to`` argument specifies the path sinks. If the ``to`` argument is missing, then it is assumed to be identical to the ``from`` argument.
- The ``connect`` argument specifies the type of the source to sink connection, which can be one of five types. For our example, we use the default "single" type in which the :math:`i^{th}` element of the ``from`` argument is matched with the :math:`i^{th}` element of the ``to`` argument, in order to create a path.
- The ``arrows`` argument specifies whether the path is unidirectional (single-headed arrow, "1") or bidirectional (double-headed arrow, "2").
- The next three arguments are vectors: ``free`` is a boolean vector that specifies whether a path is free or fixed; ``values`` is a numeric vector that specifies the starting value of the path; ``labels`` is a character vector that assigns a label to each free or fixed parameter. Paths with the same label are constrained to be equal, and OpenMx insists that paths equated in this way have the same fixed or free status; if this is not the case it will report an error.
To specify the path model above, we need to specify three different sets of paths. The first are the single-headed arrows from the latent to the manifest variables, which we will put into the R object *causalPaths* as they represent causal paths. The second set are the residuals on the manifest variables, referred to as *residualVars*. The third ``mxPath()`` statement fixes the variance of the latent variable to one, and is called *factorVars*.
.. cssclass:: input
..
.. code-block:: r
causalPaths <- mxPath(from=latents, to=manifests)
residualVars <- mxPath(from=manifests, arrows=2)
factorVars <- mxPath(from=latents, arrows=2, free=FALSE, values=1.0)
Note that several arguments are optional. For example, we omitted the ``free`` argument for *causalPaths* and *residualVars* because the default is ``TRUE``, which applies in our example. We also omitted the ``connect`` argument for all three paths. The default "single" type automatically generates paths from every variable back to itself for all the variances, both *residualVars* and *factorVars*, as neither of those statements includes the ``to`` argument. For *causalPaths*, the default ``connect`` type generates separate paths from the latent to each of the manifest variables. To keep things simple, we did not include ``values`` or ``labels`` arguments, as they are not strictly needed for this example, but this may not be true in general. Once the variables and paths have been specified, the predicted covariance matrix will be generated from the implied path diagram in the backend of OpenMx using the RAM notation (see below).
Equations
^^^^^^^^^
For those more in tune with equations and matrix algebra, we can represent the model using matrix algebra rather than path specifications. For reasons that may become clear later, the expression for the expected covariances between the manifest variables is given by
.. math::
:nowrap:
\begin{eqnarray*}
\mbox{Cov}(x_{ij}) = \mbox{facLoadings} * \mbox{facVariances} * \mbox{facLoadings}^\prime + \mbox{resVariances}
\end{eqnarray*}
where *facLoadings* is a column vector of factor loadings, *facVariances* is a symmetric matrix of factor variances and *resVariances* is a diagonal matrix of residual variances. You might have noticed the correspondence between *causalPaths* and *facLoadings*, between *residualVars* and *resVariances*, and between *factorVars* and *facVariances*. To translate this model into OpenMx using the matrix specification, we will define the three matrices first using the ``mxMatrix()`` function, and then specify the algebra using the ``mxAlgebra()`` function.
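To make the correspondence concrete, the algebra can be sketched in plain base R (not OpenMx), using arbitrary illustrative parameter values; the loadings, factor variance, and residual variances below are made up for the example:

```r
facLoadings  <- matrix(c(0.4, 0.5, 0.6, 0.7, 0.8), nrow = 5, ncol = 1)  # column vector of loadings
facVariances <- matrix(1, nrow = 1, ncol = 1)                           # factor variance fixed at 1
resVariances <- diag(0.04, nrow = 5)                                    # diagonal residual variances

# the model-implied covariance matrix of the five manifest variables
expCov <- facLoadings %*% facVariances %*% t(facLoadings) + resVariances
expCov[1, 2]    # off-diagonal element: 0.4 * 0.5 = 0.2
```

The resulting ``expCov`` is a symmetric 5 x 5 matrix, as required of a covariance matrix.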
Matrix Creation
^^^^^^^^^^^^^^^
The next three lines create three MxMatrix objects, using the ``mxMatrix()`` function. The first argument declares the ``type`` of the matrix, the second argument declares the number of rows in the matrix (``nrow``), and the third argument declares the number of columns (``ncol``). The ``free`` argument specifies whether an element is a free or fixed parameter. The ``values`` argument specifies the starting values for the elements in the matrix, and the ``name`` argument specifies the name of the matrix.
.. cssclass:: input
..
.. code-block:: r
mxFacLoadings <- mxMatrix(type="Full", nrow=5, ncol=1,
free=TRUE, values=0.2, name="facLoadings")
mxFacVariances <- mxMatrix(type="Symm", nrow=1, ncol=1,
free=FALSE, values=1, name="facVariances")
mxResVariances <- mxMatrix(type="Diag", nrow=5, ncol=5,
free=TRUE, values=1, name="resVariances")
Each MxMatrix object is a container that stores five matrices of equal dimensions: ``free``, ``values``, ``labels``, ``lbound``, and ``ubound``. ``Free`` stores a boolean matrix that determines whether an element is free or fixed. ``Values`` stores the current values of each element in the matrix. ``Labels`` stores a character label for each element in the matrix. And ``lbound`` and ``ubound`` store the lower and upper bounds, respectively, for each element that is a free parameter. If an element has no label, lower bound, or upper bound, then an NA value is stored for that element of the respective matrix.
Algebra Creation
^^^^^^^^^^^^^^^^
The ``mxAlgebra()`` function is used to construct an expression for any algebra, in this case the expected covariance algebra. The first argument (``expression``) is the algebra expression that will be evaluated by the numerical optimizer. The matrix operations and functions that are permitted in an MxAlgebra expression are listed in the help for the ``mxAlgebra()`` function (obtained with ``?mxAlgebra``). The algebra expression refers to entities according to the ``name`` argument of the MxMatrix objects.
.. cssclass:: input
..
.. code-block:: r
mxExpCov <- mxAlgebra(expression=facLoadings %*% facVariances %*% t(facLoadings)
+ resVariances, name="expCov")
You can see a direct correspondence between the formula above and the expression used to create the expected covariance matrix *mxExpCov*.
Expectation - Fit Function
--------------------------
To fit a model to data, the differences between the observed covariance matrix (the data, in this case the summary statistics) and the model-implied expected covariance matrix are minimized using a fit function. Fit functions are functions for which free parameter values are chosen such that the value of the fit function is minimized. Now that we have specified data objects and path or matrix/algebra objects for the predicted covariances of our model, we need to link the two and execute them, which is typically done with ``mxExpectation()`` and ``mxFitFunction()`` statements. Note that these two statements replace the ``mxObjective()`` functions used in earlier versions of OpenMx.
RAM Expectation
^^^^^^^^^^^^^^^
When using a path specification of the model, the expectation is always RAM, which is indicated with the ``type`` argument. We don't have to specify the expectation and fit function explicitly with ``mxExpectation()`` and ``mxFitFunction()`` statements; instead we simply add the following argument to the model.
.. cssclass:: input
..
.. code-block:: r
type="RAM"
To gain a better understanding of the RAM principles, we recommend reading [RAM1990]_.
Normal Expectation
^^^^^^^^^^^^^^^^^^
When using a matrix specification, ``mxExpectationNormal()`` defines how model expectations are calculated using the matrices/algebra implied by the ``covariance`` argument and optionally the ``means``. For this example, we are specifying an expected covariance algebra (``covariance``) omitting an expected means algebra. The expected covariance algebra is referenced according to its name, i.e. the ``name`` argument of the MxAlgebra created above. We also need to assign ``dimnames`` for the rows and columns of this covariance matrix, such that a correspondence can be determined between the expected and the observed covariance matrices. Subsequently we are specifying a maximum likelihood fit function with the ``mxFitFunctionML()`` statement.
.. cssclass:: input
..
.. code-block:: r
expectCov <- mxExpectationNormal(covariance="expCov",
dimnames=names(demoOneFactor))
funML <- mxFitFunctionML()
The above expectation and fit function can be used when fitting to covariance matrices; a model for the predicted means is optional. However, when fitting to raw data, an expectation has to be used that specifies a model for both the means and the covariance matrices, paired with the appropriate fit function. In the case of raw data, the ``mxFitFunctionML()`` function uses full-information maximum likelihood to provide maximum likelihood estimates of free parameters in the algebra defined by the ``covariance`` and ``means`` arguments. The ``covariance`` argument takes an ``MxMatrix`` or ``MxAlgebra`` object, which defines the expected covariance of an associated ``MxData`` object. Similarly, the ``means`` argument takes an ``MxMatrix`` or ``MxAlgebra`` object to define the expected means of an associated ``MxData`` object. The ``dimnames`` argument takes an optional character vector. This vector is assigned to be the ``dimnames`` of the means vector, and the row and column ``dimnames`` of the covariance matrix.
.. cssclass:: input
..
.. code-block:: r
expectCovMeans <- mxExpectationNormal(covariance="expCov", means="expMeans",
dimnames=names(demoOneFactor))
funML <- mxFitFunctionML()
Raw data can come in two forms: continuous or categorical. While **continuous data** can take an unlimited number of possible values, they are typically assumed to follow a normal distribution.
There are basically two flavors of **categorical data**. If only two response categories exist, for example Yes and No, or affected and unaffected, we are dealing with binary data. Variables with three or more ordered categories are considered ordinal.
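The distinction can be illustrated with base R factors (OpenMx itself prepares ordinal variables with ``mxFactor()``; the variables below are invented for the example):

```r
# A binary variable: exactly two response categories.
binaryVar  <- factor(c("No", "Yes", "Yes", "No"), levels = c("No", "Yes"))
# An ordinal variable: three or more ordered categories.
ordinalVar <- ordered(c(1, 3, 2, 2), levels = 1:3)

nlevels(binaryVar)      # 2  -> binary
nlevels(ordinalVar)     # 3  -> ordinal
is.ordered(ordinalVar)  # TRUE
```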
Continuous Data
^^^^^^^^^^^^^^^
When the data to be analyzed are continuous, and models are fitted to raw data, the ``mxExpectationNormal()`` function takes two arguments, ``covariance`` and ``means``, as well as ``dimnames`` to match them up with the observed data.
.. cssclass:: input
..
.. code-block:: r
expectRaw <- mxExpectationNormal(covariance="expCov", means="expMeans",
dimnames=manifests)
funML <- mxFitFunctionML()
If the variables to be analyzed have at least 15 possible values, we recommend treating them as continuous data. As will be discussed later in the documentation, the power of the study is typically higher when dealing with continuous rather than categorical data.
Categorical Data
^^^^^^^^^^^^^^^^
For categorical data - be they binary or ordinal - an additional argument is needed for the ``mxExpectationNormal()`` function, besides the ``covariance`` and ``means`` arguments, namely the ``thresholds`` argument.
.. cssclass:: input
..
.. code-block:: r
expFunOrd <- mxExpectationNormal(covariance="expCov", means="expMeans",
thresholds="expThres", dimnames=manifests)
funML <- mxFitFunctionML()
For now, we will stick with the factor model example and fit it to covariance matrices, calculated from the raw continuous data.
Methods
-------
We have introduced two ways to create a model. One is the **path method**, in which observed and latent variables are specified, as well as the causal and correlational paths that connect the variables to form the model. This method may be more intuitive, as the model maps directly onto the diagram. This of course assumes that the path diagram is mathematically correct. Once the model is 'drawn' or specified correctly in this way, OpenMx translates the paths into RAM notation for the predicted covariance matrices.
Alternatively, we can specify the model using the **matrix method** by creating the necessary matrices and combining them using algebra to generate the expected covariance matrices (and optionally the mean/threshold vectors). Although less intuitive, this method provides greater flexibility for developing more complex models. Let us look at examples of both.
Path Method
^^^^^^^^^^^
We have previously generated all the pieces that go into the model, using the path method specification. As we have discussed before, the ``mxModel()`` function is somewhat of a swiss-army knife. The first argument to the ``mxModel()`` function can be a ``name`` argument (a character string in quotes), in which case a new model is generated, or it can be a previously defined model object. In the latter case, the new model 'inherits' all the characteristics (arguments) of the old model, which can be changed with additional arguments. An ``mxModel()`` can contain ``mxData()``, ``mxPath()``, ``mxExpectation()``, ``mxFitFunction()`` and other ``mxModel()`` statements as arguments.
The following ``mxModel()`` function is used to create the 'one-factor' model, shown on the path diagram above. The first argument is a ``name``, thus we are specifying a new model, called "One Factor". By specifying the ``type`` argument to equal "RAM", we create a path style model. A RAM style model must include a vector of manifest variables (``manifestVars=``) and a vector of latent variables (``latentVars=``). We then include the arguments for reading the example data *exampleDataCov*, and those that specify the paths of the path model *causalPaths*, *residualVars*, and *factorVars* which we created previously.
.. cssclass:: input
..
.. code-block:: r
factorModel1 <- mxModel(name="One Factor",
type="RAM",
manifestVars=manifests,
latentVars=latents,
exampleDataCov, causalPaths, residualVars, factorVars)
When we display the contents of this model, note that we now have manifest and latent variables specified. By using ``type="RAM"`` we automatically use the expectation ``mxExpectationRAM``, which translates the path model into the RAM specification [RAM1990]_ as reflected in the matrices **A**, **S** and **F**, and the fit function ``mxFitFunctionML()``. Briefly, the **A** matrix contains the asymmetric paths, which are the unidirectional paths in the *causalPaths* object, and represent the factor loadings from the latent variable onto the manifest variables. The **S** matrix contains the symmetric paths, which include both the bidirectional paths in *residualVars* and those in *factorVars*. The **F** matrix is the filter matrix.
The formula :math:`F(I-A)^{-1}S((I-A)^{-1})^\prime F^\prime`, where :math:`I` is an identity matrix, :math:`^{-1}` denotes the inverse and :math:`^\prime` the transpose, generates the expected covariance matrix.
.. cssclass:: output
..
.. code-block:: r
> factorModel1
MxModel 'One Factor'
type : RAM
$matrices : 'A', 'S', and 'F'
$algebras :
$constraints :
$intervals :
$latentVars : 'G'
$manifestVars : 'x1', 'x2', 'x3', 'x4', and 'x5'
$data : 5 x 5
$data means : NA
$data type: 'cov'
$submodels :
$expectation : MxExpectationRAM
$fitfunction : MxFitFunctionML
$compute : NULL
$independent : FALSE
$options :
$output : FALSE
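The RAM computation can be sketched in plain base R (not OpenMx), using made-up loading and residual values; ``Fmat`` stands in for the **F** matrix to avoid masking R's built-in ``F``:

```r
loadings <- c(0.4, 0.5, 0.6, 0.7, 0.8)  # illustrative factor loadings
A <- matrix(0, nrow = 6, ncol = 6)      # asymmetric paths: G (column 6) -> x1..x5
A[1:5, 6] <- loadings
S <- diag(c(rep(0.04, 5), 1))           # symmetric paths: residuals and factor variance
Fmat <- cbind(diag(5), rep(0, 5))       # filter matrix: keeps the manifest variables only

I <- diag(6)
invIA <- solve(I - A)                   # (I - A)^-1
expCov <- Fmat %*% invIA %*% S %*% t(invIA) %*% t(Fmat)
expCov[1, 2]                            # 0.4 * 0.5 = 0.2
```

With this structure the result equals the direct algebra ``loadings %o% loadings + diag(0.04, 5)``, which is the expected covariance of the five manifest variables.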
You can verify that after running the model, the new R object *factorFit1* has similar arguments, except that they now contain the estimates from the model rather than the starting values. For example, we can look at the values in the **A** matrix in the built model *factorModel1*, and in the fitted model *factorFit1*. We will get back to this later. Note also that from here on, we use the convention that the R object containing the built model ends with *Model*, while the R object containing the fitted model ends with *Fit*.
.. cssclass:: input
..
.. code-block:: r
factorFit1 <- mxRun(factorModel1)
We can inspect the values of the **A** matrix in *factorModel1* and *factorFit1* respectively as follows.
.. cssclass:: output
..
.. code-block:: r
> factorModel1$A$values
x1 x2 x3 x4 x5 G
x1 0 0 0 0 0 0
x2 0 0 0 0 0 0
x3 0 0 0 0 0 0
x4 0 0 0 0 0 0
x5 0 0 0 0 0 0
G 0 0 0 0 0 0
> factorFit1$A$values
x1 x2 x3 x4 x5 G
x1 0 0 0 0 0 0.3971521
x2 0 0 0 0 0 0.5036611
x3 0 0 0 0 0 0.5772414
x4 0 0 0 0 0 0.7027737
x5 0 0 0 0 0 0.7962500
G 0 0 0 0 0 0.0000000
We can also specify all the arguments directly within the ``mxModel()`` function, using the **classical** style, as follows. The script reads data from disk, creates the one factor model, fits the model to the observed covariances, and prints a summary of the results.
.. cssclass:: input
..
.. code-block:: r
data(demoOneFactor)
manifests <- names(demoOneFactor)
latents <- c("G")
factorModel1 <- mxModel(name="One Factor",
type="RAM",
manifestVars=manifests,
latentVars=latents,
mxPath(from=latents, to=manifests),
mxPath(from=manifests, arrows=2),
mxPath(from=latents, arrows=2, free=FALSE, values=1.0),
mxData(observed=cov(demoOneFactor), type="cov", numObs=500)
)
factorFit1 <- mxRun(factorModel1)
summary(factorFit1)
For more details about the summary and alternative options to display model results, see below.
Matrix Method
^^^^^^^^^^^^^
We will now re-create the model from the previous section, but this time we will use a matrix specification technique. The script reads data from disk, creates the one factor model, fits the model to the observed covariances, and prints a summary of the results.
We have already created separate objects for each of the parts of the model, which can then be combined in an ``mxModel()`` statement at the end. To repeat ourselves, the name of an OpenMx entity bears no relation to the R object that is used to identify the entity. In our example, the R object *mxFacLoadings* stores an MxMatrix object with the name "facLoadings". Note, however, that it is not necessary to use different names for the ``name`` within the ``mxMatrix()`` object and the name of the R object generated with the statement. For more complicated models, using the same name for both of these rather different entities may make it easier to keep track of the various pieces. For now, we will use different names to highlight which one should be used in which context.
.. cssclass:: input
..
.. code-block:: r
data(demoOneFactor)
factorModel2 <- mxModel(name="One Factor",
exampleDataCov, mxFacLoadings, mxFacVariances, mxResVariances,
mxExpCov, expectCov, funML)
factorFit2 <- mxRun(factorModel2)
summary(factorFit2)
Alternatively, we can write the script in the **classical** style and specify all the matrices, algebras, objective function and data as arguments to the ``mxModel()``.
.. cssclass:: input
..
.. code-block:: r
data(demoOneFactor)
factorModel2 <- mxModel(name="One Factor",
mxMatrix(type="Full", nrow=5, ncol=1, free=TRUE, values=0.2, name="facLoadings"),
mxMatrix(type="Symm", nrow=1, ncol=1, free=FALSE, values=1, name="facVariances"),
mxMatrix(type="Diag", nrow=5, ncol=5, free=TRUE, values=1, name="resVariances"),
mxAlgebra(expression=facLoadings %*% facVariances %*% t(facLoadings)
+ resVariances, name="expCov"),
mxExpectationNormal(covariance="expCov", dimnames=names(demoOneFactor)),
mxFitFunctionML(),
mxData(observed=cov(demoOneFactor), type="cov", numObs=500)
)
factorFit2 <- mxRun(factorModel2)
summary(factorFit2)
Now that we've specified the model with both methods, we can run both examples and verify that they indeed provide the same answer by inspecting the two fitted R objects *factorFit1* and *factorFit2*.
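With OpenMx loaded, one way to check this is to compare ``omxGetParameters(factorFit1)`` and ``omxGetParameters(factorFit2)``; up to optimizer tolerance, the estimates should agree. The comparison itself is plain R, sketched here with illustrative loading estimates (taken from the **A** matrix output shown earlier) standing in for the two fitted models:

```r
# Hypothetical estimate vectors standing in for the path-method and
# matrix-method fits; in practice these would come from omxGetParameters().
est1 <- c(0.3971521, 0.5036611, 0.5772414, 0.7027737, 0.7962500)  # path method
est2 <- c(0.3971521, 0.5036611, 0.5772414, 0.7027737, 0.7962500)  # matrix method

isTRUE(all.equal(est1, est2, tolerance = 1e-4))   # TRUE
```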
Output
------
We can generate output in a variety of ways. As you might expect, the **summary** function summarizes the model, including data, model parameters, goodness-of-fit and run statistics.
Note that the fitted model is an R object that can be further manipulated, for example, to output specific parts of the model or to use it as a basis for developing an alternative model.
Model Summary
^^^^^^^^^^^^^
The summary function (``summary(modelname)``) is a convenient method for displaying the highlights of a model after it has been executed. Many R functions have an associated ``summary()`` function which summarizes all key aspects of the model. In the case of OpenMx, the ``summary(model)`` includes a summary of the data, a list of all the free parameters with their name, matrix element locators, parameter estimate and standard error, as well as lower and upper bounds if those were assigned. Currently the list of goodness-of-fit statistics printed include the number of observed statistics, the number of estimated parameters, the degrees of freedom, minus twice the log-likelihood of the data, the number of observations, the chi-square and associated p-value and several information criteria. Various time-stamps and the OpenMx version number are also displayed.
.. cssclass:: output
..
.. code-block:: r
> summary(factorFit1)
data:
$`One Factor.data`
$`One Factor.data`$cov
x1 x2 x3 x4 x5
x1 0.1985443 0.1999953 0.2311884 0.2783865 0.3155943
x2 0.1999953 0.2916950 0.2924566 0.3515298 0.4019234
x3 0.2311884 0.2924566 0.3740354 0.4061291 0.4573587
x4 0.2783865 0.3515298 0.4061291 0.5332788 0.5610769
x5 0.3155943 0.4019234 0.4573587 0.5610769 0.6703023
free parameters:
name matrix row col Estimate Std.Error Std.Estimate Std.SE lbound ubound
1 One Factor.A[1,6] A x1 G 0.39715182 0.015549708 0.89130932 0.034897484
2 One Factor.A[2,6] A x2 G 0.50366066 0.018232433 0.93255458 0.033758321
3 One Factor.A[3,6] A x3 G 0.57724092 0.020448313 0.94384664 0.033435037
4 One Factor.A[4,6] A x4 G 0.70277323 0.024011318 0.96236250 0.032880581
5 One Factor.A[5,6] A x5 G 0.79624935 0.026669339 0.97255562 0.032574489
6 One Factor.S[1,1] S x1 x1 0.04081418 0.002812716 0.20556770 0.014166734
7 One Factor.S[2,2] S x2 x2 0.03801997 0.002805791 0.13034196 0.009618951
8 One Factor.S[3,3] S x3 x3 0.04082716 0.003152305 0.10915353 0.008427851
9 One Factor.S[4,4] S x4 x4 0.03938701 0.003408870 0.07385841 0.006392303
10 One Factor.S[5,5] S x5 x5 0.03628708 0.003678556 0.05413557 0.005487924
observed statistics: 15
estimated parameters: 10
degrees of freedom: 5
-2 log likelihood: -3648.281
saturated -2 log likelihood: -3655.665
number of observations: 500
chi-square: 7.384002
p: 0.1936117
Information Criteria:
df Penalty Parameters Penalty Sample-Size Adjusted
AIC: -2.615998 27.38400 NA
BIC: -23.689038 69.53008 37.78947
CFI: 0.9993583
TLI: 0.9987166
RMSEA: 0.03088043
timestamp: 2014-04-10 10:23:07
frontend time: 0.02934313 secs
backend time: 0.005492926 secs
independent submodels time: 1.907349e-05 secs
wall clock time: 0.03485513 secs
cpu time: 0.03485513 secs
openmx version number: 999.0.0
The table of free parameters requires a little more explanation. First, ``<NA>`` is given for the name of elements that were not assigned a label. Second, the columns 'row' and 'col' display the variables at the tail of the paths and the variables at the head of the paths respectively. Third, standard errors are calculated. We will discuss the use of standard errors versus confidence intervals later on.
Model Evaluation
^^^^^^^^^^^^^^^^
The ``mxEval()`` function should be your primary tool for observing and manipulating the final values stored within a MxModel object. The simplest form of the ``mxEval()`` function takes two arguments: an ``expression`` and a ``model``. The expression can be **any** arbitrary expression to be evaluated in R. That expression is evaluated, but the catch is that any named entities or parameter names are replaced with their current values from the model. The model can be either a built or a fitted model.
.. cssclass:: input
..
.. code-block:: r
myModel6 <- mxModel('topmodel',
mxMatrix('Full', 1, 1, values=1, free=TRUE, labels='p1', name='A'),
mxModel('submodel',
mxMatrix('Full', 1, 1, values=2, free=FALSE, labels='p2', name='B')
)
)
myModel6Run <- mxRun(myModel6)
The example above has a model ("submodel") embedded in another model ("topmodel"). Note that the names of the arguments can be omitted if they are used in the default order (``type``, ``nrow`` and ``ncol``).
The ``expression`` of the ``mxEval`` statement can include both matrices, algebras as well as matrix element labels, each taking on the value of the model specified in the ``model`` argument. To reinforce an earlier point, it is not necessary to restrict the expression only to valid MxAlgebra expressions. In the following example, we use the ``harmonic.mean()`` function from the ``psych`` package.
.. cssclass:: input
..
.. code-block:: r
mxEval(A + submodel.B + p1 + p2, myModel6) # initial values
mxEval(A + submodel.B + p1 + p2, myModel6Run) # final values
library(psych)
nVars <- 4
mxEval(nVars * harmonic.mean(c(A, submodel.B)), myModel6)
When the name of an entity in a model collides with the name of a built-in or user-defined function in R, the named entity will supersede the function. We strongly advise against naming entities with the same name as predefined functions or values in R, such as ``c``, ``T``, and ``F``, among others.
The ``mxEval()`` function allows the user to inspect the values of named entities without explicitly poking at the internals of the components of a model. We encourage the use of ``mxEval()`` to look at the state of a model either before the execution of a model or after model execution.
Indexing Operator
^^^^^^^^^^^^^^^^^
MxModel objects support the ``$`` operator, also known as the list indexing operator, to access all the components contained within a model. Here is an example collection of models that will help explain the uses of the ``$`` operator:
.. cssclass:: input
..
.. code-block:: r
myModel7 <-
mxModel('topmodel',
mxMatrix(type='Full', nrow=1, ncol=1, name='A'),
mxAlgebra(A, name='B'),
mxModel('submodel1',
mxConstraint(topmodel.A == topmodel.B, name = 'C'),
mxModel('undersub1', mxData(diag(3), type='cov', numObs=10)
)
),
mxModel('submodel2',
mxFitFunctionAlgebra('topmodel.A')
)
)
Access Elements
^^^^^^^^^^^^^^^
The first useful trick is entering the string ``model$`` in the R interpreter and then pressing the TAB key. You should see a list of all the named entities contained within the ``model`` object.
.. cssclass:: output
..
.. code-block:: r
> model$
model$A
model$B
model$submodel1
model$submodel2
model$submodel1.C
model$undersub1
model$undersub1.data
model$submodel2.fitfunction
The named entities of the model are displayed in one of three modes.
#. All of the submodels contained within the parent model are accessed by using their unique model name (``submodel1``, ``submodel2``, and ``undersub1``).
#. All of the named entities contained within the parent model are displayed by their names (``A`` and ``B``).
#. All of the named entities contained by the submodels are displayed in the ``modelname.entityname`` format (``submodel1.C``, ``submodel2.fitfunction``, and ``undersub1.data``).
Modify Elements
^^^^^^^^^^^^^^^
The list indexing operator can also be used to modify the components of an existing model. There are three modes of using the list indexing operator to perform modifications, and they correspond to the three modes for accessing elements.
In the first mode, a submodel can be replaced using the unique name of the submodel or even eliminated.
.. cssclass:: output
..
.. code-block:: r
# replace 'submodel1' with the contents of the mxModel() expression
model$submodel1 <- mxModel(...)
# eliminate 'undersub1' and all children models
model$undersub1 <- NULL
In the second mode, the named entities of the parent model are modified using their names. Existing matrices can be eliminated or new matrices can be created.
.. cssclass:: output
..
.. code-block:: r
# eliminate matrix 'A'
model$A <- NULL
# create matrix 'D'
model$D <- mxMatrix(...)
In the third mode, named entities of a submodel can be modified using the ``modelname.entityname`` format. Again existing elements can be eliminated or new elements can be created.
.. cssclass:: output
..
.. code-block:: r
# eliminate constraint 'C' from submodel1
model$submodel1.C <- NULL
# create algebra 'D' in undersub1
model$undersub1.D <- mxAlgebra(...)
# create 'undersub2' as a child model of submodel1
model$submodel1.undersub2 <- mxModel(...)
Keep in mind that when using the list indexing operator to modify a named entity within a model, the name of the created or modified entity is always the name on the left-hand side of the ``<-`` operator. This feature can be convenient, as it avoids the need to specify a name of the entity on the right-hand side of the ``<-`` operator.
Classes
-------
We have introduced a number of OpenMx functions which correspond to specific classes which are summarized below.
The basic unit of abstraction in the OpenMx library is the model. A model serves as a container for a collection of matrices, algebras, constraints, expectation, fit functions, data sources, and nested sub-models. In the parlance of R, a model is a value that belongs to the class MxModel that has been defined by the OpenMx library. The following table indicates what classes are defined by the OpenMx library.
+--------------------+---------------------+
| entity | S4 class |
+====================+=====================+
| model | MxModel |
+--------------------+---------------------+
| data source | MxData |
+--------------------+---------------------+
| matrix | MxMatrix |
+--------------------+---------------------+
| algebra | MxAlgebra |
+--------------------+---------------------+
| expectation | MxExpectationRAM |
| | MxExpectationNormal |
+--------------------+---------------------+
| fit function | MxFitFunctionML |
+--------------------+---------------------+
| constraint | MxConstraint |
+--------------------+---------------------+
All of the entities listed in the table are identified by the OpenMx library by the name assigned to them. A name is any character string that does not contain the "." character. In the parlance of the OpenMx library, a model is a container of named entities. The name of an OpenMx entity bears no relation to the R object that is used to identify the entity. In our example, the object *factorModel1* is created with the ``mxModel()`` function and stores a value that is an MxModel object with the name 'One Factor'.
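The naming rule can be checked with plain R; ``isValidOmxName()`` below is a hypothetical helper written for this example, not part of OpenMx:

```r
# A name is valid for an OpenMx entity if it contains no "." character.
isValidOmxName <- function(x) !grepl(".", x, fixed = TRUE)

isValidOmxName("facLoadings")    # TRUE
isValidOmxName("fac.loadings")   # FALSE
```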
.. [RAM1990] McArdle, J.J. & Boker, S.M. (1990). RAMpath: Path diagram software. Denver: Data Transforms Inc.
Updating existing experiment
============================
|video-update|
You can update experiments even after they have finished running. This lets you add new data or visualizations to a previously closed experiment and makes multi-stage training convenient.
.. _update-existing-experiment-basics:
Why might you want to update an existing experiment?
----------------------------------------------------
Updating an existing experiment can come in handy in several situations:
* You want to add metrics or visualizations to the closed experiment.
* You finished model training and closed the experiment earlier, but now you want to continue training from that point. You can even repeat the procedure multiple times: ``resume experiment -> log more data``. Have a look at the simple example below for details.
.. _update-existing-experiment-basics-simple-example:
How to update existing experiment?
----------------------------------
To update an experiment, you first need to get the project that contains it. Then get the :class:`~neptune.experiments.Experiment` object of the experiment you want to update.
.. code-block:: python3
import neptune
# Get project
project = neptune.init('my_workspace/my_project')
# Get experiment object for appropriate experiment, here 'SHOW-2066'
my_exp = project.get_experiments(id='SHOW-2066')[0]
The experiment with ``id='SHOW-2066'`` is now ready to be updated. Use ``my_exp`` to continue logging to it.
Note that with :meth:`~neptune.projects.Project.get_experiments` you can get experiments by ``id``, ``state``, ``owner``, ``tag`` and ``min_running_time``.
.. code-block:: python3
from neptunecontrib.api.table import log_chart
# Log metrics, images, text
my_exp.log_metric(...)
my_exp.log_image(...)
my_exp.log_text(...)
# Append tag
my_exp.append_tag('updated')
# Log new chart
log_chart('matplotlib-interactive', fig, my_exp)
The technique is the same as described in the section about :ref:`logging by using experiment object <logging-advanced-using-experiment-object-explicitly>`.
.. note::
You can retrieve an experiment and log more data to it multiple times.
The example below shows an updated experiment, with more data points logged to the ``'mse'`` metric and a new ``'pretty-random-metric'`` added.
+--------------------------------------------------------------------------------------------------------------------+
| .. image:: ../_static/images/logging-and-managing-experiment-results/updating-experiment/update-charts-before.png |
| :target: ../_static/images/logging-and-managing-experiment-results/updating-experiment/update-charts-before.png |
| :alt: Charts in original experiment |
+====================================================================================================================+
| Charts in original experiment |
+--------------------------------------------------------------------------------------------------------------------+
+-------------------------------------------------------------------------------------------------------------------+
| .. image:: ../_static/images/logging-and-managing-experiment-results/updating-experiment/update-charts-after.png |
| :target: ../_static/images/logging-and-managing-experiment-results/updating-experiment/update-charts-after.png |
| :alt: Charts in updated experiment |
+===================================================================================================================+
| Charts in updated experiment |
+-------------------------------------------------------------------------------------------------------------------+
|example-update|
.. _update-existing-experiment-what-you-can-cannot:
What you can/cannot update
--------------------------
You can freely use all :class:`~neptune.experiments.Experiment` methods. These include methods for logging more data:
* :meth:`~neptune.experiments.Experiment.log_metric`
* :meth:`~neptune.experiments.Experiment.log_artifact`
* :meth:`~neptune.experiments.Experiment.log_image`
* :meth:`~neptune.experiments.Experiment.log_text`
* :meth:`~neptune.experiments.Experiment.set_property`
* :meth:`~neptune.experiments.Experiment.append_tag`
Moreover, you can use all logging methods from ``neptunecontrib``, namely:
* :meth:`~neptunecontrib.api.audio.log_audio`
* :meth:`~neptunecontrib.api.chart.log_chart`
* :meth:`~neptunecontrib.api.video.log_video`
* :meth:`~neptunecontrib.api.table.log_table`
* :meth:`~neptunecontrib.api.html.log_html`
* :meth:`~neptunecontrib.api.explainers.log_explainer`
.. note::
Learn more about :ref:`logging options <what-you-can-log>` to see why and how to use each method.
However, updating an experiment comes with some limitations. Specifically:
* you cannot update |parameters| and |source-code|, but you can upload sources as an artifact, using :meth:`~neptune.experiments.Experiment.log_artifact`.
* |hardware-consumption| for the update will not be tracked.
* ``stdout`` and ``stderr`` are not logged during update.
* experiment status (failed/succeeded/aborted) will not be updated.
.. _update-existing-experiment-step-by-step:
.. External links
.. |parameters| raw:: html
<a href="https://ui.neptune.ai/o/USERNAME/org/example-project/e/HELLO-325/parameters" target="_blank">parameters</a>
.. |hardware-consumption| raw:: html
<a href="https://ui.neptune.ai/o/USERNAME/org/example-project/e/HELLO-325/monitoring" target="_blank">hardware consumption</a>
.. |source-code| raw:: html
<a href="https://ui.neptune.ai/o/USERNAME/org/example-project/e/HELLO-325/source-code" target="_blank">source code</a>
.. Buttons
.. |example-update| raw:: html
<div class="see-in-neptune">
<button><a target="_blank"
href="https://ui.neptune.ai/o/shared/org/showroom/e/SHOW-2066/charts">
<img width="50" height="50" style="margin-right:10px"
src="https://neptune.ai/wp-content/uploads/neptune-ai-blue-vertical.png">See example in Neptune</a>
</button>
</div>
.. Videos
.. |video-update| raw:: html
<div style="position: relative; padding-bottom: 56.872037914691944%; height: 0;"><iframe src="https://www.loom.com/embed/d2bb1e74c74a4892a68b0bc9dc0a0f11" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div> | 46.896552 | 319 | 0.611618 |
e4892e5264236d30dccd35c486b0b21a9f2ac9a2 | 3,049 | rst | reStructuredText | source/lessons/L3/geocoding.rst | ropitz/2018 | a806773378041dc08646e48ceda935c58b4a6a18 | [
"MIT"
] | null | null | null | source/lessons/L3/geocoding.rst | ropitz/2018 | a806773378041dc08646e48ceda935c58b4a6a18 | [
"MIT"
] | null | null | null | source/lessons/L3/geocoding.rst | ropitz/2018 | a806773378041dc08646e48ceda935c58b4a6a18 | [
"MIT"
] | null | null | null | Geocoding
=========
Overview of Geocoders
---------------------
Geocoding, i.e. converting addresses into coordinates or vice versa, is
a common GIS task. Luckily, there are nice Python libraries
that make geocoding easy. One of the libraries that can do
the geocoding for us is
`geopy <http://geopy.readthedocs.io/en/1.11.0/>`__, which makes it easy to
locate the coordinates of addresses, cities, countries, and landmarks
across the globe using third-party geocoders and other data sources.
As mentioned, **Geopy** uses third-party geocoders - i.e. services that do
the geocoding - to locate the addresses, and it works with multiple
different service providers such as:
- `ESRI
ArcGIS <http://resources.arcgis.com/en/help/arcgis-rest-api/>`__
- `Baidu
Maps <http://developer.baidu.com/map/webservice-geocoding.htm>`__
- `Bing <http://www.microsoft.com/maps/developers/web.aspx>`__
- `geocoder.us <http://geocoder.us/>`__
- `GeocodeFarm <https://www.geocodefarm.com/>`__
- `GeoNames <http://www.geonames.org/>`__
- `Google Geocoding API
(V3) <https://developers.google.com/maps/documentation/geocoding/>`__
- `IGN
France <http://api.ign.fr/tech-docs-js/fr/developpeur/search.html>`__
- `Mapquest <http://www.mapquestapi.com/geocoding/>`__
- `Mapzen Search <https://mapzen.com/projects/search/>`__
- `NaviData <http://navidata.pl>`__
- `OpenCage <http://geocoder.opencagedata.com/api.html>`__
- `OpenMapQuest <http://developer.mapquest.com/web/products/open/geocoding-service>`__
- `Open Street Map Nominatim <https://wiki.openstreetmap.org/wiki/Nominatim>`__
- `SmartyStreets <https://smartystreets.com/products/liveaddress-api>`__
- `What3words <http://what3words.com/api/reference>`__
- `Yandex <http://api.yandex.com/maps/doc/intro/concepts/intro.xml>`__
Thus, there are plenty of geocoders to choose from! However, for most of these services you might need to
request a so-called API access key from the service provider to be able to use the service.
Luckily, Nominatim, a geocoder based on OpenStreetMap data, does not require an API key
for small-scale geocoding jobs, as the service is rate-limited to 1 request per second (3600 / hour).
As we are only making a small set of queries, we can do the geocoding using Nominatim.
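Because Nominatim allows at most one request per second, it helps to throttle your own query loop. The sketch below is plain Python and independent of geopy; the ``rate_limited`` helper and the placeholder ``geocode`` function are illustrative names, not part of any geocoding library's API.

```python
import time

def rate_limited(min_interval):
    """Decorator: enforce at least min_interval seconds between calls."""
    def decorate(func):
        last_call = [float("-inf")]  # mutable cell updated by the wrapper
        def wrapper(*args, **kwargs):
            wait = min_interval - (time.monotonic() - last_call[0])
            if wait > 0:
                time.sleep(wait)  # pause until the interval has elapsed
            last_call[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorate

@rate_limited(1.0)  # Nominatim's limit: 1 request per second
def geocode(address):
    # Placeholder: a real implementation would call a geocoding
    # service here, e.g. geopy's Nominatim geocoder.
    return {"query": address}

print(geocode("Helsinki"))
```

The same throttling pattern applies regardless of which geocoding client you end up using.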
.. note::
- **Note 1:** If you need to do larger-scale geocoding jobs, request an API key from one of the geocoders listed above.
- **Note 2:** There are also other Python modules in addition to geopy that can do geocoding such as `Geocoder <http://geocoder.readthedocs.io/>`__.
.. hint::
You can get your access keys to e.g. Google Geocoding API from `Google APIs console <https://code.google.com/apis/console>`__ by creating a Project
and enabling a that API from `Library <https://console.developers.google.com/apis/library>`__. Read a
short introduction about using Google API Console from `here <https://developers.googleblog.com/2016/03/introducing-google-api-console.html>`__.
| 51.677966 | 151 | 0.743194 |
7d1fd09c4588b22139300e6ef8fe281cf0d23bd8 | 7,583 | rst | reStructuredText | docs/oecophylla-inputs.rst | biocore/shrugged | 725abd9a09e6c5c447c3801e582b97bb790a2954 | [
"MIT"
] | 13 | 2017-09-27T08:55:56.000Z | 2020-05-01T16:47:22.000Z | docs/oecophylla-inputs.rst | biocore/shrugged | 725abd9a09e6c5c447c3801e582b97bb790a2954 | [
"MIT"
] | 123 | 2017-09-08T01:00:44.000Z | 2018-04-16T18:26:42.000Z | docs/oecophylla-inputs.rst | biocore/shrugged | 725abd9a09e6c5c447c3801e582b97bb790a2954 | [
"MIT"
] | 22 | 2017-09-07T23:54:47.000Z | 2020-12-28T13:21:44.000Z | More about inputs, parameters, and environments
===============================================
Fundamentally, Oecophylla is a *workflow manager*, which means it's here to tell
the computer (or cluster) what to do to move a bunch of samples through a
series of steps. We've tried to provide some sensible starting points that make
it easy to perform basic processing on standard metagenomic datasets, but
you'll need to be able to make some modifications to fit your needs.
Ultimately, you have to know how to tell Oecophylla:
- where your data are (**input**)
- what you want to do with them (**parameters**)
- how to do actually run the programs (**environments**)
Each of these three elements are stored in a single file called ``config.yaml``
that Oecophylla creates in the ``output-dir``. Subsequent runs in the same
output directory will automatically use this ``config.yaml`` file if it is
found.
You can also have Oecophylla **only** output the ``config.yaml`` (and not do
the workflow run itself) by passing the ``--just-config`` flag.
Input data
----------
Oecophylla operates on a *files per sample* basis. It has a concept of
*samples*, and needs a way to associate *samples* with input sequence *files*.
Once it does this first association, Oecophylla will combine all the forward
and all the reverse read files per sample (for example, if the sample was run
across multiple lanes) and keep track of things from there.
There are two automatic ways and one manual way to tell Oecophylla how to find
your input data:
The quick way
~~~~~~~~~~~~~
If all of your samples are in the same folder, and **if they follow the
standard Illumina naming convention** [1]_, you can just pass the input folder to
``oecophylla workflow`` and it will guess sample names from file names.
.. code-block:: bash
:caption: this will find all six samples in the included test_reads
directory
oecophylla workflow \
--input-dir test_data/test_reads
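To illustrate how sample names can be deduced from Illumina-style file names [1]_, here is a small, hypothetical sketch in Python; it demonstrates the naming convention only and is not Oecophylla's actual implementation.

```python
import re
from collections import defaultdict

# Illumina convention: <sample>_S<index>_L<lane>_R<read>_<run>.fastq.gz
ILLUMINA_RE = re.compile(
    r"^(?P<sample>.+)_S\d+_L\d{3}_R(?P<read>[12])_\d{3}\.fastq\.gz$"
)

def group_reads(filenames):
    """Group forward (R1) and reverse (R2) files per deduced sample name."""
    samples = defaultdict(lambda: {"forward": [], "reverse": []})
    for name in sorted(filenames):
        m = ILLUMINA_RE.match(name)
        if m is None:
            continue  # skip files that do not follow the convention
        key = "forward" if m.group("read") == "1" else "reverse"
        samples[m.group("sample")][key].append(name)
    return dict(samples)

reads = [
    "S22205_S104_L001_R1_001.fastq.gz",
    "S22205_S104_L001_R2_001.fastq.gz",
    "S22282_S102_L001_R1_001.fastq.gz",
    "S22282_S102_L001_R2_001.fastq.gz",
]
print(group_reads(reads))
```

Files that do not follow the convention are simply skipped here, which is why the quick way only works for standard Illumina names.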
The precise way
~~~~~~~~~~~~~~~
You can also pass Oecophylla the Illumina-formatted sample sheet that was used
to demultiplex the samples. This allows you to use different sample names for
your samples than what can be deduced from the read names themselves. This also
allows you to run only a subset of the samples from the run, by including only
those lines of the sample sheet. This also depends on your files being named
following standard Illumina naming conventions [1]_.
In this case, provide the sample name *you want Oecophylla to use* in a column
in the Sample Sheet called 'Description', and then pass both the input
directory and the path to the sample sheet to Oecophylla:
.. code-block:: bash
:caption: this will only run two of the 6 test samples provided:
oecophylla workflow \
--input-dir test_data/test_reads \
--sample-sheet test_data/test_config/example_sample_sheet.txt
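The Description-column lookup can be sketched in a few lines of standard-library Python. The sheet text below is a minimal, hypothetical ``[Data]`` section (real Illumina sample sheets contain more columns and sections), and ``sample_names`` is an illustrative helper, not part of Oecophylla:

```python
import csv
import io

SHEET = """\
[Header]
IEMFileVersion,4
[Data]
Sample_ID,Sample_Name,Description
S22205,S22205,sample_S22205
S22282,S22282,sample_S22282
"""

def sample_names(sheet_text):
    """Map each Sample_ID to the name in the Description column."""
    lines = sheet_text.splitlines()
    start = lines.index("[Data]") + 1  # rows after the [Data] marker
    rows = csv.DictReader(io.StringIO("\n".join(lines[start:])))
    return {row["Sample_ID"]: row["Description"] for row in rows}

print(sample_names(SHEET))
```

The Description values here match the sample names used in the manual ``config.yaml`` example below, which is the mapping the sample sheet exists to provide.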
The manual way
~~~~~~~~~~~~~~
If you want to get creative, or your files don't follow standard Illumina
naming conventions [1]_, you can just create or modify your own ``config.yaml``
file. Oecophylla will look for a ``samples`` block, with ``forward`` and
``reverse`` levels for each sample. Reads should be provided as a *matched*
list to each of these levels, as so:
.. code-block:: yaml
:caption: this is not recommended but some of you will do it anyway
samples:
sample_S22205:
forward:
- test_data/test_reads/S22205_S104_L001_R1_001.fastq.gz
reverse:
- test_data/test_reads/S22205_S104_L001_R2_001.fastq.gz
sample_S22282:
forward:
- test_data/test_reads/S22282_S102_L001_R1_001.fastq.gz
reverse:
- test_data/test_reads/S22282_S102_L001_R2_001.fastq.gz
Parameters
----------
Oecophylla wraps a lot of tools, and a subset of parameters for these tools
can be specified in a **parameters file**. An example parameters
file is provided in ``test_data/test_config/test_params.yml``. The parameters
file is combined with information about the samples and environments in the
single ``config.yaml`` file (which in principle should be enough to reproduce
an entire analysis).
This parameters file also includes paths to the databases used for each tool.
The included ``test_params.yml`` file includes paths to the tiny test databases
we package to verify installation. For example, the following portion of the
``test_params.yml`` file links to the test **humann2** databases, along with **Atropos** parameters suitable for trimming Kapa HyperPlus libraries:
.. code-block:: yaml
:caption: this has *relative* file paths, because executing the test runs with the --test parameter always produces outputs with ``test_data`` linked in the output directory
atropos: ' -a GATCGGAAGAGCACACGTCTGAACTCCAGTCAC -A GATCGGAAGAGCGTCGTGTAGGGAAAGGAGTGT
-q 15 --minimum-length 100 --pair-filter any'
humann2:
aa_db: test_data/test_dbs/uniref50_mini
nt_db: test_data/test_dbs/chocophlan_test
other: ''
If you're executing Oecophylla on your own computer or cluster, you'll want to
download appropriate databases and create a ``params.yml`` with the appropriate
paths. We've included one set up with default databases available on our
Barnacle cluster. For example, here's the same portion of the the parameters
file in ``cluster_configs/barnacle/tool_params.yml``:
.. code-block:: yaml
:caption: notice that this has *absolute* file paths
atropos: ' -a GATCGGAAGAGCACACGTCTGAACTCCAGTCAC -A GATCGGAAGAGCGTCGTGTAGGGAAAGGAGTGT
-q 15 --minimum-length 100 --pair-filter any'
humann2:
aa_db: /databases/humann2_data/uniref90/uniref
nt_db: /databases/humann2_data/full_chocophlan.v0.1.1/chocophlan
other: ''
When I'm running Oecophylla, I create a copy of my defaults parameters file in
the project output directory I'm using and modify it as necessary.
Environments
------------
Similar to the parameters file, Oecophylla needs an **environments file** to
tell the shell doing the execution of each job how to set up the environment
for that job [2]_. This file contains a one-line command sufficient to set up
the environment for each module, which is executed at the beginning of each
job run from that module. We've provided default ``envs.yml`` files in the
``test_data/test_config/test_envs.yml`` and
``cluster_configs/barnacle/envs.yml`` files suitable for running standard
analysis using the oecophylla-installed module Conda environments. They look
like this:
.. code-block:: yaml
humann2: source activate oecophylla-humann2
qc: source activate oecophylla-qc
raw: source activate oecophylla-qc
Eventually, we will install some standard module environments on Barnacle
centrally using the GNU Modules system. To use these environments once they
are available, we will change the lines per-module in
``cluster_configs/barnacle/envs.yml`` to look something like this:
.. code-block:: yaml
humann2: module load humann2
qc: module load oecophylla-qc
raw: module load oecophylla-qc
.. [1] as in ``sample1_S001_L001_R1_001.fastq.gz``, where ``sample1`` is
followed by an index, lane, read, and run number and has the ``.fastq.gz``
extension.
.. [2] This slightly reproduces Snakemake's built-in conda environment
specification feature. Why not use the former? We did this so that central
execution on a shared cluster could take advantage of centrally installed
environments per module, freeing people from having to maintain their own
module installations.
| 40.768817 | 177 | 0.747725 |
aace155f346ebc245d58477bddb0623e98b37875 | 194 | rst | reStructuredText | docs/api/pycatia/space_analyses_interfaces/clash_results.rst | evereux/catia_python | 08948585899b12587b0415ce3c9191a408b34897 | [
"MIT"
] | 90 | 2019-02-21T10:05:28.000Z | 2022-03-19T01:53:41.000Z | docs/api/pycatia/space_analyses_interfaces/clash_results.rst | Luanee/pycatia | ea5eef8178f73de12404561c00baf7a7ca30da59 | [
"MIT"
] | 99 | 2019-05-21T08:29:12.000Z | 2022-03-25T09:55:15.000Z | docs/api/pycatia/space_analyses_interfaces/clash_results.rst | Luanee/pycatia | ea5eef8178f73de12404561c00baf7a7ca30da59 | [
"MIT"
] | 26 | 2019-04-04T06:31:36.000Z | 2022-03-30T07:24:47.000Z | .. _Clash_results:
pycatia.space_analyses_interfaces.clash_results
===============================================
.. automodule:: pycatia.space_analyses_interfaces.clash_results
:members: | 27.714286 | 63 | 0.623711 |
902295a70d284aab6e6b7db0bc38b07145e6a03e | 15,040 | rst | reStructuredText | doc/kernel/services/other/float.rst | ldalek/zephyr | fa2054f93e4e80b079a4a6a9e84d642e51912042 | [
"Apache-2.0"
] | 21 | 2019-02-13T02:11:04.000Z | 2020-04-23T19:09:17.000Z | doc/kernel/services/other/float.rst | ldalek/zephyr | fa2054f93e4e80b079a4a6a9e84d642e51912042 | [
"Apache-2.0"
] | 189 | 2018-12-14T11:44:08.000Z | 2020-05-20T15:14:35.000Z | doc/kernel/services/other/float.rst | ldalek/zephyr | fa2054f93e4e80b079a4a6a9e84d642e51912042 | [
"Apache-2.0"
] | 186 | 2018-12-14T11:56:22.000Z | 2020-05-15T12:51:11.000Z | .. _float_v2:
Floating Point Services
#######################
The kernel allows threads to use floating point registers on board
configurations that support these registers.
.. note::
Floating point services are currently available only for boards
based on ARM Cortex-M SoCs supporting the Floating Point Extension,
the Intel x86 architecture, the SPARC architecture and ARCv2 SoCs
supporting the Floating Point Extension. The services provided
are architecture specific.
The kernel does not support the use of floating point registers by ISRs.
.. contents::
:local:
:depth: 2
Concepts
********
The kernel can be configured to provide only the floating point services
required by an application. Three modes of operation are supported,
which are described below. In addition, the kernel's support for the SSE
registers can be included or omitted, as desired.
No FP registers mode
====================
This mode is used when the application has no threads that use floating point
registers. It is the kernel's default floating point services mode.
If a thread uses any floating point register,
the kernel generates a fatal error condition and aborts the thread.
Unshared FP registers mode
==========================
This mode is used when the application has only a single thread
that uses floating point registers.
On x86 platforms, the kernel initializes the floating point registers so they can
be used by any thread (initialization in skipped on ARM Cortex-M platforms and
ARCv2 platforms). The floating point registers are left unchanged whenever a
context switch occurs.
.. note::
The behavior is undefined, if two or more threads attempt to use
the floating point registers, as the kernel does not attempt to detect
(or prevent) multiple threads from using these registers.
Shared FP registers mode
========================
This mode is used when the application has two or more threads that use
floating point registers. Depending upon the underlying CPU architecture,
the kernel supports one or more of the following thread sub-classes:
* non-user: A thread that cannot use any floating point registers
* FPU user: A thread that can use the standard floating point registers
* SSE user: A thread that can use both the standard floating point registers
and SSE registers
The kernel initializes and enables access to the floating point registers,
so they can be used
by any thread, then saves and restores these registers during
context switches to ensure the computations performed by each FPU user
or SSE user are not impacted by the computations performed by the other users.
ARM Cortex-M architecture (with the Floating Point Extension)
-------------------------------------------------------------
.. note::
The Shared FP registers mode is the default Floating Point
Services mode in ARM Cortex-M.
On the ARM Cortex-M architecture with the Floating Point Extension, the kernel
treats *all* threads as FPU users when shared FP registers mode is enabled.
This means that any thread is allowed to access the floating point registers.
The ARM kernel automatically detects that a given thread is using the floating
point registers the first time the thread accesses them.
Pretag a thread that intends to use the FP registers by
using one of the techniques listed below.
* A statically-created ARM thread can be pretagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created ARM thread can be pretagged by passing the
:c:macro:`K_FP_REGS` option to :c:func:`k_thread_create`.
Pretagging a thread with the :c:macro:`K_FP_REGS` option instructs the
MPU-based stack protection mechanism to properly configure the size of
the thread's guard region to always guarantee stack overflow detection,
and enable lazy stacking for the given thread upon thread creation.
During thread context switching the ARM kernel saves the *callee-saved*
floating point registers, if the switched-out thread has been using them.
Additionally, the *caller-saved* floating point registers are saved on
the thread's stack. If the switched-in thread has been using the floating
point registers, the kernel restores the *callee-saved* FP registers of
the switched-in thread and the *caller-saved* FP context is restored from
the thread's stack. Thus, the kernel does not save or restore the FP
context of threads that are not using the FP registers.
Each thread that intends to use the floating point registers must provide
an extra 72 bytes of stack space where the callee-saved FP context can
be saved.
`Lazy Stacking
<https://developer.arm.com/documentation/dai0298/a>`_
is currently enabled in Zephyr applications on ARM Cortex-M
architecture, minimizing interrupt latency when the floating
point context is active.
When the MPU-based stack protection mechanism is not enabled, lazy stacking
is always active in the Zephyr application. When the MPU-based stack protection
is enabled, the following rules apply with respect to lazy stacking:
* Lazy stacking is activated by default on threads that are pretagged with
:c:macro:`K_FP_REGS`
* Lazy stacking is activated dynamically on threads that are not pretagged with
:c:macro:`K_FP_REGS`, as soon as the kernel detects that they are using the
floating point registers.
If an ARM thread does not require use of the floating point registers any
more, it can call :c:func:`k_float_disable`. This instructs the kernel
not to save or restore its FP context during thread context switching.
ARM64 architecture
------------------
.. note::
The Shared FP registers mode is the default Floating Point
Services mode on ARM64. The compiler is free to optimize code
using FP/SIMD registers, and library functions such as memcpy
are known to make use of them.
On the ARM64 (Aarch64) architecture the kernel treats each thread as a FPU
user on a case-by-case basis. A "lazy save" algorithm is used during context
switching which updates the floating point registers only when it is absolutely
necessary. For example, the registers are *not* saved when switching from an
FPU user to a non-user thread, and then back to the original FPU user.
FPU register usage by ISRs is supported although not recommended. When an
ISR uses floating point or SIMD registers, then the access is trapped, the
current FPU user context is saved in the thread object and the ISR is resumed
with interrupts disabled so to prevent another IRQ from interrupting the ISR
and potentially requesting FPU usage. Because ISR don't have a persistent
register context, there are no provision for saving an ISR's FPU context
either, hence the IRQ disabling.
Each thread object becomes 512 bytes larger when Shared FP registers mode
is enabled.
ARCv2 architecture
------------------
On the ARCv2 architecture, the kernel treats each thread as a non-user
or FPU user and the thread must be tagged by one of the
following techniques.
* A statically-created ARC thread can be tagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created ARC thread can be tagged by passing the
:c:macro:`K_FP_REGS` to :c:func:`k_thread_create`.
If an ARC thread does not require use of the floating point registers any
more, it can call :c:func:`k_float_disable`. This instructs the kernel
not to save or restore its FP context during thread context switching.
During thread context switching the ARC kernel saves the *callee-saved*
floating point registers, if the switched-out thread has been using them.
Additionally, the *caller-saved* floating point registers are saved on
the thread's stack. If the switched-in thread has been using the floating
point registers, the kernel restores the *callee-saved* FP registers of
the switched-in thread and the *caller-saved* FP context is restored from
the thread's stack. Thus, the kernel does not save or restore the FP
context of threads that are not using the FP registers. An extra 16 bytes
(single floating point hardware) or 32 bytes (double floating point hardware)
of stack space is required to load and store floating point registers.
RISC-V architecture
-------------------
On the RISC-V architecture, the kernel treats each thread as a non-user
or FPU user and the thread must be tagged by one of the
following techniques:
* A statically-created RISC-V thread can be tagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created RISC-V thread can be tagged by passing the
:c:macro:`K_FP_REGS` to :c:func:`k_thread_create`.
* A running RISC-V thread can be tagged by calling :c:func:`k_float_enable`.
This function can only be called from the thread itself.
If a RISC-V thread no longer requires the use of the floating point registers,
it can call :c:func:`k_float_disable`. This instructs the kernel not to
save or restore its FP context during thread context switching. This function
can only be called from the thread itself.
During thread context switching the RISC-V kernel saves the *callee-saved*
floating point registers, if the switched-out thread is tagged with
:c:macro:`K_FP_REGS`. Additionally, the *caller-saved* floating point
registers are saved on the thread's stack. If the switched-in thread has been
tagged with :c:macro:`K_FP_REGS`, then the kernel restores the *callee-saved*
FP registers of the switched-in thread and the *caller-saved* FP context is
restored from the thread's stack. Thus, the kernel does not save or restore the
FP context of threads that are not using the FP registers. An extra 84 bytes
(single floating point hardware) or 164 bytes (double floating point hardware)
of stack space is required to load and store floating point registers.
SPARC architecture
------------------
On the SPARC architecture, the kernel treats each thread as a non-user
or FPU user and the thread must be tagged by one of the
following techniques:
* A statically-created thread can be tagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created thread can be tagged by passing the
:c:macro:`K_FP_REGS` to :c:func:`k_thread_create`.
During thread context switch at exit from interrupt handler, the SPARC
kernel saves *all* floating point registers, if the FPU was enabled in
the switched-out thread. Floating point registers are saved on the thread's
stack. Floating point registers are restored when a thread context is restored
iff they were saved at the context save. Saving and restoring of the floating
point registers is synchronous and thus not lazy. The FPU is always disabled
when an ISR is called (independent of :kconfig:option:`CONFIG_FPU_SHARING`).
Floating point disabling with :c:func:`k_float_disable` is not implemented.
When :kconfig:option:`CONFIG_FPU_SHARING` is used, then 136 bytes of stack space
is required for each FPU user thread to load and store floating point
registers. No extra stack is required if :kconfig:option:`CONFIG_FPU_SHARING` is
not used.
x86 architecture
----------------
On the x86 architecture the kernel treats each thread as a non-user,
FPU user or SSE user on a case-by-case basis. A "lazy save" algorithm is used
during context switching which updates the floating point registers only when
it is absolutely necessary. For example, the registers are *not* saved when
switching from an FPU user to a non-user thread, and then back to the original
FPU user. The following table indicates the amount of additional stack space a
thread must provide so the registers can be saved properly.
=========== =============== ==========================
Thread type FP register use Extra stack space required
=========== =============== ==========================
cooperative any 0 bytes
preemptive none 0 bytes
preemptive FPU 108 bytes
preemptive SSE 464 bytes
=========== =============== ==========================
The x86 kernel automatically detects that a given thread is using
the floating point registers the first time the thread accesses them.
The thread is tagged as an SSE user if the kernel has been configured
to support the SSE registers, or as an FPU user if the SSE registers are
not supported. If this would result in a thread that is an FPU user being
tagged as an SSE user, or if the application wants to avoid the exception
handling overhead involved in auto-tagging threads, it is possible to
pretag a thread using one of the techniques listed below.
* A statically-created x86 thread can be pretagged by passing the
:c:macro:`K_FP_REGS` or :c:macro:`K_SSE_REGS` option to
:c:macro:`K_THREAD_DEFINE`.
* A dynamically-created x86 thread can be pretagged by passing the
:c:macro:`K_FP_REGS` or :c:macro:`K_SSE_REGS` option to
:c:func:`k_thread_create`.
* An already-created x86 thread can pretag itself once it has started
by passing the :c:macro:`K_FP_REGS` or :c:macro:`K_SSE_REGS` option to
:c:func:`k_float_enable`.
If an x86 thread uses the floating point registers infrequently it can call
:c:func:`k_float_disable` to remove its tagging as an FPU user or SSE user.
This eliminates the need for the kernel to take steps to preserve
the contents of the floating point registers during context switches
when there is no need to do so.
When the thread again needs to use the floating point registers it can re-tag
itself as an FPU user or SSE user by calling :c:func:`k_float_enable`.
Implementation
**************
Performing Floating Point Arithmetic
====================================
No special coding is required for a thread to use floating point arithmetic
if the kernel is properly configured.
The following code shows how a routine can use floating point arithmetic
to avoid overflow issues when computing the average of a series of integer
values.
.. code-block:: c
int average(int *values, int num_values)
{
double sum;
int i;
sum = 0.0;
for (i = 0; i < num_values; i++) {
sum += *values;
values++;
}
return (int)((sum / num_values) + 0.5);
}
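For illustration only (this sketch is not part of the Zephyr documentation), a short Python program that simulates 32-bit signed integer arithmetic shows the overflow hazard the floating point version above avoids; the helper names are hypothetical:

```python
def wrap32(x):
    """Simulate 32-bit signed integer wraparound (hypothetical helper)."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def average_int32(values):
    """Naive integer average: the running sum can overflow and wrap."""
    total = 0
    for v in values:
        total = wrap32(total + v)
    return total // len(values)

def average_float(values):
    """Floating point average, as in the C example above: no overflow."""
    total = 0.0
    for v in values:
        total += v
    return int(total / len(values) + 0.5)

values = [2_000_000_000, 2_000_000_000]   # the sum exceeds INT32_MAX
print(average_int32(values))  # wrapped sum yields a wrong (negative) average
print(average_float(values))  # 2000000000
```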
Suggested Uses
**************
Use the kernel floating point services when an application needs to
perform floating point operations.
Configuration Options
*********************
To configure unshared FP registers mode, enable the :kconfig:option:`CONFIG_FPU`
configuration option and leave the :kconfig:option:`CONFIG_FPU_SHARING` configuration
option disabled.
To configure shared FP registers mode, enable both the :kconfig:option:`CONFIG_FPU`
configuration option and the :kconfig:option:`CONFIG_FPU_SHARING` configuration option.
Also, ensure that any thread that uses the floating point registers has
sufficient added stack space for saving floating point register values
during context switches, as described above.
For x86, use the :kconfig:option:`CONFIG_X86_SSE` configuration option to enable
support for SSEx instructions.
API Reference
*************
.. doxygengroup:: float_apis
.. AUTHORS.rst (elgertam/cookiepress)

=======
Credits
=======
Development Lead
----------------
* Andrew M. Elgert <andrew.elgert@gmail.com>
Contributors
------------
None yet. Why not be the first?
.. build/lib/enemies.rst (bouzelba/super_tank): m6809 assembler listing

                              1
2 ;;; gcc for m6809 : Feb 15 2016 21:40:10
3 ;;; 4.3.6 (gcc6809)
4 ;;; ABI version 1
5 ;;; -mint8
6 .module enemies.c
7 .globl _enemie0_vectors
8 .area .data
C880 9 _enemie0_vectors:
C880 00 10 .byte 0
C881 1E 11 .byte 30
C882 02 12 .byte 2
C883 FF 13 .byte -1
C884 00 14 .byte 0
C885 FC 15 .byte -4
C886 FF 16 .byte -1
C887 E2 17 .byte -30
C888 00 18 .byte 0
C889 FF 19 .byte -1
C88A 00 20 .byte 0
C88B F8 21 .byte -8
C88C FF 22 .byte -1
C88D 0A 23 .byte 10
C88E 00 24 .byte 0
C88F FF 25 .byte -1
C890 05 26 .byte 5
C891 FE 27 .byte -2
C892 FF 28 .byte -1
C893 00 29 .byte 0
C894 FB 30 .byte -5
C895 FF 31 .byte -1
C896 FB 32 .byte -5
C897 FD 33 .byte -3
C898 FF 34 .byte -1
C899 D8 35 .byte -40
C89A 00 36 .byte 0
C89B FF 37 .byte -1
C89C F6 38 .byte -10
C89D 03 39 .byte 3
C89E FF 40 .byte -1
C89F 00 41 .byte 0
C8A0 22 42 .byte 34
C8A1 FF 43 .byte -1
C8A2 0A 44 .byte 10
C8A3 03 45 .byte 3
C8A4 FF 46 .byte -1
C8A5 28 47 .byte 40
C8A6 00 48 .byte 0
C8A7 FF 49 .byte -1
C8A8 05 50 .byte 5
C8A9 FD 51 .byte -3
C8AA FF 52 .byte -1
C8AB 00 53 .byte 0
C8AC FB 54 .byte -5
C8AD FF 55 .byte -1
C8AE FB 56 .byte -5
C8AF FE 57 .byte -2
C8B0 FF 58 .byte -1
C8B1 F6 59 .byte -10
C8B2 00 60 .byte 0
C8B3 FF 61 .byte -1
C8B4 00 62 .byte 0
C8B5 F8 63 .byte -8
C8B6 FF 64 .byte -1
C8B7 1E 65 .byte 30
C8B8 00 66 .byte 0
C8B9 01 67 .byte 1
C8BA 00 68 .byte 0
C8BB 00 69 .byte 0
70 .globl _enemie0_shape
C8BC 71 _enemie0_shape:
C8BC 00 72 .byte 0
C8BD 14 73 .byte 20
C8BE C8 80 74 .word _enemie0_vectors
75 .globl _enemie0
C8C0 76 _enemie0:
C8C0 00 77 .byte 0
C8C1 00 78 .byte 0
C8C2 20 79 .byte 32
C8C3 1E 80 .byte 30
C8C4 C8 BC 81 .word _enemie0_shape
82 .globl _enemie1_vectors
C8C6 83 _enemie1_vectors:
C8C6 00 00 84 .word 0 ;skip space 8
C8C8 00 00 85 .word 0 ;skip space 6
C8CA 00 00 86 .word 0 ;skip space 4
C8CC 00 00 87 .word 0 ;skip space 2
88 .globl _enemie1_shape
C8CE 89 _enemie1_shape:
C8CE 01 90 .byte 1
C8CF 09 91 .byte 9
C8D0 C8 C6 92 .word _enemie1_vectors
93 .globl _enemie1
C8D2 94 _enemie1:
C8D2 00 95 .byte 0
C8D3 00 96 .byte 0
C8D4 00 97 .byte 0
C8D5 23 98 .byte 35
C8D6 C8 CE 99 .word _enemie1_shape
100 .globl _enemie2_vectors
C8D8 101 _enemie2_vectors:
C8D8 00 00 102 .word 0 ;skip space 8
C8DA 00 00 103 .word 0 ;skip space 6
C8DC 00 00 104 .word 0 ;skip space 4
C8DE 00 00 105 .word 0 ;skip space 2
106 .globl _enemie2_shape
C8E0 107 _enemie2_shape:
C8E0 01 108 .byte 1
C8E1 09 109 .byte 9
C8E2 C8 D8 110 .word _enemie2_vectors
111 .globl _enemie2
C8E4 112 _enemie2:
C8E4 00 113 .byte 0
C8E5 00 114 .byte 0
C8E6 00 115 .byte 0
C8E7 4B 116 .byte 75
C8E8 C8 E0 117 .word _enemie2_shape
118 .globl _enemies
C8EA 119 _enemies:
C8EA C8 C0 120 .word _enemie0
C8EC C8 D2 121 .word _enemie1
C8EE C8 E4 122 .word _enemie2
123 .area .text
124 .globl _init_enemies
03D5 125 _init_enemies:
03D5 34 20 [ 6] 126 pshs y
03D7 10 BE C8 EA [ 7] 127 ldy _enemies
03DB BD 02 0A [ 8] 128 jsr __Random
03DE E7 21 [ 5] 129 stb 1,y
03E0 10 BE C8 EA [ 7] 130 ldy _enemies
03E4 BD 02 0A [ 8] 131 jsr __Random
03E7 E7 A4 [ 4] 132 stb ,y
03E9 35 A0 [ 7] 133 puls y,pc
134 .globl _move_enemies
03EB 135 _move_enemies:
03EB 34 20 [ 6] 136 pshs y
03ED 10 BE C8 EA [ 7] 137 ldy _enemies
03F1 BE C8 EA [ 6] 138 ldx _enemies
03F4 E6 01 [ 5] 139 ldb 1,x
03F6 CB 02 [ 2] 140 addb #2
03F8 E7 21 [ 5] 141 stb 1,y
03FA 35 A0 [ 7] 142 puls y,pc
143 .globl _draw_enemies
03FC 144 _draw_enemies:
03FC BE C8 EA [ 6] 145 ldx _enemies
03FF BD 09 8D [ 8] 146 jsr _draw_sprite
0402 39 [ 5] 147 rts
ASxxxx Assembler V05.00 (Motorola 6809), page 1.
Hexidecimal [16-Bits]
Symbol Table
.__.$$$. = 2710 L | .__.ABS. = 0000 G
.__.CPU. = 0000 L | .__.H$L. = 0001 L
__Random **** GX | 3 _draw_enemies 0027 GR
_draw_sprite **** GX | 2 _enemie0 0040 GR
2 _enemie0_shape 003C GR | 2 _enemie0_vecto 0000 GR
2 _enemie1 0052 GR | 2 _enemie1_shape 004E GR
2 _enemie1_vecto 0046 GR | 2 _enemie2 0064 GR
2 _enemie2_shape 0060 GR | 2 _enemie2_vecto 0058 GR
2 _enemies 006A GR | 3 _init_enemies 0000 GR
3 _move_enemies 0016 GR
ASxxxx Assembler V05.00 (Motorola 6809), page 2.
Hexidecimal [16-Bits]
Area Table
[_CSEG]
0 _CODE size 0 flags C080
2 .data size 70 flags 100
3 .text size 2E flags 100
[_DSEG]
1 _DATA size 0 flags C0C0
.. docs/source/index.rst (BTETON/finance_ml)

.. finance_ml documentation master file, created by
sphinx-quickstart on Sat Dec 28 14:57:57 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to ``finance_ml``'s documentation!
===============================================
Python implementations of Machine Learning helper functions for Quantitative Finance based on a book,
`Advances in Financial Machine Learning`_, written by ``Marcos Lopez de Prado``.
.. _Advances in Financial Machine Learning: https://www.amazon.co.jp/Advances-Financial-Machine-Learning-English-ebook/dp/B079KLDW21
Installation
--------------
Execute the following command::
python setup.py install
Examples
--------------
labeling
~~~~~~~~~
Triple Barriers Labeling and CUSUM sampling::
from finance_ml.labeling import get_barrier_labels, cusum_filter
from finance_ml.stats import get_daily_vol
vol = get_daily_vol(close)
trgt = vol
timestamps = cusum_filter(close, vol)
labels = get_barrier_labels(close, timestamps, trgt, sltp=[1, 1],
num_days=1, min_ret=0, num_threads=16)
    print(labels.head())

This prints the following ``pandas.Series``::
2000-01-05 -1.0
2000-01-06 1.0
2000-01-10 -1.0
2000-01-11 1.0
2000-01-12 1.0
multiprocessing
~~~~~~~~~~~~~~~~
Parallel computing using the ``multiprocessing`` library.
Here is an example of applying a function to each element with parallelization::
import pandas as pd
import numpy as np
def apply_func(x):
return x ** 2
def func(df, timestamps, f):
df_ = df.loc[timestamps]
for idx, x in df_.items():
df_.loc[idx] = f(x)
return df_
df = pd.Series(np.random.randn(10000))
from finance_ml.multiprocessing import mp_pandas_obj
results = mp_pandas_obj(func, pd_obj=('timestamps', df.index),
num_threads=24, df=df, f=apply_func)
print(results.head())
Output::
0 0.449278
1 1.411846
2 0.157630
3 4.949410
4 0.601459
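The same partition-and-apply idea can be sketched with the standard library alone. This is illustrative only; it is independent of ``mp_pandas_obj``, and the helper name is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def apply_func(x):
    return x ** 2

def parallel_apply(values, func, num_threads=4):
    """Spread func over a pool of workers while preserving input order,
    mirroring the idea behind mp_pandas_obj (hypothetical helper)."""
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(func, values))

print(parallel_apply([1, 2, 3, 4], apply_func))  # [1, 4, 9, 16]
```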
Documentation for the Code
============================
.. toctree::
:maxdepth: 2
:caption: Contents:
Labeling
---------
.. automodule:: finance_ml.labeling.barriers
:members:
.. automodule:: finance_ml.labeling.sampling
:members:
.. automodule:: finance_ml.labeling.sides
:members:
.. automodule:: finance_ml.labeling.sizes
:members:
.. automodule:: finance_ml.labeling.utils
:members:
Multiprocessing
------------------
.. automodule:: finance_ml.multiprocessing.pandas
:members:
.. automodule:: finance_ml.multiprocessing.partition
:members:
.. automodule:: finance_ml.multiprocessing.utils
:members:
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. admin_manual/configuration/configuration_mail.rst (alerque/owncloud-documentation)

Sending Mail Notifications
==========================
ownCloud does not contain a full email program, but it does contain some
parameters that allow it to send email, e.g. password reset emails to the
users. This function relies on
the `PHPMailer library <http://sourceforge.net/projects/phpmailer/>`_. To
take advantage of this function, it needs to be configured properly.
Requirements
------------
Different requirements need to be met, depending on the environment you are
using and on the way you want to send email. You can choose between
**SMTP**, **PHP mail**, **Sendmail** and **qmail**.
Parameters
----------
All parameters need to be set in :file:`config/config.php`
SMTP
~~~~
If you want to send email using a local or remote SMTP server, it is necessary
to enter the name or IP address of the server, optionally followed by a
colon-separated port number, e.g. **:425**. If this value is not given, the
default port 25/tcp will be used unless you change that by modifying the
**mail_smtpport** parameter. Multiple servers can be entered, separated by
semicolons:
.. code-block:: php
<?php
"mail_smtpmode" => "smtp",
"mail_smtphost" => "smtp-1.server.dom;smtp-2.server.dom:425",
"mail_smtpport" => 25,
or
.. code-block:: php
<?php
"mail_smtpmode" => "smtp",
"mail_smtphost" => "smtp.server.dom",
"mail_smtpport" => 425,
If a malware or SPAM scanner is running on the SMTP server, it might be
necessary to increase the SMTP timeout to e.g. 30s:
.. code-block:: php
<?php
"mail_smtptimeout" => 30,
If the SMTP server accepts insecure connections, the default setting can be
used:
.. code-block:: php
<?php
"mail_smtpsecure" => '',
If the SMTP server only accepts secure connections you can choose between
the following two variants:
SSL
^^^
A secure connection will be initiated using the outdated SMTPS protocol
which uses the port 465/tcp:
.. code-block:: php
<?php
"mail_smtphost" => "smtp.server.dom:465",
"mail_smtpsecure" => 'ssl',
TLS
^^^
A secure connection will be initiated using the STARTTLS protocol which
uses the default port 25/tcp:
.. code-block:: php
<?php
"mail_smtphost" => "smtp.server.dom",
"mail_smtpsecure" => 'tls',
Finally, it is necessary to configure whether the SMTP server requires
authentication; if not, the default values can be kept as they are.
.. code-block:: php
<?php
"mail_smtpauth" => false,
"mail_smtpname" => "",
"mail_smtppassword" => "",
If SMTP authentication is required, you have to set the required username
and password, and can optionally choose between the authentication types
**LOGIN** (default) or **PLAIN**.
.. code-block:: php
<?php
"mail_smtpauth" => true,
"mail_smtpauthtype" => "LOGIN",
"mail_smtpname" => "username",
"mail_smtppassword" => "password",
PHP mail
~~~~~~~~
If you want to use PHP mail it is necessary to have an installed and working
email system on your server. Which program in detail is used to send email is
defined by the configuration settings in the **php.ini** file. (On \*nix
systems this will most likely be Sendmail.) ownCloud should be able to send
email out of the box.
.. code-block:: php
<?php
"mail_smtpmode" => "php",
"mail_smtphost" => "127.0.0.1",
"mail_smtpport" => 25,
"mail_smtptimeout" => 10,
"mail_smtpsecure" => "",
"mail_smtpauth" => false,
"mail_smtpauthtype" => "LOGIN",
"mail_smtpname" => "",
"mail_smtppassword" => "",
Sendmail
~~~~~~~~
If you want to use the well known Sendmail program to send email, it is
necessary to have an installed and working email system on your \*nix server.
The sendmail binary (**/usr/sbin/sendmail**) is usually part of that system.
ownCloud should be able to send email out of the box.
.. code-block:: php
<?php
"mail_smtpmode" => "sendmail",
"mail_smtphost" => "127.0.0.1",
"mail_smtpport" => 25,
"mail_smtptimeout" => 10,
"mail_smtpsecure" => "",
"mail_smtpauth" => false,
"mail_smtpauthtype" => "LOGIN",
"mail_smtpname" => "",
"mail_smtppassword" => "",
qmail
~~~~~
If you want to use the qmail program to send email, it is necessary to have an
installed and working qmail email system on your server. The sendmail binary
(**/var/qmail/bin/sendmail**) will then be used to send email. ownCloud should
be able to send email out of the box.
.. code-block:: php
<?php
"mail_smtpmode" => "qmail",
"mail_smtphost" => "127.0.0.1",
"mail_smtpport" => 25,
"mail_smtptimeout" => 10,
"mail_smtpsecure" => "",
"mail_smtpauth" => false,
"mail_smtpauthtype" => "LOGIN",
"mail_smtpname" => "",
"mail_smtppassword" => "",
Send a Test Email
-----------------
The only way to test your email configuration is to force a login failure,
because a function to send a test email has not been implemented yet.

First make sure that you are using a fully qualified domain name and not an
IP address in the ownCloud URL, like::
http://my-owncloud-server.domain.dom/owncloud/
The password reset function fetches the domain name from that URL to build the
email sender address, e.g.::
john@domain.dom
Next you need to enter your login and an *invalid* password. As soon as you
press the login button, the login mask reappears and an **I've forgotten my
password** link will be shown above the login field. Click on that link,
re-enter your login and press the **Reset password** button - that's all.
Trouble shooting
----------------
What if my web domain is different from my mail domain?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The default domain name used for the sender address is the hostname where your ownCloud installation is served.
If you have a different mail domain name you can override this behavior by setting the following configuration parameter:
.. code-block:: php
<?php
"mail_domain" => "example.com",
Now every mail sent by ownCloud, e.g. a password reset email, will have a sender address whose domain part looks like::
no-reply@example.com
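The derivation of the sender address can be sketched in Python (illustrative only; ownCloud does this in PHP internally, and the function name here is hypothetical):

```python
from urllib.parse import urlparse

def sender_address(owncloud_url, mail_domain=None):
    """Build the no-reply sender address as described above: take the
    domain from the ownCloud URL unless mail_domain overrides it."""
    domain = mail_domain or urlparse(owncloud_url).hostname
    return "no-reply@" + domain

# Domain taken from the installation URL:
print(sender_address("http://my-owncloud-server.domain.dom/owncloud/"))
# no-reply@my-owncloud-server.domain.dom

# With the mail_domain override from config.php:
print(sender_address("http://my-owncloud-server.domain.dom/owncloud/",
                     mail_domain="example.com"))  # no-reply@example.com
```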
How can I find out if an SMTP server is reachable?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the ping command to check the server availability::
ping smtp.server.dom
::
PING smtp.server.dom (ip-address) 56(84) bytes of data.
64 bytes from your-server.local.lan (192.168.1.10): icmp_req=1 ttl=64 time=3.64 ms
64 bytes from your-server.local.lan (192.168.1.10): icmp_req=2 ttl=64 time=0.055 ms
64 bytes from your-server.local.lan (192.168.1.10): icmp_req=3 ttl=64 time=0.062 ms
How can I find out if the SMTP server is listening on a specific tcp port?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An SMTP server is usually listening on port **25/tcp** (smtp) and/or, in
rare circumstances, also on the outdated port **465/tcp** (smtps).
You can use the telnet command to check whether a port is available::
telnet smtp.domain.dom 25
::
Trying 192.168.1.10...
Connected to smtp.domain.dom.
Escape character is '^]'.
220 smtp.domain.dom ESMTP Exim 4.80.1 Tue, 22 Jan 2013 22:28:14 +0100
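The same reachability check can also be scripted. Here is a small Python sketch (illustrative only, not part of the original documentation; the function name is hypothetical):

```python
import socket

def smtp_port_open(host, port=25, timeout=5):
    """Return True if a TCP connection to host:port succeeds,
    i.e. something is listening on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (requires network access):
# smtp_port_open("smtp.domain.dom", 25)
```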
How can I find out if an SMTP server supports the outdated SMTPS protocol?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A good indication that an SMTP server supports the SMTPS protocol is that it
is listening on port **465/tcp**. How this can be checked has been described
previously.
How can I find out if an SMTP server supports the TLS protocol?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An SMTP server usually announces the availability of STARTTLS right after a
connection has been established. This can easily be checked with the telnet
command. You need to enter the marked lines to get the information displayed::
telnet smtp.domain.dom 25
::
Trying 192.168.1.10...
Connected to smtp.domain.dom.
Escape character is '^]'.
220 smtp.domain.dom ESMTP Exim 4.80.1 Tue, 22 Jan 2013 22:39:55 +0100
EHLO your-server.local.lan # <<< enter this command
250-smtp.domain.dom Hello your-server.local.lan [ip-address]
250-SIZE 52428800
250-8BITMIME
250-PIPELINING
250-AUTH PLAIN LOGIN CRAM-MD5
250-STARTTLS # <<< STARTTLS is supported!
250 HELP
QUIT # <<< enter this command
221 smtp.domain.dom closing connection
Connection closed by foreign host.
How can I find out which authentication types/methods an SMTP server supports?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An SMTP server usually announces the available authentication types/methods
right after a connection has been established. This can easily be checked
with the telnet command. You need to enter the marked lines to get the
information displayed::
telnet smtp.domain.dom 25
::
Trying 192.168.1.10...
Connected to smtp.domain.dom.
Escape character is '^]'.
220 smtp.domain.dom ESMTP Exim 4.80.1 Tue, 22 Jan 2013 22:39:55 +0100
EHLO your-server.local.lan # <<< enter this command
250-smtp.domain.dom Hello your-server.local.lan [ip-address]
250-SIZE 52428800
250-8BITMIME
250-PIPELINING
250-AUTH PLAIN LOGIN CRAM-MD5 # <<< available Authentication types
250-STARTTLS
250 HELP
QUIT # <<< enter this command
221 smtp.domain.dom closing connection
Connection closed by foreign host.
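Picking the marked ``250-AUTH`` line out of such a reply can also be scripted; here is a small illustrative Python sketch (hypothetical function, not part of the ownCloud documentation):

```python
def auth_methods(ehlo_reply):
    """Extract the advertised AUTH mechanisms from an EHLO reply
    (illustrative parser)."""
    for line in ehlo_reply.splitlines():
        # Strip the "250-" / "250 " response-code prefix, if present.
        text = line[4:].strip() if line[:4] in ("250-", "250 ") else line.strip()
        if text.upper().startswith("AUTH"):
            return text.split()[1:]
    return []

reply = """250-smtp.domain.dom Hello your-server.local.lan
250-SIZE 52428800
250-AUTH PLAIN LOGIN CRAM-MD5
250-STARTTLS
250 HELP"""
print(auth_methods(reply))  # ['PLAIN', 'LOGIN', 'CRAM-MD5']
```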
Enable Debug Mode
~~~~~~~~~~~~~~~~~
If you are still not able to send email it might be useful to activate
further debug messages by setting the following parameter. Right after
you have pressed the **Reset password** button, as described before, a
lot of **SMTP -> get_lines(): ...** messages will be written on the
screen.
.. code-block:: php
<?php
    "mail_smtpdebug" => true,
.. src/source/index.rst (tinyos-io/tinyos-docs)

.. TinyOS documentation master file, created by
sphinx-quickstart on Sun Dec 22 18:34:39 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to TinyOS's documentation!
==================================
.. toctree::
:maxdepth: 2
:caption: Contents:
introduction/index
installation/index
tutorials/index
teps/index
porting/index
Indices and tables
==================
* :ref:`genindex`
* :ref:`search`
.. admin_guide/docker.rst (kanboard/documentation-tr)
How to run Kanboard with Docker?
================================

Kanboard can easily be run with `Docker <https://www.docker.com>`__.

The disk image size is about **70MB** and includes:

- `Alpine Linux <http://alpinelinux.org/>`__
- `The process supervisor S6 <http://skarnet.org/software/s6/>`__
- Nginx
- PHP 7

The Kanboard cronjob runs every night at midnight. URL rewriting is
enabled in the bundled configuration file.

While the container is running, memory usage is around **30MB**.

Use the stable version
----------------------

Use the **stable** tag to get the most stable version of Kanboard:

.. code:: bash

    docker pull kanboard/kanboard
    docker run -d --name kanboard -p 80:80 -t kanboard/kanboard:stable

Use the development version (automated build)
---------------------------------------------

Every new commit to the repository triggers a new build on `Docker
Hub <https://registry.hub.docker.com/u/kanboard/kanboard/>`__.

.. code:: bash

    docker pull kanboard/kanboard
    docker run -d --name kanboard -p 80:80 -t kanboard/kanboard:latest

The **latest** tag is the **development version** of Kanboard; use it
at your own risk.

Build your own Docker image
---------------------------

There is a ``Dockerfile`` in the Kanboard repository for building your
own image. Clone the Kanboard repository and run the following command:

.. code:: bash

    docker build -t youruser/kanboard:master .

or

.. code:: bash

    make docker-image

To run the container in the background on port 80:

.. code:: bash

    docker run -d --name kanboard -p 80:80 -t youruser/kanboard:master

Volumes
-------

You can mount 2 volumes in your container:

- Data folder: ``/var/www/app/data``
- Plugins folder: ``/var/www/app/plugins``

Use the ``-v`` flag to mount a volume on the host machine, as described
in the `official Docker documentation
<https://docs.docker.com/storage/volumes/>`__.

Upgrading your container
------------------------

- Pull the new image
- Remove the old container
- Start a new container with the same volumes

Environment variables
---------------------

Environment variables can be useful when Kanboard is used as a
container (Docker).

============  ================================================================
Variable      Description
============  ================================================================
DATABASE_URL  ``[database type]://[username]:[password]@[host]:[port]/[database name]``,
              example: ``postgres://foo:foo@myserver:5432/kanboard``
DEBUG         Enable/disable debug mode: "true" or "false"
============  ================================================================
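For illustration only (not part of the Kanboard documentation), such a ``DATABASE_URL`` can be decomposed with Python's standard library:

```python
from urllib.parse import urlparse

# Example value from the table above:
url = "postgres://foo:foo@myserver:5432/kanboard"
parts = urlparse(url)

print(parts.scheme)            # database type: postgres
print(parts.username)          # foo
print(parts.password)          # foo
print(parts.hostname)          # myserver
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # database name: kanboard
```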
Configuration files
-------------------

- There is already a custom configuration file inside the container,
  located at ``/var/www/app/config.php``.
- You can store your own configuration file on the data volume:
  ``/var/www/app/data/config.php``.

Resources
---------

- `Official Kanboard image <https://registry.hub.docker.com/u/kanboard/kanboard/>`__
- `Docker documentation <https://docs.docker.com/>`__
- `Dockerfile, stable version <https://github.com/kanboard/docker>`__
- `Dockerfile, dev version <https://github.com/kanboard/kanboard/blob/master/Dockerfile>`__
.. docs/customization/api.rst (olimination/Sylius)

Customizing API
===============
We are using the API Platform to create all endpoints in Sylius API.
API Platform allows configuring endpoints with ``yaml`` and ``xml`` files or with annotations.
In this guide, you will learn how to customize Sylius API endpoints using ``xml`` configuration.
How to prepare project for customization?
=========================================
If your project was created before v1.10, make sure your API Platform config follows the one below:
.. code-block:: yaml
# config/packages/api_platform.yaml
api_platform:
mapping:
paths:
- '%kernel.project_dir%/vendor/sylius/sylius/src/Sylius/Bundle/ApiBundle/Resources/config/api_resources'
- '%kernel.project_dir%/config/api_platform'
- '%kernel.project_dir%/src/Entity'
patch_formats:
json: ['application/merge-patch+json']
swagger:
versions: [3]
How to add an additional endpoint?
==================================
Let's assume that you want to add a new endpoint to the ``Order`` resource that will be dispatching a command.
If you want to customize any API resource, you need to copy the entire configuration of this resource from
``%kernel.project_dir%/vendor/sylius/sylius/src/Sylius/Bundle/ApiBundle/Resources/config/api_resources/`` to ``%kernel.project_dir%/config/api_platform``.
Add the following configuration in the config copied to ``config/api_platform/Order.xml``:
.. code-block:: xml
<collectionOperations>
<collectionOperation name="custom_operation">
<attribute name="method">POST</attribute>
<attribute name="path">/shop/orders/custom-operation</attribute>
<attribute name="messenger">input</attribute>
<attribute name="input">App\Command\CustomCommand</attribute>
</collectionOperation>
</collectionOperations>
And that's all: now you have a new endpoint with your custom logic.
.. tip::
Read more about API Platform endpoint configuration `here <https://api-platform.com/docs/core/operations/>`_
How to remove an endpoint?
==========================
Let's assume that your shop is offering only digital products. Therefore, while checking out,
your customers do not need to choose a shipping method for their orders.
Thus you will need to modify the configuration file of the ``Order`` resource and remove the shipping method choosing endpoint from it.
To remove the endpoint, you only need to delete the unnecessary configuration from your ``config/api_platform/Order.xml``, which is a copied configuration file that overwrites the one from Sylius.
.. code-block:: xml
<!-- delete this configuration -->
<itemOperation name="shop_select_shipping_method">
<!-- ... -->
</itemOperation>
How to rename an endpoint's path?
=================================
If you want to change an endpoint's path, you just need to change the ``path`` attribute in your config:
.. code-block:: xml
<itemOperations>
<itemOperation name="admin_get">
<attribute name="method">GET</attribute>
<attribute name="path">/admin/orders/renamed-path/{id}</attribute>
</itemOperation>
</itemOperations>
How to modify the endpoints prefixes?
=====================================
Let's assume that you want to have your own prefixes on paths (for example to be more consistent with the rest of your application).
As the first step you need to change the ``paths`` or ``route_prefix`` attribute in all needed resources.
The next step is to modify the security configuration in ``config/packages/security.yaml``, you need to overwrite the parameter:
.. code-block:: yaml

    parameters:
        sylius.security.new_api_shop_route: "%sylius.security.new_api_route%/retail"
.. warning::
    Changing the prefix without updating the security configuration can expose confidential data (like customers' addresses).
After these two steps you can start to use the endpoints with their new prefixes.
.. API.rst (tomerten/ctef2py)

***
API
***
.. automodule:: ctef2py
:members:
.. include:: ../ctef2py/f90_fmohl/fmohl.rst
.. include:: ../ctef2py/f90_getgauss/getgauss.rst
.. doc/sphinx/phk/10goingon50.rst (thedave42-org/varnish-cache)

..
Copyright (c) 2016 Varnish Software AS
SPDX-License-Identifier: BSD-2-Clause
See LICENSE file for full text of license
.. _phk_10goingon50:
========================
Varnish - 10 going on 50
========================
Ten years ago, Dag-Erling and I were busy hashing out the big lines
of Varnish.
Hashing out had started on a blackboard at University of Basel
during the `EuroBSDcon 2005 <http://2005.eurobsdcon.org/>`_ conference,
and had continued in email and IRC ever since.
At some point in February 2006 Dag-Erling laid down the foundations of
our Subversion and source tree.
The earliest fragment which has survived the conversion to Git is
subversion commit number 9::
commit 523166ad2dd3a65e3987f13bc54f571f98453976
Author: Dag Erling Smørgrav <des@des.no>
Date: Wed Feb 22 14:31:39 2006 +0000
Additional subdivisions.
We consider this the official birth-certificate of the Varnish Cache
FOSS project, and therefore we will celebrate the 10 year birthday
of Varnish in a couple of weeks.
We're not sure exactly how and where we will celebrate this, but
follow Twitter user `@varnishcache <https://twitter.com/varnishcache>`_
if you don't want to miss the partying.
--------
VCLOCC1
--------
One part of the celebration, somehow, sometime, will be the "VCL
Obfuscated Code Contest #1" in the same spirit as the `International
Obfuscated C Code Contest <http://www.ioccc.org/>`_.
True aficionados of Obfuscated Code will also appreciate this
amazing `Obfuscated PL/1 <http://www.multicians.org/proc-proc.html>`_.
The official VCLOCC1 contest rules are simple:
* VCL code must work with Varnish 4.1.1
* As many Varnishd instances as you'd like.
* No inline-C allowed
* Any VMOD you want is OK
* You get to choose the request(s) to send to Varnishd
* If you need backends, they must be simulated by varnishd (4.1.1) instances.
* *We* get to publish the winning entry on the Varnish project home-page.
The *only* thing which counts is the amazing/funny/brilliant
VCL code *you* write and what it does. VMODs and backends are just
scaffolding which the judges will ignore.
We will announce the submission deadline one month ahead of time, but
you are more than welcome to start already now.
--------
Releases
--------
Our 10 year anniversary was a good excuse to take stock and look at
the way we work, and changes are and will be happening.
Like any respectable FOSS project, the Varnish project has never been
accused, or guilty, of releasing on the promised date.
Not even close.
With 4.1 not even close to close.
Having been around that block a couple of times, (*cough* FreeBSD 5.0 *cough*)
I think I know why and I have decided to put a stop to it.
Come hell or high water [#f1]_, Varnish 5.0 will be released September
15th 2016.
And the next big release, whatever we call it, will be middle of
March 2017, and until we change our mind, you can trust a major
release of Varnish to happen every six months.
Minor releases, typically bugfixes, will be released as need arise,
and these should just be installable with no configuration changes.
Sounds wonderful, doesn't it ? Now you can plan your upgrades.
But nothing comes free: Until we are near September, we won't be able
to tell you what Varnish 5 contains.
We have plans and ideas for what *should* be there, and we will work
to reach those milestones, but we will not hold the release for "just this
one more feature" if they are not ready.
If it is in on September 15th, it will be in the release; if not, it won't.
And since the next release is guaranteed to come six months later,
it's not a catastrophe to miss the deadline.
So what's the problem and why is this draconian solution better ?
Usually, when FOSS projects start, they are started by "devops",
Varnish certainly did: Dag-Erling ran a couple of sites
with Varnish, as did Kristian, and obviously Anders and Audun of
VG did as well, so finding out if you improved or broke things
during development didn't take long.
But as a project grows, people gravitate from "devops" to "dev",
and suddenly we have to ask somebody else to "please test -trunk"
and these people have their own calendars, and are not sure why
they should test, or even if they should test, much less what they
should be looking for while they test, because they have not been
part of the development process.
In all honesty, going from Varnish1 to Varnish4 the amount of
real-life testing our releases have received *before* being released
has consistently dropped [#f2]_.
So we're moving the testing on the other side of the release date,
because the people who *can* live-test Varnish prefer to have a
release to test.
We'll run all the tests we can in our development environments and
we'll beg and cajole people with real sites into testing also, but
we won't wait for weeks and months for it to happen, like we did
with the 4.1 release.
All this obviously changes the dynamics of the project, and if we
find out it is a disaster, we'll change our mind.
But until then: Two major releases a year, as clock-work, mid-September
and mid-March.
----------------
Moving to github
----------------
We're also moving the project to github. We're trying to find out
a good way to preserve the old Trac contents, and once we've
figured that out, we'll pull the handle on the transition.
Trac is starting to creak in the joints and in particular we're
sick and tired of defending it against spammers. Moving to github
makes that Somebody Else's Problem.
We also want to overhaul the project home-page and try to get
a/the wiki working better.
We'll keep you posted about all this when and as it happens.
--------------------------------------------
We were hip before it was hip to be hipsters
--------------------------------------------
Moving to github also means moving into a different culture.
Github's statistics are neat, but whenever you start to measure
something, it becomes a parameter for optimization and competition,
and there are people out there who compete on github statistics.
In one instance the "game" is simply to submit changes, no matter
how trivial, to as many different projects as you can manage in
order to claim that you "contribute to a lot of FOSS projects".
There is a similar culture of "trophy hunting" amongst so-called
"security-researchers" - who has most CVE's to their name? It
doesn't seem to matter to them how vacuous the charge or how
theoretical the "vulnerability" is, a CVE is a CVE to them.
I don't want to play that game.
If you are a contributor to Varnish, you should already have the
nice blue T-shirt and the mug to prove it. (Thanks Varnish-Software!)
If you merely stumble over a spelling mistake, you merely
stumbled over a spelling mistake, and we will happily
correct it, and put your name in the commit message.
But it takes a lot more than fixing a spelling mistake to
become recognized as "a Varnish contributor".
Yeah, we're old and boring.
Speaking of which...
----------------------------
Where does 50 come into it ?
----------------------------
On January 20th I celebrated my 50 year birthday, and this was a
much more serious affair than I had anticipated: For the first
time in my life I have received a basket with wine and flowers on
my birthday.
I also received books and music from certain Varnish users,
much appreciated guys!
Despite numerically growing older I will insist, until the day I
die, that I'm a man of my best age.
That doesn't mean I'm not changing.
To be honest, being middle-aged sucks.
Your body starts creaking and you get frustrated seeing people make
mistakes you warned them against.
But growing older also absolutely rulez, because your age allows
you to appreciate that you live in a fantastic future with a lot
of amazing changes - even if it will take a long time before
progress goes too far.
There does seem to be an increasing tendency to want the kids off your
lawn, but I think I can control that.
But if not I hereby give them permission to steal my apples and
yell back at me, because I've seen a lot of men, in particular in
the technical world, grow into bitter old men who preface every
utterance with "As *I* already said *MANY* years ago...", totally
oblivious to how different the world has become, how wrong their
diagnosis is and how utterly useless their advice is.
I don't want to end up like that.
From now on my basic assumption is that I'm an old ass who is part
of the problem, and that being part of the solution is something I
have to work hard for, rather than the other way around.
In my case, the two primary physiological symptoms of middle age are
that after 5-6 hours my eyes tire from focusing on the monitor and
that my mental context-switching for big contexts is slower than
it used to be.
A couple of years ago I started taking "eye-breaks" after lunch.
Get away from the screen, preferably outside where I could rest my
eyes on stuff further away than 40cm, then later in the day
come back and continue hacking.
Going forward, this pattern will become more pronounced. The amount
of hours I work will be the same, but I will be splitting the workday
into two halves.
You can expect me to be at my keyboard morning (08-12-ish EU time)
and evening (20-24-ish EU time) but I may be doing other stuff,
away from the keyboard and screen, during the afternoon.
Starting this year I have also changed my calendar.
Rather than working on various projects and for various customers
in increments of half days, I'm lumping things together in bigger
units of days and weeks.
Anybody who knows anything about process scheduling can see that
this will increase throughput at the cost of latency.
The major latency impact is that one of the middle weeks of each
month I will not be doing Varnish. On the other hand, all
the weeks I do work on Varnish will now be full weeks.
And with those small adjustments, the Varnish project and I are
ready to tackle the next ten years.
Let me conclude with a big THANK YOU! to all Contributors and Users
of Varnish, for making the first 10 years more amazing than I ever
thought FOSS development could be.
Much Appreciated!
*phk*
.. rubric:: Footnotes
.. [#f1] I've always wondered about that expression. Is the assumption that
if *both* hell *and* high water arrive at the same time they will cancel
out ?
.. [#f2] I've seriously considered if I should start a porn-site, just to
test Varnish, but the WAF of that idea was well below zero.
| 36.665505 | 78 | 0.754728 |
1b4e4869c0e14ada0195f0589978f46fe4493576 | 92 | rst | reStructuredText | doc/src/modules/physics/quantum/represent.rst | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
] | 8,323 | 2015-01-02T15:51:43.000Z | 2022-03-31T13:13:19.000Z | doc/src/modules/physics/quantum/represent.rst | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
] | 15,102 | 2015-01-01T01:33:17.000Z | 2022-03-31T22:53:13.000Z | doc/src/modules/physics/quantum/represent.rst | utkarshdeorah/sympy | dcdf59bbc6b13ddbc329431adf72fcee294b6389 | [
"BSD-3-Clause"
] | 4,490 | 2015-01-01T17:48:07.000Z | 2022-03-31T17:24:05.000Z | =========
Represent
=========
.. automodule:: sympy.physics.quantum.represent
:members:
| 13.142857 | 47 | 0.586957 |
c34dc7b9900db4deeb69764f112b1fa25e5db341 | 281 | rst | reStructuredText | mpisppy/utils/listener_util/doc/index.rst | Matthew-Signorotti/mpi-sppy | 5c6b4b8cd26af517ff09706d11751f2fb05b1b5f | [
"BSD-3-Clause"
] | 2 | 2020-06-05T14:31:46.000Z | 2020-09-29T20:08:05.000Z | mpisppy/utils/listener_util/doc/index.rst | Matthew-Signorotti/mpi-sppy | 5c6b4b8cd26af517ff09706d11751f2fb05b1b5f | [
"BSD-3-Clause"
] | 22 | 2020-06-06T19:30:33.000Z | 2020-10-30T23:00:58.000Z | mpisppy/utils/listener_util/doc/index.rst | Matthew-Signorotti/mpi-sppy | 5c6b4b8cd26af517ff09706d11751f2fb05b1b5f | [
"BSD-3-Clause"
] | 6 | 2020-06-06T17:57:38.000Z | 2020-09-18T22:38:19.000Z | listener_util
==========
A utility for listener-worker asynchronous python. Develope for
the PH and APH use cases.
.. toctree::
:maxdepth: 2
touse.rst
philosophy.rst
api.rst
Indices and Tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 14.05 | 63 | 0.626335 |
ded025636bd9b0b633868c3bf97f7d853b79d0b4 | 564 | rst | reStructuredText | README.rst | msabramo/invoke | 186a5e4ed7afa0a96afbeaa763e566630b20345f | [
"BSD-2-Clause"
] | null | null | null | README.rst | msabramo/invoke | 186a5e4ed7afa0a96afbeaa763e566630b20345f | [
"BSD-2-Clause"
] | null | null | null | README.rst | msabramo/invoke | 186a5e4ed7afa0a96afbeaa763e566630b20345f | [
"BSD-2-Clause"
] | null | null | null | Invoke is a Python (2.6+) task execution and management tool. It's similar to
pre-existing tools in other languages (such as Make or Rake) but lets you
interface easily with Python codebases, and of course write your tasks
themselves in straight-up Python.
For documentation, including detailed installation information, please see
http://pyinvoke.org. Post-install usage information may be found in ``invoke
--help``.
You can install the `development version
<https://github.com/pyinvoke/invoke/tarball/master#egg=invoke-dev>`_ via ``pip
install invoke==dev``.
| 43.384615 | 78 | 0.785461 |
7812bf13b919496305fc8b76702ee7c920c67da3 | 638 | rst | reStructuredText | README.rst | yoophi/flask-facebook-login-sample | 0c1a8e5865e749d426dad3f01c5ab275402affd8 | [
"MIT"
] | null | null | null | README.rst | yoophi/flask-facebook-login-sample | 0c1a8e5865e749d426dad3f01c5ab275402affd8 | [
"MIT"
] | null | null | null | README.rst | yoophi/flask-facebook-login-sample | 0c1a8e5865e749d426dad3f01c5ab275402affd8 | [
"MIT"
] | null | null | null | ==========================
Flask Facebook Login Sample
==========================
Flask Facebook Login Sample
* Free software: MIT license
* Documentation: https://app.readthedocs.io.
* Real Python Link: https://realpython.com/flask-google-login/
Features
--------
* TODO
Links
-----
- `Facebook developers page <http://developers.facebook.com/>`__
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| 18.764706 | 103 | 0.680251 |
e6a9a694d2dfadacd559bdb161a454460762ab76 | 4,949 | rst | reStructuredText | docs/source/about/other_projects.rst | zachcran/CMakeDev | c9dd2bd2abc5fc7c6a1ba6734565f40885791ca9 | [
"Apache-2.0"
] | 1 | 2022-03-08T17:58:59.000Z | 2022-03-08T17:58:59.000Z | docs/source/about/other_projects.rst | zachcran/CMakeDev | c9dd2bd2abc5fc7c6a1ba6734565f40885791ca9 | [
"Apache-2.0"
] | 3 | 2019-11-08T20:01:29.000Z | 2022-02-03T21:51:51.000Z | docs/source/about/other_projects.rst | zachcran/CMakeDev | c9dd2bd2abc5fc7c6a1ba6734565f40885791ca9 | [
"Apache-2.0"
] | 2 | 2022-02-03T20:27:00.000Z | 2022-03-09T01:34:42.000Z | ***********************
Other Relevant Projects
***********************
In order to put CPP in perspective, this chapter describes some other relevant
projects. The projects are listed in alphabetical order.
Autocmake
=========
Autocmake's home is `here <https://github.com/dev-cafe/autocmake>`_.
Build, Link, Triumph (BLT)
==========================
BLT's home is `here <https://github.com/llnl/blt>`_.
* Focus is on simplifying writing cross-platform CMake scripts
* Wrappers around common CMake functions like ``add_library``.
* No support for building external dependencies.
Cinch
=====
Cinch's home is `here <https://github.com/laristra/cinch>`_.
CMake++
=======
CMake++'s home page is `here <https://github.com/toeb/cmakepp>`_. CMake++
contains an abundance of CMake extensions. Many of those extensions have direct
overlap with extensions that are part of CMakePP. Features include (among many):
- maps
- objects
- tasks/promises
Unfortunately, CMake++ has largely been developed by a single
developer, and development and support seem to have stopped in 2017.
Probably the biggest problem with CMake++ is the lack of documentation. While
there is some high-level documentation in place, there is little to no API
documentation. This makes it very challenging for a new developer to figure out
what is going on. We considered forking and expanding on CMake++, but ultimately
decided that the time it would take us to come to grips with CMake++ is on par
with the time it would take us to write CMakePP. History has shown that we
massively underestimated the time to write CMakePP, but alas we're committed
now...
Hunter
======
Hunter main page is available `here <https://github.com/ruslo/hunter>`_. As a
disclaimer, this section may come off as harsh on the Hunter project, but that's
just because we initially tried Hunter before deciding to write CMakePP. Hence,
we have more experience with Hunter than many other projects on this list.
Like CMakePP, Hunter is written entirely in CMake and will locate and build
dependencies for you automatically. Admittedly, Hunter was great at first
and seemed to fit our needs, but then a number of problems began to manifest:
- Documentation
- Hunter's documentation is lacking particularly when it comes to more
advanced usage cases. It's also hard to read.
- No support for virtual dependencies
- In all fairness, as of the time of writing, there were open issues on GitHub
regarding this.
- Poor support for continuous integration
- Hunter assumed from the start that projects would depend on a particular
version of a dependency. With a lot of projects moving to a continuous
integration model, "releases" are not always available. Hunter's solution
to the lack of releases is to use git submodules, but if you've ever tried
using git submodules you know that they are never the solution...
- Difficult to control
- Hunter is a package manager and thus it assumes that it knows about all of
the packages on your system. In turn if Hunter didn't build a dependency it
won't use it (or at least I could not figure out how to override this
behavior).
- Coupling of Hunter to the build recipes (``hunter/cmake/projects`` directory)
- The build recipes for dependencies are maintained by Hunter. In order to
make sure Hunter can build a dependency one needs to modify Hunter's
source code. While having a centralized place for recipes benefits the
community, having that place be Hunter's source makes Hunter appear
unstable, clutters the GitHub issues, and places a lot of responsibility on
the maintainers of the Hunter repo.
- Only supporting "official" recipes
- Admittedly this is related to the above problem, but Hunter will only use
recipes that are stored in the centralized Hunter repo. This makes it hard
(again git submodules) to rely on private dependencies and hard to use Hunter
until new dependencies are added to the repo.
- Requires patching repos
- Hunter requires projects to make config files and for those files to work
correctly. The problem is what do you do if a repo doesn't do that?
Hunter's solution is that you should fork the offending repo, and then patch
it. While this seems good at first, the problem is you introduce an
additional coupling. Let's say the official repo adds a new feature and you
want to use it. You're stuck waiting for the fork to patch the new version
(and like the recipes, forks are maintained by the Hunter organization so
you can't just use your fork). The other problem is what happens when a
user is trying to make your project use their pre-built version of the
dependency? Odds are they got that version from the official repo so it
won't work anyways.
Just A Working Setup (JAWS)
===========================
JAWS's home is `here <https://github.com/DevSolar/jaws>`_.
| 40.900826 | 81 | 0.743989 |
f120b223e3457b840ae7d10fcf9fb50b93ea1122 | 175 | rst | reStructuredText | docs/source/cli.rst | vedashree29296/BentoML | 79f94d543a0684e04551207d102a2d254b770ad3 | [
"Apache-2.0"
] | null | null | null | docs/source/cli.rst | vedashree29296/BentoML | 79f94d543a0684e04551207d102a2d254b770ad3 | [
"Apache-2.0"
] | null | null | null | docs/source/cli.rst | vedashree29296/BentoML | 79f94d543a0684e04551207d102a2d254b770ad3 | [
"Apache-2.0"
] | null | null | null | CLI Reference
=============
.. image:: https://static.scarf.sh/a.png?x-pxid=0beb35eb-7742-4dfb-b183-2228e8caf04c
.. click:: bentoml.cli:cli
:prog: bentoml
:show-nested:
| 19.444444 | 84 | 0.657143 |
98e31088ac66d6ebde68c290f24eb53c3721e6d9 | 1,258 | rst | reStructuredText | docs/installation.rst | QAtestinc/testspace-colab | 2ff6bc93be1083f71b47e593c4d4db35fb129802 | [
"MIT"
] | 2 | 2021-09-03T12:12:25.000Z | 2021-09-03T12:13:37.000Z | docs/installation.rst | QAtestinc/testspace-colab | 2ff6bc93be1083f71b47e593c4d4db35fb129802 | [
"MIT"
] | null | null | null | docs/installation.rst | QAtestinc/testspace-colab | 2ff6bc93be1083f71b47e593c4d4db35fb129802 | [
"MIT"
] | null | null | null | .. highlight:: shell
============
Installation
============
Dev release
-----------
.. code-block:: console
$ pip install --pre git+https://github.com/testspace-com/testspace-colab
Stable release
--------------
**This project is not published on pypi but on** https://m.devpi.net/testspace
To install testspace-colab, run this command in your terminal:
.. code-block:: console
$ pip install -i https://m.devpi.net/testspace/prod/+simple/ testspace_colab
This is the preferred method to install testspace-colab, as it will always install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
From sources
------------
The sources for testspace-colab can be downloaded from the `Github repo`_.
You can either clone the public repository:
.. code-block:: console
$ git clone git://github.com/testspace-com/testspace-colab
Once you have a copy of the source, you can install it with:
.. code-block:: console
$ cd testspace-colab
$ pip install -e .
.. _Github repo: https://github.com/testspace-com/testspace-colab
| 23.296296 | 114 | 0.701908 |
89e569b3599c5b42e10cb1e2a058ebe43835cf38 | 1,011 | rst | reStructuredText | sesame/sesame-component/src/docs/isda-curve-snapshot-calibration-tool.rst | UbuntuEvangelist/OG-Platform | 0aac9675ff92fd702d2e53a991de085388bfdc83 | [
"Apache-2.0"
] | 12 | 2017-03-10T13:56:52.000Z | 2021-10-03T01:21:20.000Z | sesame/sesame-component/src/docs/isda-curve-snapshot-calibration-tool.rst | UbuntuEvangelist/OG-Platform | 0aac9675ff92fd702d2e53a991de085388bfdc83 | [
"Apache-2.0"
] | 1 | 2021-08-02T17:20:43.000Z | 2021-08-02T17:20:43.000Z | sesame/sesame-component/src/docs/isda-curve-snapshot-calibration-tool.rst | UbuntuEvangelist/OG-Platform | 0aac9675ff92fd702d2e53a991de085388bfdc83 | [
"Apache-2.0"
] | 16 | 2017-01-12T10:31:58.000Z | 2019-04-19T08:17:33.000Z | ====================================
ISDA Curve Snapshot Calibration Tool
====================================
The curve calibration tool can be used to validate your ISDA curve data. It
simply loads the snapshots referenced on the command line and iterates over
the stored curves calibrating them one by one. Results are dumped to the
console in a tabular CSV format, first for yield curves and then for credit
curves.
Tool overview
=============
To run the tool, you will need to have loaded a ``YieldCurveDataSnapshot``
and a ``CreditCurveDataSnapshot``. To view available snapshots, navigate to
the green screens page for NamedSnapshots (normally on
http://server:8080/jax/snapshots).
An example invocation is as follows::
IsdaCurveSnapshotCalibrationTool -c http://localhost:8080/jax \
-cs CompositesByConvention_20140806 \
-ys YieldCurves_20140806 \
-l com/opengamma/util/info-logback.xml
| 38.884615 | 76 | 0.647873 |
1280c3cb230864aa9b348713b61489ec76aab4e5 | 1,092 | rst | reStructuredText | docs/usage.rst | jfjlaros/trie | b12fe42162be9f43aa245ce000794dd395572c0e | [
"RSA-MD"
] | null | null | null | docs/usage.rst | jfjlaros/trie | b12fe42162be9f43aa245ce000794dd395572c0e | [
"RSA-MD"
] | null | null | null | docs/usage.rst | jfjlaros/trie | b12fe42162be9f43aa245ce000794dd395572c0e | [
"RSA-MD"
] | null | null | null | Usage
=====
Include the header file to use the trie library.
.. code-block:: cpp
#include "trie.tcc"
The library provides the ``Trie`` class, which takes two template arguments:
the first determines the alphabet size, the second the type of the leaf.
.. code-block:: cpp
Trie<4, Leaf> trie;
A word is a vector of symbols; ``add`` stores it in the trie and returns a pointer to its leaf.
.. code-block:: cpp
vector<uint8_t> word = {0, 1, 2, 3};
trie.add(word);
``find`` returns a pointer to the node associated with a word.
.. code-block:: cpp
Node<4, Leaf>* node = trie.find(word);
``remove`` deletes a word from the trie.
.. code-block:: cpp
trie.remove(word);
``walk`` visits every word stored in the trie.
.. code-block:: cpp
for (Result<Leaf> result: trie.walk()) {
// result.leaf : Leaf node.
// result.path : Word leading up to the leaf.
}
``hamming`` visits every stored word within a given Hamming distance (here 1) of ``word``.
.. code-block:: cpp
for (Result<Leaf> result: trie.hamming(word, 1)) {
// result.leaf : Leaf node.
// result.path : Word leading up to the leaf.
}
Custom data can be attached to words by deriving from ``Leaf``.
.. code-block:: cpp
struct MyLeaf : Leaf {
  vector<size_t> lines;
};
For example, to record the line on which each word occurs (using a ``Trie<4, MyLeaf>``):
.. code-block:: cpp
size_t line = 0;
for (vector<uint8_t> word: words) {
MyLeaf* leaf = trie.add(word);
leaf->lines.push_back(line++);
}
| 18.827586 | 76 | 0.60348 |
822282d63835a280799513f40d62461d360c246c | 4,088 | rst | reStructuredText | docs/source/user_guide/model/general/dgcf.rst | ValerieYang99/RecBole | 19ea101b300fbf31bbc79d8efc80c65926834488 | [
"MIT"
] | 1,773 | 2020-11-04T01:22:11.000Z | 2022-03-31T08:05:41.000Z | docs/source/user_guide/model/general/dgcf.rst | chenyushuo/RecBole | f04084b8d2cffcb79eb9e4b21325f8f6c75c638e | [
"MIT"
] | 378 | 2020-11-05T02:42:27.000Z | 2022-03-31T22:57:04.000Z | docs/source/user_guide/model/general/dgcf.rst | chenyushuo/RecBole | f04084b8d2cffcb79eb9e4b21325f8f6c75c638e | [
"MIT"
] | 354 | 2020-11-04T01:37:09.000Z | 2022-03-31T10:39:32.000Z | DGCF
===========
Introduction
---------------------
`[paper] <https://dl.acm.org/doi/10.1145/3397271.3401137>`_
**Title:** Disentangled Graph Collaborative Filtering
**Authors:** Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua
**Abstract:** Learning informative representations of users and items from the
interaction data is of crucial importance to collaborative filtering
(CF). Present embedding functions exploit user-item relationships
to enrich the representations, evolving from a single user-item
instance to the holistic interaction graph. Nevertheless, they largely
model the relationships in a uniform manner, while neglecting
the diversity of user intents on adopting the items, which could
be to pass time, for interest, or shopping for others like families.
Such uniform approach to model user interests easily results in
suboptimal representations, failing to model diverse relationships
and disentangle user intents in representations.
In this work, we pay special attention to user-item relationships
at the finer granularity of user intents. We hence devise a new
model, Disentangled Graph Collaborative Filtering (DGCF), to
disentangle these factors and yield disentangled representations.
Specifically, by modeling a distribution over intents for each
user-item interaction, we iteratively refine the intent-aware
interaction graphs and representations. Meanwhile, we encourage
independence of different intents. This leads to disentangled
representations, effectively distilling information pertinent to each
intent. We conduct extensive experiments on three benchmark
datasets, and DGCF achieves significant improvements over several
state-of-the-art models like NGCF, DisenGCN, and
MacridVAE. Further analyses offer insights into the advantages
of DGCF on the disentanglement of user intents and interpretability
of representations.
.. image:: ../../../asset/dgcf.jpg
:width: 700
:align: center
Running with RecBole
-------------------------
**Model Hyper-Parameters:**
- ``embedding_size (int)`` : The embedding size of users and items. Defaults to ``64``.
- ``n_factors (int)`` : The number of factors for disentanglement. Defaults to ``4``.
- ``n_iterations (int)`` : The number of iterations for each layer. Defaults to ``2``.
- ``n_layers (int)`` : The number of reasoning layers. Defaults to ``1``.
- ``reg_weight (float)`` : The L2 regularization weight. Defaults to ``1e-03``.
- ``cor_weight (float)`` : The correlation loss weight. Defaults to ``0.01``.
**A Running Example:**
Write the following code to a python file, such as `run.py`
.. code:: python
from recbole.quick_start import run_recbole
run_recbole(model='DGCF', dataset='ml-100k')
And then:
.. code:: bash
python run.py
**Notes:**
- ``embedding_size`` needs to be exactly divisible by ``n_factors``
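Because each disentangled factor gets an equal slice of the embedding, this constraint is easy to trip over when tuning. A small sanity check (a hypothetical helper, not part of RecBole) can validate a candidate configuration before launching a run:

```python
def check_dgcf_config(embedding_size: int, n_factors: int) -> int:
    """Return the per-factor embedding size, or raise if the DGCF
    constraint (embedding_size exactly divisible by n_factors) is violated."""
    if embedding_size % n_factors != 0:
        raise ValueError(
            f"embedding_size ({embedding_size}) must be exactly divisible "
            f"by n_factors ({n_factors})"
        )
    return embedding_size // n_factors

# The defaults above are valid: 64 dimensions over 4 factors
# gives 16 dimensions per factor.
per_factor = check_dgcf_config(embedding_size=64, n_factors=4)
print(per_factor)  # 16
```

With the hyper-parameter ranges below, only ``n_factors`` values that divide the chosen ``embedding_size`` (e.g. 2, 4, or 8 for 64) are valid.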
Tuning Hyper Parameters
-------------------------
If you want to use ``HyperTuning`` to tune hyper parameters of this model, you can copy the following settings and name it as ``hyper.test``.
.. code:: bash
learning_rate choice [0.01,0.005,0.001,0.0005,0.0001]
n_factors choice [2,4,8]
reg_weight choice [1e-03]
cor_weight choice [0.005,0.01,0.02,0.05]
n_layers choice [1]
n_iterations choice [2]
delay choice [1e-03]
cor_delay choice [1e-02]
Note that we provide these hyperparameter ranges for reference only; we cannot guarantee that they are optimal for this model.
Then, with the source code of RecBole (you can download it from GitHub), you can run ``run_hyper.py`` for tuning:
.. code:: bash
python run_hyper.py --model=[model_name] --dataset=[dataset_name] --config_files=[config_files_path] --params_file=hyper.test
For more details about Parameter Tuning, refer to :doc:`../../../user_guide/usage/parameter_tuning`.
If you want to change parameters, dataset or evaluation settings, take a look at
- :doc:`../../../user_guide/config_settings`
- :doc:`../../../user_guide/data_intro`
- :doc:`../../../user_guide/train_eval_intro`
- :doc:`../../../user_guide/usage` | 37.163636 | 146 | 0.739237 |
56973a2b2f7fafd55f03889323e02020a9cba1fe | 1,678 | rst | reStructuredText | doc/en/announce/release-2.6.3.rst | markshao/pytest | 611b579d21f7e62b4c8ed54ab70fbfee7c6f5f64 | [
"MIT"
] | 9,225 | 2015-06-15T21:56:14.000Z | 2022-03-31T20:47:38.000Z | doc/en/announce/release-2.6.3.rst | markshao/pytest | 611b579d21f7e62b4c8ed54ab70fbfee7c6f5f64 | [
"MIT"
] | 7,794 | 2015-06-15T21:06:34.000Z | 2022-03-31T10:56:54.000Z | doc/en/announce/release-2.6.3.rst | markshao/pytest | 611b579d21f7e62b4c8ed54ab70fbfee7c6f5f64 | [
"MIT"
] | 2,598 | 2015-06-15T21:42:39.000Z | 2022-03-29T13:48:22.000Z | pytest-2.6.3: fixes and little improvements
===========================================================================
pytest is a mature Python testing tool with more than 1100 tests
against itself, passing on many different interpreters and platforms.
This release is drop-in compatible with 2.5.2 and 2.6.X.
See below for the changes and see docs at:
http://pytest.org
As usual, you can upgrade from pypi via::
pip install -U pytest
Thanks to all who contributed, among them:
Floris Bruynooghe
Oleg Sinyavskiy
Uwe Schmitt
Charles Cloud
Wolfgang Schnerring
have fun,
holger krekel
Changes 2.6.3
======================
- fix issue575: xunit-xml was reporting collection errors as failures
instead of errors, thanks Oleg Sinyavskiy.
- fix issue582: fix setuptools example, thanks Laszlo Papp and Ronny
Pfannschmidt.
- Fix infinite recursion bug when pickling capture.EncodedFile, thanks
Uwe Schmitt.
- fix issue589: fix bad interaction with numpy and others when showing
exceptions. Check for precise "maximum recursion depth exceed" exception
instead of presuming any RuntimeError is that one (implemented in py
dep). Thanks Charles Cloud for analysing the issue.
- fix conftest related fixture visibility issue: when running with a
CWD outside of a test package pytest would get fixture discovery wrong.
Thanks to Wolfgang Schnerring for figuring out a reproducible example.
- Introduce pytest_enter_pdb hook (needed e.g. by pytest_timeout to cancel the
timeout when interactively entering pdb). Thanks Wolfgang Schnerring.
- check xfail/skip also with non-python function test items. Thanks
Floris Bruynooghe.
| 32.269231 | 78 | 0.730632 |
5bfdd6c31aea2d193bd71423cf69fe25690a1992 | 10,510 | rst | reStructuredText | docs/user/eventing.rst | mpsiva89/protean | 315fa56da3f64178bbbf0edf1995af46d5eb3da7 | [
"BSD-3-Clause"
] | 6 | 2018-09-26T04:54:09.000Z | 2022-03-30T01:01:45.000Z | docs/user/eventing.rst | mpsiva89/protean | 315fa56da3f64178bbbf0edf1995af46d5eb3da7 | [
"BSD-3-Clause"
] | 261 | 2018-09-20T09:53:33.000Z | 2022-03-08T17:43:04.000Z | docs/user/eventing.rst | mpsiva89/protean | 315fa56da3f64178bbbf0edf1995af46d5eb3da7 | [
"BSD-3-Clause"
] | 6 | 2018-07-22T07:09:15.000Z | 2021-02-02T05:17:23.000Z | Processing Events
=================
Most applications have a definite state - they remember past user input and interactions. It is advantageous to model these past changes as a series of discrete events. Domain events happen to be those activities that domain experts care about and represent what happened as-is.
Domain events are the primary building blocks of a domain in Protean. They perform two major functions:
1. They **facilitate eventual consistency** within the same bounded context or across contexts.
This makes it possible to define invariant rules across Aggregates. Every change to the system touches one and only one Aggregate, and other state changes are performed in separate transactions.
Such a design eliminates the need for two-phase commits (global transactions) across bounded contexts, optimizing performance at the level of each transaction.
2. Events also **keep boundaries clear and distinct** among bounded contexts.
Each domain is modeled in the architecture pattern that is appropriate for its use case. Events propagate information across bounded contexts, thus helping to sync changes throughout the application domain.
Defining Domain Events
----------------------
A Domain event is defined with the :meth:`~protean.Domain.event` decorator:
.. code-block:: python
@domain.event(aggregate_cls='Role')
class UserRegistered:
name = String(max_length=15, required=True)
email = String(required=True)
timezone = String()
Often, a Domain event will contain values and identifiers to answer questions about the activity that initiated the event. These values, such as who, what, when, why, where, and how much, represent the state when the event happened.
.. // FIXME Unimplemented Feature
Since Events are essentially Data Transfer Objects (DTOs), they can only hold simple field structures. You cannot define them to have associations or value objects.
Ideally, the Event only contains values that are directly relevant to that Event. A receiver that needs more information should listen to pertinent other Events and keep its own state to make decisions later. The receiver shouldn't query the current state of the sender, as the state of the sender might already be different from the state it had when it emitted the Event.
Because we observe Domain Events from the outside after they have happened, we should name them in the past tense. So "StockDepleted" is a better choice than the imperative "DepleteStock" as an event name.
Raising Events
--------------
.. image:: /images/raising-events.jpg
:alt: Raising Events
:scale: 100%
Domain events are best bubbled up from within Aggregates responding to the activity.
In the example below, the ``Role`` aggregate raises a ``RoleAdded`` event when a new role is added to the system.
.. code-block:: python
...
@domain.aggregate
class Role:
name = String(max_length=15, required=True)
created_on = DateTime(default=datetime.today())
@classmethod
def add_new_role(cls, params):
"""Factory method to add a new Role to the system"""
new_role = Role(name=params['name'])
current_domain.publish(RoleAdded(role_name=new_role.name, added_on=new_role.created_on))
return new_role
.. // FIXME Unimplemented Feature : Discussion #354
Adding a new role generates a ``RoleAdded`` event::
>>> role = Role.add_new_role({'name': 'ADMIN'})
>>> role.events
[RoleAdded]
UnitOfWork Schematics
```````````````````````
Events raised in the Domain Model should be exposed only after the changes are recorded. This way, if the changes are not persisted for some reason, like a technical fault in the database infrastructure, events are not accidentally published to the external world.
.. // FIXME Unimplemented Feature : Discussion ???
In Protean, domain changes being performed in the Application layer, within *Application Services*, *Command Handlers*, and *Subscribers* for example, are always bound within a :class:`UnitOfWork`. Events are exposed to the external world only after all changes have been committed to the persistent store atomically.
This is still a two-phase commit and is prone to errors. For example, the database transaction may be committed, but the system may fail to dispatch the events to the message broker because of technical issues. Protean supports advanced strategies that help maintain data and event sanctity to avoid these issues, as outlined in the :ref:`event-processing-strategies` section.
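The delayed-dispatch idea described above — persist the event in the same transaction as the state change, publish it only after commit — can be sketched with plain Python and SQLite. This is a minimal illustration of the pattern (often called a transactional outbox); the table and function names are illustrative and are not Protean's API.

```python
# A minimal transactional-outbox sketch (illustrative names, not Protean's API):
# the state change and the event record are written in ONE database
# transaction, and a separate step publishes only committed events.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roles (name TEXT)")
conn.execute("CREATE TABLE event_log (payload TEXT, dispatched INTEGER)")

def add_role(name):
    with conn:  # atomic: either both rows are committed, or neither
        conn.execute("INSERT INTO roles VALUES (?)", (name,))
        conn.execute(
            "INSERT INTO event_log VALUES (?, 0)",
            (json.dumps({"type": "RoleAdded", "role_name": name}),),
        )

def dispatch_pending(publish):
    # a background worker polls the log and publishes committed events
    rows = list(conn.execute(
        "SELECT rowid, payload FROM event_log WHERE dispatched = 0"))
    for rowid, payload in rows:
        publish(json.loads(payload))
        conn.execute(
            "UPDATE event_log SET dispatched = 1 WHERE rowid = ?", (rowid,))

add_role("ADMIN")
published = []
dispatch_pending(published.append)
# published == [{"type": "RoleAdded", "role_name": "ADMIN"}]
```

If the insert into ``roles`` fails, no event record is committed either, so nothing can be published without a corresponding state change.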
Consuming Events
----------------
.. image:: /images/consuming-events.jpg
:alt: Consuming Events
:scale: 100%
Subscribers live on the other side of event publishing. They are domain elements that subscribe to specific domain events and are notified by the domain on event bubble-up.
Subscribers can:
#. Help propagate a change to the rest of the system - across multiple aggregates - and eventually, make the state consistent.
#. Run secondary tasks, like sending emails, generating query models, populating reports, or updating caches, in the background, keeping the primary transaction performance-optimized.
A Subscriber can be defined and registered with the help of ``@domain.subscriber`` decorator:
.. code-block:: python
@domain.subscriber(event='OrderCancelled')
class UpdateInventory:
"""Update Stock Inventory and replenish order items"""
def __call__(self, event: Dict) -> None:
stock_repo = current_domain.repository_for(Stock)
for item in event['order_items']:
stock = stock_repo.get(item['sku'])
stock.add_quantity(item['qty'])
stock_repo.add(stock)
Just like :ref:`user-application-services` and :ref:`command-handlers`, Subscribers should adhere to the rule of thumb of not modifying more than one aggregate instance in a transaction.
.. _event-processing-strategies:
Processing Strategies
---------------------
Protean provides fine-grained control on how exactly you want domain events to be processed. These strategies, listed in the order of their complexity below, translate to increased robustness on the event processing side. These performance optimizations and processing stability come in handy at any scale but are imperative at a larger scale.
Depending on your application's lifecycle and your preferences, one or more of these strategies may make sense. But you can choose to start with the most robust option, ``DB_SUPPORTED_WITH_JOBS``, with minimal performance penalties.
Event processing strategy for your domain is set in the config attribute :attr:`~protean.Config.EVENT_STRATEGY`.
#. .. py:data:: INLINE
This is the default and most basic option. In this mode, Protean consumes and processes events inline as they are generated. Events are not persisted and are processed in an in-memory queue.
There is no persistence store involved in this mode, and events are not stored. If events are lost in transit for some reason, like technical faults, they are lost forever.
This mode is best suited for testing purposes. Events raised in tests are processed immediately so tests can include assertions for side-effects of events.
If you are processing events from within a single domain (if your application is a monolith, for example), you can simply use the built-in :class:`InlineBroker` as the message broker. If you want to exchange messages with other domains, you can use one of the other message brokers, like :class:`RedisBroker`.
``config.py``:
.. code-block:: python
...
EVENT_STRATEGY = "INLINE"
...
#. .. py:data:: DB_SUPPORTED
The ``DB_SUPPORTED`` strategy persists Events into the same persistence store in the same transaction along with the actual change. This guarantees data consistency and ensures events are never published without system changes.
This mode also performs better than ``INLINE`` mode because events are dispatched and processed in background threads. One background process monitors the ``EventLog`` table and dispatches the latest events to the message broker. Another gathers new events from the message broker and processes them in a thread pool.
Depending on the persistence store in use, you may need to manually run migration scripts to create the database structure. Consult :class:`EventLog` for available options.
Note that this mode needs the :class:`Server` to be started as a separate process. If your application already runs within a server (if you have an API gateway, for example), you can run the server as part of the same process. Check :doc:`user/server` for a detailed discussion.
``config.py``:
.. code-block:: python
...
EVENT_STRATEGY = "DB_SUPPORTED"
...
#. .. py:data:: DB_SUPPORTED_WITH_JOBS
This is the most robust mode of all. In this mode, Protean routes all events through the data store and tracks each subscriber's processing as separate records. This allows you to monitor errors at the level of each subscriber process and run automatic recovery tasks, like retrying jobs, generating alerts, and running failed processes manually.
This mode needs the :class:`Job` data structure to be created along with :class:`EventLog`.
``config.py``:
.. code-block:: python
...
EVENT_STRATEGY = "DB_SUPPORTED_WITH_JOBS"
...
Best Practices
--------------
* Your Event's name should preferably be in the past tense. Ex. `RoleAdded`, `UserProvisioned`, etc. Events represent facts that have already happened.
* Event objects are immutable in nature, so ensure you are passing all event data while creating a new event object.
* Events are simple data containers, so they should preferably have no methods. In the rare case that an event contains methods, they should be side-effect-free and return new event instances.
* Subscribers should never be constructed or invoked directly. The purpose of the message transport layer is to publish an event for system-wide consumption. So manually initializing or calling a subscriber method defeats the purpose.
* Events should enclose all the necessary information from the originating aggregate, including its unique identity. Typically, a subscriber should not have to contact the originating aggregate bounded context again for additional information because the sender's state could have changed by that time.
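The immutability and self-containment points above can be illustrated with a plain Python value object. This is a sketch using a frozen dataclass, not Protean's event class; the event and field names are made up for the example.

```python
from dataclasses import FrozenInstanceError, dataclass

# Illustrative sketch: an event as an immutable value object that carries the
# originating aggregate's identity and all the data a subscriber will need.
@dataclass(frozen=True)
class StockDepleted:
    sku: str           # identity of the originating Stock aggregate
    depleted_at: str   # when the fact occurred

event = StockDepleted(sku="CHAIR-42", depleted_at="2022-01-01T00:00:00Z")

try:
    event.sku = "OTHER"        # frozen=True: attribute assignment raises
except FrozenInstanceError:
    mutation_blocked = True
```

Because all event data must be supplied at construction time and cannot change afterwards, a subscriber can rely on the event as a faithful record of the fact, without contacting the originating aggregate again.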
.. source: docs/source/api/stats.rst (pymc3)

*****
Stats
*****
Statistics and diagnostics are delegated to the
`ArviZ <https://arviz-devs.github.io/arviz/index.html>`_
library, a general-purpose library for
"exploratory analysis of Bayesian models."
For statistics and diagnostics, ``pymc3.<function>`` are now aliases
for ArviZ functions. Thus, the links below will redirect you to
ArviZ docs:
.. currentmodule:: pymc3.stats
- :func:`pymc3.bfmi <arviz:arviz.bfmi>`
- :func:`pymc3.compare <arviz:arviz.compare>`
- :func:`pymc3.ess <arviz:arviz.ess>`
- :data:`pymc3.geweke <arviz:arviz.geweke>`
- :func:`pymc3.hpd <arviz:arviz.hpd>`
- :func:`pymc3.loo <arviz:arviz.loo>`
- :func:`pymc3.mcse <arviz:arviz.mcse>`
- :func:`pymc3.r2_score <arviz:arviz.r2_score>`
- :func:`pymc3.rhat <arviz:arviz.rhat>`
- :func:`pymc3.summary <arviz:arviz.summary>`
- :func:`pymc3.waic <arviz:arviz.waic>`
.. source: Misc/NEWS.d/3.10.0rc1.rst (CPython)

.. bpo: 44600
.. date: 2021-07-25-20-04-54
.. nonce: 0WMldg
.. release date: 2021-08-02
.. section: Security
Fix incorrect line numbers while tracing some failed patterns in :ref:`match
<match>` statements. Patch by Charles Burkland.
..
.. bpo: 44792
.. date: 2021-07-31-12-12-57
.. nonce: mOReTW
.. section: Core and Builtins
Improve syntax errors for if expressions. Patch by Miguel Brito
..
.. bpo: 34013
.. date: 2021-07-27-11-14-22
.. nonce: SjLFe-
.. section: Core and Builtins
Generalize the invalid legacy statement custom error message (like the one
generated when "print" is called without parentheses) to include more
generic expressions. Patch by Pablo Galindo
..
.. bpo: 44732
.. date: 2021-07-26-15-27-03
.. nonce: IxObt3
.. section: Core and Builtins
Rename ``types.Union`` to ``types.UnionType``.
..
.. bpo: 44698
.. date: 2021-07-21-15-26-56
.. nonce: DA4_0o
.. section: Core and Builtins
Fix undefined behaviour in complex object exponentiation.
..
.. bpo: 44653
.. date: 2021-07-19-20-49-06
.. nonce: WcqGyI
.. section: Core and Builtins
Support :mod:`typing` types in parameter substitution in the union type.
..
.. bpo: 44676
.. date: 2021-07-19-19-53-46
.. nonce: WgIMvh
.. section: Core and Builtins
Add ability to serialise ``types.Union`` objects. Patch provided by Yurii
Karabas.
..
.. bpo: 44633
.. date: 2021-07-17-21-04-04
.. nonce: 5-zKeI
.. section: Core and Builtins
Parameter substitution of the union type with wrong types now raises
``TypeError`` instead of returning ``NotImplemented``.
..
.. bpo: 44662
.. date: 2021-07-17-13-41-58
.. nonce: q22kWR
.. section: Core and Builtins
Add ``__module__`` to ``types.Union``. This also fixes ``types.Union``
issues with ``typing.Annotated``. Patch provided by Yurii Karabas.
..
.. bpo: 44655
.. date: 2021-07-16-21-35-14
.. nonce: 95I7M6
.. section: Core and Builtins
Include the name of the type in unset __slots__ attribute errors. Patch by
Pablo Galindo
..
.. bpo: 44655
.. date: 2021-07-16-20-25-37
.. nonce: I3wRjL
.. section: Core and Builtins
Don't include a missing attribute with the same name as the failing one when
offering suggestions for missing attributes. Patch by Pablo Galindo
..
.. bpo: 44646
.. date: 2021-07-16-09-59-13
.. nonce: Yb6s05
.. section: Core and Builtins
Fix the hash of the union type: it no longer depends on the order of
arguments.
..
.. bpo: 44636
.. date: 2021-07-16-09-36-12
.. nonce: ZWebi8
.. section: Core and Builtins
Collapse union of equal types. E.g. the result of ``int | int`` is now
``int``. Fix comparison of the union type with non-hashable objects. E.g.
``int | str == {}`` no longer raises a TypeError.
..
.. bpo: 44635
.. date: 2021-07-14-13-54-07
.. nonce: 7ZMAdB
.. section: Core and Builtins
Convert ``None`` to ``type(None)`` in the union type constructor.
..
.. bpo: 44589
.. date: 2021-07-13-23-19-41
.. nonce: 59OH8T
.. section: Core and Builtins
Mapping patterns in ``match`` statements with two or more equal literal keys
will now raise a :exc:`SyntaxError` at compile-time.
..
.. bpo: 44606
.. date: 2021-07-13-20-22-12
.. nonce: S3Bv2w
.. section: Core and Builtins
Fix ``__instancecheck__`` and ``__subclasscheck__`` for the union type.
..
.. bpo: 42073
.. date: 2021-07-13-17-47-32
.. nonce: 9wopiC
.. section: Core and Builtins
The ``@classmethod`` decorator can now wrap other classmethod-like
descriptors.
..
.. bpo: 44490
.. date: 2021-07-06-22-22-15
.. nonce: BJxPbZ
.. section: Core and Builtins
:mod:`typing` now searches for type parameters in ``types.Union`` objects.
``get_type_hints`` will also properly resolve annotations with nested
``types.Union`` objects. Patch provided by Yurii Karabas.
..
.. bpo: 44490
.. date: 2021-07-01-11-59-34
.. nonce: xY80VR
.. section: Core and Builtins
Add ``__parameters__`` attribute and ``__getitem__`` operator to
``types.Union``. Patch provided by Yurii Karabas.
..
.. bpo: 44472
.. date: 2021-06-21-11-19-54
.. nonce: Vvm1yn
.. section: Core and Builtins
Fix ltrace functionality when exceptions are raised. Patch by Pablo Galindo
..
.. bpo: 44806
.. date: 2021-08-02-14-37-32
.. nonce: wOW_Qn
.. section: Library
Non-protocol subclasses of :class:`typing.Protocol` ignore now the
``__init__`` method inherited from protocol base classes.
..
.. bpo: 44793
.. date: 2021-07-31-20-28-20
.. nonce: woaQSg
.. section: Library
Fix checking the number of arguments when subscribe a generic type with
``ParamSpec`` parameter.
..
.. bpo: 44784
.. date: 2021-07-31-08-45-31
.. nonce: fIMIDS
.. section: Library
In importlib.metadata tests, override warnings behavior under expected
DeprecationWarnings (importlib_metadata 4.6.3).
..
.. bpo: 44667
.. date: 2021-07-30-23-27-30
.. nonce: tu0Xrv
.. section: Library
The :func:`tokenize.tokenize` doesn't incorrectly generate a ``NEWLINE``
token if the source doesn't end with a new line character but the last line
is a comment, as the function is already generating a ``NL`` token. Patch by
Pablo Galindo
..
.. bpo: 44752
.. date: 2021-07-27-22-11-29
.. nonce: _bvbrZ
.. section: Library
:mod:`rlcompleter` does not call :func:`getattr` on :class:`property` objects
to avoid the side-effect of evaluating the corresponding method.
..
.. bpo: 44720
.. date: 2021-07-24-02-17-59
.. nonce: shU5Qm
.. section: Library
``weakref.proxy`` objects referencing non-iterators now raise ``TypeError``
rather than dereferencing the null ``tp_iternext`` slot and crashing.
..
.. bpo: 44704
.. date: 2021-07-21-23-16-30
.. nonce: iqHLxQ
.. section: Library
The implementation of ``collections.abc.Set._hash()`` now matches that of
``frozenset.__hash__()``.
..
.. bpo: 44666
.. date: 2021-07-21-10-43-22
.. nonce: CEThkv
.. section: Library
Fixed issue in :func:`compileall.compile_file` when ``sys.stdout`` is
redirected. Patch by Stefan Hölzl.
..
.. bpo: 42854
.. date: 2021-07-20-21-51-35
.. nonce: ThuDMI
.. section: Library
Fixed a bug in the :mod:`_ssl` module that was throwing :exc:`OverflowError`
when using :meth:`_ssl._SSLSocket.write` and :meth:`_ssl._SSLSocket.read`
for a big value of the ``len`` parameter. Patch by Pablo Galindo
..
.. bpo: 44353
.. date: 2021-07-19-22-43-15
.. nonce: HF81_Q
.. section: Library
Refactor ``typing.NewType`` from function into callable class. Patch
provided by Yurii Karabas.
..
.. bpo: 44524
.. date: 2021-07-19-14-04-42
.. nonce: Nbf2JC
.. section: Library
Add missing ``__name__`` and ``__qualname__`` attributes to ``typing``
module classes. Patch provided by Yurii Karabas.
..
.. bpo: 40897
.. date: 2021-07-16-13-40-31
.. nonce: aveAre
.. section: Library
Give priority to using the current class constructor in
:func:`inspect.signature`. Patch by Weipeng Hong.
..
.. bpo: 44648
.. date: 2021-07-15-16-51-32
.. nonce: 2o49TB
.. section: Library
Fixed wrong error being thrown by :func:`inspect.getsource` when examining a
class in the interactive session. Instead of :exc:`TypeError`, it should be
:exc:`OSError` with appropriate error message.
..
.. bpo: 44608
.. date: 2021-07-13-09-01-33
.. nonce: R3IcM1
.. section: Library
Fix memory leak in :func:`_tkinter._flatten` if it is called with a sequence
or set, but not list or tuple.
..
.. bpo: 44559
.. date: 2021-07-12-10-43-31
.. nonce: YunVKY
.. section: Library
[Enum] module reverted to 3.9; 3.10 changes pushed until 3.11
..
.. bpo: 41928
.. date: 2021-07-09-07-14-37
.. nonce: Q1jMrr
.. section: Library
Update :func:`shutil.copyfile` to raise :exc:`FileNotFoundError` instead of
confusing :exc:`IsADirectoryError` when a path ending with a
:const:`os.path.sep` does not exist; :func:`shutil.copy` and
:func:`shutil.copy2` are also affected.
..
.. bpo: 44566
.. date: 2021-07-05-18-13-25
.. nonce: o51Bd1
.. section: Library
handle StopIteration subclass raised from @contextlib.contextmanager
generator
..
.. bpo: 41249
.. date: 2021-07-04-11-33-34
.. nonce: sHdwBE
.. section: Library
Fixes ``TypedDict`` to work with ``typing.get_type_hints()`` and postponed
evaluation of annotations across modules.
..
.. bpo: 44461
.. date: 2021-06-29-21-17-17
.. nonce: acqRnV
.. section: Library
Fix bug with :mod:`pdb`'s handling of import error due to a package which
does not have a ``__main__`` module
..
.. bpo: 43625
.. date: 2021-06-29-07-27-08
.. nonce: ZlAxhp
.. section: Library
Fix a bug in the detection of CSV file headers by
:meth:`csv.Sniffer.has_header` and improve documentation of same.
..
.. bpo: 42892
.. date: 2021-06-24-19-16-20
.. nonce: qvRNhI
.. section: Library
Fixed an exception thrown while parsing a malformed multipart email by
:class:`email.message.EmailMessage`.
..
.. bpo: 27827
.. date: 2021-06-12-21-25-35
.. nonce: TMWh1i
.. section: Library
:meth:`pathlib.PureWindowsPath.is_reserved` now identifies a greater range
of reserved filenames, including those with trailing spaces or colons.
..
.. bpo: 38741
.. date: 2019-11-12-18-59-33
.. nonce: W7IYkq
.. section: Library
:mod:`configparser`: using ']' inside a section header will no longer cut
the section name short at the ']'
..
.. bpo: 27513
.. date: 2019-06-03-23-53-25
.. nonce: qITN7d
.. section: Library
:func:`email.utils.getaddresses` now accepts :class:`email.header.Header`
objects along with string values. Patch by Zackery Spytz.
..
.. bpo: 29298
.. date: 2017-09-20-14-43-03
.. nonce: _78CSN
.. section: Library
Fix ``TypeError`` when required subparsers without ``dest`` do not receive
arguments. Patch by Anthony Sottile.
..
.. bpo: 44740
.. date: 2021-07-26-23-48-31
.. nonce: zMFGMV
.. section: Documentation
Replaced occurrences of uppercase "Web" and "Internet" with lowercase
versions per the 2016 revised Associated Press Style Book.
..
.. bpo: 44693
.. date: 2021-07-25-23-04-15
.. nonce: JuCbNq
.. section: Documentation
Update the definition of __future__ in the glossary by replacing the
confusing word "pseudo-module" with a more accurate description.
..
.. bpo: 35183
.. date: 2021-07-22-08-28-03
.. nonce: p9BWTB
.. section: Documentation
Add typical examples to os.path.splitext docs
..
.. bpo: 30511
.. date: 2021-07-20-21-03-18
.. nonce: eMFkRi
.. section: Documentation
Clarify that :func:`shutil.make_archive` is not thread-safe due to reliance
on changing the current working directory.
..
.. bpo: 44561
.. date: 2021-07-18-22-43-14
.. nonce: T7HpWm
.. section: Documentation
Update of three expired hyperlinks in Doc/distributing/index.rst: "Project
structure", "Building and packaging the project", and "Uploading the project
to the Python Packaging Index".
..
.. bpo: 44613
.. date: 2021-07-12-11-39-20
.. nonce: DIXNzc
.. section: Documentation
importlib.metadata is no longer provisional.
..
.. bpo: 44544
.. date: 2021-07-02-14-02-29
.. nonce: _5_aCz
.. section: Documentation
List all kwargs for :func:`textwrap.wrap`, :func:`textwrap.fill`, and
:func:`textwrap.shorten`. Now, there are nav links to attributes of
:class:`TextWrap`, which makes navigation much easier while minimizing
duplication in the documentation.
..
.. bpo: 44453
.. date: 2021-06-18-06-44-45
.. nonce: 3PIkj2
.. section: Documentation
Fix documentation for the return type of :func:`sysconfig.get_path`.
..
.. bpo: 44734
.. date: 2021-07-24-20-09-15
.. nonce: KKsNOV
.. section: Tests
Fixed floating point precision issue in turtle tests.
..
.. bpo: 44708
.. date: 2021-07-22-16-38-39
.. nonce: SYNaac
.. section: Tests
Regression tests, when run with -w, are now re-running only the affected
test methods instead of re-running the entire test file.
..
.. bpo: 44647
.. date: 2021-07-16-14-02-33
.. nonce: 5LzqIy
.. section: Tests
Added a permanent Unicode-valued environment variable to regression tests to
ensure they handle this use case in the future. If your test environment
breaks because of that, report a bug to us, and temporarily set
PYTHONREGRTEST_UNICODE_GUARD=0 in your test environment.
..
.. bpo: 44515
.. date: 2021-06-26-18-37-36
.. nonce: e9fO6f
.. section: Tests
Adjust recently added contextlib tests to avoid assuming the use of a
refcounted GC
..
.. bpo: 44572
.. date: 2021-07-13-15-32-49
.. nonce: gXvhDc
.. section: Windows
Avoid consuming standard input in the :mod:`platform` module
..
.. bpo: 40263
.. date: 2020-04-13-15-20-28
.. nonce: 1KKEbu
.. section: Windows
This is a follow-on bug from https://bugs.python.org/issue26903. Once that
is applied we run into an off-by-one assertion problem. The assert was not
correct.
..
.. bpo: 41972
.. date: 2021-07-12-15-42-02
.. nonce: yUjE8j
.. section: macOS
The framework build's user header path in sysconfig is changed to add a
'pythonX.Y' component to match distutils's behavior.
..
.. bpo: 34932
.. date: 2021-03-29-21-11-23
.. nonce: f3Hdyd
.. section: macOS
Add socket.TCP_KEEPALIVE support for macOS. Patch by Shane Harvey.
..
.. bpo: 44756
.. date: 2021-07-28-00-51-55
.. nonce: pvAajB
.. section: Tools/Demos
In the Makefile for documentation (:file:`Doc/Makefile`), the ``build`` rule
is dependent on the ``venv`` rule. Therefore, ``html``, ``latex``, and other
build-dependent rules are also now dependent on ``venv``. The ``venv`` rule
only performs an action if ``$(VENVDIR)`` does not exist.
:file:`Doc/README.rst` was updated; most users now only need to type ``make
html``.
..
.. bpo: 41103
.. date: 2021-07-29-16-04-28
.. nonce: hiKKcF
.. section: C API
Reverts removal of the old buffer protocol because they are part of stable
ABI.
..
.. bpo: 42747
.. date: 2021-07-20-16-21-06
.. nonce: rRxjUY
.. section: C API
The ``Py_TPFLAGS_HAVE_VERSION_TAG`` type flag now does nothing. The
``Py_TPFLAGS_HAVE_AM_SEND`` flag (which was added in 3.10) is removed. Both
were unnecessary because it is not possible to have type objects with the
relevant fields missing.
.. source: docs/source/sysadmin_guide/generating_keys.rst (Hyperledger Sawtooth)

**********************************
Generating User and Validator Keys
**********************************
1. Generate your user key for Sawtooth.
.. code-block:: console
$ sawtooth keygen
This command stores user keys in ``$HOME/.sawtooth/keys/{yourname}.priv``
and ``$HOME/.sawtooth/keys/{yourname}.pub``.
#. Generate the key for the validator, which runs as root.
.. code-block:: console
$ sudo sawadm keygen
By default, this command stores the validator key files in
``/etc/sawtooth/keys/validator.priv`` and
``/etc/sawtooth/keys/validator.pub``.
However, settings in the path configuration file could change this location;
see :doc:`configuring_sawtooth/path_configuration_file`.
Sawtooth also includes a network key pair that is used to encrypt communication
between the validators in a Sawtooth network. The network keys are described in
a later procedure.
.. Licensed under Creative Commons Attribution 4.0 International License
.. https://creativecommons.org/licenses/by/4.0/
.. source: sw^3/glossary.rst (swarm-docs)
Glossary
======================
.. glossary::
owner
node that produces/originates content by sending a store request
storer
node that accepted a store request and stores the content
guardian
the first node to accept a store request for a chunk
custodian
node that has no online peer that is closer to a chunk address
auditor
node that initiates an audit by sending an audit request
insurer
node that is commissioned by an owner to launch audit requests on their behalf
swear
the component of the system that handles membership registration, terms of membership and security deposit
swindle
the component of the system that handles audits and escalates to litigation
swap
information exchange between connected peers relating to contracting, request forwarding, accounting and payment
SWAP
Swarm Accounting Protocol, Secured With Automated PayIns; also the name of the suite of smart contracts on the blockchain handling delayed payments, payment channels, escrowed obligations, and debt management
SWEAR
Storage With Enforced Archiving Rules or Swarm Enforcement And Registration
the smart contract on the ethereum blockchain which coordinates registration, handles deposits and verifies challenges and their refutations
sworn node, registered node, swarm member
a node which registered via the SWEAR contract and is able to issue storage receipts until its membership expires
suspension
punitive measure that terminates a node's registered status and
burns all available deposit locked in the SWEAR contract after
paying out all compensation
registration
nodes can register their public key in the SWEAR contract
by sending a transaction with a deposit and parameters to the SWEAR contract,
which records an entry for them
audit
special form of litigation where possession of a chunk is proven by a proof of custody. The litigation does not stop there, but forces nodes to iteratively prove that they synced according to the rules.
SWINDLE
Secured With INsurance Deposit Litigation and Escrow
the module in the client code that drives the iterative litigation procedure; it initiates litigation when the loss of a chunk is detected and responds with a refutation if the node itself is challenged.
proof of custody
A cryptographic construct that can prove the possession of data without revealing it. Various schemes offer different properties in terms of compactness, repeatability, outsourceability.
audit
An integrity audit is a protocol to request and provide proof of custody of a chunk, document or collection.
Erasure codes
An error-correcting scheme to redundantly recode and distribute data so that it is recovered in full integrity even if parts of it are not available.
bzz protocol
The devp2p network communication protocol Swarm uses to exchange information with connected peers.
chequebook contract
A smart contract on the blockchain that handles future obligations, by issuing signed
cheques redeemable by the beneficiary.
syncing
The protocol that makes sure chunks are distributed properly, each finding its custodian: the node closest to the chunk's address.
deposit
The amount of ether that registered nodes need to lock up to serve as collateral in case they are proven to break the rules (lose a litigation).
registration
Swarm nodes need to register their public key and collateralise their service if they are to issue receipts for storage insurance.
litigation
A challenge-response process mediated by the blockchain, initiated when a node is suspected of not keeping its obligation (to store a chunk). The idea is that both the challenge and its refutation are validated by a smart contract, which can execute the terms agreed in the breached contract or any condition of service delivery.
chunk
A fixed-sized datablob, the underlying unit of storage in swarm. Documents input to the API are split into chunks and recoded as a Merkle tree, each node corresponding to a chunk.
content addressing
A scheme whereby certain information about a content is index by the content itself (with the hash of the content).
receipt
signed acknowledgement of receiving (or already having) a chunk.
SMASH proof
Secured with Masked Audit Secret Hash: a family of proof of custody schemes
CRASH proof
Collective Recursive Audit Secret Hash: a proof of custody scheme for collections
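Many of the terms above (chunk, content addressing, custodian, audit) rest on one mechanism: a chunk is stored and retrieved under the hash of its own content, so integrity is self-verifying. A minimal sketch in Python follows; it is illustrative only — Swarm uses its own chunker and hash scheme, not SHA-256 over a local dictionary.

```python
import hashlib

store = {}  # stand-in for the distributed chunk store

def put(chunk: bytes) -> str:
    address = hashlib.sha256(chunk).hexdigest()  # the content determines the index
    store[address] = chunk
    return address

def get(address: str) -> bytes:
    chunk = store[address]
    # self-verifying integrity: rehashing the data must reproduce the address
    assert hashlib.sha256(chunk).hexdigest() == address
    return chunk

addr = put(b"hello swarm")
assert get(addr) == b"hello swarm"
```

A node serving a corrupted chunk would fail the rehash check, which is the intuition behind audits: a storer can be challenged to prove it still holds data matching a given address.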
.. source: docs/source2/generated/generated/statsmodels.genmod.generalized_estimating_equations.NominalGEE.fit_regularized.rst (statsmodels)

:orphan:
statsmodels.genmod.generalized\_estimating\_equations.NominalGEE.fit\_regularized
=================================================================================
.. currentmodule:: statsmodels.genmod.generalized_estimating_equations
.. automethod:: NominalGEE.fit_regularized
.. source: docs/src/datascience_starter.models.glm.rst (jordanparker6/datascience-starter, MIT)

datascience\_starter.models.glm package
=======================================
Submodules
----------
datascience\_starter.models.glm.bayesian\_glm module
----------------------------------------------------
.. automodule:: datascience_starter.models.glm.bayesian_glm
:members:
:undoc-members:
:show-inheritance:
datascience\_starter.models.glm.families module
-----------------------------------------------
.. automodule:: datascience_starter.models.glm.families
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: datascience_starter.models.glm
:members:
:undoc-members:
:show-inheritance:
.. source: docs/index.rst (HDembinski/boost-histogram, BSD-3-Clause)

.. boost-histogram documentation master file, created by
sphinx-quickstart on Tue Apr 23 12:12:27 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
.. image:: _images/BoostHistogramPythonLogo.png
:width: 400
:alt: Boost histogram logo
:align: center
Welcome to boost-histogram's documentation!
===========================================
|Gitter| |Build Status| |Actions Status| |Documentation Status| |DOI|
|Code style: black| |PyPI version| |Conda-Forge| |Scikit-HEP|
Boost-histogram (`source <https://github.com/scikit-hep/boost-histogram>`__) is
a Python package providing Python bindings for Boost.Histogram_ (`source
<https://github.com/boostorg/histogram>`__). You can install this library from
`PyPI <https://pypi.org/project/boost-histogram/>`__ with pip or you can use
Conda via `conda-forge <https://github.com/conda-forge/boost-histogram-feedstock>`__:
.. code:: bash
python -m pip install boost-histogram
.. code:: bash
conda install -c conda-forge boost-histogram
All the normal best practices for Python apply; you should be in a
virtual environment, etc. See :ref:`usage-installation` for more details. An example of usage:
.. code:: python3
import boost_histogram as bh
# Compose axis however you like; this is a 2D histogram
hist = bh.Histogram(bh.axis.Regular(2, 0, 1),
bh.axis.Regular(4, 0.0, 1.0))
# Filling can be done with arrays, one per dimension
hist.fill([.3, .5, .2],
[.1, .4, .9])
# Numpy array view into histogram counts, no overflow bins
counts = hist.view()
See :ref:`usage-quickstart` for more.
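As a sanity check, the 2D fill in the example above can be reproduced with plain NumPy (illustrative only; NumPy is assumed to be available, as boost-histogram itself depends on it):

```python
import numpy as np

x = [.3, .5, .2]
y = [.1, .4, .9]

# Same binning as the bh.Histogram above: 2 x 4 regular bins on [0, 1].
counts, xedges, yedges = np.histogram2d(x, y, bins=(2, 4), range=((0, 1), (0, 1)))

assert counts.shape == (2, 4)
assert counts.sum() == 3
assert counts[0, 0] == 1   # the point (.3, .1) lands in the first bin of each axis
```

The boost-histogram version differs mainly in keeping the axis metadata and optional flow bins attached to the counts.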
.. toctree::
:maxdepth: 1
:caption: Contents:
usage/installation
usage/quickstart
usage/histogram
usage/axes
usage/storage
usage/accumulators
usage/transforms
usage/indexing
usage/analyses
usage/numpy
usage/comparison
.. toctree::
:maxdepth: 1
:caption: Examples:
notebooks/SimpleExample
notebooks/ThreadedFills
notebooks/PerformanceComparison
notebooks/xarray
notebooks/BoostHistogramHandsOn
.. toctree::
:caption: API Reference:
api/modules
Acknowledgements
----------------
This library was primarily developed by Henry Schreiner and Hans
Dembinski.
Support for this work was provided by the National Science Foundation
cooperative agreement OAC-1836650 (IRIS-HEP) and OAC-1450377
(DIANA/HEP). Any opinions, findings, conclusions or recommendations
expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation.
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _Boost.Histogram: https://www.boost.org/doc/libs/release/libs/histogram/doc/html/index.html
.. |Gitter| image:: https://badges.gitter.im/HSF/PyHEP-histogramming.svg
:target: https://gitter.im/HSF/PyHEP-histogramming?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
.. |Build Status| image:: https://dev.azure.com/scikit-hep/boost-histogram/_apis/build/status/bh-tests?branchName=develop
:target: https://dev.azure.com/scikit-hep/boost-histogram/_build/latest?definitionId=2&branchName=develop
.. |Actions Status| image:: https://github.com/scikit-hep/boost-histogram/workflows/Tests/badge.svg
:target: https://github.com/scikit-hep/boost-histogram/actions
.. |Documentation Status| image:: https://readthedocs.org/projects/boost-histogram/badge/?version=latest
:target: https://boost-histogram.readthedocs.io/en/latest/?badge=latest
.. |DOI| image:: https://zenodo.org/badge/148885351.svg
:target: https://zenodo.org/badge/latestdoi/148885351
.. |Code style: black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/ambv/black
.. |PyPI version| image:: https://badge.fury.io/py/boost-histogram.svg
:target: https://pypi.org/project/boost-histogram/
.. |Conda-Forge| image:: https://img.shields.io/conda/vn/conda-forge/boost-histogram
:target: https://github.com/conda-forge/boost-histogram-feedstock
.. |Scikit-HEP| image:: https://scikit-hep.org/assets/images/Scikit--HEP-Project-blue.svg
:target: https://scikit-hep.org/
.. source: docs/source/api/api_utils.rst (ViniciusLima94/frites, BSD-3-Clause)

Utility functions
-----------------
:py:mod:`frites.utils`:
.. currentmodule:: frites.utils
.. automodule:: frites.utils
:no-members:
:no-inherited-members:
Time-series processing
++++++++++++++++++++++
.. autosummary::
:toctree: generated/
downsample
acf
Data smoothing
++++++++++++++
.. autosummary::
:toctree: generated/
savgol_filter
kernel_smoothing
Data selection
++++++++++++++
.. autosummary::
:toctree: generated/
time_to_sample
get_closest_sample
.. source: docs/usage.rst (solocompt/plugs-payments, MIT)

=====
Usage
=====
To use Plugs Payments in a project, add it to your `INSTALLED_APPS`:
.. code-block:: python
INSTALLED_APPS = (
...
'plugs_payments.apps.PlugsPaymentsConfig',
...
)
Add Plugs Payments's URL patterns:
.. code-block:: python
from plugs_payments import urls as plugs_payments_urls
urlpatterns = [
...
url(r'^', include(plugs_payments_urls)),
...
]
.. source: docs/implementation.rst (dinga92/neuropredict, MIT)

Implementation details
----------------------
In line with the overarching goals of neuropredict, the following choices were made:
- the cross-validation scheme (central to performance estimation) has been chosen to be repeated hold-out (referred to as ShuffleSplit in scikit-learn). As you may be aware, the choice of CV scheme makes a difference, and this is a good choice among the available options. RepeatedKFold is also possible, at the expense of a slightly more awkward implementation.
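For intuition, repeated hold-out can be sketched with the standard library alone. This is an illustrative re-implementation of what scikit-learn's ShuffleSplit does, not neuropredict's actual code:

```python
import random

def shuffle_split(n_samples, n_splits=10, test_size=0.2, seed=0):
    """Repeated hold-out: each split independently shuffles all samples and
    holds out a fresh random test set, unlike k-fold's disjoint partitions."""
    rng = random.Random(seed)
    n_test = int(n_samples * test_size)
    for _ in range(n_splits):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        yield idx[n_test:], idx[:n_test]   # (train indices, test indices)

splits = list(shuffle_split(100))
assert len(splits) == 10
assert all(len(train) == 80 and len(test) == 20 for train, test in splits)
```

Because test sets can overlap across repetitions, the performance estimates are not independent, which is the usual caveat of this scheme.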
You may also be interested in :doc:`faq`. | 57.777778 | 345 | 0.769231 |
.. source: README.rst (rolandproud/pyechoplot, MIT)

==========
pyechoplot
==========
An echogram plotting tool (in development)
------------------------------------------
developed by Roland Proud (rp43@st-andrews.ac.uk)
Pelagic Ecology Research Group, University of St Andrews
.. source: docs/source/tutorial.rst (xuan-w/CANA, MIT)

Tutorials
=========
Below are some examples of how you might use this package.
For more detailed examples see the ``tutorials/`` folder inside the package.
There you will find several ``.ipynb`` files.
Instantiate a Boolean Node
-------------------------------
To instantiate a node from scratch:
.. code-block:: python
from cana import BooleanNode
print(BooleanNode.from_output_list(outputs=[0,0,0,1], name='AND', inputs=['in1','in2']))
<BNode(name='AND', k=2, inputs=[in1,in2], state=0, outputs='[0,0,0,1]' constant=False)>
To load a predefined node:
.. code-block:: python
from cana.datasets.bools import *
print(AND())
<BNode(name='AND', k=2, inputs=[i1,i2], state=0, outputs='[0,0,0,1]' constant=False)>
print(OR())
<BNode(name='OR', k=2, inputs=[i1,i2], state=0, outputs='[0,1,1,1]' constant=False)>
Instantiate a Boolean Network
-------------------------------
To instantiate a network from scratch:
.. code-block:: python
from cana import BooleanNetwork
logic = {
0:{'name':'in0', 'in':[0],'out':[0,1]},
1:{'name':'in1', 'in':[0],'out':[0,1]},
2:{'name':'out', 'in':[0,1], 'out':[0,0,0,1]}
}
print(BooleanNetwork.from_dict(logic, name='AND Net'))
<BNetwork(Name='AND Net', N=3, Nodes=['in0', 'in1', 'out'])>
To load a predefined network:
.. code-block:: python
from cana.datasets.bio import THALIANA
t = THALIANA()
print(t)
<BNetwork(Name='Arabidopsis Thaliana', N=15, Nodes=['AP3', 'UFO', 'FUL', 'FT', 'AP1', 'EMF1', 'LFY', 'AP2', 'WUS', 'AG', 'LUG', 'CLF', 'TFL1', 'PI', 'SEP'])>
Check inside the ``datasets/`` folder for other networks in ``.cnet`` format.
State-Transition-Graph (STG) & Attractors
-------------------------------------------
To compute the State-Transition-Graph (STG) of a Boolean Network:
.. code-block:: python
# this is a networkx.DiGraph() object
STG = t.state_transition_graph()
# a list of attractors
attrs = t.attractors(mode='stg')
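Since the update rules are deterministic, the attractors are simply the cycles reached by iterating the update map that the STG encodes. A stdlib-only sketch of that idea, run on the three-node "AND Net" defined earlier in this tutorial (this is not CANA's implementation):

```python
def find_attractors(step, n_states):
    """Iterate every state forward until a previously seen state repeats;
    the repeating tail is an attractor (a fixed point or a limit cycle)."""
    attractors = set()
    for s in range(n_states):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = step(s)
        start = seen[s]  # index at which the trajectory entered the cycle
        attractors.add(tuple(sorted(k for k, i in seen.items() if i >= start)))
    return attractors

def and_net_step(state):
    """Synchronous update of the AND Net: in0 and in1 copy themselves,
    out becomes in0 AND in1.  State encoded as bits (in0, in1, out)."""
    in0, in1 = state & 1, (state >> 1) & 1
    return in0 | (in1 << 1) | ((in0 & in1) << 2)

# Four fixed-point attractors: out eventually agrees with in0 AND in1.
assert find_attractors(and_net_step, 8) == {(0,), (1,), (2,), (7,)}
```

On real networks the state space grows as 2^N, which is why CANA also offers driver-node and schemata-based analyses rather than relying on the full STG alone.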
Control Driver Nodes
----------------------
Discover which nodes control the network based on different methods.
Note that for large networks, some of these methods do not scale.
.. code-block:: python
# Find driver nodes (bruteforce, takes some time)
A = t.attractor_driver_nodes(min_dvs=4, max_dvs=5, verbose=True)
SC = t.structural_controllability_driver_nodes(keep_self_loops=False)
MDS = t.minimum_dominating_set_driver_nodes(max_search=10)
FVS = t.feedback_vertex_set_driver_nodes(method='bruteforce', remove_constants=True, keep_self_loops=True)
#
print(t.get_node_name(SC))
[['UFO', 'EMF1', 'LUG', 'CLF'], ['UFO', 'LUG', 'CLF', 'SEP']]
LUT, F' and F'' schematas
---------------------------
.. code-block:: python
AND = BooleanNode.from_output_list([0,0,0,1])
AND.look_up_table()
In: Out:
0 00 0
1 01 0
2 10 0
3 11 1
.. code-block:: python
AND.schemata_look_up_table(type='pi')
In: Out:
0 0# 0
1 #0 0
2 11 1
.. code-block:: python
AND.schemata_look_up_table(type='ts')
In: Out:
0 0̊#̊ 0
1 11 1
.. source: docs/gapic/v1/types.rst (plamut/python-grafeas, Apache-2.0)

Types for Grafeas API Client
=======================================
.. automodule:: grafeas.grafeas_v1.types
:members:
.. source: docs/rtd/troubleshooting.rst (xbabka01/yaramod, MIT and BSD-3-Clause)

===============
Troubleshooting
===============
If you encounter a problem using Yaramod, it is usually easy to determine which of the four components the problem is related to:
* A ``ParserError`` indicates a parsing problem; the message should include the first problematic token of the input.
* A ``BuilderError`` probably means that your service uses the Builder incorrectly.
* If you use the ``getTextFormatted()`` (``text_formatted`` in Python) method of a YARA file and the output seems wrong, the problem is probably in the ``getText`` method of the ``TokenStream`` class. Please create a ticket and supply the input and the wrong output with some explanation so we can look into it.
* There can also be some problems when using modifying visitors. If your modifying visitor modifies the unformatted text correctly, but the formatted text is wrong (usually missing tokens or bad ordering of tokens), it may indicate that the ``cleanUpTokenStreams`` method of the ``ModifyingVisitor`` class was not used, or was used in the wrong manner. Please consult our examples in the section `Modifying Rulesets <https://yaramod.readthedocs.io/en/latest/modifying_rulesets.html>`_ and, if that does not help, consider contacting the authors.
.. source: tests/validation-sets/common/bibliographic.rst (TheDubliner/confluencebuilder, BSD-2-Clause)

bibliographic fields
====================
reStructuredText provides a `bibliographic fields`_ directive to provide a field
list designed to document bibliographic data.
:Author: Jane Smith
:Address:
200 Broadway Av
WEST BEACH SA 5024
AUSTRALIA
:Contact: jane.smith@example.org
:Authors: Me; Myself; I
:Organization: humankind
:Date: 2016-07-06 13:21:00
:Status: This is a "work in progress"
:Revision: 1.3
:Version: 1
:Copyright: This document has been placed in the public domain. You
may do with it as you wish. You may copy, modify,
redistribute, reattribute, sell, buy, rent, lease,
destroy, or improve it, quote it at length, excerpt,
incorporate, collate, fold, staple, or mutilate it, or do
anything else to it that your or anyone else's heart
desires.
:Field Name: This is a generic bibliographic field.
:Another Field:
Generic bibliographic fields may contain multiple body elements.
Like this.
.. _bibliographic fields: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#bibliographic-fields
.. source: README.rst (UCL/scikit-surgerynditracker, BSD-3-Clause)

scikit-surgerynditracker
===============================
.. image:: https://github.com/UCL/scikit-surgerynditracker/raw/master/weiss_logo.png
:height: 128px
:width: 128px
:target: https://github.com/UCL/scikit-surgerynditracker
:alt: Logo
|
.. image:: https://github.com/UCL/scikit-surgerynditracker/workflows/.github/workflows/ci.yml/badge.svg
:target: https://github.com/UCL/scikit-surgerynditracker/actions/
:alt: GitHub CI test status
.. image:: https://coveralls.io/repos/github/UCL/scikit-surgerynditracker/badge.svg?branch=master&service=github
:target: https://coveralls.io/github/UCL/scikit-surgerynditracker?branch=master
:alt: Test coverage
.. image:: https://readthedocs.org/projects/scikit-surgerynditracker/badge/?version=latest
:target: http://scikit-surgerynditracker.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/badge/Cite-SciKit--Surgery-informational
:target: https://doi.org/10.1007/s11548-020-02180-5
:alt: The SciKit-Surgery paper
scikit-surgerynditracker is a Python interface for Northern Digital (NDI) trackers. It should work with Polaris Vicra, Spectra, and Vega optical trackers and Aurora electromagnetic trackers. Tracking data is output as NumPy arrays.
Author: Stephen Thompson
scikit-surgerynditracker is part of the `SciKit-Surgery`_ software project, developed at the `Wellcome EPSRC Centre for Interventional and Surgical Sciences`_, part of `University College London (UCL)`_.
Installing
----------
::
pip install scikit-surgerynditracker
Using
-----
Configuration is done using Python libraries at instantiation. Invalid
configuration should raise exceptions. Tracking data is returned in a set of
lists, containing the port handles, timestamps, framenumbers, the tracking data
and a tracking quality metric. By default tracking data is returned as a 4x4 NumPy array,
though it can be returned as a quaternion by changing the configuration.
::
from sksurgerynditracker.nditracker import NDITracker
SETTINGS = {
"tracker type": "polaris",
"romfiles" : ["../data/8700339.rom"]
}
TRACKER = NDITracker(SETTINGS)
TRACKER.start_tracking()
port_handles, timestamps, framenumbers, tracking, quality = TRACKER.get_frame()
for t in tracking:
print (t)
TRACKER.stop_tracking()
TRACKER.close()
See demo.py for a full example.
Developing
----------
Cloning
^^^^^^^
You can clone the repository using the following command:
::
git clone https://github.com/UCL/scikit-surgerynditracker
Running the tests
^^^^^^^^^^^^^^^^^
You can run the unit tests by installing and running tox:
::
pip install tox
tox
Contributing
^^^^^^^^^^^^
Please see the `contributing guidelines`_.
Useful links
^^^^^^^^^^^^
* `Source code repository`_
* `Documentation`_
Licensing and copyright
-----------------------
Copyright 2018 University College London.
scikit-surgerynditracker is released under the BSD-3 license. Please see the `license file`_ for details.
Acknowledgements
----------------
Supported by `Wellcome`_ and `EPSRC`_.
.. _`Wellcome EPSRC Centre for Interventional and Surgical Sciences`: http://www.ucl.ac.uk/weiss
.. _`source code repository`: https://github.com/UCL/scikit-surgerynditracker
.. _`Documentation`: https://scikit-surgerynditracker.readthedocs.io
.. _`SciKit-Surgery`: https://www.github.com/UCL/scikit-surgery/wikis/home
.. _`University College London (UCL)`: http://www.ucl.ac.uk/
.. _`Wellcome`: https://wellcome.ac.uk/
.. _`EPSRC`: https://www.epsrc.ac.uk/
.. _`contributing guidelines`: https://github.com/UCL/scikit-surgerynditracker/blob/master/CONTRIBUTING.rst
.. _`license file`: https://github.com/UCL/scikit-surgerynditracker/blob/master/LICENSE
.. source: doc/index.rst (balins/fuzzy-tree, BSD-3-Clause)

..
Welcome to fuzzytree's documentation!
=====================================
This is a reference to the `fuzzytree` module.
For implementation details, you can visit its `repository`_ on GitHub.
.. _repository: https://github.com/balins/fuzzytree
.. toctree::
:maxdepth: 2
:hidden:
:caption: Getting Started
quick_start
.. toctree::
:maxdepth: 2
:hidden:
:caption: Documentation
user_guide
api
.. toctree::
:maxdepth: 2
:hidden:
:caption: Examples
auto_examples/index
`Getting started <quick_start.html>`_
-------------------------------------
Basic information regarding usage of the `fuzzytree` module.
`User Guide <user_guide.html>`_
-------------------------------
A gentle introduction to the usage of `fuzzytree`'s API.
`API Documentation <api.html>`_
-------------------------------
The `fuzzytree`'s public API documentation.
`Examples <auto_examples/index.html>`_
--------------------------------------
A set of examples. It complements the `User Guide <user_guide.html>`_.
.. source: docs/source/code/adapters/sqliteadapter/privex.eos.adapters.SqliteAdapter.DEFAULT_DB_FOLDER.rst (Privex/eos-python, X11)

DEFAULT\_DB\_FOLDER
===================
.. currentmodule:: privex.eos.adapters
.. autoattribute:: SqliteAdapter.DEFAULT_DB_FOLDER
.. source: content/pages/jobs/determining-dynamics-perception-thresholds-of-bicycles.rst (thuiskens/mechmotum.github.io, CC-BY-4.0)

=======================================================
Determining Dynamics Perception Thresholds of Bicycles
=======================================================
:date: 2021-01-04
:status: hidden
:slug: jobs/msc/determining-dynamics-perception-thresholds-of-bicycles
The "handling qualities", or ease of control, of human controlled vehicles are
difficult to objectively characterize. The subjective nature of human
perception to changes in vehicle dynamics confounds the certainty in
hypothesized relationships between plant dynamics and the handling qualities.
For example, humans can even be tricked into incorrect judgements when there is
nuance in varied vehicle dynamics. Additionally, perception can be affected
differently for different tasks and for equivalent order plants with large
differences in the dynamics. A first step to reaching relationships between
plant dynamics, task type, and perception of handling is to understand the
threshold precision of subjectively reported perception for a human subject
under controlled conditions. This has been explored for laboratory tasks, such
as tracking a single degree of freedom motion on a screen with a joystick
[Wei2020]_ and aircraft stall [Smets2019]_, but not yet in more complex
experimental scenarios.
The bicycle-rider system is well suited to attempt perception threshold
identification due to the ability of constraining the plant as a SIMO system
and that it is a "lab-scale", low-cost experimental platform. The bicycle
provides unique plant dynamics that can be exploited, such as variable
stability, non-minimum phase behavior, and non-trivial system order. Some prior
work with a bicycle has been performed in [Kresie2017]_. There are two primary
options for varying the dynamics of the vehicle: 1) manually change specific
physical aspects of the vehicle or 2) to make use of the Bicycle Lab's
steer-by-wire bicycle in which the open loop dynamics between the handlebars
and fork can be implemented in software. Both have been done in the lab before
and there are merits to each. A method of setting precise and repeatable plant
dynamics will be required.
The primary goal of the project will be to develop experimentally derived
measures of the perception thresholds to changes in vehicle dynamics, i.e.
what's the smallest change in a physical characteristics that riders can
successfully detect.
Experiments will have to be carefully designed such that the rider does not
make perception judgements based on preconceived ideas about how physical
characteristics relate to handling and appropriate control tasks (maneuvers)
will have to be chosen that maximize the ability to make objective measures of
control success. The dynamics will have to be varied precisely for
repeatability and also in a way that both randomizes the order of changes and
hones in on threshold limits with a minimal number of trials. Multiple
riders will need to be evaluated to increase certainty in the perception
measures. Riders will need to be coaxed into utilizing similar passive
biomechanics or these differences will need to be measured. Lastly, careful
design of extracting the subjective assessment from the subject for a given
trial will need to be implemented.
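One common way to hone in on a perception limit with few trials is an adaptive staircase. The sketch below is purely illustrative of the idea; the detector, step sizes, and stopping rule are hypothetical, not a proposed protocol for this project:

```python
def staircase(can_detect, start=1.0, step=0.5, n_reversals=6):
    """1-up/1-down staircase: shrink the stimulus change after each detection,
    grow it after each miss; average the reversal levels as the threshold."""
    level, direction, reversals = start, 0, []
    while len(reversals) < n_reversals:
        new_direction = -1 if can_detect(level) else +1
        if direction and new_direction != direction:
            reversals.append(level)          # direction flipped: a reversal
        direction = new_direction
        step *= 0.8                          # take smaller steps as we converge
        level = max(1e-6, level + direction * step)
    return sum(reversals) / len(reversals)

# Hypothetical rider who reliably notices dynamic changes larger than 0.3 units:
estimate = staircase(lambda change: change > 0.3)
assert abs(estimate - 0.3) < 0.2
```

Real subjects respond stochastically, so in practice such procedures are combined with catch trials and multiple interleaved staircases, which is part of the experiment-design work described above.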
We will collaborate with Rene van Paassen in Human Machine Systems in Aerospace
Engineering.
How to Apply
============
Send an email to j.k.moore@tudelft.nl with the title of the project in the
subject line. Include an approximately half-page motivation letter explaining
why you want to work in the Bicycle Lab on this project along with your current
resume or C.V.
References
==========
.. [Wei2020] Fu, Wei, M. M. van Paassen, and Max Mulder. "Human Threshold Model for
Perceiving Changes in System Dynamics." IEEE Transactions on Human-Machine
Systems 50, no. 5 (October 2020): 444–53. https://doi.org/10.1109/THMS.2020.2989383.
.. [Smets2019] Smets, Stephan C., Coen C. de Visser, and Daan M. Pool. "Subjective
Noticeability of Variations in Quasi-Steady Aerodynamic Stall Dynamics." In
AIAA Scitech 2019 Forum. AIAA SciTech Forum. American Institute of
Aeronautics and Astronautics, 2019. https://doi.org/10.2514/6.2019-1485.
.. [Kresie2017] Kresie, Scott W., Jason K. Moore, Mont Hubbard, and Ronald A.
Hess. "Experimental Validation of Bicycle Handling Prediction," September
13, 2017. https://doi.org/10.6084/m9.figshare.5405233.v1
.. source: doc/driver-model/serial-howto.rst (U-Boot, via clkbit123/TheOpenHarmony)

.. SPDX-License-Identifier: GPL-2.0+
How to port a serial driver to driver model
===========================================
Almost all of the serial drivers have been converted as of January 2016. These
ones remain:
* serial_bfin.c
* serial_pxa.c
The deadline for this work was the end of January 2016. If no one steps
forward to convert these, at some point there may come a patch to remove them!
Here is a suggested approach for converting your serial driver over to driver
model. Please feel free to update this file with your ideas and suggestions.
- #ifdef out all your own serial driver code (#ifndef CONFIG_DM_SERIAL)
- Define CONFIG_DM_SERIAL for your board, vendor or architecture
- If the board does not already use driver model, you need CONFIG_DM also
- Your board should then build, but will not boot since there will be no serial
driver
- Add the U_BOOT_DRIVER piece at the end (e.g. copy serial_s5p.c)
- Add a private struct for the driver data - avoid using static variables
- Implement each of the driver methods, perhaps by calling your old methods
- You may need to adjust the function parameters so that the old and new
implementations can share most of the existing code
- If you convert all existing users of the driver, remove the pre-driver-model
code
In terms of patches a conversion series typically has these patches:
- clean up / prepare the driver for conversion
- add driver model code
- convert at least one existing board to use driver model serial
- (if no boards remain that don't use driver model) remove the old code
This may be a good time to move your board to use device tree also. Mostly
this involves these steps:
- define CONFIG_OF_CONTROL and CONFIG_OF_SEPARATE
- add your device tree files to arch/<arch>/dts
- update the Makefile there
- Add stdout-path to your /chosen device tree node if it is not already there
- build and get u-boot-dtb.bin so you can test it
- Your drivers can now use device tree
- For device tree in SPL, define CONFIG_SPL_OF_CONTROL
| 43.340426 | 79 | 0.762396 |
93dd46bd0413ce1e443630b6b8d0a0faa8dac431 | 3,858 | rst | reStructuredText | docs/api.rst | kayak/Flask-AppBuilder | cb0b0fbcac2c293cfcdcc0f227752b7c252dc7ed | [
"BSD-3-Clause"
] | 71 | 2016-11-02T06:45:42.000Z | 2021-11-15T12:33:48.000Z | docs/api.rst | Jayjake1/Flask-AppBuilder | 1cf35e1296d3e396fdf4667b256ba59df6c43d11 | [
"BSD-3-Clause"
] | 1 | 2021-02-23T14:56:42.000Z | 2021-02-23T14:56:42.000Z | docs/api.rst | Jayjake1/Flask-AppBuilder | 1cf35e1296d3e396fdf4667b256ba59df6c43d11 | [
"BSD-3-Clause"
] | 23 | 2016-11-02T06:45:44.000Z | 2022-02-08T14:55:13.000Z | =============
API Reference
=============
flask.ext.appbuilder
====================
AppBuilder
----------
.. automodule:: flask.ext.appbuilder.base
.. autoclass:: AppBuilder
:members:
.. automethod:: __init__
flask.ext.appbuilder.security.decorators
========================================
.. automodule:: flask.ext.appbuilder.security.decorators
.. autofunction:: has_access
.. autofunction:: permission_name
flask.ext.appbuilder.models.decorators
========================================
.. automodule:: flask.ext.appbuilder.models.decorators
.. autofunction:: renders
flask.ext.appbuilder.baseviews
==============================
.. automodule:: flask.ext.appbuilder.baseviews
.. autofunction:: expose
BaseView
--------
.. autoclass:: BaseView
:members:
BaseFormView
------------
.. autoclass:: BaseFormView
:members:
BaseModelView
-------------
.. autoclass:: BaseModelView
:members:
BaseCRUDView
------------
.. autoclass:: BaseCRUDView
:members:
flask.ext.appbuilder.views
==========================
.. automodule:: flask.ext.appbuilder.views
IndexView
---------
.. autoclass:: IndexView
:members:
SimpleFormView
--------------
.. autoclass:: SimpleFormView
:members:
PublicFormView
--------------
.. autoclass:: PublicFormView
:members:
ModelView
-----------
.. autoclass:: ModelView
:members:
MultipleView
----------------
.. autoclass:: MultipleView
:members:
MasterDetailView
----------------
.. autoclass:: MasterDetailView
:members:
CompactCRUDMixin
----------------
.. autoclass:: CompactCRUDMixin
:members:
flask.ext.appbuilder.actions
============================
.. automodule:: flask.ext.appbuilder.actions
.. autofunction:: action
flask.ext.appbuilder.security
=============================
.. automodule:: flask.ext.appbuilder.security.manager
BaseSecurityManager
-------------------
.. autoclass:: BaseSecurityManager
:members:
BaseRegisterUser
----------------
.. automodule:: flask.ext.appbuilder.security.registerviews
.. autoclass:: BaseRegisterUser
:members:
flask.ext.appbuilder.filemanager
================================
.. automodule:: flask.ext.appbuilder.filemanager
.. autofunction:: get_file_original_name
Aggr Functions for Group By Charts
==================================
.. automodule:: flask.ext.appbuilder.models.group
.. autofunction:: aggregate_count
.. autofunction:: aggregate_avg
.. autofunction:: aggregate_sum
flask.ext.appbuilder.charts.views
=================================
.. automodule:: flask.ext.appbuilder.charts.views
BaseChartView
-------------
.. autoclass:: BaseChartView
:members:
DirectByChartView
-----------------
.. autoclass:: DirectByChartView
:members:
GroupByChartView
----------------
.. autoclass:: GroupByChartView
:members:
(Deprecated) ChartView
----------------------
.. autoclass:: ChartView
:members:
(Deprecated) TimeChartView
--------------------------
.. autoclass:: TimeChartView
:members:
(Deprecated) DirectChartView
----------------------------
.. autoclass:: DirectChartView
:members:
flask.ext.appbuilder.models.mixins
==================================
.. automodule:: flask.ext.appbuilder.models.mixins
.. autoclass:: BaseMixin
:members:
.. autoclass:: AuditMixin
:members:
Extra Columns
-------------
.. autoclass:: FileColumn
:members:
.. autoclass:: ImageColumn
:members:
Generic Data Source (Beta)
--------------------------
flask.ext.appbuilder.models.generic
===================================
.. automodule:: flask.ext.appbuilder.models.generic
.. autoclass:: GenericColumn
:members:
.. autoclass:: GenericModel
:members:
.. autoclass:: GenericSession
:members:
| 16.701299 | 59 | 0.569984 |
5dd4d505c21522da7d544879e3c2f55e569b632a | 146 | rst | reStructuredText | docs/_autosummary/hstrat.helpers.AnyTreeAscendingIter.rst | mmore500/hstrat | 7fedcf3a7203e1e6c99ac16f4ec43ad160da3e6c | [
"MIT"
] | null | null | null | docs/_autosummary/hstrat.helpers.AnyTreeAscendingIter.rst | mmore500/hstrat | 7fedcf3a7203e1e6c99ac16f4ec43ad160da3e6c | [
"MIT"
] | 3 | 2022-02-28T17:33:57.000Z | 2022-02-28T21:41:33.000Z | docs/_autosummary/hstrat.helpers.AnyTreeAscendingIter.rst | mmore500/hstrat | 7fedcf3a7203e1e6c99ac16f4ec43ad160da3e6c | [
"MIT"
] | null | null | null | hstrat.helpers.AnyTreeAscendingIter
===================================
.. currentmodule:: hstrat.helpers
.. autofunction:: AnyTreeAscendingIter | 24.333333 | 38 | 0.623288 |
37c759d6f39f415828d2650dd43422053a1694db | 118 | rst | reStructuredText | README.rst | Midnighter/Everyday-Utilities | 03d18d636023fb0eff427e6634d6feb983e01aec | [
"BSD-3-Clause"
] | 1 | 2015-01-27T14:13:39.000Z | 2015-01-27T14:13:39.000Z | README.rst | Midnighter/Everyday-Utilities | 03d18d636023fb0eff427e6634d6feb983e01aec | [
"BSD-3-Clause"
] | null | null | null | README.rst | Midnighter/Everyday-Utilities | 03d18d636023fb0eff427e6634d6feb983e01aec | [
"BSD-3-Clause"
] | null | null | null | Everyday-Utilities
==================
A collection of classes and functions for various non-project-related purposes
| 23.6 | 78 | 0.711864 |
d036a93b4b9a9a7143caa825bdf1176ae6ef6411 | 675 | rst | reStructuredText | relationships/material/examples.rst | MarekSuchanek/OntoUML | af23cedcd8da522addf579317077e166e5fe2961 | [
"CC0-1.0"
] | 9 | 2020-01-15T09:21:51.000Z | 2022-02-03T10:22:48.000Z | relationships/material/examples.rst | MarekSuchanek/OntoUML | af23cedcd8da522addf579317077e166e5fe2961 | [
"CC0-1.0"
] | 57 | 2018-09-26T05:28:37.000Z | 2022-03-04T14:51:37.000Z | relationships/material/examples.rst | MarekSuchanek/OntoUML | af23cedcd8da522addf579317077e166e5fe2961 | [
"CC0-1.0"
] | 8 | 2018-11-05T14:01:59.000Z | 2021-10-20T06:49:58.000Z | Examples
--------
.. _material-examples-ex1:
**EX1:**
.. container:: figure
|Example marriage|
.. _material-examples-ex2:
**EX2:**
.. container:: figure
|Example supervizor|
For more examples see «:ref:`relator`», «:ref:`derivation`», «:ref:`mediation`», and «:ref:`relator-pattern`».
**Quoted from:**
GUIZZARDI, Giancarlo. *Ontological Foundations for Structural Conceptual Models.* Enschede: CTIT, Telematica Instituut, 2005.

GUIZZARDI, Giancarlo. *Introduction to Ontological Engineering.* [presentation] Prague: Prague University of Economics, 2011.
.. |Example marriage| image:: _images/marriage.png
.. |Example supervizor| image:: _images/supervizor.png | 27 | 251 | 0.72 |
b0c131687fd10b8a892f864238ec695ea8bc809a | 1,673 | rst | reStructuredText | doc/source/draft/enduser-guide/log_in_to_murano_instance.rst | ISCAS-VDI/murano-base | 34287bd9109b32a2bb0960c0428fe402dee6d9b2 | [
"Apache-2.0"
] | 1 | 2021-07-28T23:19:49.000Z | 2021-07-28T23:19:49.000Z | doc/source/draft/enduser-guide/log_in_to_murano_instance.rst | ISCAS-VDI/murano-base | 34287bd9109b32a2bb0960c0428fe402dee6d9b2 | [
"Apache-2.0"
] | null | null | null | doc/source/draft/enduser-guide/log_in_to_murano_instance.rst | ISCAS-VDI/murano-base | 34287bd9109b32a2bb0960c0428fe402dee6d9b2 | [
"Apache-2.0"
] | null | null | null | .. _login-murano-instance:
.. toctree::
   :maxdepth: 2
=================================
Log in to murano-spawned instance
=================================
After the application is successfully deployed, you may need to log in to the
virtual machine with the installed application.
All cloud images, including images imported from the
`OpenStack Application Catalog <http://apps.openstack.org/>`_,
have password authentication turned off. Therefore, it is not possible
to log in from the dashboard console. SSH is used to reach an instance spawned
by murano.
Possible default image users are:
* *ec2-user*
* *ubuntu* or *debian* (depending on the operating system)
To log in to murano-spawned instance, perform the following steps:
#. Prepare a key pair.
To log in through SSH, provide a key pair during the application creation.
If you do not have a key pair, click the plus sign to create one directly
from the :guilabel:`Configure Application` dialog.
.. image:: figures/add_key_pair.png
:alt: Application creation: key pair
:width: 630 px
#. After the deployment is completed, find out the instance IP address. For
this, see:
* Deployment logs
.. image:: figures/app_logs.png
:alt: Application logs: IP is provided
:width: 630 px
* Detailed instance parameters
See the :guilabel:`Instance name` link on the
:guilabel:`Component Details` page.
.. image:: figures/app_details.png
:alt: Application details: instance details link
:width: 630 px
#. To connect to the instance through SSH with the key pair, run:
.. code-block:: console
$ ssh <username>@<IP> -i <key.location>
| 27.883333 | 78 | 0.692767 |
92b8ca080f70c0f63223b7314baaf6938706e0ed | 1,419 | rst | reStructuredText | docs/index.rst | anukaal/django-classy-tags | 80f41d85ffa26ce5ea744f7392092c4b6b844d83 | [
"BSD-3-Clause"
] | 51 | 2020-09-04T10:51:38.000Z | 2022-03-19T20:26:42.000Z | docs/index.rst | anukaal/django-classy-tags | 80f41d85ffa26ce5ea744f7392092c4b6b844d83 | [
"BSD-3-Clause"
] | 23 | 2015-01-30T05:47:29.000Z | 2017-05-30T06:19:01.000Z | docs/index.rst | anukaal/django-classy-tags | 80f41d85ffa26ce5ea744f7392092c4b6b844d83 | [
"BSD-3-Clause"
] | 15 | 2015-01-29T13:47:56.000Z | 2018-01-22T05:08:28.000Z | .. django-classy-tags documentation master file, created by
sphinx-quickstart on Mon Aug 9 21:31:48 2010.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to django-classy-tags's documentation!
==============================================
django-classy-tags is an approach to making writing template tags in Django
easier, shorter and more fun by providing an extensible argument parser which
reduces most of the boiler plate code you usually have to write when coding
custom template tags.
django-classy-tags does **no magic by design**. Thus you will not get automatic
registering/loading of your tags like other solutions provide. You will not get
automatic argument guessing from function signatures but rather you have to
declare what arguments your tag accepts. There is also no magic in your template
tag class either, it's just a subclass of :class:`django.template.Node` which
invokes a parser class to parse the arguments when it's initialized and resolves
those arguments into keyword arguments in its ``render`` method and calls its
``render_tag`` method with those keyword arguments.
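The flow just described -- declared arguments resolved into keyword arguments
and handed to ``render_tag`` -- can be sketched in plain Python. This is a
conceptual illustration only, not the actual django-classy-tags API: the
``Argument`` and ``DeclaredTag`` names below are invented for the example, and
only ``render``/``render_tag`` correspond to the methods mentioned above.

```python
class Argument:
    """A declared template tag argument with an optional default."""
    def __init__(self, name, default=None):
        self.name = name
        self.default = default

    def resolve(self, raw, context):
        # A real implementation would resolve template variables against
        # the context; this sketch just falls back to the default.
        return raw if raw is not None else self.default


class DeclaredTag:
    """Stands in for a django.template.Node subclass: render() resolves the
    declared arguments into keyword arguments and calls render_tag()."""
    options = [Argument("name", default="world")]

    def __init__(self, raw_args):
        # raw_args stands in for what the parser extracted from the token
        self.raw_args = raw_args

    def render(self, context):
        kwargs = {
            arg.name: arg.resolve(self.raw_args.get(arg.name), context)
            for arg in self.options
        }
        return self.render_tag(context, **kwargs)

    def render_tag(self, context, name):
        return "Hello %s!" % name


print(DeclaredTag({"name": "classy"}).render({}))  # Hello classy!
print(DeclaredTag({}).render({}))                  # Hello world!
```

The point of the pattern is that the tag class only declares what it accepts;
resolution into keyword arguments happens once, in one place.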
Contents:
.. toctree::
:maxdepth: 2
installation
usage
arguments
reference
extend
contribute
changes
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 33 | 80 | 0.730092 |
f96092280167a02e2b43221f707129df9f9507b8 | 6,216 | rst | reStructuredText | docs/sphinx/source/wiki/Workshop.rst | google-code-export/evennia | f458265bec64909d682736da6a118efa21b42dfc | [
"BSD-3-Clause"
] | null | null | null | docs/sphinx/source/wiki/Workshop.rst | google-code-export/evennia | f458265bec64909d682736da6a118efa21b42dfc | [
"BSD-3-Clause"
] | null | null | null | docs/sphinx/source/wiki/Workshop.rst | google-code-export/evennia | f458265bec64909d682736da6a118efa21b42dfc | [
"BSD-3-Clause"
] | null | null | null | rtclient protocol
=================
*Note: Most functionality of a webcliebnt implementation is already
added to trunk as of Nov 2010. That implementation does however not use
a custom protocol as suggested below. Rather it parses telnet-formatted
ansi text and converts it to html. Custom client operations (such as
opening windows or other features not relevant to telnet or other
protocols) should instead eb handled by a second "data" object being
passed to the server through the msg() method.*
rtclient is an extended and bastardized telnet protocol that processes
html and javascript embedded in the telnet session.
rtclient is implemented by the Teltola client, a web-based html/js
telnet client that is being integrated with Evennia and is written in
twisted/python.
There are two principle aspects to the rtclient protocol, mode control
and buffering.
Modes
=====
Unencoded Mode
--------------
All output is buffered until ASCII char 10, 13, or 255 is encountered,
until the mode changes, or until the buffer is non-blank and no output
has been added to it in the last 1/10th second. When this occurs, the
client interprets the entire buffer as plain text and flushes the buffer.
HTML Mode
---------
All output is buffered. When the mode changes, the client then parses
the entire buffer as HTML.
Javascript Mode
---------------
All output is buffered. When the mode changes, the client then parses
the entire buffer as Javascript.
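As an illustration of the mode handling above, here is a small,
self-contained Python sketch that splits an incoming character stream into
(mode, payload) chunks on the mode tokens. It mirrors the protocol
description only -- it is not taken from the Teltola source, and the
``split_modes`` function name is invented for the example.

```python
HTML_TOKEN = chr(240)
JAVASCRIPT_TOKEN = chr(241)
UNENCODED_TOKEN = chr(242)

MODE_NAMES = {
    HTML_TOKEN: "html",
    JAVASCRIPT_TOKEN: "javascript",
    UNENCODED_TOKEN: "unencoded",
}


def split_modes(stream, initial_mode="unencoded"):
    """Yield (mode, buffered_text) pairs; a mode token flushes the buffer."""
    chunks = []
    mode, buf = initial_mode, []
    for ch in stream:
        if ch in MODE_NAMES:
            # a mode change flushes whatever was buffered in the old mode
            if buf:
                chunks.append((mode, "".join(buf)))
            mode, buf = MODE_NAMES[ch], []
        else:
            buf.append(ch)
    if buf:
        chunks.append((mode, "".join(buf)))
    return chunks


stream = HTML_TOKEN + "<h1>Test</h1>" + UNENCODED_TOKEN + "Hello there."
print(split_modes(stream))
# [('html', '<h1>Test</h1>'), ('unencoded', 'Hello there.')]
```

Note how the buffer is only interpreted when the mode changes, exactly as the
HTML and Javascript mode descriptions require.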
Sample Sessions
===============
# start html mode, send html, force buffer flush
::
session.msg(chr(240) + "<h1>Test</h1>" + chr(242) + chr(244))
# same as above, but realize that msg sends end-of-line automatically,
# thus sending the buffer via AUTOCHUNK
::
session.msg(chr(240) + "<h1>Test</h1>" + chr(242))
# more elaborate example sending javascript, html, and unbuffered text
# note we are using the tokens imported instead of the constants
::
from game.gamesrc.teltola.RTClient import HTML_TOKEN, JAVASCRIPT_TOKEN, UNENCODED_TOKEN
hello_world_js = "alert('hello world');"
welcome_html = "<h1>Hello World</h1>"
session.msg("".join([JAVASCRIPT_TOKEN, hello_world_js, HTML_TOKEN, welcome_html, UNENCODED_TOKEN,"Hello there."]))
# disable autochunking so embedded line breaks are preserved, then re-enable it

::
session.msg(chr(243))
session.msg(my_text_with_line_breaks)
session.msg(chr(244))
Values of Tokens
================
+-------------+--------------------+------------------------------------------------------------+
| chr() value | name               | function                                                   |
+=============+====================+============================================================+
| 240         | HTML_TOKEN         | lets client know it is about to receive HTML               |
+-------------+--------------------+------------------------------------------------------------+
| 241         | JAVASCRIPT_TOKEN   | lets client know it is about to receive javascript         |
+-------------+--------------------+------------------------------------------------------------+
| 242         | UNENCODED_TOKEN    | lets client know it is about to receive plain telnet text  |
+-------------+--------------------+------------------------------------------------------------+
| 243         | NO_AUTOCHUNK_TOKEN | applies to unencoded mode only; prevents the chunking of   |
|             |                    | text at end-of-line characters so that only mode changes   |
|             |                    | force the buffer to be sent to the client                  |
+-------------+--------------------+------------------------------------------------------------+
| 244         | AUTOCHUNK_TOKEN    | applies to unencoded mode only; enables automatic chunking |
|             |                    | of text by end-of-line characters and by non-blank buffers |
|             |                    | not having been written to in the last 1/10th second       |
+-------------+--------------------+------------------------------------------------------------+
identifying as an rtclient
--------------------------
rtclients send the text rtclient\\n immediately after connection so that
the server may enable rtclient extensions.
Buffering Control
-----------------
Unbuffered output is not supported. There are two different buffering
methods supported. The default method is called AUTOCHUNK and applies
only to unencoded data (see data encoding section below). JAVASCRIPT and
HTML data is always treated as NO\_AUTOCHUNK.
NO\_AUTOCHUNK
~~~~~~~~~~~~~
Contents are never sent to the client until the encoding mode changes
(for example, switching from HTML to UNENCODED will send the HTML
buffer) or the buffering mode changes (for example, one could set
NO\_AUTOCHUNK, send some text, and set NO\_AUTOCHUNK again to force a
flush).
AUTOCHUNK
~~~~~~~~~
It sends the buffer to the client as unencoded text whenever one of
two things happen:
**the buffer is non-blank but hasn't had anything added to it very
recently (about 1/10th of a second)** the buffer ends with an
end-of-line character (10, 13, 255)
Autochunking strips end-of-line characters and the client adds in its
own EOL! If you would like to preserve them, send them from within
NO\_AUTOCHUNK.
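The two flush conditions above can be sketched as follows. This is a
conceptual model of the AUTOCHUNK rule only -- the ``should_flush`` and
``chunk`` helper names are invented for the example, and the real client
tracks the idle time internally.

```python
EOL_CHARS = {chr(10), chr(13), chr(255)}
IDLE_FLUSH_SECONDS = 0.1  # "about 1/10th of a second"


def should_flush(buffer, idle_seconds):
    """Return True when an AUTOCHUNK buffer should be sent to the client."""
    if not buffer:
        return False          # blank buffers are never flushed
    if buffer[-1] in EOL_CHARS:
        return True           # buffer ends with an end-of-line character
    return idle_seconds >= IDLE_FLUSH_SECONDS  # non-blank but idle


def chunk(buffer):
    """Autochunking strips trailing end-of-line characters;
    the client adds in its own EOL."""
    return buffer.rstrip("".join(EOL_CHARS))


print(should_flush("hello\n", 0.0))   # True  -- ends with EOL
print(should_flush("hel", 0.2))       # True  -- idle too long
print(should_flush("hel", 0.01))      # False -- keep buffering
print(chunk("hello\r\n"))             # hello
```

If the stripped EOL matters to you, this model also shows why: wrap the send
in NO\_AUTOCHUNK instead, as described above.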
| 47.815385 | 203 | 0.505148 |
ebf99c06dfe4d6a0b8ace1bb395f855b538b58c5 | 583 | rst | reStructuredText | docs/source/auto_examples/pipeline/sg_execution_times.rst | ZhiningLiu1998/imbalanced-ensemble | 26670c8a6b7bab26ae1e18cba3174a9d9038a680 | [
"MIT"
] | 87 | 2021-05-19T08:29:26.000Z | 2022-03-30T23:59:05.000Z | docs/source/auto_examples/pipeline/sg_execution_times.rst | breakend2010/imbalanced-ensemble | d6ed07689ad8c9cd27ecd589849f7122b98ab1a4 | [
"MIT"
] | 8 | 2021-05-28T10:27:28.000Z | 2022-01-11T11:21:03.000Z | docs/source/auto_examples/pipeline/sg_execution_times.rst | breakend2010/imbalanced-ensemble | d6ed07689ad8c9cd27ecd589849f7122b98ab1a4 | [
"MIT"
] | 18 | 2021-05-19T08:30:29.000Z | 2022-03-28T08:30:10.000Z |
:orphan:
.. _sphx_glr_auto_examples_pipeline_sg_execution_times:
Computation times
=================
**00:55.730** total execution time for **auto_examples_pipeline** files:
+--------------------------------------------------------------------------------------------------------------+-----------+---------+
| :ref:`sphx_glr_auto_examples_pipeline_plot_pipeline_classification.py` (``plot_pipeline_classification.py``) | 00:55.730 | 12.9 MB |
+--------------------------------------------------------------------------------------------------------------+-----------+---------+
| 44.846154 | 134 | 0.403087 |
2757807112194009b737791bc2e2e9230726c03d | 12,003 | rst | reStructuredText | docs/news/news-v3.0.x.rst | zhaog6/ompi | cc9647e6c1402f6432882ae9f426307ecb7ac721 | [
"BSD-3-Clause-Open-MPI"
] | null | null | null | docs/news/news-v3.0.x.rst | zhaog6/ompi | cc9647e6c1402f6432882ae9f426307ecb7ac721 | [
"BSD-3-Clause-Open-MPI"
] | null | null | null | docs/news/news-v3.0.x.rst | zhaog6/ompi | cc9647e6c1402f6432882ae9f426307ecb7ac721 | [
"BSD-3-Clause-Open-MPI"
] | 1 | 2022-02-04T06:10:07.000Z | 2022-02-04T06:10:07.000Z | Open MPI v3.0.x series
======================
This file contains all the NEWS updates for the Open MPI v3.0.x
series, in reverse chronological order.
Open MPI version 3.0.6
----------------------
:Date: March, 2020
- Fix one-sided shared memory window configuration bug.
- Fix support for PGI'18 compiler.
- Fix run-time linker issues with OMPIO on newer Linux distros.
- Allow the user to override modulefile_path in the Open MPI SRPM,
even if ``install_in_opt`` is set to 1.
- Properly detect ConnectX-6 HCAs in the openib BTL.
- Fix segfault in the MTL/OFI initialization for large jobs.
- Fix various portals4 control flow bugs.
- Fix communications ordering for alltoall and Cartesian neighborhood
collectives.
- Fix an infinite recursion crash in the memory patcher on systems
with glibc v2.26 or later (e.g., Ubuntu 18.04) when using certain
OS-bypass interconnects.
Open MPI version 3.0.5
----------------------
:Date: November, 2019
- Fix OMPIO issue limiting file reads/writes to 2GB. Thanks to
Richard Warren for reporting the issue.
- At run time, automatically disable Linux cross-memory attach (CMA)
for vader BTL (shared memory) copies when running in user namespaces
(i.e., containers). Many thanks to Adrian Reber for raising the
issue and providing the fix.
- Sending very large MPI messages using the ofi MTL will fail with
some of the underlying Libfabric transports (e.g., PSM2 with
  messages >=4GB, verbs with messages >=2GB). Prior versions of Open
MPI failed silently; this version of Open MPI invokes the
appropriate MPI error handler upon failure. See
https://github.com/open-mpi/ompi/issues/7058 for more details.
Thanks to Emmanuel Thomé for raising the issue.
- Fix case where 0-extent datatypes might be eliminated during
  optimization. Thanks to GitHub user @tjahns for raising the issue.
- Ensure that the MPIR_Breakpoint symbol is not optimized out on
problematic platforms.
- Fix OMPIO offset calculations with ``SEEK_END`` and ``SEEK_CUR`` in
``MPI_FILE_GET_POSITION``. Thanks to Wei-keng Liao for raising the
issue.
- Fix corner case for datatype extent computations. Thanks to David
Dickenson for raising the issue.
- Fix MPI buffered sends with the "cm" PML.
- Update to PMIx v2.2.3.
- Fix ssh-based tree-based spawning at scale. Many thanks to GitHub
user @zrss for the report and diagnosis.
- Fix the Open MPI RPM spec file to not abort when grep fails. Thanks
to Daniel Letai for bringing this to our attention.
- Handle new SLURM CLI options (SLURM 19 deprecated some options that
Open MPI was using). Thanks to Jordan Hayes for the report and the
initial fix.
- OMPI: fix division by zero with an empty file view.
- Also handle ``shmat()``/``shmdt()`` memory patching with OS-bypass networks.
- Add support for unwinding info to all files that are present in the
stack starting from ``MPI_Init``, which is helpful with parallel
debuggers. Thanks to James Clark for the report and initial fix.
- Fixed inadvertent use of bitwise operators in the MPI C++ bindings
header files. Thanks to Bert Wesarg for the report and the fix.
- Added configure option ``--disable-wrappers-runpath`` (alongside the
already-existing ``--disable-wrappers-rpath`` option) to prevent Open
MPI's configure script from automatically adding runpath CLI options
to the wrapper compilers.
Open MPI version 3.0.4
----------------------
:Date: April, 2019
- Fix compile error when configured with ``--enable-mpi-java`` and
``--with-devel-headers``. Thanks to @g-raffy for reporting the issue.
- Fix possible floating point rounding and division issues in OMPIO
which led to crashes and/or data corruption with very large data.
  Thanks to Axel Huebl and René Widera for identifying the issue,
supplying and testing the fix (** also appeared: v3.0.4).
- Use ``static_cast<>`` in ``mpi.h`` where appropriate. Thanks to @shadow-fx
for identifying the issue.
- Fix datatype issue with RMA accumulate. Thanks to Jeff Hammond for
raising the issue.
- Fix RMA accumulate of non-predefined datatypes with predefined
operators. Thanks to Jeff Hammond for raising the issue.
- Fix race condition when closing open file descriptors when launching
MPI processes. Thanks to Jason Williams for identifying the issue and
supplying the fix.
- Fix Valgrind warnings for some ``MPI_TYPE_CREATE_*`` functions. Thanks
to Risto Toijala for identifying the issue and supplying the fix.
- Fix ``MPI_TYPE_CREATE_F90_{REAL,COMPLEX}`` for r=38 and r=308.
- Fix assembly issues with old versions of gcc (<6.0.0) that affected
the stability of shared memory communications (e.g., with the vader
BTL).
- Fix the OFI MTL handling of ``MPI_ANY_SOURCE``.
- Fix noisy errors in the openib BTL with regards to
``ibv_exp_query_device()``. Thanks to Angel Beltre and others who
reported the issue.
Open MPI version 3.0.3
----------------------
:Date: October, 2018
- Fix race condition in ``MPI_THREAD_MULTIPLE`` support of non-blocking
send/receive path.
- Fix error handling ``SIGCHLD`` forwarding.
- Add support for ``CHARACTER`` and ``LOGICAL`` Fortran datatypes for ``MPI_SIZEOF``.
- Fix compile error when using OpenJDK 11 to compile the Java bindings.
- Fix crash when using a hostfile with a 'user@host' line.
- Numerous Fortran '08 interface fixes.
- TCP BTL error message fixes.
- OFI MTL now will use any provider other than shm, sockets, tcp, udp, or
rstream, rather than only supporting gni, psm, and psm2.
- Disable async receive of CUDA buffers by default, fixing a hang
on large transfers.
- Support the BCM57XXX and BCM58XXX Broadcom adapters.
- Fix minmax datatype support in ROMIO.
- Bug fixes in vader shared memory transport.
- Support very large buffers with ``MPI_TYPE_VECTOR``.
- Fix hang when launching with mpirun on Cray systems.
- Bug fixes in OFI MTL.
- Assorted Portals 4.0 bug fixes.
- Fix for possible data corruption in ``MPI_BSEND``.
- Move shared memory file for vader btl into ``/dev/shm`` on Linux.
- Fix for ``MPI_ISCATTER``/``MPI_ISCATTERV`` Fortran interfaces with ``MPI_IN_PLACE``.
- Upgrade PMIx to v2.1.4.
- Fix for Power9 built-in atomics.
- Numerous One-sided bug fixes.
- Fix for race condition in uGNI BTL.
- Improve handling of large number of interfaces with TCP BTL.
- Numerous UCX bug fixes.
- Add support for QLogic and Broadcom Cumulus RoCE HCAs to Open IB BTL.
- Add patcher support for aarch64.
- Fix hang on Power and ARM when Open MPI was built with low compiler
optimization settings.
Open MPI version 3.0.2
----------------------
:Date: June, 2018
- Disable osc/pt2pt when using ``MPI_THREAD_MULTIPLE`` due to numerous
race conditions in the component.
- Fix dummy variable names for the mpi and mpi_f08 Fortran bindings to
match the MPI standard. This may break applications which use
name-based parameters in Fortran which used our internal names
rather than those documented in the MPI standard.
- Fixed ``MPI_SIZEOF`` in the "mpi" Fortran module for the NAG compiler.
- Fix RMA function signatures for ``use-mpi-f08`` bindings to have the
  asynchronous property on all buffers.
- Fix Fortran ``MPI_COMM_SPAWN_MULTIPLE`` to properly follow the count
length argument when parsing the array_of_commands variable.
- Revamp Java detection to properly handle new Java versions which do
not provide a javah wrapper.
- Improved configure logic for finding the UCX library.
- Add support for HDR InfiniBand link speeds.
- Disable the POWER 7/BE block in configure. Note that POWER 7/BE is
still not a supported platform, but it is no longer automatically
disabled. See
https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
for more information.
Open MPI version 3.0.1
----------------------
:Date: March, 2018
- Fix ability to attach parallel debuggers to MPI processes.
- Fix a number of issues in MPI I/O found by the HDF5 test suite.
- Fix (extremely) large message transfers with shared memory.
- Fix out of sequence bug in multi-NIC configurations.
- Fix stdin redirection bug that could result in lost input.
- Disable the LSF launcher if CSM is detected.
- Plug a memory leak in ``MPI_Mem_free()``. Thanks to Philip Blakely for reporting.
- Fix the tree spawn operation when the number of nodes is larger than the radix.
Thanks to Carlos Eduardo de Andrade for reporting.
- Fix Fortran 2008 macro in MPI extensions. Thanks to Nathan T. Weeks for
reporting.
- Add UCX to list of interfaces that OpenSHMEM will use by default.
- Add ``--{enable|disable}-show-load-errors-by-default`` to control
default behavior of the load errors option.
- OFI MTL improvements: handle empty completion queues properly, fix
incorrect error message around ``fi_getinfo()``, use default progress
option for provider by default, Add support for reading multiple
CQ events in ofi_progress.
- PSM2 MTL improvements: Allow use of GPU buffers, thread fixes.
- Numerous corrections to memchecker behavior.
- Add a mca parameter ``ras_base_launch_orted_on_hn`` to allow for launching
MPI processes on the same node where mpirun is executing using a separate
orte daemon, rather than the mpirun process. This may be useful to set to
true when using SLURM, as it improves interoperability with SLURM's signal
propagation tools. By default it is set to false, except for Cray XC systems.
- Fix a problem reported on the mailing list separately by Kevin McGrattan and Stephen
Guzik about consistency issues on NFS file systems when using OMPIO. This fix
  also introduces a new mca parameter ``fs_ufs_lock_algorithm`` which allows you
  to control the locking algorithm used by ompio for read/write operations. By
  default, ompio does not perform locking on local UNIX file systems, locks the
entire file per operation on NFS file systems, and selective byte-range
locking on other distributed file systems.
- Add an mca parameter ``pmix_server_usock_connections`` to allow mpirun to
support applications statically built against the Open MPI v2.x release,
or installed in a container along with the Open MPI v2.x libraries. It is
set to false by default.
Open MPI version 3.0.0
----------------------
:Date: September, 2017
.. important:: Major new features:

   - Use UCX allocator for OSHMEM symmetric heap allocations to optimize
     intra-node data transfers. UCX SPML only.
   - Use UCX multi-threaded API in the UCX PML. Requires UCX 1.0 or later.
   - Added support for Flux PMI
   - Update embedded PMIx to version 2.1.0
   - Update embedded hwloc to version 1.11.7
- Per Open MPI's versioning scheme (see the README), increasing the
major version number to 3 indicates that this version is not
ABI-compatible with prior versions of Open MPI. In addition, there may
be differences in MCA parameter names and defaults from previous releases.
Command line options for mpirun and other commands may also differ from
previous versions. You will need to recompile MPI and OpenSHMEM applications
to work with this version of Open MPI.
- With this release, Open MPI supports ``MPI_THREAD_MULTIPLE`` by default.
- New configure options have been added to specify the locations of libnl
and zlib.
- A new configure option has been added to request Flux PMI support.
- The help menu for mpirun and related commands is now context based.
``mpirun --help compatibility`` generates the help menu in the same format
as previous releases.
.. attention:: Removed legacy support:

   - AIX is no longer supported.
   - LoadLeveler is no longer supported.
   - OpenSHMEM currently supports the UCX and MXM transports via the ucx and
     ikrit SPMLs respectively.
   - Remove IB XRC support from the OpenIB BTL due to lack of support.
   - Remove support for big endian PowerPC.
   - Remove support for XL compilers older than v13.1
.. note:: Known issues:

   - MPI_Connect/accept between applications started by different mpirun
     commands will fail, even if ompi-server is running.
e7549679b98b57ec2234bdc18fca10f555c799c1 | 1,870 | rst | reStructuredText | docs/buildprocedure/build.instructions.rst | hashnfv/hashnfv-ovsnfv | 5b7a8fe64efa5d1bc32a0c5328e073463fd2393b | [
"Apache-2.0"
] | 3 | 2017-02-13T15:25:18.000Z | 2017-08-30T13:47:59.000Z | docs/buildprocedure/build.instructions.rst | opnfv/ovsnfv | 5b7a8fe64efa5d1bc32a0c5328e073463fd2393b | [
"Apache-2.0"
] | null | null | null | docs/buildprocedure/build.instructions.rst | opnfv/ovsnfv | 5b7a8fe64efa5d1bc32a0c5328e073463fd2393b | [
"Apache-2.0"
] | null | null | null | .. OPNFV - Open Platform for Network Function Virtualization
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
========
Abstract
========
This document describes the optional build of the OPNFV Colorado release
of the OVSNFV RPMs. The dependencies and required
system resources are also described.
============
Introduction
============
This document describes how to build the OVSNFV RPMs. These RPMs are incorporated into the
Apex ISO artifacts, so no action is required for an Apex installation of OPNFV.
This document describes the optional standalone build of the OVSNFV RPMs.
============
Requirements
============
Minimum Software Requirements
=============================
The build host should run CentOS 7.0.
Setting up OPNFV Gerrit in order to being able to clone the code
----------------------------------------------------------------
- Start setting up OPNFV Gerrit by creating an SSH key (unless you
  already have one); create one with ``ssh-keygen``
- Add your generated public key in `OPNFV Gerrit <https://gerrit.opnfv.org/>`_
  (this requires a Linux Foundation account; create one if you do not
  already have one)
- Select "SSH Public Keys" on the left, then "Add Key", and paste
  your public key in.
Clone the OPNFV code Git repository with your SSH key
-----------------------------------------------------
Clone the code repository:
.. code-block:: bash
$ git clone ssh://<Linux foundation user>@gerrit.opnfv.org:29418/ovsnfv
Clone the OPNFV code Git repository using HTTPS
-----------------------------------------------
.. code-block:: bash
$ git clone https://gerrit.opnfv.org:29418/ovsnfv
========
Building
========
Build using build.sh
--------------------
.. code-block:: bash
$ cd ovsnfv/ci
$ ./build.sh
=====================
Quick Start (Verilog)
=====================

How to try an RTL simulation
============================

Both the C++ and Python APIs can export Verilog RTL source files.
For instructions on how to simulate the exported RTL, see for example
https://github.com/ryuz/BinaryBrain/blob/ver4_release/samples/verilog/mnist/README.md
.. _whatsnew:
Release Notes
===============
For a list of contributors, see :ref:`about`.
1.0.1
-----
.. _rel1.0.1:
*2021 December 9*
This is a very minor re-release, to fix some documentation formatting and release packaging issues with the 1.0.0 release. No changes in functionality.
1.0.0
-----
.. _rel1.0.0:
*2021 December 7*
This is a major release with significant enhancements and changes, in particular with regards to changes in wavefront sign convention representations.
.. admonition:: Changes and Clarifications in Signs for Wavefront Error and Phase
**Some sign conventions for wavefront error and optical phase have changed in this version of poppy**
This release includes optical algorithm updates after a thorough audit and cross-check of sign conventions for phase and wavefront error, disambiguating portions of the
sign conventions and code to ensure consistency with several other relevant optical modeling packages. Poppy now strictly follows the sign conventions as advocated in e.g.
Wyant and Creath's `Basic Wavefront Aberration Theory for Optical Metrology <https://ui.adsabs.harvard.edu/abs/1992aooe...11....2W/abstract>`_ (or see `here <https://wp.optics.arizona.edu/jcwyant/wp-content/uploads/sites/13/2016/08/03-BasicAberrations_and_Optical_Testing.pdf>`_). This makes poppy consistent with the convention more widely used in optical metrology and other optical software such as Code V; however this is not consistent with some other reference such as Goodman's classic text *Fourier Optics*.
To achieve that consistency, *this is a partially back-incompatible release*, with
changes in the signs of complex exponentials in some Fourier propagation calculations. Depending on your use case this may result in some changes in output PSFs or
different signs or orientations from prior results.
See `Sign Conventions for Coordinates, Phase, and Wavefront Error <https://poppy-optics.readthedocs.io/en/latest/sign_conventions_for_coordinates_and_phase.html>`_ for details, discussion, and demonstration.
Many thanks to Derek
Sabatke (Ball Aerospace); Matthew Bergkoetter, Alden Jurling, and Tom Zielinski (NASA GSFC); and
Randal Telfer (STScI) for invaluable discussions and aid in getting these
details onto a more rigorous footing.
**API Changes:**
* Several functions in the Zernike module were renamed for clarity, in particular the prior ``opd_expand`` is now :py:func:`~poppy.zernike.decompose_opd`, and ``opd_from_zernikes`` is now :py:func:`~poppy.zernike.compose_opd_from_basis`.
The prior function names also continue to work as aliases for backwards compatibility. (:pr:`471` by :user:`mperrin`)
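The least-squares projection performed by `decompose_opd` can be sketched with plain numpy (an illustration of the underlying math on synthetic arrays, not poppy's actual implementation):

```python
import numpy as np

# Synthetic 2-D "OPD" built from three known basis terms
ny = nx = 64
y, x = np.indices((ny, nx))
basis = np.stack([np.ones((ny, nx)),      # piston
                  (x - nx / 2) / nx,      # tilt in x
                  (y - ny / 2) / ny])     # tilt in y
true_coeffs = np.array([0.5, 2.0, -1.0])
opd = np.tensordot(true_coeffs, basis, axes=1)

# Fit: flatten each basis term into one column of the design matrix
design = basis.reshape(len(basis), -1).T       # shape (npix, nterms)
coeffs, *_ = np.linalg.lstsq(design, opd.ravel(), rcond=None)
print(coeffs)   # recovers [0.5, 2.0, -1.0]
```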
**New Functionality:**
* New class :py:obj:`~poppy.TipTiltStage`, which allows putting additional tip-tilt on any arbitrary optic, and adjusting/controlling the tip and tilt. See `here <https://poppy-optics.readthedocs.io/en/latest/available_optics.html#Tip-Tilt-Stage>`_ for example. (:pr:`414` by :user:`mperrin`)
* New class :py:obj:`~poppy.CircularSegmentedDeformableMirror`, which models an aperture comprising several individually-controllable circular mirrors. See `here <https://poppy-optics.readthedocs.io/en/latest/available_optics.html#Circularly-Segmented-Deformable-Mirrors>`_ for example. (:pr:`407` and :pr:`424` by :user:`Teusia`)
* New class :py:obj:`~poppy.KolmogorovWFE`, which models the phase distortions in a turbulent atmosphere. See `this notebook <https://github.com/spacetelescope/poppy/blob/develop/notebooks/Propagation%20through%20turbulent%20atmosphere.ipynb>`_ for details. (:pr:`437` by :user:`DaPhil`)
* New class :py:obj:`~poppy.ThermalBloomingWFE`, which models the change in WFE from heating of air (or other transmission medium) due to high powered laser beams. See `this notebook <https://github.com/spacetelescope/poppy/blob/develop/notebooks/Thermal%20Blooming%20Demo.ipynb>`_ for details. (:pr:`438` by :user:`DaPhil`)
**Other enhancements and fixes:**
* Wavefront instances gain a `.wfe` attribute for the wavefront error in meters (computed from phase, so it will wrap if wavefront error exceeds +- 0.5 waves), and the wavefront display method can display wfe as well as intensity and phase.
* Faster algorithm for calculations in the :py:func:`~poppy.zernike.opd_from_zernikes` function (:pr:`400` by :user:`grbrady`). Run time of this function was reduced roughly in half.
* Various performance enhancements in FFTs, array rotations, zero padding, and array indexing in certain cases (:pr:`394`, :pr:`398`, :pr:`411`, :pr:`413` by :user:`mperrin`)
* Bug fix to a sign inconsistency in wavefront rotation: While the documentation states that positive rotations are counterclockwise, the code had the other sign. Updated code to match the documented behavior, which also matches the rotation convention for optical elements. (:pr:`411` by :user:`mperrin`)
* More robust algorithm for offset sources in optical systems with coordinate rotations and inversions (:pr:`420` by :user:`mperrin`). This ensures the correct sign of tilt is applied in the entrance pupil plane to achieve the requested source position in the output image plane.
* Added ``inwave=`` parameter to ``calc_psf`` and related functions, for both Fresnel and Fraunhofer propagation types, to allow providing a custom input wavefront, for instance the output of some prior upstream calculation. If provided, this is used instead of the default input wavefront (a plane wave of uniform intensity). (:pr:`402` by :user:`kian1377`)
* Improved support for astropy Quantities, including being able to specify monochromatic wavelengths using Quantities of wavelength, and to specify optic shifts using Quantities in length or angular units as appropriate (:pr:`445`, :pr:`447` by :user:`mperrin`).
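The phase-wrapping caveat for the new `.wfe` attribute (wavefront error recovered from phase wraps beyond ±0.5 waves) can be illustrated with a toy numpy calculation (the wavelength and OPD values here are made up; this is not poppy's code):

```python
import numpy as np

wavelength = 1e-6                        # 1 micron, arbitrary choice
opd = np.array([0.0, 0.2e-6, 0.6e-6])    # true OPD in meters; last exceeds lambda/2

phase = 2 * np.pi * opd / wavelength     # phase carried by the wavefront
wrapped = np.angle(np.exp(1j * phase))   # phase as recovered from the complex field
wfe = wrapped / (2 * np.pi) * wavelength # back to meters

print(wfe)   # [0.0, 0.2e-6, -0.4e-6]: the 0.6e-6 value wraps to -0.4e-6
```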
**Software Infrastructure Updates and Internals:**
* Continuous integration system migrated to Github Actions, replacing previous use of Travis CI. (:pr:`434` by :user:`shanosborne`)
* Updates to recommended (not minimum) dependency versions to track latest numpy, scipy, etc (various PRs by :user:`shanosborne`)
* Updates to minimum dependency versions, generally to upstream releases as of mid-2020. (:pr:`415`, :pr:`472` by :user:`mperrin`)
* Swap to use of base ``synphot`` rather than ``stsynphot`` package, to avoid dependency on many GBs of reference data. (:pr:`421` by :user:`mperrin`)
0.9.2
-----
.. _rel0.9.2:
*2021 Feb 11*
This release includes several updated optical element classes, bug fixes, and improved documentation. This is intended as a maintenance release shortly before v 1.0 which will introduce some backwards-incompatible changes.
**New Functionality:**
* New OpticalElement classes for ScalarOpticalPathDifference, LetterFAperture, and LetterFOpticalPathDifference. (:pr:`386` by :user:`mperrin`)
* Improved `radial_profile` function to allow measurement of partial profiles for sources offset outside the FOV (:pr:`380` by :user:`mperrin`)
* Improved the CompoundAnalyticOptic class to correctly handle OPDS for compound optics with multiple non-overlapping apertures. (:pr:`386` by :user:`mperrin`)
**Other enhancements and fixes:**
* The ShackHartmannWavefrontSensor class was refactored and improved . (:pr:`369` by :user:`fanpeng-kong`). And a unit test case for this class was added (:pr:`376` by :user:`remorgan123` in collaboration with :user:`douglase`)
* Expanded documentation and example code for usage of astropy Units. (:pr:`374`, :pr:`378` by :user:`mperrin`; with thanks to :user:`keflavich` and :user:`mcbeth`)
* Made the HexSegmentedDeformableMirror class consistent with ContinuousDeformableMirror in having an 'include_factor_of_two' parameter, for control in physical surface versus wavefront error units.
* Bug fix for influence functions of rotated hexagonally segmented deformable mirrors. (:pr:`371` by :user:`mperrin`)
* Bug fix for FWHM measurement on integer data type images. (:pr:`368` by :user:`kjbrooks`)
* Bug fix for StatisticalPSDWFE to avoid side effects from changing global numpy random generator state. (:pr:`377` by :user:`ivalaginja`)
* Bug fix for image display in cases using angular coordinates in units other than arc seconds. (:pr:`378` by :user:`mperrin`; with thanks to :user:`mcbeth`)
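The kind of azimuthally averaged profile that `radial_profile` measures (see the enhancement above) can be sketched in a few lines of numpy — a minimal conceptual version on a synthetic image, not poppy's implementation, which also handles binning options, offsets, and standard deviations:

```python
import numpy as np

def simple_radial_profile(image, center):
    """Mean pixel value in 1-pixel-wide radius bins around `center`."""
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    rbin = r.astype(int)                     # bin k covers radii [k, k+1)
    totals = np.bincount(rbin.ravel(), weights=image.ravel())
    counts = np.bincount(rbin.ravel())
    return totals / counts

# Radially symmetric test image: pixel value equals distance from center
yy, xx = np.indices((65, 65))
image = np.hypot(xx - 32, yy - 32)
profile = simple_radial_profile(image, (32, 32))
print(profile[:5])   # increases roughly linearly with bin index
```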
**Software Infrastructure Updates and Internals:**
* The minimum numpy version is now 1.16. (:pr:`356` by :user:`mperrin`)
* The main branches were renamed/relabeled to ’stable’ (rather than ‘master’) and ‘develop’. (:pr:`361`, :pr:`370` by :user:`mperrin`)
* Updates to Travis CI settings. (:pr:`367`, :pr:`395` by :user:`shanosborne`)
* Avoid deprecated modification of matplotlib colormaps (:pr:`379` by :user:`spacegal-spiff`)
* Minor doc string clarification for get_opd (:pr:`381` by :user:`douglase`)
* Remove unused parameter to Detector class (:pr:`385` by :user:`mperrin`)
* Updates to meet STScI INS's JWST Software Standards (:pr:`390` by :user:`shanosborne`)
* Use GitHub's Dependabot to test and update dependencies (:pr:`391` by :user:`shanosborne`)
0.9.1
-----
.. _rel0.9.1:
*2020 June 22*
This is a minor release primarily for updates in packaging infrastructure, plus a handful of small enhancements related to datacubes, segmented apertures, and new functionality for subsampled optics.
**New Functionality:**
* Adds new `Subapertures` class for modeling subsampled optics (i.e. optics that have multiple spatially disjoint output beams). Adds `ShackHartmannWavefrontSensor` class to model that type of sensor. See `this notebook <https://github.com/spacetelescope/poppy/blob/develop/notebooks/Shack%20Hartmann%20Wavefront%20Sensor%20Demo.ipynb>`_ for details and example codes. (:pr:`346` thanks to :user:`remorgan01` and :user:`douglase`)
**Other enhancements and fixes:**
* `calc_datacube` function now allows `nwavelengths>100`, removing a prior limitation of this function. (:pr:`351` by :user:`ojustino`)
* `radial_profile` function can now be applied to datacubes, with a `slice` keyword to specify which slice of the cube should be examined. (:pr:`352` by :user:`mperrin`)
* Improved the Zernike basis expansion function for segmented apertures, `opd_expand_segments`, to allow optional masking out of pixels at the segment borders. This can be useful in some circumstances for avoiding edge effects from partially illuminated pixels or interpolation artifacts when evaluating Zernike or hexike coefficients per segment. (:pr:`353` by :user:`mperrin`)
* Allows `Segmented_PTT_Basis` to pass through keyword arguments to parent class `MultiHexagonAperture`, in particular for selecting/excluding particular segments from the aperture geometry. (:pr:`357` by :user:`kjbrooks`)
* Fix a log string formatting bug encountered in MFT propagation under certain conditions (:pr:`360` by :user:`mperrin`)
**Software Infrastructure Updates and Internals:**
* Removed dependency on the deprecated astropy-helpers package framework. (:pr:`349` by :user:`shanosborne`). Fixes :issue:`355`.
* Switched code coverage CI service to codecov.io. (:pr:`349` by :user:`shanosborne`)
* The minimum Python version is now 3.6. (:pr:`356` by :user:`mperrin`)
0.9.0
-----
.. _rel0.9.0:
*2019 Nov 25*
**New Functionality:**
* **Chaining together multiple propagations calculations:** Multiple `OpticalSystem` instances can now be chained together into a `CompoundOpticalSystem`. This includes mixed
propagations that are partially Fresnel and partially Fraunhofer; Wavefront objects will be cast between types as
needed. (:pr:`290` by :user:`mperrin`)
* **Gray pixel subsampling of apertures:** Implemented "gray pixel" sampling for circular apertures and stops, providing more precise models of aperture edges.
For circular apertures this is done using a fast analytic geometry implementation adapted from open-source IDL code
originally by Marc Buie. (:pr:`325` by :user:`mperrin`, using Python code contributed by :user:`astrofitz`).
For subpixel / gray pixel sampling of other optics in general, a new function `fixed_sampling_optic` takes any
AnalyticOpticalElement and returns an equivalent ArrayOpticalElement with fixed sampling. This is useful for instance
for taking a computationally-slow optic such as MultiHexagonAperture and saving a discretized version for future
faster use. (:pr:`307` by :user:`mperrin`)
* **Modeling tilted optics:** New feature to model geometric projection (cosine scaling) of inclined optics, by setting an `inclination_x` or
`inclination_y` attribute to the tilt angle in degrees. For instance `inclination_x=30` will tilt an optic by 30
degrees around the X axis, and thus compress its apparent size in the Y axis by cosine(30 deg). Note, this
transformation only applies the cosine scaling to the optic's appearance, and does *not* introduce wavefront for
tilt. (:pr:`329` by :user:`mperrin`)
* **Many improvements to the Continuous Deformable Mirror class**:
* Enhance model of DM actuator influence functions for more precise subpixel spacing of DM actuators, rather than
pokes separated by integer pixel spacing. This applies to the 'convolution by influence function' method for
modeling DMs (:pr:`329` by :user:`mperrin`)
* Support distinct radii for the active controllable mirror size and the reflective mirror size (:pr:`293` by :user:`mperrin`)
* ContinuousDeformableMirror now supports `shift_x` and `shift_y` to translate / decenter the DM, consistent with
other optical element classes. (:pr:`307` by :user:`mperrin`)
* ContinuousDeformableMirror now also supports `flip_x` and `flip_y` attributes to flip its orientation along one or
both axes, as well as the new `inclination_x` and `inclination_y` attributes for geometric projection.
* **Improved models of certain kinds of wavefront error:**
* New class `StatisticalPSDWFE` that models random wavefront errors described by a power spectral density, as is
commonly used to specify and measure typical polishing residuals in optics. (:pr:`315` by :user:`ivalaginja`;
:pr:`317` by :user:`mperrin`)
* `FITSOpticalElement` can now support wavelength-independent phase maps defined in radians, for instance for modeling
Pancharatnam-Berry phase as used in certain vector coronagraph masks. (:pr:`306` by :user:`joseph-long`)
* `add_optic` in Fresnel systems can now insert optics at any index into an optical system, rather than just appending
at the end (:pr:`298` by :user:`sdwill`)
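The PSD-based approach behind the new `StatisticalPSDWFE` can be sketched by shaping white noise in the Fourier domain with a power-law envelope (a conceptual illustration with made-up parameters; poppy's class handles units, normalization, and seeding differently):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 128
psd_index = -2.5                    # assumed power-law index of the PSD

# Spatial-frequency magnitude grid; patch the DC bin to avoid dividing by zero
fx = np.fft.fftfreq(n)
f = np.hypot(*np.meshgrid(fx, fx))
f[0, 0] = f[0, 1]

# White noise filtered by sqrt(PSD) ~ f**(psd_index / 2)
noise = rng.standard_normal((n, n))
screen = np.fft.ifft2(np.fft.fft2(noise) * f ** (psd_index / 2)).real
screen -= screen.mean()             # remove piston
```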
**Software Infrastructure Updates and Internals:**
* PR :pr:`290` for CompoundOpticalSystem involved refactoring the Wavefront and FresnelWavefront classes to both be child classes of a new abstract base class BaseWavefront. This change should be transparent for most/all users and requires no changes in calling code.
* PR :pr:`306` for wavelength-independent phase subsequently required refactoring of the optical element display code to correctly handle all cases. As a result the display code internals were clarified and made more consistent. (:pr:`314` and :pr:`321` by :user:`mperrin` with contributions from :user:`ivalaginja` and :user:`shanosborne`). Again this change should be transparent for users.
* Removed deprecated / unused decorator function in WFE classes, making their `get_opd` function API consistent with the rest of poppy. (:pr:`322` by :user:`mperrin`)
* Accommodate some upstream changes in astropy (:pr:`294` by :user:`shanosborne`, :pr:`330` by :user:`mperrin`)
* The `poppy.Instrument._get_optical_system` function, which has heretofore been an internal method (private, starting with
underscore) of the Instrument class, has been promoted to a public part of the API as
`Instrument.get_optical_system()`.
* Note, minimum supported versions of some upstream packages such as numpy and matplotlib have been updated.
**Bug Fixes and Misc Improvements:**
* Correctly assign BUNIT keyword after rescaling OPDs (:issue:`285`, :pr:`286` by :user:`laurenmarietta`).
* New header keywords in output PSF files for `OPD_FILE` and `OPDSLICE` to more cleanly record the information
previously stored together in the `PUPILOPD` keyword (:pr:`316` by :user:`mperrin`)
* Update docs and example notebooks to replace deprecated function names with the current ones (:pr:`288` by :user:`corcoted`).
* Improvements in resampling wavefronts onto Detector instances, particularly in cases where the wavefront is already at the right plane so no propagation is needed. (Part of :pr:`290` by :user:`mperrin`, then further improved in :pr:`304` by :user:`sdwill`)
* Allow passthrough of "normalize" keyword to measure_ee and measure_radius_at_ee functions (:pr:`333` by
:user:`mperrin`; :issue:`332` by :user:`ariedel`)
* Fix `wavefront.as_fits` complex wavefront output option (:pr:`293` by :user:`mperrin`)
* Stricter checking for consistent wavefront type and size parameters when summing wavefronts (:pr:`313` and :pr:`326` by :user:`mperrin`)
* Fix an issue with MultiHexagonAperture in the specific case of 3 rings of hexes (:issue:`303` by :user:`LucasMarquis` and :user:`FredericCassaing`; :pr:`307` by :user:`mperrin`)
* Fix an issue with BaseWavefront class refactor (:pr:`311` by :user:`douglase` and :user:`jlumbres`)
* Fix an issue with indexing in HexSegmentedDeformableMirror when missing the center segment (:issue:`318` by :user:`ivalaginja`; :pr:`320` by :user:`mperrin`)
* Fix title display by OpticalElement.display function (:pr:`299` by :user:`shanosborne`)
* Fix display issue in SemiAnalyticCoronagraph class (:pr:`324` by :user:`mperrin`).
* Small improvements in some display labels (:pr:`307` by :user:`mperrin`)
*Note*, the new functionality for gray pixel representation of circular apertures does not work precisely for elliptical
apertures such as from inclined optics. You may see warnings about this in cases when you use `inclination_y` or
`inclination_x` attributes on a circular aperture. This warning is generally benign; the calculation is still more
accurate than it would be without the subpixel sampling, though not perfectly precise. This known issue will likely be
improved upon in a future release.
0.8.0
-----
.. _rel0.8.0:
*2018 December 15*
.. admonition:: Py2.7 support and deprecated function names removed
As previously announced, support for Python 2 has been removed in this release,
as have the deprecated non-PEP8-compliant function names.
**New Functionality:**
* The `zernike` submodule has gained better support for dealing with wavefront error defined over
segmented apertures. The `Segment_Piston_Basis` and `Segment_PTT_Basis` classes implement basis
functions for piston-only or piston/tip/tilt motions of arbitrary numbers of hexagonal segments.
The `opd_expand_segments` function implements a version of the `opd_expand_orthonormal` algorithm
that has been updated to correctly handle disjoint (non-overlapping support) basis functions defined on
individual segments. (mperrin)
* Add new `KnifeEdge` optic class representing a sharp opaque half-plane, and a `CircularPhaseMask` representing a circular region with constant optical path difference. (#273, @mperrin)
* Fresnel propagation can now automatically resample wavefronts onto the right pixel scales at Detector objects,
same as Fraunhofer propagation. (#242, #264, @mperrin)
* The `display_psf` function now can also handle datacubes produced by `calc_datacube` (#265, @mperrin)
**Documentation:**
* Various documentation improvements and additions, in particular including a new "Available Optics" page showing
visual examples of all the available optical element classes.
**Bug Fixes and Software Infrastructure Updates:**
* Removal of Python 2 compatibility code, Python 2 test cases on Travis, and similar (#239, @mperrin)
* Removal of deprecated non-PEP8 function names (@mperrin)
* Fix for output PSF formatting to better handle variable numbers of extensions (#219, @shanosborne)
* Fix for FITSOpticalElement opd_index parameter for selecting slices in datacubes (@mperrin)
* Fix inconsistent sign of rotations for FITSOpticalElements vs. other optics (#275, @mperrin)
* Cleaned up the logic for auto-choosing input wavefront array sizes (#274, @mperrin)
* Updates to Travis doc build setup (#270, @mperrin, robelgeda)
* Update package organization and documentation theme for consistency with current STScI package template (#267, #268, #278, @robelgeda)
* More comprehensive unit tests for Fresnel propagation. (#191, #251, #264, @mperrin)
* Update astropy-helpers to current version, and install bootstrap script too (@mperrin, @jhunkeler)
* Minor: doc string correction in FresnelWavefront (@sdwill), fix typo in some error messages (#255, @douglase),
update some deprecated logging function calls (@mperrin).
0.7.0
-----
.. _rel0.7.0:
*2018 May 30*
.. admonition:: Python version support: Future releases will require Python 3.
Please note, this is the *final* release to support Python 2.7. All
future releases will require Python 3.5+. See `here <https://python3statement.org>`_ for more information on migrating to Python 3.
.. admonition:: Deprecated function names will go away in next release.
This is also the *final* release to support the older, deprecated
function names with mixed case that are not compatible with the Python PEP8
style guide (e.g. ``calcPSF`` instead of ``calc_psf``, etc). Future versions will
require the use of the newer syntax.
**Performance Improvements:**
* Major addition of GPU-accelerated calculations for FFTs and related operations in many
propagation calculations. GPU support is provided for both CUDA (NVidia GPUs) and OpenCL (AMD
GPUs); the CUDA implementation currently accelerates a slightly wider range of operations.
Obtaining optimal performance, and understanding tradeoffs between numpy, FFTW, and CUDA/OpenCL,
will in general require tests on your particular hardware. As part of this, much of the FFT
infrastructure has been refactored out of the Wavefront classes and into utility functions in
`accel_math.py`. This functionality and the resulting gains in performance are described more in
Douglas & Perrin, Proc. SPIE 2018. (`#239 <https://github.com/spacetelescope/poppy/pull/239>`_,
@douglase), (`#250 <https://github.com/spacetelescope/poppy/pull/250>`_, @mperrin and @douglase).
* Additional performance improvements to other aspects of calculations using the `numexpr` package.
Numexpr is now a *highly recommended* optional installation. It may well become a requirement in
a future release. (`#239 <https://github.com/spacetelescope/poppy/pull/239>`_, `#245
<https://github.com/spacetelescope/poppy/pull/245>`_, @douglase)
* More efficient display of AnalyticOptics, avoiding unnecessary repetition of optics sampling.
(@mperrin)
* Single-precision floating point mode added, for cases that do not require the default double
precision floating point and can benefit from the increased speed. (Experimental / beta; some
intermediate calculations may still be done in double precision, thus reducing speed gains).
**New Functionality:**
* New `PhysicalFresnelWavefront` class that uses physical units for the wavefront (e.g.
volts/meter) and intensity (watts). See `this notebook
<https://github.com/spacetelescope/poppy/blob/stable/notebooks/Physical%20Units%20Demo.ipynb>`_ for
examples and further discussion. (`#248 <https://github.com/spacetelescope/poppy/pull/248>`_, @daphil).
* `calc_psf` gains a new parameter to request returning the complex wavefront (`#234
<https://github.com/spacetelescope/poppy/pull/234>`_,@douglase).
* Improved handling of irregular apertures in WFE basis functions (`zernike_basis`, `hexike_basis`,
etc.) and the `opd_expand`/`opd_expand_nonorthonormal` fitting functions (@mperrin).
* Added new function `measure_radius_at_ee` which finds the radius at which a PSF achieves some
given amount of encircled energy; in some sense an inverse to `measure_ee`. (`#244
<https://github.com/spacetelescope/poppy/pull/244>`_, @shanosborne)
* Much improved algorithm for `measure_fwhm`: the function now works by fitting a Gaussian rather
than interpolating between a radial profile on fixed sampling. This yields much better results on
low-sampled or under-sampled PSFs. (@mperrin)
* Add `ArrayOpticalElement` class, providing a cleaner interface for creating arbitrary optics at
runtime by generating numpy ndarrays on the fly and packing them into an ArrayOpticalElement.
(@mperrin)
* Added new classes for deformable mirrors, including both `ContinuousDeformableMirror` and
`HexSegmentedDeformableMirror` (@mperrin).
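The Gaussian-fitting idea behind the improved `measure_fwhm` can be sketched with `scipy.optimize.curve_fit` on a synthetic radial profile (the data here are exact, so the fit recovers the input; poppy's actual routine differs in detail):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(r, amplitude, sigma):
    return amplitude * np.exp(-r**2 / (2 * sigma**2))

# Coarsely sampled radial profile of a Gaussian PSF core
true_sigma = 1.7                       # pixels, made-up value
r = np.arange(0, 10, 0.5)
profile = gaussian(r, 1.0, true_sigma)

(amp, sigma), _ = curve_fit(gaussian, r, profile, p0=[profile.max(), 2.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma    # FWHM of a Gaussian
print(fwhm)   # ~4.00 pixels for sigma = 1.7
```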
**Bug Fixes and Software Infrastructure Updates:**
* The Instrument class methods and related API were updated to PEP8-compliant names. Old names
remain for back compatibility, but are deprecated and will be removed in the next release.
Related code cleanup for better PEP8 compliance. (@mperrin)
* Substantial update to semi-analytic fast coronagraph propagation to make it more flexible about
optical plane setup. Fixes #169 (`#169 <https://github.com/spacetelescope/poppy/issues/169>`_, @mperrin)
* Fix for integer vs floating point division when padding array sizes in some circumstances (`#235
<https://github.com/spacetelescope/poppy/issues/235>`_, @exowanderer, @mperrin)
* Fix for aperture clipping in `zernike.arbitrary_basis` (`#241
<https://github.com/spacetelescope/poppy/pull/241>`_, @kvangorkom)
* Fix / documentation fix for divergence angle in the Fresnel code (`#237
<https://github.com/spacetelescope/poppy/pull/237>`_, @douglase). Note, the `divergence` function now
returns the *half angle* rather than the *full angle*.
* Fix for `markcentroid` and `imagecrop` parameters conflicting in some cases in `display_psf`
(`#231 <https://github.com/spacetelescope/poppy/pull/231>`_, @mperrin)
* For FITSOpticalElements with both shift and rotation set, apply the rotation first and then the
shift for more intuitive UI (@mperrin)
* Misc minor doc and logging fixes (@mperrin)
* Increment minimal required astropy version to 1.3, and minimal required numpy version to 1.10;
and various related Travis CI setup updates. Also added numexpr test case to Travis. (@mperrin)
* Improved unit test for Fresnel model of Hubble Space Telescope, to reduce memory usage and avoid
CI hangs on Travis.
* Update `astropy-helpers` submodule to current version; necessary for compatibility with recent
Sphinx releases. (@mperrin)
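For reference, the far-field divergence *half angle* of a Gaussian beam — the quantity the `divergence` function now returns, per the fix above — is θ = λ / (π w₀); a quick numeric check with arbitrary values:

```python
import numpy as np

wavelength = 1e-6      # meters
waist_radius = 1e-3    # beam waist w0, meters

half_angle = wavelength / (np.pi * waist_radius)   # radians
print(half_angle)      # ~3.18e-4 rad
```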
.. _rel0.6.1:
0.6.1
-----
*2017 August 11*
* Update ``ah_bootstrap.py`` to avoid an issue where POPPY would not successfully install when pulled in as a dependency by another package (@josephoenix)
.. _rel0.6.0:
0.6.0
-----
*2017 August 10*
* WavefrontError and subclasses now handle tilts and shifts correctly (`#229 <https://github.com/spacetelescope/poppy/issues/229>`_, @mperrin) Thanks @corcoted for reporting!
* Fix the ``test_zernikes_rms`` test case to correctly take the absolute value of the RMS error, support ``outside=`` for ``hexike_basis``, enforce which arguments are required for ``zernike()``. (`#223 <https://github.com/spacetelescope/poppy/issues/223>`_, @mperrin) Thanks to @kvangorkom for reporting!
* Bug fix for stricter Quantity behavior (``UnitTypeError``) in Astropy 2.0 (@mperrin)
* Added an optional parameter "mergemode" to CompoundAnalyticOptic which provides two ways to combine AnalyticOptics: ``mergemode="and"`` is the previous behavior (and new default), ``mergemode="or"`` adds the transmissions of the optics, correcting for any overlap. (`#227 <https://github.com/spacetelescope/poppy/pull/227>`_, @corcoted)
* Add HexagonFieldStop optic (useful for making hexagon image masks for JWST WFSC, among other misc tasks.) (@mperrin)
* Fix behavior where ``zernike.arbitrary_basis`` would sometimes clip apertures (`#222 <https://github.com/spacetelescope/poppy/pull/222>`_, @kvangorkom)
* Fix ``propagate_direct`` in Fresnel wavefront as described in issue `#216 <https://github.com/spacetelescope/poppy/issues/216>`_ (`#218 <https://github.com/mperrin/poppy/pull/218>`_, @maciekgroch)
* ``display_ee()`` was not passing the ``ext=`` argument through to ``radial_profile()``, but now it does. (`#220 <https://github.com/spacetelescope/poppy/pull/220>`_, @josephoenix)
* Fix displaying planes where ``what='amplitude'`` (`#217 <https://github.com/spacetelescope/poppy/pull/217>`_, @maciekgroch)
* Fix handling of FITSOpticalElement big-endian arrays to match recent changes in SciPy (@mperrin) Thanks to @douglase for reporting!
* ``radial_profile`` now handles ``nan`` values in radial standard deviations (`#214 <https://github.com/spacetelescope/poppy/pull/214>`_, @douglase)
* The FITS header keywords that are meaningful to POPPY are now documented in :doc:`fitsheaders` and a new ``PIXUNIT`` keyword encodes "units of the pixels in the header, typically either *arcsecond* or *meter*" (`#205 <https://github.com/spacetelescope/poppy/pull/205>`_, @douglase)
* A typo in the handling of the ``markcentroid`` argument to ``display_psf`` is now fixed (so the argument can be set ``True``) (`#211 <https://github.com/spacetelescope/poppy/pull/211>`_, @josephoenix)
* ``radial_profile`` now accepts an optional ``pa_range=`` argument to specify the [min, max] position angles to be included in the radial profile. (@mperrin)
* Fixes in POPPY to account for the fact that NumPy 1.12+ raises an ``IndexError`` when non-integers are used to index an array (`#203 <https://github.com/spacetelescope/poppy/pull/203>`_, @kmdouglass)
* POPPY demonstration notebooks have been refreshed by @douglase to match output of the current code
.. _rel0.5.1:
0.5.1
-----
*2016 October 28*
* Fix ConfigParser import (see `astropy/package-template#172 <https://github.com/astropy/package-template/pull/172>`_)
* Fixes to formatting of ``astropy.units.Quantity`` values (`#171 <https://github.com/spacetelescope/poppy/issues/171>`_, `#174 <https://github.com/mperrin/poppy/pull/174>`_, `#179 <https://github.com/mperrin/poppy/pull/174>`_; @josephoenix, @neilzim)
* Fixes to ``fftw_save_wisdom`` and ``fftw_load_wisdom`` (`#177 <https://github.com/spacetelescope/poppy/issues/177>`_, `#178 <https://github.com/mperrin/poppy/pull/178>`_; @mmecthley)
* Add ``calc_datacube`` method to ``poppy.Instrument`` (`#182 <https://github.com/spacetelescope/poppy/issues/182>`_; @mperrin)
* Test for Apple Accelerate more narrowly (`#176 <https://github.com/spacetelescope/poppy/issues/176>`_; @mperrin)
* ``Wavefront.display()`` correctly handles ``vmin`` and ``vmax`` args (`#183 <https://github.com/spacetelescope/poppy/pull/183>`_; @neilzim)
* Changes to Travis-CI configuration (`#197 <https://github.com/spacetelescope/poppy/pull/197>`_; @etollerud)
* Warn on requested field-of-view too large for pupil sampling (`#180 <https://github.com/spacetelescope/poppy/issues/180>`_; reported by @mmechtley, addressed by @mperrin)
* Bugfix for ``add_detector`` in ``FresnelOpticalSystem`` (`#193 <https://github.com/spacetelescope/poppy/pull/193>`_; @maciekgroch)
* Fixes to unit handling and short-distance propagation in ``FresnelOpticalSystem`` (`#194 <https://github.com/spacetelescope/poppy/issues/194>`_; @maciekgroch, @douglase, @mperrin)
* PEP8 renaming for ``poppy.fresnel`` for consistency with the rest of POPPY: ``propagateTo`` becomes ``propagate_to``, ``addPupil`` and ``addImage`` become ``add_pupil`` and ``add_image``, ``inputWavefront`` becomes ``input_wavefront``, ``calcPSF`` becomes ``calc_psf`` (@mperrin)
* Fix ``display_psf(..., markcentroid=True)`` (`#175 <https://github.com/spacetelescope/poppy/issues/175>`_, @josephoenix)
.. _rel0.5.0:
0.5.0
-----
*2016 June 10*
Several moderately large enhancements, involving lots of under-the-hood updates to the code. (*While we have tested this code extensively, it is possible that there may be
some lingering bugs. As always, please let us know of any issues encountered via `the github issues page
<https://github.com/spacetelescope/poppy/issues/>`_.*)
* Increased use of ``astropy.units`` to put physical units on quantities, in
particular wavelengths, pixel scales, etc. Instead of wavelengths always being
implicitly in meters, you can now explicitly say e.g. ``wavelength=1*u.micron``,
``wavelength=500*u.nm``, etc. You can also generally use Quantities for
arguments to OpticalElement classes, e.g. ``radius=2*u.cm``. This is *optional*; the
API still accepts bare floating-point numbers which are treated as implicitly in meters.
(`#145 <https://github.com/spacetelescope/poppy/issues/145>`_, `#165 <https://github.com/mperrin/poppy/pull/165>`_; @mperrin, douglase)
* The ``getPhasor`` function for all OpticalElements has been refactored to split it into 3
functions: ``get_transmission`` (for electric field amplitude transmission), ``get_opd``
  (for the optical path difference affecting the phase), and ``get_phasor`` (which combines transmission
and OPD into the complex phasor). This division simplifies and makes more flexible the subclassing
of optics, since in many cases (such as aperture stops) one only cares about setting either the
transmission or the OPD. Again, there are back compatibility hooks to allow existing code calling
the deprecated ``getPhasor`` function to continue working.
(`#162 <https://github.com/spacetelescope/poppy/pull/162>`_; @mperrin, josephoenix)
* Improved capabilities for handling complex coordinate systems:
* Added new `CoordinateInversion` class to represent a change in orientation of axes, for instance the
flipping "upside down" of a pupil image after passage through an intermediate image plane.
* ``OpticalSystem.input_wavefront()`` became smart enough to check for ``CoordinateInversion`` and ``Rotation`` planes,
and, if the user has requested a source offset, adjust the input tilts such that the source will move as requested in
the final focal plane regardless of intervening coordinate transformations.
* ``FITSOpticalElement`` gets new options ``flip_x`` and ``flip_y`` to flip orientations of the
file data.
* Update many function names for `PEP8 style guide compliance <https://www.python.org/dev/peps/pep-0008/>`_.
For instance `calc_psf` replaces `calcPSF`. This was done with back compatible aliases to ensure
that existing code continues to run with no changes required at this time, but *at some
  future point* (but not soon!) the older names will go away, so users are encouraged to migrate to the new names.
(@mperrin, josephoenix)
And some smaller enhancements and fixes:
* New functions for synthesis of OPDs from Zernike coefficients, iterative Zernike expansion on obscured
apertures for which Zernikes aren't orthonormal, 2x faster optimized computation of Zernike basis sets,
and computation of hexike basis sets using the alternate ordering of hexikes used by the JWST Wavefront Analysis System
software.
(@mperrin)
* New function for orthonormal Zernike-like basis on arbitrary aperture
(`#166 <https://github.com/spacetelescope/poppy/issues/166>`_; Arthur Vigan)
* Flip the sign of defocus applied via the ``ThinLens`` class, such that
positive defocus means a converging lens and negative defocus means
diverging. (`#164 <https://github.com/spacetelescope/poppy/issues/164>`_; @mperrin)
* New ``wavefront_display_hint`` optional attribute on OpticalElements in an OpticalSystem allows customization of
whether phase or intensity is displayed for wavefronts at that plane. Applies to ``calc_psf`` calls
with ``display_intermediates=True``. (@mperrin)
* When displaying wavefront phases, mask out and don't show the phase for any region with intensity less than
1/100th of the mean intensity of the wavefront. This is to make the display less visually cluttered with near-meaningless
noise, especially in cases where a Rotation has sprayed numerical interpolation noise outside
  of the true beam. The underlying Wavefront values aren't affected at all; this just pre-filters a copy of
the phase before sending it to matplotlib.imshow. (@mperrin)
* remove deprecated parameters in some function calls
(`#148 <https://github.com/spacetelescope/poppy/issues/148>`_; @mperrin)
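The ``get_transmission`` / ``get_opd`` / ``get_phasor`` split described above can be illustrated with a minimal, self-contained sketch. Note that ``SimpleOptic`` is an invented stand-in for illustration only, not part of the POPPY API (real POPPY optics evaluate these methods against a ``Wavefront`` object):

```python
import cmath
import math


class SimpleOptic:
    """Toy stand-in (not a real POPPY class) for an optical element that
    separates amplitude transmission from optical path difference (OPD)."""

    def __init__(self, transmission, opd):
        self.transmission = transmission  # amplitude transmission, 0..1
        self.opd = opd                    # optical path difference, meters

    def get_transmission(self, wave):
        # Electric-field amplitude transmission
        return self.transmission

    def get_opd(self, wave):
        # Optical path difference affecting the phase
        return self.opd

    def get_phasor(self, wavelength):
        # Combine transmission and OPD into the complex phasor
        phase = 2 * math.pi * self.get_opd(None) / wavelength
        return self.get_transmission(None) * cmath.exp(1j * phase)


aperture = SimpleOptic(transmission=1.0, opd=0.25e-6)  # quarter-wave OPD at 1 micron
print(aperture.get_phasor(1.0e-6))  # approximately 1j: a 90-degree phase delay
```

The point of the split is that a subclass such as an aperture stop only needs to override ``get_transmission``, while a pure phase plate only overrides ``get_opd``; the combination into a phasor comes for free.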
.. _rel0.4.1:
0.4.1
-----
2016 Apr 4:
Mostly minor bug fixes:
* Fix inconsistency between older deprecated ``angle`` parameter to some optic classes versus new ``rotation`` parameter for any AnalyticOpticalElement (`#140 <https://github.com/spacetelescope/poppy/issues/140>`_; @kvangorkom, @josephoenix, @mperrin)
* Update to newer API for ``psutil`` (`#139 <https://github.com/spacetelescope/poppy/issues/139>`_; Anand Sivaramakrishnan, @mperrin)
* "measure_strehl" function moved to ``webbpsf`` instead of ``poppy``. (`#138 <https://github.com/spacetelescope/poppy/issues/138>`_; Kathryn St.Laurent, @josephoenix, @mperrin)
* Add special case to handle zero radius pixel in circular BandLimitedOcculter. (`#137 <https://github.com/spacetelescope/poppy/issues/137>`_; @kvangorkom, @mperrin)
* The output FITS header of an `AnalyticOpticalElement`'s `toFITS()` function is now compatible with the input expected by `FITSOpticalElement`.
* Better saving and reloading of FFTW wisdom.
* Misc minor code cleanup and PEP8 compliance. (`#149 <https://github.com/spacetelescope/poppy/issues/149>`_; @mperrin)
And a few more significant enhancements:
* Added `MatrixFTCoronagraph` subclass for fast optimized propagation of coronagraphs with finite fields of view. This is a
related variant of the approach used in the `SemiAnalyticCoronagraph` class, suited for
coronagraphs with a focal plane field mask limiting their field of view, for instance those
under development for NASA's WFIRST mission. ( `#128 <https://github.com/spacetelescope/poppy/pull/128>`_; `#147 <https://github.com/mperrin/poppy/pull/147>`_; @neilzim)
* The `OpticalSystem` class now has `npix` and `pupil_diameter` parameters, consistent with the `FresnelOpticalSystem`. (`#141 <https://github.com/spacetelescope/poppy/issues/141>`_; @mperrin)
* Added `SineWaveWFE` class to represent a periodic phase ripple.
.. _rel0.4.0:
0.4.0
-----
2015 November 20
* **Major enhancement: the addition of Fresnel propagation** (
`#95 <https://github.com/spacetelescope/poppy/issue/95>`_,
`#100 <https://github.com/spacetelescope/poppy/pull/100>`_,
`#103 <https://github.com/spacetelescope/poppy/issue/103>`_,
`#106 <https://github.com/spacetelescope/poppy/issue/106>`_,
`#107 <https://github.com/spacetelescope/poppy/pull/107>`_,
`#108 <https://github.com/spacetelescope/poppy/pull/108>`_,
`#113 <https://github.com/spacetelescope/poppy/pull/113>`_,
`#114 <https://github.com/spacetelescope/poppy/issue/114>`_,
  `#115 <https://github.com/spacetelescope/poppy/pull/115>`_; @douglase, @mperrin, @josephoenix) *Many thanks to @douglase for the initiative and code contributions that made this happen.*
* Improvements to Zernike aberration models (
`#99 <https://github.com/spacetelescope/poppy/pull/99>`_,
`#110 <https://github.com/spacetelescope/poppy/pull/110>`_,
`#121 <https://github.com/spacetelescope/poppy/pull/121>`_,
`#125 <https://github.com/spacetelescope/poppy/pull/125>`_; @josephoenix)
* Consistent framework for applying arbitrary shifts and rotations to any AnalyticOpticalElement
(`#7 <https://github.com/spacetelescope/poppy/pull/7>`_, @mperrin)
* When reading FITS files, OPD units are now selected based on BUNIT
header keyword instead of always being "microns" by default,
allowing the units of files to be set properly based on the FITS header.
* Added infrastructure for including field-dependent aberrations at an optical
plane after the entrance pupil (
`#105 <https://github.com/spacetelescope/poppy/pull/105>`_, @josephoenix)
* Improved loading and saving of FFTW wisdom (
`#116 <https://github.com/spacetelescope/poppy/issue/116>`_,
`#120 <https://github.com/spacetelescope/poppy/issue/120>`_,
`#122 <https://github.com/spacetelescope/poppy/issue/122>`_,
@josephoenix)
* Allow configurable colormaps and make image origin position consistent
(`#117 <https://github.com/spacetelescope/poppy/pull/117>`_, @josephoenix)
* Wavefront.tilt calls are now recorded in FITS header HISTORY lines
(`#123 <https://github.com/spacetelescope/poppy/pull/123>`_; @josephoenix)
* Various improvements to unit tests and test infrastructure
(`#111 <https://github.com/spacetelescope/poppy/pull/111>`_,
`#124 <https://github.com/spacetelescope/poppy/pull/124>`_,
`#126 <https://github.com/spacetelescope/poppy/pull/126>`_,
`#127 <https://github.com/spacetelescope/poppy/pull/127>`_; @josephoenix, @mperrin)
.. _rel0.3.5:
0.3.5
-----
2015 June 19
* Now compatible with Python 3.4 in addition to 2.7! (`#83 <https://github.com/spacetelescope/poppy/pull/82>`_, @josephoenix)
* Updated version numbers for dependencies (@josephoenix)
* Update to most recent astropy package template (@josephoenix)
* :py:obj:`~poppy.optics.AsymmetricSecondaryObscuration` enhanced to allow secondary mirror supports offset from the center of the optical system. (@mperrin)
* New optic :py:obj:`~poppy.optics.AnnularFieldStop` that defines a circular field stop with an (optional) opaque circular center region (@mperrin)
* display() functions now return Matplotlib.Axes instances to the calling functions.
* :py:obj:`~poppy.optics.FITSOpticalElement` will now determine if you are initializing a pupil plane optic or image plane optic based on the presence of a ``PUPLSCAL`` or ``PIXSCALE`` header keyword in the supplied transmission or OPD files (with the transmission file header taking precedence). (`#97 <https://github.com/spacetelescope/poppy/pull/97>`_, @josephoenix)
* The :py:func:`poppy.zernike.zernike` function now actually returns a NumPy masked array when called with ``mask_array=True``
* poppy.optics.ZernikeAberration and poppy.optics.ParameterizedAberration have been moved to poppy.wfe and renamed :py:obj:`~poppy.wfe.ZernikeWFE` and :py:obj:`~poppy.wfe.ParameterizedWFE`. Also, ZernikeWFE now takes an iterable of Zernike coefficients instead of (n, m, k) tuples.
* Various small documentation updates
* Bug fixes for:
* redundant colorbar display (`#82 <https://github.com/spacetelescope/poppy/pull/82>`_)
* Unnecessary DeprecationWarnings in :py:func:`poppy.utils.imshow_with_mouseover` (`#53 <https://github.com/spacetelescope/poppy/issues/53>`_)
* Error in saving intermediate planes during calculation (`#81 <https://github.com/spacetelescope/poppy/issues/81>`_)
* Multiprocessing causes Python to hang if used with Apple Accelerate (`#23 <https://github.com/spacetelescope/poppy/issues/23>`_, n.b. the fix depends on Python 3.4)
* Copy in-memory FITS HDULists that are passed in to FITSOpticalElement so that in-place modifications don't affect the caller's copy of the data (`#89 <https://github.com/spacetelescope/poppy/issues/89>`_)
* Error in the :py:func:`poppy.utils.measure_EE` function produced values for the edges of the radial bins that were too large, biasing EE values and leading to weird interpolation behavior near r = 0. (`#96 <https://github.com/spacetelescope/poppy/pull/96>`_)
.. _rel0.3.4:
0.3.4
-----
2015 February 17
* Continued improvement in unit testing (@mperrin, @josephoenix)
* Continued improvement in documentation (@josephoenix, @mperrin)
* Functions such as addImage, addPupil now also return a reference to the added optic, for convenience (@josephoenix)
* Multiprocessing code and semi-analytic coronagraph method can now return intermediate wavefront planes (@josephoenix)
* Display methods for radial profile and encircled energy gain a normalization keyword (@douglase)
* matrixDFT: refactor into unified function for all centering types (@josephoenix)
* matrixDFT bug fix for axes parity flip versus FFT transforms (Anand Sivaramakrishnan, @josephoenix, @mperrin)
* Bug fix: Instrument class can now pass through dict or tuple sources to OpticalSystem calc_psf (@mperrin)
* Bug fix: InverseTransmission class shape property works now. (@mperrin)
* Refactor instrument validateConfig method and calling path (@josephoenix)
* Code cleanup and rebalancing where lines had been blurred between poppy and webbpsf (@josephoenix, @mperrin)
* Misc packaging infrastructure improvements (@embray)
* Updated to Astropy package helpers 0.4.4
* Set up integration with Travis CI for continuous testing. See https://travis-ci.org/mperrin/poppy
.. _rel0.3.3:
0.3.3
-----
2014 Nov
:ref:`Bigger team!<about_team>`. This release log now includes github usernames of contributors:
* New classes for wavefront aberrations parameterized by Zernike polynomials (@josephoenix, @mperrin)
* ThinLens class now reworked to require explicitly setting an outer radius over which the wavefront is normalized. *Note this is an API change for this class, and will require minor changes in code using this class*. ThinLens is now a subclass of CircularAperture.
* Implement resizing of phasors to allow use of FITSOpticalElements with Wavefronts that have different spatial sampling. (@douglase)
* Installation improvements and streamlining (@josephoenix, @cslocum)
* Code cleanup and formatting (@josephoenix)
* Improvements in unit testing (@mperrin, @josephoenix, @douglase)
* Added normalize='exit_pupil' option; added documentation for normalization options. (@mperrin)
* Bug fix for "FQPM on an obscured aperture" example. Thanks to Github user qisaiman for the bug report. (@mperrin)
* Bug fix to compound optic display (@mperrin)
* Documentation improvements (team)
.. _rel0.3.2:
0.3.2
-----
Released 2014 Sept 8
* Bug fix: Correct pupil orientation for inverse transformed pupils using PyFFTW so that it is consistent with the result using numpy FFT.
.. _rel0.3.1:
0.3.1
-----
Released August 14 2014
* Astropy compatibility updated to 0.4.
* Configuration system reworked to accommodate the astropy.configuration transition.
* Package infrastructure updated to most recent `astropy package-template <https://github.com/astropy/package-template/>`_.
* Several OpticalElements got renamed, for instance ``IdealCircularOcculter`` became just ``CircularOcculter``. (*All* the optics in ``poppy`` are
fairly idealized and it seemed inconsistent to signpost that for only some of them. The explicit 'Ideal' nametag is kept only for the FQPM to emphasize that one
in particular uses a very simplified prescription and neglects refractive index variation vs wavelength.)
* Substantially improved unit test system.
* Some new utility functions added in poppy.misc for calculating analytic PSFs such as Airy functions for comparison (and use in the test system).
* Internal code reorganization, mostly which should not affect end users directly.
* Packaging improvements and installation process streamlining, courtesy of Christine Slocum and Erik Bray
* Documentation improvements, in particular adding an IPython notebook tutorial.
.. _rel0.3.0:
0.3.0
-----
Released April 7, 2014
* Dependencies updated to use astropy.
* Added documentation and examples for POPPY, separate from the WebbPSF documentation.
* Improved configuration settings system, using astropy.config framework.
* The astropy.config framework itself is in flux from astropy 0.3 to 0.4; some of the related functionality
in poppy may need to change in the future.
* Added support for rectangular subarray calculations. You can invoke these by setting fov_pixels or fov_arcsec with a 2-element iterable::
    >>> nc = webbpsf.NIRCam()
    >>> nc.calc_psf('F212N', fov_arcsec=[3,6])
    >>> nc.calc_psf('F187N', fov_pixels=(300,100))
Those two elements give the desired field size as (Y,X) following the usual Python axis order convention.
* Added support for pyFFTW in addition to PyFFTW3.
* pyFFTW will auto save wisdom to disk for more rapid execution on subsequent invocations
* InverseTransmission of an AnalyticElement is now allowed inside a CompoundAnalyticOptic
* Added SecondaryObscuration optic to conveniently model an opaque secondary mirror and adjustable support spiders.
* Added RectangleAperture. Added rotation keywords for RectangleAperture and SquareAperture.
* Added AnalyticOpticalElement.sample() function to sample analytic functions onto a user defined grid. Refactored
the display() and toFITS() functions. Improved functionality of display for CompoundAnalyticOptics.
.. _rel0.2.8:
0.2.8
-----
* First release as a standalone package (previously was integrated as part of webbpsf). See the release notes for WebbPSF for prior versions.
* switched package building to use `setuptools` instead of `distutils`/`stsci_distutils_hack`
* new `Instrument` class in poppy provides much of the functionality previously in JWInstrument, to make it
easier to model generic non-JWST instruments using this code.
| 73.057692 | 519 | 0.761536 |
98ed3bdd5fe8afcec543c65411c9edbb81d95224 | 6,962 | rst | reStructuredText | docs/developing-locally-with-docker.rst | serpent-tracker/serpent_tracker | f9258baee1d1fa9269b59a9b291153cad3040101 | ["MIT"] | null | null | null | docs/developing-locally-with-docker.rst | serpent-tracker/serpent_tracker | f9258baee1d1fa9269b59a9b291153cad3040101 | ["MIT"] | null | null | null | docs/developing-locally-with-docker.rst | serpent-tracker/serpent_tracker | f9258baee1d1fa9269b59a9b291153cad3040101 | ["MIT"] | null | null | null |
Getting Up and Running Locally With Docker
==========================================
.. index:: Docker
The steps below will get you up and running with a local development environment.
All of these commands assume you are in the root of your generated project.
.. note::
If you're new to Docker, please be aware that some resources are cached system-wide
and might reappear if you generate a project multiple times with the same name (e.g.
:ref:`this issue with Postgres <docker-postgres-auth-failed>`).
Prerequisites
-------------
* Docker; if you don't have it yet, follow the `installation instructions`_;
* Docker Compose; refer to the official documentation for the `installation guide`_.
.. _`installation instructions`: https://docs.docker.com/install/#supported-platforms
.. _`installation guide`: https://docs.docker.com/compose/install/
Build the Stack
---------------
This can take a while, especially the first time you run this particular command on your development system::
$ docker-compose -f local.yml build
Generally, if you want to emulate production environment use ``production.yml`` instead. And this is true for any other actions you might need to perform: whenever a switch is required, just do it!
Run the Stack
-------------
This brings up both Django and PostgreSQL. The first time it is run it might take a while to get started, but subsequent runs will occur quickly.
Open a terminal at the project root and run the following for local development::
$ docker-compose -f local.yml up
You can also set the environment variable ``COMPOSE_FILE`` pointing to ``local.yml`` like this::
$ export COMPOSE_FILE=local.yml
And then run::
$ docker-compose up
To run in a detached (background) mode, just::
$ docker-compose up -d
Execute Management Commands
---------------------------
As with any shell command that we wish to run in our container, this is done using the ``docker-compose -f local.yml run --rm`` command: ::
$ docker-compose -f local.yml run --rm django python manage.py migrate
$ docker-compose -f local.yml run --rm django python manage.py createsuperuser
Here, ``django`` is the target service we are executing the commands against.
(Optionally) Designate your Docker Development Server IP
--------------------------------------------------------
When ``DEBUG`` is set to ``True``, the host is validated against ``['localhost', '127.0.0.1', '[::1]']``. This is adequate when running a ``virtualenv``. For Docker, in the ``config.settings.local``, add your host development server IP to ``INTERNAL_IPS`` or ``ALLOWED_HOSTS`` if the variable exists.
.. _envs:
Configuring the Environment
---------------------------
This is the excerpt from your project's ``local.yml``: ::
# ...
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
volumes:
- local_postgres_data:/var/lib/postgresql/data
- local_postgres_data_backups:/backups
env_file:
- ./.envs/.local/.postgres
# ...
The most important thing for us here now is the ``env_file`` section listing ``./.envs/.local/.postgres``. Generally, the stack's behavior is governed by a number of environment variables (`env(s)`, for short) residing in ``envs/``, for instance, this is what we generate for you: ::
.envs
├── .local
│ ├── .django
│ └── .postgres
└── .production
├── .django
└── .postgres
By convention, for any service ``sI`` in environment ``e`` (you know ``someenv`` is an environment when there is a ``someenv.yml`` file in the project root), given ``sI`` requires configuration, a ``.envs/.e/.sI`` `service configuration` file exists.
Consider the aforementioned ``.envs/.local/.postgres``: ::
# PostgreSQL
# ------------------------------------------------------------------------------
POSTGRES_HOST=postgres
POSTGRES_DB=<your project slug>
POSTGRES_USER=XgOWtQtJecsAbaIyslwGvFvPawftNaqO
POSTGRES_PASSWORD=jSljDz4whHuwO3aJIgVBrqEml5Ycbghorep4uVJ4xjDYQu0LfuTZdctj7y0YcCLu
The three envs we are presented with here are ``POSTGRES_DB``, ``POSTGRES_USER``, and ``POSTGRES_PASSWORD`` (by the way, their values have also been generated for you). You might have figured out already where these definitions will end up; it's all the same with ``django`` service container envs.
One final touch: should you ever need to merge ``.envs/production/*`` into a single ``.env``, run the ``merge_production_dotenvs_in_dotenv.py``: ::
$ python merge_production_dotenvs_in_dotenv.py
The ``.env`` file will then be created, with all your production envs residing beside each other.
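Conceptually, such a merge just concatenates the per-service env files into one. The following is an illustrative stdlib sketch of that idea, not the actual contents of ``merge_production_dotenvs_in_dotenv.py`` (the shipped script may differ in details):

```python
from pathlib import Path


def merge_dotenvs(env_files, output=".env"):
    """Concatenate several dotenv files into a single .env file.

    Illustrative sketch only -- the real merge script shipped with the
    project may handle ordering, blank lines, and errors differently.
    """
    chunks = []
    for env_file in env_files:
        # Each service config file is plain KEY=value lines
        chunks.append(Path(env_file).read_text().strip())
    Path(output).write_text("\n".join(chunks) + "\n")


# Hypothetical usage, following the naming convention shown above:
# merge_dotenvs([".envs/.production/.django", ".envs/.production/.postgres"])
```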
Tips & Tricks
-------------
Activate a Docker Machine
~~~~~~~~~~~~~~~~~~~~~~~~~
This tells our computer that all future commands are specifically for the dev1 machine. Using the ``eval`` command we can switch machines as needed::
$ eval "$(docker-machine env dev1)"
Debugging
~~~~~~~~~
ipdb
"""""
If you are using the following within your code to debug: ::
import ipdb; ipdb.set_trace()
Then you may need to run the following for it to work as desired: ::
$ docker-compose -f local.yml run --rm --service-ports django
django-debug-toolbar
""""""""""""""""""""
In order for ``django-debug-toolbar`` to work, designate your Docker Machine IP with ``INTERNAL_IPS`` in ``local.py``.
Mailhog
~~~~~~~
When developing locally you can go with MailHog_ for email testing provided ``use_mailhog`` was set to ``y`` on setup. To proceed,
#. make sure ``mailhog`` container is up and running;
#. open up ``http://127.0.0.1:8025``.
.. _Mailhog: https://github.com/mailhog/MailHog/
.. _`CeleryTasks`:
Celery tasks in local development
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When not using Docker, Celery tasks are set to run in Eager mode, so that a full stack is not needed. When using Docker, the task
scheduler will be used by default.
If you need tasks to be executed on the main thread during development, set ``CELERY_TASK_ALWAYS_EAGER = True`` in ``config/settings/local.py``.
Possible uses could be for testing, or ease of profiling with DJDT.
.. _`CeleryFlower`:
Celery Flower
~~~~~~~~~~~~~
`Flower`_ is a "real-time monitor and web admin for Celery distributed task queue".
Prerequisites:
* ``use_docker`` was set to ``y`` on project initialization;
* ``use_celery`` was set to ``y`` on project initialization.
By default, it's enabled both in local and production environments (``local.yml`` and ``production.yml`` Docker Compose configs, respectively) through a ``flower`` service. For added security, ``flower`` requires its clients to provide authentication credentials specified as the corresponding environments' ``.envs/.local/.django`` and ``.envs/.production/.django`` ``CELERY_FLOWER_USER`` and ``CELERY_FLOWER_PASSWORD`` environment variables. Check out ``localhost:5555`` and see for yourself.
.. _`Flower`: https://github.com/mher/flower
| 35.886598 | 494 | 0.688883 |
f127c53fbca539eb3eb27344cda17743395d5762 | 3,046 | rst | reStructuredText | doc/source/index.rst | lingyunfeng/PyDDA | 37228ccb70bd9fcc7f1b69dd06237d4e1ba73c6b | ["BSD-3-Clause"] | null | null | null | doc/source/index.rst | lingyunfeng/PyDDA | 37228ccb70bd9fcc7f1b69dd06237d4e1ba73c6b | ["BSD-3-Clause"] | null | null | null | doc/source/index.rst | lingyunfeng/PyDDA | 37228ccb70bd9fcc7f1b69dd06237d4e1ba73c6b | ["BSD-3-Clause"] | 1 | 2019-11-10T03:48:28.000Z | 2019-11-10T03:48:28.000Z |
.. PyDDA documentation master file, created by
sphinx-quickstart on Tue May 15 12:19:06 2018.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to the PyDDA documentation!
===================================
.. image:: logo.png
:width: 400px
:align: center
:height: 200px
:alt: alternate text
This is the main page for the documentation of PyDDA. Below are links that
provide documentation on the installation and use of PyDDA as well as
description of each of PyDDA's subroutines.
=========================
System Requirements
=========================
This works on any modern version of Linux, Mac OS X, and Windows. For Windows,
HRRR data integration is not supported. In addition, since PyDDA takes advantage
of parallelism, we recommend:
::
An Intel machine with at least 4 cores
8 GB RAM
1 GB hard drive space
::
While PyDDA will work on less than this, you may run into performance issues.
In addition, we do not support Python versions less than 3.6. If you have an older version installed, PyDDA may work just fine but we will not provide support for any issues unless you are using at least Python 3.6.
=========================
Installation instructions
=========================
The GitHub repository for PyDDA is available at:
`<https://github.com/openradar/PyDDA>`_
Before you install PyDDA, ensure that the following dependencies are installed:
::
Python 3.5+
Py-ART 1.9.0+
scipy 1.0.1+
numpy 1.13.1+
matplotlib 1.5.3+
cartopy 0.16.0+
::
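If you want to check the minimum versions listed above programmatically, a small helper along these lines can compare plain release strings. This is a simplified sketch (it does not understand pre-release suffixes such as ``1.9.0rc1``, and is not part of PyDDA itself):

```python
def version_tuple(version):
    """Turn a simple release string like '1.0.1' into (1, 0, 1).

    Sketch only: pre-release suffixes such as '1.9.0rc1' are not handled.
    """
    return tuple(int(part) for part in version.split("."))


def meets_minimum(installed, required):
    # Tuple comparison gives the usual ordering for plain release numbers
    return version_tuple(installed) >= version_tuple(required)


# Minimums taken from the dependency list above
MINIMUMS = {"scipy": "1.0.1", "numpy": "1.13.1", "matplotlib": "1.5.3", "cartopy": "0.16.0"}
print(meets_minimum("1.2.0", MINIMUMS["scipy"]))  # True
```

For real projects, a packaging library's version parser is more robust than this string splitting.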
In order to use the HRRR data constraint, cfgrib needs to be installed. `cfgrib
<http://github.com/ecmwf/cfgrib>`_ currently only works on Mac OS and Linux, so
this is an optional dependency of PyDDA so that Windows users can still use PyDDA.
In order to install cfgrib, simply do:
.. _cfgrib: https://github.com/ecmwf/cfgrib
::
pip install cfgrib
::
There are multiple ways to install PyDDA. The best way to install PyDDA is
through the use of the `Anaconda <http://anaconda.org>`_ package manager. If you
have anaconda installed simply type:
::
conda install -c conda-forge pydda
::
This will install pydda and all of the required dependencies. You still need to
install `cfgrib <http://github.com/ecmwf/cfgrib>`_ if you wish to read HRRR data.
Another recommended option is to use pip to install PyDDA. Running this command
will install PyDDA and the required dependencies:
::
pip install pydda
::
Another way to do this, which is recommended if you wish to contribute to PyDDA,
is to install PyDDA from source. To do this, just type in the following
commands, assuming you have the above dependencies installed.
::
git clone https://github.com/openradar/PyDDA
cd PyDDA
python setup.py install
::
Contents:
.. toctree::
:maxdepth: 3
contributors_guide/index
source/auto_examples/index
dev_reference/index
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 27.441441 | 215 | 0.704202 |
61c51b2b29363a6dc9bdd9ec9f676a5334c3b74f | 826 | rst | reStructuredText | DEVELOPERS.rst | MoisesRG/zabbix-template-for-vha-agent | d98d0028f0d3c4f8e15456e6695ae1b5f268df10 | ["BSD-2-Clause"] | null | null | null | DEVELOPERS.rst | MoisesRG/zabbix-template-for-vha-agent | d98d0028f0d3c4f8e15456e6695ae1b5f268df10 | ["BSD-2-Clause"] | null | null | null | DEVELOPERS.rst | MoisesRG/zabbix-template-for-vha-agent | d98d0028f0d3c4f8e15456e6695ae1b5f268df10 | ["BSD-2-Clause"] | null | null | null |
General tips
============
- A Varnish Plus license is required to setup the development environment. The Vagrantfile of the project assumes the following environment variables are defined:
- ``VARNISH_PLUS_USER``, including the username of your Varnish Plus license.
  - ``VARNISH_PLUS_PASSWORD``, including the password of your Varnish Plus license.
- Helper script: ``/vagrant/extras/envs/dev/start-services.sh``.
- Templates:
1. ``http://192.168.100.174/zabbix``
- Default username/password is ``admin``/``zabbix``.
2. In 'Configuration > Templates' click on 'Import' and select ``template-app-vha-agent.xml``.
3. In 'Configuration > Hosts' click on 'Create host':
- Host name: ``dev``
- Group: ``Varnish Cache servers``
- Linked templates: ``Template App VHA Agent``
| 41.3 | 162 | 0.682809 |
56f9f063b6e2eca6d380aad735b456dde6cc4ce6 | 1,764 | rst | reStructuredText | docs/colour.characterisation.rst | BPearlstine/colour | 40f0281295496774d2a19eee017d50fd0c265bd8 | ["Cube", "BSD-3-Clause"] | 2 | 2020-05-03T20:15:42.000Z | 2021-04-09T18:19:06.000Z | docs/colour.characterisation.rst | BPearlstine/colour | 40f0281295496774d2a19eee017d50fd0c265bd8 | ["Cube", "BSD-3-Clause"] | null | null | null | docs/colour.characterisation.rst | BPearlstine/colour | 40f0281295496774d2a19eee017d50fd0c265bd8 | ["Cube", "BSD-3-Clause"] | 1 | 2019-12-11T19:48:27.000Z | 2019-12-11T19:48:27.000Z |
Colour Characterisation
=======================
.. contents::
    :local:
Colour Fitting
--------------
``colour``
.. currentmodule:: colour
.. autosummary::
:toctree: generated/
POLYNOMIAL_EXPANSION_METHODS
polynomial_expansion
COLOUR_CORRECTION_MATRIX_METHODS
colour_correction_matrix
COLOUR_CORRECTION_METHODS
colour_correction
**Ancillary Objects**
``colour.characterisation``
.. currentmodule:: colour.characterisation
.. autosummary::
:toctree: generated/
augmented_matrix_Cheung2004
polynomial_expansion_Finlayson2015
polynomial_expansion_Vandermonde
colour_correction_matrix_Cheung2004
colour_correction_matrix_Finlayson2015
colour_correction_matrix_Vandermonde
colour_correction_Cheung2004
colour_correction_Finlayson2015
colour_correction_Vandermonde
Colour Rendition Charts
-----------------------
**Dataset**
``colour``
.. currentmodule:: colour
.. autosummary::
:toctree: generated/
COLOURCHECKERS
COLOURCHECKERS_SDS
**Ancillary Objects**
``colour.characterisation``
.. currentmodule:: colour.characterisation
.. autosummary::
:toctree: generated/
ColourChecker
Cameras
-------
``colour.characterisation``
.. currentmodule:: colour.characterisation
.. autosummary::
:toctree: generated/
RGB_SpectralSensitivities
**Dataset**
``colour``
.. currentmodule:: colour
.. autosummary::
:toctree: generated/
CAMERAS_RGB_SPECTRAL_SENSITIVITIES
Displays
--------
``colour.characterisation``
.. currentmodule:: colour.characterisation
.. autosummary::
:toctree: generated/
RGB_DisplayPrimaries
**Dataset**
``colour``
.. currentmodule:: colour
.. autosummary::
:toctree: generated/
DISPLAYS_RGB_PRIMARIES
.. docs/source/bestpractices.rst from zhenghh04/vol-cache (BSD-3-Clause)

Best Practices
===================================
The HDF5 Cache I/O VOL connector (Cache VOL) allows applications to hide part of the I/O time behind computation. Here we describe how applications can take full advantage of it.
-----------------------
Write workloads
-----------------------
1) MPI ``MPI_THREAD_MULTIPLE`` support should be enabled for optimal performance;
2) There should be enough compute work after the H5Dwrite calls to overlap with the data migration from the fast storage layer to the parallel file system;
3) The compute work should be inserted between H5Dwrite and H5Dclose. For iterative checkpointing workloads, one can postpone the dataset close and group close calls until after the next iteration of compute;
4) If multiple H5Dwrite calls are issued consecutively, one should pause the asynchronous execution first and restart it after all the H5Dwrite calls have been issued;
5) For checkpointing workloads, it is better to open and close the file only once, to avoid unnecessary overhead from setting up and removing file caches.
An application may issue the following HDF5 operations to write checkpoint data:
.. code-block:: c

    // Synchronous file create at the beginning
    fid = H5Fcreate(...);
    for (int iter = 0; iter < niter; iter++) {
        // compute work
        ...
        // Synchronous group create
        gid = H5Gcreate(fid, ...);
        // Synchronous dataset creates
        did1 = H5Dcreate(gid, ...);
        did2 = H5Dcreate(gid, ...);
        // Synchronous dataset writes
        status = H5Dwrite(did1, ...);
        status = H5Dwrite(did2, ...);
        // close the datasets
        err = H5Dclose(did1);
        err = H5Dclose(did2);
        // close the group
        err = H5Gclose(gid);
    }
    H5Fclose(fid);
    // Continue to computation
which can be converted to use the async VOL as follows:
.. code-block:: c

    // Synchronous file create at the beginning
    fid = H5Fcreate(...);
    for (int iter = 0; iter < niter; iter++) {
        // compute work
        ...
        // close the datasets & group after the next round of compute
        if (iter > 0) {
            H5Dclose(did1);
            H5Dclose(did2);
            H5Gclose(gid);
        }
        // Synchronous group create
        gid = H5Gcreate(fid, ...);
        // Synchronous dataset creates
        did1 = H5Dcreate(gid, ...);
        did2 = H5Dcreate(gid, ...);
        // Pause data migration before issuing the H5Dwrite calls
        H5Fcache_async_op_pause(fid);
        status = H5Dwrite(did1, ...);
        status = H5Dwrite(did2, ...);
        // Restart the data migration
        H5Fcache_async_op_start(fid);
        // close the datasets and group in the last iteration
        if (iter == niter - 1) {
            err = H5Dclose(did1);
            err = H5Dclose(did2);
            err = H5Gclose(gid);
        }
    }
    H5Fclose(fid);
-------------------
Read workloads
-------------------
Currently, Cache VOL works best for workloads that read the same data repeatedly.
1) The dataset can be a one- or multi-dimensional array. However, for multi-dimensional arrays, each read must select complete samples, i.e., the hyperslab selection must be of the shape [i:j, :, :, ..., :]. The sample list does not have to be contiguous.
2) If the dataset is relatively small, one can call H5Dprefetch to prefetch the entire dataset to the fast storage. H5Dprefetch executes asynchronously; H5Dread will then wait until the asynchronous prefetch is done.
3) If the dataset is large, one can just call H5Dread as usual, and the library will cache the data to the fast storage layer on the fly.
4) During the whole read phase, one should avoid opening and closing the dataset multiple times. For h5py workloads, one should avoid referencing datasets multiple times.
.. SourceDocs/html/program_listing_file_Source_Azura_RenderSystem_Src_Vulkan_VkRenderSystem.cpp.rst from vasumahesh1/azura (MIT)
.. _program_listing_file_Source_Azura_RenderSystem_Src_Vulkan_VkRenderSystem.cpp:
Program Listing for File VkRenderSystem.cpp
===========================================
|exhale_lsh| :ref:`Return to documentation for file <file_Source_Azura_RenderSystem_Src_Vulkan_VkRenderSystem.cpp>` (``Source\Azura\RenderSystem\Src\Vulkan\VkRenderSystem.cpp``)
.. |exhale_lsh| unicode:: U+021B0 .. UPWARDS ARROW WITH TIP LEFTWARDS
.. code-block:: cpp
#include "Generic/RenderSystem.h"
#include "Generic/Window.h"
#include "Vulkan/VkRenderer.h"
#include "Vulkan/VkShader.h"
#include "Vulkan/VkTextureManager.h"
namespace Azura {
std::unique_ptr<Renderer> RenderSystem::CreateRenderer(const ApplicationInfo& appInfo,
const DeviceRequirements& deviceRequirements,
const ApplicationRequirements& appRequirements,
const SwapChainRequirements& swapChainRequirement,
const RenderPassRequirements& renderPassRequirements,
const DescriptorRequirements& descriptorRequirements,
const ShaderRequirements& shaderRequirements,
Memory::Allocator& mainAllocator,
Memory::Allocator& drawAllocator,
Window& window) {
return std::make_unique<Vulkan::VkRenderer>(appInfo, deviceRequirements, appRequirements, swapChainRequirement,
renderPassRequirements, descriptorRequirements, shaderRequirements, mainAllocator,
drawAllocator, window);
}
std::unique_ptr<TextureManager> RenderSystem::CreateTextureManager(const TextureRequirements& textureRequirements) {
return std::make_unique<Vulkan::VkTextureManager>(textureRequirements);
}
} // namespace Azura