==============
1.0 Changelog
==============
.. changelog_imports::
.. include:: changelog_09.rst
:start-line: 5
.. include:: changelog_08.rst
:start-line: 5
.. include:: changelog_07.rst
:start-line: 5
.. changelog::
:version: 1.0.0
Version 1.0.0 is the first release of the 1.0 series. Many changes
described here are also present in the 0.9 and sometimes the 0.8
series as well. For changes that are specific to 1.0 with an emphasis
on compatibility concerns, see :doc:`/changelog/migration_10`.
.. change::
:tags: bug, sql
:tickets: 3260
Fixed bug in :meth:`.Table.tometadata` method where the
:class:`.CheckConstraint` associated with a :class:`.Boolean`
or :class:`.Enum` type object would be doubled in the target table.
The copy process now tracks the production of this constraint object
as local to a type object.
.. change::
:tags: feature, orm
:tickets: 3217
Added a parameter :paramref:`.Query.join.isouter` which is synonymous
with calling :meth:`.Query.outerjoin`; this flag is to provide a more
consistent interface compared to Core :meth:`.FromClause.join`.
Pull request courtesy Jonathan Vanasco.
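As a rough sketch of the new flag (written in the modern 1.4+/2.0 calling style, with made-up ``User``/``Address`` mappings), both spellings below render the same LEFT OUTER JOIN:

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("user_account.id"))

session = Session()

# isouter=True is synonymous with calling Query.outerjoin()
q = session.query(User).join(Address, isouter=True)
print(q)
```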
.. change::
:tags: bug, sql
:tickets: 3243
The behavioral contract of the :attr:`.ForeignKeyConstraint.columns`
collection has been made consistent; this attribute is now a
:class:`.ColumnCollection` like that of all other constraints and
is initialized at the point when the constraint is associated with
a :class:`.Table`.
.. seealso::
:ref:`change_3243`
.. change::
:tags: bug, orm
:tickets: 3256
The :meth:`.PropComparator.of_type` modifier has been
improved in conjunction with loader directives such as
:func:`.joinedload` and :func:`.contains_eager` such that if
two :meth:`.PropComparator.of_type` modifiers of the same
base type/path are encountered, they will be joined together
into a single "polymorphic" entity, rather than replacing
the entity of type A with the one of type B. E.g.
a joinedload of ``A.b.of_type(BSub1)->BSub1.c`` combined with
joinedload of ``A.b.of_type(BSub2)->BSub2.c`` will create a
single joinedload of ``A.b.of_type((BSub1, BSub2)) -> BSub1.c, BSub2.c``,
without the need for the ``with_polymorphic`` to be explicit
in the query.
.. seealso::
:ref:`eagerloading_polymorphic_subtypes` - contains an updated
example illustrating the new format.
.. change::
:tags: bug, sql
:tickets: 3245
The :attr:`.Column.key` attribute is now used as the source of
anonymous bound parameter names within expressions, to match the
existing use of this value as the key when rendered in an INSERT
or UPDATE statement. This allows :attr:`.Column.key` to be used
as a "substitute" string to work around a difficult column name
that doesn't translate well into a bound parameter name. Note that
the paramstyle is configurable on :func:`.create_engine` in any case,
and most DBAPIs today support a named and positional style.
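A hedged sketch of what this enables, using a hypothetical table whose column name would make an awkward parameter name; the anonymous bound parameter takes its name from the key:

```python
from sqlalchemy import Column, Integer, MetaData, Table

# "user select" would be an awkward bound-parameter name;
# key="sel" acts as the substitute string
t = Table(
    "widgets",
    MetaData(),
    Column("user select", Integer, key="sel"),
)

criterion = t.c.sel == 5
compiled = criterion.compile()
print(compiled.params)  # the anonymous parameter name is derived from the key
```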
.. change::
:tags: bug, sql
:pullreq: github:146
Fixed the name of the :paramref:`.PoolEvents.reset.dbapi_connection`
parameter as passed to this event; in particular this affects
usage of the "named" argument style for this event. Pull request
courtesy Jason Goldberger.
.. change::
:tags: feature, sql
:pullreq: github:139
Added a new parameter :paramref:`.Table.tometadata.name` to
the :meth:`.Table.tometadata` method. Similar to
:paramref:`.Table.tometadata.schema`, this argument causes the newly
copied :class:`.Table` to take on the new name instead of
the existing one. An interesting capability this adds is that of
copying a :class:`.Table` object to the *same* :class:`.MetaData`
target with a new name. Pull request courtesy n.d. parker.
.. change::
:tags: bug, orm
:pullreq: github:137
Repaired support of the ``copy.deepcopy()`` call when used by the
:class:`.orm.util.CascadeOptions` argument, which occurs
if ``copy.deepcopy()`` is being used with :func:`.relationship`
(not an officially supported use case). Pull request courtesy
duesenfranz.
.. change::
:tags: bug, sql
:tickets: 3170
Reversing a change that was made in 0.9, the "singleton" nature
of the "constants" :func:`.null`, :func:`.true`, and :func:`.false`
has been reverted. These functions returning a "singleton" object
had the effect that different instances would be treated as the
same regardless of lexical use, which in particular would impact
the rendering of the columns clause of a SELECT statement.
.. seealso::
:ref:`bug_3170`
.. change::
:tags: bug, orm
:tickets: 3139
Fixed bug where :meth:`.Session.expunge` would not fully detach
the given object if the object had been subject to a delete
operation that was flushed, but not committed. This would also
affect related operations like :func:`.make_transient`.
.. seealso::
:ref:`bug_3139`
.. change::
:tags: bug, orm
:tickets: 3230
A warning is emitted in the case of multiple relationships that
ultimately will populate a foreign key column in conflict with
another, where the relationships are attempting to copy values
from different source columns. This occurs in the case where
composite foreign keys with overlapping columns are mapped to
relationships that each refer to a different referenced column.
A new documentation section illustrates the example as well as how
to overcome the issue by specifying "foreign" columns specifically
on a per-relationship basis.
.. seealso::
:ref:`relationship_overlapping_foreignkeys`
.. change::
:tags: feature, sql
:tickets: 3172
Exception messages have been spiffed up a bit. The SQL statement
and parameters are not displayed if None, reducing confusion for
error messages that weren't related to a statement. The full
module and classname for the DBAPI-level exception is displayed,
making it clear that this is a wrapped DBAPI exception. The
statement and parameters themselves are bounded within bracketed
sections to better isolate them from the error message and from
each other.
.. change::
:tags: bug, orm
:tickets: 3228
The :meth:`.Query.update` method will now convert string key
names in the given dictionary of values into mapped attribute names
against the mapped class being updated. Previously, string names
were taken in directly and passed to the core update statement without
any means to resolve against the mapped entity. Synonyms
and hybrid attributes as the subject attributes of
:meth:`.Query.update` are also supported.
.. seealso::
:ref:`bug_3228`
.. change::
:tags: bug, orm
:tickets: 3035
Improvements to the mechanism used by :class:`.Session` to locate
"binds" (e.g. engines to use); such engines can now be
mixin classes, concrete subclasses, as well as a wider variety
of table metadata such as joined inheritance tables.
.. seealso::
:ref:`bug_3035`
.. change::
:tags: bug, general
:tickets: 3218
The ``__module__`` attribute is now set for all those SQL and
ORM functions that are derived as "public factory" symbols, which
should assist with documentation tools being able to report on the
target module.
.. change::
:tags: feature, sql
:meth:`.Insert.from_select` now includes Python and SQL-expression
defaults if otherwise unspecified; the limitation where non-
server column defaults aren't included in an INSERT FROM
SELECT is now lifted and these expressions are rendered as
constants into the SELECT statement.
.. seealso::
:ref:`feature_insert_from_select_defaults`
.. change::
:tags: bug, orm
:tickets: 3233
Fixed bug in single table inheritance where a chain of joins
that included the same single inh entity more than once
(normally this should raise an error) could, in some cases
depending on what was being joined "from", implicitly alias the
second case of the single inh entity, producing
a query that "worked". But as this implicit aliasing is not
intended in the case of single table inheritance, it didn't
really "work" fully and was very misleading, since it wouldn't
always appear.
.. seealso::
:ref:`bug_3233`
.. change::
:tags: bug, orm
:tickets: 3222
The ON clause rendered when using :meth:`.Query.join`,
:meth:`.Query.outerjoin`, or the standalone :func:`.orm.join` /
:func:`.orm.outerjoin` functions to a single-inheritance subclass will
now include the "single table criteria" in the ON clause even
if the ON clause is otherwise hand-rolled; it is now added to the
criteria using AND, the same way as if joining to a single-table
target using relationship or similar.
This is sort of in between a feature and a bug.
.. seealso::
:ref:`migration_3222`
.. change::
:tags: feature, sql
:tickets: 3184
:pullreq: bitbucket:30
The :class:`.UniqueConstraint` construct is now included when
reflecting a :class:`.Table` object, for databases where this
is applicable. In order to achieve this
with sufficient accuracy, MySQL and Postgresql now contain features
that correct for the duplication of indexes and unique constraints
when reflecting tables, indexes, and constraints.
In the case of MySQL, there is not actually a "unique constraint"
concept independent of a "unique index", so for this backend
:class:`.UniqueConstraint` continues to remain non-present for a
reflected :class:`.Table`. For Postgresql, the query used to
detect indexes against ``pg_index`` has been improved to check for
the same construct in ``pg_constraint``, and the implicitly
constructed unique index is not included with a
reflected :class:`.Table`.
In both cases, the :meth:`.Inspector.get_indexes` and the
:meth:`.Inspector.get_unique_constraints` methods return both
constructs individually, but include a new token
``duplicates_constraint`` in the case of Postgresql or
``duplicates_index`` in the case
of MySQL to indicate when this condition is detected.
Pull request courtesy Johannes Erdfelt.
.. seealso::
:ref:`feature_3184`
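A sketch of the inspector interface involved, shown here against an in-memory SQLite database with made-up table and constraint names (the ``duplicates_constraint`` / ``duplicates_index`` tokens described above appear only on Postgresql and MySQL):

```python
from sqlalchemy import (Column, Integer, MetaData, Table,
                        UniqueConstraint, create_engine, inspect)

engine = create_engine("sqlite://")
metadata = MetaData()
Table(
    "widget",
    metadata,
    Column("a", Integer),
    Column("b", Integer),
    UniqueConstraint("a", "b", name="uq_ab"),
)
metadata.create_all(engine)

# the Inspector reports unique constraints individually
insp = inspect(engine)
print(insp.get_unique_constraints("widget"))
```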
.. change::
:tags: feature, postgresql
:pullreq: github:134
Added support for the FILTER keyword as applied to aggregate
functions, supported by Postgresql 9.4. Pull request
courtesy Ilja Everilä.
.. seealso::
:ref:`feature_gh134`
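A minimal sketch of the construct (column name hypothetical), compiled against the Postgresql dialect:

```python
from sqlalchemy import column, func, select
from sqlalchemy.dialects import postgresql

x = column("x")

# FILTER applies a WHERE condition to the aggregate itself
stmt = select(func.count(x).filter(x > 0))
print(stmt.compile(dialect=postgresql.dialect()))
```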
.. change::
:tags: bug, sql, engine
:tickets: 3215
Fixed bug where a "branched" connection, that is the kind you get
when you call :meth:`.Connection.connect`, would not share invalidation
status with the parent. The architecture of branching has been tweaked
a bit so that the branched connection defers to the parent for
all invalidation status and operations.
.. change::
:tags: bug, sql, engine
:tickets: 3190
Fixed bug where a "branched" connection, that is the kind you get
when you call :meth:`.Connection.connect`, would not share transaction
status with the parent. The architecture of branching has been tweaked
a bit so that the branched connection defers to the parent for
all transactional status and operations.
.. change::
:tags: bug, declarative
:tickets: 2670
A relationship set up with :class:`.declared_attr` on
a :class:`.AbstractConcreteBase` base class will now be configured
on the abstract base mapping automatically, in addition to being
set up on descendant concrete classes as usual.
.. seealso::
:ref:`feature_3150`
.. change::
:tags: feature, declarative
:tickets: 3150
The :class:`.declared_attr` construct has newly improved
behaviors and features in conjunction with declarative. The
decorated function will now have access to the final column
copies present on the local mixin when invoked, and will also
be invoked exactly once for each mapped class, the returned result
being memoized. A new modifier :attr:`.declared_attr.cascading`
is added as well.
.. seealso::
:ref:`feature_3150`
.. change::
:tags: feature, ext
:tickets: 3210
The :mod:`sqlalchemy.ext.automap` extension will now set
``cascade="all, delete-orphan"`` automatically on a one-to-many
relationship/backref where the foreign key is detected as containing
one or more non-nullable columns. This argument is present in the
keywords passed to :func:`.automap.generate_relationship` in this
case and can still be overridden. Additionally, if the
:class:`.ForeignKeyConstraint` specifies ``ondelete="CASCADE"``
for a non-nullable or ``ondelete="SET NULL"`` for a nullable set
of columns, the argument ``passive_deletes=True`` is also added to the
relationship. Note that not all backends support reflection of
ondelete, but backends that do include Postgresql and MySQL.
.. change::
:tags: feature, sql
:tickets: 3206
Added new method :meth:`.Select.with_statement_hint` and ORM
method :meth:`.Query.with_statement_hint` to support statement-level
hints that are not specific to a table.
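A sketch of the new method with a made-up hint string; the hint text itself is dialect-specific and purely illustrative here:

```python
from sqlalchemy import column, select, table

t = table("mytable", column("x"))

# a statement-level hint, not tied to any particular table
stmt = select(t.c.x).with_statement_hint("/*+ hypothetical_hint */")
print(stmt)
```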
.. change::
:tags: bug, sqlite
:tickets: 3203
:pullreq: bitbucket:31
SQLite now supports reflection of unique constraints from
temp tables; previously, this would fail with a TypeError.
Pull request courtesy Johannes Erdfelt.
.. seealso::
:ref:`change_3204` - changes regarding SQLite temporary
table and view reflection.
.. change::
:tags: bug, sqlite
:tickets: 3204
Added :meth:`.Inspector.get_temp_table_names` and
:meth:`.Inspector.get_temp_view_names`; currently, only the
SQLite and Oracle dialects support these methods. The return of
temporary table and view names has been **removed** from SQLite and
Oracle's version of :meth:`.Inspector.get_table_names` and
:meth:`.Inspector.get_view_names`; other database backends cannot
support this information (such as MySQL), and the scope of operation
is different in that the tables can be local to a session and
typically aren't supported in remote schemas.
.. seealso::
:ref:`change_3204`
.. change::
:tags: feature, postgresql
:tickets: 2891
:pullreq: github:128
Support has been added for reflection of materialized views
and foreign tables, as well as support for materialized views
within :meth:`.Inspector.get_view_names`, and a new method
:meth:`.PGInspector.get_foreign_table_names` available on the
Postgresql version of :class:`.Inspector`. Pull request courtesy
Rodrigo Menezes.
.. seealso::
:ref:`feature_2891`
.. change::
:tags: feature, orm
Added new event handlers :meth:`.AttributeEvents.init_collection`
and :meth:`.AttributeEvents.dispose_collection`, which track when
a collection is first associated with an instance and when it is
replaced. These handlers supersede the :meth:`.collection.linker`
annotation. The old hook remains supported through an event adapter.
.. change::
:tags: bug, orm
:tickets: 3148, 3188
A major rework to the behavior of expression labels, most
specifically when used with ColumnProperty constructs with
custom SQL expressions and in conjunction with the "order by
labels" logic first introduced in 0.9. Fixes include that an
``order_by(Entity.some_col_prop)`` will now make use of "order by
label" rules even if Entity has been subject to aliasing,
either via inheritance rendering or via the use of the
``aliased()`` construct; rendering of the same column property
multiple times with aliasing (e.g. ``query(Entity.some_prop,
entity_alias.some_prop)``) will label each occurrence of the
entity with a distinct label, and additionally "order by
label" rules will work for both (e.g.
``order_by(Entity.some_prop, entity_alias.some_prop)``).
Additional issues that could prevent the "order by label"
logic from working in 0.9, most notably that the state of a
Label could change such that "order by label" would stop
working depending on how things were called, has been fixed.
.. seealso::
:ref:`bug_3188`
.. change::
:tags: bug, mysql
:tickets: 3186
MySQL boolean symbols "true", "false" work again. 0.9's change
in :ticket:`2682` disallowed the MySQL dialect from making use of the
"true" and "false" symbols in the context of "IS" / "IS NOT", but
MySQL supports this syntax even though it has no boolean type.
MySQL remains "non native boolean", but the :func:`.true`
and :func:`.false` symbols again produce the
keywords "true" and "false", so that an expression like
``column.is_(true())`` again works on MySQL.
.. seealso::
:ref:`bug_3186`
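A sketch of the restored behavior (column name hypothetical), compiling against the MySQL dialect:

```python
from sqlalchemy import column, true
from sqlalchemy.dialects import mysql

# on MySQL, true() again renders the keyword "true" in an IS comparison
expr = column("x").is_(true())
print(expr.compile(dialect=mysql.dialect()))
```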
.. change::
:tags: changed, mssql
:tickets: 3182
The hostname-based connection format for SQL Server when using
pyodbc will no longer specify a default "driver name", and a warning
is emitted if this is missing. The optimal driver name for SQL Server
changes frequently and is per-platform, so hostname based connections
need to specify this. DSN-based connections are preferred.
.. seealso::
:ref:`change_3182`
.. change::
:tags: changed, sql
The :func:`~.expression.column` and :func:`~.expression.table`
constructs are now importable from the "from sqlalchemy" namespace,
just like every other Core construct.
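For illustration, a minimal ad-hoc statement built entirely from the top-level namespace (table and column names hypothetical):

```python
from sqlalchemy import column, select, table

# lightweight table/column constructs, no MetaData needed
accounts = table("accounts", column("id"), column("name"))
stmt = select(accounts.c.name).where(accounts.c.id == 7)
print(stmt)
```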
.. change::
:tags: changed, sql
:tickets: 2992
The implicit conversion of strings to :func:`.text` constructs
when passed to most builder methods of :func:`.select` as
well as :class:`.Query` now emits a warning with just the
plain string sent. The textual conversion still proceeds normally,
however. The only methods that accept a string without a warning
are the "label reference" methods like order_by() and group_by();
these functions will now at compile time attempt to resolve a single
string argument to a column or label expression present in the
selectable; if none is located, the expression still renders, but
you get the warning again. The rationale here is that the implicit
conversion from string to text is more unexpected than not these days,
and it is better that the user send more direction to the Core / ORM
when passing a raw string as to what direction should be taken.
Core/ORM tutorials have been updated to go more in depth as to how text
is handled.
.. seealso::
:ref:`migration_2992`
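A sketch contrasting the two cases with made-up column names: an explicit text() construct for WHERE criteria, and a plain-string "label reference" in order_by():

```python
from sqlalchemy import column, select, text

score = column("score")

# a raw string here would emit a warning; text() makes the intent explicit
stmt = select(score).where(text("score > 10"))

# strings passed to order_by() are resolved as label references
stmt2 = select(score.label("s")).order_by("s")

print(stmt)
print(stmt2)
```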
.. change::
:tags: feature, engine
:tickets: 3178
A new style of warning can be emitted which will "filter" up to
N occurrences of a parameterized string. This allows parameterized
warnings that can refer to their arguments to be delivered a fixed
number of times until allowing Python warning filters to squelch them,
and prevents memory from growing unbounded within Python's
warning registries.
.. seealso::
:ref:`feature_3178`
.. change::
:tags: feature, orm
The :class:`.Query` will raise an exception when :meth:`.Query.yield_per`
is used with mappings or options where either
subquery eager loading, or joined eager loading with collections,
would take place. These loading strategies are
not currently compatible with yield_per, so by raising this error,
the method is safer to use. Eager loads can be disabled with
the ``lazyload('*')`` option or :meth:`.Query.enable_eagerloads`.
.. seealso::
:ref:`migration_yield_per_eager_loading`
.. change::
:tags: bug, orm
:tickets: 3177
Changed the approach by which the "single inheritance criterion"
is applied, when using :meth:`.Query.from_self`, or its common
user :meth:`.Query.count`. The criteria to limit rows to those
with a certain type is now indicated on the inside subquery,
not the outside one, so that even if the "type" column is not
available in the columns clause, we can filter on it on the "inner"
query.
.. seealso::
:ref:`migration_3177`
.. change::
:tags: changed, orm
The ``proc()`` callable passed to the ``create_row_processor()``
method of custom :class:`.Bundle` classes now accepts only a single
"row" argument.
.. seealso::
:ref:`bundle_api_change`
.. change::
:tags: changed, orm
Deprecated event hooks removed: ``populate_instance``,
``create_instance``, ``translate_row``, ``append_result``
.. seealso::
:ref:`migration_deprecated_orm_events`
.. change::
:tags: bug, orm
:tickets: 3145
Made a small adjustment to the mechanics of lazy loading,
such that it has less chance of interfering with a joinedload() in the
very rare circumstance that an object points to itself; in this
scenario, the object refers to itself while loading its attributes
which can cause a mixup between loaders. The use case of
"object points to itself" is not fully supported, but the fix also
removes some overhead so for now is part of testing.
.. change::
:tags: feature, orm
:tickets: 3176
A new implementation for :class:`.KeyedTuple` used by the
:class:`.Query` object offers dramatic speed improvements when
fetching large numbers of column-oriented rows.
.. seealso::
:ref:`feature_3176`
.. change::
:tags: feature, orm
:tickets: 3008
The behavior of :paramref:`.joinedload.innerjoin` as well as
:paramref:`.relationship.innerjoin` is now to use "nested"
inner joins, that is, right-nested, as the default behavior when an
inner join joined eager load is chained to an outer join eager load.
.. seealso::
:ref:`migration_3008`
.. change::
:tags: bug, orm
:tickets: 3171
The "resurrect" ORM event has been removed. This event hook had
no purpose since the old "mutable attribute" system was removed
in 0.8.
.. change::
:tags: bug, sql
:tickets: 3169
Using :meth:`.Insert.from_select` now implies ``inline=True``
on :func:`.insert`. This helps to fix a bug where an
INSERT...FROM SELECT construct would inadvertently be compiled
as "implicit returning" on supporting backends, which would
cause breakage in the case of an INSERT that inserts zero rows
(as implicit returning expects a row), as well as arbitrary
return data in the case of an INSERT that inserts multiple
rows (e.g. only the first row of many).
A similar change is also applied to an INSERT..VALUES
with multiple parameter sets; implicit RETURNING will no longer emit
for this statement either. As both of these constructs deal
with variable numbers of rows, the
:attr:`.ResultProxy.inserted_primary_key` accessor does not
apply. Previously, there was a documentation note that one
may prefer ``inline=True`` with INSERT..FROM SELECT as some databases
don't support returning and therefore can't do "implicit" returning,
but there's no reason an INSERT...FROM SELECT needs implicit returning
in any case. Regular explicit :meth:`.Insert.returning` should
be used to return variable numbers of result rows if inserted
data is needed.
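A minimal sketch of INSERT...FROM SELECT with hypothetical tables; with ``inline`` implied, no RETURNING clause is generated:

```python
from sqlalchemy import Column, Integer, MetaData, Table, select

metadata = MetaData()
src = Table("src", metadata, Column("x", Integer))
dst = Table("dst", metadata, Column("x", Integer))

# the SELECT supplies the rows; implicit RETURNING does not apply here
stmt = dst.insert().from_select(["x"], select(src.c.x))
print(stmt)
```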
.. change::
:tags: bug, orm
:tickets: 3167
Fixed bug where attribute "set" events or columns with
``@validates`` would have events triggered within the flush process,
when those columns were the targets of a "fetch and populate"
operation, such as an autoincremented primary key, a Python side
default, or a server-side default "eagerly" fetched via RETURNING.
.. change::
:tags: feature, oracle
Added support for the Oracle table option ON COMMIT.
.. change::
:tags: feature, postgresql
:tickets: 2051
Added support for PG table options TABLESPACE, ON COMMIT,
WITH(OUT) OIDS, and INHERITS, when rendering DDL via
the :class:`.Table` construct. Pull request courtesy
malikdiarra.
.. seealso::
:ref:`postgresql_table_options`
.. change::
:tags: bug, orm, py3k
The :class:`.IdentityMap` exposed from :class:`.Session.identity`
now returns lists for ``items()`` and ``values()`` in Py3K.
Early porting to Py3K here had these returning iterators, when
they technically should be "iterable views"; for now, lists are OK.
.. change::
:tags: orm, feature
UPDATE statements can now be batched within an ORM flush
into a more performant executemany() call, similarly to how INSERT
statements can be batched; this will be invoked within flush
to the degree that subsequent UPDATE statements for the
same mapping and table involve the identical columns within the
VALUES clause, that no SET-level SQL expressions
are embedded, and that the versioning requirements for the mapping
are compatible with the backend dialect's ability to return
a correct rowcount for an executemany operation.
.. change::
:tags: engine, bug
:tickets: 3163
Removing (or adding) an event listener at the same time that the event
is being run itself, either from inside the listener or from a
concurrent thread, now raises a RuntimeError, as the collection used is
now an instance of ``collections.deque()`` and does not support changes
while being iterated. Previously, a plain Python list was used where
removal from inside the event itself would produce silent failures.
.. change::
:tags: orm, feature
:tickets: 2963
The ``info`` parameter has been added to the constructor for
:class:`.SynonymProperty` and :class:`.ComparableProperty`.
.. change::
:tags: sql, feature
:tickets: 2963
The ``info`` parameter has been added as a constructor argument
to all schema constructs including :class:`.MetaData`,
:class:`.Index`, :class:`.ForeignKey`, :class:`.ForeignKeyConstraint`,
:class:`.UniqueConstraint`, :class:`.PrimaryKeyConstraint`,
:class:`.CheckConstraint`.
.. change::
:tags: orm, feature
:tickets: 2971
The :meth:`.InspectionAttr.info` collection is now moved down to
:class:`.InspectionAttr`, where in addition to being available
on all :class:`.MapperProperty` objects, it is also now available
on hybrid properties, association proxies, when accessed via
:attr:`.Mapper.all_orm_descriptors`.
.. change::
:tags: sql, feature
:tickets: 3027
:pullreq: bitbucket:29
The :paramref:`.Table.autoload_with` flag now implies that
:paramref:`.Table.autoload` should be ``True``. Pull request
courtesy Malik Diarra.
.. change::
:tags: postgresql, feature
:pullreq: github:126
Added new method :meth:`.PGInspector.get_enums`; when using the
inspector for Postgresql, this will provide a list of ENUM types.
Pull request courtesy Ilya Pekelny.
.. change::
:tags: mysql, bug
The MySQL dialect will now disable :meth:`.ConnectionEvents.handle_error`
events from firing for those statements which it uses internally
to detect if a table exists or not. This is achieved using an
execution option ``skip_user_error_events`` that disables the handle
error event for the scope of that execution. In this way, user code
that rewrites exceptions doesn't need to worry about the MySQL
dialect or other dialects that occasionally need to catch
SQLAlchemy specific exceptions.
.. change::
:tags: mysql, bug
:tickets: 2515
Changed the default value of "raise_on_warnings" to False for
MySQLconnector. This was set at True for some reason. The "buffered"
flag unfortunately must stay at True as MySQLconnector does not allow
a cursor to be closed unless all results are fully fetched.
.. change::
:tags: bug, orm
:tickets: 3117
The "evaluator" for query.update()/delete() won't work with multi-table
updates, and needs to be set to ``synchronize_session=False`` or
``synchronize_session='fetch'``; this now raises an exception, with a
message to change the synchronize setting.
This is upgraded from a warning emitted as of 0.9.7.
.. change::
:tags: removed
The Drizzle dialect has been removed from the Core; it is now
available as `sqlalchemy-drizzle <https://bitbucket.org/zzzeek/sqlalchemy-drizzle>`_,
an independent, third party dialect. The dialect is still based
almost entirely on the MySQL dialect present in SQLAlchemy.
.. seealso::
:ref:`change_2984`
.. change::
:tags: enhancement, orm
:tickets: 3061
Adjustment to attribute mechanics concerning when a value is
implicitly initialized to None via first access; this action,
which has always resulted in a population of the attribute,
no longer does so; the None value is returned but the underlying
attribute receives no set event. This is consistent with how collections
work and allows attribute mechanics to behave more consistently;
in particular, getting an attribute with no value does not squash
the event that should proceed if the value is actually set to None.
.. seealso::
:ref:`migration_3061`
.. change::
:tags: feature, sql
:tickets: 3034
The :meth:`.Select.limit` and :meth:`.Select.offset` methods
now accept any SQL expression, in addition to integer values, as
arguments. Typically this is used to allow a bound parameter to be
passed, which can be substituted with a value later thus allowing
Python-side caching of the SQL query. The implementation
here is fully backwards compatible with existing third party dialects,
however those dialects which implement special LIMIT/OFFSET systems
will need modification in order to take advantage of the new
capabilities. Limit and offset also support "literal_binds" mode,
where bound parameters are rendered inline as strings based on
a compile-time option.
Work on this feature is courtesy of Dobes Vandermeer.
.. seealso::
:ref:`feature_3034`.
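A sketch of expression-based LIMIT/OFFSET using bound parameters (parameter names made up), which allows the compiled string to be cached while the values vary:

```python
from sqlalchemy import bindparam, column, select, table

items = table("items", column("id"))

# LIMIT and OFFSET now accept arbitrary SQL expressions such as bindparam()
stmt = select(items.c.id).limit(bindparam("n")).offset(bindparam("skip"))
print(stmt)
```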
#########################
Reading and parsing files
#########################
Examples
--------
The following example shows how to read and parse AIS messages from a file::
from pyais.stream import FileReaderStream
filename = "sample.ais"
for msg in FileReaderStream(filename):
decoded = msg.decode()
print(decoded)
Please note that, by default, the following lines are ignored:

* invalid lines
* lines starting with a ``#``
=============================
About this translation
=============================
Pubblicato per la prima volta, in inglese, nel 2017, come "DigComp 2.1:
The Digital Competence Framework for Citizens with eight proficiency
levels and examples of use"
(`http://europa.eu/!Yg77Dh <http://europa.eu/!Yg77Dh>`__) a cura
dell’European Commission's Joint Research Centre.
Questa traduzione è responsabilità dell’Agenzia per l’Italia Digitale
(AgID). La Commissione Europea non è responsabile per questa traduzione
e non può essere ritenuta responsabile di alcuna conseguenza derivante
dal riutilizzo del documento. Il copyright per la presente traduzione è
di proprietà dell’Agenzia per l’Italia Digitale (AgID) che la rilascia
in uso con licenza CC0.
| 44.470588 | 71 | 0.744709 |
2f719a2952d791a7a81c7c3616875ab7b593ffbf | 7,972 | rst | reStructuredText | docs/source/agent-framework/integrating-simulations/Simulation-Integration.rst | cloudcomputingabc/volttron | 6495e26e3185a7af8d0d79ad2586bdf8ea83992d | [
"Apache-2.0",
"BSD-2-Clause"
] | 406 | 2015-01-20T03:08:53.000Z | 2022-03-31T20:59:07.000Z | docs/source/agent-framework/integrating-simulations/Simulation-Integration.rst | cloudcomputingabc/volttron | 6495e26e3185a7af8d0d79ad2586bdf8ea83992d | [
"Apache-2.0",
"BSD-2-Clause"
] | 2,031 | 2015-01-05T21:35:45.000Z | 2022-03-29T21:44:36.000Z | docs/source/agent-framework/integrating-simulations/Simulation-Integration.rst | cloudcomputingabc/volttron | 6495e26e3185a7af8d0d79ad2586bdf8ea83992d | [
"Apache-2.0",
"BSD-2-Clause"
.. _Simulation-Integration:
=====================================
Integrating With Simulation Platforms
=====================================
An agent wanting to integrate with a simulation platform has to create an instance of a concrete simulation
integration class (``HELICSSimIntegration``). This is best described with an example agent that interfaces with the
HELICS co-simulation platform. For more information about HELICS, please refer to
https://helics.readthedocs.io/en/latest/installation/linux.html.
.. code-block:: python

    class HelicsExample(Agent):
        """
        HelicsExampleAgent demonstrates how a VOLTTRON agent can interact with the HELICS simulation environment
        """
        def __init__(self, config, **kwargs):
            super(HelicsExample, self).__init__(enable_store=False, **kwargs)
            self.config = config
            self.helics_sim = HELICSSimIntegration(config, self.vip.pubsub)

.. _Register-Simulation:
Register With Simulation Platform
=================================
The agent has to first load the configuration file containing parameters such as the connection address, simulation
duration, input and output topics, etc., and register with the simulation platform. The concrete simulation object will
then register the agent with the simulation platform (in this case, HELICS) using the appropriate APIs. The registration
steps include connecting to the simulation platform, passing the input and output topics to the simulation, etc. In
addition, the agent has to provide a callback method so that the concrete simulation object can pass the messages
received from the simulation to the agent. The best place to call the `register_inputs` API is within the `onstart`
method of the agent.
.. code-block:: python

    @Core.receiver("onstart")
    def onstart(self, sender, **kwargs):
        """
        Register config parameters with HELICS.
        Start HELICS simulation.
        """
        # Register inputs with HELICS and provide callback method to receive messages from simulation
        try:
            self.helics_sim.register_inputs(self.config, self.do_work)
        except ValueError as ex:
            _log.error("Unable to register inputs with HELICS: {}".format(ex))
            self.core.stop()
            return

Start the Simulation Platform
=============================
After registering with the simulation platform, the agent can start the simulation.
.. code-block:: python

    # Start the HELICS simulation
    try:
        self.helics_sim.start_simulation()
    except ValueError as ex:
        _log.error("Unable to start the HELICS simulation: {}".format(ex))
        self.core.stop()
        return

Receive outputs from the simulation
===================================
The concrete simulation object spawns a continuous loop that waits for any incoming messages (subscription messages)
from the simulation platform. On receiving a message, it passes the message to the callback method registered by the
agent during the :ref:`register with simulation step <Register-Simulation>`. The agent can now choose to work on the
incoming message based on its use case. The agent can also choose to publish some message back to the simulation at
this point, as shown in the example below. This is entirely optional and depends on the agent's use case. At the end of
the callback method, the agent needs to make a time request to the simulation so that it can advance forward in the
simulation. Please note, this is a necessary step for HELICS co-simulation integration, as the HELICS broker waits for
time requests from all its federates before advancing the simulation. If no time request is made, the broker blocks
the simulation.
.. code-block:: python

    def do_work(self):
        """
        Perform application specific work here using HELICS messages
        :return:
        """
        current_values = self.helics_sim.current_values
        _log.debug("Doing work: {}".format(self.core.identity))
        _log.debug("Current set of values from HELICS: {}".format(current_values))
        # Do something with HELICS messages
        # agent specific work!!!

        for pub in self.publications:
            key = pub['sim_topic']
            # Check if VOLTTRON topic has been configured. If no, publish dummy value for the HELICS
            # publication key
            volttron_topic = pub.get('volttron_topic', None)
            if volttron_topic is None:
                value = 90.5
                global_flag = pub.get('global', False)
                # If global flag is False, prepend federate name to the key
                if not global_flag:
                    key = "{fed}/{key}".format(fed=self._federate_name, key=key)
                    value = 67.90
                self.helics_sim.publish_to_simulation(key, value)

        self.helics_sim.make_time_request()

Publish to the simulation
=========================
The agent can publish messages to the simulation using the `publish_to_simulation` API. The following code snippet
iterates over all the publication keys (topics) and uses the `publish_to_simulation` API to publish a dummy value of
``67.90`` for every publication key.
.. code-block:: python

    for pub in self.publications:
        key = pub['sim_topic']
        value = 67.90
        self.helics_sim.publish_to_simulation(key, value)

Advance the simulation
======================
With some simulation platforms, such as HELICS, the federate can make an explicit time request to advance in time by a
certain number of time steps. There will be a global time keeper (in this case the HELICS broker) which is responsible
for maintaining time within the simulation. In the time request mode, each federate has to request time advancement
after it has completed its work. The global time keeper grants the lowest time among all time requests. All the
federates receive the granted time and advance forward in simulation time together in a synchronized manner. Please
note, the granted time may not be the same as the time requested by the agent.
Typically, the best place to make the time request is in the callback method provided to the simulation integration
object.
.. code-block:: python

    self.helics_sim.make_time_request()

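To see why the granted time can differ from the requested time, the broker's behaviour can be modelled in a few lines of plain Python. This sketch is independent of HELICS and VOLTTRON; ``ToyBroker`` is an illustrative name, not a real API:

```python
class ToyBroker:
    """Minimal model of a global time keeper: it collects one time request
    per federate and grants everyone the lowest requested time."""

    def __init__(self, federates):
        self.requests = {name: None for name in federates}

    def request_time(self, federate, requested_time):
        self.requests[federate] = requested_time
        # Conceptually, each federate blocks until every federate has asked
        if all(t is not None for t in self.requests.values()):
            granted = min(self.requests.values())
            self.requests = {name: None for name in self.requests}
            return granted
        return None  # still waiting on other federates


broker = ToyBroker(["fed_a", "fed_b"])
print(broker.request_time("fed_a", 5.0))  # None: fed_b has not asked yet
print(broker.request_time("fed_b", 2.0))  # 2.0: the lowest request wins
```

Here ``fed_a`` asked for time 5.0 but everyone is granted 2.0, which is exactly the "granted time may not be the same as the requested time" behaviour described above. The real broker also delivers the grant back to all blocked federates, which this toy omits.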
Pause the simulation
====================
Some simulation platforms, such as GridAPPS-D, have the capability to pause the simulation. The agent can make use of
this functionality by calling the appropriate wrapper API exposed by the concrete simulation class. In the case of
HELICS, we do not have the capability to pause or resume the simulation, so calling the `pause_simulation` API will
result in no operation.
.. code-block:: python

    self.helics_sim.pause_simulation()

Resume the simulation
=====================
If the simulation platform provides the pause functionality, then it will also provide the capability to resume
the simulation. The agent can call the `resume_simulation` API to resume the simulation. In the case of HELICS, we do
not have the capability to pause or resume the simulation, so calling the `resume_simulation` API will result in no
operation.
.. code-block:: python

    self.helics_sim.resume_simulation()

Stop the simulation
===================
The agent can stop the simulation at any point in time. In the case of the `HELICSSimIntegration` object, this will
disconnect the federate from the HELICS core and close the library. Generally, it is good practice to call the
`stop_simulation` API within the `onstop` method of the agent. In this way, the agent stops the simulation before
exiting the process.
.. code-block:: python

    @Core.receiver("onstop")
    def onstop(self, sender, **kwargs):
        """
        This method is called when the Agent is about to shutdown, but before it
        disconnects from the message bus.
        """
        self.helics_sim.stop_simulation()
.. currentmodule:: altair
.. _user-guide-aggregate-transform:
Aggregate Transforms
~~~~~~~~~~~~~~~~~~~~
There are two ways to aggregate data within Altair: within the encoding itself,
or using a top level aggregate transform.
The aggregate property of a field definition can be used to compute aggregate
summary statistics (e.g., median, min, max) over groups of data.

If at least one of the fields in the specified encoding channels contains an
aggregate, the resulting visualization will show aggregate data. In this case,
all fields without a specified aggregation function are treated as group-by
fields in the aggregation process.
For example, the following bar chart aggregates the mean of ``Acceleration``,
grouped by the number of ``Cylinders``.
.. altair-plot::

    import altair as alt
    from vega_datasets import data

    cars = data.cars.url

    alt.Chart(cars).mark_bar().encode(
        y='Cylinders:O',
        x='mean(Acceleration):Q',
    )

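For intuition, the aggregation this chart performs — a mean per group — can be reproduced with plain Python. This is only an illustration of the semantics (the sample values are made up); Altair itself delegates the computation to Vega-Lite:

```python
from collections import defaultdict

# A tiny stand-in for the cars dataset: (Cylinders, Acceleration) pairs
rows = [(4, 20.5), (4, 19.5), (8, 11.5), (8, 12.5), (8, 12.0)]

groups = defaultdict(list)
for cylinders, acceleration in rows:
    groups[cylinders].append(acceleration)

# 'mean(Acceleration)' grouped by 'Cylinders'
means = {cyl: sum(vals) / len(vals) for cyl, vals in groups.items()}
print(means)  # {4: 20.0, 8: 12.0}
```

Each bar in the chart corresponds to one entry of ``means``: the ``Cylinders`` value selects the bar and the group mean sets its length.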
The Altair shorthand string::

    # ...
    x='mean(Acceleration):Q',
    # ...

is made available for convenience, and is equivalent to the longer form::

    # ...
    x=alt.X(field='Acceleration', aggregate='mean', type='quantitative'),
    # ...

For more information on shorthand encoding specifications, see
:ref:`encoding-aggregates`.
The same plot can be shown using an explicitly computed aggregation, using the
:meth:`~Chart.transform_aggregate` method:
.. altair-plot::

    alt.Chart(cars).mark_bar().encode(
        y='Cylinders:O',
        x='mean_acc:Q'
    ).transform_aggregate(
        mean_acc='mean(Acceleration)',
        groupby=["Cylinders"]
    )

For a list of available aggregates, see :ref:`encoding-aggregates`.
Transform Options
^^^^^^^^^^^^^^^^^
The :meth:`~Chart.transform_aggregate` method is built on the :class:`~AggregateTransform`
class, which has the following options:
.. altair-object-table:: altair.AggregateTransform
The :class:`~AggregatedFieldDef` objects have the following options:
.. altair-object-table:: altair.AggregatedFieldDef
.. index::
    single: Installation
    single: Configuration

Installation
============
Prerequisites
-------------
PHP ^7.2 and Symfony ^4.4 are needed to make this bundle work; there are
also some Sonata dependencies that need to be installed and configured beforehand.
Optional dependencies:
* `SonataAdminBundle <https://sonata-project.org/bundles/admin>`_
* `SonataClassificationBundle <https://sonata-project.org/bundles/classification>`_
And the persistence bundle (choose one):
* `SonataDoctrineOrmAdminBundle <https://sonata-project.org/bundles/doctrine-orm-admin>`_
* `SonataDoctrinePHPCRAdminBundle <https://sonata-project.org/bundles/doctrine-phpcr-admin>`_
* `SonataDoctrineMongoDBAdminBundle <https://sonata-project.org/bundles/mongo-admin>`_
Follow also their configuration step; you will find everything you need in
their own installation chapter.
.. note::
If a dependency is already installed somewhere in your project or in
another dependency, you won't need to install it again.
Install Symfony Flex packs
--------------------------
With this method you can directly setup all the entities required to make this bundle work
with the different persistence bundles supported.
If you picked ``SonataDoctrineOrmAdminBundle``, install the Sonata Media ORM pack::

    composer require sonata-project/media-orm-pack

If you picked ``SonataDoctrineMongoDBAdminBundle``, install the Sonata Media ODM pack::

    composer require sonata-project/media-odm-pack

Install without Symfony Flex packs
----------------------------------
Add ``SonataMediaBundle`` via composer::

    composer require sonata-project/media-bundle

To load external resources, e.g. from Vimeo or YouTube, you must install ``psr/http-client`` and ``psr/http-factory`` implementations::

    composer require symfony/http-client nyholm/psr7

If you want to use the REST API, you also need ``friendsofsymfony/rest-bundle`` and ``nelmio/api-doc-bundle``::

    composer require friendsofsymfony/rest-bundle nelmio/api-doc-bundle

Next, be sure to enable the bundles in your ``config/bundles.php`` file if they
are not already enabled::

    // config/bundles.php

    return [
        // ...
        Sonata\MediaBundle\SonataMediaBundle::class => ['all' => true],
    ];

Configuration
=============
SonataMediaBundle Configuration
-------------------------------
.. code-block:: yaml

    # config/packages/sonata_media.yaml

    sonata_media:
        class:
            media: App\Entity\SonataMediaMedia
            gallery: App\Entity\SonataMediaGallery
            gallery_has_media: App\Entity\SonataMediaGalleryHasMedia
        db_driver: doctrine_orm # or doctrine_mongodb, doctrine_phpcr; it is mandatory to choose one here
        default_context: default # you need to set a context
        contexts:
            default: # the default context is mandatory
                providers:
                    - sonata.media.provider.dailymotion
                    - sonata.media.provider.youtube
                    - sonata.media.provider.image
                    - sonata.media.provider.file
                    - sonata.media.provider.vimeo
                formats:
                    small: { width: 100, quality: 70 }
                    big: { width: 500, quality: 70 }
        cdn:
            server:
                path: /uploads/media # http://media.sonata-project.org/
        filesystem:
            local:
                directory: '%kernel.root_dir%/../public/uploads/media'
                create: false

.. note::

    You can define formats per provider type. You might want to set
    a transversal ``admin`` format to be used by the ``mediaadmin`` class.

    Also, you can determine the resizer to use; the default value is
    ``sonata.media.resizer.simple`` but you can change it to
    ``sonata.media.resizer.square`` or ``sonata.media.resizer.crop``.

.. code-block:: yaml

    # config/packages/sonata_media.yaml

    sonata_media:
        providers:
            image:
                resizer: sonata.media.resizer.square

.. note::

    The square resizer works like the simple resizer when the image format has
    only the width. But if you specify the height, the resizer crops the image
    to the smaller size.

    The crop resizer crops the image to the exact width and height. This is done by
    resizing the image first and cropping the unwanted parts at the end.

Doctrine ORM Configuration
--------------------------
Add the bundle in the config mapping definition (or enable `auto_mapping`_)::

    # config/packages/doctrine.yaml

    doctrine:
        orm:
            entity_managers:
                default:
                    mappings:
                        SonataMediaBundle: ~

And then create the corresponding entities, ``src/Entity/SonataMediaMedia``::

    // src/Entity/SonataMediaMedia.php

    use Doctrine\ORM\Mapping as ORM;
    use Sonata\MediaBundle\Entity\BaseMedia;

    /**
     * @ORM\Entity
     * @ORM\Table(name="media__media")
     */
    class SonataMediaMedia extends BaseMedia
    {
        /**
         * @ORM\Id
         * @ORM\GeneratedValue
         * @ORM\Column(type="integer")
         */
        protected $id;
    }

``src/Entity/SonataMediaGallery``::

    // src/Entity/SonataMediaGallery.php

    use Doctrine\ORM\Mapping as ORM;
    use Sonata\MediaBundle\Entity\BaseGallery;

    /**
     * @ORM\Entity
     * @ORM\Table(name="media__gallery")
     */
    class SonataMediaGallery extends BaseGallery
    {
        /**
         * @ORM\Id
         * @ORM\GeneratedValue
         * @ORM\Column(type="integer")
         */
        protected $id;
    }

and ``src/Entity/SonataMediaGalleryHasMedia``::

    // src/Entity/SonataMediaGalleryHasMedia.php

    use Doctrine\ORM\Mapping as ORM;
    use Sonata\MediaBundle\Entity\BaseGalleryHasMedia;

    /**
     * @ORM\Entity
     * @ORM\Table(name="media__gallery_has_media")
     */
    class SonataMediaGalleryHasMedia extends BaseGalleryHasMedia
    {
        /**
         * @ORM\Id
         * @ORM\GeneratedValue
         * @ORM\Column(type="integer")
         */
        protected $id;
    }

The only thing left is to update your schema::

    bin/console doctrine:schema:update --force

Doctrine PHPCR Configuration
----------------------------
Add the bundle in the config mapping definition (or enable `auto_mapping`_)::

    # config/packages/doctrine_phpcr.yaml

    doctrine_phpcr:
        odm:
            mappings:
                SonataMediaBundle:
                    prefix: Sonata\MediaBundle\PHPCR

Then you have to create the corresponding documents, ``src/PHPCR/SonataMediaMedia``::

    // src/PHPCR/SonataMediaMedia.php

    use Doctrine\ODM\PHPCR\Mapping\Annotations as PHPCR;
    use Sonata\MediaBundle\PHPCR\BaseMedia;

    /**
     * @PHPCR\Document
     */
    class SonataMediaMedia extends BaseMedia
    {
        /**
         * @PHPCR\Id
         */
        protected $id;
    }

``src/PHPCR/SonataMediaGallery``::

    // src/PHPCR/SonataMediaGallery.php

    use Doctrine\ODM\PHPCR\Mapping\Annotations as PHPCR;
    use Sonata\MediaBundle\PHPCR\BaseGallery;

    /**
     * @PHPCR\Document
     */
    class SonataMediaGallery extends BaseGallery
    {
        /**
         * @PHPCR\Id
         */
        protected $id;
    }

and ``src/PHPCR/SonataMediaGalleryHasMedia``::

    // src/PHPCR/SonataMediaGalleryHasMedia.php

    use Doctrine\ODM\PHPCR\Mapping\Annotations as PHPCR;
    use Sonata\MediaBundle\PHPCR\BaseGalleryHasMedia;

    /**
     * @PHPCR\Document
     */
    class SonataMediaGalleryHasMedia extends BaseGalleryHasMedia
    {
        /**
         * @PHPCR\Id
         */
        protected $id;
    }

And then configure ``SonataMediaBundle`` to use the newly generated classes::

    # config/packages/sonata_media.yaml

    sonata_media:
        db_driver: doctrine_phpcr
        class:
            media: App\PHPCR\SonataMediaMedia
            gallery: App\PHPCR\SonataMediaGallery
            gallery_has_media: App\PHPCR\SonataMediaGalleryHasMedia

Doctrine MongoDB Configuration
------------------------------
Add the bundle in the config mapping definition (or enable `auto_mapping`_)::

    # config/packages/doctrine_mongodb.yaml

    doctrine_mongodb:
        odm:
            mappings:
                SonataMediaBundle: ~

Then you have to create the corresponding documents, ``src/Document/SonataMediaMedia``::

    // src/Document/SonataMediaMedia.php

    use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;
    use Sonata\MediaBundle\Document\BaseMedia;

    /**
     * @MongoDB\Document
     */
    class SonataMediaMedia extends BaseMedia
    {
        /**
         * @MongoDB\Id
         */
        protected $id;
    }

``src/Document/SonataMediaGallery``::

    // src/Document/SonataMediaGallery.php

    use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;
    use Sonata\MediaBundle\Document\BaseGallery;

    /**
     * @MongoDB\Document
     */
    class SonataMediaGallery extends BaseGallery
    {
        /**
         * @MongoDB\Id
         */
        protected $id;
    }

and ``src/Document/SonataMediaGalleryHasMedia``::

    // src/Document/SonataMediaGalleryHasMedia.php

    use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;
    use Sonata\MediaBundle\Document\BaseGalleryHasMedia;

    /**
     * @MongoDB\Document
     */
    class SonataMediaGalleryHasMedia extends BaseGalleryHasMedia
    {
        /**
         * @MongoDB\Id
         */
        protected $id;
    }

And then configure ``SonataMediaBundle`` to use the newly generated classes::

    # config/packages/sonata_media.yaml

    sonata_media:
        db_driver: doctrine_mongodb
        class:
            media: App\Document\SonataMediaMedia
            gallery: App\Document\SonataMediaGallery
            gallery_has_media: App\Document\SonataMediaGalleryHasMedia

Add SonataMediaBundle routes
----------------------------
.. code-block:: yaml

    # config/routes.yaml

    gallery:
        resource: '@SonataMediaBundle/Resources/config/routing/gallery.xml'
        prefix: /media/gallery

    media:
        resource: '@SonataMediaBundle/Resources/config/routing/media.xml'
        prefix: /media

Create uploads folder
---------------------
If it does not already exist, you need to add a specific folder to allow
uploads from users; make sure your HTTP user can write to this directory:
.. code-block:: bash

    mkdir -p public/uploads/media

Next Steps
----------
At this point, your Symfony installation should be fully functional, without errors
showing up from SonataMediaBundle. If, at this point or during the installation,
you come across any errors, don't panic:
- Read the error message carefully. Try to find out exactly which bundle is causing the error.
Is it SonataMediaBundle or one of the dependencies?
- Make sure you followed all the instructions correctly, for both SonataMediaBundle and its dependencies.
- Still no luck? Try checking the project's `open issues on GitHub`_.
.. _`open issues on GitHub`: https://github.com/sonata-project/SonataMediaBundle/issues
.. _`auto_mapping`: http://symfony.com/doc/4.4/reference/configuration/doctrine.html#configuration-overviews
pycaption
==========
|Build Status|
**PLEASE SEE** `pbs/pycaption <https://github.com/pbs/pycaption>`__ **FOR OFFICIAL RELEASES.**
``pycaption`` is a caption reading/writing module. Use one of the given Readers
to read content into a CaptionSet object, and then use one of the Writers to
output the CaptionSet into captions of your desired format.
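The reader → intermediate representation → writer pattern described above can be sketched in a few lines of plain Python. This is a toy illustration only; the real ``SRTReader``, ``CaptionSet``, and writer classes in pycaption have richer APIs, and the class names below (other than ``CaptionSet``) are made up:

```python
class CaptionSet:
    """Toy intermediate representation: a list of (start, end, text) cues."""
    def __init__(self, cues):
        self.cues = cues


class ToyReader:
    def read(self, lines):
        # Each input line is "start|end|text" in this toy format
        cues = [tuple(line.split("|", 2)) for line in lines]
        return CaptionSet(cues)


class ToyPlainWriter:
    def write(self, caption_set):
        return "\n".join(text for _, _, text in caption_set.cues)


caps = ToyReader().read(["0|1|Hello", "1|2|World"])
print(ToyPlainWriter().write(caps))  # prints "Hello" then "World"
```

The point of the shared ``CaptionSet`` object is that any reader can be paired with any writer, which is how pycaption converts between formats.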
Version 2.0.0\@learningequality passes all tests with Python 2.7, 3.4, 3.5, 3.6, and 3.7.
For details, see the `documentation <http://pycaption.readthedocs.org>`__.
Changelog
---------
2.2.0\@learningequality
^^^^^^^^^^^^^^^^^^^^^^^
- Added ``enum_compat`` library to maintain Python 2.7 support
- Pinned ``beautifulsoup4`` under ``v4.9.0`` as it caused numerous test failures
- Unpinned ``lxml``
- Misc Travis and Tox updates
2.0.0\@learningequality
^^^^^^^^^^^^^^^^^^^^^^^
- Python 2 and 3 support (see branch `py27\@pbs <https://github.com/pbs/pycaption/tree/py27>`__)
- Upgraded ``beautifulsoup4`` package to a more current version, and resolved issues with tests due to upgrade. See full detailed changes `here <https://github.com/learningequality/pycaption/pull/1>`__.
- Removed ``from future import standard_library`` as it can cause issues with other packages and its removal causes no test failures.
- Fixed ``DFXPReader`` issue with default language (see `this PR <https://github.com/pbs/pycaption/pull/188>`__)
- Changed global default language to ISO-639-2 undetermined language code ``und`` (see `this PR <https://github.com/pbs/pycaption/pull/188>`__)
1.0.0\@pbs
^^^^^^^^^^
- Added Python 3 support (see `pbs/pycaption <https://github.com/pbs/pycaption>`__).
0.5.x\@pbs
^^^^^^^^^^
- Added positioning support
- Created documentation
License
-------
This module is Copyright 2012 PBS.org and is available under the `Apache
License, Version 2.0 <http://www.apache.org/licenses/LICENSE-2.0>`__.
.. |Build Status| image:: https://travis-ci.org/pbs/pycaption.png?branch=master
   :target: https://travis-ci.org/pbs/pycaption
.. _debug-samples:
Debug Samples
#################
.. toctree::
   :maxdepth: 1
   :glob:

   **/*
Base
====
.. automodule:: kecleon.base
    :members:
    :private-members:
##########################
About pandapower
##########################
pandapower combines the data analysis library `pandas <http://pandas.pydata.org>`_ and the power flow solver `PYPOWER <https://pypi.python.org/pypi/PYPOWER>`_ to create an easy-to-use network calculation program
aimed at the automation of analysis and optimization in power systems.
.. image:: /pics/pp.svg
    :width: 250em
    :align: left
.. |br| raw:: html

    <br />

|br|
|br|
|br|
More information about pandapower can be found on `www.pandapower.org <https://www.pandapower.org>`_.
About pandapower:
- `Power System Modeling <https://www.pandapower.org/about/#modeling>`_
- `Power System Analysis <https://www.pandapower.org/about/#analysis>`_
- `Citing pandapower <https://www.pandapower.org/references/>`_
Getting Started:
- `Installation Notes <https://www.pandapower.org/start/>`_
- `Minimal Example <https://www.pandapower.org/start/#a-short-introduction->`_
- `Interactive Tutorials <https://www.pandapower.org/start/#interactive-tutorials->`_
If you are interested in the newest pandapower developments, subscribe to our `mailing list <https://www.pandapower.org/contact/#list>`_!
pandapower is a joint development of the research group Energy Management and Power System Operation, University of Kassel and the Department for Distribution System
Operation at the Fraunhofer Institute for Energy Economics and Energy System Technology (IEE), Kassel.
.. image:: /pics/iee.png
    :width: 18em
    :align: left
.. image:: /pics/e2n.png
    :width: 22em
    :align: right
|br|
|br|
|br|
|br|
.. toctree::
    :maxdepth: 1

    about/units
    about/update
    about/authors
    about/changelog
    about/license
.. vim: set expandtab: tw=80
========================
A Vagrant Box for CentOS
========================
The master branch of this repository contains the tools to create a minimal
CentOS Vagrant box. Other branches contain different applications:
- cpython defines a virtual machine for working on `Cpython`_
- reviewboard defines a basic `Review Board`_ virtual machine
This implementation stores several passwords and key pairs in the credentials
directory. This directory is located within the directory used to make the box.
Generating key pairs and passwords eliminates the need for the public (insecure)
Vagrant SSH key pair.
Encrypted password hashes appear in the generated Kickstart file (called
ks.cfg). The generated Kickstart file contains only enough instructions to enable
Vagrant. Other required instructions are in the scripts identified in the Packer
template file (centos.json).
=====================
Build the Vagrant Box
=====================
-------------
Prerequisites
-------------
Install the following prerequisites on the host.
- GNU Make 3.81
- Packer 0.10.1
- Vagrant 1.8.1
- VirtualBox 5.0.20
- Python's passlib
Using Python's `virtualenv`_? ``make install`` takes care of Python dependencies.
It will not install Python modules into anything other than a virtual environment.

To set up a Python virtual environment run::

    mkvirtualenv vagrant
    workon vagrant

--------
Building
--------
On the host, run::

    > make install all BOX_NAME=foo-centos-x86_64 PACKER_TEMPLATE=centos.json PACKER_VARS=centos-x86_64-vars.json

where
- BOX_NAME identifies the Vagrant box.
- PACKER_TEMPLATE identifies the Packer template used to generate the Vagrant box.
- PACKER_VARS identifies the Packer variables required by PACKER_TEMPLATE.
The above make command creates the Vagrant box foo-centos-x86_64_virtualbox.box,
a Vagrant box relying on VirtualBox as a provider.
-----
Using
-----
On the host, run::

    > cd flatfoot/foo-centos-x86_64 # organization and box name
    > vagrant up
    > vagrant ssh

.. _Cpython: http://cython.org
.. _Review Board: https://www.reviewboard.org
.. _virtualenv: https://virtualenv.pypa.io/en/stable/
+--------+-----------------+
| biopsy | R Documentation |
+--------+-----------------+
Biopsy Data on Breast Cancer Patients
-------------------------------------
Description
~~~~~~~~~~~
This breast cancer database was obtained from the University of
Wisconsin Hospitals, Madison from Dr. William H. Wolberg. He assessed
biopsies of breast tumours for 699 patients up to 15 July 1992; each of
nine attributes has been scored on a scale of 1 to 10, and the outcome
is also known. There are 699 rows and 11 columns.
Usage
~~~~~
::

    biopsy

Format
~~~~~~
This data frame contains the following columns:
``ID``
    sample code number (not unique).

``V1``
    clump thickness.

``V2``
    uniformity of cell size.

``V3``
    uniformity of cell shape.

``V4``
    marginal adhesion.

``V5``
    single epithelial cell size.

``V6``
    bare nuclei (16 values are missing).

``V7``
    bland chromatin.

``V8``
    normal nucleoli.

``V9``
    mitoses.

``class``
    ``"benign"`` or ``"malignant"``.
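For readers working outside R, data in the column layout documented above can be consumed with plain Python. The sketch below parses a small in-memory sample (the values are made up for illustration); the assumption that missing ``V6`` values are encoded as ``NA`` matches the usual Rdatasets CSV export, but should be checked against your copy of the file:

```python
import csv
import io

# Two hypothetical rows in the documented column layout (ID, V1..V9, class)
sample = io.StringIO(
    "ID,V1,V2,V3,V4,V5,V6,V7,V8,V9,class\n"
    "1000025,5,1,1,1,2,1,3,1,1,benign\n"
    "1057013,8,4,5,1,2,NA,7,3,1,malignant\n"
)

rows = []
for row in csv.DictReader(sample):
    # Treat "NA" (as written by R's write.csv) as a missing value
    row["V6"] = None if row["V6"] == "NA" else int(row["V6"])
    rows.append(row)

missing_v6 = sum(1 for r in rows if r["V6"] is None)
print(missing_v6)  # 1
```

On the full dataset the same count should come out to the 16 missing ``V6`` values mentioned in the column description.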
Source
~~~~~~
P. M. Murphy and D. W. Aha (1992). UCI Repository of machine learning
databases. [Machine-readable data repository]. Irvine, CA: University of
California, Department of Information and Computer Science.
O. L. Mangasarian and W. H. Wolberg (1990) Cancer diagnosis via linear
programming. *SIAM News* **23**, pp 1 & 18.
William H. Wolberg and O.L. Mangasarian (1990) Multisurface method of
pattern separation for medical diagnosis applied to breast cytology.
*Proceedings of the National Academy of Sciences, U.S.A.* **87**, pp.
9193–9196.
O. L. Mangasarian, R. Setiono and W.H. Wolberg (1990) Pattern
recognition via linear programming: Theory and application to medical
diagnosis. In *Large-scale Numerical Optimization* eds Thomas F. Coleman
and Yuying Li, SIAM Publications, Philadelphia, pp 22–30.
K. P. Bennett and O. L. Mangasarian (1992) Robust linear programming
discrimination of two linearly inseparable sets. *Optimization Methods
and Software* **1**, pp. 23–34 (Gordon & Breach Science Publishers).
References
~~~~~~~~~~
Venables, W. N. and Ripley, B. D. (1999) *Modern Applied Statistics with
S-PLUS.* Third Edition. Springer.
Functions
==========
.. automodule:: lectures.integration.integration_algorithms
:members:
.. nnsvs documentation master file, created by
sphinx-quickstart on Fri Apr 24 00:51:13 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
NNSVS
=====
Neural network based singing voice synthesis library
Features
--------
- **Open-source**: NNSVS is fully open-source. You can create your own voicebanks with your dataset.
- **Multiple languages**: NNSVS has been used for creating singing voice synthesis (SVS) systems for multiple languages by VocalSynth communities (8+ as far as I know).
- **Research friendly**: NNSVS comes with reproducible Kaldi/ESPnet-style recipes. You can use NNSVS to create baseline systems for your research.
Note that NNSVS was originally designed for research purposes. Please check out more user-friendly `ENUNU <https://github.com/oatsu-gh/ENUNU>`_ for creative purposes.
You can find a practical guide for NNSVS/ENUNU at https://nnsvs.carrd.co/ (by `xuu <https://xuu.crd.co/>`_).
A detailed tutorial for making voice banks can be found at `NNSVS Database Making Tutorial <https://docs.google.com/document/d/1uMsepxbdUW65PfIWL1pt2OM6ZKa5ybTTJOpZ733Ht6s/edit?usp=sharing>`_ (by `PixProcuer <https://twitter.com/PixPrucer>`_).
Audio samples
-------------
Samples by r9y9: https://soundcloud.com/r9y9/sets/dnn-based-singing-voice
Selected videos
---------------
Demo by https://github.com/DYVAUX
.. youtube:: 0sSd31TUVCU
:align: center
You can find more from the NNSVS/ENUNU community: `YouTube <https://www.youtube.com/results?search_query=nnsvs+enunu>`_, `NicoNico <https://www.nicovideo.jp/search/nnsvs?ref=nicotop_search>`_
.. toctree::
:maxdepth: 1
:caption: Demos
notebooks/Demos
notebooks/NNSVS_vs_Sinsy
demo_server
.. toctree::
:maxdepth: 1
:caption: Notes
installation
overview
recipes
optuna
tips
update_guide
devdocs
.. toctree::
:maxdepth: 1
:caption: Package reference
pretrained
svs
base
model
acoustic_models
postfilters
discriminators
dsp
gen
mdn
pitch
multistream
util
train_util
.. toctree::
:maxdepth: 1
:caption: Resources
links
papers
.. toctree::
:maxdepth: 1
:caption: Meta information
changelog
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
saga_py
=======
Create a series of dependent actions and roll everything back when one of them fails.
Install
-------
.. code-block:: bash
$ pip install saga_py
Usage
-----
Simple example
^^^^^^^^^^^^^^
.. code-block:: python
from saga import SagaBuilder
counter1 = 0
counter2 = 0
def incr_counter1(amount):
global counter1
counter1 += amount
def incr_counter2(amount):
global counter2
counter2 += amount
def decr_counter1(amount):
global counter1
counter1 -= amount
def decr_counter2(amount):
global counter2
counter2 -= amount
SagaBuilder \
.create() \
.action(lambda: incr_counter1(15), lambda: decr_counter1(15)) \
.action(lambda: incr_counter2(1), lambda: decr_counter2(1)) \
.build() \
.execute()
# if every action succeeds, the effects of all actions are applied
print(counter1) # 15
print(counter2) # 1
An action fails example
^^^^^^^^^^^^^^^^^^^^^^^
If one action fails, the compensations for all already executed actions are run and a SagaError is raised that wraps
all Exceptions encountered during the run.
.. code-block:: python
from saga import SagaBuilder, SagaError
counter1 = 0
counter2 = 0
def incr_counter1(amount):
global counter1
counter1 += amount
def incr_counter2(amount):
global counter2
counter2 += amount
raise BaseException('some error happened')
def decr_counter1(amount):
global counter1
counter1 -= amount
def decr_counter2(amount):
global counter2
counter2 -= amount
try:
SagaBuilder \
.create() \
.action(lambda: incr_counter1(15), lambda: decr_counter1(15)) \
.action(lambda: incr_counter2(1), lambda: decr_counter2(1)) \
.build() \
.execute()
except SagaError as e:
print(e) # wraps the BaseException('some error happened')
print(counter1) # 0
print(counter2) # 0
An action and a compensation fail example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since the compensation for action2 fails, the compensation effect is undefined from the framework's perspective,
all other compensations are run regardless.
.. code-block:: python
from saga import SagaBuilder, SagaError
counter1 = 0
counter2 = 0
def incr_counter1(amount):
global counter1
counter1 += amount
def incr_counter2(amount):
global counter2
counter2 += amount
raise BaseException('some error happened')
def decr_counter1(amount):
global counter1
counter1 -= amount
def decr_counter2(amount):
global counter2
raise BaseException('compensation also fails')
try:
SagaBuilder \
.create() \
.action(lambda: incr_counter1(15), lambda: decr_counter1(15)) \
.action(lambda: incr_counter2(1), lambda: decr_counter2(1)) \
.build() \
.execute()
except SagaError as e:
print(e) # wraps both the action's and the compensation's exceptions
print(counter1) # 0
print(counter2) # 1
Passing values from one action to the next
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An action can return a dict of return values.
The dict is then passed as keyword arguments to the next action and its corresponding compensation.
No values can be passed between compensations.
.. code-block:: python
from saga import SagaBuilder, SagaError
counter1 = 0
counter2 = 0
def incr_counter1(amount):
global counter1
counter1 += amount
return {'counter1_value': counter1}
def incr_counter2(counter1_value):
global counter2
counter2 += counter1_value
def decr_counter1(amount):
global counter1
counter1 -= amount
def decr_counter2(counter1_value):
global counter2
counter2 -= counter1_value
SagaBuilder \
.create() \
.action(lambda: incr_counter1(15), lambda: decr_counter1(15)) \
.action(incr_counter2, decr_counter2) \
.build() \
.execute()
print(counter1) # 15
print(counter2) # 15
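The compensation bookkeeping that makes these examples work can be illustrated in a few lines of plain Python. This is a sketch of the general saga pattern only; ``run_saga``, ``incr`` and ``fail`` are invented names for illustration, not part of saga_py's API:

```python
# Sketch of the saga pattern: run actions in order and, when one fails,
# run the compensations of the already-completed actions in reverse.
# This is an illustration, not saga_py's actual implementation.

def run_saga(steps):
    done = []  # compensations for actions that already succeeded
    for action, compensation in steps:
        try:
            action()
        except BaseException:
            for comp in reversed(done):
                comp()  # roll back in reverse order
            raise
        done.append(compensation)

counter = 0

def incr(n):
    def _do():
        global counter
        counter += n
    return _do

def fail():
    raise RuntimeError("boom")

try:
    run_saga([(incr(15), incr(-15)), (fail, incr(0))])
except RuntimeError:
    pass

print(counter)  # 0, because the first action was compensated
```

Keeping a stack of compensations and unwinding it in reverse is also why a failing compensation leaves the final state undefined: the remaining compensations still run, but the failed one's effect is never undone.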
.. Rele documentation master file, created by
sphinx-quickstart on Wed Jun 5 14:19:08 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Relé's documentation!
================================
Release v\ |version|. (`Installation <https://github.com/mercadona/rele>`_)
.. image:: https://travis-ci.org/mercadona/rele.svg?branch=master
:target: https://travis-ci.org/mercadona/rele
.. image:: https://img.shields.io/badge/license-Apache%202-blue.svg
:target: https://github.com/mercadona/rele/blob/master/LICENSE
-------------------
**Relé** makes integration with Google PubSub easier and is ready to
integrate seamlessly into any Django project.
Motivation and Features
_______________________
The Publish-Subscribe pattern and specifically the Google Cloud Pub/Sub library are
very powerful tools but you can easily cut your fingers on it. Relé
makes integration seamless by providing Publisher, Subscriber and Worker
classes.
Out of the box, Relé includes the following features:
* Simple publishing API
* Declarative subscribers
* Scalable Worker
* Ready to install Django integration
* And much more...
What It Looks Like
__________________
.. code:: python
import rele
from rele import sub
# Subscribe to the Pub/Sub topic
@sub(topic='photo-uploaded')
def photo_uploaded(data, **kwargs):
print(f"Customer {data['customer_id']} has uploaded an image")
# Publish to the topic
rele.publish(topic='photo-uploaded', data={'customer_id': 123})
Install
_______
Relé supports Python 3.6+ and can be installed via ``pip``
.. code::
$ pip install rele
or with Django integration
.. code::
$ pip install rele[django]
User Guides
___________
.. toctree::
:maxdepth: 1
guides/basics
guides/django
guides/filters
guides/emulator
Configuration
_____________
.. toctree::
:maxdepth: 2
api/settings
API Docs
________
This is the part of documentation that details the inner workings of Relé.
.. toctree::
:maxdepth: 2
api/reference
Project Info
____________
.. toctree::
:maxdepth: 1
Source Code <https://github.com/mercadona/rele>
Contributing <https://github.com/mercadona/rele/blob/master/CONTRIBUTING.md>
Code of Conduct <https://github.com/mercadona/rele/blob/master/CODE_OF_CONDUCT.md>
License <https://github.com/mercadona/rele/blob/master/LICENSE>
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _installation-label:
Installation Instructions
==========================
Precompiled libraries in package managers(Nuget, PYPI, etc)
-------------------------------------------------------------
The core part of BrainFlow is written in C/C++ and distributed as dynamic libraries. For some programming languages we publish packages with precompiled libraries to package managers like Nuget or PYPI.
C/C++ code should be compiled for each CPU architecture and for each OS, and we cannot cover all possible cases. As of right now we support:
- x64 libraries for Windows starting from 8.1, for some devices newer version of Windows can be required
- x64 libraries for Linux, they are compiled inside manylinux docker container
- x64/ARM libraries for MacOS, they are universal binaries
If your CPU and OS are not listed above(e.g. Raspberry Pi or Windows with ARM) you can still use BrainFlow, but you need to compile it yourself first. See :ref:`compilation-label` for details.
Python
-------
.. compound::
Please, make sure to use Python 3+. Next, install the latest release from PYPI with the following command in terminal ::
python -m pip install brainflow
.. compound::
If you want to install it from source files or build unreleased version from Github, you should first compile the core module (:ref:`compilation-label`). Then run ::
cd python-package
python -m pip install -U .
C#
----
**Windows(Visual Studio)**
You are able to install the latest release from `Nuget <https://www.nuget.org/packages/brainflow/>`_ or build it yourself:
- Compile BrainFlow's core module
- Open Visual Studio Solution
- Build it using Visual Studio
- **Make sure that unmanaged(C++) libraries exist in search path** - set PATH env variable or copy them to correct folder
**Unix(Mono)**
- Compile BrainFlow's core module
- Install Mono on your system
- Build it using Mono
- **Make sure that unmanaged(C++) libraries exist in search path** - set LD_LIBRARY_PATH env variable or copy them to correct folder
.. compound::
Example for Fedora: ::
# compile c++ code
tools/build_linux.sh
# install dependencies, we skip dnf configuration steps
sudo dnf install nuget
sudo dnf install mono-devel
sudo dnf install mono-complete
sudo dnf install monodevelop
# build solution
xbuild csharp-package/brainflow/brainflow.sln
# run tests
export LD_LIBRARY_PATH=/home/andreyparfenov/brainflow/installed_linux/lib/
mono csharp-package/brainflow/denoising/bin/Debug/test.exe
R
-----
The R binding is based on the `reticulate <https://rstudio.github.io/reticulate/>`_ package and calls Python code, so you need to install the Python binding first and make sure that reticulate uses the correct virtual environment. After that you will be able to build the R package from the command line or using RStudio, install it and run samples.
Java
-----
You are able to download jar files directly from the `release page <https://github.com/brainflow-dev/brainflow/releases>`_
.. compound::
If you want to install it from source files or build unreleased version from github you should compile core module first (:ref:`compilation-label`) and run ::
cd java-package
cd brainflow
mvn package
Also, you can use `GitHub Package <https://github.com/brainflow-dev/brainflow/packages/450100>`_ and download BrainFlow using Maven or Gradle.
To use Github packages you need to `change Maven settings <https://help.github.com/en/packages/using-github-packages-with-your-projects-ecosystem/configuring-apache-maven-for-use-with-github-packages>`_. `Example file <https://github.com/brainflow-dev/brainflow/blob/master/java-package/brainflow/settings.xml>`_ here you need to change OWNER and TOKEN by Github username and token with an access to Github Packages.
Matlab
--------
Steps to setup Matlab binding for BrainFlow:
- Compile Core Module, using the instructions in :ref:`compilation-label`. If you don't want to compile C++ code you can download Matlab package with precompiled libs from `Release page <https://github.com/brainflow-dev/brainflow/releases>`_
- Open Matlab IDE and open brainflow/matlab-package/brainflow folder there
- Add folders lib and inc to Matlab path
- If you want to run Matlab scripts from folders different than brainflow/matlab-package/brainflow you need to add it to your Matlab path too
- If you see errors you may need to configure Matlab to use a C++ compiler instead of C: install Visual Studio 2017 or newer(for Windows) and run this command in the Matlab terminal: :code:`mex -setup cpp`, then select the Visual Studio compiler from the list. More info can be found `here <https://www.mathworks.com/help/matlab/matlab_external/choose-c-or-c-compilers.html>`_.
Julia
--------
BrainFlow is a registered package in the Julia general registry, so it can be installed via the Pkg manager:
.. compound::
Example: ::
import Pkg
Pkg.add("BrainFlow")
When using BrainFlow for the first time in Julia, the BrainFlow artifact containing the compiled BrainFlow libraries will be downloaded automatically from the release page.
If you compile BrainFlow from source, local libraries will take precedence over the artifact.
Rust
-------
.. compound::
You can build Rust binding locally using commands below, but you need to compile C/C++ code first ::
cd rust-package
cd brainflow
cargo build --features generate_binding
Docker Image
--------------
There are docker images with precompiled BrainFlow. You can get them from `DockerHub <https://hub.docker.com/r/brainflow/brainflow>`_.
All bindings except Matlab are preinstalled there.
Also, there are other packages for BCI research and development:
- mne
- pyriemann
- scipy
- matplotlib
- jupyter
- pandas
- etc
If your device uses TCP/IP to send data, you need to run the docker container with :code:`--network host`. For a serial port connection you need to pass the serial port to docker using :code:`--device %your port here%`
.. compound::
Example: ::
# pull container from DockerHub
docker pull brainflow/brainflow:latest
# run docker container with serial port /dev/ttyUSB0
docker run -it --device /dev/ttyUSB0 brainflow/brainflow:latest /bin/bash
# run docker container for boards which use networking
docker run -it --network host brainflow/brainflow:latest /bin/bash
.. _compilation-label:
Compilation of Core Module and C++ Binding
-------------------------------------------
Windows
~~~~~~~~
- Install CMake>=3.16 you can install it from PYPI via pip or from `CMake website <https://cmake.org/>`_
- Install Visual Studio 2019(preferred) or Visual Studio 2017. Other versions may work but not tested
- In VS installer make sure you selected "Visual C++ ATL support"
- Build it as a standard CMake project, you don't need to set any options
.. compound::
If you are not familiar with CMake you can use `build.py <https://github.com/brainflow-dev/brainflow/blob/master/tools/build.py>`_ : ::
# install python3 and run
python -m pip install cmake
cd tools
python build.py
# to get info about args and configure your build you can run
python build.py --help
Linux
~~~~~~
- Install CMake>=3.16 you can install it from PYPI via pip, via package managers for your OS(apt, dnf, etc) or from `CMake website <https://cmake.org/>`_
- If you are going to distribute compiled Linux libraries you HAVE to build it inside manylinux Docker container
- Build it as a standard CMake project, you don't need to set any options
- You can use any compiler but for Linux we test only GCC
.. compound::
If you are not familiar with CMake you can use `build.py <https://github.com/brainflow-dev/brainflow/blob/master/tools/build.py>`_ : ::
python3 -m pip install cmake
cd tools
python3 build.py
# to get info about args and configure your build you can run
python3 build.py --help
MacOS
~~~~~~~
- Install CMake>=3.16 you can install it from PYPI via pip, using :code:`brew` or from `CMake website <https://cmake.org/>`_
- Build it as a standard CMake project, you don't need to set any options
- You can use any compiler but for MacOS we test only Clang
.. compound::
If you are not familiar with CMake you can use `build.py <https://github.com/brainflow-dev/brainflow/blob/master/tools/build.py>`_ : ::
python3 -m pip install cmake
cd tools
python3 build.py
# to get info about args and configure your build you can run
python3 build.py --help
Android
---------
To check supported boards for Android visit :ref:`supported-boards-label`
Installation instructions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Create Java project in Android Studio, Kotlin is not supported
- Download *jniLibs.zip* from `Release page <https://github.com/brainflow-dev/brainflow/releases>`_
- Unpack *jniLibs.zip* and copy its contents to *project/app/src/main/jniLibs*
- Download *brainflow-jar-with-dependencies.jar* from `Release page <https://github.com/brainflow-dev/brainflow/releases>`_ or from `Github package <https://github.com/brainflow-dev/brainflow/packages/290893>`_
- Copy *brainflow-jar-with-dependencies.jar* to *project/app/libs folder*
Now you can use BrainFlow SDK in your Android application!
Note: Android Studio's inline compiler may show red errors but it should compile fine with Gradle. To fix the inline compiler you can use *File > Sync Project with Gradle Files* or click *File > Invalidate Cache/Restart > Invalidate and Restart*
.. compound::
For some API calls you need to provide additional permissions via manifest file of your application ::
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"></uses-permission>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"></uses-permission>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"></uses-permission>
Compilation using Android NDK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**For BrainFlow developers**
To test your changes in BrainFlow on Android you need to build it using Android NDK manually.
Compilation instructions:
- `Download Android NDK <https://developer.android.com/ndk/downloads>`_
- `Download Ninja <https://github.com/ninja-build/ninja/releases>`_ or get one from the *tools* folder, make sure that *ninja.exe* is in search path
- You can also try *MinGW Makefiles* instead *Ninja*, but it's not tested and may not work
- Build C++ code using cmake and *Ninja* for **all ABIs**
- Compiled libraries will be in *tools/jniLibs* folder
.. compound::
Command line examples: ::
# to prepare project(choose ABIs which you need)
# for arm64-v8a
cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE=E:\android-ndk-r21d-windows-x86_64\android-ndk-r21d\build\cmake\android.toolchain.cmake -DANDROID_NATIVE_API_LEVEL=android-19 -DANDROID_ABI=arm64-v8a ..
# for armeabi-v7a
cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE=E:\android-ndk-r21d-windows-x86_64\android-ndk-r21d\build\cmake\android.toolchain.cmake -DANDROID_NATIVE_API_LEVEL=android-19 -DANDROID_ABI=armeabi-v7a ..
# for x86_64
cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE=E:\android-ndk-r21d-windows-x86_64\android-ndk-r21d\build\cmake\android.toolchain.cmake -DANDROID_NATIVE_API_LEVEL=android-19 -DANDROID_ABI=x86_64 ..
# for x86
cmake -G Ninja -DCMAKE_TOOLCHAIN_FILE=E:\android-ndk-r21d-windows-x86_64\android-ndk-r21d\build\cmake\android.toolchain.cmake -DANDROID_NATIVE_API_LEVEL=android-19 -DANDROID_ABI=x86 ..
# to build(should be run for each ABI from previous step**
cmake --build . --target install --config Release -j 2 --parallel 2
####################
Python API Reference
####################
*********************
Core Framework
*********************
.. rubric:: Classes
.. autosummary::
:toctree: _autosummary
:template: custom-module-template.rst
xyzflow.Task
.. rubric:: Built-In Tasks
.. autosummary::
:toctree: _autosummary
:template: custom-module-template.rst
xyzflow.Parameter
xyzflow.Add
xyzflow.Sub
xyzflow.Multiplication
xyzflow.EvaluatedValue
.. rubric:: Functions
.. autosummary::
:toctree: _autosummary
xyzflow.flow | 16.058824 | 40 | 0.602564 |
8b4f6c53f80d54768b4acc7f57b302538bb1fa48 | 4,714 | rst | reStructuredText | docs/appendix/appendix2/lab2.rst | f5devcentral/f5-agility-labs-kubernetes | 53ac535001e0473996f0947a4ae630ffee6896af | [
"MIT"
] | 12 | 2018-07-27T13:13:49.000Z | 2021-12-04T20:22:09.000Z | docs/appendix/appendix2/lab2.rst | f5devcentral/f5-agility-labs-kubernetes | 53ac535001e0473996f0947a4ae630ffee6896af | [
"MIT"
] | 3 | 2018-07-17T21:19:34.000Z | 2021-08-04T13:55:49.000Z | docs/appendix/appendix2/lab2.rst | f5devcentral/f5-agility-labs-kubernetes | 53ac535001e0473996f0947a4ae630ffee6896af | [
"MIT"
] | 18 | 2018-05-31T21:14:13.000Z | 2022-03-07T15:34:43.000Z | Lab 2.2 - Setup the Master
==========================
The master is the system where the "control plane" components run, including
etcd (the cluster database) and the API server (which the kubectl CLI
communicates with). All of these components run in pods started by kubelet
(which is why we had to setup docker first even on the master node)
.. important:: The following commands need to be run on the **master** only
unless otherwise specified.
#. Switch back to the ssh session connected to kube-master1
.. tip:: This session should still be running from the previous lab.
If not, simply open **mRemoteNG** and connect via the saved session.
#. Initialize kubernetes
.. code-block:: bash
kubeadm init --apiserver-advertise-address=10.1.1.7 --pod-network-cidr=10.244.0.0/16
.. note::
- The IP address used to advertise the master. 10.1.1.0/24 is the
network for our control plane. if you don't specify the
--apiserver-advertise-address argument, kubeadm will pick the first
interface with a default gateway (because it needs internet access).
- 10.244.0.0/16 is the default network used by flannel. We'll setup
flannel in a later step.
- Be patient, this step takes a few minutes. The initialization is
successful if you see "Your Kubernetes master has initialized
successfully!".
.. image:: images/cluster-setup-guide-kubeadm-init-master.png
.. important::
- Be sure to save the highlighted output from this command to notepad.
You'll need this information to add your worker nodes and configure
user administration.
- The "kubeadm join" command is run on the nodes to register themselves
with the master. Keep the secret safe since anyone with this token can
add an authenticated node to your cluster. This is used for mutual auth
between the master and the nodes.
#. Configure kubernetes administration. At this point you should be logged in
as root. The following will update both root and ubuntu user accounts for
kubernetes administration.
.. code-block:: bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
exit
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cd $HOME
#. Verify kubernetes is up and running. You can monitor the services are
running by using the following command.
.. code-block:: bash
kubectl get pods --all-namespaces
You'll need to run this several times until you see several containers
"Running" It should look like the following:
.. image:: images/cluster-setup-guide-kubeadmin-init-check.png
.. note:: coredns won't start until the network pod is up and running.
#. Install flannel
.. code-block:: bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
.. note:: You must install a *pod* network add-on so that your *pods* can
communicate with each other. **It is necessary to do this before you try
to deploy any applications to your cluster**, and before "coredns" will
start up.
#. If everything installs and starts as expected you should have "coredns" and
all services status "Running". To check the status of core services, you
can run the following command:
.. code-block:: bash
kubectl get pods --all-namespaces
The output should show all services as running.
.. image:: images/cluster-setup-guide-kubeadmin-init-check-cluster-get-pods.png
.. important:: Before moving to the next lab, "Setup the Nodes" wait for
all system pods to show status “Running”.
#. Additional kubernetes status checks.
.. code-block:: bash
kubectl get cs
.. image:: images/cluster-setup-guide-kubeadmin-init-check-cluster.png
.. code-block:: bash
kubectl cluster-info
.. image:: images/cluster-setup-guide-kubeadmin-init-check-cluster-info.png
.. hint:: If you made a mistake and need to re-initialize the cluster run
the following commands:
.. code-block:: bash
# If you followed the instructions you should be currently connected as user **ubuntu**
# When prompted for password enter "default" without the quotes
su -
# This resets the master to default settings
# You may need to run this command on the "nodes" if a fully functioning cluster is configured
kubeadm reset --force
# This removes the admin references to the broken cluster
rm -rf /home/ubuntu/.kube /root/.kube
| 35.712121 | 109 | 0.697285 |
81669977719fcf41d067c14b74ae8b63a5646267 | 178 | rst | reStructuredText | docs/api/whitesymex.strategies.strategy.rst | umutoztunc/whitesymex | 4d1a49f3d299b0789f7e6f6626588b76c1b3e80b | [
"MIT"
] | 12 | 2021-05-31T04:51:18.000Z | 2022-02-23T03:13:34.000Z | docs/api/whitesymex.strategies.strategy.rst | umutoztunc/whitesymex | 4d1a49f3d299b0789f7e6f6626588b76c1b3e80b | [
"MIT"
] | null | null | null | docs/api/whitesymex.strategies.strategy.rst | umutoztunc/whitesymex | 4d1a49f3d299b0789f7e6f6626588b76c1b3e80b | [
"MIT"
] | null | null | null | whitesymex.strategies.strategy module
=====================================
.. automodule:: whitesymex.strategies.strategy
:members:
:undoc-members:
:show-inheritance:
Utilities
=========
.. contents:: :local:
.. automodule:: torchray.utils
:members:
:show-inheritance:
#########
Use Cases
#########
This section contains a list of the common use cases that are implemented using dSIPRouter
====================================
SIP Trunking Using IP Authentication
====================================
dSIPRouter enables an organization to start supporting SIP Trunking within minutes.
Here are the steps to set it up using IP Authentication:
1. Login to dSIPRouter
2. Validate that your carrier is defined and specified in the Global Outbound Routes. If not, please follow the steps in the :ref:`carrier_groups` and/or :ref:`global_outbound_routes` documentation.
3. Click on PBX's and Endpoints
4. Click "Add"
5. Select **IP Authentication** and fill in the fields specified below:
- Friendly Name
- IP Address of the PBX or Endpoint Device
.. image:: images/sip_trunking_ip_auth.png
:align: center
6. Click "Add"
7. Click "Reload" to make the change active.
===================================================
SIP Trunking Using Username/Password Authentication
===================================================
Here are the steps to set it up using Username/Password Authentication:
1. Login to dSIPRouter
2. Validate that your carrier is defined and specified in the Global Outbound Routes. If not, please follow the steps in the `<carrier_groups.rst>`_ and/or `<global_outbound_routes>`_ documentation.
3. Click on PBX's and Endpoints
4. Click "Add"
5. Select **Username/Password Authentication** and fill in the fields specified below:
- Friendly Name
- Click the "Username/Password Auth" radio button
- Enter a username
- Enter a domain. Note, you can make up the domain name. If you don't specify one, the default domain (sip.dsiprouter.org) will be used.
- Enter a password
.. image:: images/sip_trunking_credentials_auth.png
:align: center
6. Click "Add"
7. Click "Reload" to make the change active.
=======================================
Using PJSIP Trunking - FreePBX Example
=======================================
The following screenshots show how to configure a PJSIP trunk within FreePBX for Username/Password Authentication.
The first screenshot shows the General tab of the "pjsip settings" page:
.. image:: images/sip_trunking_freepbx_pjsip_1.png
:align: center
The following fields need to be entered:
================== ============
Field Value
================== ============
Username Username from dSIPRouter PBX Setup
Secret Password from dSIPRouter PBX Setup
Authentication Outbound
Registration Send
SIP Server Domain name defined in the dSIPRouter PBX Setup
SIP Server Port SIP port, which is 5060 in dSIPRouter
================== ============
.. image:: images/sip_trunking_freepbx_pjsip_2.png
:align: center
The following fields need to be entered:
================== ============
Field Value
================== ============
Outbound Proxy IP address of dSIPRouter - must include the "\;lr" at the end
From Domain The name of the domain defined in the dSIPRouter PBX Setup
================== ============
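For reference, the GUI settings above correspond roughly to the following raw ``pjsip.conf`` sections. This is only a hedged sketch — FreePBX generates this file itself, and ``myuser``, ``mypassword``, ``sip.dsiprouter.org`` and ``203.0.113.10`` are placeholder values, not values taken from this guide:

.. code-block:: ini

   ; Hypothetical sketch of the PJSIP trunk; every name, credential and
   ; address below is a placeholder.
   [dsiprouter-auth]
   type = auth
   auth_type = userpass
   username = myuser
   password = mypassword

   [dsiprouter-reg]
   type = registration
   outbound_auth = dsiprouter-auth
   server_uri = sip:sip.dsiprouter.org
   client_uri = sip:myuser@sip.dsiprouter.org
   ; the \;lr suffix enables loose routing through the proxy
   outbound_proxy = sip:203.0.113.10:5060\;lr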
=========================================
Using chanSIP Trunking - FreePBX Example
=========================================
The following screenshots show how to configure a chanSIP trunk within FreePBX for Username/Password Authentication.
1. Log into FreePBX server
2. Click Connectivity→Trunks
3. Select Add SIP (chan_sip) Trunk
4. Under General tab enter
The following fields need to be entered:
================== ============
Field Value
================== ============
Trunk Name Labeled in dsiprouter
Outbound Caller ID Phone# that you want to appear during a outbound call (if applicable)
================== ============
.. image:: images/sipchan_general.png
:align: center
5. Next you will enter the configurations under the SIP Settings. Here you will enter the SIP settings for outgoing calls by selecting the **Outbound** tab. You will need the following information:
The following fields need to be entered:
================== ============
Field Value
================== ============
Host <host name or IP address of dsiprouter>
Username <Specified in dsiprouter@domainname>
Secret <Specified in dsiprouter>
Type peer
Context from-trunk
================== ============
**The domain name has to be included and correct.**
.. image:: images/chansip_outgoing.png
:align: center
**NOTE:** Type <context=from-trunk> underneath the <type=peer> line in the Peer Details box if it does not appear.
6. Next you will enter the configurations for incoming by selecting the **Incoming** tab in the SIP Settings. Here you will enter the SIP settings for inbound calls. You will need:
User Context: This is most often the account name or number your provider expects. In this example we named it "inbound".
The following User Details need to be entered:
================== ============
Field Value
================== ============
Host <host name or IP address of dsiprouter>
Insecure port,invite
Type peer
Context from-trunk
================== ============
.. image:: images/chansip_incoming.png
:align: center
In the **Register String** enter: <username@domainname>:<password>@<ip address **or** hostname>. In this example it would be sipchantest@sip.dsiprouter.org:HFmx9u9N@demo.dsiprouter.org. **The domain name has to be included and correct.**
.. image:: images/register_string.png
:align: center
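Putting steps 5 and 6 together, the trunk definition can be sketched as below, using this walkthrough's example values (substitute your own username, secret and hosts):

.. code-block:: ini

   ; Outgoing PEER Details (example values from this guide)
   host=demo.dsiprouter.org
   username=sipchantest@sip.dsiprouter.org
   secret=HFmx9u9N
   type=peer
   context=from-trunk

   ; Incoming USER Details
   host=demo.dsiprouter.org
   insecure=port,invite
   type=peer
   context=from-trunk

The register string then follows the pattern <username@domainname>:<password>@<host>, which for these example values is sipchantest@sip.dsiprouter.org:HFmx9u9N@demo.dsiprouter.org.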
7. Click Submit
8. Be sure to click the **Apply Config** button after submitting to confirm.
.. image:: images/apply_config_button.png
:align: center
You will now be able to see the new chanSIP trunk added in the trunk list.
.. image:: images/add_trunk.png
:align: center
9. Next you will need to set up an outbound route. Select Connectivity→ Outbound Routes. Click the “+” sign to add an outbound route. In this tab you will need to enter:
================================= ============
Field Value
================================= ============
Route Name Type desired name
Route CID Number you want to appear on caller ID
Trunk Sequence for Matched Routes Trunk name (select from drop down box)
================================= ============
.. image:: images/outbound_routes_chansip.png
:align: center
10. Click the Dial Patterns tab to set the dial patterns.
If you are familiar with dial patterns, you can enter them manually, or you can click the Dial Patterns Wizard to auto-create them. You can choose 7, 10 or 11 digit patterns. Click Generate Routes.
.. image:: images/chansip_dial_wizard.png
:align: center
Dial pattern is set to your preference. Prefixes are optional, not required.
.. image:: images/chansip_dial_pattern.png
:align: center
11. Click Submit and Apply Config button.
Assuming you already have an extension created in your FreePBX, you can validate incoming/outgoing calls by configuring a softphone or a hard phone. Below is an example of the information you would enter if you use a softphone. In this example we are using Zoiper. Once you've downloaded the Zoiper application on your PC or smart device, you would enter the following to configure the softphone:
================== ============
Field Value
================== ============
Username <extension>@<siptrunkipaddress>
secret <Password of that extension>
Hostname <IP address of your FreePBX> (should autofill)
================== ============
**Note** Skip Authentication and Outbound Proxy
.. image:: images/chansip_zoiper.png
:align: center
You should now be able to make inbound and outbound calls successfully!
================================================
Using SIP Trunking - FusionPBX IP Authentication
================================================
The following screenshots show how to configure a SIP trunk within FusionPBX for IP Authentication.
1. Log into your FusionPBX.
2. Click Accounts --> Gateways-->Click the + sign to add a gateway/SIP Trunk. The only fields you will need to fill here are:
- Gateway= Name of the SIP Trunk
- Proxy= IP address of the SIP trunk
- Register= Change to False because you are using IP authentication.
.. image:: images/sip_trunking_fusionpbx.png
:align: center
.. image:: images/sip_trunking_fusionpbx_2.png
:align: center
3. Click Save
4. Click DialPlan-->Outbound Routes-->Click the + sign to add an outbound route. Here you will enter the following fields:
- Gateway= Name of the SIP Trunk
- Alternate gateways (if applicable)
- DialPlan Expression= 11d (standard setup in FusionPBX). To change the dialplan expression click on the dropdown box where it says "Shortcut to create the outbound dialplan entries for this Gateway."
- Description= (if desired)
5. Click Save
.. image:: images/outbound-routes_fusionpbx.png
:align: center
.. image:: images/outbound-routes_fusionpbx_2.png
:align: center
**NOTE** To make these changes global for ALL domains for this SIP Trunk: reopen outbound routes and change the Domain to Global and the Context to ${domain_name} as shown below.
.. image:: images/fusionpbx_global_dialplan.png
:align: center
===============================================================
Using SIP Trunking - FusionPBX Username/Password Authentication
===============================================================
The following screenshots show how to configure a SIP trunk within FusionPBX for Username/Password Authentication with IP Authentication off.
1. Log into your FusionPBX.
2. Click Accounts --> Gateways-->Click the + sign to add a gateway/SIP Trunk. The following fields you will need to fill here are:
- Gateway= Name of the SIP Trunk
- Username= specified by dSIPRouter provider
- Password= specified by dSIPRouter provider
- From Domain= Specified or set by default
- Proxy= IP address of the SIP trunk
- Register= set to True because you are using Username/Password authentication.
.. image:: images/sip_trunking_fusionpbx_3.png
:align: center
.. image:: images/sip_trunking_fusionpbx_4.png
:align: center
3. Click Save.
4. Click DialPlan-->Outbound Routes-->Click the + sign to add an outbound route. Here you will enter the following fields:
- Gateway= Name of the SIP Trunk
- Alternate gateways (if applicable)
- DialPlan Expression= 11d (standard setup in FusionPBX). To change the dialplan expression click on the dropdown box where it says "Shortcut to create the outbound dialplan entries for this Gateway."
- Description= (if desired)
.. image:: images/outbound-routes_fusionpbx.png
:align: center
.. image:: images/outbound-routes_fusionpbx_2.png
:align: center
5. Click Save
=================
FusionPBX Hosting
=================
Here we will demonstrate how to set up dSIPRouter to enable hosting FusionPBX. We have built-in support for FusionPBX that allows domains to be dynamically pulled from FusionPBX.
1. Login to dSIPRouter
2. Click PBX(s) and EndPoints
3. Click ADD; enter the following fields
- Friendly Name (optional)
- IP address
- IP Auth
- Click to enable FusionPBX Domain Support
- FusionPBX Database IP or Hostname
4. Click ADD
.. image:: images/fusionpbx_hosting.png
:align: center
5. Click Reload Kamailio. (When changes are made, the Reload button will change to orange.)
.. image:: images/reload_button.png
:align: center
6. Access your FusionPBX database via ssh.
7. Run the command as illustrated in the "Edit your PBX Detail" window as root on the FusionPBX server. Replace <ip address> (not including the brackets) with the IP address of the dSIPRouter server you're adding. The command line will look similar to the following picture.
**NOTE** After you have entered the first two lines of commands you will not see any reply. If a command is entered correctly, it will simply return to your root prompt. If the command line is incorrect, you will receive a "command not found" error message. Recheck the command line and IP address.
.. image:: images/fusionpbx_domain_support.png
:align: center
After the command is run you should now be able to see the domains of that PBX in dSIPRouter.
.. image:: images/list_of_domain.png
:align: center
You can verify that PBX hosting works by configuring a softphone or a hard phone. Below is an example using a softphone:
Now that domains have been synced in dSIPRouter you are able to register a softphone. In this example we are using Zoiper.
Once you've downloaded the Zoiper application on your PC or smart device, you would add:
- username (extension@domainname)
- password (password of that extension)
- outbound proxy (IP address of the dSIPRouter)
.. image:: images/zoiper_screenshot.png
:align: center
================================================
Provisioning and Registering a Polycom VVX Phone
================================================
Now that domains have been synced in dSIPRouter you are able to register an endpoint/hard phone. In this example we are using a Polycom VVX410 desk phone.
1. Log into your FusionPBX box
a) Update the "outboundProxy.address" of the template with the IP address or hostname of the dSIPRouter in the provisioning editor.
.. image:: images/outbound_proxy.png
:align: center
2. Assign the phone to a template.
.. image:: images/assign_template.png
:align: center
3. Configuring the Provisioning Server section of the phone. Enter the appropriate information into the fields.
a) Server Type (dSIPRouter uses HTTP by default)
b) Server Address
c) Server Username (device provisioning server name)
d) Server Password
4. Click Save
.. image:: images/provisioning_server.png
:align: center
5. Reboot the phone
==========================================
FreePBX Hosting - Pass Thru Authentication
==========================================
Here we will demonstrate how to set up dSIPRouter to enable hosting FreePBX using Pass Thru Authentication. FreePBX is designed to be a single-tenant system; in other words, it was built to handle one SIP domain. So, we use dSIPRouter to define a SIP domain and we pass registration info through to the FreePBX server so that you don't have to change how authentication is done. However, this will only work for one FreePBX server. If you have a cluster of FreePBX servers, then use "Local Subscriber Table" authentication. The value of having dSIPRouter in front of FreePBX is the flexibility it provides. After setting this up you will have the ability to upgrade or migrate users from one FreePBX instance to another without having to take an outage. The following video shows how to configure this; the steps to implement it are below the video.
.. raw:: html
<object width="560" height="315"><param name="movie"
value="https://www.youtube.com/embed/OgTZLYYx1u8"></param><param
name="allowFullScreen" value="true"></param><param
name="allowscriptaccess" value="always"></param><embed
src="https://www.youtube.com/embed/OgTZLYYx1u8"
type="application/x-shockwave-flash" allowscriptaccess="always"
allowfullscreen="true" width=""
height="385"></embed></object>
------------------
Steps to Implement
------------------
1. Click PBX and Endpoints
2. Click Add
.. image:: images/freepbx-pt-add-pbx.png
:align: center
3. Reload Kamailio
4. Click Domains
5. Click Add
.. image:: images/freepbx-pt-add-domain.png
:align: center
6. Reload Kamailio
7. Register a phone via dSIPRouter - notice that we used the hostname of dSIPRouter as the Outbound Proxy. This forces the registration through the proxy.
.. image:: images/freepbx-pt-setup-softphone.png
:align: center
==============================
Microsoft Teams Direct Routing
==============================
dSIPRouter can act as an intermediary Session Border Controller between Microsoft Teams Direct Routing and your SIP provider or SIP servers.
An instance of dSIPRouter can either be a single tenant configuration (like sbc.example.com) or multi-tenant under a single wildcard subdomain (like *.sbc.example.com where * is the tenant's name).
.. image:: images/direct-routing-sbcs.png
:align: center
------------------
Steps to Implement
------------------
1. `Buy a license <https://dopensource.com/dsiprouter-annual-subscriptions/>`_ and follow the license installation instructions that are emailed to you.
2. Add any carriers you need for inbound and outbound routing, define appropriate routes.
3. Authorize your SBC's domain with Microsoft 365 by adding a TXT record starting with ms= per `Microsoft's documentation <https://docs.microsoft.com/en-us/microsoft-365/admin/setup/add-domain?view=o365-worldwide>`_.
Note: For multi-tenant use, authorizing the root subdomain or domain (if you use *.sbc.example.com, you would authorize sbc.example.com) should avoid the need to authorize each subdomain below it (like clientname.sbc.example.com).
4. Create a global admin user with proper Teams licensing associated with the domain (or for multi-tenant both the root subdomain (eg: sbc.example.com) and client's domain (eg: client.sbc.example.com))
5. Add the Teams session border controller in `Teams Admin Center <https://admin.teams.microsoft.com/direct-routing/v2>`_. Ensure the SIP port is correct (usually 5061) and the SBC is enabled!
6. `Install PowerShell <https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux>`_, type pwsh, then:
.. code-block:: powershell
Install-Module -Name MicrosoftTeams
Import-Module MicrosoftTeams
$userCredential = Get-Credential
Connect-MicrosoftTeams -Credential $userCredential
Login Note: If you're using multi-factor authentication (MFA/2FA), log in by typing Connect-MicrosoftTeams
Debian 10 Note: If you run into `this OpenSSL issue <https://github.com/PowerShell/PowerShell/issues/12202>`_ , here is `a workaround <https://github.com/PowerShell/PowerShell/issues/12202#issuecomment-720402212>`_!
**Replace sbc.example.com, user@example.com and +13137175555** with your SBC's FQDN, the user's email address and their phone number (with + then country code, use +1 if you are in the North American Numbering Plan)
.. code-block:: powershell
Set-CsOnlinePstnUsage -Identity Global -Usage @{Add="US and Canada"}
Set-CsOnlineVoiceRoute -Identity "LocalRoute" -NumberPattern ".*" -OnlinePstnGatewayList sbc.example.com
New-CsOnlineVoiceRoutingPolicy "US Only" -OnlinePstnUsages "US and Canada"
# This is supposed to stop MS Teams from using the Microsoft Dialing Plan and use the routing policies that were defined above
Set-CsTenantHybridConfiguration -UseOnPremDialPlan $False
# Apply the US Only Voice Routing Policy to the user
Grant-CsOnlineVoiceRoutingPolicy -Identity "user@example.com" -PolicyName "US Only"
# If it doesn't return a value of US Only, then wait 15 minutes and try it again. It sometimes takes a while for the policy to be ready.
Get-CsOnlineUser "user@example.com" | select OnlineVoiceRoutingPolicy
# Define an outgoing phone number (aka DID) and set Enterprise Voice and Voicemail
Set-CsUser -Identity "user@example.com" -OnPremLineURI tel:+13137175555 -EnterpriseVoiceEnabled $true -HostedVoiceMail $true
Note: Log out by typing Disconnect-MicrosoftTeams
Credits to Mack at dSIPRouter for the SkypeForBusiness script and `this blog post <https://seanmcavinue.net/2021/04/20/configure-teams-direct-routing-simple-deployment-via-powershell/>`_ for helping me update these commands for the new MicrosoftTeams PowerShell module.
-----------------------
Add a single Teams User
-----------------------
If you have an existing dSIPRouter SBC configured in Teams and have added a DID as an inbound route already, then run the commands below in PowerShell to add an additional user.
**Replace user@example.com and +13137175555** with the user's email address and their phone number (with + then country code; use +1 if you are in the North American Numbering Plan)
.. code-block:: powershell
# Get Credentials, if using MFA/2FA just run Connect-MicrosoftTeams
$userCredential = Get-Credential
Connect-MicrosoftTeams -Credential $userCredential
# Apply the US Only Voice Routing Policy to the user
Grant-CsOnlineVoiceRoutingPolicy -Identity "user@example.com" -PolicyName "US Only"
# Define an outgoing phone number (aka DID) and set Enterprise Voice and Voicemail
Set-CsUser -Identity "user@example.com" -OnPremLineURI tel:+13137175555 -EnterpriseVoiceEnabled $true -HostedVoiceMail $true
Note: Log out by typing Disconnect-MicrosoftTeams
:tocdepth: 3
policy/tuning/defaults/warnings.zeek
====================================
This file is meant to print messages on stdout for settings that would be
good to set in most cases or other things that could be done to achieve
better detection.
:Imports: :doc:`base/utils/site.zeek </scripts/base/utils/site.zeek>`
Summary
~~~~~~~
Detailed Interface
~~~~~~~~~~~~~~~~~~
ahbicht package
===============
Subpackages
-----------
.. toctree::
:maxdepth: 4
ahbicht.content_evaluation
ahbicht.expressions
ahbicht.json_serialization
Submodules
----------
ahbicht.condition\_node\_builder module
---------------------------------------
.. automodule:: ahbicht.condition_node_builder
:members:
:undoc-members:
:show-inheritance:
ahbicht.condition\_node\_distinction module
-------------------------------------------
.. automodule:: ahbicht.condition_node_distinction
:members:
:undoc-members:
:show-inheritance:
ahbicht.edifact module
----------------------
.. automodule:: ahbicht.edifact
:members:
:undoc-members:
:show-inheritance:
ahbicht.evaluation\_results module
----------------------------------
.. automodule:: ahbicht.evaluation_results
:members:
:undoc-members:
:show-inheritance:
ahbicht.mapping\_results module
-------------------------------
.. automodule:: ahbicht.mapping_results
:members:
:undoc-members:
:show-inheritance:
ahbicht.utility\_functions module
---------------------------------
.. automodule:: ahbicht.utility_functions
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: ahbicht
:members:
:undoc-members:
:show-inheritance:
A simple extension to configparser that allows nested fallback configurations
By default, configparser only loads values from 'vars', 'section', and then 'default_section'. This extension allows for nested sections by use of a section splitter (default '.') and attempts to find values from 'vars', then 'section', then its logical parents, and finally 'default_section'.
For example, given the configuration below::
[DEFAULT]
alpha=first level
[section]
beta=second level
[section.subsection]
gamma=third level
the default configparser would behave like::
>>> settings = configparser.ConfigParser()
>>> settings.read('config.ini')
['config.ini']
>>> settings.get('section.subsection', 'alpha')
first level
>>> settings.get('section.subsection', 'beta')
None
>>> settings.get('section.subsection', 'gamma')
third level
Instead, in this extension, the behaviour would be::
>>> settings = nestedconfigparser.NestedConfigParser()
>>> settings.read('config.ini')
['config.ini']
>>> settings.get('section.subsection', 'alpha')
first level
>>> settings.get('section.subsection', 'beta')
second level
>>> settings.get('section.subsection', 'gamma')
third level
This extension supports theoretically unlimited levels of nesting. It also does not require each level of the subsection to exist, for the case where a section has no additional configurations.
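The fallback walk can be emulated with the standard library alone. The sketch below is not this package's implementation — just a minimal illustration of trying the deepest section first, then each dotted parent, then the default section:

```python
import configparser

SAMPLE = """\
[DEFAULT]
alpha = first level

[section]
beta = second level

[section.subsection]
gamma = third level
"""

def nested_get(cp, section, option, sep="."):
    """Try `section`, then each dotted parent, then the default section."""
    parts = section.split(sep)
    while parts:
        candidate = sep.join(parts)
        if cp.has_section(candidate) and cp.has_option(candidate, option):
            return cp.get(candidate, option)
        parts.pop()
    # Final fallback: the default section only.
    return cp.defaults().get(option)

cp = configparser.ConfigParser()
cp.read_string(SAMPLE)
print(nested_get(cp, "section.subsection", "beta"))  # second level
```

Note this sketch walks sections with plain string splitting, whereas the package integrates the same idea directly into the parser's `get`.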
Note: this extension intentionally does not raise a NoSectionError if a section does not exist when using 'nestedconfigparser.NestedConfigParser().get(section, option)'. This is because it will attempt to fall back to higher sections, which avoids failing on potentially empty sections that don't have any added configurations at the deepest subsection.
========
Gentoo
========
Use a Gentoo cloud image as the baseline for built disk images. The images are
located in profile-specific subdirectories:
http://distfiles.gentoo.org/releases/amd64/autobuilds/
As of this writing, only x86_64 images are available.
Notes:
* There are very frequently new automated builds that include changes that
happen during the product maintenance. The download directories contain an
unversioned name and a versioned name. The unversioned name will always
point to the latest image, but will frequently change its content. The versioned
one will never change content, but will frequently be deleted and replaced
by a newer build with a higher version-release number.
* Other profiles can be used by exporting GENTOO_PROFILE with a valid profile.
A list of valid profiles follows:
default/linux/amd64/13.0
default/linux/amd64/13.0/no-multilib
hardened/linux/amd64
hardened/linux/amd64/no-multilib
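For example, a build against an alternate profile could be invoked roughly like this sketch (``gentoo-image`` is an arbitrary output name chosen for illustration):

.. code-block:: bash

   export GENTOO_PROFILE=hardened/linux/amd64/no-multilib
   disk-image-create -o gentoo-image gentoo vm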
===============
DisplayCriteria
===============
Display criteria uses one main interface:
- `Imatic\\Bundle\\DataBundle\\Data\\Query\\DisplayCriteria\\DisplayCriteriaInterface </Data/Query/DisplayCriteria/DisplayCriteriaInterface.php>`_
- used to alter query objects by
- using pagination
- applying filters
- applying sorters
- default implementation is `DisplayCriteria <display_criteria_h_>`_
.. _display_criteria_h:
`DisplayCriteria </Data/Query/DisplayCriteria.php>`__
-----------------------------------------------------
- display criteria is an object which allows us to use filtering, sorting and pagination when using one of the 3 methods of the `query executor <QueryObjects.rst>`_
- can be created either manually or by `DisplayCriteriaFactory <display_criteria_factory_h_>`_ from request
- it has 3 arguments
- ``pager``
- used for pagination
- object implementing `Imatic\\Bundle\\DataBundle\\Data\\Query\\DisplayCriteria\\PagerInterface </Data/Query/DisplayCriteria/PagerInterface.php>`_
- more info in `Pager <Pagination.rst>`_ documentation
- ``sorter``
- used for specifying sorting
- object implementing `Imatic\\Bundle\\DataBundle\\Data\\Query\\DisplayCriteria\\SorterInterface </Data/Query/DisplayCriteria/SorterInterface.php>`_
- more info in `Sorter <Sorting.rst>`_ documentation
- ``filter``
- used for specifying filtering
- object implementing `Imatic\\Bundle\\DataBundle\\Data\\Query\\DisplayCriteria\\FilterInterface </Data/Query/DisplayCriteria/FilterInterface.php>`_
- more info in `Filter <Filtering.rst>`_ documentation
.. _display_criteria_factory_h:
`DisplayCriteriaFactory </Data/Query/DisplayCriteria/DisplayCriteriaFactory.php>`_
----------------------------------------------------------------------------------
- creates `DisplayCriteria <display_criteria_h_>`_ from the request (or from another source, using a `DisplayCriteriaReader <display_criteria_reader_h_>`_)
- service with ``Imatic\Bundle\DataBundle\Data\Query\DisplayCriteria\DisplayCriteriaFactory`` id
- it has 1 main method ``createCriteria`` with 2 arguments
- ``options``
- associative array
- ``componentId`` - id to distinguish between several components on a page
- ``pager``
- optional array with optional keys (values are used only as default values in case request doesn't contain any)
- ``page`` - current page
- ``limit`` - maximum items per page
- ``filter``
- filter with filter rules describing fields - which and how can be filtered
- ``sorter``
- optional array of default sorters
- ``persistent``
- boolean if current filter/sorter/pager values should be persisted (so if user opens page next time without
specifying any criteria, he will see the last used)
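As a sketch, creating criteria from the current request might look like the following; the component id and pager values here are invented for illustration, and the ``filter``/``sorter`` option shapes are omitted:

.. code-block:: php

   <?php
   // Hypothetical usage sketch; "user_list" and the pager defaults are made up.
   $criteria = $displayCriteriaFactory->createCriteria([
       'componentId' => 'user_list',            // distinguishes components on one page
       'pager' => ['page' => 1, 'limit' => 25], // defaults used when the request has none
       'persistent' => true,                    // remember the last used criteria
   ]);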
.. _display_criteria_reader_h:
`DisplayCriteriaReader </Data/Query/DisplayCriteria/Reader/DisplayCriteriaReader.php>`_
---------------------------------------------------------------------------------------
- reader used by `DisplayCriteriaFactory <display_criteria_factory_h_>`_ to create filter/pager/sorter from user input
- bundle ships with 2 main implementations
- `RequestQueryReader </Data/Query/DisplayCriteria/Reader/RequestQueryReader.php>`_ (default one)
- used to read data from request
- data are stored in url
- format
- ``filter``
- associative array
- ``clearFilter``
- optional boolean which causes all filter values to be cleared
- ``defaultFilter``
- optional boolean which causes setting all filter values to their defaults
- other keys are programmer-defined filters (associative arrays) where each item is indexed by filter name
and its value is an associative array with the following keys:
- ``operator``
- user selected operator of ``FilterRule``
- ``value``
- user selected value of the filter
- ``sorter``
- associative array with sorter as key and direction as a value
- ``page``
- current page (used for pagination)
- ``limit``
- maximum records per page
- `ExtJsReader </Data/Query/DisplayCriteria/Reader/ExtJsReader.php>`_
- used to read data from the request in the format `ExtJs <https://www.sencha.com/products/extjs/#overview>`_ uses by default
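Putting the ``RequestQueryReader`` format together, a request URL might look like the following sketch; the ``equal`` operator name and the exact parameter nesting are assumptions inferred from the key list above:

.. code-block:: text

   /users?filter[name][operator]=equal&filter[name][value]=John&sorter[name]=ASC&page=2&limit=25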
NSQ Message
-----------
.. autoclass:: gnsq.Message
:members:
:inherited-members:
cloudshell.tests package
========================
.. automodule:: cloudshell.tests
:members:
:undoc-members:
:show-inheritance:
Subpackages
-----------
.. toctree::
cloudshell.tests.test_commands
cloudshell.tests.test_common
cloudshell.tests.test_models
cloudshell.tests.test_network
cloudshell.tests.test_vm
cloudshell.tests.tests_pycommon
cloudshell.tests.utils
Submodules
----------
cloudshell.tests.tests module
-----------------------------
.. automodule:: cloudshell.tests.tests
:members:
:undoc-members:
:show-inheritance:
Model outputs
=============
.. toctree::
:maxdepth: 2
rxn_net
contact_map
influence_map
simulation
ArrayModelBinder<TElement> Class
================================
.. contents::
:local:
Summary
-------
:any:`Microsoft.AspNet.Mvc.ModelBinding.IModelBinder` implementation for binding array values.
Inheritance Hierarchy
---------------------
* :dn:cls:`System.Object`
* :dn:cls:`Microsoft.AspNet.Mvc.ModelBinding.CollectionModelBinder{{TElement}}`
* :dn:cls:`Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder\<TElement>`
Syntax
------
.. code-block:: csharp
public class ArrayModelBinder<TElement> : CollectionModelBinder<TElement>, ICollectionModelBinder, IModelBinder
GitHub
------
`View on GitHub <https://github.com/aspnet/mvc/blob/master/src/Microsoft.AspNet.Mvc.Core/ModelBinding/ArrayModelBinder.cs>`_
.. dn:class:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>
Methods
-------
.. dn:class:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>
:noindex:
:hidden:
.. dn:method:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>.BindModelAsync(Microsoft.AspNet.Mvc.ModelBinding.ModelBindingContext)
:type bindingContext: Microsoft.AspNet.Mvc.ModelBinding.ModelBindingContext
:rtype: System.Threading.Tasks.Task{Microsoft.AspNet.Mvc.ModelBinding.ModelBindingResult}
.. code-block:: csharp
public override Task<ModelBindingResult> BindModelAsync(ModelBindingContext bindingContext)
.. dn:method:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>.CanCreateInstance(System.Type)
:type targetType: System.Type
:rtype: System.Boolean
.. code-block:: csharp
public override bool CanCreateInstance(Type targetType)
.. dn:method:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>.ConvertToCollectionType(System.Type, System.Collections.Generic.IEnumerable<TElement>)
:type targetType: System.Type
:type collection: System.Collections.Generic.IEnumerable{{TElement}}
:rtype: System.Object
.. code-block:: csharp
protected override object ConvertToCollectionType(Type targetType, IEnumerable<TElement> collection)
.. dn:method:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>.CopyToModel(System.Object, System.Collections.Generic.IEnumerable<TElement>)
:type target: System.Object
:type sourceCollection: System.Collections.Generic.IEnumerable{{TElement}}
.. code-block:: csharp
protected override void CopyToModel(object target, IEnumerable<TElement> sourceCollection)
.. dn:method:: Microsoft.AspNet.Mvc.ModelBinding.ArrayModelBinder<TElement>.CreateEmptyCollection(System.Type)
:type targetType: System.Type
:rtype: System.Object
.. code-block:: csharp
protected override object CreateEmptyCollection(Type targetType)
.. _NEP46:
=====================================
NEP 46 — NumPy sponsorship guidelines
=====================================
:Author: Ralf Gommers <ralf.gommers@gmail.com>
:Status: Accepted
:Type: Process
:Created: 2020-12-27
:Resolution: https://mail.python.org/pipermail/numpy-discussion/2021-January/081424.html
Abstract
--------
This NEP provides guidelines on how the NumPy project will acknowledge
financial and in-kind support.
Motivation and Scope
--------------------
In the past few years, the NumPy project has gotten significant financial
support, as well as dedicated work time for maintainers to work on NumPy. There
is a need to acknowledge that support - it's the right thing to do, it's
helpful when looking for new funding, and funders and organizations expect or
require it, Furthermore, having a clear policy for how NumPy acknowledges
support is helpful when searching for new support. Finally, this policy may
help set reasonable expectations for potential funders.
This NEP is aimed at both the NumPy community - who can use it as a guideline
when looking for support on behalf of the project and when acknowledging
existing support - and at past, current and prospective sponsors, who often
want or need to know what they get in return for their support other than a
healthier NumPy.
The scope of this proposal includes:
- direct financial support, employers providing paid time for NumPy maintainers
and regular contributors, and in-kind support such as free hardware resources or
services,
- where and how NumPy acknowledges support (e.g., logo placement on the website),
- the amount and duration of support which leads to acknowledgement, and
- who in the NumPy project is responsible for sponsorship related topics, and
how to contact them.
How NumPy will acknowledge support
----------------------------------
There will be two different ways to acknowledge financial and in-kind support:
one to recognize significant active support, and another one to recognize
support received in the past and smaller amounts of support.
Entities who fall under "significant active supporter" we'll call Sponsor.
The minimum level of support given to NumPy to be considered a Sponsor are:
- $30,000/yr for unrestricted financial contributions (e.g., donations)
- $60,000/yr for financial contributions for a particular purpose (e.g., grants)
- $100,000/yr for in-kind contributions (e.g., time for employees to contribute)
We define support being active as:
- for a one-off donation: it was received within the previous 12 months,
- for recurring or financial or in-kind contributions: they should be ongoing.
After support moves from "active" to "inactive" status, the acknowledgement
will be left in its place for at least another 6 months. If appropriate, the
funding team can discuss opportunities for renewal with the sponsor. After
those 6 months, acknowledgement may be moved to the historical overview. The
exact timing of this move is at the discretion of the funding team, because
there may be reasons to keep it in the more prominent place for longer.
The rationale for the above funding levels is that unrestricted financial
contributions are typically the most valuable for the project, and the hardest
to obtain. The opposite is true for in-kind contributions. The dollar value of
the levels also reflect that NumPy's needs have grown to the point where we
need multiple paid developers in order to effectively support our user base and
continue to move the project forward. Financial support at or above these
levels is needed to be able to make a significant difference.
Sponsors will get acknowledged through:
- a small logo displayed on the front page of the NumPy website
- prominent logo placement on https://numpy.org/about/
- logos displayed in talks about NumPy by maintainers
- announcements of the sponsorship on the NumPy mailing list and the numpy-team
Twitter account
In addition to Sponsors, we already have the concept of Institutional Partner
(defined in NumPy's
`governance document <https://numpy.org/devdocs/dev/governance/index.html>`__),
for entities who employ a NumPy maintainer and let them work on NumPy as part
of their official duties. The governance document doesn't currently define a
minimum amount of paid maintainer time needed to be considered for partnership.
Therefore we propose that level here, roughly in line with the sponsorship
levels:
- 6 person-months/yr of paid work time for one or more NumPy maintainers or
regular contributors to any NumPy team or activity
Institutional Partners get the same benefits as Sponsors, in addition to what
is specified in the NumPy governance document.
Finally, a new page on the website (https://numpy.org/funding/, linked from the
About page) will be added to acknowledge all current and previous sponsors,
partners, and any other entities and individuals who provided $5,000 or more of
financial or in-kind support. This page will include relevant details of
support (dates, amounts, names, and purpose); no logos will be used on this
page. The rationale for the $5,000 minimum level is to keep the amount of work
maintaining the page reasonable; the level is the equivalent of, e.g., one GSoC
or a person-week's worth of engineering time in a Western country, which seems
like a reasonable lower limit.
Implementation
--------------
The following content changes need to be made:
- Add a section with small logos towards the bottom of the `numpy.org
<https://numpy.org/>`__ website.
- Create a full list of historical and current support and deploy it to
https://numpy.org/funding.
- Update the NumPy governance document for changes to Institutional Partner
eligibility requirements and benefits.
- Update https://numpy.org/about with details on how to get in touch with the
NumPy project about sponsorship related matters (see next section).
NumPy Funding Team
~~~~~~~~~~~~~~~~~~
At the moment NumPy has only one official body, the Steering Council, and no
good way to get in touch with either that body or any person or group
responsible for funding and sponsorship related matters. The way this is
typically done now is to somehow find the personal email of a maintainer, and
email them in private. There is a need to organize this more transparently - a
potential sponsor isn't likely to inquire through the mailing list, nor is it
easy for a potential sponsor to know if they're reaching out to the right
person in private.
https://numpy.org/about/ already says that NumPy has a "funding and grants"
team. However that is not the case. We propose to organize this team, name team
members on it, and add the names of those team members plus a dedicated email
address for the team to the About page.
Status before this proposal
---------------------------
Acknowledgement of support
~~~~~~~~~~~~~~~~~~~~~~~~~~
At the time of writing (Dec 2020), the logos of the four largest financial
sponsors and two institutional partners are displayed on
https://numpy.org/about/. The `Nature paper about NumPy <https://www.nature.com/articles/s41586-020-2649-2>`__
mentions some early funding. No comprehensive list of received funding and
in-kind support is published anywhere.
Decisions on which logos to list on the website have been made mostly by the
website team. Decisions on which entities to recognize as Institutional Partner
have been made by the NumPy Steering Council.
NumPy governance, decision-making, and financial oversight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*This section is meant as context for the reader, to help put the rest of this
NEP in perspective, and perhaps answer questions the reader has when reading
this as a potential sponsor.*
NumPy has a formal governance structure defined in
`this governance document <https://numpy.org/devdocs/dev/governance/index.html>`__.
Decisions are made by consensus among all active participants in a discussion
(typically on the mailing list), and if consensus cannot be reached then the
Steering Council takes the decision (also by consensus).
NumPy is a sponsored project of NumFOCUS, a US-based 501(c)3 nonprofit.
NumFOCUS administers NumPy funds, and ensures they are spent in accordance with
its mission and nonprofit status. In practice, NumPy has a NumFOCUS
subcommittee (with its members named in the NumPy governance document) who can
authorize financial transactions. Those transactions, for example paying a
contractor for a particular activity or deliverable, are decided on by the
NumPy Steering Council.
Alternatives
------------
*Tiered sponsorship levels.* We considered using tiered sponsorship levels, and
rejected this alternative because it would be more complex, and not necessarily
communicate the right intent - the minimum levels are for us to determine how
to acknowledge support that we receive, not a commercial value proposition.
Entities typically will support NumPy because they rely on the project or want
to help advance it, and not to get brand awareness through logo placement.
*Listing all donations*. Note that in the past we have received many smaller
donations, mostly from individuals through NumFOCUS. It would be great to list
all of those contributions, but given the way we receive information on those
donations right now, that would be quite labor-intensive. If we manage to move
to a more suitable platform, such as `Open Collective <https://opencollective.com/>`__,
in the future, we should reconsider listing all individual donations.
Related Work
------------
Here we provide a few examples of how other projects handle sponsorship
guidelines and acknowledgements.
*Scikit-learn* has a narrow banner with logos at the bottom of
https://scikit-learn.org, and a list of present funding and past sponsors at
https://scikit-learn.org/stable/about.html#funding. Plus a separate section
"Infrastructure support" at the bottom of that same About page.
*Jupyter* has logos of sponsors and institutional partners in two sections on
https://jupyter.org/about. Some subprojects have separate approaches, for
example sponsors are listed (by using the `all-contributors
<https://github.com/all-contributors/all-contributors>`__ bot) in the README for
`jupyterlab-git <https://github.com/jupyterlab/jupyterlab-git>`__.
For a discussion from Jan 2020 on that, see
`here <https://discourse.jupyter.org/t/ideas-for-recognizing-developer-contributions-by-companies-institutes/3178>`_.
*NumFOCUS* has a large banner with sponsor logos on its front page at
https://numfocus.org, and a full page with sponsors at different sponsorship
levels listed at https://numfocus.org/sponsors. They also have a
`Corporate Sponsorship Prospectus <https://numfocus.org/blog/introducing-our-newest-corporate-sponsorship-prospectus>`__,
which includes a lot of detail on both sponsorship levels and benefits, as well
as how that helps NumFOCUS-affiliated projects (including NumPy).
Discussion
----------
- `Mailing list thread discussing this NEP <https://mail.python.org/pipermail/numpy-discussion/2020-December/081353.html>`__
- `PR with review of the NEP draft <https://github.com/numpy/numpy/pull/18084>`__
References and Footnotes
------------------------
- `Inside NumPy: preparing for the next decade <https://github.com/numpy/archive/blob/main/content/inside_numpy_presentation_SciPy2019.pdf>`__ presentation at SciPy'19 discussing the impact of the first NumPy grant.
- `Issue <https://github.com/numpy/numpy/issues/13393>`__ and
`email <https://mail.python.org/pipermail/numpy-discussion/2019-April/079371.html>`__
where IBM offered a $5,000 bounty for VSX SIMD support
- `JupyterLab Corporate Engagement and Contribution Guide <https://github.com/jupyterlab/jupyterlab/blob/master/CORPORATE.md>`__
.. _jupyterlab-git acknowledgements discussion: https://github.com/jupyterlab/jupyterlab-git/pull/530
Copyright
---------
This document has been placed in the public domain.
| 46.422481 | 215 | 0.775236 |
25d2219bd838d0de6960ea433f006a3667a15c28 | 537 | rst | reStructuredText | docs/source/install.rst | thewaverguy/wacy | 4a2ad247fad3b4af281392053d4ac50a67550a01 | [
"Apache-2.0"
] | 5 | 2021-03-11T17:41:10.000Z | 2021-03-23T09:36:27.000Z | docs/source/install.rst | thewaverguy/wacy | 4a2ad247fad3b4af281392053d4ac50a67550a01 | [
"Apache-2.0"
] | null | null | null | docs/source/install.rst | thewaverguy/wacy | 4a2ad247fad3b4af281392053d4ac50a67550a01 | [
"Apache-2.0"
] | null | null | null | Installation
============
**Python 3.6+ is required**
To install the stable version from PyPI (recommended):
.. code-block:: bash
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip setuptools wheel
$ pip install wacy
To install the development version:
.. code-block:: bash
$ git clone https://github.com/thewaverguy/wacy
$ cd wacy
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip setuptools wheel
$ python3 -m pip install -r requirements.txt
Welcome to marker's documentation!
==================================
Marker is a highly configurable code-testing utility to automate downloading and
testing code, as well as returning detailed reports to the students. Currently it
supports the following platforms:
- Canvas
- MarkUs
Who is this for?
----------------
This tool is aimed to primarily help instructors of university computer science
courses who want to automate code tests for classes. It can also be useful to
simply fetch student submissions from the learning platforms or to upload marks
or report files that were generated through any other means.
-----
For installation and usage, please look at `Getting Started`.
.. toctree::
:maxdepth: 1
:caption: Getting Started:
getting_started/installation
getting_started/access_tokens
getting_started/system_overview
getting_started/examples
getting_started/changelog
.. toctree::
:maxdepth: 1
:caption: General:
general/configuration
general/results
general/reports
.. toctree::
:maxdepth: 1
:caption: Commands:
commands/download.rst
commands/prepare.rst
commands/run.rst
commands/upload_marks.rst
commands/upload_reports.rst
commands/delete_reports.rst
commands/set_status.rst
commands/stats.rst
.. Hammer documentation master file, created by
sphinx-quickstart on Wed Sep 25 13:36:12 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Hammer's documentation!
==================================
Hammer is a physical design framework that wraps around vendor specific technologies and tools to provide a single API to create ASICs. Hammer allows for reusability in ASIC design while still providing the designers leeway to make their own modifications.
.. include:: Hammer-Intro.rst
.. toctree::
   :maxdepth: 3
   :caption: Contents:
   :numbered:

   Hammer-Basics/index

.. toctree::
   :maxdepth: 3
   :caption: Technology:
   :numbered:

   Technology/index

.. toctree::
   :maxdepth: 3
   :caption: CAD Tools:
   :numbered:

   CAD-Tools/index

.. toctree::
   :maxdepth: 3
   :caption: Hammer Flow:
   :numbered:

   Hammer-Flow/index

.. toctree::
   :maxdepth: 3
   :caption: Hammer Use:
   :numbered:

   Hammer-Use/index

.. toctree::
   :maxdepth: 3
   :caption: Examples:
   :numbered:

   Examples/index
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. :changelog:
Release History
===============
0.2.2
+++++
* Fix for https://github.com/Azure/azure-cli-extensions/issues/3019
* Switched from VM PUT to VM PATCH REST call when adding a VM identity
Security incident management process
====================================
The main objectives of the process for managing the information security
incidents that occur within the logical domain of the public administration
(PA) are:

- minimise the impact of malicious events;

- promptly identify and implement suitable countermeasures and containment
  measures;

- identify and carry out all recovery activities following an incident;

- collect data to produce statistics that feed the proactive analysis
  process, aimed at detecting suspicious events or behavioural patterns
  repeated over time;

- through the feedback received, increase awareness of information security
  topics within the PA and raise the security level of the technological
  infrastructures under management.
Given the possible interconnection of the infrastructures belonging to the PA
domain, these objectives can be achieved efficiently and effectively only
through a synergy among the various parties that contribute to its security.
The actors and structures involved (first and foremost the local public
administrations, or PAL, the Regional CERTs and CERT-PA) must therefore play
their role with a proactive attitude both during prevention and during
incident management.

In order to ensure an effective model of interaction and cooperation among all
the actors involved in the incident management process, it is desirable that
each PAL identify a person responsible for managing the activities belonging
to the cyber security domain (*Titolare* or *Referente Nominato della
Sicurezza Informatica*, i.e. the owner or appointed information security
contact), who can oversee the security management processes at the local level
and act as the point of contact with the Regional CERT for the cyber security
services acquired. Where such a figure has been designated within the
administration, this role coincides with the Referente Nominato della
Sicurezza Informatica.
Definitions
-----------

The main definitions and conventions adopted within the security incident
management process are given below.

Security event
~~~~~~~~~~~~~~
A security event is any situation occurring within a given IT asset, however
detected, whose relevance is considered significant for the purposes of
security management and control and of containing the related risks.

Security events are classified according to the following hierarchical scale
of importance:

- *low-impact event* (non-significant): any event handled silently by the
  security system of the affected PAL and requiring no ad hoc treatment. It
  is usually used for statistical purposes;

- *significant event*: any event detected within the ICT systems and
  infrastructures that must be analysed by the personnel in charge of
  monitoring activities (such as, for example, real-time detection of the
  alerts coming from security devices, analysis of the logs produced by those
  devices, collection and assessment of reports about anomalous behaviour or
  suspicious events) and of handling security alarms;

- *critical event*: any significant event that, according to the analyses
  carried out by the personnel in charge, could imply, directly or
  indirectly, a violation of the security policies applied to the ICT systems
  and infrastructures.

Significant and critical security events are further classified as *alarms*,
*incidents* and *incidents of systemic relevance*, in accordance with the
definitions given below. The full incident management process, divided into
the **detection**, **analysis**, **handling** and **recovery** phases,
applies to this type of event.
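The classification above can be sketched as a small triage routine. This is an
illustrative sketch only: the type and function names are assumptions of this
example, not part of the guidelines.

```python
from dataclasses import dataclass
from enum import IntEnum


class EventLevel(IntEnum):
    """Hierarchical importance scale for security events."""
    LOW_IMPACT = 0    # handled silently, kept for statistics only
    SIGNIFICANT = 1   # must be analysed by the monitoring personnel
    CRITICAL = 2      # may imply a violation of the security policies


@dataclass
class SecurityEvent:
    description: str
    level: EventLevel
    caused_damage: bool = False  # the distinctive element of an incident
    systemic: bool = False       # disrupts services of other, correlated PAs


def triage(event: SecurityEvent) -> str:
    """Map an event onto the categories defined in this section."""
    if event.level is EventLevel.LOW_IMPACT:
        return "statistics-only"
    if not event.caused_damage:
        return "alarm"
    return "incident of systemic relevance" if event.systemic else "incident"
```

The key decision point mirrors the text: damage turns an alarm into an
incident, and impact on other correlated PAs gives it systemic relevance.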
Security alarm or attempted attack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A security alarm, or attempted attack, is a notification resulting from the
detection of one or more events that constitute an ascertained criticality
for ICT security, measured against a predefined criticality scale. Security
alarms cause no damage and are associated with activities that do not in
themselves pose a direct threat to the information assets, or with anomalous
behaviour by users and applications that requires no specific containment
action beyond monitoring to prevent or contain possible subsequent attacks,
but which must nevertheless be recorded in order to collect data for
statistical and assessment purposes.

Most of these events consist of *hostile acts*, i.e. actions that attempt to
compromise some aspect of the services or technological systems but are
effectively repelled by the countermeasures in place, and *attack
precursors*, i.e. non-intrusive "information gathering" activities that may
be the prelude to a subsequent actual attack.

A non-exhaustive list of possible alarms is the following:

- enumeration actions (e.g. ping-sweep alarms) against the active nodes of a
  subnet;

- web crawling of a web service performed with automated tools;

- scans for open TCP/UDP ports (host and port scans) performed only once or,
  in any case, against destinations that are not particularly sensitive;

- failed logins by users that, while causing the lock-out of a single
  account, do not indicate specific illicit activity aimed at denial of
  service or at unauthorised access to the systems;

- fingerprinting of the operating system and of the applications installed on
  the target system;

- policy-violation alarms, triggered by the presence of unauthorised software
  on client systems (instant messaging, file sharing, P2P, etc.).
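As a toy illustration of how raw events might be aggregated into one such
alarm, the sketch below flags sources that probed many distinct ports; the
threshold and the data shape are assumptions of this example, not values taken
from the guidelines.

```python
def port_scan_alarms(events, threshold=10):
    """Return the sources that touched at least `threshold` distinct ports.

    `events` is an iterable of (source, destination_port) pairs, e.g. as
    extracted from firewall or IDS logs.
    """
    ports_by_source = {}
    for source, port in events:
        ports_by_source.setdefault(source, set()).add(port)
    # A wide spread of probed ports is a typical attack precursor.
    return sorted(src for src, ports in ports_by_source.items()
                  if len(ports) >= threshold)
```

Per the definitions above, such a hit is recorded as an alarm (no damage yet),
not as an incident.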
Information security incident
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An information security incident is any event, or set of events, implying a
violation of the ICT security policies that causes damage to the ICT assets
or to the organisation's information assets, and that requires the designated
structures to apply countermeasures and/or containment measures. It follows
from this definition that the distinctive element of a security incident is
the cause-and-effect link between the detected event and the damage suffered
by the ICT assets.

In other words, a security incident is a particular type of alarm whose
events imply confirmed evidence of damage, already suffered at the time of
detection and notification. A non-exhaustive list of possible incidents for a
PAL is the following:

- unauthorised access to the ICT assets;

- unauthorised disclosure of confidential information originating from the
  ICT assets;

- impersonation of users through the compromise of personal authentication
  credentials;

- loss of, or changes to, system configurations;

- degradation of the standard service levels;

- interruption of ICT services;

- detection of unlawful or criminal actions carried out with the aid of a
  PAL's ICT resources against that same PAL or third parties.
PAL information security incident of systemic relevance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The systemic correlation among the various PAs, in terms of IT services and
connectivity, increases the possibility that harmful events occurring in one
body spread to other bodies related to it. An incident affecting a PAL may
acquire systemic relevance when it causes disruptions and/or alterations (in
terms of confidentiality, integrity or availability) to other services
delivered by other PAs. In the event of an incident of systemic relevance for
the PA, all the PAs involved in the systemic correlation chain must be
alerted and coordinated, i.e. all the PALs with characteristics similar to
those of the PAL primarily affected, both in terms of services and of
technologies. A non-exhaustive list of possible incidents of systemic
relevance for a PAL is the following:

- a targeted attack, with a high impact on the confidentiality, integrity and
  availability of ICT services, specifically designed for particular
  technology platforms (operating systems, middleware, infrastructure
  applications, etc.) present in several PALs;

- the interruption of an ICT service that causes an unacceptable performance
  degradation (or, in turn, an interruption) of other services delivered by
  other systemically correlated PALs;

- loss, alteration or uncontrolled disclosure of data such as to cause damage
  or service alterations for other PALs.

In such cases, an escalation process towards CERT-PA will need to be
activated so that the security incident can be taken over and resolved.
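The escalation path can be summarised as follows. The role names come from the
text above, while the ordering of the recipients is an assumption of this
sketch.

```python
def notification_chain(systemic: bool) -> list:
    """Who gets involved once an incident is confirmed at a PAL."""
    chain = [
        "Referente Nominato della Sicurezza Informatica",  # local contact
        "Regional CERT",                                   # acquired services
    ]
    if systemic:
        # Incidents of systemic relevance escalate to the national CERT.
        chain.append("CERT-PA")
    return chain
```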
Incident criticality
~~~~~~~~~~~~~~~~~~~~

The criticality of a security incident is defined as the qualitative measure
of its severity, in terms of the following five impact scenarios:

- *People*: impact on the health and physical safety of citizens;

- *Economy*: economic impact caused by the incident;

- *PAL services*: number and type of critical services affected by the
  incident;

- *Image*: visibility of the incident (or reputational damage);

- *Social*: social impacts caused by the incident.

The criticality of the incident is expressed on an ordinal scale with four
values, or impact levels (Level 0 to Level 3), according to the following
assessment metric (impact scale):

.. table:: Classification of security incidents
   :name: classificazione-incidenti-sicurezza

   +------------------------------------+----------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+
   | .. rst-class:: text-sans-serif p-3 | .. rst-class:: neutral-2-bg-a1 text-sans-serif p-3 | .. rst-class:: complementary-3-bg text-sans-serif p-3 | .. rst-class:: complementary-2-bg text-sans-serif p-3 | .. rst-class:: complementary-1-bg text-sans-serif p-3 |
   | Impact scenario                    | Level 0                                            | Level 1                                               | Level 2                                               | Level 3                                               |
   +====================================+====================================================+=======================================================+=======================================================+=======================================================+
   | **People**                         | No significant impact                              | Impact confined to the affected body                  | Limited impact, with possible involvement of other    | Limited impact, with involvement of other PALs or     |
   |                                    |                                                    |                                                       | PALs or private parties                               | private parties                                       |
   +------------------------------------+----------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+
   | **Economy**                        | No significant impact                              | Negligible economic impact, confined to the body      | Limited economic impact, with possible involvement    | Significant economic impact, or involvement of        |
   |                                    |                                                    |                                                       | of other PALs                                         | other PALs                                            |
   +------------------------------------+----------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+
   | **PAL services**                   | No significant impact                              | Impact limited to the body's internal services        | Limited impact, with possible involvement of other    | Limited impact, but with involvement of other PALs    |
   |                                    |                                                    |                                                       | PALs or private parties                               | or private parties                                    |
   +------------------------------------+----------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+
   | **Image**                          | No significant impact                              | Reputational damage confined to the body              | Reputational damage involving other PALs              | Reputational damage extending outside the PAL         |
   +------------------------------------+----------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+
   | **Social**                         | No significant impact                              | The incident causes unrest among the body's staff     | The incident causes unrest among staff of other       | The incident also causes unrest outside the PAL       |
   |                                    |                                                    |                                                       | PALs                                                  |                                                       |
   +------------------------------------+----------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------------+
As the table above shows, the impact level associated with a security event
is a qualitative measure of the damage caused by the event itself. In
general:

- Level 0 incidents are equivalent to non-significant security events;
- where the impacts are limited and remain confined within the local body,
  Level 1 is assigned;
- where the impacts are limited but could potentially also affect other PALs,
  or a limited number of private parties, Level 2 is assigned;
- where the impacts are significant (or systemic) and certainly affect other
  PALs, citizens or private parties, Level 3 is assigned, and the escalation
  process towards the CERT-PA is triggered at the same time.

When an incident involves several impact scenarios, its criticality level is
the maximum of all the individual values.
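The maximum rule above can be sketched in a few lines of code. This is a
minimal illustration only: the scenario keys and the function name are
assumptions made for this sketch, not part of the process definition.

```python
# Overall criticality of an incident spanning several impact scenarios:
# the maximum of the per-scenario levels (0-3). The scenario names are
# illustrative shorthand for the five scenarios defined in the text.
IMPACT_SCENARIOS = ("people", "economy", "pal_services", "image", "social")


def incident_criticality(impacts: dict) -> int:
    """Return the overall level (0-3) as the maximum over all scenarios."""
    levels = [impacts.get(scenario, 0) for scenario in IMPACT_SCENARIOS]
    if any(level not in (0, 1, 2, 3) for level in levels):
        raise ValueError("each impact level must be in the range 0-3")
    return max(levels)


# An incident with a minor economic impact (Level 1) but a systemic
# service impact (Level 3) is handled as a Level 3 incident overall.
assert incident_criticality({"economy": 1, "pal_services": 3}) == 3
```

Scenarios that are not declared default to Level 0, so an empty assessment
yields an overall Level 0 (non-significant) event.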

Incident handling priority
~~~~~~~~~~~~~~~~~~~~~~~~~~

The handling priority of an event and its management mode are assigned
according to the impact level, following the four-value (or four-colour) TLP
metric summarised in the following synoptic table:

.. table:: Definition of the priority levels
   :name: definizione-livelli-priorita

   +---------------+---------------------+--------------------------------------------------------------------------+-----------------+------------------+
   | Impact        | **Classification**  | **Priority**                                                             | **Handling      | **CERT Regionale |
   | scenario      |                     |                                                                          | mode**          | role**           |
   +===============+=====================+==========================================================================+=================+==================+
   | **Level 0**   | Alarm               | .. rst-class:: text-sans-serif neutral-2-color-b7 p-3                    | Local to the    | \-               |
   |               |                     | **Not relevant**                                                         | affected body   |                  |
   +---------------+---------------------+--------------------------------------------------------------------------+-----------------+------------------+
   | **Level 1**   | Incident            | .. rst-class:: complementary-3-bg text-sans-serif neutral-2-color-b7 p-3 | Local to the    | Informed         |
   |               |                     | **Informative**                                                          | affected body   |                  |
   +---------------+---------------------+--------------------------------------------------------------------------+-----------------+------------------+
   | **Level 2**   | Incident            | .. rst-class:: complementary-2-bg text-sans-serif neutral-2-color-b7 p-3 | Shared          | Handling support |
   |               |                     | **Attention**                                                            |                 | / coordination   |
   |               |                     |                                                                          |                 | for systemic PA  |
   |               |                     |                                                                          |                 | incidents        |
   +---------------+---------------------+--------------------------------------------------------------------------+-----------------+------------------+
   | **Level 3**   | Incident            | .. rst-class:: complementary-1-bg text-sans-serif neutral-2-color-b7 p-3 | Shared          | Handling support |
   |               |                     | **Critical**                                                             |                 | / coordination   |
   |               |                     |                                                                          |                 | and CERT-PA      |
   |               |                     |                                                                          |                 | involvement      |
   +---------------+---------------------+--------------------------------------------------------------------------+-----------------+------------------+
As indicated in the table above, the impact level of the alarm/incident
determines the management mode and the role played by the CERT Regionale in
the handling activities.
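The synoptic table amounts to a lookup from impact level to handling
attributes. The sketch below is one possible encoding; the field names and
labels are assumptions made for this illustration, not defined by the
process.

```python
# One possible encoding of the priority table: impact level -> handling
# attributes. A cert_role of None marks the only case (Level 0) in which
# the CERT Regionale plays no role at all.
PRIORITY_TABLE = {
    0: {"class": "Alarm",    "priority": "Not relevant", "handling": "local",  "cert_role": None},
    1: {"class": "Incident", "priority": "Informative",  "handling": "local",  "cert_role": "informed"},
    2: {"class": "Incident", "priority": "Attention",    "handling": "shared", "cert_role": "support/coordination"},
    3: {"class": "Incident", "priority": "Critical",     "handling": "shared", "cert_role": "support/coordination, CERT-PA involved"},
}


def handled_locally(level: int) -> bool:
    """Levels 0 and 1 are handled locally; Levels 2 and 3 are shared."""
    return PRIORITY_TABLE[level]["handling"] == "local"
```

Encoding the table this way makes the Level 1/Level 2 boundary (the point at
which handling stops being purely local) explicit and testable.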

Actors involved and responsibilities
------------------------------------

This section defines the actors involved in the incident management process
and their respective responsibilities.

Information Security Referent of the PAL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Security Referent is responsible for:

- continuously monitoring the security events coming from the managed
  security devices or from other sources;
- analysing and classifying the detected events;
- managing alarms, applying the internal procedures for countering and
  containing them;
- defining the intervention plan for incident treatment and submitting it to
  the CERT Regionale;
- liaising with the CERT Regionale, notifying it of any Level 1 incidents
  that have occurred;
- formally involving the CERT Regionale in the management of Level 2 and
  Level 3 incidents;
- evaluating the application of change-management procedures on the
  monitoring devices to remove false positives;
- coordinating the application of the treatment measures defined in the
  intervention plan, according to the internal incident treatment procedures;
- formally closing Level 0 and Level 1 incidents;
- internally closing Level 2 and Level 3 incidents, following their formal
  closure by the CERT Regionale.

CERT Regionale
~~~~~~~~~~~~~~

The CERT Regionale is responsible for:

- analysing and re-classifying the reported incidents;
- supporting the response activities for Level 2 and Level 3 incidents;
- integrating, where necessary, the intervention plan received from the PALs
  with strategic directives;
- coordinating with the CERT-PA for the management of Level 3 incidents;
- sending the CERT-PA the results of the analyses performed and the proposed
  management strategy for the treatment of Level 3 incidents;
- involving the affected PALs in the case of incidents of systemic relevance;
- coordinating the response to incidents of systemic relevance, or to Level 3
  incidents;
- formally closing Level 2 and Level 3 incidents;
- performing the post-incident analysis;
- monitoring the recovery actions;
- reporting on the incidents received for each Level and on the actions
  taken.

CERT-PA
~~~~~~~

The CERT-PA is responsible for:

- taking over the coordination of the operational management of Level 3
  incidents;
- defining the counter-strategy for Level 3 incidents, where appropriate
  verifying and approving what has been proposed by the CERT Regionale;
- coordinating with the CERT Regionale in the resolution of Level 3
  incidents.

Phases of the incident management process in the PALs
------------------------------------------------------

The incident management process is organised into the following activities:

- **monitoring**, carried out by the individual PALs, aimed at identifying
  automatically detected events and at receiving reports of anomalous or
  suspicious behaviour;
- **analysis and** initial **classification** of the detected security
  events, carried out by the PALs involved;
- activation of the **alarm management** or **incident management**
  sub-process, with or without the involvement of the CERT Regionale and the
  CERT-PA, depending on the classification level and on the possible systemic
  relevance of the incident;
- **monitoring of the recovery activities** and post-incident analysis,
  carried out by the CERT Regionale.

Figure 9.1 shows the cross-functional flow chart of the overall incident
management process in the PALs, triggered by the Security Referent upon the
arrival of a security report; as highlighted by the dashed lines, the
recovery and post-incident analysis process is activated at the end of the
incident management sub-processes defined for the various impact levels.

.. figure:: media/flusso-gestione-incidenti-pa.png
   :name: flusso-gestione-incidenti-pa

   Incident management flow in the PAs

Monitoring of security events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Responsibility for the continuous, round-the-clock (24 hours a day, 7 days a
week) monitoring of security incidents lies with the Security Referent of the
PAL, who coordinates with other internal or external operational structures
where necessary.

The incident management process in the PALs is triggered when the operators
responsible for monitoring:

- detect the reports produced in real time by the monitoring devices, which
  are tasked with tracing and reporting all ICT security events occurring
  within the domain of the controlled ICT assets;
- perform periodic analyses of the logs produced by the monitoring devices or
  by other managed security platforms, to detect suspicious patterns and/or
  repeated events that may reveal malicious activity;
- collect and assess verbal or written reports of anomalous behaviour or
  suspicious events coming from users or system administrators;
- collect and assess communications (in the form of alerts or security
  bulletins) from authoritative external sources (including the CERT-PA and
  the competent CERT Regionale) about new risk scenarios, anomalous behaviour
  or attacks in progress against third parties, which could affect the
  managed technology assets.

The output of the monitoring activities consists of the security events to be
submitted to the subsequent analysis and classification phase.

Analysis and classification
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The staff in charge analyse and classify the reported events, where
appropriate making use of diagnostic matrices and internal operating
procedures for the recognition and classification of security events.

The output of the analysis and classification activities consists of the
events, categorised according to the metric defined above. In particular:

- in the case of false positives, the events are recorded and the
  corresponding treatment is applied;
- in the case of security alarms (or Level 0 incidents), an alarm management
  record must be formally opened and the corresponding internal sub-process
  activated;
- in the case of a Level 1 security incident, an incident management record
  must be formally opened and the corresponding internal sub-process
  activated;
- in the case of a Level 2 or Level 3 security incident, an incident
  management record must be formally opened and sent to the Information
  Security Referent of the PAL, who activates the corresponding sub-process
  and coordinates with the CERT Regionale in the manner illustrated below;
- in the case of anomalous events that have an impact on information security
  but cannot be classified, a support request is formally sent to the CERT
  Regionale, which carries out the classification and activates the
  corresponding management sub-process.

When several events occur concurrently, priority is given to handling the
events with the highest criticality level.
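The routing described in the bullets above amounts to a small dispatch on the
classification outcome. A hedged sketch follows; the return labels are
shorthand invented here for the sub-processes named in the text.

```python
def route_event(level=None, false_positive=False):
    """Map a classified security event to the sub-process that handles it."""
    if false_positive:
        return "record-and-treat-false-positive"
    if level is None:                          # event could not be classified
        return "support-request-to-cert-regionale"
    if level == 0:
        return "open-alarm-record"             # internal alarm sub-process
    if level == 1:
        return "open-incident-record"          # internal incident sub-process
    return "escalate-to-security-referent"     # Levels 2 and 3: shared handling


def handling_order(levels):
    """With concurrent events, the highest-criticality ones come first."""
    return sorted(levels, reverse=True)
```

An unclassifiable event (``level=None``) falls through to the support request
towards the CERT Regionale, mirroring the last bullet above.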

Treatment of false positives
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Events identified as false positives do not give rise to alarms; once they
have been confirmed, the possibility of having such events filtered out by
the tracking systems must be evaluated (particularly in the case of
persistent false positives), so as not to overload the monitoring activities
with repetitive operations.

These activities are carried out internally by the PALs, activating properly
codified and documented change-management procedures for the security
appliances.

Management of anomalous events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For events that have an impact on security but that the Information Security
Referent of the PAL is unable to classify autonomously, a support request is
sent to the CERT Regionale containing:

- the evidence collected;
- the results of the analyses performed and the reasons why the
  classification could not be carried out.

The CERT Regionale, coordinating with the Information Security Referent of
the PAL, analyses the event, classifies it according to the defined metric,
and activates the corresponding management sub-process, involving the CERT-PA
where necessary.

Management of Level 0 (Not relevant) and Level 1 (Informative) incidents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The sub-process for managing Level 0 incidents (or alarms) and Level 1
incidents is internal to the PAL, and involves interaction with the CERT
Regionale only in the final reporting phase, at the end of the treatment
activities.

A high-level description of the sub-process follows; for the details, refer
to the internal management procedures that take into account the
organisational specificities of each individual PAL.

The cross-functional flow chart of the sub-process for managing Level 0 and
Level 1 incidents is the following.

.. figure:: media/flusso-gestione-incidenti-livello-0-livello-1.png
   :name: flusso-gestione-incidenti-livello-0-livello-1

   Management flow for Level 0 and Level 1 incidents

The activities indicated in Figure 9.2 are detailed below (for the legend of
the symbols, refer to Figure 9.1).

Definition of the management activities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Information Security Referent of the PAL assesses the extent and nature
of any damage suffered and defines, on the basis of the evidence collected,
the optimal counter-strategy for the detected incident, taking into account
the following operational constraints:

- the countering and containment measures identified must be commensurate
  with the actual benefit that can be obtained; that is, the interventions
  must cause the least possible damage to the ICT assets and to the services
  they support;
- the damage or disruption to the ICT assets caused by implementing the
  countering and containment measures must in any case be less than that
  resulting from the ongoing violation;
- under no circumstances are operations permitted that would, directly or
  indirectly, violate the security policies in force, contractual clauses or
  applicable laws; nor are interventions permitted that could cause any
  damage, material or moral, to natural persons, whether employees of or
  external to the PAL involved.

During this phase the Security Referent may interact with other internal
operational structures, according to the security incident treatment
procedures defined locally by each PAL. In particular, authorisation to
proceed must be requested when the activities to be carried out are invasive,
or could in any case cause a disruption of the systems involved.

At the end of this phase an intervention plan is drawn up, containing a
detailed description of the operations to be performed, in order to
facilitate the intervention of the various operational structures involved.

Incident treatment
^^^^^^^^^^^^^^^^^^

The Security Referent coordinates the application of the treatment measures
defined in the intervention plan, working with the operational groups within
the PAL and, where necessary, escalating responsibility to other structures,
according to the internal security incident treatment procedures.

Once the treatment procedures have been applied, the Security Referent
assesses whether the incident is actually closed. These checks can be
performed by analysing the events detected by the tracking systems, or by any
other verification that provides evidence of the work carried out.

At the end of these checks:

- if the incident has not actually been resolved, the Security Referent
  continues the management activities, defining a new intervention plan and,
  if necessary, evaluating a re-classification of the incident itself;
- if the incident has been resolved, the Security Referent formally closes
  it.

Incident closure
^^^^^^^^^^^^^^^^

The Security Referent formally closes the incident and draws up a formal
closure report containing, as a minimum:

- the unique identification code of the incident;
- the date and time at which the incident management record was opened;
- the incident assessment report, including the classification level and the
  list of damage suffered;
- documentation of the interventions carried out and of the recovery
  activities performed;
- the date and time of closure of the incident.

If the incident suffered is of Level 0, no communication is sent to the CERT
Regionale and the report is archived locally. If the incident is of Level 1,
a formal communication is sent to the CERT Regionale.
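The minimum content of the closure report, and the Level 0 versus Level 1
notification rule, can be captured in a small record type. The class and
field names below are assumptions made for this sketch only.

```python
from dataclasses import dataclass, field


@dataclass
class ClosureReport:
    """Minimum fields of the formal closure report listed above."""
    incident_id: str          # unique identification code of the incident
    opened_at: str            # date/time the management record was opened
    closed_at: str            # date/time the incident was closed
    level: int                # assigned classification level (0 or 1 here)
    damage: str               # assessment report: classification and damage
    interventions: list = field(default_factory=list)  # actions and recovery

    def notify_cert_regionale(self) -> bool:
        # Level 0: archive locally only; Level 1: formal communication.
        return self.level >= 1
```

Keeping the notification rule on the record itself makes it hard to archive a
Level 1 report without also triggering the communication to the CERT
Regionale.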

Communication of Level 1 incidents to the CERT Regionale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

At the end of the management activities, and after the incident has been
closed, the Information Security Referent of the PA sends the CERT Regionale
an informative report containing the details of the (Level 1) incident
suffered, thereby activating the recovery and post-incident analysis
sub-process.

Management of Level 2 (Attention) incidents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The cross-functional flow chart of the sub-process for managing Level 2
incidents is shown in the following figure.

.. figure:: media/flusso-gestione-incidenti-livello-2.png
   :name: flusso-gestione-incidenti-livello-2

   Management flow for Level 2 incidents

The activities indicated in Figure 9.3 are detailed below (for the legend of
the symbols, refer to Figure 9.1).

Involvement of the CERT Regionale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As soon as a Level 2 incident is identified, the Information Security
Referent of the PA involves the CERT Regionale by sending, through the agreed
channels, a formal support request for the management of the ongoing
incident.

The request must contain all the details the CERT Regionale needs in order to
perform the analysis and provide useful guidance for the treatment of the
incident. In particular, the following must be indicated:

- the date and time at which the event was detected;
- the date and time to which the detected damage refers;
- the systems affected by the incident;
- the services affected by the incident and their criticality;
- the classification level assigned;
- the incident assessment report, including the list of damage suffered.

Together with the support request, the Security Referent submits to the CERT
Regionale the operational intervention plan, detailing:

- the countering and containment activities undertaken so far;
- the results obtained;
- the actions suggested, based on the knowledge acquired about the managed
  technology assets, the detected events and the systems they involve;
- the results expected from the suggested activities.

Incident analysis and re-classification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The security analyst team of the CERT Regionale re-classifies the reported
incident, analysing its impact from a systemic perspective for the PAL and
correlating the information received with any further indications received
from other PALs, from other CERT Regionali or from the CERT-PA.

At the end of this classification, the CERT Regionale assesses whether it is
necessary to:

- de-classify the incident as a false positive;
- de-classify the incident to Level 0 or 1, activating the corresponding
  sub-process and notifying the PAL involved;
- raise the classification to Level 3, activating the corresponding
  sub-process, engaging the CERT-PA and notifying the PAL involved.

Depending on the type of incident reported, the CERT Regionale may send a
security advisory to the accredited PALs, to report the occurrence of the
ongoing incident and raise the level of attention on specific threat
scenarios or on particular technological and organisational aspects.

If (on the basis of the data collected) the incident has potential impacts on
the services or infrastructures of PALs other than the originating one, the
CERT Regionale proceeds to involve all the PALs concerned.

De-classification as a false positive
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the analysts of the CERT Regionale find that the incident reported by the
PAL is not relevant and does not, directly or indirectly, imply any
violation, they de-classify it as a false positive, sending the PAL involved
a communication containing:

- the result of the analysis;
- the reasons for the de-classification;
- any guidance on the treatment of the reported false positive and on how to
  remove it from the monitoring devices.

Involvement of the PALs concerned
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the case of an incident with a potential impact on several PALs, the CERT
Regionale formally involves all the PALs affected by the incident, sending
their Information Security Referents a communication containing (as a
minimum):

- the date and time of the incident;
- the type of systems and services affected by the incident (operating
  systems, technologies used, etc.);
- the possible damage that could be caused;
- any other information deemed necessary for the treatment of any other
  incidents systemically connected to the detected incident.

Where possible, the communication sent must be anonymised, safeguarding the
confidentiality of the information and preventing the circulation of
otherwise critical information.

Analysis of the notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Security Referent of each PAL analyses the communication received from
the CERT Regionale, involving, within their own PAL, any other operational
structures according to the locally defined procedures.

Depending on the type of event reported, the following scenarios are normally
conceivable:

- if the systems of the PAL involved are not vulnerable to the reported
  threat, the Security Referent of the PAL sends the CERT Regionale a
  feedback report;
- if there is no specific evidence that the reported threat has materialised,
  the level of monitoring for the specific notification received is
  increased, and the Security Referent sends the CERT Regionale a feedback
  report;
- if, as a result of the notification received, a security incident not
  previously detected (or already detected and still being classified) is
  found within the PAL, the Security Referent activates the corresponding
  incident management sub-process, sending a feedback report to the CERT
  Regionale and remaining in constant contact with the CERT Regionale, which
  coordinates all the response activities.

If the reported incident has actual impacts on other PALs (an incident of
systemic relevance), the CERT Regionale proceeds to coordinate the subsequent
response activities, formally recording the presence of a systemic incident.
In all other cases the CERT Regionale offers its specialist support to the
PAL involved in the treatment of the detected incidents.

Incident response support
^^^^^^^^^^^^^^^^^^^^^^^^^

The CERT Regionale supports the Information Security Referent of the PAL in
responding to the detected incident and in defining the operational
intervention plan.

In particular, the CERT Regionale analyses the material received from the PAL
and, where necessary, supplements the intervention plan with strategic
guidance, derived from the information gathered during the analysis and
re-classification phase or otherwise collected thanks to its coordination
role within the logical domain of the public administration.

The CERT Regionale then shares the intervention plan, supplemented in this
way, with the Information Security Referent of the PAL.

Coordination of the response to a systemic PAL incident
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After the PAL's formal notification of a systemic incident, the CERT
Regionale takes over the coordination of the activities for managing the
systemic PAL incident.

The CERT Regionale analyses the material received from the PAL and, where
necessary, supplements the intervention plan with strategic guidance, derived
from the information gathered during the analysis and re-classification phase
or otherwise collected thanks to its coordination role within the logical
domain of the public administration in the territory.

During this phase the CERT Regionale coordinates the Security Referents of
the PALs involved, with whom it shares the plan of interventions needed to
treat the incidents, and from whom it receives constant updates on the
progress of the treatment activities.

Incident treatment
^^^^^^^^^^^^^^^^^^

The Security Referents of the PALs involved (the originating PAL and, in the
case of an incident of systemic relevance, the other PALs) receive the plan
with the interventions within their remit and, in accordance with their
internal incident treatment procedures, oversee the application of the
countering and containment measures defined therein, involving, within their
own PAL, any other operational structures according to those procedures.

During the treatment activities the Information Security Referents of the
PALs remain constantly aligned with the CERT Regionale, to which they send
all significant updates on the progress of the activities.

At the end of the treatment activities, each PAL concerned draws up a summary
report, which allows the CERT Regionale to determine whether the state of
emergency is formally resolved or whether a new analysis and
re-classification of the incident is needed. These checks can be performed by
analysing the events detected by the tracking systems, or by any other
verification that provides evidence of the work carried out.

Return to the ordinary state
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once the state of emergency is over, the CERT Regionale formally closes the
incident, drawing up a report containing (as a minimum):

- the unique identification code of the incident;
- the date and time at which the incident management record was opened;
- the incident assessment report, including the classification level, the
  list of damage suffered and, in the case of systemic impacts, the PALs
  involved;
- documentation of the interventions carried out and of the results obtained,
  specifying, particularly for systemic incidents, the systems/services of
  the various PALs affected by the treatment activities;
- the date and time of closure of the incident.

This report must be sent to the PALs involved, which then proceed internally
to close the reported incident. Upon return to the ordinary state, the CERT
Regionale activates the recovery and post-incident analysis sub-process.

Management of Level 3 (Critical) incidents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The cross-functional flow chart of the sub-process for managing Level 3
incidents is shown in the following figure.

.. figure:: media/flusso-gestione-incidenti-livello-3.png
   :name: flusso-gestione-incidenti-livello-3

   Management flow for Level 3 incidents

The activities indicated in Figure 9.4 are detailed below (for the legend of
the symbols, refer to Figure 9.1).

For ease of exposition, only the specific activities that characterise the
activation of the Level 3 incident management sub-process are detailed below.
For the details of the other activities, refer to the previous sections of
this document.

Involvement of the CERT-PA
^^^^^^^^^^^^^^^^^^^^^^^^^^

At the same time as the notification, the security analyst team of the CERT
Regionale sends the CERT-PA the results of the analyses performed and the
intervention strategy put in place by the originating PAL for the treatment
of the ongoing incident, possibly supplemented by further proposals defined
by the CERT Regionale during the analysis phase.

The strategy must detail:

- a description of the attack suffered, the systems and services involved,
  and a description of the damage suffered;
- the countering activities already put in place and those for which approval
  is requested;
- the threat scenarios to be countered and the results expected from the
  identified activities.

If (on the basis of the data collected) the incident has potential impacts on
the services or infrastructures of PALs other than the originating one, the
CERT Regionale proceeds to involve all the PALs concerned.

Updating the CERT-PA
^^^^^^^^^^^^^^^^^^^^

If the reported incident has actual impacts on other PALs (an incident of
systemic relevance), the CERT Regionale updates the CERT-PA on the state of
the ongoing emergency, sending:

- the feedback reports received from the various PALs involved;
- any other information useful for defining or updating the intervention
  strategy drawn up by the CERT-PA.

The CERT Regionale remains in contact with the Security Referents of the PALs
involved, from whom it receives updates on the events in progress, which it
submits to the CERT-PA to keep it up to date.
Management activity support
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once activated by the Regional CERT, the CERT-PA:

- supports and coordinates with the Regional CERT in managing the incident;
  the Regional CERT in turn involves the Security Officer of the PAL
  concerned;
- verifies and formally approves the intervention strategy drawn up,
  possibly supplementing it with strategic guidance derived from its
  nationwide coordination role within the PA domain and from its
  interfacing with other Regional CERTs.

The Regional CERT then provides specialist support in handling the incident
to the PALs involved, based on the information/directives from the CERT-PA
contained in the intervention strategy shared with the CERT-PA, with which
it remains constantly aligned.
Management activity coordination and support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For nationwide Emergencies of systemic relevance, once activated by the
Regional CERT, the CERT-PA:

- supports and coordinates with the Regional CERT in managing the incident;
  the Regional CERT in turn involves the Security Officers of all the PALs
  concerned;
- correlates all the information received from the various PALs, updating
  the intervention strategy according to the communications received from
  the Regional CERT during incident management and to the strategic
  guidance derived from its nationwide coordination role within the PA
  domain and from its interfacing with other Regional CERTs;
- verifies and formally approves the intervention strategy drawn up.

The Regional CERT then takes over coordination of incident handling for the
PALs involved, based on the information/directives from the CERT-PA
contained in the intervention strategy shared with the CERT-PA, with which
it remains constantly aligned.
Incident response support
^^^^^^^^^^^^^^^^^^^^^^^^^

The Regional CERT supports the Information Security Officer of the PAL in
managing the detected incident and in defining the operational intervention
plan.
In particular, the Regional CERT analyzes the material received from the
PAL and, where necessary, supplements the intervention plan with the
strategic guidance coming from the CERT-PA and with the information
gathered during the analysis and re-classification of the incident.
The Regional CERT then shares the intervention plan, supplemented in this
way, with the Information Security Officer of the PAL.
PAL systemic incident response coordination
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After the formal notification of a systemic incident by the PAL, the
Regional CERT takes over, in agreement with the CERT-PA, the coordination
of the PA systemic incident management activities.
The Regional CERT analyzes the material received from the PA and, where
necessary, supplements the intervention plan with the strategic guidance
coming from the CERT-PA and with the information gathered during the
analysis and re-classification of the incident.
During this phase the Regional CERT coordinates the Information Security
Officers of the PALs involved, with whom it shares the plan of the
interventions needed to handle the incidents and from whom it receives
constant updates on the progress of the handling activities.
Return to normal operation
^^^^^^^^^^^^^^^^^^^^^^^^^^

When the state of emergency is closed, the Regional CERT formally closes
the incident, drawing up a report containing (at a minimum):

- the unique identifier of the incident;
- date and time of opening of the incident management record;
- the incident assessment report, including the list of damage sustained
  and any systemic impacts;
- documentation of the interventions carried out, specifying, particularly
  for systemic incidents, the systems/services involved in the handling
  activities;
- date and time of closure of the incident.

This report must be sent to the CERT-PA and to the PAs involved, which
then proceed internally to close the incident.
Upon return to normal operation, the Regional CERT activates the
post-incident recovery and analysis sub-process.
Post-incident recovery and analysis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The post-incident recovery and analysis sub-process is detailed in the
figure below.

.. figure:: media/flusso-ripristino-incidente.png
   :name: flusso-ripristino-incidente

   Incident recovery flow

The activities indicated in Figure 9.5 are detailed below (for the legend
of the symbols, refer to Figure 9.1).
Post-incident analysis and follow-up
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Upon closure of the incident (Level 1 – 3), the Information Security
Officer of the PA sends the Regional CERT all the data on the managed
incident needed to start the post-incident analysis.
This analysis is performed by the Regional CERT within its logical domain
of competence, and complements the analysis activities carried out in
parallel within each PA at the closure of the incident.
This process consists of a series of planned activities aimed at verifying:

- the characteristics of the threat agent responsible for the incident;
- how the incident developed, and the circumstances and/or vulnerabilities
  that made it possible;
- a possible improvement plan for the security posture, aimed at
  containing the risk of new incidents of a similar nature.

Post-incident analysis is an activity under the responsibility and
coordination of the Regional CERT, which relies on the Information
Security Officer of the PALs and, where necessary, on external
specialists. It comprises the following activities:

- collection and isolation of the data related to the incident;
- analysis of the assets involved;
- analysis of the events correlated with the incident;
- construction of the cause-effect scenario;
- assessment of the impacts and potential propagation of the incidents;
- preservation of the sources of evidence for probative purposes.
At the end of the activities, the Regional CERT must produce a
post-incident analysis report, containing the recommendations needed to
prevent such an event from happening again, and send it to the Information
Security Officer of the PAL.
Throughout all post-incident analysis activities, strict compliance with
privacy laws and regulations must in any case be guaranteed. Moreover, all
the personnel in the group performing the analyses are bound by a strict
confidentiality clause, which forbids disclosing to third parties any
element or judgment, even as a personal opinion, concerning the
information collected.
All post-incident analysis activities must in any case be formally
authorized by the Regional CERT, with an indication of the purpose,
context, time limits and execution procedures of the analyses. Should
these activities involve interventions within the PAL, they must be agreed
with the Information Security Officer, sharing an appropriate action plan
detailing the activities performed and the personnel involved.
Unless otherwise provided for specific contexts, the electronic copy of
the post-incident analysis report, of the data collected during the
analyses and of the recommendations issued must be kept by the Regional
CERT for a period of twelve calendar months. After that term, they will be
removed and the removable storage media destroyed.
Recovery planning support
^^^^^^^^^^^^^^^^^^^^^^^^^

Upon the formal closure of the incident, which marks the end of the
emergency and the return to normal operation, the Regional CERT supports
the Information Security Officer of the PA affected by the incident in
defining the recovery plan.
The recovery plan aims to bring the information assets affected by the
incident back to the situation preceding the incident itself, and contains
(at a minimum):

- the PAs and related ICT assets involved in the incident;
- the plan of the recovery activities and a description of the operational
  steps to be performed;
- the roles and responsibilities involved in the operational activities;
- the planned recovery dates.

In general, the activity plan must provide for recovery in the shortest
possible time. However, depending on the type, extent and impact
registered on the information assets as a result of the incident,
long-term recovery plans are possible, providing for a return to the
pre-incident situation over months or years.
Recovery plan implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Information Security Officer of the PA implements the recovery plan
agreed with the Regional CERT, activating the operational groups within
the PA and, if necessary, escalating responsibility to other structures,
according to the internal procedures for handling security incidents.

Recovery monitoring
^^^^^^^^^^^^^^^^^^^

The Regional CERT periodically monitors the progress of the recovery
activities, coordinating with the Information Security Officers of the PAs
involved and producing periodic progress reports (SAL) on the status of
the activities provided for by the recovery plan.
At the end of the activities, the recovery activities are formally closed.
Recovery closure
^^^^^^^^^^^^^^^^

The Regional CERT formally closes the recovery activities, drawing up a
recovery report containing:

- the closing date of the recovery activities;
- references to the defined recovery plan and to any subsequent
  re-planning;
- references to the progress reports (SAL) produced during the monitoring
  activity;
- the outcome of the recovery activities performed.

This report must be sent to the CERT-PA and to the PAs involved, which
then proceed internally to close the incident.
Responsibility matrix
---------------------

The responsibility matrix for the PA incident management process is shown
below, indicating for each activity described above the actor involved and
the degree of involvement, according to the following convention:

+-------+--------------------+
| **A** | Analyzes           |
+-------+--------------------+
| **R** | Receives           |
+-------+--------------------+
| **V** | Validates/Verifies |
+-------+--------------------+
| **E** | Performs           |
+-------+--------------------+
| **S** | Supervises         |
+-------+--------------------+
| **I** | Is informed        |
+-------+--------------------+
| **U** | Supports           |
+-------+--------------------+
.. table:: Responsibility matrix
   :name: matrice-responsabilita

   +------+---------------------------------+-------------+-------------+---------+
   | Id   | Activity                        | PA          | Regional    | CERT-PA |
   |      |                                 | Information | CERT        |         |
   |      |                                 | Security    |             |         |
   |      |                                 | Officer     |             |         |
   +======+=================================+=============+=============+=========+
   | 1    | Monitoring of security events   | E           |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 2    | Analysis and classification     | E           |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 3    | Handling of false positives     | E           |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 4    | Anomalous event management      | U           | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 5    | **Incident Management Level 0   |             |             |         |
   |      | (Not Relevant) – Level 1        |             |             |         |
   |      | (Informational)**               |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 5.1  | Definition of management        | E           |             |         |
   |      | activities                      |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 5.2  | Incident handling               | E           | I           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 5.3  | Incident closure                | E           |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 5.4  | Notification of Level 1         | E           | R           |         |
   |      | incident to the Regional CERT   |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6    | **Incident Management Level 2   |             |             |         |
   |      | (Warning)**                     |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.1  | Regional CERT involvement       | E           |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.2  | Incident analysis and           | I           | E           |         |
   |      | re-classification               |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.3  | De-classification as false      | I           | E           |         |
   |      | positive                        |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.4  | Involvement of the PALs         | I           | E           |         |
   |      | concerned                       |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.5  | Report analysis                 | E           | I           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.6  | Incident response support       | R           | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.7  | PA systemic incident response   | R           | E           |         |
   |      | coordination                    |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.8  | Incident handling               | E           | I, S        |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 6.9  | Return to normal operation      | I           | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7    | **Incident Management Level 3   |             |             |         |
   |      | (Critical)**                    |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.1  | Regional CERT involvement       | E           |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.2  | Incident analysis and           | I           | E           |         |
   |      | re-classification               |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.3  | De-classification as false      | I           | E           |         |
   |      | positive                        |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.4  | CERT-PA involvement             | U           | E           | V       |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.5  | Involvement of the PAs          | I           | E           |         |
   |      | concerned                       |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.6  | Report analysis                 | E           | I           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.7  | CERT-PA update                  | U           | E           | V       |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.8  | Management activity support     |             | R           | E       |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.9  | Management activity             |             | R           | E       |
   |      | coordination and support        |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.10 | Incident response support       | R           | E           | I       |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.11 | PAL systemic incident response  | R           | E           | I       |
   |      | coordination                    |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.12 | Incident handling               | E           | I, S        | I       |
   +------+---------------------------------+-------------+-------------+---------+
   | 7.13 | Return to normal operation      | I           | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 8    | **Post-incident recovery and    |             |             |         |
   |      | analysis**                      |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 8.1  | Post-incident analysis and      | U           | E           |         |
   |      | follow-up                       |             |             |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 8.2  | Recovery planning support       | U           | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 8.3  | Recovery plan implementation    | E           | I           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 8.4  | Recovery monitoring             | I, U        | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
   | 8.5  | Recovery closure                | R           | E           |         |
   +------+---------------------------------+-------------+-------------+---------+
.. discourse::
   :topic_identifier: 9725
Contributors
------------------
* Matt Hall
* Evan Bianco
* Jesper Dramsch
API
===
Import AutoGeneS as::
import autogenes as ag
Main functions
--------------
.. currentmodule:: autogenes.Interface
.. autosummary::
:toctree: api
init
optimize
plot
select
deconvolve
Auxiliary functions
-------------------
.. autosummary::
:toctree: api
pipeline
resume
save
load
Accessors
---------
.. autosummary::
:toctree: api
adata
pareto
fitness_matrix
selection
Release History
===============
0.4.0.14 (2014-10-04)
---------------------
- Add features to LinkManager to use it on a production server
- [Bug] first launch + data requirement with installation
0.4.0 (2014-09-16)
------------------
- [cli] : auto-completion
- [cli] : progress bar on long load
- [webserver] : add/del/update
- [webserver] : add a tag-it extension
- [webserver] : lazyloading and inline on css & js
- conf files on /etc/ and ~/.config
- usage of fakeredis on unit tests
- [webserver] flatdesign
- READ_ONLY option
0.3.0 (2014-04-15)
------------------
- some setup.py correction + init redis on launch
- debian package + travis/tox
- add version option
- better i18n + change makemessages script + new doc example
0.2 (2014-03-14)
----------------
- test it with a Dockerfile (docker.io)
- setup.py with a dependency not included on pypi: a fork of clint (https://github.com/mothsART/clint)
- pre-launch redis server
- change script name + README examples
- add requirements
- add HISTORY.rst on MANIFEST.in
0.1 (2014-03-09)
----------------
- french translation
- english manpage
- AUTHORS.rst and HISTORY.rst file
Flatpak Builder Command Reference
=================================
.. raw:: html
:file: flatpak-builder-docs.html
| 19.833333 | 35 | 0.529412 |
TEST OPTIONS
============
.. spec:: test123
:id: SP_TOO_003
:status: implemented
:tags: test;test2
The Tool awesome shall have a command line interface.
.. story:: test abc
:links: SP_TOO_003
:tags: 1
.. needtable::
.. needtable::
:columns: Incoming;Id;tags;STATUS;TiTle
| 15.1 | 57 | 0.629139 |
.. _exhale_function_static__point__light__component_8h_1a9f2a604613655c76f5b25d2b02642a9d:
Function DECLARE_COMPONENT(StaticPointLightComponent)
=====================================================
- Defined in :ref:`file_rootex_framework_components_visual_light_static_point_light_component.h`
Function Documentation
----------------------
.. doxygenfunction:: DECLARE_COMPONENT(StaticPointLightComponent)
| 29.571429 | 96 | 0.727053 |
..
   Copyright 2020 The Khronos Group Inc.
   SPDX-License-Identifier: CC-BY-4.0
Namespaces
==========
Unless otherwise noted, all symbols should be prefixed with the
``sycl`` namespace. ``buffer`` is ``sycl::buffer``, and
``info::device::name`` is ``sycl::info::device::name``.
| 23.416667 | 63 | 0.672598 |
Getting started
===============
.. |br| raw:: html
<br />
Install
-------
qifqif is written for `Python 2.7`_ and is tested on Linux, Mac OS X and
(somewhat) Windows.
Install with `pip`_ via ``pip install qifqif`` command.
If you're on Windows and don't have pip yet, follow
`this guide`_ to install it.
To install without pip, download qifqif from `its PyPI page`_ and run ``python
setup.py install`` in the directory therein.
.. _Python 2.7: https://www.python.org/downloads/
.. _pip: https://pip.pypa.io/en/stable/
.. _this guide: https://pip.pypa.io/en/latest/installing/
.. _here: https://github.com/Kraymer/qifqif/releases
.. _its PyPI page: http://pypi.python.org/pypi/qifqif#downloads
The best way to upgrade is by running ``pip install -U qifqif``. |br|
An `RSS feed`_ announcing new releases is yet to come.
.. _RSS feed: https://github.com/Kraymer/qifqif/issues/40
How it works
------------
qifqif augments your qif files by adding a category line to each
transaction; that additional information can then be used by accounting
software to perform automatic QIF imports.
It picks categories by searching for keywords you previously defined in
transaction fields and by repeating choices you made regarding similar
transactions.
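As an illustration only, the core idea of keyword-based tagging can be
sketched as follows. This is not qifqif's actual code; the categories,
keywords and "all keywords must appear" rule below are made-up examples,
and real qifqif also supports fields other than *Payee*, partial matches
and regexes.

```python
# Hypothetical sketch of qifqif's keyword-matching idea -- not its real code.
# A mapping of category -> keywords is searched against a transaction's
# "Payee" field; the first category whose keywords all appear wins.

def guess_category(payee, mappings):
    """Return the first category whose keywords all occur in `payee`."""
    payee = payee.lower()
    for category, keywords in mappings.items():
        if all(word.lower() in payee for word in keywords):
            return category
    return None  # unknown transaction: qifqif would prompt the user here

# Example mappings (invented for illustration)
mappings = {
    "Expenses:Groceries": ["supermarket"],
    "Expenses:Transport": ["train", "ticket"],
}

print(guess_category("SUPERMARKET PARIS 75011", mappings))
# With the sample mappings above, this payee is tagged "Expenses:Groceries".
```

Unmatched payees return ``None``, which corresponds to the case where
qifqif falls back to asking you interactively.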
Learning as time goes by
^^^^^^^^^^^^^^^^^^^^^^^^
On first run, qifqif starts with an empty configuration file and prompt you to
enter a category and corresponding keywords for each transaction it reads.
The mappings you enter at prompts are saved in the configuration file,
which builds up over time as you use the software, allowing qifqif to tag
ever more transactions automatically.
.. note::
While not recommended, it's possible to mass-edit the configuration file in
a text editor. Typically, the goal would be to add matchings in bulk and
speed up qifqif learning process as a consequence. |br|
See explanations about the `configuration file format`_ if you want to go
that way.
.. _configuration file format: http://qifqif.readthedocs.org/en/latest/tips.html#format-of-the-configuration-file
Entering keywords
^^^^^^^^^^^^^^^^^
The main interaction you will have with qifqif consists of entering a
category and the keywords required for a transaction to belong to that
category.
Entering a category is pretty basic: it can be any string of your choice, use
``<TAB>`` completion to reuse existing ones. |br|
For a software like Gnucash, that info must correspond to the account name you
want the transaction to be attached to.
Entering keywords has a more advanced interface. |br|
Basic & simple case: you enter (successive) word(s) present in the *Payee* field of the current transaction. |br|
If you want to match transactions on field(s) other than *Payee*, using
partial word matching or --foul you!-- using regexes, then please read
`mastering keywords`_.
.. _mastering keywords: http://qifqif.readthedocs.org/en/latest/tips.html#mastering-keywords-prompts
| 36.5 | 113 | 0.750084 |
.. _PlainPhp:
Using plain PHP templates
=========================
Let's take the unaltered default theme as an example. In ``views/layouts`` we
have the ``default`` template which calls a ``render_partial`` for the
``_header`` partial.
.. literalinclude:: /../../wordless/theme_builder/vanilla_theme/views/layouts/default.html.pug
:language: slim
:caption: views/layouts/default.html.pug
:emphasize-lines: 6
.. literalinclude:: /../../wordless/theme_builder/vanilla_theme/views/layouts/_header.html.pug
:language: slim
:caption: views/layouts/_header.html.pug
Let's suppose we need to rewrite ``_header`` as a plain PHP template
because we don't like Pug or we need to write complex code there.
.. warning::
If you have to write complex code in a view, you are on the wrong path :)
#. Rename ``_header.html.pug`` in ``_header.html.php``
#. Update its content, e.g.:
.. code-block:: php
:caption: views/layouts/_header.html.php
<h1> <?php echo link_to(get_bloginfo('name'), get_bloginfo('url')); ?> </h1>
<h2> <?php echo htmlentities(get_bloginfo('description')) ?> </h2>
#. Done
When ``render_partial("layouts/header")`` doesn't find ``_header.html.pug`` it
will automatically search for ``_header.html.php`` and will use it *as is*,
without passing through any compilation process.
Conclusions
###########
As you can see, Wordless does not constrain you much. Moreover, you still
get its goodies/helpers to break views down into small partials, improving
code readability and organization.
:tocdepth: 2
vuln_test_suite_gen package
===================================
complexities_generator module
---------------------------------------------------------
.. automodule:: vuln_test_suite_gen.complexities_generator
:members:
:undoc-members:
:show-inheritance:
complexity module
---------------------------------------------
.. automodule:: vuln_test_suite_gen.complexity
:members:
:undoc-members:
:show-inheritance:
condition module
--------------------------------------------
.. automodule:: vuln_test_suite_gen.condition
:members:
:undoc-members:
:show-inheritance:
exec_query module
---------------------------------------------
.. automodule:: vuln_test_suite_gen.exec_query
:members:
:undoc-members:
:show-inheritance:
file_manager module
-----------------------------------------------
.. automodule:: vuln_test_suite_gen.file_manager
:members:
:undoc-members:
:show-inheritance:
file_template module
------------------------------------------------
.. automodule:: vuln_test_suite_gen.file_template
:members:
:undoc-members:
:show-inheritance:
filtering_sample module
---------------------------------------------------
.. automodule:: vuln_test_suite_gen.filtering_sample
:members:
:undoc-members:
:show-inheritance:
generator module
--------------------------------------------
.. automodule:: vuln_test_suite_gen.generator
:members:
:undoc-members:
:show-inheritance:
input_sample module
-----------------------------------------------
.. automodule:: vuln_test_suite_gen.input_sample
:members:
:undoc-members:
:show-inheritance:
manifest module
-------------------------------------------
.. automodule:: vuln_test_suite_gen.manifest
:members:
:undoc-members:
:show-inheritance:
sample module
-----------------------------------------
.. automodule:: vuln_test_suite_gen.sample
:members:
:undoc-members:
:show-inheritance:
sink_sample module
----------------------------------------------
.. automodule:: vuln_test_suite_gen.sink_sample
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: vuln_test_suite_gen
:members:
:undoc-members:
:show-inheritance:
| 20.909091 | 58 | 0.528696 |
41830219e853d86e463a78276a42e3f017e2c485 | 14,142 | rst | reStructuredText | docs/source/pattern_syntax.rst | milaboratory/minnn | 6f4957cc6ba31cfd93ff304e9bfd0c79049d8b4d | [
"FSFAP"
] | 5 | 2019-02-27T03:00:57.000Z | 2021-06-21T13:51:46.000Z | docs/source/pattern_syntax.rst | milaboratory/minnn | 6f4957cc6ba31cfd93ff304e9bfd0c79049d8b4d | [
"FSFAP"
] | 65 | 2018-08-17T10:49:38.000Z | 2020-08-18T21:24:14.000Z | docs/source/pattern_syntax.rst | milaboratory/minnn | 6f4957cc6ba31cfd93ff304e9bfd0c79049d8b4d | [
"FSFAP"
] | 1 | 2020-03-03T01:18:24.000Z | 2020-03-03T01:18:24.000Z | .. _pattern_syntax:
==============
Pattern Syntax
==============
Patterns are used in the :ref:`extract` action to specify which sequences must pass to the output and which sequences
must be filtered out. Also, capture groups in patterns are used for barcode extraction. Patterns must always
be specified after the :code:`--pattern` option and must always be in double quotes. Examples:
.. code-block:: text
minnn extract --pattern "ATTAGACA"
minnn extract --pattern "*\*" --input R1.fastq R2.fastq
minnn extract --pattern "^(UMI:N{3:5})attwwAAA\*" --input-format mif
Basic Syntax Elements
---------------------
Many syntax elements in patterns are similar to regular expressions, but there are differences. Uppercase
and lowercase letters are used to specify the sequence that must be matched, but uppercase letters don't allow
indels between them and lowercase letters allow indels. Indels on left and right borders of uppercase letters are
also not allowed. Also, the score penalty for mismatches in uppercase and lowercase letters can be different: the
:code:`--mismatch-score` parameter is used for lowercase mismatches and :code:`--uppercase-mismatch-score` for
uppercase mismatches. Standard IUPAC wildcards (N, W, S, M etc.) are also allowed in both uppercase and lowercase
sequences.
The ``\`` character is a very important syntax element: it is used as a read separator. There can be single-read input
files; in this case the ``\`` character must not be used. In multi-read inputs ``\`` must be used, and the number
of reads in the pattern must be equal to the number of input FASTQ files (or to the number of reads in the input MIF file
if the :code:`--input-format MIF` parameter is used). There can be many reads, but the most common case is 2 reads:
:code:`R1` and :code:`R2`. By default, the extract action will check input reads in the order they are specified in the
:code:`--input` argument, or, if the input file is MIF, in the order they are saved in the MIF file. If the :code:`--try-reverse-order`
argument is specified, it will also try the combination with the last 2 reads swapped (for example, if there are 3 reads,
it will try the :code:`R1, R2, R3` and :code:`R1, R3, R2` combinations), and then choose the match with the better score.
Another important syntax element is capture group. It looks like :code:`(group_name:query)` where :code:`group_name`
is any sequence of letters and digits (like :code:`UMI` or :code:`SB1`) that you use as group name. Group names are
case sensitive, so :code:`UMI` and :code:`umi` are different group names. :code:`query` is part of query that will be
saved as this capture group. It can contain nested groups and some other syntax elements that are allowed inside
single read (see below).
:code:`R1`, :code:`R2`, :code:`R3` etc are built-in group names that contain full matched reads.
You can override them by specifying them manually in the query, and the overridden values will go to the output instead
of the full reads. For example, a query like this
.. code-block:: text
minnn extract --input R1.fastq R2.fastq --pattern "^NNN(R1:(UMI:NNN)ATTAN{*})\^NNN(R2:NNNGACAN{*})"
can be used if you want to strip the first 3 characters and override the built-in :code:`R1` and :code:`R2` groups to write
output reads without the stripped characters. Note that :code:`R1`, :code:`R2`, :code:`R3` etc, like any common groups,
can contain nested groups and can be nested inside other groups.
**Important:** in matches that come from swapped reads (when the :code:`--try-reverse-order` argument is specified),
if you don't override the built-in group names, :code:`R1` in the input will become :code:`R2` in the output and vice versa
(or there can be, for example, swapped :code:`R2` and :code:`R3` in the case of 3 reads). If you use the override,
:code:`R1`, :code:`R2`, :code:`R3` etc in output will come from the place where they matched. If you export the output
MIF file from :ref:`extract` action to FASTQ and want to determine whether the match came from straight or swapped
reads, check the comments for :code:`||~` character sequence: it is added to matches that came from swapped reads.
Look at :ref:`mif2fastq` section for detailed information.
The :code:`*` character can be used instead of read contents if any content must match. It can be enclosed in one or
multiple capture groups, but it can't be used if there are other query elements in the same read. If there are other
query elements, use :code:`N{*}` instead. For example, the following queries are **valid**:
.. code-block:: text
minnn extract --input R1.fastq R2.fastq --try-reverse-order --pattern "(G1:ATTA)\(G2:(G3:*))"
minnn extract --input R1.fastq R2.fastq R3.fastq --pattern "*\*\*"
minnn extract --input R1.fastq R2.fastq --pattern "(G1:ATTAN{*})\(G2:*)"
and this is **invalid**:
.. code-block:: text
minnn extract --input R1.fastq R2.fastq --pattern "(G1:ATTA*)\*"
Curly brackets after a nucleotide can be used to specify the number of repeats for that nucleotide. Any
nucleotide letter (uppercase or lowercase, basic or wildcard) can be followed by curly braces with a quantity specifier.
The following syntax constructions are allowed:
:code:`a{*}` - any number of repeats, from 1 to the entire sequence
:code:`a{:}` - same as the above
:code:`a{14}` - fixed number of repeats
:code:`a{3:6}` - specified interval of allowed repeats, interval borders are inclusive
:code:`a{:5}` - interval from 1 to specified number, inclusive
:code:`a{4:}` - interval from specified number (inclusive) to the entire sequence
**Special Case:** if :code:`n` or :code:`N` nucleotide is used before curly brackets, indels and pattern overlaps
(see :code:`--max-overlap` parameter below) are disabled, so lowercase :code:`n` and uppercase :code:`N` are
equivalent when used before curly brackets.
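For exact (non-fuzzy) matching, the quantifier notation above maps closely onto regular-expression repeat counts. The following sketch (a hypothetical helper, not part of minnn, and ignoring minnn's fuzzy matching with mismatches and indels) translates a single repeat token into the equivalent Python regex quantifier:

```python
import re

# Hypothetical helper (not minnn code): translate a minnn-style repeat
# token such as "a{3:6}" into an equivalent Python regex quantifier.
def quantifier_to_regex(token: str) -> str:
    m = re.fullmatch(r"([A-Za-z])\{([^}]*)\}", token)
    if not m:
        raise ValueError(f"not a repeat token: {token}")
    base, spec = m.group(1).upper(), m.group(2)
    if spec in ("*", ":"):           # a{*} / a{:} - any number of repeats, 1 or more
        return f"{base}+"
    if ":" not in spec:              # a{14} - fixed number of repeats
        return f"{base}{{{int(spec)}}}"
    lo, hi = spec.split(":")
    lo = int(lo) if lo else 1        # a{:5} - lower bound defaults to 1
    hi = int(hi) if hi else ""       # a{4:} - no upper bound
    return f"{base}{{{lo},{hi}}}"
```

For example, :code:`a{3:6}` becomes the regex :code:`A{3,6}` under this translation.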
Symbols :code:`^` and :code:`$` can be used to restrict matched sequence to start or end of the target sequence.
The :code:`^` mark must be at the start of the query for the read, and it means that the query match must start from
the beginning of the read sequence. The :code:`$` mark must be at the end, and it means that the query match must be at
the end of the read. Examples:
.. code-block:: text
minnn extract --pattern "^ATTA"
minnn extract --input R1.fastq R2.fastq --pattern "TCCNNWW$\^(G1:ATTAGACA)N{3:18}(G2:ssttggca)$"
Advanced Syntax Elements
------------------------
There are operators :code:`&`, :code:`+` and :code:`||` that can be used inside the read query.
The :code:`&` operator is logical AND: it means that 2 sequences must match, in any order and with any gap between them.
Examples:
.. code-block:: text
minnn extract --pattern "ATTA & GACA"
minnn extract --input R1.fastq R2.fastq --pattern "AAAA & TTTT & CCCC \ *"
minnn extract --input R1.fastq R2.fastq --pattern "(G1:AAAA) & TTTT & CCCC \ ATTA & (G2:GACA)"
Note that the :code:`AAAA`, :code:`TTTT` and :code:`CCCC` sequences can appear in any order in the target for the
entire query to be considered matching. The :code:`&` operator is not allowed within groups, so this example is **invalid**:
.. code-block:: text
minnn extract --pattern "(G1:ATTA & GACA)"
The :code:`+` operator is also logical AND, but with an order restriction: nucleotide sequences can be matched only in
the specified order. Also, the :code:`+` operator can be used within groups. Note that in this case the matched group will
also include all nucleotides between the matched operands. Examples:
.. code-block:: text
minnn extract --pattern "(G1:ATTA + GACA)"
minnn extract --input R1.fastq R2.fastq --pattern "(G1:AAAA + TTTT) + CCCC \ ATTA + (G2:GACA)"
The :code:`||` operator is logical OR. It is not allowed within groups, but groups with the same name are allowed
inside the operands of the :code:`||` operator. Note that if a group is present in one operand of the :code:`||` operator and
missing in another operand, this group may appear unmatched in the output even though the entire query is matched.
Examples:
.. code-block:: text
minnn extract --pattern "^AAANNN(G1:ATTA) || ^TTTNNN(G1:GACA)"
minnn extract --input R1.fastq R2.fastq --pattern "(G1:AAAA) || TTTT || (G1:CCCC) \ ATTA || (G2:GACA)"
The :code:`+`, :code:`&` and :code:`||` operators can be combined in a single query. The :code:`+` operator has the highest
priority, then :code:`&`, and :code:`||` has the lowest. The read separator (``\``) has lower priority than all these
3 operators. To change the priority, square brackets :code:`[]` can be used. Examples:
.. code-block:: text
minnn extract --pattern "^[AAA & TTT] + [GGG || CCC]$"
minnn extract --input R1.fastq R2.fastq --pattern "[(G1:ATTA+GACA)&TTT]+CCC\(G2:AT+AC)"
Square brackets can be used to create sequences of patterns. A sequence is a special pattern that works like :code:`+`
but with a penalty for gaps between patterns. Examples of the sequence pattern:
.. code-block:: text
minnn extract --pattern "[AAA & TTT]CCC"
minnn extract --input R1.fastq R2.fastq --pattern "[(G1:ATTA+GACA)][(G2:TTT)&ATT]\*"
Square brackets also allow setting a separate score threshold for the query inside the brackets. This is done by writing
the score threshold value followed by :code:`:` after the opening bracket. Examples:
.. code-block:: text
minnn extract --pattern "[-14:AAA & TTT]CCC"
minnn extract --input R1.fastq R2.fastq --pattern "[0:(G1:ATTA+GACA)][(G2:TTT)&ATT]\[-25:c{*}]"
Matched operands of :code:`&`, :code:`+` and sequence patterns can overlap, but overlaps add a penalty to the match score.
You can control the maximum overlap size and the per-letter overlap penalty with the :code:`--max-overlap` and
:code:`--single-overlap-penalty` parameters. A value of :code:`-1` for the :code:`--max-overlap` parameter means no
restriction on the maximum overlap size.
**Important:** parentheses used for groups are not treated as square brackets; instead, they are treated as group
edges attached to nucleotide sequences. So, the following examples are different: the first example creates a sequence
pattern, while the second example attaches the end of :code:`G1` and the start of :code:`G2` to the middle of the sequence :code:`TTTCCC`.
.. code-block:: text
minnn extract --pattern "[(G1:AAA+TTT)][(G2:CCC+GGG)]"
minnn extract --pattern "(G1:AAA+TTT)(G2:CCC+GGG)"
If some nucleotides on the edge of a nucleotide sequence may be cut without a gap penalty, the tail cut pattern can be used.
It looks like repeated :code:`<` characters at the beginning of the sequence, or repeated :code:`>` characters at
the end of the read, or a single :code:`<` or :code:`>` character followed by curly braces with the number of
repeats. It is often used with the :code:`^`/:code:`$` marks. Examples:
.. code-block:: text
minnn extract --input R1.fastq R2.fastq --pattern "^<<<ATTAGACA>>$\[^<TTTT || ^<<CCCC]"
minnn extract --input R1.fastq R2.fastq --pattern "<{6}ACTCACTCGC + GGCTCGC>{2}$\<<AATCC>"
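The effect of a tail cut is easiest to see by enumerating the exact sequences it allows to match. This sketch (an illustration of the semantics described above, not minnn code) lists the variants permitted by a left tail cut :code:`<{k}`:

```python
# Illustration (not minnn code): "<{k}" before a sequence allows up to k
# leading characters of that sequence to be cut without penalty; ">" works
# symmetrically on the right edge of a sequence.
def left_cut_variants(seq: str, k: int) -> list:
    return [seq[i:] for i in range(k + 1)]

# Variants allowed by the pattern "<{3}ATTAGACA" (exact matching only):
variants = left_cut_variants("ATTAGACA", 3)
```

Here :code:`<{3}ATTAGACA` matches any of :code:`ATTAGACA`, :code:`TTAGACA`, :code:`TAGACA` or :code:`AGACA` without a gap penalty.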
**Important:** the :code:`<` and :code:`>` marks belong to nucleotide sequences and not to complex patterns, so square
brackets between :code:`<` / :code:`>` and nucleotide sequences are **not** allowed. Also, the following examples are
different: in the first example the edge cut is applied only to the first operand, and in the second example to both operands.
.. code-block:: text
minnn extract --pattern "<{3}ATTA & GACA"
minnn extract --pattern "<{3}ATTA & <{3}GACA"
High Level Logical Operators
----------------------------
There are operators :code:`~`, :code:`&&` and :code:`||` that can be used with full multi-read queries. Note that
the :code:`||` operator has the same symbol as the read-level OR operator, so square brackets must be used when using the
high-level :code:`||`.
The :code:`||` operator is high-level OR. Groups with the same name are allowed in different operands of this operator,
and if a group is present in one operand of the :code:`||` operator and missing in another operand, this group may appear
unmatched in the output even though the entire query is matched. Examples:
.. code-block:: text
minnn extract --pattern "[AA\*\TT] || [*\GG\CG]" --input R1.fastq R2.fastq R3.fastq
minnn extract --pattern "[^(G1:AA) + [ATTA || GACA]$ \ *] || [AT(G1:N{:8})\(G2:AATGC)]" --input R1.fastq R2.fastq
The :code:`&&` operator is high-level AND. For the AND operator it is not necessary to enclose the multi-read query in square
brackets because there is no ambiguity. Groups with the same name are **not** allowed in different operands of
the :code:`&&` operator. Examples:
.. code-block:: text
minnn extract --pattern "AA\*\TT && *\GG\CG" --input R1.fastq R2.fastq R3.fastq
minnn extract --pattern "^(G1:AA) + [ATTA || GACA]$ \ * && AT(G2:N{:8})\(G3:AATGC)" --input R1.fastq R2.fastq
The :code:`~` operator is high-level NOT, with a single operand. It can sometimes be useful with single-read queries to
filter out wrong data. Groups are **not** allowed in the operand of the :code:`~` operator.
.. code-block:: text
minnn extract --pattern "~ATTAGACA"
minnn extract --pattern "~[TT \ GC]" --input R1.fastq R2.fastq
**Important:** the :code:`~` operator always belongs to a multi-read query that includes all input reads, so this example
is **invalid**:
.. code-block:: text
minnn extract --pattern "[~ATTAGACA] \ TTC" --input R1.fastq R2.fastq
Instead, this query can be used:
.. code-block:: text
minnn extract --pattern "~[ATTAGACA \ *] && * \ TTC" --input R1.fastq R2.fastq
Note that if the :code:`--try-reverse-order` argument is specified, reads will be swapped synchronously for all multi-read
queries that appear as operands in the entire query, so this query will never match:
.. code-block:: text
minnn extract --pattern "~[ATTA \ *] && ATTA \ *" --input R1.fastq R2.fastq
Square brackets are not required for the :code:`~` operator, but they are recommended for clarity if the input contains more
than 1 read. The :code:`~` operator has lower priority than ``\``; :code:`&&` has lower priority than :code:`~`, and
high-level :code:`||` has lower priority than :code:`&&`. But remember that high-level :code:`||` requires enclosing
operands (or multi-read blocks inside operands) in square brackets to avoid ambiguity with the read-level OR operator.
Square brackets with score thresholds can be used with high-level queries too:
.. code-block:: text
minnn extract --pattern "~[0: ATTA \ GACA && * \ TTT] || [-18: CCC \ GGG]" --input R1.fastq R2.fastq
| 52.377778 | 119 | 0.715882 |
cbd7b7e56746eb799e20e0cedf3b4fe5e57c54ce | 2,814 | rst | reStructuredText | docs/source/overview/traffic_stats.rst | qtweng/trafficcontrol | 5cc9fd90f7425ee5ed2705d09d82f099a1d7eead | [
"Apache-2.0",
"BSD-2-Clause",
"MIT-0",
"MIT",
"BSD-3-Clause"
] | 598 | 2018-06-16T02:54:28.000Z | 2022-03-31T22:31:25.000Z | docs/source/overview/traffic_stats.rst | qtweng/trafficcontrol | 5cc9fd90f7425ee5ed2705d09d82f099a1d7eead | [
"Apache-2.0",
"BSD-2-Clause",
"MIT-0",
"MIT",
"BSD-3-Clause"
] | 3,506 | 2018-06-13T16:39:39.000Z | 2022-03-29T18:31:31.000Z | docs/source/overview/traffic_stats.rst | qtweng/trafficcontrol | 5cc9fd90f7425ee5ed2705d09d82f099a1d7eead | [
"Apache-2.0",
"BSD-2-Clause",
"MIT-0",
"MIT",
"BSD-3-Clause"
] | 360 | 2018-06-13T20:08:42.000Z | 2022-03-31T10:37:47.000Z | ..
..
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
..
.. _ts-overview:
*************
Traffic Stats
*************
:dfn:`Traffic Stats` is a program written in `Go <http://golang.org>`_ that is used to acquire and store statistics about CDNs controlled by Traffic Control. Traffic Stats mines metrics from the :ref:`tm-api` and stores the data in `InfluxDB <http://influxdb.com>`_. Data is typically stored in InfluxDB on a short-term basis (30 days or less). The data from InfluxDB is then used to drive graphs created by `Grafana <http://grafana.org>`_ - which are linked to from :ref:`tp-overview` - as well as provide data exposed through the :ref:`to-api`. Traffic Stats performs two functions:
- Gathers statistics for Edge-tier :term:`cache servers` and :term:`Delivery Services` at a configurable interval (10 second default) from the :ref:`tm-api` and stores the data in InfluxDB
- Summarizes all of the statistics once a day (around midnight UTC) and creates a daily report containing the Max :abbr:`Gbps (Gigabits per second)` Served and the Total Bytes Served.
Statistics are stored in three different databases:
- ``cache_stats``: Stores data gathered from edge-tier :term:`cache servers`. The `measurements <https://influxdb.com/docs/v0.9/concepts/glossary.html#measurement>`_ stored by ``cache_stats`` are:
- ``bandwidth``
- ``maxKbps``
- ``client_connections`` (``ats.proxy.process.http.current_client_connections``).
Cache Data is stored with `tags <https://influxdb.com/docs/v0.9/concepts/glossary.html#tag>`_ for hostname, :term:`Cache Group`, and CDN. Data can be queried using tags.
- ``deliveryservice_stats``: Stores data for :term:`Delivery Services`. The measurements stored by ``deliveryservice_stats`` are:
- ``kbps``
- ``status_4xx``
- ``status_5xx``
- ``tps_2xx``
- ``tps_3xx``
- ``tps_4xx``
- ``tps_5xx``
- ``tps_total``
:term:`Delivery Service` statistics are stored with tags for :term:`Cache Group`, CDN, and :term:`Delivery Service` :ref:`ds-xmlid`.
- ``daily_stats``: Stores summary data for daily activities. The statistics that are currently summarized are:
- Max Bandwidth
- Bytes Served
Daily stats are stored by CDN.
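Points of the kind listed above are written to InfluxDB in its line protocol (``measurement,tag=value field=value timestamp``). The sketch below (a hypothetical helper for illustration only, not part of Traffic Stats) formats a single ``cache_stats`` point with the tags described above:

```python
# Hypothetical helper (not part of Traffic Stats): format one cache_stats
# data point in InfluxDB line protocol. Tags are sorted by key, as the
# InfluxDB line protocol documentation recommends for write performance.
def to_line_protocol(measurement, tags, value, timestamp_ns):
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"{measurement},{tag_part} value={value} {timestamp_ns}"

# A bandwidth point tagged with hostname, Cache Group, and CDN:
line = to_line_protocol(
    "bandwidth",
    {"hostname": "edge-cache-01", "cachegroup": "cg-east", "cdn": "cdn-1"},
    123456,
    1609459200000000000,
)
```

The hostname, Cache Group, and CDN names here are made up for the example; real tag values come from Traffic Monitor's view of the CDN.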
Traffic Stats does not influence overall CDN operation, but is required in order to display charts in :ref:`tp-overview`.
| 49.368421 | 584 | 0.734186 |
cd36c6f5c9970d287ee22ae2a584615fedbb4143 | 3,155 | rst | reStructuredText | docsrc/intro.rst | boba-and-beer/vectorai | 5244968e4a3622f6c536e96e1fa25719634e5b45 | [
"Apache-2.0"
] | 255 | 2020-09-30T12:32:20.000Z | 2022-03-19T16:12:35.000Z | docsrc/intro.rst | boba-and-beer/vectorai | 5244968e4a3622f6c536e96e1fa25719634e5b45 | [
"Apache-2.0"
] | 20 | 2020-10-01T06:14:35.000Z | 2021-04-12T07:22:57.000Z | docsrc/intro.rst | boba-and-beer/vectorai | 5244968e4a3622f6c536e96e1fa25719634e5b45 | [
"Apache-2.0"
] | 33 | 2020-10-01T20:52:39.000Z | 2022-03-18T07:17:25.000Z |
Vector AI - Essentials
^^^^^^^^^^^^^^^^^^^^^^
Vector AI is built to store vectors alongside documents (text/audio/images/videos).
It is designed to be a light-weight library to create, manipulate, search and analyse vectors to power machine
learning applications such as semantic search, recommendations, etc.
Important Terminologies
=======================
- **Vectors** (aka. embeddings, 1D arrays)
- **Models/Encoders** (aka. Embedders) Turns data into vectors e.g. Word2Vec turns words into vectors
- **Vector Similarity Search** (aka. Nearest Neighbor Search, Distance Search)
- **Collection** (aka. Index, Table) ~ a collection is made up of multiple documents
- **Documents** (aka. Json, Item, Dictionary, Row) ~ a document can contain vector + other important information
.. code-block:: text
e.g.
{
"_id" : "1",
"description_vector_": [...],
"description" : "This is a great idea"
}
Some important conventions: for predefined vectors, use the suffix "_vector_" in the field name (e.g. "description_vector_"); for quick key-value lookup by ID, use the field name "_id".
Documents in Vector AI
========================
Documents (dictionaries) consists of fields (dictionary keys) and values.
1. Vector AI is document-oriented (dictionaries/JSONs), which means you can have nested fields. For example, you can have documents such as:
.. code-block:: python
document_example = {
"car": {
"wheels":
{
"number": 4
}
}
}
then running vi_client.get_field("car.wheels.number", document_example) will return 4
2. When you upload documents into Vector AI, it will infer the schema from the first document being inserted.
You can navigate document fields using the functions below; nested fields are addressed by joining field
names with dots.
.. code-block:: python
vi_client.set_field(field, doc, value)
vi_client.get_field(field, doc)
vi_client.set_field_across_documents(field, docs, values)
vi_client.get_field_across_documents(field, docs)
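The dotted-path behaviour of these helpers can be sketched in a few lines of plain Python. This is an illustration of the semantics only (hypothetical standalone functions, not the actual ``vi_client`` implementation):

```python
# Illustrative sketch (not the real Vector AI implementation): a dotted
# field path such as "car.wheels.number" walks nested dictionaries.
def get_field(field: str, doc: dict):
    value = doc
    for key in field.split("."):
        value = value[key]
    return value

def set_field(field: str, doc: dict, new_value) -> None:
    # Descend to the parent dictionary, creating levels as needed,
    # then assign the final key.
    *parents, last = field.split(".")
    target = doc
    for key in parents:
        target = target.setdefault(key, {})
    target[last] = new_value

document_example = {"car": {"wheels": {"number": 4}}}
```

With this sketch, ``get_field("car.wheels.number", document_example)`` returns ``4``, matching the behaviour described above.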
Models With Vector AI
========================
Vector AI has deployed models that we've handpicked and tuned to work nicely out of the box on most problems.
These models, however, may change over time. When they do, we make sure that
previous models remain deployed and can still be used.
To prototype something quickly we highly recommend using these deployed models.
**If you are working on a problem that requires highly customised or finetuned models, reach out to us
for enterprise services where we can fine tune these models for your use case or feel free to build your own.**
Currently, our deployed models are:
* ViText2Vec - our text to vector model
* ViImage2Vec - our image to vector model
* ViAudio2Vec - our audio to vector model
* dimensionality_reduction_job - perform dimensionality reduction on your vectors
* clustering_job - perform clustering on your vectors
* advanced_cluster_job - perform clustering with advanced options on your vectors
| 38.012048 | 172 | 0.699842 |
b83343ee6165e05e19e89029f42a4499b402230e | 184 | rst | reStructuredText | docs/source/reference/clustertools.analysis.operations.rst | webbjj/clustertools | 16e01374da364da5688cd0c7b2bd0b8aa7d0e87a | [
"MIT"
] | 8 | 2020-07-21T00:49:24.000Z | 2022-02-05T17:43:49.000Z | docs/source/reference/clustertools.analysis.operations.rst | nstarman/clustertools | 0cb23be6d96f835885ff5ea560d6858a172b8fab | [
"MIT"
] | 4 | 2020-07-16T14:11:18.000Z | 2021-06-02T18:58:26.000Z | docs/source/reference/clustertools.analysis.operations.rst | nstarman/clustertools | 0cb23be6d96f835885ff5ea560d6858a172b8fab | [
"MIT"
] | 3 | 2020-07-16T01:15:18.000Z | 2021-07-15T15:18:07.000Z | clustertools.analysis.operations module
=======================================
.. automodule:: clustertools.analysis.operations
:members:
:undoc-members:
:show-inheritance:
| 23 | 48 | 0.597826 |
e6528788bd4999e759fa818154d3ce9797528232 | 675 | rst | reStructuredText | docs/source/helpers/tests/test_general/tests.general.test_mocker.TestMocker.rst | Privex/python-helpers | 1c976ce5b0e2c5241ea0bdf330bd6701b5e31153 | [
"X11"
] | 12 | 2019-06-18T11:17:41.000Z | 2021-09-13T23:00:21.000Z | docs/source/helpers/tests/test_general/tests.general.test_mocker.TestMocker.rst | Privex/python-helpers | 1c976ce5b0e2c5241ea0bdf330bd6701b5e31153 | [
"X11"
] | 1 | 2019-10-13T07:34:44.000Z | 2019-10-13T07:34:44.000Z | docs/source/helpers/tests/test_general/tests.general.test_mocker.TestMocker.rst | Privex/python-helpers | 1c976ce5b0e2c5241ea0bdf330bd6701b5e31153 | [
"X11"
] | 4 | 2019-10-10T10:15:09.000Z | 2021-05-16T01:55:48.000Z | TestMocker
==========
.. currentmodule:: tests.general.test_mocker
.. autoclass:: TestMocker
.. automethod:: __init__
:noindex:
Methods
^^^^^^^
.. rubric:: Methods
.. autosummary::
:toctree: testmocker
~TestMocker.test_mocker_add_modules
~TestMocker.test_mocker_attributes
~TestMocker.test_mocker_items
~TestMocker.test_mocker_items_attributes_equiv
~TestMocker.test_mocker_make_class
~TestMocker.test_mocker_make_class_module
~TestMocker.test_mocker_make_class_module_isolation
~TestMocker.test_mocker_make_class_not_instance
Attributes
^^^^^^^^^^
.. rubric:: Attributes
.. autosummary::
:toctree: testmocker
| 15.697674 | 55 | 0.73037 |
1d029bc8b65702c967e09117aec43ca72ffd723a | 411 | rst | reStructuredText | docs/source/config.rst | gitloon/sopel | 8cf460fc9ee5c53c36cdfffcf019e6df4801515e | [
"EFL-2.0"
] | 555 | 2015-07-25T21:21:43.000Z | 2022-03-28T02:22:38.000Z | docs/source/config.rst | gitloon/sopel | 8cf460fc9ee5c53c36cdfffcf019e6df4801515e | [
"EFL-2.0"
] | 1,177 | 2015-07-31T09:52:31.000Z | 2022-03-26T05:10:34.000Z | docs/source/config.rst | gitloon/sopel | 8cf460fc9ee5c53c36cdfffcf019e6df4801515e | [
"EFL-2.0"
] | 406 | 2015-07-28T20:34:02.000Z | 2022-03-18T00:37:01.000Z | ===========================
Configuration functionality
===========================
sopel.config
============
.. automodule:: sopel.config
:members:
:undoc-members:
sopel.config.types
==================
.. automodule:: sopel.config.types
:members:
:undoc-members:
sopel.config.core_section
=========================
.. automodule:: sopel.config.core_section
:members:
:undoc-members:
| 14.678571 | 41 | 0.518248 |
48a1c8b690e65d3deaa1bd78b23bf5cafd946599 | 58 | rst | reStructuredText | Misc/NEWS.d/next/Core and Builtins/2021-08-11-16-46-27.bpo-44890.PwNg8N.rst | kashishmish/cpython | 32ff88af346c6fedfecbfb4abd8681d84283dda7 | [
"0BSD"
] | 4 | 2020-03-04T06:35:24.000Z | 2021-09-20T12:22:45.000Z | Misc/NEWS.d/next/Core and Builtins/2021-08-11-16-46-27.bpo-44890.PwNg8N.rst | kashishmish/cpython | 32ff88af346c6fedfecbfb4abd8681d84283dda7 | [
"0BSD"
] | 148 | 2020-02-26T01:08:34.000Z | 2022-03-01T15:00:59.000Z | Misc/NEWS.d/next/Core and Builtins/2021-08-11-16-46-27.bpo-44890.PwNg8N.rst | kashishmish/cpython | 32ff88af346c6fedfecbfb4abd8681d84283dda7 | [
"0BSD"
] | 1 | 2021-09-04T09:56:10.000Z | 2021-09-04T09:56:10.000Z | Specialization stats are always collected in debug builds. | 58 | 58 | 0.862069 |
2ee4a528e4f44426128c7d03934113f706075074 | 4,539 | rst | reStructuredText | docs/tested_systems.rst | MisanthropicBit/colorise | c7a7e3d4b224e80f39761edfc10e5676b610ba41 | [
"BSD-3-Clause"
] | 2 | 2016-02-07T19:58:46.000Z | 2022-03-28T12:26:57.000Z | docs/tested_systems.rst | MisanthropicBit/colorise | c7a7e3d4b224e80f39761edfc10e5676b610ba41 | [
"BSD-3-Clause"
] | 5 | 2018-05-25T04:36:11.000Z | 2021-01-18T19:08:04.000Z | docs/tested_systems.rst | MisanthropicBit/colorise | c7a7e3d4b224e80f39761edfc10e5676b610ba41 | [
"BSD-3-Clause"
] | 2 | 2018-03-04T21:57:03.000Z | 2022-03-28T12:25:54.000Z | Tested Systems
==============
.. _GitHub Actions: https://github.com/MisanthropicBit/colorise/actions
This is a maintained list of systems that colorise has been tested on per
release version.
Something not working as expected with your terminal? Please report an `issue
<https://github.com/MisanthropicBit/colorise/issues>`__ or submit a `pull
request <https://github.com/MisanthropicBit/colorise/pulls>`__ using the
`contribution guidelines
<https://github.com/MisanthropicBit/colorise/blob/master/CONTRIBUTING.md>`__.
v2.0.0
------
**Mac**
+---------------------------------------+--------------------------------------+-------------------------+
| Terminal | OS | Python versions |
+=======================================+======================================+=========================+
| `iTerm 3.4.3 <https://iterm2.com/>`__ | macOS Big Sur v11.1 | 3.9.1 |
+---------------------------------------+--------------------------------------+-------------------------+
| Terminal.app 2.11 (440) | macOS Big Sur v11.1 | 3.8.2 |
+---------------------------------------+--------------------------------------+-------------------------+
| bash | Mac OS X 10.15.7 (`GitHub Actions`_) | 3.5, 3.6, 3.7, 3.8, 3.9 |
+---------------------------------------+--------------------------------------+-------------------------+
**Windows**
+----------------------------------------+------------------------------------------+-------------------------+
| Terminal | OS | Python versions |
+========================================+==========================================+=========================+
| Default Windows console | Windows 8.1 Pro, version 6.3, build 9600 | 3.7.3 |
+----------------------------------------+------------------------------------------+-------------------------+
| `ConEmu <https://conemu.github.io/>`__ | Windows 8.1 Pro, version 6.3, build 9600 | 3.7.3 |
+----------------------------------------+------------------------------------------+-------------------------+
| cmd.exe | Windows Server 2019 (`GitHub Actions`_) | 3.5, 3.6, 3.7, 3.8, 3.9 |
+----------------------------------------+------------------------------------------+-------------------------+
**Linux**
+----------------------------------------+------------------------------------------+-------------------------+
| Terminal | OS | Python versions |
+========================================+==========================================+=========================+
| Most likely sh or bash | Ubuntu 18.04.5 LTS (`GitHub Actions`_) | 3.5, 3.6, 3.7, 3.8, 3.9 |
+----------------------------------------+------------------------------------------+-------------------------+
v1.0.1
------
**Mac**
+---------------------------------------+---------------------------+---------------------+
| Terminal | OS | Python versions |
+=======================================+===========================+=====================+
| `iTerm 3.2.9 <https://iterm2.com/>`__ | macOS Catalina v10.15.1 | Python 3.7.4 |
+---------------------------------------+---------------------------+---------------------+
| Terminal.app 2.9.4 (421.1.1) | macOS Catalina v10.15.1 | Python 3.7.4 |
+---------------------------------------+---------------------------+---------------------+
**Windows**
+----------------------------------------+------------------------------------------+-----------------+
| Terminal | OS | Python versions |
+========================================+==========================================+=================+
| Default Windows console | Windows 8.1 Pro, version 6.3, build 9600 | Python 3.7.3 |
+----------------------------------------+------------------------------------------+-----------------+
| `ConEmu <https://conemu.github.io/>`__ | Windows 8.1 Pro, version 6.3, build 9600 | Python 3.7.3 |
+----------------------------------------+------------------------------------------+-----------------+
**Linux**
None yet.
| 59.723684 | 111 | 0.249835 |
3e5404724513d20620591fb13d27969c182b8200 | 2,589 | rst | reStructuredText | doc/buildgroundsdk.rst | Parrot-Developers/groundsdk-ios | d85c798e9e3a7e19f5a0ad9f9ab8f7753d422d4a | [
"BSD-3-Clause"
] | 13 | 2019-05-29T00:17:09.000Z | 2021-12-22T16:05:15.000Z | doc/buildgroundsdk.rst | Parrot-Developers/groundsdk-ios | d85c798e9e3a7e19f5a0ad9f9ab8f7753d422d4a | [
"BSD-3-Clause"
] | 5 | 2019-05-08T01:11:42.000Z | 2021-07-20T21:29:33.000Z | doc/buildgroundsdk.rst | Parrot-Developers/groundsdk-ios | d85c798e9e3a7e19f5a0ad9f9ab8f7753d422d4a | [
"BSD-3-Clause"
] | 10 | 2019-05-07T18:43:29.000Z | 2021-09-14T13:41:14.000Z | .. _repo install:
Install and build GroundSdk
===========================
Environment setup
-----------------
Download and install the latest Xcode from App Store Application
.. note:: GroundSdk has been successfully tested with Xcode 10.2, Swift 4.2
and macOS Mojave 10.14
Install Homebrew
.. code-block:: console
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Install the required tools using Homebrew
.. code-block:: console
$ brew install bash-completion git xctool pkgconfig repo python3 cmake autoconf automake libtool
Install the following Python modules
.. code-block:: console
$ pip3 install requests pyparsing
Configure git with your real name and email address
.. code-block:: console
$ git config --global user.name "Your Name"
$ git config --global user.email "you@example.com"
Download GroundSdk
------------------
Create your working directory
.. code-block:: console
$ mkdir groundsdk
$ cd groundsdk
Initialize Repo in your working directory
.. code-block:: console
$ repo init -u https://github.com/Parrot-Developers/groundsdk-manifest
.. note:: You can learn how to use Repo on the `command reference page`_
Download the GroundSdk source tree
.. code-block:: console
$ repo sync
Xcode configuration
-------------------
Open the project to automatically install the last components of Xcode
.. code-block:: console
$ open ./products/groundsdk/ios/xcode/groundsdk.xcworkspace/
Connect your Apple developer account and select your provisioning profile
Build and run GroundSdk Demo
----------------------------
#. Build GroundSdk Demo for iOS device
.. code-block:: console
# the build script will ask you to enter your session password a few times
$ ./build.sh -p groundsdk-ios -t build -j
.. note:: To know more about building options
.. code-block:: console
$ ./build.sh -p groundsdk-ios -t
.. note:: Build GroundSdk Demo for Simulator
.. code-block:: console
$ ./build.sh -p groundsdk-ios_sim -t build -j
2. Connect an iOS device to your computer
#. Go back to Xcode
#. Select iOS device
#. Click on Build and then run the current scheme
Connect to your drone
---------------------
#. Switch on your drone
#. Open wifi settings on your iOS device
#. Select your drone's wifi access point (e.g. ANAFI-xxxxxxx)
#. Enter wifi password
#. Open Ground SDK Demo app
#. Your drone should appear in the list, select it
#. Click on Connect
.. _command reference page: https://source.android.com/setup/develop/repo
.. The contents of this file may be included in multiple topics (using the includes directive).
.. The contents of this file should be modified in a way that preserves its ability to appear in multiple topics.
The |omnibus installer| is used to set up a |chef server| using a single command to install the server and all of its dependencies, including |erlang|, |ruby|, |rabbitmq|, |nginx|, and |postgresql|. The |omnibus installer| puts everything into a unique directory (``/etc/chef-server/``) so that the |chef server| will not interfere with other applications that may be running on the target server.
Advanced Setup
==============
There are some additional settings that can be used to set up the controls.
Settings
--------
These settings are global to the controls and are placed in the ``appsettings.json`` file, within a section named ``NccSettings``::
{
(...)
"Logging": {
(...)
},
"NccSettings": {
"UseDependencyInjection": "true",
"DataProtectionKey": "11111111-2222-3333-4444-555555555555"
}
}
- **UseDependencyInjection** *(default: true)* - indicates whether the data access class should be requested directly from the IoC container or whether it should be instantiated.
- **DataProtectionKey** - defines the key to be used when encrypting the context of each control. The example uses a Guid, but it can be any type of string.
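As an illustration only — the controls actually read this section through the .NET configuration system, not like this — here is how the ``NccSettings`` section shown above could be parsed from ``appsettings.json``:

```python
import json

# A minimal stand-in for appsettings.json (hypothetical content).
raw = """
{
  "Logging": {},
  "NccSettings": {
    "UseDependencyInjection": "true",
    "DataProtectionKey": "11111111-2222-3333-4444-555555555555"
  }
}
"""

settings = json.loads(raw)["NccSettings"]

# The flag is stored as a string, so it must be interpreted as a boolean.
use_di = settings["UseDependencyInjection"].lower() == "true"
data_protection_key = settings["DataProtectionKey"]

print(use_di)               # True
print(data_protection_key)  # 11111111-2222-3333-4444-555555555555
```

Note that the key is any string; the GUID form above is just the example used by the documentation.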
Exception Handling
------------------
Exceptions within tag helpers are contained and isolated from the page, preventing an error in a control from blocking the rendering of the page.
Nevertheless, if the error occurs in the Razor code, such as trying to read a nonexistent property, these errors cannot be handled by the NccControls and prevent the page from rendering.
To prevent this from happening, you can use the ``RenderControl`` TagHelper to render the controls. So, instead of using::
@await Html.PartialAsync("/Views/NccGrid/Partials/_GridExample.cshtml")
just use::
<ncc:render-control Context="@(ViewData["GridExample"] as NccGridContext)"></ncc:render-control>
.. highlight:: matlab
.. _example_globalfit_regularization:
**********************************************************************************
Global fit of a dipolar evolution function using regularization
**********************************************************************************
.. raw:: html
<br>
<p align="center">
<a href="https://deertutorials.s3.eu-central-1.amazonaws.com/globalfit_regularization/globalfit_regularization.pdf" title="Download PDF file" target="_blank" download>
<img src="../_static/img/download_pdf_button.png" style="width:10%;height:10%;" alt="pdf">
</a>
<a href="https://deertutorials.s3.eu-central-1.amazonaws.com/globalfit_regularization/globalfit_regularization.mlx" title="Download Live Script" target="_blank">
<img src="../_static/img/download_live_button.png" style="width:10%;height:10%;" alt="live">
</a>
<a href="https://deertutorials.s3.eu-central-1.amazonaws.com/globalfit_regularization/globalfit_regularization.m" title="Download Source File" target="_blank">
<img src="../_static/img/download_source_button.png" style="width:10%;height:10%;" alt="pdf">
</a>
</p>
.. raw:: html
:file: ../../../tutorials/globalfit_regularization/globalfit_regularization.html
=======
Parser
=======
A parser is used in the GraphQL query to parse the HTML. The current system supports XPath via the lxml parser.
The library does not support Beautiful Soup, as it is slower than the lxml parser, and selector-based parsing is comparatively slower than XPath.
- `Lxml`_
Lxml
====
.. autoclass:: scrapqd.gql_parser.lxml_parser.LXMLParser
:members:
Public Modules
==============
Model
^^^^^^
.. automodule:: restio.model
:members:
.. automodule:: restio.fields
:members:
Data Access Object
^^^^^^^^^^^^^^^^^^
.. automodule:: restio.dao
:members:
Session
^^^^^^^
.. automodule:: restio.session
:members:
Aerospike Python Client
=======================
|Build| |Release| |Wheel| |Downloads| |License|
.. |Build| image:: https://travis-ci.org/aerospike/aerospike-client-python.svg?branch=master
.. |Release| image:: https://img.shields.io/pypi/v/aerospike.svg
.. |Wheel| image:: https://img.shields.io/pypi/wheel/aerospike.svg
.. |Downloads| image:: https://img.shields.io/pypi/dm/aerospike.svg
.. |License| image:: https://img.shields.io/pypi/l/aerospike.svg
Dependencies
------------
The Python client for Aerospike works with Python 2.6, 2.7, 3.4, 3.5 running on
**64-bit** OS X 10.9+ and Linux.
The client depends on:
- Python devel package
- OpenSSL
- The Aerospike C client
RedHat 6+ and CentOS 6+
~~~~~~~~~~~~~~~~~~~~~~~
The following are dependencies for:
- RedHat Enterprise (RHEL) 6 or newer
- CentOS 6 or newer
- Related distributions which use the ``yum`` package manager
::
sudo yum install python-devel
sudo yum install openssl-devel
Debian 6+ and Ubuntu 12.04+
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following are dependencies for:
- Debian 6 or newer
- Ubuntu 12.04 or newer
- Related distributions which use the ``apt`` package manager
::
sudo apt-get install python-dev
sudo apt-get install libssl-dev
OS X
~~~~~~~~
By default OS X will be missing command line tools. On Mavericks (OS X 10.9)
and higher those `can be installed without Xcode <http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/>`__.
::
xcode-select --install # install the command line tools, if missing
OpenSSL can be installed through the `Homebrew <http://brew.sh/>`__ OS X package
manager.
::
brew install openssl
Install
-------
Aerospike Python Client can be installed using ``pip``:
::
pip install aerospike
# to troubleshoot pip versions >= 6.0 you can
pip install --no-cache-dir aerospike
# to troubleshoot installation on OS X El-Capitan (10.11)
pip install --no-cache-dir --user aerospike
# to have pip copy the Lua system files to a dir other than /usr/local/aerospike/lua
pip install aerospike --install-option="--lua-system-path=/opt/aerospike/lua"
If you run into trouble installing the client on a supported OS, you may be
using an outdated ``pip``.
Versions of ``pip`` older than 7.0.0 should be upgraded, as well as versions of
``setuptools`` older than 18.0.0. Upgrading ``pip`` on OS X El-Capitan (10.11)
runs into `SIP issues <https://apple.stackexchange.com/questions/209572/how-to-use-pip-after-the-el-capitan-max-os-x-upgrade>`__
with ``pip install --user <module>`` as the recommended workaround.
Build
-----
For instructions on manually building the Python client, please refer to the
``BUILD.md`` file in this repo.
Documentation
-------------
Documentation is hosted at `pythonhosted.org/aerospike <https://pythonhosted.org/aerospike/>`__
and at `aerospike.com/apidocs/python <http://www.aerospike.com/apidocs/python/>`__.
Examples
--------
Example applications are provided in the `examples directory of the GitHub repository <https://github.com/aerospike/aerospike-client-python/tree/master/examples/client>`__
For examples, to run the ``kvs.py``:
::
python examples/client/kvs.py
Benchmarks
----------
To run the benchmarks the python modules 'guppy' and 'tabulate' need to be installed.
Benchmark applications are provided in the `benchmarks directory of the GitHub repository <https://github.com/aerospike/aerospike-client-python/tree/master/benchmarks>`__
License
-------
The Aerospike Python Client is made available under the terms of the
Apache License, Version 2, as stated in the file ``LICENSE``.
Individual files may be made available under their own specific license,
all compatible with Apache License, Version 2. Please see individual
files for details.
FURY 0.2.0 Released
===================
.. post:: March 8 2019
:author: skoudoro
:tags: fury
:category: release
The FURY project is happy to announce the release of FURY 0.2.0!
FURY is a free and open source software library for scientific visualization and 3D animations.
You can show your support by `adding a star <https://github.com/fury-gl/fury/stargazers>`_ to the FURY GitHub project.
The **major highlights** of this release are:
.. include:: ../../release_notes/releasev0.2.0.rst
:start-after: --------------
:end-before: Details
.. note:: The complete release notes are :ref:`available here <releasev0.2.0>`
**To upgrade or install FURY**
Run the following command in your terminal::
pip install --upgrade fury
or::
conda install -c conda-forge fury
**Questions or suggestions?**
For any questions go to http://fury.gl, or send an e-mail to fury@python.org
You can also join our `discord community <https://discord.gg/6btFPPj>`_
We would like to thank :ref:`all contributors <community>` for this release:
.. include:: ../../release_notes/releasev0.2.0.rst
:start-after: commits.
:end-before: We closed
On behalf of the :ref:`FURY developers <community>`,
Serge K.
..
..
.. Licensed under the Apache License, Version 2.0 (the "License");
.. you may not use this file except in compliance with the License.
.. You may obtain a copy of the License at
..
.. http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing, software
.. distributed under the License is distributed on an "AS IS" BASIS,
.. WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
.. See the License for the specific language governing permissions and
.. limitations under the License.
..
.. program:: t3c
.. _t3c:
***
t3c
***
:dfn:`t3c` is the tool used to deploy configuration to :term:`cache servers` used by :abbr:`ATC (Apache Traffic Control)`. It stands for "Traffic Control Cache Configuration".
.. toctree::
:maxdepth: 3
:glob:
*
.. include:: meta-logon_done.rst
This controller is used as a stepping stone after a logon from the ``/logon`` page.
The ``p`` argument is passed from the ``/logon`` page.
The controller will notify observers of ``#logon_ready_page{ request_page = P }`` to
see where to redirect next.
The notification is a :ref:`notification-first`, so the first module responding
with something other than ``undefined`` will determine the redirect.
If no redirection is returned, and the ``p`` argument is empty, then the user
is redirected to the home page ``/``.
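The dispatch rule above — the first observer returning something other than ``undefined`` decides — can be sketched language-agnostically (Zotonic itself is Erlang; the Python below, with made-up observer names, only illustrates the pattern):

```python
# Hypothetical observers: each may return a redirect target, or None
# (standing in for Erlang's `undefined`) to pass on the decision.
def module_a(request_page):
    return None  # this module does not want to redirect

def module_b(request_page):
    return "/welcome" if request_page == "" else None

def notify_first(observers, request_page):
    # The first observer returning something other than None decides.
    for observer in observers:
        result = observer(request_page)
        if result is not None:
            return result
    return None  # nobody answered: the caller falls back to "/"

observers = [module_a, module_b]
print(notify_first(observers, ""))        # /welcome
print(notify_first(observers, "/admin"))  # None
```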
.. seealso:: :ref:`logon_ready_page`, :ref:`controller-authentication`
Browbeat OSP Performance Tuning Playbook
========================================
This playbook aims to tune OSP deployed on Red Hat Enterprise Linux.
The playbook in here is currently experimental.
#########################
Project work objectives
#########################
Several main operational objectives can be identified for the project work: its mission is to achieve a higher level of efficiency and effectiveness in the management of the overall stock of school furniture by means of the new computerized management system; this tool will provide better knowledge of the assets and, as a result, greater efficiency in every area of their management.
While this is the main objective, further objectives can be identified around it on which to build the project work:
• from the customer's point of view (i.e. the schools), an improvement in the relationship and in the service offered can be expected;
• from the office's point of view, the possibility of improving the quality and speed of case handling by building a database in which information about supplies or requests from previous years can be easily extracted and used when evaluating a request;
• building a server-based application and archive will facilitate relations within the Division and the office's collaboration with other offices of the Educational Services that deal with facilities in the procurement area, with the management of the building stock and with budget management;
• thanks to the application developed, the functions carried out by the office in a largely autonomous way, disconnected from the activities of the other offices of the Division or Service, could be enriched with components useful for better integrating the process into the organizational system of the city's Educational Services.
To better identify the project objectives, a reflection was carried out aimed at filling in the Business Model Canvas of the furniture supply activity: the canvas was drawn up after the project had started, since this tool is one of the topics developed as the master's teaching program progresses. This tool, designed above all for business models aimed at creating value and revenue, made it possible to identify and define more clearly the relevance, roles and relationships of the constituent elements of the process within the organization.
The document, produced using the fillable template and the compilation guide available on the site https://www.businessmodelcanvas.it/, also highlights elements not mentioned in this description, considered less relevant to the process and yet present in the procedure (e.g. relationships and channels): PW4A_RIGOTTI-arredi-canvas.pdf - https://github.com/Master-MSL/03ed-pw04A-docs/blob/main/doc/PWeducativi/allegatiEDU/PW4A_RIGOTTI-arredi-canvas.pdf
.. image:: https://raw.githubusercontent.com/bojigu/thoipapy/develop/thoipapy/docs/THOIPA_banner.png
THOIPApy
========
The Transmembrane HOmodimer Interface Prediction Algorithm (THOIPA) is a machine learning method for the analysis of protein-protein-interactions.
THOIPA predicts transmembrane homodimer interface residues from evolutionary sequence information.
THOIPA helps predict potential homotypic transmembrane interface residues, which can then be verified experimentally.
THOIPA also aids in the energy-based modelling of transmembrane homodimers.
Important links:
* `THOIPA webserver <http://www.thoipa.org>`_
* `THOIPA FAQ <https://github.com/bojigu/thoipapy/wiki/What-is-THOIPA%3F>`_
* `THOIPA wiki main page <https://github.com/bojigu/thoipapy/wiki/THOIPA-wiki-main-page>`_
How does thoipapy work?
-----------------------
* downloads protein homologues with BLAST
* extracts residue properties (e.g. residue conservation and polarity)
* trains a machine learning classifier
* validates the prediction performance
* creates heatmaps of residue properties and THOIPA prediction
Installation
------------
.. code::
pip install thoipapy
THOIPA has only been tested on Linux, due to reliance on external dependencies such as FreeContact, Phobius, CD-HIT and rate4site.
For predictions only, a dockerised version is available that runs on Windows or MacOS.
Please see the `THOIPA webserver <http://www.thoipa.org>`_ for the latest information.
Dependencies
------------
We recommend the `Anaconda python distribution <https://www.anaconda.com/products/individual>`_, which contains all the required python modules
(numpy, scipy, pandas, biopython and matplotlib). THOIPApy is currently tested for python 3.8.5. The requirements.txt contains a snapshot of compatible
dependencies.
Development status
------------------
The code has been extensively updated and annotated for public release. However, it is released "as is" with some known issues, limitations and legacy code.
Usage as a standalone predictor
-------------------------------
* first check if your needs are met by the `THOIPA webserver <http://www.thoipa.org>`_ or the latest version of dockerised software
* for local predictions on linux, first install phobius, NCBI_BLAST, biopython, freecontact, CD-HIT, and rate4site
* please see `thoipapy/test/functional/test_standalone_prediction.py <https://github.com/bojigu/thoipapy/tree/develop/thoipapy/test/functional/test_standalone_prediction.py>`_ for the latest run syntax, typically
.. code:: python
from thoipapy.thoipa import get_md5_checksum, run_THOIPA_prediction
from thoipapy.utils import make_sure_path_exists
protein_name = "ERBB3"
TMD_seq = "MALTVIAGLVVIFMMLGGTFL"
full_seq = "MVQNECRPCHENCTQGCKGPELQDCLGQTLVLIGKTHLTMALTVIAGLVVIFMMLGGTFLYWRGRRIQNKRAMRRYLERGESIEPLDPSEKANKVLA"
out_dir = "/path/to/your/desired/output/folder"
make_sure_path_exists(out_dir)
md5 = get_md5_checksum(TMD_seq, full_seq)
run_THOIPA_prediction(protein_name, md5, TMD_seq, full_seq, out_dir)
**Example Output**
* the output includes a csv showing the THOIPA prediction for each residue, as well as a heatmap figure as a summary
* below is a heatmap showing the THOIPA prediction, and underlying conservation, relative polarity, and coevolution
.. image:: https://raw.githubusercontent.com/bojigu/thoipapy/master/thoipapy/docs/standalone_heatmap_example.png
Create your own machine learning predictor
------------------------------------------
* THOIPA can be retrained to any dataset of your choice
* the original set of training sequences and other resources are available via the `Open Science Foundation <https://osf.io/txjev/>`_
* the THOIPA feature extraction, feature selection, and training pipeline is fully automated
* contact us for an introduction to the THOIPA software pipeline and settings
.. code:: bash
python path/to/thoipapy/run.py -s home/user/thoipa/THOIPA_settings.xlsx
License
-------
THOIPApy is free software distributed under the permissive MIT License.
Contribute
-------------
* Contributors are welcome.
* For feedback or troubleshooting, please email us directly and open an issue on GitHub.
Contact
-------
* Mark Teese, `TNG Technology Consulting GmbH <https://www.tngtech.com/en/index.html>`_, formerly of the `Langosch Lab <http://cbp.wzw.tum.de/index.php?id=10>`_ at the `Technical University of Munich <https://www.tum.de/en/>`_
* `Bo Zeng <http://frishman.wzw.tum.de/index.php?id=50>`_, `Chinese Academy of Sciences, Beijing <http://english.cas.cn/>`_ formerly of the `Frishman Lab <http://frishman.wzw.tum.de/index.php?id=2>`_ at the `Technical University of Munich <https://www.tum.de/en/>`_
.. image:: https://raw.githubusercontent.com/bojigu/thoipapy/develop/thoipapy/docs/signac_seine_bei_samois_mt.png
:height: 150px
:width: 250px
.. image:: https://raw.githubusercontent.com/bojigu/thoipapy/develop/thoipapy/docs/signac_notredame_bz.png
:height: 120px
:width: 250px
Citation
--------
`Yao Xiao, Bo Zeng, Nicola Berner, Dmitrij Frishman, Dieter Langosch, and Mark George Teese (2020)
Experimental determination and data-driven prediction of homotypic transmembrane domain interfaces,
Computational and Structural Biotechnology Journal <https://doi.org/10.1016/j.csbj.2020.09.035>`_
Const qualify UFunc inner loops
-------------------------------
``UFuncGenericFunction`` now expects pointers to const ``dimension`` and
``strides`` as arguments. This means inner loops may no longer modify
either ``dimension`` or ``strides``. This change leads to an
``incompatible-pointer-types`` warning forcing users to either ignore
the compiler warnings or to const qualify their own loop signatures.
.. index::
single: Authors
.. include:: ../AUTHORS.rst
Confusion
=========
.. automodule:: tensorflow_mri.python.losses.confusion_losses
.. currentmodule:: tfmr.losses
.. autosummary::
:toctree: confusion_losses
:nosignatures:
Crude user updater
==================
A package to update your users' program to the latest version using a simple archive and an ignore list.
Create a zip from your source directory:
::
import updater
updater.create_archive("src", ignore_list=updater.COMMON_FILTERS, package_name="my_super_package.zip")
Apply the update:
::
import updater
updater.apply_archive("my_super_package.zip", "final_destination")
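The create/apply cycle boils down to zipping a source tree while skipping ignored patterns, then unpacking the archive over the destination. A rough standard-library sketch of that idea — not the package's actual implementation:

```python
import fnmatch
import os
import tempfile
import zipfile

# Assumed ignore patterns, similar in spirit to updater.COMMON_FILTERS.
IGNORE = ["*.pyc", "__pycache__*"]

def create_archive(src, ignore_list, package_name):
    """Zip `src`, skipping any relative path matching an ignore pattern."""
    with zipfile.ZipFile(package_name, "w") as zf:
        for root, _dirs, files in os.walk(src):
            for name in files:
                path = os.path.join(root, name)
                rel = os.path.relpath(path, src)
                if any(fnmatch.fnmatch(rel, pat) for pat in ignore_list):
                    continue  # ignored file: not shipped in the update
                zf.write(path, rel)

def apply_archive(package_name, destination):
    """Unpack the update over the destination directory."""
    with zipfile.ZipFile(package_name) as zf:
        zf.extractall(destination)

# Round-trip demo in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "src")
    os.makedirs(src)
    with open(os.path.join(src, "app.py"), "w") as f:
        f.write("print('hi')")
    with open(os.path.join(src, "junk.pyc"), "w") as f:
        f.write("x")
    pkg = os.path.join(tmp, "pkg.zip")
    create_archive(src, IGNORE, pkg)
    dest = os.path.join(tmp, "dest")
    apply_archive(pkg, dest)
    print(sorted(os.listdir(dest)))  # ['app.py'] -- junk.pyc was ignored
```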
.. _clients-all-examples:
Programming Languages and Examples
==================================
There are many other programming languages that provide |ak| client libraries as well.
Refer to :devx-examples:`confluentinc/examples|clients/cloud/README.md` GitHub repository for client code written in the following programming languages and tools.
These "Hello, World!" examples produce to and consume from |ak-tm| clusters, and for the subset of languages that support it, there are additional examples using Avro and |sr-long|.
* :devx-examples:`C|clients/cloud/c/README.md`
* :devx-examples:`Clojure|clients/cloud/clojure/README.md`
* :devx-examples:`Confluent CLI|clients/cloud/confluent-cli/README.md`
* :devx-examples:`Confluent Cloud CLI|clients/cloud/ccloud/README.md`
* :ref:`C-sharp/.Net <client-examples-csharp>`
* :devx-examples:`Go|clients/cloud/go/README.md`
* :devx-examples:`Groovy|clients/cloud/groovy/README.md`
* :devx-examples:`Java|clients/cloud/java/README.md`
* :devx-examples:`Java Spring Boot|clients/cloud/java-springboot/README.md`
* :devx-examples:`Apache Kafka commands|clients/cloud/kafka-commands/README.md`
* :devx-examples:`Kafka Connect datagen|clients/cloud/kafka-connect-datagen/README.md`
* :devx-examples:`kafkacat|clients/cloud/kafkacat/README.md`
* :devx-examples:`Kotlin|clients/cloud/kotlin/README.md`
* :devx-examples:`KSQL Datagen|clients/cloud/ksql-datagen/README.md`
* :devx-examples:`Node.js|clients/cloud/nodejs/README.md`
* :ref:`Python <client-examples-python>`
* :devx-examples:`Ruby|clients/cloud/ruby/README.md`
* :devx-examples:`Rust|clients/cloud/rust/README.md`
* :devx-examples:`Scala|clients/cloud/scala/README.md`
.. figure:: ../cloud/images/clients-all.png
   :width: 600px
.. toctree::
   :maxdepth: 1
   :hidden:

   csharp
   python
============
INCAR File
============
INCAR parameters are also set via ``-a``; simply specify the INCAR parameters
and they are written to the INCAR file automatically.
For example::
    $ pyvasp prep_single_vasp POSCAR -a NSW=143.2,LCHARG=True,EDIFF=1e-4,NELECT=145
The input parameters are automatically normalized to the correct format; for
example, ``NSW=143.2`` is converted to ``NSW=143``.
We also provide a command that only generates an INCAR file; it is used in the
same way. For example::
    $ pyvasp incar -h
    Usage: pyvasp incar [OPTIONS]

      Example:
      pyvasp incar -f INCAR -a NSW=100,EDIFF=1e-6

      For more help you can refer to
      https://pyvaspflow.readthedocs.io/zh_CN/latest/incar.html

    Options:
      -a, --attribute TEXT
      -f, --incar_file TEXT
      -h, --help             Show this message and exit.
An example::
    # generate a new INCAR
    $ pyvasp incar -a NSW=143.2,LCHARG=True,EDIFF=1e-4,NELECT=145

    # generate an INCAR based on an old INCAR file
    $ pyvasp incar -f INCAR -a NSW=100,EDIFF=1e-6
The last example shows that a new INCAR can also be generated from an existing
INCAR file; the parameters given after ``-a`` are merged into the INCAR file.
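The normalization step described above (``NSW=143.2`` becoming ``NSW=143``) can be sketched in a few lines of Python. This is an illustration only, not pyvaspflow's actual code, and the tag tables below are assumed subsets:

```python
INT_TAGS = {"NSW", "IBRION", "ISIF"}      # assumed integer-valued INCAR tags
BOOL_TAGS = {"LCHARG", "LWAVE"}           # assumed boolean-valued INCAR tags

def normalize(key, value):
    """Coerce one key=value pair to a valid INCAR form."""
    key = key.upper()
    if key in INT_TAGS:
        return key, str(int(float(value)))            # NSW=143.2 -> NSW=143
    if key in BOOL_TAGS:
        truthy = str(value).lower() in ("true", ".true.", "t", "1")
        return key, ".TRUE." if truthy else ".FALSE."
    return key, str(value)

def parse_attributes(spec):
    """Turn an '-a' string like 'NSW=143.2,LCHARG=True' into INCAR lines."""
    lines = []
    for item in spec.split(","):
        key, _, value = item.partition("=")
        key, value = normalize(key, value)
        lines.append(f"{key} = {value}")
    return lines

print("\n".join(parse_attributes("NSW=143.2,LCHARG=True,EDIFF=1e-4")))
```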
.. This file is licensed under the Apache License 2.0 available on http://www.apache.org/licenses/.
getsnapshotrequest
==================
``getsnapshotrequest "asset_name" block_height``
Retrieves the specified snapshot request details.
Arguments:
~~~~~~~~~~
1. "asset_name" (string, required) The asset name for which the snapshot will be taken
2. "block_height" (number, required) The block height at which the snapshot will be taken
Result:
~~~~~~~
::
    {
      asset_name: (string),
      block_height: (number),
    }
Examples:
~~~~~~~~~
::
    hive-cli getsnapshotrequest "TRONCO" 12345
::
    curl --user myusername --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getsnapshotrequest", "params": ["PHATSTACKS", 34987] }' -H 'content-type: text/plain;' http://127.0.0.1:9766/
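The same call can be made from Python; the request body is the same JSON-RPC 1.0 payload the curl example sends. The URL, user name, and password below are placeholders:

```python
import json
import urllib.request

def build_request(method, params, request_id="curltest"):
    """Assemble a JSON-RPC 1.0 request body, mirroring the curl example."""
    return {"jsonrpc": "1.0", "id": request_id, "method": method, "params": params}

def call(url, user, password, method, params):
    """POST the request with basic auth; requires a running daemon."""
    body = json.dumps(build_request(method, params)).encode()
    manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    manager.add_password(None, url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(manager))
    request = urllib.request.Request(url, data=body,
                                     headers={"content-type": "text/plain"})
    with opener.open(request) as response:
        return json.loads(response.read())["result"]

# Building the payload needs no server; sending it would, e.g.
# call("http://127.0.0.1:9766/", "myusername", "mypassword",
#      "getsnapshotrequest", ["TRONCO", 12345])
payload = build_request("getsnapshotrequest", ["TRONCO", 12345])
print(json.dumps(payload))
```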
.. _doc_entities_spells:
Spells
======
spells
1.6 - 2012-11-16
----------------
- changed the signatures of the ``check_file`` function in flake8/run.py,
``skip_warning`` in flake8/util.py and the ``check``, ``checkPath``
functions in flake8/pyflakes.py.
- fix ``--exclude`` and ``--ignore`` command flags (#14, #19)
- fix the git hook that wasn't catching files not already added to the index
(#29)
- pre-emptively includes the addition to pep8 to ignore certain lines.
Add ``# nopep8`` to the end of a line to ignore it. (#37)
- ``check_file`` can now be used without any special prior setup (#21)
- unpacking exceptions will no longer cause an exception (#20)
- fixed crash on non-existent file (#38)
.. _manual-graphite:
#####################
Reporting to Graphite
#####################
The ``metrics-graphite`` module provides ``GraphiteReporter``, which allows your application to
constantly stream metric values to a Graphite_ server:
.. _Graphite: http://graphite.wikidot.com/
.. code-block:: java
    GraphiteReporter.enable(1, TimeUnit.MINUTES, "graphite.example.com", 2003);
.. include:: sub.txt
=====================
getNP command
=====================
.. function:: getNP()
   Get the total number of processors.
just module
~~~~~~~~~~~
.. doxygenfile:: just.h
   :project: cauldron
.. _ed_input:
Input Files
===========
The user configures the structural model parameters via a primary ElastoDyn
input file, as well as separate input files for the tower and *other stuff that
will be documented here later.*
No lines should be added or removed from the input files.
Units
-----
ElastoDyn uses the SI system (kg, m, s, N). Angles are assumed to be in
radians unless otherwise specified.
ElastoDyn Primary Input File
----------------------------
The primary ElastoDyn input file defines modeling options and geometries for the
OpenFAST structure including the tower, nacelle, drivetrain, and blades (if
BeamDyn is not used). It also sets the initial conditions for the structure.
Simulation Control
~~~~~~~~~~~~~~~~~~
Set the **Echo** flag to TRUE if you wish to have ElastoDyn echo the
contents of the ElastoDyn primary, airfoil, and blade input files (useful
for debugging errors in the input files). The echo file has the naming
convention of *OutRootFile.ED.ech*. **OutRootFile** is either
specified in the I/O SETTINGS section of the driver input file when
running ElastoDyn standalone, or by the OpenFAST program when running a
coupled simulation.
**Method** - Integration method: {1: RK4, 2: AB4, 3: ABM4} (switch)

**dT** - Integration time step (s)
Degrees of Freedom
~~~~~~~~~~~~~~~~~~
**FlapDOF1** - First flapwise blade mode DOF (flag)
**FlapDOF2** - Second flapwise blade mode DOF (flag)
**EdgeDOF** - First edgewise blade mode DOF (flag)
**TeetDOF** - Rotor-teeter DOF (flag) [unused for 3 blades]
**DrTrDOF** - Drivetrain rotational-flexibility DOF (flag)
**GenDOF** - Generator DOF (flag)
**YawDOF** - Yaw DOF (flag)
**TwFADOF1** - First fore-aft tower bending-mode DOF (flag)
**TwFADOF2** - Second fore-aft tower bending-mode DOF (flag)
**TwSSDOF1** - First side-to-side tower bending-mode DOF (flag)
**TwSSDOF2** - Second side-to-side tower bending-mode DOF (flag)
**PtfmSgDOF** - Platform horizontal surge translation DOF (flag)
**PtfmSwDOF** - Platform horizontal sway translation DOF (flag)
**PtfmHvDOF** - Platform vertical heave translation DOF (flag)
**PtfmRDOF** - Platform roll tilt rotation DOF (flag)
**PtfmPDOF** - Platform pitch tilt rotation DOF (flag)
**PtfmYDOF** - Platform yaw rotation DOF (flag)
Initial Conditions
~~~~~~~~~~~~~~~~~~
**OoPDefl** - Initial out-of-plane blade-tip displacement (meters)
**IPDefl** - Initial in-plane blade-tip deflection (meters)
**BlPitch(1)** - Blade 1 initial pitch (degrees)
**BlPitch(2)** - Blade 2 initial pitch (degrees)
**BlPitch(3)** - Blade 3 initial pitch (degrees) [unused for 2 blades]
**TeetDefl** - Initial or fixed teeter angle (degrees) [unused for 3 blades]
**Azimuth** - Initial azimuth angle for blade 1 (degrees)
**RotSpeed** - Initial or fixed rotor speed (rpm)
**NacYaw** - Initial or fixed nacelle-yaw angle (degrees)
**TTDspFA** - Initial fore-aft tower-top displacement (meters)
**TTDspSS** - Initial side-to-side tower-top displacement (meters)
**PtfmSurge** - Initial or fixed horizontal surge translational displacement of platform (meters)
**PtfmSway** - Initial or fixed horizontal sway translational displacement of platform (meters)
**PtfmHeave** - Initial or fixed vertical heave translational displacement of platform (meters)
**PtfmRoll** - Initial or fixed roll tilt rotational displacement of platform (degrees)
**PtfmPitch** - Initial or fixed pitch tilt rotational displacement of platform (degrees)
**PtfmYaw** - Initial or fixed yaw rotational displacement of platform (degrees)
Turbine Configuration
~~~~~~~~~~~~~~~~~~~~~
**NumBl** - Number of blades (-)
**TipRad** - The distance from the rotor apex to the blade tip (meters)
**HubRad** - The distance from the rotor apex to the blade root (meters)
**PreCone(1)** - Blade 1 cone angle (degrees)
**PreCone(2)** - Blade 2 cone angle (degrees)
**PreCone(3)** - Blade 3 cone angle (degrees) [unused for 2 blades]
**HubCM** - Distance from rotor apex to hub mass [positive downwind] (meters)
**UndSling** - Undersling length [distance from teeter pin to the rotor apex] (meters) [unused for 3 blades]
**Delta3** - Delta-3 angle for teetering rotors (degrees) [unused for 3 blades]
**AzimB1Up** - Azimuth value to use for I/O when blade 1 points up (degrees)
**OverHang** - Distance from yaw axis to rotor apex [3 blades] or teeter pin [2 blades] (meters)
**ShftGagL** - Distance from rotor apex [3 blades] or teeter pin [2 blades] to shaft strain gages [positive for upwind rotors] (meters)
**ShftTilt** - Rotor shaft tilt angle (degrees)
**NacCMxn** - Downwind distance from the tower-top to the nacelle CM (meters)
**NacCMyn** - Lateral distance from the tower-top to the nacelle CM (meters)
**NacCMzn** - Vertical distance from the tower-top to the nacelle CM (meters)
**NcIMUxn** - Downwind distance from the tower-top to the nacelle IMU (meters)
**NcIMUyn** - Lateral distance from the tower-top to the nacelle IMU (meters)
**NcIMUzn** - Vertical distance from the tower-top to the nacelle IMU (meters)
**Twr2Shft** - Vertical distance from the tower-top to the rotor shaft (meters)
**TowerHt** - Height of tower above ground level [onshore] or MSL [offshore] (meters)
**TowerBsHt** - Height of tower base above ground level [onshore] or MSL [offshore] (meters)
**PtfmCMxt** - Downwind distance from the ground level [onshore] or MSL [offshore] to the platform CM (meters)
**PtfmCMyt** - Lateral distance from the ground level [onshore] or MSL [offshore] to the platform CM (meters)
**PtfmCMzt** - Vertical distance from the ground level [onshore] or MSL [offshore] to the platform CM (meters)
**PtfmRefzt** - Vertical distance from the ground level [onshore] or MSL [offshore] to the platform reference point (meters)
Mass and Inertia
~~~~~~~~~~~~~~~~
**TipMass(1)** - Tip-brake mass, blade 1 (kg)
**TipMass(2)** - Tip-brake mass, blade 2 (kg)
**TipMass(3)** - Tip-brake mass, blade 3 (kg) [unused for 2 blades]
**HubMass** - Hub mass (kg)
**HubIner** - Hub inertia about rotor axis [3 blades] or teeter axis [2 blades] (kg m^2)
**GenIner** - Generator inertia about HSS (kg m^2)
**NacMass** - Nacelle mass (kg)
**NacYIner** - Nacelle inertia about yaw axis (kg m^2)
**YawBrMass** - Yaw bearing mass (kg)
**PtfmMass** - Platform mass (kg)
**PtfmRIner** - Platform inertia for roll tilt rotation about the platform CM (kg m^2)
**PtfmPIner** - Platform inertia for pitch tilt rotation about the platform CM (kg m^2)
**PtfmYIner** - Platform inertia for yaw rotation about the platform CM (kg m^2)
Blade
~~~~~
**BldNodes** - Number of blade nodes (per blade) used for analysis (-)
**BldFile(1)** - Name of file containing properties for blade 1 (quoted string)
**BldFile(2)** - Name of file containing properties for blade 2 (quoted string)
**BldFile(3)** - Name of file containing properties for blade 3 (quoted string) [unused for 2 blades]
Rotor-Teeter
~~~~~~~~~~~~
**TeetMod** - Rotor-teeter spring/damper model {0: none, 1: standard, 2: user-defined from routine UserTeet} (switch) [unused for 3 blades]
**TeetDmpP** - Rotor-teeter damper position (degrees) [used only for 2 blades and when TeetMod=1]
**TeetDmp** - Rotor-teeter damping constant (N-m/(rad/s)) [used only for 2 blades and when TeetMod=1]
**TeetCDmp** - Rotor-teeter rate-independent Coulomb-damping moment (N-m) [used only for 2 blades and when TeetMod=1]
**TeetSStP** - Rotor-teeter soft-stop position (degrees) [used only for 2 blades and when TeetMod=1]
**TeetHStP** - Rotor-teeter hard-stop position (degrees) [used only for 2 blades and when TeetMod=1]
**TeetSSSp** - Rotor-teeter soft-stop linear-spring constant (N-m/rad) [used only for 2 blades and when TeetMod=1]
**TeetHSSp** - Rotor-teeter hard-stop linear-spring constant (N-m/rad) [used only for 2 blades and when TeetMod=1]
Drivetrain
~~~~~~~~~~
**GBoxEff** - Gearbox efficiency (%)
**GBRatio** - Gearbox ratio (-)
**DTTorSpr** - Drivetrain torsional spring (N-m/rad)
**DTTorDmp** - Drivetrain torsional damper (N-m/(rad/s))
Furling
~~~~~~~
**Furling** - Read in additional model properties for furling turbine (flag) [must currently be FALSE)
**FurlFile** - Name of file containing furling properties (quoted string) [unused when Furling=False]
Tower
~~~~~
**TwrNodes** - Number of tower nodes used for analysis (-)
**TwrFile** - Name of file containing tower properties (quoted string)
.. _ED-Outputs:
Outputs
~~~~~~~
**SumPrint** [flag] Set this value to TRUE if you want ElastoDyn to generate a
summary file with the name *OutFileRoot.ED.sum*. **OutFileRoot** is specified
by the OpenFAST program when running a coupled simulation.
**OutFile** [switch] is currently unused. The eventual purpose is to allow
output from ElastoDyn to be written to a module output file (option 1), or the
main OpenFAST output file (option 2), or both. At present this switch is
ignored.
**TabDelim** [flag] is currently unused. Setting this to True will set the
delimiter for text files to the tab character for the ElastoDyn module
**OutFile**.
**OutFmt** [quoted string] is currently unused. ElastoDyn will use this string
as the numerical format specifier for output of floating-point values in its
local output specified by **OutFile**. The length of this string must not exceed
20 characters and must be enclosed in apostrophes or double quotes. You may not
specify an empty string. To ensure that fixed-width column data align properly
with the column titles, you should ensure that the width of the field is 10
characters. Using an E, EN, or ES specifier will guarantee that you will never
overflow the field because the number is too big, but such numbers are harder to
read. Using an F specifier will give you numbers that are easier to read, but
you may overflow the field. Please refer to any Fortran manual for details for
format specifiers.
**TStart** [s] sets the start time for **OutFile**. This is currently unused.
**DecFact** [-] This parameter sets the decimation factor for output. ElastoDyn
will output data to **OutFile** only once each DecFact integration time steps.
For instance, a value of 5 will cause FAST to generate output only every fifth
time step. This value must be an integer greater than zero.
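The decimation rule is plain modular arithmetic; this short sketch (not OpenFAST source) shows which integration steps would be written for a given DecFact:

```python
def output_steps(n_steps, dec_fact):
    """Indices of steps that produce output, assuming output on the first
    step and then once every dec_fact steps."""
    if dec_fact < 1:
        raise ValueError("DecFact must be an integer greater than zero")
    return [step for step in range(n_steps) if step % dec_fact == 0]

print(output_steps(20, 5))  # every fifth step: [0, 5, 10, 15]
```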
**NTwGages** [-] The number of strain-gage locations along the tower indicates
the number of input values on the next line. Valid values are integers from 0 to
5 (inclusive).
**TwrGagNd** [-] The virtual strain-gage locations along the tower are assigned
to the tower analysis nodes specified on this line. Possible values are 1 to
TwrNodes (inclusive), where 1 corresponds to the node closest to the tower base
(but not at the base) and a value of TwrNodes corresponds to the node closest to
the tower top. The exact elevations of each analysis node in the undeflected
tower, relative to the base of the tower, are determined as follows:
Elev. of node J = TwrRBHt + ( J - 1/2 ) * [ ( TowerHt + TwrDraft - TwrRBHt ) / TwrNodes ]
(for J = 1,2,...,TwrNodes)
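That formula is easy to check numerically; the following is a direct transcription of the expression above, not OpenFAST source code:

```python
def tower_node_elevations(twr_rb_ht, tower_ht, twr_draft, twr_nodes):
    """Elevation of each analysis node J = 1..TwrNodes in the undeflected
    tower: TwrRBHt + (J - 1/2) * (TowerHt + TwrDraft - TwrRBHt) / TwrNodes."""
    element = (tower_ht + twr_draft - twr_rb_ht) / twr_nodes
    return [twr_rb_ht + (j - 0.5) * element for j in range(1, twr_nodes + 1)]

# Example: 90 m tower, no rigid base or draft, 10 analysis nodes;
# nodes sit at the center of each element.
elevations = tower_node_elevations(0.0, 90.0, 0.0, 10)
print(elevations[0], elevations[-1])  # 4.5 85.5
```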
You must enter at least NTwGages values on this line.
If NTwGages is 0, this line will be skipped, but you must have a line taking up
space in the input file. You can separate the values with combinations of tabs,
spaces, and commas, but you may use only one comma between numbers.
**NBlGages** [-] specifies the number of strain-gague locations along the blade,
and indicates the number of input values expected in **BldGagNd**. This is only
used when the blade structure is modeled in ElastoDyn.
**BldGagNd** [-] specifies the virtual strain-gage locations along the blade
that should be output. Possible values are 1 to **BldNodes** (inclusive), where
1 corresponds to the node closest to the blade root (but not at the root) and a
value of BldNodes corresponds to the node closest to the blade tip. The node
locations are specified by the ElastoDyn blade input files. You must enter at
least NBlGages values on this line. If NBlGages is 0, this line will be skipped,
but you must have a line taking up space in the input file. You can separate the
values with combinations of tabs, spaces, and commas, but you may use only one
comma between numbers. This is only used when the blade structure is modeled in
ElastoDyn.
The **OutList** section controls output quantities generated by
ElastoDyn. Enter one or more lines containing quoted strings that in turn
contain one or more output parameter names. Separate output parameter
names by any combination of commas, semicolons, spaces, and/or tabs. If
you prefix a parameter name with a minus sign, “-”, underscore, “_”, or
the characters “m” or “M”, ElastoDyn will multiply the value for that
channel by –1 before writing the data. The parameters are written in the
order they are listed in the input file. ElastoDyn allows you to use
multiple lines so that you can break your list into meaningful groups
and so the lines can be shorter. You may enter comments after the
closing quote on any of the lines. Entering a line with the string “END”
at the beginning of the line or at the beginning of a quoted string
found at the beginning of the line will cause ElastoDyn to quit scanning
for more lines of channel names. Blade and tower node-related quantities
are generated for the requested nodes identified through the
**BldGagNd** and **TwrGagNd** lists above. If ElastoDyn encounters an
unknown/invalid channel name, it warns the users but will remove the
suspect channel from the output file. Please refer to the ElastoDyn tab in the
Excel file :download:`OutListParameters.xlsx <../../../OtherSupporting/OutListParameters.xlsx>`
for a complete list of possible output parameters.
.. _ED-Nodal-Outputs:
.. include:: EDNodalOutputs.rst
| 38.505525 | 143 | 0.720568 |
1fc1c65e7104d949d00b1fcf9da0b36e42b6e4ab | 697 | rst | reStructuredText | README.rst | myrix/poio-api | 754b8db78b2812003a7fe83b66503f899265ce69 | [
"Apache-2.0"
] | 10 | 2015-03-15T19:27:54.000Z | 2020-09-04T13:06:07.000Z | README.rst | myrix/poio-api | 754b8db78b2812003a7fe83b66503f899265ce69 | [
"Apache-2.0"
] | 2 | 2015-06-09T07:24:32.000Z | 2015-07-10T08:47:25.000Z | README.rst | myrix/poio-api | 754b8db78b2812003a7fe83b66503f899265ce69 | [
"Apache-2.0"
] | 4 | 2015-07-08T16:32:46.000Z | 2019-04-11T09:22:06.000Z | Poio API
========
Poio API is a free and open source Python library to access and search data from
language documentation in your linguistic analysis workflow. It converts file
formats like Elan's EAF, Toolbox files, Typecraft XML and others into annotation
graphs as defined in ISO 24612. Those graphs, for which we use an implementation
called "Graph Annotation Framework" (GrAF), allow unified access to linguistic
data from a wide range sources.
For documentation, please visit http://media.cidles.eu/poio/poio-api/
License
-------
Poio API source code is distributed under the Apache 2.0 License.
Poio API documentation is distributed under the Creative Commons Attribution
3.0 Unported.
| 36.684211 | 80 | 0.790531 |
263173e367e1479c778b7ed73a90e0ff2243106b | 4,672 | rst | reStructuredText | docs/old/annotation/samples.rst | jmtsuji/atlas | 17e078421d6ae610ccc2438bd56567b13c025ca0 | [
"BSD-3-Clause"
] | 1 | 2021-03-01T09:44:05.000Z | 2021-03-01T09:44:05.000Z | docs/old/annotation/samples.rst | jmtsuji/atlas | 17e078421d6ae610ccc2438bd56567b13c025ca0 | [
"BSD-3-Clause"
] | null | null | null | docs/old/annotation/samples.rst | jmtsuji/atlas | 17e078421d6ae610ccc2438bd56567b13c025ca0 | [
"BSD-3-Clause"
] | 1 | 2020-11-12T05:27:39.000Z | 2020-11-12T05:27:39.000Z | Defining Samples
================
Annotation
----------
Samples are defined under "samples" and have a unique name that does not
contain spaces or underscores (dashes are accepted). To only annotate, a
sample needs ``fasta`` defined::
samples:
sample-1:
fasta: /project/bins/sample-1_contigs.fasta
sample-2:
fasta: /project/bins/sample-2_contigs.fasta
All other annotation parameters can be defined following samples. See
:ref:`annotation`.
Annotation and Quantification
-----------------------------
To get counts, we need to define FASTQ file paths that will be mapped back
to the FASTA regions.
In addition to specifying FASTQ paths for each sample, the configuration will
also need to contain::
quantification: true
Interleaved
```````````
Reads are always assumed to be paired-end, so we only need to specify
``fastq``::
samples:
sample-1:
fasta: /project/bins/sample-1_contigs.fasta
fastq: /project/data/sample-1_pe.fastq
sample-2:
fasta: /project/bins/sample-2_contigs.fasta
fastq: /project/data/sample-2_pe.fastq
Paired-end
``````````
In this case, we create a list using YAML_ syntax for both R1 and R2 indexes::
samples:
sample-1:
fasta: /project/bins/sample-1_contigs.fasta
fastq:
- /project/data/sample-1_R1.fastq
- /project/data/sample-1_R2.fastq
sample-2:
fasta: /project/bins/sample-2_contigs.fasta
fastq:
- /project/data/sample-2_R1.fastq
- /project/data/sample-2_R2.fastq
Single-end
``````````
As data are assumed to be paired-end, we need to add ``paired: false``::
samples:
sample-1:
fasta: /project/bins/sample-1_contigs.fasta
fastq: /project/data/sample-1_se.fastq
paired: false
sample-2:
fasta: /project/bins/sample-2_contigs.fasta
fastq: /project/data/sample-2_se.fastq
paired: false
Example
-------
A complete example for annotation and quantification for samples with
paired-end reads in separate and in interleaved FASTQs::
samples:
sample-1:
fasta: /project/bins/sample-1_contigs.fasta
fastq: /project/data/sample-1_pe.fastq
sample-2:
fasta: /project/bins/sample-2_contigs.fasta
fastq:
- /project/data/sample-2_R1.fastq
- /project/data/sample-2_R2.fastq
quantification: true
tmpdir: /scratch
threads: 24
refseq_namemap: /pic/projects/mint/atlas_databases/refseq.db
refseq_tree: /pic/projects/mint/atlas_databases/refseq.tree
diamond_db: /pic/projects/mint/atlas_databases/refseq.dmnd
# 'fast' or 'sensitive'
diamond_run_mode: fast
# setting top_seqs to 5 will report all alignments whose score is
# at most 5% lower than the top alignment score for a query
diamond_top_seqs: 2
# maximum e-value to report alignments
diamond_e_value: "0.000001"
# minimum identity % to report an alignment
diamond_min_identity: 50
# minimum query cover % to report an alignment
diamond_query_coverage: 60
# gap open penalty
diamond_gap_open: 11
# gap extension penalty
diamond_gap_extend: 1
# Block size in billions of sequence letters to be processed at a time.
# This is the main parameter for controlling DIAMOND's memory usage.
# Bigger numbers will increase the use of memory and temporary disk space,
# but also improve performance. The program can be expected to roughly use
# six times this number of memory (in GB).
diamond_block_size: 6
# The number of chunks for processing the seed index (default=4). This
# option can be additionally used to tune the performance. It is
# recommended to set this to 1 on a high memory server, which will
# increase performance and memory usage, but not the usage of temporary
# disk space.
diamond_index_chunks: 1
# 'lca', 'majority', or 'best'; summary method for annotating ORFs; when
# using LCA, it's recommended that one limits the number of hits using a
# low top_fraction
summary_method: lca
# 'lca', 'lca-majority', or 'majority'; summary method for aggregating ORF
# taxonomic assignments to contig level assignment; 'lca' will result in
# most stringent, least specific assignments
aggregation_method: lca-majority
# constitutes a majority fraction at tree node for 'lca-majority' ORF
# aggregation method
majority_threshold: 0.51
.. _YAML: http://www.yaml.org/
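All three FASTQ layouts above can be reduced to one shape in code. The sketch below is illustrative (it is not part of atlas) and assumes a sample entry parsed from the YAML into a dict:

```python
def normalize_fastq(sample):
    """Return (fastq_paths, paired) for one sample entry, covering the
    interleaved (single path), split R1/R2 (list), and single-end cases."""
    fastq = sample.get("fastq")
    paths = [fastq] if isinstance(fastq, str) else list(fastq or [])
    paired = bool(sample.get("paired", True))   # paired-end unless stated
    return paths, paired

interleaved = {"fasta": "s.fasta", "fastq": "s_pe.fastq"}
split = {"fasta": "s.fasta", "fastq": ["s_R1.fastq", "s_R2.fastq"]}
single_end = {"fasta": "s.fasta", "fastq": "s_se.fastq", "paired": False}
print(normalize_fastq(interleaved))  # (['s_pe.fastq'], True)
print(normalize_fastq(single_end))   # (['s_se.fastq'], False)
```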
============
Todo-Backend
============
`Todo-Backend <http://todobackend.com>`_ implementation built on top of
*rororo*, which highlights class based views usage.
Requirements
============
- `redis <https://redis.io>`_ 5.0 or later
Usage
=====
.. code-block:: bash

    make EXAMPLE=todobackend example
**IMPORTANT:** Run this from the *rororo* root directory.
After, feel free to run Todo-Backend tests by opening
http://www.todobackend.com/specs/index.html?http://localhost:8080/todos/ in
your browser.
Swagger UI
----------
To consume the API via `Swagger UI <https://swagger.io/tools/swagger-ui/>`_
run the following command:

.. code-block:: bash

    docker run --rm -e URL="http://localhost:8080/todos/openapi.yaml" \
        -h localhost -p 8081:8080 swaggerapi/swagger-ui:v3.36.1
After, open http://localhost:8081/ in your browser to access Swagger UI.
3.4.0 -- 2017-07-27
-------------------
You can view the `3.4.0 milestone`_ on GitLab for more details.
- Refine logic around ``--select`` and ``--ignore`` when combined with the
default values for each. (See also `GitLab#318`_)
- Handle spaces as an alternate separator for error codes, e.g.,
``--ignore 'E123 E234'``. (See also `GitLab#329`_)
- Filter out empty select and ignore codes, e.g., ``--ignore E123,,E234``.
(See also `GitLab#330`_)
- Specify dependencies appropriately in ``setup.py`` (See also `GitLab#341`_)
- Fix bug in parsing ``--quiet`` and ``--verbose`` from config files.
(See also `GitLab!193`_)
- Remove unused import of ``os`` in the git hook template (See also
`GitLab!194`_)
.. all links
.. _3.4.0 milestone:
https://gitlab.com/pycqa/flake8/milestones/18
.. issue links
.. _GitLab#318:
https://gitlab.com/pycqa/flake8/issues/318
.. _GitLab#329:
https://gitlab.com/pycqa/flake8/issues/329
.. _GitLab#330:
https://gitlab.com/pycqa/flake8/issues/330
.. _GitLab#341:
https://gitlab.com/pycqa/flake8/issues/341
.. merge request links
.. _GitLab!193:
https://gitlab.com/pycqa/flake8/merge_requests/193
.. _GitLab!194:
https://gitlab.com/pycqa/flake8/merge_requests/194
###############################################################################
`ievv_restframework_helpers` --- Helpers for working with Django rest framework
###############################################################################
************************
FullCleanModelSerializer
************************
.. currentmodule:: ievv_opensource.ievv_restframework_helpers.full_clean_model_serializer
.. automodule:: ievv_opensource.ievv_restframework_helpers.full_clean_model_serializer
Publications
============
Mirage: A New Package to Simulate Quasar Microlensing
-----------------------------------------------------
This is Jordan Koeller's thesis, detailing Mirage and how it works.
Link to PDF
Applications of Apache Spark for Numerical Simulation
-----------------------------------------------------
Accompanying paper to a talk given at the Parallel
and Distributed Techniques and Applications 2018 conference, describing
the algorithm used in Mirage and Spark's effectiveness as a cluster computing
framework for numerical work.
ApplicationsSparkLink_
.. _ApplicationsSparkLink: https://csce.ucmss.com/cr/books/2018/LFS/CSREA2018/PDP3704.pdf
.. _Application.name:
================================================
Application.name
================================================
:ref:`string` **name**
Description
-----------
The application name.
.. _macro_geopointset:
Geopointsets in Macro
===========================
.. note::
Geopointset is the format used by Metview to combine a set of Geopoints variables into a single entity for ease of processing. Thus, a set of observations can be grouped in the same way that fields are grouped into a fieldset variable. For a full list and details of functions and operators on geopointsets, see :ref:`Geopointset functions <macro_geopoints_fn>`.
Creating a geopointset
++++++++++++++++++++++++++
A geopointset can be created with the create_geo_set() function, which takes any number of geopoints variables as arguments, or none. Both geopoints and geopointset variables can be concatenated with a geopointset.
.. code-block:: python
set1 = create_geo_set() # creates an empty set
set2 = create_geo_set(g1, g2, g3) # assuming that g1,g2,g3 are geopoints variables
set3 = set1 & g1 & g2 # set3 has 2 geopoints
set4 = set2 & set3 # set4 has 5 geopoints
Accessing geopointset elements
++++++++++++++++++++++++++++++++
The count() function returns the number of geopoints variables contained by the set.
Use the indexing operator [] to access the geopoints variables contained in a set. For example:
.. code-block:: python
print(type(set4)) # geopointset
print(count(set4)) # 5
g1 = set4[1] # extract the first item
print(type(g1)) # geopoints
print(count(g1)) # 244 (if there are 244 points in this geopoints variable)
Operations on geopointsets
++++++++++++++++++++++++++
As a geopointset is simply a container for geopoints variables, most operations on a geopointset are performed on each of its component geopoints. For example, the following line of code will return a new geopointset where each geopoints variable has had the cos() function applied to its values:
.. code-block:: python
cgset = cos(gset)
Operations between geopointsets and numbers are performed on each geopoints, e.g.
.. code-block:: python
# add 1 to each value in each geopoints var in gset
gsetplus1 = gset + 1
Operations can be performed between geopointsets and geopointsets, or geopointsets and fieldsets, as long as they both contain the same number of items, or they contain exactly one item. Otherwise, if they contain a different number of items, the computation will fail.
For example, if gset_5a and gset_5b each contain 5 geopoints variables, the following code will add each pair of geopoints variables, giving a resulting geopointset of size 5:
.. code-block:: python
gsetsum_r1 = gset_5a + gset_5b # gset_5b[n] is added to gset_5a[n]
If gset_1c contains a single geopoints variable, the following code will produce a geopointset with 5 items, the result of adding gset_1c[1] to each item in gset_5a:
.. code-block:: python
gsetsum_r2 = gset_5a + gset_1c # gset_1c[1] is added to each gset_5a[n]
Likewise, geopointset/fieldset operations work the same way:
.. code-block:: python
gsetdiff_r1 = fc_fieldset_5 - gset_5a # gset_5a[n] is subtracted from fc_fieldset_5[n]
gsetdiff_r2 = fc_fieldset_5 - gset_1c # gset_1c[1] is subtracted from each field
Filtering a geopointset
++++++++++++++++++++++++++++++
Individual geopoints variables can contain meta-data - see Geopoints for details. To select only those geopoints variables with given meta-data, use the filter() function as described in Geopointset Functions.
The Geopointset file format
++++++++++++++++++++++++++++++
The format for a geopointset file is very simply a header followed by a contatenation of geopoints files - see Geopoints for details of the format. The overall header is this::
#GEOPOINTSET
The subsequent geopoints structures should all share the same format as each other. Here's an example with 3 geopoints files inside the set::
#GEOPOINTSET
#GEO
# lat lon height date time value
# Missing values represented by 3e+38 (not user-changeable)
#DATA
69.6523 18.9057 0 20130512 0 100869.8625
63.4882 10.8795 0 20130512 0 100282.3392
63.5657 10.694 0 20130512 0 100241.1666
61.2928 5.0443 0 20130512 0 99852.18932
#GEO
# lat lon height date time value
# Missing values represented by 3e+38 (not user-changeable)
#METADATA
param=geopotential
#DATA
60.82 23.5 0 20130512 600 101045.8
#GEO
# lat lon height date time value
# Missing values represented by 3e+38 (not user-changeable)
#DATA
55.01 8.41 0 20130513 0 100949.1809
54.33 8.62 0 20130513 0 101027.9101
53.71 8.41 0 20130513 0 100949.1809
54.33 8.62 0 20130513 0 101027.9101
53.71 7.15 0 20130513 0 100846.619
.. _agent:
Agent
-----
.. autoclass:: rlpy.Agents.Agent.Agent
:members:
:mod:`data_augmenter`
==========================
.. automodule:: ct_segnet.data_utils.data_augmenter
:members:
:show-inheritance:
:tocdepth: 2
=========================
Library and Extension FAQ
=========================
.. only:: html
.. contents::
General Library Questions
=========================
How do I find a module or application to perform task X?
--------------------------------------------------------
Check :ref:`the Library Reference <library-index>` to see if there's a relevant
standard library module. (Eventually you'll learn what's in the standard
library and will be able to skip this step.)
For third-party packages, search the `Python Package Index
<https://pypi.org>`_ or try `Google <https://www.google.com>`_ or
another web search engine. Searching for "Python" plus a keyword or two for
your topic of interest will usually find something helpful.
Where is the math.py (socket.py, regex.py, etc.) source file?
-------------------------------------------------------------
If you can't find a source file for a module it may be a built-in or
dynamically loaded module implemented in C, C++ or other compiled language.
In this case you may not have the source file or it may be something like
:file:`mathmodule.c`, somewhere in a C source directory (not on the Python Path).
There are (at least) three kinds of modules in Python:
1) modules written in Python (.py);
2) modules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc);
3) modules written in C and linked with the interpreter; to get a list of these,
type::
import sys
print(sys.builtin_module_names)
How do I make a Python script executable on Unix?
-------------------------------------------------
You need to do two things: the script file's mode must be executable and the
first line must begin with ``#!`` followed by the path of the Python
interpreter.
The first is done by executing ``chmod +x scriptfile`` or perhaps ``chmod 755
scriptfile``.
The second can be done in a number of ways. The most straightforward way is to
write ::
#!/usr/local/bin/python
as the very first line of your file, using the pathname for where the Python
interpreter is installed on your platform.
If you would like the script to be independent of where the Python interpreter
lives, you can use the :program:`env` program. Almost all Unix variants support
the following, assuming the Python interpreter is in a directory on the user's
:envvar:`PATH`::
#!/usr/bin/env python
*Don't* do this for CGI scripts. The :envvar:`PATH` variable for CGI scripts is
often very minimal, so you need to use the actual absolute pathname of the
interpreter.
Occasionally, a user's environment is so full that the :program:`/usr/bin/env`
program fails; or there's no env program at all. In that case, you can try the
following hack (due to Alex Rezinsky):
.. code-block:: sh
#! /bin/sh
""":"
exec python $0 ${1+"$@"}
"""
The minor disadvantage is that this defines the script's __doc__ string.
However, you can fix that by adding ::
__doc__ = """...Whatever..."""
Is there a curses/termcap package for Python?
---------------------------------------------
.. XXX curses *is* built by default, isn't it?
For Unix variants: The standard Python source distribution comes with a curses
module in the :source:`Modules` subdirectory, though it's not compiled by default.
(Note that this is not available in the Windows distribution -- there is no
curses module for Windows.)
The :mod:`curses` module supports basic curses features as well as many additional
functions from ncurses and SYSV curses such as colour, alternative character set
support, pads, and mouse support. This means the module isn't compatible with
operating systems that only have BSD curses, but there don't seem to be any
currently maintained OSes that fall into this category.
Is there an equivalent to C's onexit() in Python?
-------------------------------------------------
The :mod:`atexit` module provides a register function that is similar to C's
:c:func:`onexit`.
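A minimal sketch of registering an exit handler (the function name here is just an illustration):

```python
import atexit

def goodbye():
    # called automatically at normal interpreter shutdown
    print("Goodbye!")

atexit.register(goodbye)
```

Handlers registered this way run in reverse order of registration when the interpreter exits normally.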
Why don't my signal handlers work?
----------------------------------
The most common problem is that the signal handler is declared with the wrong
argument list. It is called as ::
handler(signum, frame)
so it should be declared with two parameters::
def handler(signum, frame):
...
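For instance, this installs a handler with the two-argument signature and delivers a signal to the current process (a Unix-only sketch; ``SIGUSR1`` is just an example signal):

```python
import signal

received = []

def handler(signum, frame):
    # the handler must accept exactly (signum, frame)
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)   # deliver the signal synchronously
print(received == [signal.SIGUSR1])
```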
Common tasks
============
How do I test a Python program or component?
--------------------------------------------
Python comes with two testing frameworks. The :mod:`doctest` module finds
examples in the docstrings for a module and runs them, comparing the output with
the expected output given in the docstring.
The :mod:`unittest` module is a fancier testing framework modelled on Java and
Smalltalk testing frameworks.
To make testing easier, you should use good modular design in your program.
Your program should have almost all functionality
encapsulated in either functions or class methods -- and this sometimes has the
surprising and delightful effect of making the program run faster (because local
variable accesses are faster than global accesses). Furthermore the program
should avoid depending on mutating global variables, since this makes testing
much more difficult to do.
The "global main logic" of your program may be as simple as ::
if __name__ == "__main__":
main_logic()
at the bottom of the main module of your program.
Once your program is organized as a tractable collection of function and class
behaviours, you should write test functions that exercise the behaviours. A
test suite that automates a sequence of tests can be associated with each module.
This sounds like a lot of work, but since Python is so terse and flexible it's
surprisingly easy. You can make coding much more pleasant and fun by writing
your test functions in parallel with the "production code", since this makes it
easy to find bugs and even design flaws earlier.
"Support modules" that are not intended to be the main module of a program may
include a self-test of the module. ::
if __name__ == "__main__":
self_test()
Even programs that interact with complex external interfaces may be tested when
the external interfaces are unavailable by using "fake" interfaces implemented
in Python.
How do I create documentation from doc strings?
-----------------------------------------------
The :mod:`pydoc` module can create HTML from the doc strings in your Python
source code. An alternative for creating API documentation purely from
docstrings is `epydoc <http://epydoc.sourceforge.net/>`_. `Sphinx
<http://sphinx-doc.org>`_ can also include docstring content.
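:mod:`pydoc` can also be driven programmatically; for example, rendering plain-text documentation from a docstring (a small sketch):

```python
import pydoc

def greet(name):
    """Return a friendly greeting for *name*."""
    return "Hello, " + name

# render the documentation pydoc would show, as plain text
text = pydoc.render_doc(greet, renderer=pydoc.plaintext)
print("friendly greeting" in text)
```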
How do I get a single keypress at a time?
-----------------------------------------
For Unix variants there are several solutions. It's straightforward to do this
using curses, but curses is a fairly large module to learn.
.. XXX this doesn't work out of the box, some IO expert needs to check why
Here's a solution without curses::
import termios, fcntl, sys, os
fd = sys.stdin.fileno()
oldterm = termios.tcgetattr(fd)
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
try:
while True:
try:
c = sys.stdin.read(1)
print("Got character", repr(c))
except OSError:
pass
finally:
termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)
You need the :mod:`termios` and the :mod:`fcntl` module for any of this to
work, and I've only tried it on Linux, though it should work elsewhere. In
this code, characters are read and printed one at a time.
:func:`termios.tcsetattr` turns off stdin's echoing and disables canonical
mode.  :func:`fcntl.fcntl` is used to obtain stdin's file descriptor flags
and modify them for non-blocking mode. Since reading stdin when it is empty
results in an :exc:`OSError`, this error is caught and ignored.
.. versionchanged:: 3.3
*sys.stdin.read* used to raise :exc:`IOError`. Starting from Python 3.3
:exc:`IOError` is alias for :exc:`OSError`.
Threads
=======
How do I program using threads?
-------------------------------
Be sure to use the :mod:`threading` module and not the :mod:`_thread` module.
The :mod:`threading` module builds convenient abstractions on top of the
low-level primitives provided by the :mod:`_thread` module.
None of my threads seem to run: why?
------------------------------------
As soon as the main thread exits, all threads are killed. Your main thread is
running too quickly, giving the threads no time to do any work.
A simple fix is to add a sleep to the end of the program that's long enough for
all the threads to finish::
import threading, time
def thread_task(name, n):
for i in range(n):
print(name, i)
for i in range(10):
T = threading.Thread(target=thread_task, args=(str(i), i))
T.start()
time.sleep(10) # <---------------------------!
But now (on many platforms) the threads don't run in parallel, but appear to run
sequentially, one at a time! The reason is that the OS thread scheduler doesn't
start a new thread until the previous thread is blocked.
A simple fix is to add a tiny sleep to the start of the run function::
def thread_task(name, n):
time.sleep(0.001) # <--------------------!
for i in range(n):
print(name, i)
for i in range(10):
T = threading.Thread(target=thread_task, args=(str(i), i))
T.start()
time.sleep(10)
Instead of trying to guess a good delay value for :func:`time.sleep`,
it's better to use some kind of semaphore mechanism. One idea is to use the
:mod:`queue` module to create a queue object, let each thread append a token to
the queue when it finishes, and let the main thread read as many tokens from the
queue as there are threads.
How do I parcel out work among a bunch of worker threads?
---------------------------------------------------------
The easiest way is to use the :mod:`concurrent.futures` module,
especially the :mod:`~concurrent.futures.ThreadPoolExecutor` class.
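For example, dispatching work to a pool of threads and collecting the results:

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() returns results in the order of the inputs,
    # regardless of which thread finished first
    results = list(pool.map(work, range(10)))

print(results)
```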
Or, if you want fine control over the dispatching algorithm, you can write
your own logic manually. Use the :mod:`queue` module to create a queue
containing a list of jobs. The :class:`~queue.Queue` class maintains a
list of objects and has a ``.put(obj)`` method that adds items to the queue and
a ``.get()`` method to return them. The class will take care of the locking
necessary to ensure that each job is handed out exactly once.
Here's a trivial example::
import threading, queue, time
# The worker thread gets jobs off the queue. When the queue is empty, it
# assumes there will be no more work and exits.
# (Realistically workers will run until terminated.)
def worker():
print('Running worker')
time.sleep(0.1)
while True:
try:
arg = q.get(block=False)
except queue.Empty:
print('Worker', threading.current_thread(), end=' ')
print('queue empty')
break
else:
print('Worker', threading.current_thread(), end=' ')
print('running with argument', arg)
time.sleep(0.5)
# Create queue
q = queue.Queue()
# Start a pool of 5 workers
for i in range(5):
t = threading.Thread(target=worker, name='worker %i' % (i+1))
t.start()
# Begin adding work to the queue
for i in range(50):
q.put(i)
# Give threads time to run
print('Main thread sleeping')
time.sleep(5)
When run, this will produce the following output:
.. code-block:: none
Running worker
Running worker
Running worker
Running worker
Running worker
Main thread sleeping
Worker <Thread(worker 1, started 130283832797456)> running with argument 0
Worker <Thread(worker 2, started 130283824404752)> running with argument 1
Worker <Thread(worker 3, started 130283816012048)> running with argument 2
Worker <Thread(worker 4, started 130283807619344)> running with argument 3
Worker <Thread(worker 5, started 130283799226640)> running with argument 4
Worker <Thread(worker 1, started 130283832797456)> running with argument 5
...
Consult the module's documentation for more details; the :class:`~queue.Queue`
class provides a featureful interface.
What kinds of global value mutation are thread-safe?
----------------------------------------------------
A :term:`global interpreter lock` (GIL) is used internally to ensure that only one
thread runs in the Python VM at a time. In general, Python offers to switch
among threads only between bytecode instructions; how frequently it switches can
be set via :func:`sys.setswitchinterval`.  Each bytecode instruction, and
therefore all the C implementation code reached from it, is atomic from the
point of view of a Python program.
In theory, this means an exact accounting requires an exact understanding of the
PVM bytecode implementation. In practice, it means that operations on shared
variables of built-in data types (ints, lists, dicts, etc) that "look atomic"
really are.
For example, the following operations are all atomic (L, L1, L2 are lists, D,
D1, D2 are dicts, x, y are objects, i, j are ints)::
L.append(x)
L1.extend(L2)
x = L[i]
x = L.pop()
L1[i:j] = L2
L.sort()
x = y
x.field = y
D[x] = y
D1.update(D2)
D.keys()
These aren't::
i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1
Operations that replace other objects may invoke those other objects'
:meth:`__del__` method when their reference count reaches zero, and that can
affect things. This is especially true for the mass updates to dictionaries and
lists. When in doubt, use a mutex!
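For example, guarding the non-atomic ``i = i + 1`` with a :class:`threading.Lock` keeps concurrent increments correct:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:            # counter += 1 alone is not atomic
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # always 40000 when the lock is held
```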
Can't we get rid of the Global Interpreter Lock?
------------------------------------------------
.. XXX link to dbeazley's talk about GIL?
The :term:`global interpreter lock` (GIL) is often seen as a hindrance to Python's
deployment on high-end multiprocessor server machines, because a multi-threaded
Python program effectively only uses one CPU, due to the insistence that
(almost) all Python code can only run while the GIL is held.
Back in the days of Python 1.5, Greg Stein actually implemented a comprehensive
patch set (the "free threading" patches) that removed the GIL and replaced it
with fine-grained locking. Adam Olsen recently did a similar experiment
in his `python-safethread <https://code.google.com/archive/p/python-safethread>`_
project. Unfortunately, both experiments exhibited a sharp drop in single-thread
performance (at least 30% slower), due to the amount of fine-grained locking
necessary to compensate for the removal of the GIL.
This doesn't mean that you can't make good use of Python on multi-CPU machines!
You just have to be creative with dividing the work up between multiple
*processes* rather than multiple *threads*. The
:class:`~concurrent.futures.ProcessPoolExecutor` class in the new
:mod:`concurrent.futures` module provides an easy way of doing so; the
:mod:`multiprocessing` module provides a lower-level API in case you want
more control over dispatching of tasks.
Judicious use of C extensions will also help; if you use a C extension to
perform a time-consuming task, the extension can release the GIL while the
thread of execution is in the C code and allow other threads to get some work
done. Some standard library modules such as :mod:`zlib` and :mod:`hashlib`
already do this.
It has been suggested that the GIL should be a per-interpreter-state lock rather
than truly global; interpreters then wouldn't be able to share objects.
Unfortunately, this isn't likely to happen either. It would be a tremendous
amount of work, because many object implementations currently have global state.
For example, small integers and short strings are cached; these caches would
have to be moved to the interpreter state. Other object types have their own
free list; these free lists would have to be moved to the interpreter state.
And so on.
And I doubt that it can even be done in finite time, because the same problem
exists for 3rd party extensions. It is likely that 3rd party extensions are
being written at a faster rate than you can convert them to store all their
global state in the interpreter state.
And finally, once you have multiple interpreters not sharing any state, what
have you gained over running each interpreter in a separate process?
Input and Output
================
How do I delete a file? (And other file questions...)
-----------------------------------------------------
Use ``os.remove(filename)`` or ``os.unlink(filename)``; for documentation, see
the :mod:`os` module. The two functions are identical; :func:`~os.unlink` is simply
the name of the Unix system call for this function.
To remove a directory, use :func:`os.rmdir`; use :func:`os.mkdir` to create one.
``os.makedirs(path)`` will create any intermediate directories in ``path`` that
don't exist. ``os.removedirs(path)`` will remove intermediate directories as
long as they're empty; if you want to delete an entire directory tree and its
contents, use :func:`shutil.rmtree`.
To rename a file, use ``os.rename(old_path, new_path)``.
To truncate a file, open it using ``f = open(filename, "rb+")``, and use
``f.truncate(offset)``; offset defaults to the current seek position. There's
also ``os.ftruncate(fd, offset)`` for files opened with :func:`os.open`, where
*fd* is the file descriptor (a small integer).
The :mod:`shutil` module also contains a number of functions to work on files
including :func:`~shutil.copyfile`, :func:`~shutil.copytree`, and
:func:`~shutil.rmtree`.
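Putting a few of these together in a self-contained sketch (using a temporary directory so nothing real is touched):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
nested = os.path.join(base, "a", "b")
os.makedirs(nested)                       # creates 'a' and 'a/b'

path = os.path.join(nested, "x.txt")
open(path, "w").close()
os.rename(path, os.path.join(nested, "y.txt"))   # rename the file

shutil.rmtree(base)                       # delete the whole tree
print(os.path.exists(base))
```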
How do I copy a file?
---------------------
The :mod:`shutil` module contains a :func:`~shutil.copyfile` function.
Note that on Windows NTFS volumes, it does not copy
`alternate data streams
<https://en.wikipedia.org/wiki/NTFS#Alternate_data_stream_(ADS)>`_
nor `resource forks <https://en.wikipedia.org/wiki/Resource_fork>`__
on macOS HFS+ volumes, though both are now rarely used.
It also doesn't copy file permissions and metadata, though using
:func:`shutil.copy2` instead will preserve most (though not all) of it.
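For example, copying a file with :func:`shutil.copy2` so that timestamps and permission bits come along:

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "src.txt")
with open(src, "w") as f:
    f.write("hello")
dst = os.path.join(d, "dst.txt")

shutil.copy2(src, dst)        # copies contents plus most metadata
with open(dst) as f:
    copied = f.read()
print(copied)

shutil.rmtree(d)              # clean up the temporary directory
```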
How do I read (or write) binary data?
-------------------------------------
To read or write complex binary data formats, it's best to use the :mod:`struct`
module. It allows you to take a string containing binary data (usually numbers)
and convert it to Python objects; and vice versa.
For example, the following code reads two 2-byte integers and one 4-byte integer
in big-endian format from a file::
import struct
with open(filename, "rb") as f:
s = f.read(8)
x, y, z = struct.unpack(">hhl", s)
The '>' in the format string forces big-endian data; the letter 'h' reads one
"short integer" (2 bytes), and 'l' reads one "long integer" (4 bytes) from the
string.
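Going the other way, :func:`struct.pack` builds the same 8-byte record from Python numbers:

```python
import struct

# two 2-byte shorts and one 4-byte long, big-endian
data = struct.pack(">hhl", 1, 2, 3)
print(len(data))                     # 2 + 2 + 4 = 8 bytes
print(struct.unpack(">hhl", data))   # round-trips to (1, 2, 3)
```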
For data that is more regular (e.g. a homogeneous list of ints or floats),
you can also use the :mod:`array` module.
.. note::
To read and write binary data, it is mandatory to open the file in
binary mode (here, passing ``"rb"`` to :func:`open`). If you use
``"r"`` instead (the default), the file will be open in text mode
and ``f.read()`` will return :class:`str` objects rather than
:class:`bytes` objects.
I can't seem to use os.read() on a pipe created with os.popen(); why?
---------------------------------------------------------------------
:func:`os.read` is a low-level function which takes a file descriptor, a small
integer representing the opened file. :func:`os.popen` creates a high-level
file object, the same type returned by the built-in :func:`open` function.
Thus, to read *n* bytes from a pipe *p* created with :func:`os.popen`, you need to
use ``p.read(n)``.
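For example (a sketch assuming a Unix-style ``echo`` command is available on the system):

```python
import os

with os.popen("echo hello") as p:   # p is a high-level file object
    output = p.read()               # so use .read(), not os.read()
print(output.strip())
```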
.. XXX update to use subprocess. See the :ref:`subprocess-replacements` section.
How do I run a subprocess with pipes connected to both input and output?
------------------------------------------------------------------------
Use the :mod:`subprocess` module.  For example::

   import subprocess

   p = subprocess.Popen(["command"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
   output, _ = p.communicate("input\n")
Warning: in general it is unwise to read from and write to the child's pipes
directly, because you can easily cause a deadlock where your process is blocked
waiting for output from the child while the child is blocked waiting for input
from you.  This can be caused by the parent expecting the child to output more
text than it does or by data being stuck in stdio buffers due to lack of
flushing; :meth:`subprocess.Popen.communicate` is written to avoid these
deadlocks.
The Python parent can of course explicitly flush the data it sends to the
child before it reads any output, but if the child is a naive C program it
may have been written to never explicitly flush its output, even if it is
interactive, since flushing is normally automatic.
Note that a deadlock is also possible if you read stdout and stderr through
separate pipes.  If one of the two is too large for the internal buffer
(increasing the buffer size does not help) and you ``read()`` the other one
first, there is a deadlock, too.
Note also that unless your program waits on its child processes (for example
with :meth:`subprocess.Popen.wait` or :func:`os.waitpid`), finished children
are never reaped, and eventually creating new ones will fail because of a
limit on the number of child processes.
In many cases, all you really need is to run some data through a command and
get the result back.  Unless the amount of data is very large, the easiest way
to do this is :func:`subprocess.run`, which feeds the input, collects the
output, and waits for the child for you::

   import subprocess

   result = subprocess.run(["grep", "spam"], input="\n\nhere spam\n\n",
                           capture_output=True, text=True)
   print(result.returncode, result.stdout)
Note that many interactive programs (e.g. vi) don't work well with pipes
substituted for standard input and output. You will have to use pseudo ttys
("ptys") instead of pipes. Or you can use a Python interface to Don Libes'
"expect" library. A Python extension that interfaces to expect is called
"expy" and available from http://expectpy.sourceforge.net. A pure Python
solution that works like expect is `pexpect
<https://pypi.org/project/pexpect/>`_.
How do I access the serial (RS232) port?
----------------------------------------
For Win32, OSX, Linux, BSD, Jython, IronPython:
https://pypi.org/project/pyserial/
For Unix, see a Usenet post by Mitch Chapman:
https://groups.google.com/groups?selm=34A04430.CF9@ohioee.com
Why doesn't closing sys.stdout (stdin, stderr) really close it?
---------------------------------------------------------------
Python :term:`file objects <file object>` are a high-level layer of
abstraction on low-level C file descriptors.
For most file objects you create in Python via the built-in :func:`open`
function, ``f.close()`` marks the Python file object as being closed from
Python's point of view, and also arranges to close the underlying C file
descriptor. This also happens automatically in ``f``'s destructor, when
``f`` becomes garbage.
But stdin, stdout and stderr are treated specially by Python, because of the
special status also given to them by C. Running ``sys.stdout.close()`` marks
the Python-level file object as being closed, but does *not* close the
associated C file descriptor.
To close the underlying C file descriptor for one of these three, you should
first be sure that's what you really want to do (e.g., you may confuse
extension modules trying to do I/O). If it is, use :func:`os.close`::
    import os, sys

    os.close(sys.stdin.fileno())
    os.close(sys.stdout.fileno())
    os.close(sys.stderr.fileno())
Or you can use the numeric constants 0, 1 and 2, respectively.
Network/Internet Programming
============================
What WWW tools are there for Python?
------------------------------------
See the chapters titled :ref:`internet` and :ref:`netdata` in the Library
Reference Manual. Python has many modules that will help you build server-side
and client-side web systems.
.. XXX check if wiki page is still up to date
A summary of available frameworks is maintained by Paul Boddie at
https://wiki.python.org/moin/WebProgramming\ .
Cameron Laird maintains a useful set of pages about Python web technologies at
http://phaseit.net/claird/comp.lang.python/web_python.
How can I mimic CGI form submission (METHOD=POST)?
--------------------------------------------------
I would like to retrieve web pages that are the result of POSTing a form. Is
there existing code that would let me do this easily?
Yes. Here's a simple example that uses :mod:`urllib.request`::
    #!/usr/local/bin/python

    import urllib.request

    # build the query string
    qs = "First=Josephine&MI=Q&Last=Public"

    # connect and send the server a path
    req = urllib.request.urlopen('http://www.some-server.out-there'
                                 '/cgi-bin/some-cgi-script', data=qs)
    with req:
        msg, hdrs = req.read(), req.info()
Note that in general for percent-encoded POST operations, query strings must be
quoted using :func:`urllib.parse.urlencode`. For example, to send
``name=Guy Steele, Jr.``::
    >>> import urllib.parse
    >>> urllib.parse.urlencode({'name': 'Guy Steele, Jr.'})
    'name=Guy+Steele%2C+Jr.'
.. seealso:: :ref:`urllib-howto` for extensive examples.
What module should I use to help with generating HTML?
------------------------------------------------------
.. XXX add modern template languages
You can find a collection of useful links on the `Web Programming wiki page
<https://wiki.python.org/moin/WebProgramming>`_.
How do I send mail from a Python script?
----------------------------------------
Use the standard library module :mod:`smtplib`.
Here's a very simple interactive mail sender that uses it. This method will
work on any host that supports an SMTP listener. ::
    import sys, smtplib

    fromaddr = input("From: ")
    toaddrs = input("To: ").split(',')
    print("Enter message, end with ^D:")
    msg = ''
    while True:
        line = sys.stdin.readline()
        if not line:
            break
        msg += line

    # The actual mail send
    server = smtplib.SMTP('localhost')
    server.sendmail(fromaddr, toaddrs, msg)
    server.quit()
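On Python 3, the same send can be written with the :mod:`email.message` API, which takes care of header formatting for you. The addresses below are placeholders, and the actual send (commented out) assumes a reachable SMTP listener:

```python
import smtplib
from email.message import EmailMessage

def build_message(fromaddr, toaddrs, subject, body):
    """Assemble an RFC 5322 message with proper headers."""
    msg = EmailMessage()
    msg["From"] = fromaddr
    msg["To"] = ", ".join(toaddrs)
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_message("me@example.com", ["you@example.com"],
                    "test", "Some text\n")

# Sending is then a single call; this requires an SMTP server:
# with smtplib.SMTP("localhost") as server:
#     server.send_message(msg)
print(msg["Subject"])
```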
A Unix-only alternative uses sendmail. The location of the sendmail program
varies between systems; sometimes it is ``/usr/lib/sendmail``, sometimes
``/usr/sbin/sendmail``. The sendmail manual page will help you out. Here's
some sample code::
    import os

    SENDMAIL = "/usr/sbin/sendmail"  # sendmail location
    p = os.popen("%s -t -i" % SENDMAIL, "w")
    p.write("To: receiver@example.com\n")
    p.write("Subject: test\n")
    p.write("\n")  # blank line separating headers from body
    p.write("Some text\n")
    p.write("some more text\n")
    sts = p.close()
    if sts is not None:  # close() returns None on success
        print("Sendmail exit status", sts)
How do I avoid blocking in the connect() method of a socket?
------------------------------------------------------------
The :mod:`select` module is commonly used to help with asynchronous I/O on
sockets.
To prevent the TCP connect from blocking, you can set the socket to non-blocking
mode. Then when you do the :meth:`socket.connect`, you will either connect immediately
(unlikely) or get an exception that contains the error number as ``.errno``.
``errno.EINPROGRESS`` indicates that the connection is in progress, but hasn't
finished yet. Different OSes will return different values, so you're going to
have to check what's returned on your system.
You can use the :meth:`socket.connect_ex` method to avoid creating an exception. It will
just return the errno value. To poll, you can call :meth:`socket.connect_ex` again later
-- ``0`` or ``errno.EISCONN`` indicate that you're connected -- or you can pass this
socket to :meth:`select.select` to check if it's writable.
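Putting those pieces together, here is a self-contained sketch of a non-blocking connect; the listener exists only so the example can run against localhost:

```python
import errno
import select
import socket

# A listener on localhost so the example is self-contained.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
addr = listener.getsockname()

client = socket.socket()
client.setblocking(False)
err = client.connect_ex(addr)        # returns an errno instead of raising
assert err in (0, errno.EINPROGRESS, errno.EWOULDBLOCK)

# Wait until the socket is writable, i.e. the connect has finished.
_, writable, _ = select.select([], [client], [], 5.0)
connected = client in writable and client.getsockopt(
    socket.SOL_SOCKET, socket.SO_ERROR) == 0
print(connected)
client.close()
listener.close()
```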
.. note::
The :mod:`asyncio` module provides a general purpose single-threaded and
concurrent asynchronous library, which can be used for writing non-blocking
network code.
The third-party `Twisted <https://twistedmatrix.com/trac/>`_ library is
a popular and feature-rich alternative.
Databases
=========
Are there any interfaces to database packages in Python?
--------------------------------------------------------
Yes.
Interfaces to disk-based hashes such as :mod:`DBM <dbm.ndbm>` and :mod:`GDBM
<dbm.gnu>` are also included with standard Python. There is also the
:mod:`sqlite3` module, which provides a lightweight disk-based relational
database.
Support for most relational databases is available. See the
`DatabaseProgramming wiki page
<https://wiki.python.org/moin/DatabaseProgramming>`_ for details.
How do you implement persistent objects in Python?
--------------------------------------------------
The :mod:`pickle` library module solves this in a very general way (though you
still can't store things like open files, sockets or windows), and the
:mod:`shelve` library module uses pickle and (g)dbm to create persistent
mappings containing arbitrary Python objects.
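A minimal sketch of both approaches (the file location is a temporary path created just for the example):

```python
import os
import pickle
import shelve
import tempfile

record = {"name": "spam", "scores": [1, 2, 3]}

# pickle: any picklable object -> bytes and back.
blob = pickle.dumps(record)
restored = pickle.loads(blob)

# shelve: a persistent, dict-like mapping backed by (g)dbm + pickle.
path = os.path.join(tempfile.mkdtemp(), "store")
with shelve.open(path) as db:
    db["rec"] = record           # keys must be strings
with shelve.open(path) as db:
    on_disk = db["rec"]
```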
Mathematics and Numerics
========================
How do I generate random numbers in Python?
-------------------------------------------
The standard module :mod:`random` implements a random number generator. Usage
is simple::
    import random
    random.random()
This returns a random floating point number in the range [0, 1).
There are also many other specialized generators in this module, such as:
* ``randrange(a, b)`` chooses an integer in the range [a, b).
* ``uniform(a, b)`` chooses a floating point number in the range [a, b).
* ``normalvariate(mean, sdev)`` samples the normal (Gaussian) distribution.
Some higher-level functions operate on sequences directly, such as:
* ``choice(S)`` chooses a random element from a given sequence.
* ``shuffle(L)`` shuffles a list in-place, i.e. permutes it randomly.
There's also a ``Random`` class you can instantiate to create multiple
independent random number generators.
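For example, seeding two ``Random`` instances identically yields identical, reproducible streams, while neither disturbs the module-level generator:

```python
import random

# Two generators with the same seed produce identical streams...
a = random.Random(42)
b = random.Random(42)
same = [a.random() for _ in range(3)] == [b.random() for _ in range(3)]

# ...while a differently seeded generator diverges, and none of these
# instances affect the module-level generator behind random.random().
c = random.Random(7)
different = c.random() != random.Random(42).random()
print(same, different)
```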
| 37.517241 | 89 | 0.684869 |
f30d119f4d6ad7ffee4044a7cf048b57934bf03b | 431 | rst | reStructuredText | node_modules/zeppelin-solidity/docs/source/limitbalance.rst | lucav/BusinessSmartContract | e67506be8f833a78ebbc42d81b539f9b42a2e901 | [
"Unlicense"
] | 22 | 2018-04-10T16:38:04.000Z | 2019-04-25T09:08:16.000Z | node_modules/zeppelin-solidity/docs/source/limitbalance.rst | lucav/BusinessSmartContract | e67506be8f833a78ebbc42d81b539f9b42a2e901 | [
"Unlicense"
] | 54 | 2019-06-21T06:48:30.000Z | 2021-10-18T20:24:30.000Z | node_modules/zeppelin-solidity/docs/source/limitbalance.rst | lucav/BusinessSmartContract | e67506be8f833a78ebbc42d81b539f9b42a2e901 | [
"Unlicense"
] | 15 | 2018-04-11T16:24:58.000Z | 2018-08-30T01:03:08.000Z | LimitBalance
=============================================
Base contract that provides mechanism for limiting the amount of funds a contract can hold.
LimitBalance(unit _limit)
""""""""""""""""""""""""""""
Constructor takes an unsigned integer and sets it as the limit of funds this contract can hold.
modifier limitedPayable()
""""""""""""""""""""""""""""
Throws an error if this contract's balance is already above the limit.
| 33.153846 | 95 | 0.61949 |
a0cce75a89f846e69dc80ba7c6d3188c160f4b37 | 3,484 | rst | reStructuredText | docs/zh_CN/rst/gettingstarted.rst | shahpiyushv/esp-jumpstart | c6a5010c099220a37786564e7d6581d4393cad7a | [
"Apache-2.0"
] | null | null | null | docs/zh_CN/rst/gettingstarted.rst | shahpiyushv/esp-jumpstart | c6a5010c099220a37786564e7d6581d4393cad7a | [
"Apache-2.0"
] | null | null | null | docs/zh_CN/rst/gettingstarted.rst | shahpiyushv/esp-jumpstart | c6a5010c099220a37786564e7d6581d4393cad7a | [
"Apache-2.0"
] | null | null | null | Getting Started
===============
:link_to_translation:`en:[English]`
In this chapter, we introduce the ESP32 development environment and help you get familiar with the development tools and code repositories available for the ESP32.
Development Overview
--------------------
A typical development setup for building products with the ESP32 looks like this:
.. figure:: ../../_static/dev_setup.png
   :alt: Typical Developer Setup

   ESP32 product development process
The computer shown above, the development host, can run Linux, Windows or macOS. The ESP32 development board is connected to the host over USB. The host holds ESP-IDF (Espressif's SDK), the compiler toolchain and the project code. The host first compiles the code into an executable, tools on the host then flash the generated image to the board, and the board executes it. Finally, you can view the logs from the host.
About ESP-IDF
-------------
ESP-IDF is Espressif's IoT Development Framework for the ESP32.
- ESP-IDF contains a collection of libraries and header files that provide the core components needed to build software projects on the ESP32.
- ESP-IDF also provides the tools and features most commonly used during development and production, such as build, flash, debug and measurement tools.
Setting up ESP-IDF
~~~~~~~~~~~~~~~~~~
Please follow the steps in the `ESP-IDF Getting Started guide <https://docs.espressif.com/projects/esp-idf/zh_CN/latest/get-started/index.html>`_ to set up ESP-IDF. Note: please complete all the steps on that page.
Before moving on, make sure your development host is set up correctly and you have built your first application following the steps in the link above. Once that is done, let's continue exploring ESP-IDF.
ESP-IDF in Detail
~~~~~~~~~~~~~~~~~
ESP-IDF uses a component-based architecture:
.. figure:: ../../_static/idf_comp.png
   :alt: Component Based Design

   Component-based design
All software in ESP-IDF is provided as "components", for example the operating system, network protocol stacks, Wi-Fi drivers, and middleware such as the HTTP server.
With this component-based architecture, you can easily use additional components developed in-house or provided by third parties.
Developers typically use ESP-IDF to build an *application* that contains the business logic, peripheral drivers and the SDK configuration.
.. figure:: ../../_static/app_structure.png
   :alt: Application's Structure

   Application structure
An application must contain a *main* component, the primary component that holds the application logic. Beyond that, the application may include other components as needed. The application's *Makefile* defines the build instructions, and an optional *sdkconfig.defaults* file stores the application's default SDK configuration.
Getting the ESP-Jumpstart Repository
------------------------------------
The ESP-Jumpstart repository contains a series of *applications* built with ESP-IDF, which we will use in this exercise. First, clone the repository:
::
   $ git clone --recursive https://github.com/espressif/esp-jumpstart
Since we are building production-ready firmware, we develop against a stable release of ESP-IDF. ESP-Jumpstart currently uses the stable ESP-IDF v3.2 release, so please switch to that version:
::
   $ cd esp-idf
   $ git checkout -b release/v3.2 remotes/origin/release/v3.2
   $ git submodule update --recursive
Now let's build the first application in ESP-Jumpstart, *Hello World*, and flash it to the development board. You should already be familiar with these steps:
::
   $ cd esp-jumpstart/1_hello_world
   $ make -j8 menuconfig
   $ export ESPPORT=/dev/cu.SLAB_USBTOUART # Or the correct device name for your setup
   $ export ESPBAUD=921600
   $ make -j8 flash monitor
These steps build the entire SDK and the application. After a successful build, the generated firmware is flashed to the board.
Once flashing succeeds, the device restarts, and you can watch the firmware's output on the console.
.. _sec_for\_esp8266\_users:
For ESP8266 Users
~~~~~~~~~~~~~~~~~
Make sure IDF\_PATH is set to the path of ESP8266\_RTOS\_SDK, and use the platform/esp8266 branch of esp-jumpstart. Switch to that branch with the following commands:
::
   $ cd esp-jumpstart
   $ git checkout -b platform/esp8266 origin/platform/esp8266
The Code
--------
Now let's look at the code of the Hello World application. It is only a few lines:
.. code:: c

   #include <stdio.h>
   #include "freertos/FreeRTOS.h"
   #include "freertos/task.h"

   void app_main()
   {
       int i = 0;
       while (1) {
           printf("[%d] Hello world!\n", i);
           i++;
           vTaskDelay(5000 / portTICK_PERIOD_MS);
       }
   }
This code is quite simple. A few points worth noting:
- The app_main() function is the application's entry point; every application starts executing here. It is called after the FreeRTOS kernel is running on both cores of the ESP32. Once FreeRTOS finishes initializing, it creates a new application thread, called the main thread, on one of the ESP32's cores, and app_main() is invoked in that thread. The application thread's stack size can be configured through the SDK configuration.
- C library functions such as printf(), strlen() and time() can be called directly. IDF uses the newlib C standard library, a low-footprint implementation that supports most of the C library (stdio, stdlib, string operations, math, time/timezones, file/directory operations and so on), but not signal, locale, wchr and a few others. In the example above, we use printf() to print output to the console.
- FreeRTOS is the operating system driving the ESP32's two cores. `FreeRTOS <https://www.freertos.org>`_ is a small kernel that provides mechanisms for task creation, inter-task communication (semaphores, message queues, mutexes), interrupts and timers. In the example above, we use vTaskDelay to put the thread to sleep for 5 seconds. For details on the FreeRTOS APIs, see the `FreeRTOS documentation <https://www.freertos.org/a00106.html>`_.
Progress So Far
---------------
So far we have acquired basic development skills: we can compile code, flash firmware, and view firmware logs and messages.
Next, let's build a simple power outlet with the ESP32.
3414b9fc7a9ba5c50eaeb265fccd155774006db9 | 5,015 | rst | reStructuredText | docs/comware_loghost_module.rst | HPENetworking/hpe-cw7-ansible | a7569b1dd21ad38a53d825eb4d4b2caf8ff6ea16 | [
"Apache-2.0"
] | 4 | 2022-01-10T21:02:00.000Z | 2022-03-09T03:05:22.000Z | docs/comware_loghost_module.rst | flycoolman/hpe-cw7-ansible | a7569b1dd21ad38a53d825eb4d4b2caf8ff6ea16 | [
"Apache-2.0"
] | null | null | null | docs/comware_loghost_module.rst | flycoolman/hpe-cw7-ansible | a7569b1dd21ad38a53d825eb4d4b2caf8ff6ea16 | [
"Apache-2.0"
] | 2 | 2022-01-10T21:03:07.000Z | 2022-01-20T09:11:44.000Z | .. _comware_loghost:
comware_loghost
++++++++++++++++++++++++++++
.. contents::
:local:
:depth: 1
Synopsis
--------
Added in version 1.0
Manage info-center log host and related parameters on V7 devices
Options
-------
.. raw:: html
<table border=1 cellpadding=4>
<tr>
<th class="head">parameter</th>
<th class="head">required</th>
<th class="head">default</th>
<th class="head">choices</th>
<th class="head">comments</th>
</tr><tr style="text-align:center">
<td style="vertical-align:middle">loghost</td>
<td style="vertical-align:middle">yes</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">Address of the log host</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">VRF</td>
<td style="vertical-align:middle">yes</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">VRF instance name</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">hostport</td>
<td style="vertical-align:middle">no</td>
<td style="vertical-align:middle">514</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">Port number of the log host.</td>
</tr>
<tr style="text-align:center">
    <td style="vertical-align:middle">facility</td>
    <td style="vertical-align:middle">no</td>
    <td style="vertical-align:middle">168</td>
    <td style="vertical-align:middle"></td>
    <td style="vertical-align:middle;text-align:left">Syslog facility of the sent log information.</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">sourceID</td>
<td style="vertical-align:middle">no</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">Configure the source IP address of the sent log information.The default state is Using the primary IP address of the outgoing interface as the source IP address of the sent log information</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">state</td>
<td style="vertical-align:middle">no</td>
<td style="vertical-align:middle">present</td>
<td style="vertical-align:middle;text-align:left"><ul style="margin:0;"><li>present</li><li>absent</li></td></td>
<td style="vertical-align:middle;text-align:left">Desired state of the switchport</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">hostname</td>
<td style="vertical-align:middle">yes</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">IP Address or hostname of the Comware v7 device that has NETCONF enabled</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">username</td>
<td style="vertical-align:middle">yes</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">Username used to login to the switch</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">password</td>
<td style="vertical-align:middle">yes</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">Password used to login to the switch</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">port</td>
<td style="vertical-align:middle">no</td>
<td style="vertical-align:middle">830</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">NETCONF port number</td>
</tr>
<tr style="text-align:center">
<td style="vertical-align:middle">look_for_keys</td>
<td style="vertical-align:middle">no</td>
<td style="vertical-align:middle">False</td>
<td style="vertical-align:middle"></td>
<td style="vertical-align:middle;text-align:left">Whether searching for discoverable private key files in ~/.ssh/</td>
</tr>
</table><br>
Examples
--------
.. raw:: html
<br/>
::
# basic config
- comware_loghost: loghost=3.3.3.7 VRF=vpn2 hostport=512 facility=128 sourceID=LoopBack0 username={{ username }} password={{ password }} hostname={{ inventory_hostname }}
# delete config
- comware_loghost: loghost=3.3.3.7 VRF=vpn2 hostport=512 facility=128 sourceID=LoopBack0 state=absent username={{ username }} password={{ password }} hostname={{ inventory_hostname }}
.. note:: - | 37.425373 | 251 | 0.625723 |
8c66f08becebd219615becb9e4cad31196ef8341 | 2,139 | rst | reStructuredText | docs/dust_extinction/references.rst | saimn/dust_extinction | 6f68953735563481910c3dada95534523a0bce14 | [
"BSD-3-Clause"
] | null | null | null | docs/dust_extinction/references.rst | saimn/dust_extinction | 6f68953735563481910c3dada95534523a0bce14 | [
"BSD-3-Clause"
] | null | null | null | docs/dust_extinction/references.rst | saimn/dust_extinction | 6f68953735563481910c3dada95534523a0bce14 | [
"BSD-3-Clause"
] | null | null | null | ##########
References
##########
CCM89: `Cardelli, Clayton, & Mathis 1989, ApJ, 345, 245
<http://ui.adsabs.harvard.edu/abs/1989ApJ...345..245C>`_
CT06: `Chiar & Tielens 2006, ApJ, 637, 774
<https://ui.adsabs.harvard.edu/abs/2006ApJ...637..774C>`_
F99: `Fitzpatrick 1999, PASP, 111, 63
<http://ui.adsabs.harvard.edu/abs/1999PASP..111...63F>`_
F04: `Fitzpatrick 2004, ASP Conf. Ser. 309, Astrophysics of Dust, 33
<http://ui.adsabs.harvard.edu/abs/2004ASPC..309...33F>`_
FM90: `Fitzpatrick & Massa 1990, ApJS, 72, 163
<http://ui.adsabs.harvard.edu/abs/1990ApJS...72..163F>`_
FM07: `Fitzpatrick & Massa 2007, ApJ, 663, 320
<http://ui.adsabs.harvard.edu/abs/2007ApJ...663..320F>`_
F19: `Fitzpatrick, Massa, Gordon, et al. 2019, ApJ, 886, 108
<https://ui.adsabs.harvard.edu/abs/2019ApJ...886..108F>`_
F11: `Fritz et al. 2011, ApJ, 737, 73
<https://ui.adsabs.harvard.edu/abs/2011ApJ...737...73F>`_
G03: `Gordon et al. 2003, ApJ, 594, 279
<http://ui.adsabs.harvard.edu/abs/2003ApJ...594..279G>`_
GCC09: `Gordon, Cartledge, & Clayton 2009, ApJ, 705, 1320
<http://ui.adsabs.harvard.edu/abs/2009ApJ...705.1320G>`_
G16: `Gordon et al. 2016, ApJ, 826, 104
<http://ui.adsabs.harvard.edu/abs/2016ApJ...826..104G>`_
I05: `Indebetouw et al. 2005, ApJ, 619, 931
<https://ui.adsabs.harvard.edu/abs/2005ApJ...619..931I>`_
M14: `Maíz Apellániz et al. 2014, A&A, 564, 63
<http://ui.adsabs.harvard.edu/abs/2014A%26A...564A..63M>`_
OD94: `O'Donnell 1994, ApJ, 422, 158
<http://ui.adsabs.harvard.edu/abs/1994ApJ...422..158O>`_
P92: `Pei 1992, ApJ, 395, 130
<http://ui.adsabs.harvard.edu/abs/1992ApJ...395..130P>`_
RL85: `Rieke & Lebofsky 1985, ApJ, 288, 618
<https://ui.adsabs.harvard.edu/abs/1985ApJ...288..618R>`_
VCG04: `Valencic, Clayton, & Gordon 2014, 616, 912
<http://ui.adsabs.harvard.edu/abs/2004ApJ...616..912V>`_
Naming Convention
=================
The names of models are use the paper where they were presented and are
based on the author last name initials and year published.
For papers with 3 or fewer authors, the initials of all the authors are used.
For papers with 4 or more authors, only the initial of the 1st authors is used.
| 33.952381 | 79 | 0.690977 |
4612e96b8d000929a8fed0eea5607118d069f958 | 2,346 | rst | reStructuredText | docs/source/tutorials/evaluator.rst | 12yuens2/jMetalPy | 6f54940cb205df831f5498e2eac2520b331ee4fd | [
"MIT"
] | 335 | 2017-03-16T19:44:50.000Z | 2022-03-30T08:50:46.000Z | docs/source/tutorials/evaluator.rst | 12yuens2/jMetalPy | 6f54940cb205df831f5498e2eac2520b331ee4fd | [
"MIT"
] | 85 | 2017-05-16T06:40:51.000Z | 2022-02-05T23:43:49.000Z | docs/source/tutorials/evaluator.rst | 12yuens2/jMetalPy | 6f54940cb205df831f5498e2eac2520b331ee4fd | [
"MIT"
] | 130 | 2017-02-08T01:19:15.000Z | 2022-03-25T08:32:08.000Z | Evaluate solutions
========================
The lifecycle of metaheuristics often requires evaluating a list of solutions on every iteration. In evolutionary algorithms, for example, this list of solutions is known as the *population*.
In order to evaluate a population, NSGA-II (and in general, any generational algorithms in jMetalPy) uses an evaluator object.
Sequential
------------------------
The default evaluator runs in a sequential fashion (i.e., one solution at a time):
.. code-block:: python
    from jmetal.util.evaluator import SequentialEvaluator

    algorithm = NSGAII(
        problem=problem,
        population_size=100,
        offspring_population_size=100,
        ...
        population_evaluator=SequentialEvaluator(),
    )
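Conceptually, a sequential evaluator simply applies the problem's evaluate method to each solution in turn. The class and problem names below are hypothetical stand-ins for illustration, not jMetalPy API:

```python
# Hypothetical names for illustration; not the jMetalPy API.
class SequentialEvaluatorSketch:
    def evaluate(self, solution_list, problem):
        for solution in solution_list:
            problem.evaluate(solution)     # one solution at a time
        return solution_list

class SquareProblem:
    """Toy single-objective problem: minimize x**2."""
    def evaluate(self, solution):
        solution["objective"] = solution["x"] ** 2

population = [{"x": x} for x in range(4)]
evaluated = SequentialEvaluatorSketch().evaluate(population, SquareProblem())
print([s["objective"] for s in evaluated])   # [0, 1, 4, 9]
```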
API
^^^
.. autoclass:: jmetal.util.evaluator.SequentialEvaluator
:members:
:undoc-members:
:show-inheritance:
Parallel and distributed
------------------------
Solutions can also be evaluated in parallel, using threads or processes:
.. code-block:: python
    from jmetal.util.evaluator import MapEvaluator
    from jmetal.util.evaluator import MultiprocessEvaluator
jMetalPy includes an evaluator based on Apache Spark, so the solutions can be evaluated in a variety of parallel systems (multicores, clusters):
.. code-block:: python
    from jmetal.util.evaluator import SparkEvaluator

    algorithm = NSGAII(
        problem=problem,
        population_size=100,
        offspring_population_size=100,
        ...
        population_evaluator=SparkEvaluator(processes=8),
    )
Or by means of Dask:
.. code-block:: python
    from jmetal.util.evaluator import DaskEvaluator

    algorithm = NSGAII(
        problem=problem,
        population_size=100,
        offspring_population_size=100,
        ...
        population_evaluator=DaskEvaluator(),
    )
)
.. warning:: :code:`SparkEvaluator` and :code:`DaskEvaluator` requires pySpark and Dask, respectively.
API
^^^
.. autoclass:: jmetal.util.evaluator.MapEvaluator
:members:
:undoc-members:
:show-inheritance:
.. autoclass:: jmetal.util.evaluator.MultiprocessEvaluator
:members:
:undoc-members:
:show-inheritance:
.. autoclass:: jmetal.util.evaluator.SparkEvaluator
:members:
:undoc-members:
:show-inheritance:
.. autoclass:: jmetal.util.evaluator.DaskEvaluator
:members:
:undoc-members:
:show-inheritance:
| 24.694737 | 186 | 0.707587 |
e528479f47326f099199d521ca75cb81ad3ce1c9 | 385 | rst | reStructuredText | src/Nonunit/Review/index.rst | thewtex/ITKSphinxExamples | 66b6352596c6a819111802674bf5851f09c9946d | [
"Apache-2.0"
] | 34 | 2015-01-26T19:38:36.000Z | 2021-02-04T02:15:41.000Z | src/Nonunit/Review/index.rst | thewtex/ITKSphinxExamples | 66b6352596c6a819111802674bf5851f09c9946d | [
"Apache-2.0"
] | 142 | 2016-01-22T15:59:25.000Z | 2021-03-17T15:11:19.000Z | src/Nonunit/Review/index.rst | thewtex/ITKSphinxExamples | 66b6352596c6a819111802674bf5851f09c9946d | [
"Apache-2.0"
] | 32 | 2015-01-26T19:38:41.000Z | 2021-03-17T15:28:14.000Z | Review
======
.. toctree::
:maxdepth: 1
GeometricPropertiesOfRegion/Documentation.rst
MultiphaseChanAndVeseSparseFieldLevelSetSegmentation/Documentation.rst
SegmentBloodVesselsWithMultiScaleHessianBasedMeasure/Documentation.rst
SinglephaseChanAndVeseDenseFieldLevelSetSegmentation/Documentation.rst
SinglephaseChanAndVeseSparseFieldLevelSetSegmentation/Documentation.rst
| 32.083333 | 73 | 0.877922 |
a7f8fa8bbd159512372641a4b54d2164ac7728b0 | 9,786 | rst | reStructuredText | doc/release/0.11.0-notes.rst | seberg/scipy | d8081cdd40ed8cbebd5905c0ad6c323c57d5da6e | [
"BSD-3-Clause"
] | 3 | 2019-08-14T03:11:16.000Z | 2021-04-15T19:45:35.000Z | doc/release/0.11.0-notes.rst | seberg/scipy | d8081cdd40ed8cbebd5905c0ad6c323c57d5da6e | [
"BSD-3-Clause"
] | null | null | null | doc/release/0.11.0-notes.rst | seberg/scipy | d8081cdd40ed8cbebd5905c0ad6c323c57d5da6e | [
"BSD-3-Clause"
] | 1 | 2019-08-13T21:23:57.000Z | 2019-08-13T21:23:57.000Z | ==========================
SciPy 0.11.0 Release Notes
==========================
.. contents::
SciPy 0.11.0 is the culmination of 8 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and
better documentation. Highlights of this release are:
- A new module has been added which provides a number of common sparse graph
algorithms.
- New unified interfaces to the existing optimization and root finding
functions have been added.
All users are encouraged to upgrade to this release, as there are a large
number of bug-fixes and optimizations. Our development attention will now
shift to bug-fix releases on the 0.11.x branch, and on adding new features on
the master branch.
This release requires Python 2.4-2.7 or 3.1-3.2 and NumPy 1.5.1 or greater.
New features
============
Sparse Graph Submodule
----------------------
The new submodule :mod:`scipy.sparse.csgraph` implements a number of efficient
graph algorithms for graphs stored as sparse adjacency matrices. Available
routines are:
- :func:`connected_components` - determine connected components of a graph
- :func:`laplacian` - compute the laplacian of a graph
- :func:`shortest_path` - compute the shortest path between points on a
positive graph
- :func:`dijkstra` - use Dijkstra's algorithm for shortest path
- :func:`floyd_warshall` - use the Floyd-Warshall algorithm for
shortest path
- :func:`breadth_first_order` - compute a breadth-first order of nodes
- :func:`depth_first_order` - compute a depth-first order of nodes
- :func:`breadth_first_tree` - construct the breadth-first tree from
a given node
- :func:`depth_first_tree` - construct a depth-first tree from a given node
- :func:`minimum_spanning_tree` - construct the minimum spanning
tree of a graph
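To illustrate what the ``dijkstra`` routine computes, here is a plain-Python sketch of the algorithm; this is a conceptual illustration only, not the ``csgraph`` API, which operates on sparse adjacency matrices:

```python
import heapq

def dijkstra(adjacency, source):
    """Shortest path lengths from ``source`` on a non-negative graph.

    ``adjacency`` maps node -> {neighbor: edge_weight}.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, weight in adjacency.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {0: {1: 1.0, 2: 4.0}, 1: {2: 2.0}, 2: {}}
print(dijkstra(graph, 0))   # {0: 0.0, 1: 1.0, 2: 3.0}
```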
``scipy.optimize`` improvements
-------------------------------
The optimize module has received a lot of attention this release. In addition
to added tests, documentation improvements, bug fixes and code clean-up, the
following improvements were made:
- A unified interface to minimizers of univariate and multivariate
functions has been added.
- A unified interface to root finding algorithms for multivariate functions
has been added.
- The L-BFGS-B algorithm has been updated to version 3.0.
Unified interfaces to minimizers
````````````````````````````````
Two new functions ``scipy.optimize.minimize`` and
``scipy.optimize.minimize_scalar`` were added to provide a common interface
to minimizers of multivariate and univariate functions respectively.
For multivariate functions, ``scipy.optimize.minimize`` provides an
interface to methods for unconstrained optimization (`fmin`, `fmin_powell`,
`fmin_cg`, `fmin_ncg`, `fmin_bfgs` and `anneal`) or constrained
optimization (`fmin_l_bfgs_b`, `fmin_tnc`, `fmin_cobyla` and `fmin_slsqp`).
For univariate functions, ``scipy.optimize.minimize_scalar`` provides an
interface to methods for unconstrained and bounded optimization (`brent`,
`golden`, `fminbound`).
This allows for easier comparing and switching between solvers.
Unified interface to root finding algorithms
````````````````````````````````````````````
The new function ``scipy.optimize.root`` provides a common interface to
root finding algorithms for multivariate functions, embedding `fsolve`,
`leastsq` and `nonlin` solvers.
``scipy.linalg`` improvements
-----------------------------
New matrix equation solvers
```````````````````````````
Solvers for the Sylvester equation (``scipy.linalg.solve_sylvester``), discrete
and continuous Lyapunov equations (``scipy.linalg.solve_lyapunov``,
``scipy.linalg.solve_discrete_lyapunov``) and discrete and continuous algebraic
Riccati equations (``scipy.linalg.solve_continuous_are``,
``scipy.linalg.solve_discrete_are``) have been added to ``scipy.linalg``.
These solvers are often used in the field of linear control theory.
QZ and QR Decomposition
````````````````````````
It is now possible to calculate the QZ, or Generalized Schur, decomposition
using ``scipy.linalg.qz``. This function wraps the LAPACK routines sgges,
dgges, cgges, and zgges.
The function ``scipy.linalg.qr_multiply``, which allows efficient computation
of the matrix product of Q (from a QR decomposition) and a vector, has been
added.
Pascal matrices
```````````````
A function for creating Pascal matrices, ``scipy.linalg.pascal``, was added.
Sparse matrix construction and operations
-----------------------------------------
Two new functions, ``scipy.sparse.diags`` and ``scipy.sparse.block_diag``, were
added to easily construct diagonal and block-diagonal sparse matrices
respectively.
``scipy.sparse.csc_matrix`` and ``csr_matrix`` now support the operations
``sin``, ``tan``, ``arcsin``, ``arctan``, ``sinh``, ``tanh``, ``arcsinh``,
``arctanh``, ``rint``, ``sign``, ``expm1``, ``log1p``, ``deg2rad``, ``rad2deg``,
``floor``, ``ceil`` and ``trunc``. Previously, these operations had to be
performed by operating on the matrices' ``data`` attribute.
LSMR iterative solver
---------------------
LSMR, an iterative method for solving (sparse) linear and linear
least-squares systems, was added as ``scipy.sparse.linalg.lsmr``.
Discrete Sine Transform
-----------------------
Bindings for the discrete sine transform functions have been added to
``scipy.fftpack``.
``scipy.interpolate`` improvements
----------------------------------
For interpolation in spherical coordinates, the three classes
``scipy.interpolate.SmoothSphereBivariateSpline``,
``scipy.interpolate.LSQSphereBivariateSpline``, and
``scipy.interpolate.RectSphereBivariateSpline`` have been added.
Binned statistics (``scipy.stats``)
-----------------------------------
The stats module has gained functions to do binned statistics, which are a
generalization of histograms, in 1-D, 2-D and multiple dimensions:
``scipy.stats.binned_statistic``, ``scipy.stats.binned_statistic_2d`` and
``scipy.stats.binned_statistic_dd``.
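The idea behind a binned statistic can be sketched in a few lines of plain Python; this is a conceptual illustration of a 1-D binned mean, not the ``scipy.stats`` implementation:

```python
def binned_mean(x, values, bins):
    """Mean of ``values`` within each of ``bins`` equal-width bins over x."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins
    sums, counts = [0.0] * bins, [0] * bins
    for xi, vi in zip(x, values):
        i = min(int((xi - lo) / width), bins - 1)   # clamp the right edge
        sums[i] += vi
        counts[i] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

means = binned_mean([0.1, 0.4, 0.6, 0.9], [1.0, 3.0, 5.0, 7.0], 2)
print(means)   # [2.0, 6.0]
```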
Deprecated features
===================
``scipy.sparse.cs_graph_components`` has been made a part of the sparse graph
submodule, and renamed to ``scipy.sparse.csgraph.connected_components``.
Calling the former routine will result in a deprecation warning.
``scipy.misc.radon`` has been deprecated. A more full-featured radon transform
can be found in scikits-image.
``scipy.io.save_as_module`` has been deprecated. A better way to save multiple
Numpy arrays is the ``numpy.savez`` function.
The `xa` and `xb` parameters for all distributions in
``scipy.stats.distributions`` already weren't used; they have now been
deprecated.
Backwards incompatible changes
==============================
Removal of ``scipy.maxentropy``
-------------------------------
The ``scipy.maxentropy`` module, which was deprecated in the 0.10.0 release,
has been removed. Logistic regression in scikits.learn is a good and modern
alternative for this functionality.
Minor change in behavior of ``splev``
-------------------------------------
The spline evaluation function now behaves similarly to ``interp1d``
for size-1 arrays. Previous behavior::
    >>> from scipy.interpolate import splev, splrep, interp1d
    >>> x = [1, 2, 3, 4, 5]
    >>> y = [4, 5, 6, 7, 8]
    >>> tck = splrep(x, y)
    >>> splev([1], tck)
    4.
    >>> splev(1, tck)
    4.

Corrected behavior::

    >>> splev([1], tck)
    array([ 4.])
    >>> splev(1, tck)
    array(4.)
This also affects the ``UnivariateSpline`` classes.
Behavior of ``scipy.integrate.complex_ode``
-------------------------------------------
The behavior of the ``y`` attribute of ``complex_ode`` is changed.
Previously, it expressed the complex-valued solution in the form::
    z = ode.y[::2] + 1j * ode.y[1::2]
Now, it is directly the complex-valued solution::
    z = ode.y
Minor change in behavior of T-tests
-----------------------------------
The T-tests ``scipy.stats.ttest_ind``, ``scipy.stats.ttest_rel`` and
``scipy.stats.ttest_1samp`` have been changed so that 0 / 0 now returns NaN
instead of 1.
Other changes
=============
The SuperLU sources in ``scipy.sparse.linalg`` have been updated to version 4.3
from upstream.
The function ``scipy.signal.bode``, which calculates magnitude and phase data
for a continuous-time system, has been added.

The two-sample T-test ``scipy.stats.ttest_ind`` gained an option to compare
samples with unequal variances, i.e. Welch's T-test.
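
The new option is the ``equal_var`` keyword; passing ``equal_var=False``
selects Welch's T-test, which does not pool the two sample variances. A quick
sketch with synthetic data of our own:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(42)
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=0.0, scale=10.0, size=30)

# equal_var=False requests Welch's T-test for unequal variances.
t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)
print(t_stat, p_val)
```
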

``scipy.misc.logsumexp`` now takes an optional ``axis`` keyword argument.
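
A small sketch of the ``axis`` keyword (in later SciPy releases this function
moved to ``scipy.special.logsumexp``, which is what the example imports; at
the time of this release it lived in ``scipy.misc``):

```python
import numpy as np
from scipy.special import logsumexp

# log of the column-wise sums, computed in a numerically stable way:
# logsumexp(log(v), axis=0) == log(sum(v, axis=0)).
x = np.log(np.array([[1.0, 2.0],
                     [3.0, 4.0]]))
print(logsumexp(x, axis=0))
```
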

Authors
=======

This release contains work by the following people (contributed at least
one patch to this release, names in alphabetical order):

* Jeff Armstrong
* Chad Baker
* Brandon Beacher +
* behrisch +
* borishim +
* Matthew Brett
* Lars Buitinck
* Luis Pedro Coelho +
* Johann Cohen-Tanugi
* David Cournapeau
* dougal +
* Ali Ebrahim +
* endolith +
* Bjørn Forsman +
* Robert Gantner +
* Sebastian Gassner +
* Christoph Gohlke
* Ralf Gommers
* Yaroslav Halchenko
* Charles Harris
* Jonathan Helmus +
* Andreas Hilboll +
* Marc Honnorat +
* Jonathan Hunt +
* Maxim Ivanov +
* Thouis (Ray) Jones
* Christopher Kuster +
* Josh Lawrence +
* Denis Laxalde +
* Travis Oliphant
* Joonas Paalasmaa +
* Fabian Pedregosa
* Josef Perktold
* Gavin Price +
* Jim Radford +
* Andrew Schein +
* Skipper Seabold
* Jacob Silterra +
* Scott Sinclair
* Alexis Tabary +
* Martin Teichmann
* Matt Terry +
* Nicky van Foreest +
* Jacob Vanderplas
* Patrick Varilly +
* Pauli Virtanen
* Nils Wagner +
* Darryl Wally +
* Stefan van der Walt
* Liming Wang +
* David Warde-Farley +
* Warren Weckesser
* Sebastian Werk +
* Mike Wimmer +
* Tony S Yu +

A total of 55 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.