// Source: docs/hop-user-manual/modules/ROOT/pages/pipeline/transforms/beamwindow.adoc (repo: leanwithdata/hop, license: Apache-2.0)
////
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
////
:documentationPath: /pipeline/transforms/
:language: en_US
:description: The Beam Window transform adds event-time-based window functions using the Beam execution engine.
= Beam Window
== Description
The Beam Window transform adds event-time-based window functions using the Beam execution engine.
== Options
[width="90%",options="header"]
|===
|Option|Description
|Transform name|Name of the transform. This name has to be unique in a single pipeline.
|Window type a|
* Fixed
* Sliding
* Session
* Global
|Window size (duration in seconds)|Sets the window duration in seconds. The default is 60.
|Every x seconds (Sliding windows)|Sets the interval, in seconds, at which a new sliding window starts.
|Window start field|The field containing the window start time.
|Window end field|The field containing the window end time.
|Window max field|The field containing the max duration between events.
|===
== Window Types
=== Fixed
Fixed or tumbling windows are used to repeatedly segment data into distinct time segments and do not overlap.
Events cannot belong to more than one window.
=== Sliding
Sliding windows produce an output only when an event occurs and continuously move forward.
Every window will have at least one event and can overlap.
Events can belong to more than one window.
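For intuition, the assignment of an event timestamp to fixed and sliding windows can be sketched in plain Python. This is a hypothetical illustration of the windowing arithmetic only (timestamps in epoch seconds), not code from Beam or Hop:

```python
def fixed_window(ts, size):
    """Return the single [start, end) fixed window containing event time ts."""
    start = ts - (ts % size)
    return (start, start + size)

def sliding_windows(ts, size, every):
    """Return every [start, end) sliding window containing event time ts.

    A new window starts every `every` seconds, so an event belongs to
    roughly size / every overlapping windows.
    """
    windows = []
    start = ts - (ts % every)   # last window starting at or before ts
    while start > ts - size:    # windows starting earlier no longer contain ts
        windows.append((start, start + size))
        start -= every
    return list(reversed(windows))

# An event at t=130 falls in exactly one fixed 60-second window...
print(fixed_window(130, 60))         # (120, 180)
# ...but in two overlapping 60-second windows that slide every 30 seconds.
print(sliding_windows(130, 60, 30))  # [(90, 150), (120, 180)]
```

This makes the difference between the two types concrete: fixed windows partition the timeline, while sliding windows overlap it.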
=== Session
Session windows group events which arrive at similar times and filter out periods of time when there is no data.
The window begins when the first event occurs and extends to include new events within a specified timeout.
If events keep occurring the window will keep extending until maximum duration is reached.
=== Global
Global windowing is the default in Beam. It ignores event time (spanning all of event time) and uses triggers to provide snapshots of that window.
// Source: doc/userguide/OHCKIT_HealthCardControl.adoc (repo: gematik/ref-OpenHealthCardKit, license: Apache-2.0)
include::config.adoc[]
[#HealthCardControl]
=== HealthCardControl
This library can be used to realize use cases for interacting with a German Health Card
(eGk, elektronische Gesundheitskarte) via a mobile device.
Typically you would use this library as the high level API gateway for your mobile application
to send predefined command chains to the Health Card and interpret the responses.
For more info, please see the low-level part `HealthCardAccess`
and the https://github.com/gematik/ref-OpenHealthCardApp-iOS[Demo App] on GitHub.
See the https://gematik.github.io/[Gematik GitHub IO] page for a more general overview.
==== Code Samples
Take the necessary preparatory steps for signing a challenge on the Health Card, then sign it.
[source,swift]
----
include::{integrationitestdir}/HealthCardControl/HealthCardTypeExtESIGNIntegrationTest.swift[tags=signChallenge,indent=0]
----
Encapsulate the https://www.bsi.bund.de/DE/Publikationen/TechnischeRichtlinien/tr03110/index_htm.html[PACE protocol]
steps for establishing a secure channel with the Health Card and expose only a simple API call.
[source,swift]
----
include::{integrationitestdir}/HealthCardControl/KeyAgreementIntegrationTest.swift[tags=negotiateSessionKey,indent=0]
----
See the integration tests link:{integrationitestdir}/HealthCardControl/[IntegrationTests/HealthCardControl/]
for more use cases that are already implemented.
// Source: doc/integrating_applications/topics/add_describe_data_step.adoc (repo: maschmid/syndesis, license: Apache-2.0)
[id='specify-data-type_{context}']
ifeval::["{context}" == "start"]
. On the *Specify Data Type* page, define the data type for the output
from the connection action.
endif::[]
ifeval::["{context}" == "finish"]
. On the *Specify Data Type* page, define the data type for the input
to the connection action.
endif::[]
ifeval::["{context}" == "middle"]
. On the *Specify Data Type* page, define the data type for the input to and/or
the output from the connection action.
endif::[]
+
* If the data type does not need to be known, select *No data type*
and then click *Done*. You do not need to follow the rest of these
instructions.
* Otherwise, select one of the following as the schema type:
+
** JSON schema
** JSON instance document
** XML schema
** XML instance document
. To import the schema, select *Upload a file*, *Use a URL*, or *Copy and paste*.
+
* If you are importing a JSON file, the file extension must be `.json`.
* If you are importing an XML schema, the file extension must be `.xsd`.
* If you are importing an XML document, the file extension must be `.xml`.
. According to your selection in the previous step, drop the file that contains
the schema, paste the URL for the schema, or paste the schema or instance
document itself.
. If you are importing a schema, identify the root element.
+
{prodname} immediately tries to upload the specified file or obtain the
schema from the URL. If there is an error, {prodname} displays a
message about the problem. Correct the error and try importing the
schema again.
. When {prodname} has a valid schema, click *Done*.
// Source: search/releases/5.7/index.adoc (repo: froque/hibernate.org, license: Apache-2.0)
:awestruct-layout: project-releases-series
:awestruct-project: search
:awestruct-series_version: "5.7"
=== Hibernate ORM 5.2 upgrade
Hibernate Search 5.7 adds compatibility with Hibernate ORM 5.2,
and is no longer compatible with Hibernate ORM 5.1 and below.
[WARNING]
====
Hibernate Search 5.7 is only compatible with Hibernate ORM 5.2.3 and later.
There isn't any version of Hibernate Search compatible with Hibernate ORM 5.2.0 to 5.2.2.
====
=== Java 8 upgrade
Hibernate Search now requires Java 8.
// Source: doc/book.asciidoc (repo: tbug/graphql-erlang-tutorial, license: Apache-2.0)
= {project} Tutorial
Jesper Louis Andersen <https://github.com/jlouis[@jlouis]>; Martin Gausby <https://github.com/gausby[@gausby]>; ShopGun <https://github.com/shopgun[@shopgun]>
Nov 2017
:toc: left
:icons: font
:source-highlighter: prettify
:sw_core: ../apps/sw_core
:sw_web: ../apps/sw_web
:sw_test: ../test
:project: Erlang GraphQL
:relay: Relay Modern
:shopgun: ShopGun
:star-wars: Star Wars
:cowboy-version: 2.2.x
:port-number: 17290
:imagesdir: ./images
{project} Tutorial
The guide here is a running example of an API implemented in Erlang
through the ShopGun GraphQL engine. The API is a frontend to a
database, containing information about the Star Wars films by George
Lucas. The intent is to provide readers with enough information that they
can go and build their own GraphQL servers in Erlang.
We use the GraphQL system at https://shopgun.com as a data backend. We
sponsor this tutorial as part of our Open Source efforts. We developed
this GraphQL system to meet our demands as our system evolves. The
world of tracking businesses and offers is a highly heterogeneous
dataset, which requires the flexibility of something like GraphQL.
Because GraphQL provides a lot of great tooling, we decided to move
forward and implement a server backend for Erlang, which didn't exist
at the time.
At the same time, we recognize other people may be interested in the
system and its development. Hence the decision was made to open source
the GraphQL parts of the system.
include::introduction.asciidoc[Introduction]
include::why_graphql.asciidoc[Why GraphQL]
include::system_tour.asciidoc[System Tour]
include::getting_started.asciidoc[Getting Started]
include::schema.asciidoc[Schema]
include::scalar_resolution.asciidoc[Scalar Resolution]
include::enum_resolution.asciidoc[Enum Resolution]
include::type_resolution.asciidoc[Type Resolution]
include::object_resolution.asciidoc[Object Resolution]
include::transports.asciidoc[Transports]
include::graphiql.asciidoc[GraphiQL]
include::errors.asciidoc[Error Handling]
include::relay_modern.asciidoc[Relay Modern]
include::security.asciidoc[Security]
[[annotations]]
== Annotations
TBD
include::tricks.asciidoc[Tricks]
[appendix]
include::terms.asciidoc[Terms]
[appendix]
include::code.asciidoc[Code Overview]
[appendix]
[[changelog]]
== Changelog
Nov 6, 2017:: Document enumerated types. They have been inside the
system in several different variants over the last months, but now
we have a variant we are happy with, so document it and lock it down
as the way to handle enumerated types in the system. Add `Episode`
as a type which is enumerated in the system as an example. Also add
lookups by episode to demonstrate the input/output paths for
enumerated values. (Large parts of this work is due to a {shopgun}
intern, Callum Roberts).
Oct 18, 2017:: Document a trick: How one implements lazy evaluation in
a GraphQL schema in the engine. Make sure that all code passes the
dialyzer and enable dialyzer runs in Travis CI.
June 22nd, 2017:: Merged a set of issues found by @benbro where
wording made certain sections harder to understand. See issues #21,
and #23-26.
June 5th, 2017:: Merged a set of typo fixes to the documentation by
@benbro.
May 30th, 2017:: Documented a more complex mutation example,
<<introducing-starships>>, which explains how to carry out more
complex queries. Also added this as an example to the
<<system-tour>>.
May 29th, 2017:: Moved <<cqrs>> into terminology so it can be
referenced from other places in the document easily. Described
<<schema-default-values>>. Described <<middleware-stacks>>. Made the
first sweep on the documentation describing the notion of mutations.
The <<system-tour>> now includes simple mutations as an example.
May 24th, 2017:: Described Scalar Coercion in more detail in
<<scalar-resolution>>. Change the schema such that a *DateTime*
scalar is used for the fields `created` and `edited` in all output
objects. Then demonstrate how this is used to coerce values.
May 22nd, 2017:: Documented how to resolve array objects in
<<resolving-lists>>.
// Source: documentation/src/main/asciidoc/topics/con_calculate_size_data_set.adoc (repo: pferraro/infinispan, license: Apache-2.0)
[id='data-set-size_{context}']
= How to calculate the size of your data set
Planning a {brandname} deployment involves calculating the size of your data set then figuring out the correct number of nodes and amount of RAM to hold the data set.
You can roughly estimate the total size of your data set with this formula:
[source,options="nowrap",subs=attributes+]
----
Data set size = Number of entries * (Average key size + Average value size + Memory overhead)
----
[NOTE]
====
With remote caches you need to calculate key sizes and value sizes in their marshalled forms.
====
[discrete]
== Data set size in distributed caches
Distributed caches require some additional calculation to determine the data set size.
In normal operating conditions, distributed caches store a number of copies for each key/value entry that is equal to the `Number of owners` that you configure.
During cluster rebalancing operations, some entries have an extra copy, so you should calculate `Number of owners + 1` to allow for that scenario.
You can use the following formula to adjust the estimate of your data set size for distributed caches:
[source,options="nowrap",subs=attributes+]
----
Distributed data set size = Data set size * (Number of owners + 1)
----
.Calculating available memory for distributed caches
Distributed caches allow you to increase the data set size either by adding more nodes or by increasing the amount of available memory per node.
[source,options="nowrap",subs=attributes+]
----
Distributed data set size <= Available memory per node * Minimum number of nodes
----
.Adjusting for node loss tolerance
Even if you plan to have a fixed number of nodes in the cluster, you should take into account the fact that not all nodes will be in the cluster all the time.
Distributed caches tolerate the loss of `Number of owners - 1` nodes without losing data, so you can allocate that many extra nodes in addition to the minimum number of nodes that you need to fit your data set.
[source,options="nowrap",subs=attributes+]
----
Planned nodes = Minimum number of nodes + Number of owners - 1
Distributed data set size <= Available memory per node * (Planned nodes - Number of owners + 1)
----
For example, you plan to store one million entries that are 10KB each in size and configure three owners per entry for availability.
If you plan to allocate 4GB of RAM for each node in the cluster, you can then use the following formula to determine the number of nodes that you need for your data set:
[source,options="nowrap",subs=attributes+]
----
Data set size = 1_000_000 * 10KB = 10GB
Distributed data set size = (3 + 1) * 10GB = 40GB
40GB <= 4GB * Minimum number of nodes
Minimum number of nodes >= 40GB / 4GB = 10
Planned nodes = 10 + 3 - 1 = 12
----
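As a quick sanity check, the worked example above can be reproduced with a short script. This is an illustrative sketch of the formulas in this section (binary units assumed: 1KB = 1024 bytes), not part of the product:

```python
import math

KB = 1024
GB = 1024 ** 3

def distributed_sizing(num_entries, entry_size, num_owners, mem_per_node):
    """Apply the sizing formulas from this section and return the node plan."""
    data_set = num_entries * entry_size                # Data set size
    distributed = data_set * (num_owners + 1)          # +1 copy during rebalancing
    min_nodes = math.ceil(distributed / mem_per_node)  # nodes needed to fit the data
    planned = min_nodes + num_owners - 1               # tolerate (owners - 1) losses
    return min_nodes, planned

# 1,000,000 entries of 10KB each, 3 owners, 4GB of RAM per node:
print(distributed_sizing(1_000_000, 10 * KB, 3, 4 * GB))  # (10, 12)
```

The script returns the same minimum of 10 nodes and a plan of 12 nodes as the worked example.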
// Source: spring-security/src/docs/asciidoc/zh-cn/_includes/servlet/authentication/unpwd/jdbc.adoc (repo: jcohy/jcohy-docs, license: Apache-2.0)
[[servlet-authentication-jdbc]]
= JDBC Authentication
Spring Security's `JdbcDaoImpl` implements <<servlet-authentication-userdetailsservice,UserDetailsService>> to provide support for username/password-based authentication that is retrieved by using JDBC.
`JdbcUserDetailsManager` extends `JdbcDaoImpl` to provide management of `UserDetails` through the `UserDetailsManager` interface. `UserDetails`-based authentication is used by Spring Security when it is configured to <<servlet-authentication-unpwd-input,accept a username and password>> for authentication.
In the following sections, we discuss:
* The default <<servlet-authentication-jdbc-schema,schema>> used by Spring Security JDBC authentication
* <<servlet-authentication-jdbc-datasource,Setting up a DataSource>>
* The <<servlet-authentication-jdbc-bean,JdbcUserDetailsManager Bean>>
[[servlet-authentication-jdbc-schema]]
== Default Schema
Spring Security provides default queries for JDBC-based authentication. This section provides the corresponding default schemas that are used with those queries. You need to adjust the schema to match any customizations to the queries and the database dialect that you are using.
[[servlet-authentication-jdbc-schema-user]]
=== User Schema
`JdbcDaoImpl` requires tables to load the password, account status (enabled or disabled), and a list of authorities (roles) for the user. The required default schema is shown below.
[NOTE]
====
The default schema is also exposed as a classpath resource named `org/springframework/security/core/userdetails/jdbc/users.ddl`.
====
.Default User Schema
====
[source,sql]
----
create table users(
username varchar_ignorecase(50) not null primary key,
password varchar_ignorecase(500) not null,
enabled boolean not null
);
create table authorities (
username varchar_ignorecase(50) not null,
authority varchar_ignorecase(50) not null,
constraint fk_authorities_users foreign key(username) references users(username)
);
create unique index ix_auth_username on authorities (username,authority);
----
====
Oracle is a popular database choice but requires a slightly different schema. You can find the default Oracle schema for users below.
.Default User Schema for Oracle Databases
====
[source,sql]
----
CREATE TABLE USERS (
USERNAME NVARCHAR2(128) PRIMARY KEY,
PASSWORD NVARCHAR2(128) NOT NULL,
ENABLED CHAR(1) CHECK (ENABLED IN ('Y','N') ) NOT NULL
);
CREATE TABLE AUTHORITIES (
USERNAME NVARCHAR2(128) NOT NULL,
AUTHORITY NVARCHAR2(128) NOT NULL
);
ALTER TABLE AUTHORITIES ADD CONSTRAINT AUTHORITIES_UNIQUE UNIQUE (USERNAME, AUTHORITY);
ALTER TABLE AUTHORITIES ADD CONSTRAINT AUTHORITIES_FK1 FOREIGN KEY (USERNAME) REFERENCES USERS (USERNAME) ENABLE;
----
====
[[servlet-authentication-jdbc-schema-group]]
=== Group Schema
If your application uses groups, you need to provide the groups schema. You can find the default schema for groups below.
.Default Group Schema
====
[source,sql]
----
create table groups (
id bigint generated by default as identity(start with 0) primary key,
group_name varchar_ignorecase(50) not null
);
create table group_authorities (
group_id bigint not null,
authority varchar(50) not null,
constraint fk_group_authorities_group foreign key(group_id) references groups(id)
);
create table group_members (
id bigint generated by default as identity(start with 0) primary key,
username varchar(50) not null,
group_id bigint not null,
constraint fk_group_members_group foreign key(group_id) references groups(id)
);
----
====
[[servlet-authentication-jdbc-datasource]]
== Setting up a DataSource
Before we configure `JdbcUserDetailsManager`, we must create a `DataSource`. In our example, we set up an https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/data-access.html#jdbc-embedded-database-support[embedded DataSource] that is initialized with the <<servlet-authentication-jdbc-schema,default user schema>>.
.Embedded Data Source
====
.Java
[source,java,role="primary"]
----
@Bean
DataSource dataSource() {
return new EmbeddedDatabaseBuilder()
.setType(H2)
.addScript("classpath:org/springframework/security/core/userdetails/jdbc/users.ddl")
.build();
}
----
.XML
[source,xml,role="secondary"]
----
<jdbc:embedded-database>
<jdbc:script location="classpath:org/springframework/security/core/userdetails/jdbc/users.ddl"/>
</jdbc:embedded-database>
----
.Kotlin
[source,kotlin,role="secondary"]
----
@Bean
fun dataSource(): DataSource {
return EmbeddedDatabaseBuilder()
.setType(H2)
.addScript("classpath:org/springframework/security/core/userdetails/jdbc/users.ddl")
.build()
}
----
====
In a production environment, you want to ensure that you set up a connection to an external database.
[[servlet-authentication-jdbc-bean]]
== JdbcUserDetailsManager Bean
In this example, we use the <<authentication-password-storage-boot-cli,Spring Boot CLI>> to encode a password value of `password`, which yields the encoded password `+{bcrypt}$2a$10$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW+`. For more details on how passwords are stored, see the <<authentication-password-storage,PasswordEncoder>> section.
.JdbcUserDetailsManager
====
.Java
[source,java,role="primary",attrs="-attributes"]
----
@Bean
UserDetailsManager users(DataSource dataSource) {
UserDetails user = User.builder()
.username("user")
.password("{bcrypt}$2a$10$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW")
.roles("USER")
.build();
UserDetails admin = User.builder()
.username("admin")
.password("{bcrypt}$2a$10$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW")
.roles("USER", "ADMIN")
.build();
JdbcUserDetailsManager users = new JdbcUserDetailsManager(dataSource);
users.createUser(user);
	users.createUser(admin);
	return users;
}
----
.XML
[source,xml,role="secondary",attrs="-attributes"]
----
<jdbc-user-service>
<user name="user"
password="{bcrypt}$2a$10$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW"
authorities="ROLE_USER" />
<user name="admin"
password="{bcrypt}$2a$10$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW"
authorities="ROLE_USER,ROLE_ADMIN" />
</jdbc-user-service>
----
.Kotlin
[source,kotlin,role="secondary",attrs="-attributes"]
----
@Bean
fun users(dataSource: DataSource): UserDetailsManager {
val user = User.builder()
.username("user")
.password("{bcrypt}$2a$10\$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW")
.roles("USER")
.build();
val admin = User.builder()
.username("admin")
.password("{bcrypt}$2a$10\$GRLdNijSQMUvl/au9ofL.eDwmoohzzS7.rmNSJZ.0FxO/BTk76klW")
.roles("USER", "ADMIN")
.build();
val users = JdbcUserDetailsManager(dataSource)
users.createUser(user)
users.createUser(admin)
return users
}
----
====
// Source: modules/serverless-functions-adding-annotations.adoc (repo: Rupesh-git-eng/openshift-docs, license: Apache-2.0)
:_content-type: PROCEDURE
[id="serverless-functions-adding-annotations_{context}"]
= Adding annotations to a function
.Procedure
. Open the `func.yaml` file for your function.
. For every annotation that you want to add, add the following YAML to the `annotations` section:
+
[source,yaml]
----
name: test
namespace: ""
runtime: go
...
annotations:
<annotation_name>: "<annotation_value>" <1>
----
<1> Substitute `<annotation_name>: "<annotation_value>"` with your annotation.
+
For example, to indicate that a function was authored by Alice, you might include the following annotation:
+
[source,yaml]
----
name: test
namespace: ""
runtime: go
...
annotations:
author: "alice@example.com"
----
. Save the configuration.
The next time you deploy your function to the cluster, the annotations are added to the corresponding Knative service.
// Source: docs/fromiter.adoc (repo: kgizdov/awkward-0.x, license: BSD-3-Clause)
= Row-wise to columnar conversion
:Author: Jim Pivarski
:Email: pivarski@princeton.edu
:Date: 2019-07-08
:Revision: 0.x
Awkward-array provides a general facility for creating columnar arrays from row-wise data. This is often the first step before processing. In a statically typed language, the array structure can be fully determined before filling, but in a dynamically typed language, the array types and nesting structure would have to be discovered while iterating over the data.
The interface has one entry point:
* `+fromiter(iterable, **options)+`: iterate over data and return an array representing all of the data in a columnar form. In a statically typed language, the type structure would have to be provided somehow, and this function would raise an error if the data do not conform to that type (if possible).
The following `options` are recognized:
* `dictencoding`: boolean or a function returning boolean. If `True`, all arrays of bytes/strings are https://en.wikipedia.org/wiki/Dictionary_coder[dictionary encoded] as an `IndexedArray` of `StringArray`. If `False`, all arrays of bytes/strings are simply `StringArray`. If a function, this function is called on the list of bytes/strings to determine if it should be dictionary encoded.
* `maskedwhen`: boolean. If `True`, the `mask` of `MaskedArrays` use `True` to indicate missing values; if `False`, they use `False` to indicate missing values.
Types derived from the data resolve ambiguities by satisfying the following rules.
. Boolean types are distinct from numbers, but mixed numeric types are resolved in favor of the most general number at a level of the hierarchy. (That is, a single floating point value will make all integers at the same nesting depth floating point.) Booleans and numbers at the same level of hierarchy would be represented by a `UnionArray`.
. In Python (which makes a distinction between raw bytes and strings with an encoding), bytes and strings are different types. Bytes and strings at the same level of hierarchy would be represented by a `UnionArray`.
. All lists are presumed to be variable-length (`JaggedArray`).
. A mapping type (Python dict) is represented as a `Table`, and mappings with different sets of field names are considered distinct. Mappings with the same field names but different field types are not distinct: they are `Tables` with some `UnionArray` columns. Missing fields are different from fields with `None` values. Mappings must have string-typed keys: other types are not supported.
. An empty mapping, `{}`, is considered identical to `None`.
. `Table` column names are sorted alphabetically.
. An object type may be represented as an `ObjectArray` or it may not be supported. The Numpy-only implementation generates `ObjectArrays` from Python class instances and from namedtuples.
. Optional and sum types are in canonical form: `MaskedArrays` are not nested directly within `MaskedArrays` and `UnionArrays` are not nested directly within `UnionArrays`. If a given level of hierarchy is both masked and heterogeneous, the `MaskArray` is outside the `UnionArray` and none of the union array's `contents` are masked.
. `Tables`, `ObjectArrays`, and `UnionArrays` are masked with `IndexedMaskedArray`, while all others are masked with `MaskedArray`.
. If a type at some level of hierarchy cannot be determined (all lists at that level are empty), then the type is taken to be `DEFAULTTYPE`.
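To make rules 1 and 3 concrete, here is a deliberately tiny pure-Python sketch of the row-wise-to-columnar idea for lists of numbers, in the spirit of `fromiter`. It is a hypothetical illustration, not awkward-array's implementation: variable-length lists become an offsets array plus a flat contents array (the `JaggedArray` layout), and a single float promotes all integers at that depth:

```python
def to_jagged(rows):
    """Convert a list of variable-length numeric lists to offsets + contents."""
    offsets = [0]
    contents = []
    has_float = False
    for row in rows:
        for value in row:
            if isinstance(value, bool) or not isinstance(value, (int, float)):
                raise TypeError("this sketch only handles plain numbers")
            has_float = has_float or isinstance(value, float)
            contents.append(value)
        offsets.append(len(contents))
    if has_float:
        # Rule 1: mixed numeric types resolve to the most general number.
        contents = [float(x) for x in contents]
    return offsets, contents

offsets, contents = to_jagged([[1, 2, 3], [], [4.5]])
print(offsets)   # [0, 3, 3, 4] -- the empty list occupies no contents
print(contents)  # [1.0, 2.0, 3.0, 4.5] -- integers promoted to floats
```

The real implementation additionally discovers booleans, strings, mappings, unions, and missing values while iterating, but the offsets-plus-contents shape is the same.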
// Source: docs/modules/ROOT/nav.adoc (repo: eolivelli/cassandra-source-connector, licenses: ECL-2.0, Apache-2.0)
* xref:index.adoc[]
* xref:install.adoc[]
* xref:monitor.adoc[]
* xref:faqs.adoc[]
* xref:quickstart-kafka.adoc[]
* xref:kafka.adoc[]
// Source: CONTRIBUTING.adoc (repo: insideqt/awesome-qt, license: CC0-1.0)
:AsciiDoctorOrg: https://asciidoctor.org
:FsfLicenses: http://www.gnu.org/licenses/license-list.html
:OsiLicenses: http://opensource.org/licenses/alphabetical
:QuickReference: http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/
:WritersGuide: http://asciidoctor.org/docs/asciidoc-writers-guide/
= Contributing to Awesome Qt
Thank you for contributing to the Awesome Qt list!
The document is written in AsciiDoc and is usually rendered (e.g. in GitHub)
with {AsciiDoctorOrg}[AsciiDoctor]. If you are not familiar with AsciiDoc, don't
worry, it is as simple as MarkDown, but much more powerful and structured. See
the {QuickReference}[Quick Reference] or the {WritersGuide}[AsciiDoc Writer's
Guide].
Many things are perfectly open for discussion. Feel free to comment on the
issues page.
== Which projects qualify?
All that fulfill the following criteria:
. Be related to Qt (uses Qt's types, event loop, signals, etc.). Other C++
projects should not qualify. There is an "off topic" section for notable
exceptions, but it should be really small. Note that there is also
https://github.com/fffaraz/awesome-cpp[Awesome {cpp}] for other native
libraries.
. Projects published under a license which appears in at least the
{FsfLicenses}[Free Software Foundation's license list] or the
{OsiLicenses}[Open Source Initiative's license list]. Projects with dual
license are fine, but they have to provide a way to contribute back, even if
it is after signing a legal agreement.
. Projects that are reasonably alive. No specific time is given because it
depends on how relevant the project is in its area. There is no need to have 5
JSON parsers in the list when there is already native JSON support in Qt Core,
for example, but if there is only one library in the world which can read some
proprietary format it will be better to have it, even if it didn't receive a
commit in two years.
== Content of each entry
Each entry should include exactly:
Name:: See the https://github.com/insideqt/awesome-qt/issues/3[discussion about
the naming].
Link to the project website:: Try to use the link to the most complete page, so
from that page alone other sites can be reached (repository, bug tracker, etc.).
A simple description:: From 1 to 3 sentences at most. Use your judgement. There
is little to say about a library that "just" parses some file format, but more
specific topics require longer explanations. Try not to use the name of the
project or Qt in the description to avoid repetition of the same terms over and
over.
Ideally, we should mention the license too, but many projects are too informal
about this. This might be in contradiction with the requirements. But at least
projects explicitly licensed under proprietary terms should clearly be presented
differently.
| 46.131148 | 80 | 0.780384 |
33683b35d7ec80eefe42b56cfc2561d05817a7aa | 2,095 | asciidoc | AsciiDoc | docs/tornado.asciidoc | laerteallan/apm-agent-python | 62e494fa032e69bf4999cbc0efb30b80093ad5ba | [
"BSD-3-Clause"
] | 2 | 2019-02-15T20:23:39.000Z | 2019-02-15T20:26:06.000Z | docs/tornado.asciidoc | laerteallan/apm-agent-python | 62e494fa032e69bf4999cbc0efb30b80093ad5ba | [
"BSD-3-Clause"
] | null | null | null | docs/tornado.asciidoc | laerteallan/apm-agent-python | 62e494fa032e69bf4999cbc0efb30b80093ad5ba | [
"BSD-3-Clause"
] | null | null | null | [[tornado-support]]
== Tornado support
Getting Elastic APM set up for your Tornado project is easy,
and there are various ways you can tweak it to fit your needs.
The configuration is much the same as for the Flask integration.
[float]
[[Tornado-installation]]
=== Installation
Install the Elastic APM agent using pip:
[source,bash]
----
$ pip install elastic-apm
----
[float]
[[tornado-setup]]
=== Setup
To set up the agent, you need to initialize it with appropriate settings.
The settings are configured either via environment variables,
the application's settings, or as initialization arguments.
You can find a list of all available settings in the <<configuration, Configuration>> page.
Below is an example of how to configure the agent with the Tornado framework.
[source,python]
----
from elasticapm.contrib.tornado import TornadoApm, ApiElasticHandlerAPM
import tornado
import tornado.ioloop
import tornado.web
from tornado.httpserver import HTTPServer
class MainTest1(ApiElasticHandlerAPM):
def get(self, *args, **kwargs):
self.write({'status': 'ok'})
self.finish()
class MainTest2(ApiElasticHandlerAPM):
def get(self):
raise Exception("Value Error")
def post(self):
try:
            raise Exception("error message")
        except Exception:
            # Capture a custom message and send it to Elastic APM
self.capture_message("personalized error test")
self.set_status(500)
self.write("Internal Server Error")
self.finish()
def make_app():
settings = {
'ELASTIC_APM':
{
"SERVICE_NAME": "Teste tornado",
"SECRET_TOKEN": "",
"Debug": False},
"compress_response": True,
}
application = tornado.web.Application([
(r"/", MainTest1),
(r"/error", MainTest2),
], **settings)
TornadoApm(application)
return application
if __name__ == "__main__":
app = make_app()
server = HTTPServer(app)
server.bind(8888)
server.start(1)
tornado.ioloop.IOLoop.current().start()
| 23.806818 | 91 | 0.66253 |
9b81a3dc4f38246a5e7d045624a4ea021e8b67c7 | 1,305 | adoc | AsciiDoc | docker-examples/README.adoc | niaomingjian/vertx-examples | 1029c5fd1bc16cdf1fc25793de4b83de51d2e13b | [
"Apache-2.0"
] | 3,643 | 2015-02-25T16:51:32.000Z | 2022-03-31T14:52:32.000Z | docker-examples/README.adoc | niaomingjian/vertx-examples | 1029c5fd1bc16cdf1fc25793de4b83de51d2e13b | [
"Apache-2.0"
] | 373 | 2015-03-10T19:28:48.000Z | 2022-03-30T02:53:40.000Z | docker-examples/README.adoc | niaomingjian/vertx-examples | 1029c5fd1bc16cdf1fc25793de4b83de51d2e13b | [
"Apache-2.0"
] | 2,586 | 2015-01-26T23:46:18.000Z | 2022-03-31T14:52:34.000Z | = Vert.x Docker Examples
Here you will find examples demonstrating how to run Vert.x applications in Docker container. To run these examples you need https://www.docker.com/[Docker] installed on your computer. More details about these examples are in the http://vert-x3.github.io/docs/vertx-docker/[Vert.x Docker Manual].
== vertx-docker-java
This example deploys a Java verticle inside Docker.
The code is in the link:vertx-docker-java[vertx-docker-java] directory.
To build and run it:
----
docker build -t sample/vertx-java .
docker run -t -i -p 8080:8080 sample/vertx-java
----
== vertx-docker-groovy
This example deploys a Groovy verticle inside Docker.
The code is in the link:vertx-docker-groovy[vertx-docker-groovy] directory.
To build and run it:
----
docker build -t sample/vertx-groovy .
docker run -t -i -p 8080:8080 sample/vertx-groovy
----
== vertx-docker-example
This example builds and deploys a Java verticle inside Docker using Apache Maven
The code is in the link:vertx-docker-example[vertx-docker-example] directory.
To build and run it:
----
mvn clean package docker:build
docker run -t -i -p 8080:8080 vertx/vertx3-example
----
== vertx-docker-java-fatjar
This example deploys a Java verticle inside Docker. The verticle is packaged as a _fat jar_.
The code is in the link:vertx-docker-java-fatjar[vertx-docker-java-fatjar] directory.
To build and run it:
----
docker build -t sample/vertx-java-fat .
docker run -t -i -p 8080:8080 sample/vertx-java-fat
----
| 25.096154 | 296 | 0.747893 |
e6c7d0fe9fce4798c86ac5fda0bbbcda19a7f050 | 68 | asciidoc | AsciiDoc | docs/visualize/kibi/automatic_relation.asciidoc | rpatil524/kibi | ef015f25a559bf1623a5376fd24c68cd221fe240 | [
"Apache-2.0"
] | 546 | 2015-09-14T17:51:46.000Z | 2021-11-02T00:48:01.000Z | docs/visualize/kibi/automatic_relation.asciidoc | rpatil524/kibi | ef015f25a559bf1623a5376fd24c68cd221fe240 | [
"Apache-2.0"
] | 102 | 2015-09-28T14:14:32.000Z | 2020-07-21T21:23:20.000Z | docs/visualize/kibi/automatic_relation.asciidoc | rpatil524/kibi | ef015f25a559bf1623a5376fd24c68cd221fe240 | [
"Apache-2.0"
] | 144 | 2015-09-13T16:41:41.000Z | 2020-06-26T20:32:33.000Z | [[automatic_relation_filter]]
== Automatic Relational Filter
*TODO* | 17 | 30 | 0.794118 |
12ee96b21b0c752839b7b775ae8b80f90fa64cad | 4,846 | asciidoc | AsciiDoc | docs/setup/docker.asciidoc | orfeas0/kibana | e148eb4437cb890c93ee328ea424d2d67584de50 | [
"Apache-2.0"
] | 1 | 2020-07-08T17:31:29.000Z | 2020-07-08T17:31:29.000Z | docs/setup/docker.asciidoc | orfeas0/kibana | e148eb4437cb890c93ee328ea424d2d67584de50 | [
"Apache-2.0"
] | 3 | 2021-09-02T21:36:19.000Z | 2022-03-24T11:56:50.000Z | docs/setup/docker.asciidoc | orfeas0/kibana | e148eb4437cb890c93ee328ea424d2d67584de50 | [
"Apache-2.0"
] | null | null | null | [[docker]]
== Running Kibana on Docker
Docker images for Kibana are available from the Elastic Docker registry. The
base image is https://hub.docker.com/_/centos/[centos:7].
A list of all published Docker images and tags is available at
https://www.docker.elastic.co[www.docker.elastic.co]. The source code is in
https://github.com/elastic/dockerfiles/tree/{branch}/kibana[GitHub].
These images are free to use under the Elastic license. They contain open source
and free commercial features and access to paid commercial features.
{stack-ov}/license-management.html[Start a 30-day trial] to try out all of the
paid commercial features. See the
https://www.elastic.co/subscriptions[Subscriptions] page for information about
Elastic license levels.
[float]
[[pull-image]]
=== Pulling the image
Obtaining Kibana for Docker is as simple as issuing a +docker pull+ command
against the Elastic Docker registry.
ifeval::["{release-state}"=="unreleased"]
However, version {version} of Kibana has not yet been released, so no Docker
image is currently available for this version.
endif::[]
ifeval::["{release-state}"!="unreleased"]
["source","txt",subs="attributes"]
--------------------------------------------
docker pull {docker-repo}:{version}
--------------------------------------------
Alternatively, you can download other Docker images that contain only features
available under the Apache 2.0 license. To download the images, go to
https://www.docker.elastic.co[www.docker.elastic.co].
[float]
=== Running Kibana on Docker for development
Kibana can be quickly started and connected to a local Elasticsearch container for development
or testing use with the following command:
--------------------------------------------
docker run --link YOUR_ELASTICSEARCH_CONTAINER_NAME_OR_ID:elasticsearch -p 5601:5601 {docker-repo}:{version}
--------------------------------------------
endif::[]
[float]
[[configuring-kibana-docker]]
=== Configuring Kibana on Docker
The Docker images provide several methods for configuring Kibana. The
conventional approach is to provide a `kibana.yml` file as described in
{kibana-ref}/settings.html[Configuring Kibana], but it's also possible to use
environment variables to define settings.
[float]
[[bind-mount-config]]
==== Bind-mounted configuration
One way to configure Kibana on Docker is to provide `kibana.yml` via bind-mounting.
With +docker-compose+, the bind-mount can be specified like this:
["source","yaml",subs="attributes"]
--------------------------------------------
version: '2'
services:
kibana:
image: {docker-image}
volumes:
- ./kibana.yml:/usr/share/kibana/config/kibana.yml
--------------------------------------------
[float]
[[environment-variable-config]]
==== Environment variable configuration
Under Docker, Kibana can be configured via environment variables. When
the container starts, a helper process checks the environment for variables that
can be mapped to Kibana command-line arguments.
For compatibility with container orchestration systems, these
environment variables are written in all capitals, with underscores as
word separators. The helper translates these names to valid
Kibana setting names.
Some example translations are shown here:
.Example Docker Environment Variables
[horizontal]
**Environment Variable**:: **Kibana Setting**
`SERVER_NAME`:: `server.name`
`KIBANA_DEFAULTAPPID`:: `kibana.defaultAppId`
`MONITORING_ENABLED`:: `monitoring.enabled`
In general, any setting listed in <<settings>> can be
configured with this technique.
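Note that the translation cannot be a purely mechanical lower-casing, since setting names are camelCase (`kibana.defaultAppId`); conceptually, the helper matches the capitalized name against a list of known settings. A rough sketch of that idea (the `KNOWN_SETTINGS` list here is a tiny illustrative subset, not Kibana's real list):

```python
# Sketch of mapping an all-caps env var back to a Kibana setting name
# by matching it against a whitelist of known settings.
# KNOWN_SETTINGS is an illustrative subset, not Kibana's actual list.

KNOWN_SETTINGS = [
    "server.name",
    "kibana.defaultAppId",
    "monitoring.enabled",
    "elasticsearch.hosts",
]

def env_to_setting(env_var):
    # Compare case-insensitively with dots/underscores stripped
    target = env_var.replace("_", "").lower()
    for setting in KNOWN_SETTINGS:
        if setting.replace(".", "").lower() == target:
            return setting
    return None  # unknown variables are ignored

print(env_to_setting("SERVER_NAME"))          # server.name
print(env_to_setting("KIBANA_DEFAULTAPPID"))  # kibana.defaultAppId
print(env_to_setting("ELASTICSEARCH_HOSTS"))  # elasticsearch.hosts
```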
These variables can be set with +docker-compose+ like this:
["source","yaml",subs="attributes"]
----------------------------------------------------------
version: '2'
services:
kibana:
image: {docker-image}
environment:
SERVER_NAME: kibana.example.org
ELASTICSEARCH_HOSTS: http://elasticsearch.example.org
----------------------------------------------------------
Since environment variables are translated to CLI arguments, they take
precedence over settings configured in `kibana.yml`.
[float]
[[docker-defaults]]
==== Docker defaults
The following settings have different default values when using the Docker
images:
[horizontal]
`server.name`:: `kibana`
`server.host`:: `"0"`
`elasticsearch.hosts`:: `http://elasticsearch:9200`
`monitoring.ui.container.elasticsearch.enabled`:: `true`
NOTE: The setting `monitoring.ui.container.elasticsearch.enabled` is not
defined in the `-oss` image.
These settings are defined in the default `kibana.yml`. They can be overridden
with a <<bind-mount-config,custom `kibana.yml`>> or via
<<environment-variable-config,environment variables>>.
IMPORTANT: If replacing `kibana.yml` with a custom version, be sure to copy the
defaults to the custom file if you want to retain them. If not, they will
be "masked" by the new file.
| 34.614286 | 108 | 0.710277 |
7e92465bce53e68d9cc6d752d95c7076b757b6a4 | 683 | adoc | AsciiDoc | docs/schedule.adoc | PedrosWits/cool-course | f471839f9a0c47b9ac1021ee46433adb88bde1ae | [
"MIT"
] | null | null | null | docs/schedule.adoc | PedrosWits/cool-course | f471839f9a0c47b9ac1021ee46433adb88bde1ae | [
"MIT"
] | null | null | null | docs/schedule.adoc | PedrosWits/cool-course | f471839f9a0c47b9ac1021ee46433adb88bde1ae | [
"MIT"
] | null | null | null | = Cool Course
author <email>
== Schedule
1-day workshop/tutorial/activity
[cols="^,^,^,^"]
|===
|Time |Activity |Title |Session
|09:00 - 09:15
|Talk
|Workshop introduction and overview
|0
|09:15 - 09:30
|Talk
|The COURSE: why, when and how?
.2+|1
|09:30 - 10:30
|Practical
|COURSE-TOPIC A
|10:30 - 10:45
3+|Coffee Break
|10:45 - 11:00
|Talk
|COURSE-TOPIC B
.2+|2
|11:00 - 12:00
|Practical
|COURSE-TOPIC B
|12:00 - 13:30
3+|Lunch Break
|13:30 - 13:45
|Talk
.2+|COURSE-TOPIC C
.2+|3
|13:45 - 15:00
|Practical
|15:00 - 15:15
3+|Coffee Break
|15:15 - 15:30
|Talk
.2+|COURSE-TOPIC D
.2+|4
|15:30 - 16:45
|Practical
|16:45 - 17:00
|Discussion
|Final remarks. Q+A
|-
|===
| 10.19403 | 35 | 0.626647 |
1d57d4804d587d680d028e38405ebcfb46cda5ee | 1,294 | asciidoc | AsciiDoc | docs/java-rest/overview.asciidoc | dileepdkumar/ElasticSearch5.5 | 83109eded2fa9bbddf80f4556624a3c6102bbe4f | [
"Apache-2.0"
] | null | null | null | docs/java-rest/overview.asciidoc | dileepdkumar/ElasticSearch5.5 | 83109eded2fa9bbddf80f4556624a3c6102bbe4f | [
"Apache-2.0"
] | null | null | null | docs/java-rest/overview.asciidoc | dileepdkumar/ElasticSearch5.5 | 83109eded2fa9bbddf80f4556624a3c6102bbe4f | [
"Apache-2.0"
] | null | null | null | [[java-rest]]
== Overview
Official low-level client for Elasticsearch. It allows you to communicate with an
Elasticsearch cluster over HTTP and is compatible with all Elasticsearch versions.
=== Features
The low-level client's features include:
* minimal dependencies
* load balancing across all available nodes
* failover in case of node failures and upon specific response codes
* failed connection penalization (whether a failed node is retried depends on
how many consecutive times it failed; the more failed attempts the longer the
client will wait before trying that same node again)
* persistent connections
* trace logging of requests and responses
* optional automatic <<sniffer,discovery of cluster nodes>>
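As a sketch of the penalization idea in the list above: each consecutive failure roughly doubles the wait before the node is retried, up to a cap. The constants and helper below are illustrative, not the client's actual implementation:

```python
# Sketch of "failed connection penalization": each consecutive failure
# doubles how long the client waits before retrying a node, up to a cap.
# base/cap values here are illustrative, not the client's real constants.

def retry_delay(consecutive_failures, base=60, cap=1800):
    """Seconds to wait before retrying a node that failed N times in a row."""
    return min(cap, base * 2 ** (consecutive_failures - 1))

for failures in (1, 2, 3, 6):
    print(failures, retry_delay(failures))
# 1 60
# 2 120
# 3 240
# 6 1800  (capped)
```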
=== License
Copyright 2013-2016 Elasticsearch
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 29.409091 | 79 | 0.79289 |
65a646c31653029bb84b0c22fe7c33fe0585c6bf | 5,247 | adoc | AsciiDoc | _posts/blog/2020/2020-12-26-mengembalikan-nama-interface-ke-traditional-interface-name.adoc | bandithijo/v2 | 4c7bdd5813f60821e55e05ae38464bb2f8acfb48 | [
"MIT"
] | 4 | 2021-07-23T08:31:40.000Z | 2021-11-12T12:01:54.000Z | _posts/blog/2020/2020-12-26-mengembalikan-nama-interface-ke-traditional-interface-name.adoc | bandithijo/v2 | 4c7bdd5813f60821e55e05ae38464bb2f8acfb48 | [
"MIT"
] | null | null | null | _posts/blog/2020/2020-12-26-mengembalikan-nama-interface-ke-traditional-interface-name.adoc | bandithijo/v2 | 4c7bdd5813f60821e55e05ae38464bb2f8acfb48 | [
"MIT"
] | null | null | null | = Kembali Menggunakan Traditional Interface Name (eth0, wlan0, etc.)
Assyaufi, Rizqi Nur
:page-email: bandithijo@gmail.com
:page-navtitle: Kembali Menggunakan Traditional Interface Name (eth0, wlan0, etc.)
:page-excerpt: Menggunakan namespace yang baru untuk interface name, terkadang membuat bingung. Saya pilih untuk kembali menggunakan penamaan yang model lama saja.
:page-permalink: /blog/:title
:page-categories: blog
:page-tags: [networking]
:page-liquid:
:page-published: true
== Background
Those who are new to GNU/Linux may not know what is meant by a "traditional interface name".
Interface naming nowadays follows a new set of rules.
We can list the network interfaces present on the system using the commands provided by link:https://en.wikipedia.org/wiki/iproute2[*iproute2*^].
[source,console]
----
$ ip address show
----
Or, for short,
[source,console]
----
$ ip a s
----
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 00:16:d3:c4:fb:d2 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 08:11:96:00:00:00 brd ff:ff:ff:ff:ff:ff
----
The parts I highlighted are the interface names.
*lo* is the loopback interface.
*eth0* is the Ethernet interface.
*wlan0* is the wireless interface.
The trailing *0* increments as the number of interfaces grows, e.g. wlan1, wlan2, wlan3.
However, this namespace is no longer used since systemd v197, for a few security-related reasons.
Your interfaces are probably named something like `wlp3s0` or `enp1s0` instead. Yes, this is the new interface namespace.
The advantages of using the new namespace:
. Stable interface names across reboots
. Stable interface names even when hardware is added or removed, i.e. no re-enumeration takes place (to the level the firmware permits this)
. Stable interface names when kernels or drivers are updated/changed
. Stable interface names even if you have to replace broken ethernet cards by new ones
. The names are automatically determined without user configuration, they just work
. The interface names are fully predictable, i.e. just by looking at lspci you can figure out what the interface is going to be called
. dan seterusnya, masih banyak.
You can read the full explanation in this article: link:https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[*Predictable Network Interface Names*^].
None of those advantages apply to me, especially the first points,
because I only use a laptop and do not have many interfaces.
So I decided not to use the new interface namespace. Not a very compelling reason, I know. 😄
WARNING: *I do not recommend following what I do here*.
Okay, let's get right to it; I am too lazy to write up the theory. You can look it up yourselves.
== Solution
The Arch Wiki actually already documents this.
You can choose between:
. link:https://wiki.archlinux.org/index.php/Network_configuration#Change_interface_name[Change interface name^]
. link:https://wiki.archlinux.org/index.php/Network_configuration#Revert_to_traditional_interface_names[Revert to traditional interface names^]
For this note, as the title says, I will document the second option, reverting to traditional interface names.
This second option can itself be done in two ways:
=== 1. Masking the Udev Rule
Mask the udev rule that applies the new interface naming scheme.
[source,console]
----
$ ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules
----
=== 2. Kernel Parameter
The alternative is to add `net.ifnames=0` to the kernel parameters.
I use this alternative because it is practical. 😁
Since I use GRUB, I will add the parameter through the GRUB configuration, which is easier.
./etc/default/grub
[source,conf,linenums]
----
# GRUB boot loader configuration
# ...
# ...
GRUB_CMDLINE_LINUX_DEFAULT="... ... ..."
GRUB_CMDLINE_LINUX="net.ifnames=0" <1>
# ...
# ...
----
<1> Add `net.ifnames=0` to go back to the traditional namespace
*_Done!_*
Now regenerate the GRUB configuration (e.g. `# grub-mkconfig -o /boot/grub/grub.cfg`), reboot, and run `$ ip a s` again to check whether the interface names are back to the traditional namespace.
== Closing Notes
That is all I can write down for now.
I hope it is useful.
Thank you.
(\^_^)
== References
. link:https://wiki.archlinux.org/index.php/Network_configuration#Revert_to_traditional_interface_names[Arch Wiki - Network configuration: Revert to traditional interface names^]
Accessed: 2020/12/26
. link:https://wiki.artixlinux.org/Main/Migration#Configure_networking[Artix Wiki - Migration: Configure Networking^]
Accessed: 2020/12/27
| 36.692308 | 201 | 0.783495 |
f57f905ae04c8fbfa820d0fe5baff82e5888b4eb | 13,406 | asciidoc | AsciiDoc | x-pack/docs/en/rollup/understanding-groups.asciidoc | engimatic/elasticsearch | 9fb95fef9ca6043d8215b1bcd87fe5f9fbcf4dba | [
"Apache-2.0"
] | 1 | 2015-07-23T10:30:13.000Z | 2015-07-23T10:30:13.000Z | x-pack/docs/en/rollup/understanding-groups.asciidoc | engimatic/elasticsearch | 9fb95fef9ca6043d8215b1bcd87fe5f9fbcf4dba | [
"Apache-2.0"
] | null | null | null | x-pack/docs/en/rollup/understanding-groups.asciidoc | engimatic/elasticsearch | 9fb95fef9ca6043d8215b1bcd87fe5f9fbcf4dba | [
"Apache-2.0"
] | 2 | 2020-06-20T16:06:53.000Z | 2020-12-24T15:04:02.000Z | [[rollup-understanding-groups]]
== Understanding Groups
experimental[]
To preserve flexibility, Rollup Jobs are defined based on how future queries may need to use the data. Traditionally, systems force
the admin to make decisions about what metrics to roll up and on what interval, e.g. the average of `cpu_time` on an hourly basis. This
is limiting; if, at a future date, the admin wishes to see the average of `cpu_time` on an hourly basis _and partitioned by `host_name`_,
they are out of luck.
Of course, the admin can decide to roll up the `[hour, host]` tuple on an hourly basis, but as the number of grouping keys grows, so does the
number of tuples the admin needs to configure. Furthermore, these `[hour, host]` tuples are only useful for hourly rollups... daily, weekly,
or monthly rollups all require new configurations.
Rather than force the admin to decide ahead of time which individual tuples should be rolled up, Elasticsearch's Rollup jobs are configured
based on which groups are potentially useful to future queries. For example, this configuration:
[source,js]
--------------------------------------------------
"groups" : {
"date_histogram": {
"field": "timestamp",
"interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["hostname", "datacenter"]
},
"histogram": {
"fields": ["load", "net_in", "net_out"],
"interval": 5
}
}
--------------------------------------------------
// NOTCONSOLE
This allows `date_histogram` aggregations to be used on the `"timestamp"` field, `terms` aggregations to be used on the `"hostname"` and `"datacenter"`
fields, and `histogram` aggregations to be used on any of the `"load"`, `"net_in"`, `"net_out"` fields.
Importantly, these aggs/fields can be used in any combination. This aggregation:
[source,js]
--------------------------------------------------
"aggs" : {
"hourly": {
"date_histogram": {
"field": "timestamp",
"interval": "1h"
},
"aggs": {
"host_names": {
"terms": {
"field": "hostname"
}
}
}
}
}
--------------------------------------------------
// NOTCONSOLE
is just as valid as this aggregation:
[source,js]
--------------------------------------------------
"aggs" : {
    "hourly": {
        "date_histogram": {
            "field": "timestamp",
            "interval": "1h"
        },
        "aggs": {
            "data_center": {
                "terms": {
                    "field": "datacenter"
                },
                "aggs": {
                    "host_names": {
                        "terms": {
                            "field": "hostname"
                        },
                        "aggs": {
                            "load_values": {
                                "histogram": {
                                    "field": "load",
                                    "interval": 5
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE
You'll notice that the second aggregation is not only substantially larger, it also swapped the position of the terms aggregation on
`"hostname"`, illustrating how the order of aggregations does not matter to rollups. Similarly, while the `date_histogram` is required
for rolling up data, it isn't required while querying (although often used). For example, this is a valid aggregation for
Rollup Search to execute:
[source,js]
--------------------------------------------------
"aggs" : {
"host_names": {
"terms": {
"field": "hostname"
}
}
}
--------------------------------------------------
// NOTCONSOLE
Ultimately, when configuring `groups` for a job, think in terms of how you might wish to partition data in a query at a future date...
then include those in the config. Because Rollup Search allows any order or combination of the grouped fields, you just need to decide
if a field is useful for aggregating later, and how you might wish to use it (terms, histogram, etc.).
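This "any order, any combination" property boils down to a subset check: a job can service a query as long as every grouping the query uses was configured. A rough sketch of that check (hypothetical helper code, not Elasticsearch's actual RollupSearch matching logic):

```python
# Sketch of the "any order, any combination" rule for rollup groups.
# Hypothetical helper, not Elasticsearch's actual job-matching code.

def job_groups(config):
    """Flatten a job's 'groups' config into (field, agg_type) pairs."""
    pairs = set()
    dh = config.get("date_histogram")
    if dh:
        pairs.add((dh["field"], "date_histogram"))
    for field in config.get("terms", {}).get("fields", []):
        pairs.add((field, "terms"))
    for field in config.get("histogram", {}).get("fields", []):
        pairs.add((field, "histogram"))
    return pairs

def can_service(config, query_pairs):
    """A job can answer a query if every grouping the query needs
    was configured, regardless of order or nesting."""
    return set(query_pairs) <= job_groups(config)

groups = {
    "date_histogram": {"field": "timestamp", "interval": "1h"},
    "terms": {"fields": ["hostname", "datacenter"]},
    "histogram": {"fields": ["load", "net_in", "net_out"], "interval": 5},
}

# terms on hostname under an hourly date_histogram: serviceable
print(can_service(groups, [("timestamp", "date_histogram"), ("hostname", "terms")]))  # True
# terms on a field that was never grouped: not serviceable
print(can_service(groups, [("region", "terms")]))  # False
```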
=== Grouping Limitations with heterogeneous indices
There is a known limitation to Rollup groups, due to some internal implementation details at this time. The Rollup feature leverages
the `composite` aggregation from Elasticsearch. At the moment, the composite agg only returns buckets when all keys in the tuple are non-null.
Put another way, if you request keys `[A,B,C]` in the composite aggregation, the only documents that are aggregated are those that have
_all_ of the keys `A`, `B` and `C`.
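That tuple rule is easy to reproduce in a few lines of Python (a simulation of the bucketing behavior, not the actual composite implementation):

```python
# Simulate the composite aggregation's key rule: a document only
# contributes to a bucket if *every* requested key is non-null.

def composite_buckets(docs, keys):
    buckets = {}
    for doc in docs:
        values = tuple(doc.get(k) for k in keys)
        if any(v is None for v in values):
            continue  # docs missing any key are skipped entirely
        buckets[values] = buckets.get(values, 0) + 1
    return buckets

docs = [
    {"A": 1, "B": 2, "C": 3},
    {"A": 1, "B": 2},           # no "C": dropped when grouping on [A, B, C]
    {"A": 9, "B": 8, "C": 7},
]

print(sum(composite_buckets(docs, ["A", "B", "C"]).values()))  # 2 docs aggregated
print(sum(composite_buckets(docs, ["A", "B"]).values()))       # 3 docs aggregated
```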
Because Rollup uses the composite agg during the indexing process, it inherits this behavior. Practically speaking, if all of the documents
in your index are homogeneous (they have the same mapping), you can ignore this limitation and stop reading now.
However, if you have a heterogeneous collection of documents that you wish to roll up, you may need to configure two or more jobs to
accurately cover the original data.
As an example, if your index has two types of documents:
[source,js]
--------------------------------------------------
{
"timestamp": 1516729294000,
"temperature": 200,
"voltage": 5.2,
"node": "a"
}
--------------------------------------------------
// NOTCONSOLE
and
[source,js]
--------------------------------------------------
{
"timestamp": 1516729294000,
"price": 123,
"title": "Foo"
}
--------------------------------------------------
// NOTCONSOLE
it may be tempting to create a single, combined rollup job which covers both of these document types, something like this:
[source,js]
--------------------------------------------------
PUT _xpack/rollup/job/combined
{
"index_pattern": "data-*",
"rollup_index": "data_rollup",
"cron": "*/30 * * * * ?",
"page_size" :1000,
"groups" : {
"date_histogram": {
"field": "timestamp",
"interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["node", "title"]
}
},
"metrics": [
{
"field": "temperature",
"metrics": ["min", "max", "sum"]
},
{
"field": "price",
"metrics": ["avg"]
}
]
}
--------------------------------------------------
// NOTCONSOLE
You can see that it includes a `terms` grouping on both "node" and "title", fields that are mutually exclusive in the document types.
*This will not work.* Because the `composite` aggregation (and by extension, Rollup) only returns buckets when all keys are non-null,
and there are no documents that have both a "node" field and a "title" field, this rollup job will not produce any rollups.
Instead, you should configure two independent jobs (sharing the same index, or going to separate indices):
[source,js]
--------------------------------------------------
PUT _xpack/rollup/job/sensor
{
"index_pattern": "data-*",
"rollup_index": "data_rollup",
"cron": "*/30 * * * * ?",
"page_size" :1000,
"groups" : {
"date_histogram": {
"field": "timestamp",
"interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["node"]
}
},
"metrics": [
{
"field": "temperature",
"metrics": ["min", "max", "sum"]
}
]
}
--------------------------------------------------
// NOTCONSOLE
[source,js]
--------------------------------------------------
PUT _xpack/rollup/job/purchases
{
"index_pattern": "data-*",
"rollup_index": "data_rollup",
"cron": "*/30 * * * * ?",
"page_size" :1000,
"groups" : {
"date_histogram": {
"field": "timestamp",
"interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["title"]
}
},
"metrics": [
{
"field": "price",
"metrics": ["avg"]
}
]
}
--------------------------------------------------
// NOTCONSOLE
Notice that each job now deals with a single "document type", and will not run into the limitations described above. We are working on changes
in core Elasticsearch to remove this limitation from the `composite` aggregation, and the documentation will be updated accordingly
when this particular scenario is fixed.
=== Doc counts and overlapping jobs
There is an issue with doc counts, related to the above grouping limitation. Imagine you have two Rollup jobs saving to the same index, where
one job is a "subset" of another job.
For example, you might have jobs with these two groupings:
[source,js]
--------------------------------------------------
PUT _xpack/rollup/job/sensor-all
{
"groups" : {
"date_histogram": {
"field": "timestamp",
"interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["node"]
}
},
"metrics": [
{
"field": "price",
"metrics": ["avg"]
}
]
...
}
--------------------------------------------------
// NOTCONSOLE
and
[source,js]
--------------------------------------------------
PUT _xpack/rollup/job/sensor-building
{
"groups" : {
"date_histogram": {
"field": "timestamp",
"interval": "1h",
"delay": "7d"
},
"terms": {
"fields": ["node", "building"]
}
}
...
}
--------------------------------------------------
// NOTCONSOLE
The first job `sensor-all` contains the groupings and metrics that apply to all data in the index. The second job is rolling up a subset
of data (in different buildings) which also include a building identifier. You did this because combining them would run into the limitation
described in the previous section.
This _mostly_ works, but can sometimes return incorrect `doc_counts` when you search. All metrics will be valid, however.
The issue arises from the composite agg limitation described before, combined with search-time optimization. Imagine you try to run the
following aggregation:
[source,js]
--------------------------------------------------
"aggs" : {
"nodes": {
"terms": {
"field": "node"
}
}
}
--------------------------------------------------
// NOTCONSOLE
This aggregation could be serviced by either the `sensor-all` or the `sensor-building` job, since they both group on the node field. So the RollupSearch
API will search both of them and merge results. This will result in *correct* doc_counts and *correct* metrics. No problem here.
The issue arises from an aggregation that can _only_ be serviced by `sensor-building`, like this one:
[source,js]
--------------------------------------------------
"aggs" : {
"nodes": {
"terms": {
"field": "node"
},
"aggs": {
"building": {
"terms": {
"field": "building"
}
}
}
}
}
--------------------------------------------------
// NOTCONSOLE
Now we run into a problem. The RollupSearch API will correctly identify that only `sensor-building` job has all the required components
to answer the aggregation, and will search it exclusively. Unfortunately, due to the composite aggregation limitation, that job only
rolled up documents that have both a "node" and a "building" field. Meaning that the doc_counts for the `"nodes"` aggregation will not
include counts for any document that doesn't have `[node, building]` fields.
- The `doc_count` for the `"nodes"` aggregation will be incorrect because it only contains counts for `nodes` that also have buildings
- The `doc_count` for the `"building"` aggregation will be correct
- Any metrics, on any level, will be correct
==== Workarounds
There are two main workarounds if you find yourself with a schema like the above.
Easiest and most robust method: use separate indices to store your rollups. The limitations arise because you have several document
schemas co-habitating in a single index, which makes it difficult for rollups to correctly summarize. If you make several rollup
jobs and store them in separate indices, these sorts of difficulties do not arise. It does, however, keep you from searching across several
different rollup indices at the same time.
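For example, each job could write to its own rollup index by setting `rollup_index` in the job configuration (the job bodies are abbreviated, and the job and index names here are only illustrative):

[source,js]
--------------------------------------------------
PUT _rollup/job/sensor-all
{
    "index_pattern": "sensor-*",
    "rollup_index": "sensor_rollup_all",
    ...
}

PUT _rollup/job/sensor-building
{
    "index_pattern": "sensor-*",
    "rollup_index": "sensor_rollup_building",
    ...
}
--------------------------------------------------
// NOTCONSOLE

With the jobs separated this way, each rollup index contains a single document schema.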
The other workaround is to include an "off-target" aggregation in the query, which pulls in the "superset" job and corrects the doc counts.
The RollupSearch API determines the best job to search for each "leaf node" in the aggregation tree. So if we include a metric agg on `price`,
which was only defined in the `sensor-all` job, that will "pull in" the other job:
[source,js]
--------------------------------------------------
"aggs" : {
"nodes": {
"terms": {
"field": "node"
},
"aggs": {
"building": {
"terms": {
"field": "building"
}
},
"avg_price": {
"avg": { "field": "price" } <1>
}
}
}
}
--------------------------------------------------
// NOTCONSOLE
<1> Adding an avg aggregation here will fix the doc counts
Because only `sensor-all` job had an `avg` on the price field, the RollupSearch API is forced to pull in that additional job for searching,
and will merge/correct the doc_counts as appropriate. This sort of workaround applies to any additional aggregation -- metric or bucketing --
although it can be tedious to look through the jobs and determine the right one to add.
==== Status
We realize this is an onerous limitation, and somewhat breaks the rollup contract of "pick the fields to rollup, we do the rest". We are
actively working to get the limitation in the `composite` aggregation fixed, together with the related issues in Rollup. The documentation will be updated when
the fix is implemented.
fabric8-analytics-tagger
========================
image:https://ci.centos.org/view/Devtools/job/devtools-fabric8-analytics-tagger-fabric8-analytics/badge/icon[Build status, link="https://ci.centos.org/view/Devtools/job/devtools-fabric8-analytics-tagger-fabric8-analytics/"]
image:https://codecov.io/gh/fabric8-analytics/fabric8-analytics-tagger/branch/master/graph/badge.svg[Code coverage, link="https://codecov.io/gh/fabric8-analytics/fabric8-analytics-tagger"]
Keyword extractor and tagger for fabric8-analytics.
== Usage
For getting all available commands issue:
```
$ f8a_tagger_cli.py --help
Usage: f8a_tagger_cli.py [OPTIONS] COMMAND [ARGS]...
Tagger for fabric8-analytics.
Options:
-v, --verbose Level of verbosity, can be applied multiple times.
--help Show this message and exit.
Commands:
aggregate Aggregate keywords to a single file.
collect Collect keywords from external resources.
diff Compute diff on keyword files.
lookup Perform keywords lookup.
reckon Compute keywords and stopwords based on stemmer and lemmatizer configuration.
```
To run a command in verbose mode (adds additional messages), run:
```sh
$ f8a_tagger_cli.py -vvvv lookup /path/to/tree/or/file
```
Verbose output will give you additional insight into the steps that are performed during execution (debug mode).
== Installation using pip
```sh
$ git clone https://github.com/fabric8-analytics/fabric8-analytics-tagger && cd fabric8-analytics-tagger
$ python3 setup.py install # or make install
```
== Tagging workflow
=== Collecting keywords - `collect`
The prerequisite for tagging is to collect keywords that are actually used out there by developers. This also means that the tagger uses keywords that developers consider interesting.

The collection is done by collectors (available in `f8a_tagger/collectors`). These collectors gather keywords and also count the number of occurrences of each gathered keyword. Collectors do not perform any additional post-processing, but rather gather raw keywords that are then post-processed by the `aggregate` command (see below).

An example of raw keywords can be link:https://github.com/fabric8-analytics/fabric8-analytics-tags/blob/master/raw/pypi_tags.yaml[the following YAML] file that keeps keywords gathered in the PyPI ecosystem.
=== Aggregating keywords - `aggregate`
If you take a look at the raw keywords gathered by the `collect` command explained above, you can easily spot a lot of keywords that are written in a wrong way (they have broken encoding, multi-line keywords, numerical values, one-letter keywords, ...). These keywords should be removed, the remaining keywords should be normalized and, if possible, some obvious synonyms can be computed for use during the keyword lookup phase.

The `aggregate` command handles:

* keywords filtering - suspicious and obviously useless keywords can be directly thrown away (e.g. bogus keywords, one-letter keywords, ...)
* keywords normalization - all keywords are normalized to a lowercase form having only ASCII characters; multi-word keywords are separated with dashes rather than spaces
* synonyms computation - some synonyms can be computed directly - e.g. some people use `machine-learning`, some use `machine learning`
* aggregating multiple `keywords.yaml` files - the `aggregate` command can aggregate multiple `keywords.yaml` files into one, which is especially useful if more than one keyword source is available for collecting keywords
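A minimal sketch of the normalization and synonym rules described above (an approximation; the exact rules implemented in the tagger may differ):

```python
import re
import unicodedata

def normalize_keyword(raw):
    """Lowercase, keep only ASCII characters, and join words with dashes."""
    ascii_form = unicodedata.normalize("NFKD", raw).encode("ascii", "ignore").decode()
    return re.sub(r"[\s_]+", "-", ascii_form.lower().strip())

def obvious_synonyms(keyword):
    """Compute trivial synonyms for a dash-separated multi-word keyword."""
    words = keyword.split("-")
    if len(words) < 2:
        return []
    return [" ".join(words), "".join(words), "_".join(words)]
```

For instance, `normalize_keyword("Machine Learning")` yields `machine-learning`, and `obvious_synonyms` then derives the space-, joined- and underscore-separated variants.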
The output of the `aggregate` command is a single configuration file (either JSON or YAML) that keeps the following (aggregated) entries:

* the keywords themselves
* synonyms
* regular expressions that match the given keyword
* the occurrence count of the given keyword in the keyword sources (used later for keyword scoring, see below)
An example of keyword entries produced by the `aggregate` command could be:

```yaml
machine-learning:
  synonyms:
    - machine learning
    - machinelearning
    - machine-learning
    - machine_learning
  occurrence_count: 56
django:
  occurrence_count: 2654
  regexp:
    - '.*django.*'
```
The `keywords.yaml` file can be, of course, additionally manually changed as desired.
An example of automatically aggregated `keywords.yaml` can be found in link:https://github.com/fabric8-analytics/fabric8-analytics-tags/blob/master/pypi_tags.yaml[fabric8-analytics-tags] repository. This `keywords.yaml` file was computed based on link:https://github.com/fabric8-analytics/fabric8-analytics-tags/blob/master/raw/pypi_tags.yaml[collected raw keywords from PyPI] stated above.
=== Keywords lookup - `lookup`
The overall outcome of the steps above is a single `keywords.yaml` file. This file, together with a `stopwords.txt` file keeping stopwords, is the input for the `lookup` command.

The `lookup` command does the whole heavy computation needed for keyword extraction. It relies on link:http://www.nltk.org/[NLTK] for many natural language processing tasks.
The overall high-level overview of the `lookup` command can be described in the following steps:
1. The first step is pre-processing of the input files. Input files can be written in different formats - besides plaintext, text files using different markup formats (such as Markdown, AsciiDoc, and such) can also be used.
2. After input pre-processing, plaintext without any markup formatting parts is available. This text is then split into sentences. The actual split is done in a smart way (so "This Mr. Baron e.g. Mr. Foo." will be one sentence - not just split on dots).
3. Sentences are tokenized into words. This tokenization is done, again, in a smart way ("e.g. isn't" is split into three tokens - "e.g", "is" and "n't").
4. link:https://en.wikipedia.org/wiki/Lemmatisation[Lemmatization] - all words (tokens) are replaced with their representative, as words appear in several inflected forms. Lemmatization uses NLTK's WordNet corpus (a large lexical database of English words).
5. After lemmatization, link:https://en.wikipedia.org/wiki/Stemming[stemming] is performed. Stemming ensures that different words are mapped to their word stem (e.g. "licensing" and "license" become the same). There are different stemmers available; check `lookup --help` for a listing of all of them. Check out link:https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html[Stanford's NLP] for more insights on lemmatization and stemming.
6. Unwanted tokens are removed - tokens are checked against the stopwords file and matching tokens are dropped. This step makes the lookup perform faster and also removes obviously wrong words that shouldn't be marked as keywords (words with high entropy).
7. Ngrams are calculated for multi-word keywords by systematically concatenating tokens (e.g. the tokens `["this", "is", "machine", "learning"]` with an ngram size equal to 2 create the following tokens: `["this", "is", "machine", "learning", "this is", "is machine", "machine learning"]`). This step makes lookups of multi-word keywords (such as "machine learning") possible. The actual ngram size (bigrams, trigrams) is determined by the `keywords.yaml` configuration file (based on synonyms), but can be stated explicitly using the `--ngram-size` option.
8. The actual lookup against the `keywords.yaml` configuration file. The constructed array of tokens with ngrams is checked against the `keywords.yaml` file. The output of this step is an array of keywords found during the keyword mining.
9. The last step scores the found keywords based on their relevance in the system (using the occurrence count of the found keyword in the keyword sources and its occurrence count in the text).
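The ngram construction described in step 7 can be sketched as follows (a simplified version for illustration):

```python
def with_ngrams(tokens, ngram_size):
    """Extend a token list with all ngrams of sizes 2..ngram_size."""
    result = list(tokens)
    for n in range(2, ngram_size + 1):
        for i in range(len(tokens) - n + 1):
            result.append(" ".join(tokens[i:i + n]))
    return result
```

With the example from step 7, `with_ngrams(["this", "is", "machine", "learning"], 2)` returns exactly the seven tokens listed above.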
You can check the output of all steps by running the tagger in debug mode, by supplying multiple `--verbose` command line options. In that case the tagger will report which steps are performed, what their input is and what the outcome is. This can also help you when debugging what is going on when using the tagger.
=== Working with keywords.yaml and stopwords
There are a few commands prepared that can make your life easier when working with the keywords database.
==== Using `reckon` command
This command will apply lemmatization and stemming to your `keywords.yaml` and `stopwords.txt` files. The output is then printed so you can check the form of keywords and stopwords that will be used during lookup (with respect to lemmatization and stemming).
Check `reckon --help` for more info on available options.
==== Using `diff` command
The `diff` command will give you an overview of what has changed in a `keywords.yaml` file. It simply prints the added synonyms and regular expressions that differ between `keywords.yaml` files. Missing/added keywords are also reported, to help you see changes in your configuration files.
== Configuration files
=== keywords.yaml
The file `keywords.yaml` keeps all the keywords, in the following form:
```yaml
keyword:
  occurrence_count: 42
  synonyms:
    - list
    - of
    - synonyms
  regexp:
    - 'list.*'
    - 'o{1}f{1}'
    - 'regular[ _-]expressions?'
```
A keyword is a key to a dictionary containing additional fields:

* synonyms - a list of synonyms for the given keyword
* regexp - a list of regular expressions that match the given keyword
* occurrence_count - the number of times the given keyword was found in the external source (helps with keyword scoring)
For example, if you would like to define a keyword `django` that matches all words containing "`django`", just define:
```yaml
django:
  occurrence_count: 1339
  regexp:
    - '.*django.*'
```
Another example demonstrates synonyms. To define IP, IPv4 and IPv6 as synonyms of networking, just define the following entry:
```yaml
networking:
  synonyms:
    - ip
    - ipv4
    - ipv6
```
Regular expressions conform to link:https://docs.python.org/3/library/re.html[Python regular expressions].
=== stopwords.txt
This file contains all stopwords (words that should be left out from text analysis) in raw/plaintext and regular expression format. All stopwords are listed one per line.
An example of stopwords file keeping stopwords ("would", "should" and "are"):
```
would
should
are
```
Regular expressions that describe stopwords can also be specified.
An example of regular expression stopwords:
```
re: [0-9][0-9]*
re: https?://[a-zA-Z0-9][a-zA-Z0-9.]*.[a-z]{2,3}
```
In the example above, two regular expressions define stopwords. The first one defines stopwords that consist purely of integer numbers (any integer number will be dropped from the textual analysis). The latter filters out any URL (the regexp is simplified).
Regular expressions conform to link:https://docs.python.org/3/library/re.html[Python regular expressions].
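A sketch of how such a stopwords file could be parsed and applied (the `re:` prefix handling and the full-match semantics here are assumptions derived from the format described above; the tagger's own loader may differ):

```python
import re

def load_stopwords(lines):
    """Split stopword lines into literal words and compiled regexps."""
    plain, patterns = set(), []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("re: "):
            patterns.append(re.compile(line[4:]))
        else:
            plain.add(line)
    return plain, patterns

def is_stopword(token, plain, patterns):
    # Full-match semantics: a regexp has to cover the whole token.
    return token in plain or any(p.fullmatch(token) for p in patterns)
```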
== Development environment
If you would like to set up a virtualenv for your environment, just issue the prepared `make venv` Make target:
```sh
$ make venv
```
After this command, a virtual environment should be available; it can be accessed using:
```sh
$ source venv/bin/activate
```
And exited using:
```sh
$ deactivate
```
To run checks, issue `make check` command:
```sh
$ make check
```
The check Make target runs a set of linters provided by link:https://coala.io/[Coala]; `pylint` and `pydocstyle` are run as well. To execute only the desired linter, run the appropriate Make target:
```sh
$ make coala
$ make pylint
$ make pydocstyle
```
== Evaluating accuracy
The tagger does not use any machine learning technique to gather keywords. All its steps are data mining techniques, so there is no "accuracy" that could be evaluated. The tagger simply checks for important, key words that are relevant (low entropy). The overall quality of the keywords found is equal to the quality of the `keywords.yaml` file.
== Practices
* all collectors should receive a set of keywords that are all lowercase
* the only delimiter that is allowed for multi-word keywords is a dash (`-`); all spaces should be replaced with a dash
* synonyms for multi-word keywords are automatically created in the aggregate command, if requested
== README.json
README.json is a format introduced by one task (`GitReadmeCollectorTask`) present in fabric8-analytics-worker. The structure of the document is described by one JSON file containing two keys:
* `content` - raw content of README file
* `type` - content type that can be markdown, ReStructuredText, ... (see `f8a_tagger.parsers.abstract` for more info)
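For illustration, a hypothetical README.json document could look like this (the exact `type` strings are defined in `f8a_tagger.parsers.abstract`, so the value here is only illustrative):

```json
{
    "content": "# my-project\n\nA short description of the project...",
    "type": "Markdown"
}
```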
== Parsers
Parsers are used to transform README.json files to plaintext files. Their main goal is to remove any markup specific annotations and provide just plaintext that can be directly used for additional text processing.
You can see implementation of parsers in the `f8a_tagger/parsers` directory.
== Collectors
There is also a set of collectors present that collect keywords/topics/tags from various external resources such as PyPI, Maven Central and the like. These collectors produce a list of keywords with their occurrence counts that can later be used for keyword extraction.
All collectors are present under `f8a_tagger/collectors` package.
== Check for all possible issues
The script named `check-all.sh` is to be used to check the sources for all detectable errors and issues. This script can be run w/o any arguments:
----
./check-all.sh
----
Expected script output:
----
Running all tests and checkers
Check all BASH scripts
OK
Check documentation strings in all Python source file
OK
Detect common errors in all Python source file
OK
Detect dead code in all Python source file
OK
Run Python linter for Python source file
OK
Unit tests for this project
OK
Done
Overal result
OK
----
An example of script output when one error is detected:
----
Running all tests and checkers
Check all BASH scripts
Error: please look into files check-bashscripts.log and check-bashscripts.err for possible causes
Check documentation strings in all Python source file
OK
Detect common errors in all Python source file
OK
Detect dead code in all Python source file
OK
Run Python linter for Python source file
OK
Unit tests for this project
OK
Done
Overal result
One error detected!
----
== Coding standards
You can use scripts `run-linter.sh` and `check-docstyle.sh` to check if the code follows https://www.python.org/dev/peps/pep-0008/[PEP 8] and https://www.python.org/dev/peps/pep-0257/[PEP 257] coding standards. These scripts can be run w/o any arguments:
----
./run-linter.sh
./check-docstyle.sh
----
The first script checks the indentation, line lengths, variable names, white space around operators etc. The second
script checks all documentation strings - its presence and format. Please fix any warnings and errors reported by these
scripts.
The list of directories containing source code that needs to be checked is stored in the file `directories.txt`.
== Code complexity measurement
The scripts `measure-cyclomatic-complexity.sh` and `measure-maintainability-index.sh` are used to measure code complexity. These scripts can be run w/o any arguments:
----
./measure-cyclomatic-complexity.sh
./measure-maintainability-index.sh
----
The first script measures the cyclomatic complexity of all Python sources found in the repository. Please see https://radon.readthedocs.io/en/latest/commandline.html#the-cc-command[this table] for a further explanation of how to comprehend the results.
The second script measures maintainability index of all Python sources found in the repository. Please see https://radon.readthedocs.io/en/latest/commandline.html#the-mi-command[the following link] with explanation of this measurement.
You can specify the command line option `--fail-on-error` if you need to check and use the exit code in your workflow. In this case the script returns 0 when no failures have been found and a non-zero value otherwise.
== Dead code detection
The script `detect-dead-code.sh` can be used to detect dead code in the repository. This script can be run w/o any arguments:
----
./detect-dead-code.sh
----
Please note that due to Python's dynamic nature, static code analyzers are likely to miss some dead code. Also, code that is only called implicitly may be reported as unused.
Because of these potential problems, only code detected with more than 90% confidence is reported.
The list of directories containing source code that needs to be checked is stored in the file `directories.txt`.
== Common issues detection
The script `detect-common-errors.sh` can be used to detect common errors in the repository. This script can be run w/o any arguments:
----
./detect-common-errors.sh
----
Please note that only semantic problems are reported.
The list of directories containing source code that needs to be checked is stored in the file `directories.txt`.
== Check for scripts written in BASH
The script named `check-bashscripts.sh` can be used to check all BASH scripts (in fact: all files with the `.sh` extension) for various possible issues, incompatibilities, and caveats. This script can be run w/o any arguments:
----
./check-bashscripts.sh
----
Please see https://github.com/koalaman/shellcheck[the following link] for a further explanation of how ShellCheck works and which issues can be detected.
== Workers
*Workers* (aka *slaves*) are running Spark instances where executors live to execute tasks. They are the compute nodes in Spark.
CAUTION: FIXME Are workers perhaps part of Spark Standalone only?
CAUTION: FIXME How many executors are spawned per worker?
A worker receives serialized tasks that it runs in a thread pool.
It hosts a local link:spark-BlockManager.adoc[Block Manager] that serves blocks to other workers in a Spark cluster. Workers communicate among themselves using their Block Manager instances.
CAUTION: FIXME Diagram of a driver with workers as boxes.
This section explains task execution in Spark and Spark's underlying execution model, along with new vocabulary often faced in the Spark UI.
link:spark-SparkContext.adoc[When you create SparkContext], each worker starts an executor. This is a separate process (JVM), and it loads your jar, too. The executors connect back to your driver program. Now the driver can send them commands, like `flatMap`, `map` and `reduceByKey`. When the driver quits, the executors shut down.
A new process is not started for each step. A new process is started on each worker when the SparkContext is constructed.
The executor deserializes the command (this is possible because it has loaded your jar), and executes it on a partition.
Shortly speaking, an application in Spark is executed in three steps:
1. Create RDD graph, i.e. DAG (directed acyclic graph) of RDDs to represent entire computation.
2. Create stage graph, i.e. a DAG of stages that is a logical execution plan based on the RDD graph. Stages are created by breaking the RDD graph at shuffle boundaries.
3. Based on the plan, schedule and execute tasks on workers.
link:exercises/spark-examples-wordcount-spark-shell.adoc[In the WordCount example], the RDD graph is as follows:
file -> lines -> words -> per-word count -> global word count -> output
Based on this graph, two stages are created. The *stage* creation rule is based on the idea of *pipelining* as many link:spark-rdd.adoc[narrow transformations] as possible. RDD operations with "narrow" dependencies, like `map()` and `filter()`, are pipelined together into one set of tasks in each stage.
In the end, every stage will only have shuffle dependencies on other stages, and may compute multiple operations inside it.
In the WordCount example, the narrow transformation finishes at per-word count. Therefore, you get two stages:
* file -> lines -> words -> per-word count
* global word count -> output
Once stages are defined, Spark will generate link:spark-scheduler-Task.adoc[tasks] from link:spark-scheduler-Stage.adoc[stages]. The first stage will create link:spark-scheduler-ShuffleMapTask.adoc[ShuffleMapTask]s with the last stage creating link:spark-scheduler-ResultTask.adoc[ResultTask]s because in the last stage, one action operation is included to produce results.
The number of tasks to be generated depends on how your files are distributed. Suppose that you have three different files in three different nodes; the first stage will generate 3 tasks: one task per partition.
Therefore, you should not map your steps to tasks directly. A task belongs to a stage, and is related to a partition.
The number of tasks being generated in each stage will be equal to the number of partitions.
=== [[Cleanup]] Cleanup
CAUTION: FIXME
=== [[settings]] Settings
* `spark.worker.cleanup.enabled` (default: `false`) <<Cleanup, Cleanup>> enabled.
= Jpa and private setter and private constructor
// See https://hubpress.gitbooks.io/hubpress-knowledgebase/content/ for information about the parameters.
// :hp-image: /covers/cover.png
// :published_at: 2019-05-10
// :hp-tags: JPA, Setter, OO,
// :hp-alt-title: Jpa and private setter and private constructor
Today, with my colleagues, I talked about Object-Oriented Programming and why we shouldn't always generate all the getters and setters when we design a class (https://www.javaworld.com/article/2073723/why-getter-and-setter-methods-are-evil.html[Why getter and setter methods are evil]).
And someone told me that JPA forces us to declare setters and getters in an entity class. Does it?
== What is the JPA requirements for an entity class
=== No setter ?
When an entity class doesn't contain a setter for a field, JPA will raise the following error :
....
org.hibernate.PropertyNotFoundException: Could not locate setter method for property
....
=== No constructor ?
When an entity class doesn't contain a constructor without parameters, JPA will raise the following error :
....
org.hibernate.InstantiationException: No default constructor for entity:
....
== Conclusion
JPA needs a default constructor and a setter for each declared field. Does it mean we have to expose the default constructor and each setter ?
It doesn't...
JPA supports private constructor and private setters.
[source,java]
----
@Entity
public class Person {

    private long id;
    private String name;

    public Person(long id, String name) {
        this.id = id;
        this.name = name;
    }

    private Person() {
    }

    @Id
    public long getId() {
        return id;
    }

    private void setId(long id) {
        this.id = id;
    }

    @Column
    public String getName() {
        return name;
    }

    private void setName(String name) {
        this.name = name;
    }
}
----
This entity is a valid JPA entity and it can't be modified after its creation.
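The reason this works is that JPA providers do not call the constructor and setters directly - they go through reflection, which can bypass `private`. Here is a minimal plain-Java illustration (no JPA involved; the `Person` stand-in and the `materialize` helper are hypothetical, only sketching what a provider does when loading a row):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public class ReflectionDemo {

    // Stand-in for the entity above: private no-arg constructor, private setter.
    public static class Person {
        private String name;

        private Person() {
        }

        private void setName(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    // Roughly what a JPA provider does when materializing a row.
    public static Person materialize(String name) throws Exception {
        Constructor<Person> ctor = Person.class.getDeclaredConstructor();
        ctor.setAccessible(true); // bypasses 'private'
        Person person = ctor.newInstance();

        Method setter = Person.class.getDeclaredMethod("setName", String.class);
        setter.setAccessible(true);
        setter.invoke(person, name);
        return person;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(materialize("Ada").getName());
    }
}
```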
You can use the following GitHub project to try it by yourself.
https://github.com/mikrethor/jpa-private[JpaPrivate @ *GitHub*]
Freecell Solver's Command-Line Syntax and Usage
===============================================
Shlomi Fish <shlomif@cpan.org>
:Date: 2009-08-29
:Revision: $Id$
[id="the_programs"]
The programs
------------
Most command-line switches have two versions:
* A short POSIX one which is a dash followed by a letter or a few. This option
must come standalone and not clustered: +-sam+ is not equivalent to
specifying +-s+, +-a+ and +-m+.
* A long switch which is two dashes followed by the command string. For
example: +--prelude+, +--st-name+.
If command line options take parameters, these are passed as separate
arguments - Freecell Solver won't recognise a parameter preceded by an equal
sign. +--st-name=myname+ is invalid, while +--st-name myname+ is OK.
[id="scope_of_the_opts"]
The Scope of the Options
~~~~~~~~~~~~~~~~~~~~~~~~
The scope of the options is mentioned along with them. Options can be:
1. Global - affects all the soft-threads.
2. Instance-specific - affects an instance (separated by the +--next-instance+
option below). Each instance consists of several flares.
3. Flare-specific - affects the current flare (separated by the +--next-flare+
option below. Each flare consists of several hard threads.
4. Hard-thread-specific - affects the current hard thread (separated by
the +--next-hard-thread+ option below. Each hard thread consists of several
soft threads.
5. Soft-thread-specific - affects only the current soft thread.
[id="getting_help"]
Getting Help
------------
[id="help_flag"]
-h , --help
~~~~~~~~~~~
*Global*
This option displays a help text on the screen. The help summarizes some of the ways to use the program and how to get more help.
[id="version_flag"]
--version
~~~~~~~~~
*Global*
This option displays the version number of the components that make up
the executable (and then exits).
[id="help-configs_flag"]
--help-configs
~~~~~~~~~~~~~~
*Global*
Some help on the various configurations of Freecell Solver.
[id="help-options_flag"]
--help-options
~~~~~~~~~~~~~~
*Global*
A help screen giving an overview of all available options.
[id="help-real-help_flag"]
--help-real-help
~~~~~~~~~~~~~~~~
*Global*
Explains how to change the default help screen to a different one.
[id="help-short-sol_flag"]
--help-short-sol
~~~~~~~~~~~~~~~~
*Global*
How to generate shorter solutions.
[id="help-summary_flag"]
--help-summary
~~~~~~~~~~~~~~
*Global*
The default help screen.
[id="output_options"]
Output Options
--------------
[id="parseable-output_flag"]
-p , --parseable-output
~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option will display the columns in a format that can be more easily
manipulated by text-processing programs such as grep or perl. Namely,
the freecells will be displayed in one line, and the foundations in a
separate line. Plus, each column will be displayed horizontally in its
own line, beginning with a +:+.
[id="display-10-as-t_flag"]
-t , --display-10-as-t
~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option will display the 10 cards as a capital +T+ instead of a +10+.
Thus, the cards will be more properly aligned.
For example, here is a command line using +-p+ and +-t+:
---------------------
$ pi-make-microsoft-freecell-board 24 | fc-solve -p -t
-=-=-=-=-=-=-=-=-=-=-=-
Foundations: H-0 C-0 D-0 S-0
Freecells:
: 4C 2C 9C 8C QS 4S 2H
: 5H QH 3C AC 3H 4H QD
: QC 9S 6H 9H 3S KS 3D
: 5D 2S JC 5C JH 6D AS
: 2D KD TH TC TD 8D
: 7H JS KH TS KC 7C
: AH 5S 6S AD 8H JD
: 7S 6C 7D 4D 8S 9D
====================
Foundations: H-0 C-0 D-0 S-A
Freecells:
: 4C 2C 9C 8C QS 4S 2H
: 5H QH 3C AC 3H 4H QD
: QC 9S 6H 9H 3S KS 3D
: 5D 2S JC 5C JH 6D
: 2D KD TH TC TD 8D
: 7H JS KH TS KC 7C
: AH 5S 6S AD 8H JD
: 7S 6C 7D 4D 8S 9D
---------------------
[id="canonized-order-output_flag"]
-c , --canonized-order-output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Freecell Solver re-arranges the stacks and freecells in a given state
according to their first card. It keeps their actual position in a
separate place, but internally it uses their canonized place. Use
this option, if you want Freecell Solver to display them in that order.
One should be warned that that way the place of a given stack in the
board will not be preserved throughout the solution.
[id="display-moves_flag"]
-m , --display-moves
~~~~~~~~~~~~~~~~~~~~
*Global*
This option will display the moves instead of the intermediate states.
Each move will be displayed in a separate line, in a format that is
human-readable, but that can also be parsed and analyzed by a computer
program with some effort on the programmer's part.
For example:
----------------------
$ pi-make-microsoft-freecell-board 24 | fc-solve -m | head -30
-=-=-=-=-=-=-=-=-=-=-=-
Move a card from stack 3 to the foundations
====================
Move a card from stack 6 to freecell 0
====================
Move a card from stack 6 to freecell 1
----------------------
[id="standard-notation_flag"]
-sn , --standard-notation
~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option will display the moves in standard notation in which every
move consists of two characters and there are ten moves in a line. Naturally,
this option will only have a visible effect if the display-moves option is
also specified (it does not implicitly specify it, though).
For more information regarding standard notation refer to the following
web-page:
http://www.solitairelaboratory.com/solutioncatalog.html
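For example, +-sn+ might be combined with +-m+ like this:

----------------------
$ pi-make-microsoft-freecell-board 24 | fc-solve -m -sn
----------------------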
[id="standard-notation-extended_flag"]
-snx , --standard-notation-extended
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option is similar to the previous one, except that when a sequence
move is made to an empty stack with more than one card in the sequence,
the move will be followed with "v" and the number of cards moved in
hexadecimal.
[id="display-states-and-moves_flag"]
-sam , --display-states-and-moves
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option will display both the intermediate states and the moves that
are needed to get from one to another. The standard notation
option applies to it too.
--------------------------------
$ pi-make-microsoft-freecell-board 24 | fc-solve -sam -p -t | head -50
-=-=-=-=-=-=-=-=-=-=-=-
Foundations: H-0 C-0 D-0 S-0
Freecells:
: 4C 2C 9C 8C QS 4S 2H
: 5H QH 3C AC 3H 4H QD
: QC 9S 6H 9H 3S KS 3D
: 5D 2S JC 5C JH 6D AS
: 2D KD TH TC TD 8D
: 7H JS KH TS KC 7C
: AH 5S 6S AD 8H JD
: 7S 6C 7D 4D 8S 9D
====================
Move a card from stack 3 to the foundations
Foundations: H-0 C-0 D-0 S-A
Freecells:
: 4C 2C 9C 8C QS 4S 2H
: 5H QH 3C AC 3H 4H QD
: QC 9S 6H 9H 3S KS 3D
: 5D 2S JC 5C JH 6D
: 2D KD TH TC TD 8D
: 7H JS KH TS KC 7C
: AH 5S 6S AD 8H JD
: 7S 6C 7D 4D 8S 9D
====================
Move a card from stack 6 to freecell 0
Foundations: H-0 C-0 D-0 S-A
Freecells: JD
: 4C 2C 9C 8C QS 4S 2H
: 5H QH 3C AC 3H 4H QD
: QC 9S 6H 9H 3S KS 3D
: 5D 2S JC 5C JH 6D
: 2D KD TH TC TD 8D
: 7H JS KH TS KC 7C
: AH 5S 6S AD 8H
: 7S 6C 7D 4D 8S 9D
====================
Move a card from stack 6 to freecell 1
--------------------------------
[id="display-parent-iter_flag"]
-pi , --display-parent-iter
~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option (assuming the -s and -i options are specified) will also
display the iteration index of the state from which the current state
was derived. This is especially useful for BeFS (so-called +a-star+) or
BFS scans.
[id="output_flag"]
-o [filename] , --output [filename]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Outputs to a file instead of standard output. So for example:
----------------------
$ fc-solve -o 2405.solution.txt 2405.board
----------------------
Will write the solution of the board in +2405.board+ to the file
+2405.solution.txt+ . The same can also be done using:
----------------------
$ fc-solve --output 2405.solution.txt 2405.board
----------------------
[id="show-exceeded-limits_flag"]
-sel , --show-exceeded-limits
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option will display a different status message ("Iterations count
exceeded.") instead of "I could not solve this game." in case the iterations
count was exceeded. This is recommended because the "I could not solve this
game." message can also mean that the entire game graph was fully traversed
(within the limitations of the specified moves' types) and so no solution
is possible.
This option is not the default, to retain compatibility with previous versions
of Freecell Solver, and was added in version 3.12.0 of fc-solve.
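For example, one can combine it with a low iterations limit (+myboard.txt+
is a hypothetical board file):

----------------------
$ fc-solve -sel -mi 100 myboard.txt
----------------------

If the limit of 100 iterations is reached without a conclusion, the
"Iterations count exceeded." message will be displayed.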
[id="hint-on-intractable_flag"]
-hoi , --hint-on-intractable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Presents the moves leading to the intermediate state that was reached, if
the maximal number of iterations was reached without a conclusion
(= "intractable").
This option is not the default, to retain compatibility with previous versions
of Freecell Solver, and was added in version 4.20.0 of fc-solve.
[id="game_variants_options"]
Game Variants Options
---------------------
[id="freecells-num_flag"]
--freecells-num [Number of Freecells]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option specifies the number of freecells which are available to
the program. Freecell Solver can use any number of freecells as long as
it does not exceed its maximal number.
This maximum is hard-coded into the program, and can be specified at
compile-time by modifying the file +config.h+. See the file +INSTALL+
(or alternatively +INSTALL.html+) for details.
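For example, to try to solve a board using only two freecells (+myboard.txt+
is a hypothetical board file):

-------------------------------------
$ fc-solve --freecells-num 2 myboard.txt
-------------------------------------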
[id="stacks-num_flag"]
--stacks-num [Number of Stacks]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option specifies the number of stacks present in the board. Again,
this number cannot exceed the maximal number of stacks, which can be
specified in the file +config.h+ during compile-time of Freecell
Solver.
[id="decks-num_flag"]
--decks-num [Number of Decks]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option specifies how many decks are found in the board. This number
cannot exceed the maximal number of decks, which can be specified by the
Freecell Solver build system.
[id="sequences-are-built-by_flag"]
--sequences-are-built-by {suit|alternate_color|rank}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option specifies whether a card sequence is built by suit or by
alternate colour or by rank regardless of suit.
[id="sequence-move_flag"]
--sequence-move {limited|unlimited}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This option specifies whether the length of a sequence move is limited by
the number of vacant freecells and stacks, or unlimited.
[id="empty-stacks-filled-by_flag"]
--empty-stacks-filled-by {kings|none|all}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Specifies which cards can fill an empty stack.
[id="game_flag"]
--game [game] , --preset [game] , -g [game]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Specifies the type of game. Each preset implies several of the
settings options above and sometimes even the moves’ order below. The
default configuration is for Freecell.
Available presets:
[width="50%"]
|================================================
|+bakers_dozen+ |Baker's Dozen
|+bakers_game+ |Baker's Game
|+beleaguered_castle+ |Beleaguered Castle
|+citadel+ |Citadel
|+cruel+ |Cruel
|+der_katz+ |Der Katzenschwanz
|+die_schlange+ |Die Schlange
|+eight_off+ |Eight Off
|+fan+ |Fan
|+forecell+ |Forecell
|+freecell+ |Freecell (default)
|+good_measure+ |Good Measure
|+ko_bakers_game+ |Kings' Only Baker's Game
|+relaxed_freecell+ |Relaxed Freecell
|+relaxed_seahaven+ |Relaxed Seahaven Towers
|+seahaven+ |Seahaven Towers
|+simple_simon+ |Simple Simon
|+streets_and_alleys+ |Streets and Alleys
|================================================
Note: in order to solve Der Katzenschwanz and Die Schlange I recommend you
compile Freecell Solver with the INDIRECT_STACK_STATES option, or else it will
consume much more memory. For details consult the file INSTALL.
[id="game_flag_examples"]
Examples
~~~~~~~~
To solve PySol Eight Off game No. 1,000 type:
-----------------------
$ make_pysol_freecell_board.py 1000 eight_off | fc-solve -g eight_off
-----------------------
To solve PySol Baker's Game No. 50, type:
-----------------------
$ make_pysol_freecell_board.py 50 bakers_game | fc-solve -g bakers_game
-----------------------
If you want to solve a game similar to Freecell only with sequences built
by rank, and unlimited sequence move, do:
------------------------------------------
$ fc-solve -g freecell --sequences-are-built-by rank --sequence-move unlimited
------------------------------------------
[id="solving_algorithm_options"]
Solving Algorithm Options
-------------------------
[id="max-iters_flag"]
-mi [Iterations num] , --max-iters [Iterations num]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This parameter limits the maximal number of states to check. This will
give a rough limit on the time spent to solve a given board.
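For example, to give up after 100,000 iterations (+myboard.txt+ is a
hypothetical board file):

----------------------
$ fc-solve -mi 100000 myboard.txt
----------------------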
[id="max-depth_flag"]
-md [Maximal depth] , --max-depth [Maximal depth]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Not currently implemented*
Freecell Solver recurses into the solution. This parameter specifies a
maximal recursion depth. Generally speaking, it's not a good idea to
set it, because that way several important intermediate states may become
inaccessible.
[id="max-stored-states_flag"]
-mss [num] , --max-stored-states [num]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Limits the number of the states stored by the program in the computer's
memory. This differs from the maximal number of iterations in the sense
that a stored state may not have been checked yet.
[id="trim-max-stored-states_flag"]
-tmss [num] , --trim-max-stored-states [num]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Instance-wide*
This option also limits the number of stored states, but this time the
program will try to trim them once the limit has been reached (which is
time-consuming and may cause states to be traversed again in the future).
[id="tests-order_flag"]
-to [Moves’ Order] , --tests-order [Moves Order]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
This option specifies the order in which Freecell Solver will try the different
types of moves (formerly termed "tests") that it can perform. Each move is
specified by one character, and they are performed in the order in which they
appear in the parameter string. You can omit moves by not including their
corresponding characters in the string.
The moves along with their characters are:
[width="80%",cols="1,10"]
|====================================================
2+|Freecell Moves:
|'0' | put top stack cards in the foundations.
|'1' | put freecell cards in the foundations.
|'2' | put freecell cards on top of stacks.
|'3' | put non-top stack cards in the foundations.
|'4' | move stack cards to different stacks.
|'5' | move stack cards to a parent card on the same stack.
|'6' | move sequences of cards onto free stacks.
|'7' | put freecell cards on empty stacks.
|'8' | move cards to a different parent.
|'9' | empty an entire stack into the freecells.
|'j' | put freecell cards on empty stacks and right away put cards on top.
2+|Atomic Freecell Moves:
|'A' | move a stack card to an empty stack.
|'B' | move a stack card to a parent on a different stack.
|'C' | move a stack card to a freecell.
|'D' | move a freecell card to a parent.
|'E' | move a freecell card to an empty stack.
2+|Simple Simon Moves:
|'a' | move a full sequence to the foundations.
|'b' | move a sequence to its true parent.
|'c' | move a whole stack sequence to a false parent (in order to clear the stack)
|'d' | move a sequence to a true parent that has some cards above it.
|'e' | move a sequence with some cards above it to a true parent.
|'f' | move a sequence with a junk sequence above it to a true parent that
has some cards above it.
|'g' | move a whole stack sequence to a false parent which has some
cards above it.
|'h' | move a sequence to a parent on the same stack.
|'i' | move any sequence to a false parent (using it may make the solution
much slower).
|====================================================
Manipulating the moves order can be very helpful to the quick solution
of a given board. If you find that a certain board cannot be solved
after a long time or within a certain maximal number of iterations, you
should try different moves' orders. Usually, one can find a moves order
that solves a board very quickly.
Note that this moves order usually makes sense only for the Soft-DFS
and Random DFS scans (see the +--method+ option below).
Also note that Freecell moves are not suitable for solving Simple Simon games
and Simple Simon moves are not suitable for solving anything except Simple
Simon.
Moves can be grouped together using parentheses
(e.g: "(0123)") or square brackets ("[012][3456789]"). Such grouping is
only relevant to the Random DFS scan (see below). A group may optionally
be followed by the equal sign "=" and by an ordering specifier. If one
specifies "=rand()", then the derived states will be randomised based on the
seed (which is what happens if no equal sign is specified). On the other
hand, if one specifies something like "=asw(5,0,5,0,0,5)", then the numbers
inside the parentheses will be treated as weights for the same ordering
function used by the +-asw+ flag (see below).
If the order specifier is "=all()" then all the moves in the group will
be run, even if some derived states have been yielded by earlier moves
in the group. ( This was added in version 5.24.0. )
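For example, here is a hypothetical +random-dfs+ invocation that randomises
the first four move types as one seed-based group (+myboard.txt+ is a
hypothetical board file):

---------------------
$ fc-solve --method random-dfs -seed 500 -to "[0123]=rand()[456789]" myboard.txt
---------------------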
[id="depth-tests-order2_flag"]
-dto2 [Min Depth],[Moves' Order] , --depth-tests-order2 [Min Depth],[Moves' Order]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
Sets the Moves' order starting from the minimal depth onwards. This way, if
a Soft-DFS scan recurses deeply into the game, it will use a different moves'
order.
Note that if you set the moves' order of a minimal depth of say 50, then it
will override all the moves' order of 50 and above. As a result, it is
recommended that you set the minimal depth moves order in an increasing
depth.
It should be noted that the +-to+ or +--tests-order+ option above is
equivalent to using this option with a minimal depth of 0.
Here are some examples:
---------------------
-to 0123456789 -dto2 30,0138924567
---------------------
This sets the moves' order to +0123456789+ for all depths below 30 and to
+0138924567+ for all depths above it.
---------------------
-to 0123457 -dto2 10,750123 -dto2 25,710235
---------------------
This sets the moves' order to +0123457+ for depths 0-9 (those below 10),
to +750123+ for depths 10-24, and to +710235+ for depths 25 onwards.
---------------------
-to 0123457 -dto2 "10,[012357]=asw(1)"
---------------------
This sorts the moves starting from 10 onward based on the asw() function.
---------------------
-to 0123457 -dto2 "10,[012357]=rand()"
---------------------
This randomises the moves from 10 onward.
---------------------
-to 0123457 -dto2 "10,[012357]"
---------------------
This does the same thing as the previous example.
*Note* : This option should be used instead of the older +-dto+ option given
below which mutilates the moves order parameter and is still provided for
backward compatibility.
[id="depth-tests-order_flag"]
-dto [Min Depth],[Moves' Order] , --depth-tests-order [Min Depth],[Moves' Order]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is equivalent to specifying +-dto2 [Min Depth],[Min Depth],[Moves' Order]+
- i.e: the "[Min Depth]," string is prefixed to the given moves order.
This option is provided for backward compatibility with older versions of
Freecell Solver.
[id="method_flag"]
-me [Solving Method] , --method [Solving Method]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
This option specifies the solving method that will be used to solve the
board. Currently, the following methods are available:
* +a-star+ - A Best-First-Search scan (not "A*" as it was once thought to be)
* +bfs+ - A Breadth-First Search (or BFS) scan
* +dfs+ - A Depth-First Search (or DFS) scan
* +random-dfs+ - A randomized DFS scan
* +patsolve+ - uses the scan of patsolve.
* +soft-dfs+ - A "soft" DFS scan
Starting from recent Freecell Solver versions there is no difference between
+dfs+ and +soft-dfs+. In earlier versions, use of +soft-dfs+ is recommended.
+random-dfs+ is similar to +soft-dfs+ only it determines to which states to
recurse into randomly. Its behaviour will differ depending on the seed you
supply to it. (see the "-seed" option below.)
BFS does not yield good results, and +a-star+ has a mixed behaviour, so for
the time being I recommend using Soft-DFS or Random-DFS.
The Random-DFS scan processes each random group of moves, randomises the
derived states that it found, and recurses into them one by one. Standalone
moves that do not belong to any group are processed in a non-random manner.
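For example (+myboard.txt+ is a hypothetical board file):

----------------------
$ fc-solve --method a-star myboard.txt
$ fc-solve --method random-dfs -seed 24 myboard.txt
----------------------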
[id="a-star-weight_flag"]
-asw [BeFS Weights] , --a-star-weight [BeFS Weights]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
Specify weights for the +a-star+ (= "Best-First Search") scan, assuming it is
used. The parameter should be a comma-separated list of numbers, each one is
proportional to the weight of its corresponding test.
The numbers are, in order:
1. The number of cards out.
2. The maximal sequence move.
3. The number of cards under sequences.
4. The length of the sequences which are found over renegade cards.
5. The depth of the board in the solution.
6. The negative of the number of cards that are not placed above their
parents. To get the irreversibility depth, give equal weight to this weight
and to the number of cards out.
The default weights are respectively: {0.5, 0, 0.3, 0, 0.2, 0}
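For example, here is a hypothetical invocation that gives most of the weight
to the number of cards out and some to the depth of the board (+myboard.txt+
is a hypothetical board file):

----------------------
$ fc-solve --method a-star -asw "0.8,0,0,0,0.2,0" myboard.txt
----------------------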
[id="seed_flag"]
-seed [Seed Number]
~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
Specifies a seed to be used by Freecell Solver's internal random number
generator. This seed may alter the behaviour and speed of the +random-dfs+
scan.
[id="set-pruning_flag"]
--set-pruning [Pruning] , -sp [Pruning]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
This option sets the pruning algorithm for the soft thread. Current valid
values are only the empty string (+""+) for no pruning and +r:tf+ (short
for "Run: to foundations") for Horne's rule. See:
https://groups.yahoo.com/neo/groups/fc-solve-discuss/conversations/topics/214
[id="optimize-solution_flag"]
-opt , --optimize-solution
~~~~~~~~~~~~~~~~~~~~~~~~~~
*Flare-wide*
This option instructs Freecell Solver to try to optimize the solution
path so it will have a smaller number of moves.
[id="optimization-tests-order_flag"]
-opt-to [moves order] , --optimization-tests-order [moves order]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Flare-wide*
This argument specifies the moves order for the optimization scan, in case
it should be different from an order that contains all the moves that were
used in all the normal scans.
[id="reparent-states_flag"]
--reparent-states
~~~~~~~~~~~~~~~~~
*Flare-wide*
This option specifies that states that were encountered whose depth in the
states graph can be improved should be reparented to the new parent. This
option can possibly make solutions shorter.
[id="calc-real-depth_flag"]
--calc-real-depth
~~~~~~~~~~~~~~~~~
*Flare-wide*
This option becomes effective only if +--reparent-states+ is specified. What
it does, is explicitly calculate the depth of the state by tracing its path
to the initial state. This may make depth consideration more accurate.
[id="patsolve-x-param_flag"]
--patsolve-x-param [pos],[value]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
Sets the patsolve’s scan X param (an integer) in position "pos" into "value".
Examples:
---------------------
--patsolve-x-param 0,5
--patsolve-x-param 2,100
---------------------
[id="patsolve-y-param_flag"]
--patsolve-y-param [pos],[value]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
Sets the patsolve Y param (a floating point number) in position "pos" into
"value".
Examples:
---------------------
--patsolve-y-param 0,0.5
--patsolve-y-param 1,103.2
---------------------
[id="running_several_scans_in_parallel"]
Running Several Scans in Parallel
---------------------------------
Starting from Version 2.4.0, Freecell Solver can run several scans in
parallel on the same state collection. Each scan resides in its own
"Soft Thread". By specifying several soft threads on the command line
one can create and run several task-switched scans. Once one of the scans
reaches a solution, the solution will be displayed.
[id="next-soft-thread_flag"]
-nst , --next-soft-thread
~~~~~~~~~~~~~~~~~~~~~~~~~
*Hard-thread-specific*
This option creates a new soft-thread and makes the following scan-specific
options initialize it. For example:
----------------------
$ fc-solve --method a-star -nst --method soft-dfs -to 0123467 myboard.txt
----------------------
will run a BeFS scan and a Soft-DFS scan with a moves order of 0123467 on
myboard.txt.
[id="soft-thread-step_flag"]
-step [Step] , --soft-thread-step [Step]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
This option will set the number of iterations with which to run the
soft thread before switching to the next one. By specifying a larger
step, one can give a certain scan a longer run-time and a higher priority.
*Note:* after some experimentation, we have concluded that the +--prelude+
option normally yields better results, but +-step+ can be used as a fallback.
[id="next-hard-thread_flag"]
-nht , --next-hard-thread
~~~~~~~~~~~~~~~~~~~~~~~~~
*Flare-wide*
This argument lets one initialize the next hard thread. If Freecell Solver was
compiled with such support, then it is possible to run each hard thread in its
own system thread. Each hard-thread contains one or more soft threads.
[id="st-name_flag"]
--st-name [soft thread name]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Soft-thread-specific*
This argument sets the name used to identify the current soft thread. This name
can later be used to construct the prelude (see below).
[id="prelude_flag"]
--prelude [\i1@st1{,\i2@st2{,\i3@st3...}}]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Hard-thread-specific*
Sets the prelude for the hard thread. At the beginning of the search, the
hard thread plays a static sequence of iterations at each of the soft threads
specified in the prelude, for the number of iterations specified.
For example, if you had three soft threads named "foo", "bar" and "rin", then
the following prelude:
------------
--prelude 500@foo,1590@bar,100@foo,200@rin
------------
Will run 500 iterations in "foo", then 1590 in "bar", then 100 in "foo" again,
and then 200 in "rin". After the prelude finishes, the hard thread would
run the scans one after the other in the sequence they were defined for their
step number.
[id="scans-synergy_flag"]
--scans-synergy {none|dead-end-marks}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Flare-wide*
Specifies the synergy between the various scans, or how much they cooperate
with one another. +none+ means they do not cooperate and only share
the same memory resources. +dead-end-marks+ means they try to mark states
that they have withdrawn from, and states all of whose derived states are
such, as "dead ends". This may or may not improve the speed of the solution.
[id="next-instance_flag"]
-ni , --next-instance
~~~~~~~~~~~~~~~~~~~~
*Global*
This option allows one to run two or more separate solvers one after the
other. If the first one returned an unsolvable verdict, then the second
one would run and so on. One use of it is to run an atomic moves scan
after a meta-moves scan, so we will always get an accurate verdict and
still enjoy some of the speed benefits of the meta-moves scan.
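For example, here is a hypothetical invocation that first runs a meta-moves
scan and, should it fail, falls back to an atomic-moves scan (+myboard.txt+
is a hypothetical board file):

----------------------
$ fc-solve -to 0123456789 -ni -to ABCDE myboard.txt
----------------------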
[id="next-flare_flag"]
-nf , --next-flare
~~~~~~~~~~~~~~~~~~
*Instance-wide*
Each instance contains several flares. Flares are various alternative scans
that are run one after another, as specified in the +--flares-plan+ below,
or defaulting to running only the first flare (which isn't very useful). Out
of all the flares that are successful in solving a board, Freecell Solver
picks the one with the shortest solution.
[id="flare-name_flag"]
--flare-name [flare name]
~~~~~~~~~~~~~~~~~~~~~~~~~
*Flare-wide*
This is a name that identifies the flare for use in the flares' plan.
[id="flares-plan_flag"]
--flares-plan [flare plan]
~~~~~~~~~~~~~~~~~~~~~~~~~~
*Instance-wide*
This instance-wide parameter gives a plan for the flares as a big string. Here
are some examples:
------------
--flares-plan "RunIndef:FlareyFlare"
------------
This plan will run the flare with the name +FlareyFlare+ indefinitely, until it
terminates. Once a RunIndef action is encountered, the rest of the plan is
ignored.
------------
--flares-plan "Run:500@MyFlare,Run:2000@FooFlare"
------------
Runs +MyFlare+ for 500 iterations and +FooFlare+ for 2,000
iterations. Note that both flares will be run without sharing any resources
between them, and then the minimal solution out of both flares (or only out
of those that finished) will be picked. If no flares finished, then Freecell
Solver will run them both again for the same number of iterations each, until
at least one finishes (or it runs out of the iterations' limit).
------------
--flares-plan "Run:500@dfs,Run:1500@befs,CP:,Run:10000@funky"
------------
This runs the flares identified by +dfs+ and +befs+ and then checks whether a
solution was reached ("CP:" stands for *"checkpoint"*), and if so yields it.
If both flares did not reach a solution yet, or failed to solve the board, it
will run the flare +funky+ for 10,000 iterations and yield its solution. As in
the previous case, this plan will loop after it ends, for as long as
no flare solved the board and the program did not run out of iterations.
Using checkpoints one can yield a possibly sub-optimal (as far as solution
length is concerned) solution that will still solve faster than letting all
the flares run.
[id="flares-choice_flag"]
--flares-choice [choice]
~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This dictates how to choose the winning flare in case more than one yielded
a solution. Possible options are:
1. +--flares-choice fc_solve+ - the default, which picks up the solutions based
on the length of the solution in Freecell Solver's moves.
2. +--flares-choice fcpro+ - picks up the shortest solution based on the
number of Freecell Pro moves, while not considering implicit moves to the
foundations using Horne's Prune / Raymond Prune.
[id="flares-iters-factor_flag"]
-fif [factor] , --flares-iters-factor [factor]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Sets a global, floating-point number, factor to multiply all the iterations
counts in the flares plans. The higher it is, the longer the scans will take,
but there is a greater chance more of them will succeed, and, as a result,
the solution may be shorter.
As an example, the following:
------------
--flares-plan "Run:500@MyFlare,Run:2000@FooFlare" --flares-iters-factor 2
------------
Is equivalent to:
------------
--flares-plan "Run:1000@MyFlare,Run:4000@FooFlare"
------------
while:
------------
--flares-plan "Run:500@MyFlare,Run:2000@FooFlare" --flares-iters-factor 0.5
------------
Is equivalent to:
------------
--flares-plan "Run:250@MyFlare,Run:1000@FooFlare"
------------
[id="cache-limit_flag"]
--cache-limit [cache limit]
~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
This is a numeric limit to the LRU cache which only matters if Freecell
Solver was compiled with +FCS_RCS_STATES+ enabled. This value should be
a positive integer and the higher it is, the more quickly it is likely
that Freecell Solver will run, but it will also consume more memory. (The
entire point of +FCS_RCS_STATES+ is to conserve memory).
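For example, assuming an +fc-solve+ executable that was compiled with
+FCS_RCS_STATES+ enabled (+myboard.txt+ is a hypothetical board file):

----------------------
$ fc-solve --cache-limit 1000000 myboard.txt
----------------------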
[id="meta-options"]
Meta-Options
------------
[id="reset_flag"]
--reset
~~~~~~~
*Global*
This option resets the program to its initial state, losing all the
configuration logic that was input to it up to that point. Afterwards,
it can be set to a different configuration, again.
[id="read-from-file_flag"]
--read-from-file [num_skip,]filename
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global* (but context-specific).
This option will read the configuration options from a file. The format
of the file is similar to that used by the UNIX Bourne Shell. (i.e:
spaces denote separate arguments, double-quotes encompass arguments,
backslash escapes characters).
The filename can be preceded by an optional number of arguments to
skip, followed by a comma (the default is 0).
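For example, one might place some options in a file and read them back
(both filenames here are hypothetical):

----------------------
$ cat my-scan.conf
--method soft-dfs -to 0123456789
$ fc-solve --read-from-file 0,my-scan.conf myboard.txt
----------------------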
[id="load-config_flag"]
-l [preset] , --load-config [preset]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Global* (but context-specific).
Reads the configuration specified by [preset] and configures the solver
accordingly. A preset is a set of command line arguments to be analyzed
in the place of this option. They are read from a set of presetrc files:
one installed system-wide, the other at $HOME/.freecell-solver/presetrc,
and the third at the path specified by the FREECELL_SOLVER_PRESETRC
environment variable. You can add more presets at any of these places.
(refer to http://groups.yahoo.com/group/fc-solve-discuss/message/403
for information about their format)
Presets that are shipped with Freecell Solver:
[cols="20%,80%"]
|====================================================
|+abra-kadabra+ |a meta-moves preset
|+amateur-star+ |a meta-moves preset that yields solutions
faster on average than +three-eighty+.
|+blue-yonder+ |a meta-moves preset generated by a
quota optimization algorithm.
|+children-playing-ball+ |a meta-moves and flare-based preset that tends
to yield very short solutions, but is very slow (solves only 3 boards per
second on a Pentium 4 2.4GHz).
|+conspiracy-theory+ |a meta-moves preset that yields solutions
faster on average than +amateur-star+.
|+cookie-monster+ |a meta-moves preset that yields solutions
faster on average than +one-big-family+.
|+cool-jives+ |a meta-moves preset
|+crooked-nose+ |an atomic-moves preset (guarantees an
accurate verdict)
|+enlightened-ostrich+ |a meta-moves preset (that depends on Freecell
Solver 3.4.0 and above) that yields solutions faster on average than
+foss-nessy+.
|+fools-gold+ |an atomic-moves preset
|+foss-nessy+ |a meta-moves preset (that depends on Freecell
Solver 3.2.0 and above) that yields solutions faster on average than
+the-iglu-cabal+.
|+good-intentions+ |runs "cool-jives" and then "fools-gold"
|+gooey-unknown-thing+ |a meta-moves preset that aims to minimise
the outcome solution's length.
|+hello-world+ |a meta-moves preset
|+john-galt-line+ |a meta-moves preset
|+looking-glass+ |a meta-moves preset that yields solutions
faster on average than +cookie-monster+.
|+maliciously-obscure+ |a meta-moves and flare-based preset that tends
to yield very short solutions (even in comparison to +children-playing-ball+
) but is slow.
|+micro-finance+ |a meta-moves and flare-based preset that tends
to yield very short solutions (even in comparison to +maliciously-obscure+
) but is even slower.
|+micro-finance-improved+ |a meta-moves and flare-based preset, based
on +micro-finance+ that yields somewhat shorter solutions on average, and
should not be slower.
|+one-big-family+ |a meta-moves preset that yields solutions
faster on average than +conspiracy-theory+.
|+qualified-seed+ |a meta-moves and flare-based preset, based
on +micro-finance-improved+ that yields somewhat shorter solutions on average,
and should not be slower.
|+qualified-seed-improved+ |+qualified-seed+ with +-fif 5+ and
+--flares-choice fcpro+
|+rin-tin-tin+ |a meta-moves preset
|+sand-stone+ |an atomic-moves preset that aims to
minimise the outcome solution's length.
|+slick-rock+ |runs "gooey-unknown-thing" and then "sand-stone"
|+sentient-pearls+ |a meta-moves and flares based preset with
short solutions. Much faster than +children-playing-ball+ but yields less
optimal solutions.
|+tea-for-two+ |a meta-moves preset optimized for
two-freecells' Freecell games (although it can work on other Freecell-like
games as well).
|+the-iglu-cabal+ |a meta-moves preset that yields faster
solutions on average than +blue-yonder+.
|+the-last-mohican+ |a preset for solving Simple Simon. Yields
less false negatives than the default one, but might be slower.
|+three-eighty+ |a meta-moves preset (that depends on Freecell
Solver 3.4.0 and above) that yields solutions faster on average than
+enlightened-ostrich+.
|+toons-for-twenty-somethings+ |an atomic-moves preset that solves
more boards efficiently than "fools-gold".
|+video-editing+ |a meta-moves and flare-based preset, based
on +qualified-seed+ that yields shorter solutions on average, but may be
somewhat slower. Named to commemorate the earlier work of
http://en.wikipedia.org/wiki/Adrian_Ettlinger[Adrian Ettlinger (1925-2013)]
who later contributed to Freecell Solver and to Freecell research.
|+yellow-brick-road+ |a meta-moves preset
|====================================================
They can be abbreviated into their lowercase acronym (i.e: "ak" or "rtt").
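For example, the following two invocations are equivalent (+myboard.txt+
is a hypothetical board file):

----------------------
$ fc-solve -l abra-kadabra myboard.txt
$ fc-solve -l ak myboard.txt
----------------------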
[id="run-time-display-options"]
Run-time Display Options
------------------------
[id="iter-output_flag"]
-i , --iter-output
~~~~~~~~~~~~~~~~~~
*Global*
This option tells fc-solve to print the iteration number and the
recursion depth of every state which is checked, to the standard
output. It's a good way to keep track of how it's doing, but the output
slows it down a bit.
[id="iter-output-step_flag"]
--iter-output-step [step]
~~~~~~~~~~~~~~~~~~~~~~~~~
*Global*
Prints the current iteration if +-i+ is specified, only every +[step]+
steps, where +[step]+ is a positive integer. For example, if you do
+fc-solve -i --iter-output-step 100+, you will see this:
------------------------
Iteration: 0
Iteration: 100
Iteration: 200
Iteration: 300
------------------------
This option was added in Freecell Solver 4.20.0 and is useful for speeding
up the run-time process by avoiding excessive output.
[id="state-output_flag"]
-s , --state-output
~~~~~~~~~~~~~~~~~~~
*Global*
This option implies -i. If specified, this option outputs the cards and
formation of the board itself, for every state that is checked.
"fc-solve -s" yields a nice real-time display of the progress of
Freecell Solver, but you usually cannot make out what is going on because
it is so fast.
[id="signal_combinations"]
Signal Combinations
-------------------
If you are working on a UNIX or a similar system, then you can set some
run-time options in "fc-solve" by sending it some signal
combinations.
If you send fc-solve a single ABRT signal, then fc-solve will terminate
the scan prematurely, and report that the iterations limit has been
exceeded.
If you send the signal USR1, without sending any other signals before
that, then +fc-solve+ will output the present number of
iterations. This method is a good way to monitor an instance that takes
a long time to solve.
If you send it the signal USR2 and then USR1, then +fc-solve+
will print the iteration number and depth on every state that it
checks. It is the equivalent of specifying (or unspecifying) the
option -i/--iter-output.
If you send it two USR2 signals and then USR1, then +fc-solve+
will also print the board of every state. Again, this will only be done
assuming the iteration output is turned on.
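The USR1 mechanism can be tried out without waiting on a long-running solve by using a stand-in script. The sketch below is illustrative only (the fake solver script and temp-file paths are assumptions, not part of fc-solve); it mimics the "print the iteration count on USR1" behaviour with a shell trap:

```shell
#!/bin/sh
# Stand-in for fc-solve: counts "iterations" and reports them on USR1.
workdir=$(mktemp -d)
cat > "$workdir/fake-solver.sh" <<'EOF'
#!/bin/sh
iters=0
trap 'echo "Iteration: $iters"' USR1
while :; do
    iters=$((iters + 1))
    sleep 0.1
done
EOF
sh "$workdir/fake-solver.sh" > "$workdir/out.txt" &
pid=$!
sleep 1                         # let it "iterate" for a while
kill -USR1 "$pid"               # ask for the current iteration count
sleep 1                         # give the trap time to fire
kill "$pid"
wait "$pid" 2>/dev/null || true
cat "$workdir/out.txt"          # e.g. "Iteration: 9"
```

The same pattern applies to the USR2/USR1 combinations: the process keeps state about which signals it has seen and changes what it prints accordingly.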
| 32.38561 | 82 | 0.672899 |
d8d775f1f8a53ac9452f70200d081ab61e7221c4 | 1,284 | adoc | AsciiDoc | src/doc/fragments/task-guide/build-tasks.adoc | vdmeer/skb-framework | 2fe7e0b163654967dea70317c2153517d80049ba | [
"Apache-2.0"
] | null | null | null | src/doc/fragments/task-guide/build-tasks.adoc | vdmeer/skb-framework | 2fe7e0b163654967dea70317c2153517d80049ba | [
"Apache-2.0"
] | 1 | 2019-05-28T22:32:40.000Z | 2019-05-28T22:40:53.000Z | src/doc/fragments/task-guide/build-tasks.adoc | vdmeer/skb-framework | 2fe7e0b163654967dea70317c2153517d80049ba | [
"Apache-2.0"
] | null | null | null | //
// ============LICENSE_START=======================================================
// Copyright (C) 2018-2019 Sven van der Meer. All rights reserved.
// ================================================================================
// This file is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License
// Full license text at https://creativecommons.org/licenses/by-sa/4.0/legalcode
//
// SPDX-License-Identifier: CC-BY-SA-4.0
// ============LICENSE_END=========================================================
//
// @author Sven van der Meer (vdmeer.sven@mykolab.com)
// @version 0.0.5
//
== Build Tasks
This category of tasks either _builds_ artifacts or _compiles_ source files to create artifacts.
Those tasks should be available in the application mode _build_, but not in _use_.
They might be available in the application mode _dev_ if required.
The exception to this general rule is the task `build-manual`,
since it can be used to build an application-mode-specific manual and might thus be required in all application modes.
By convention, all _build_ and _compile_ tasks should provide an argument `-c` or `--clean`.
This argument should clean (remove) all built or compiled artifacts (and directories if applicable).
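A task script can honor this convention with a small argument loop. The sketch below is a hedged illustration only (the function name and the echoed messages are stand-ins for the framework's real build and cleanup steps, which are not shown in this guide):

```shell
#!/bin/sh
# Minimal skeleton for a build/compile task honoring -c / --clean.
build_task() {
    clean=false
    for arg in "$@"; do
        case "$arg" in
            -c|--clean) clean=true ;;
        esac
    done
    if [ "$clean" = true ]; then
        # stand-in for removing built/compiled artifacts and directories
        echo "removing built artifacts"
    else
        echo "building artifacts"
    fi
}

build_task --clean   # prints "removing built artifacts"
build_task           # prints "building artifacts"
```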
| 51.36 | 122 | 0.634735 |
422dd12d4c2a47611812af9481f48553e4c715c6 | 1,643 | adoc | AsciiDoc | specification/sources/chapters/extensions/htc/htc_vive_focus3_controller_interaction.adoc | srivers8424/OpenXR-Docs | 1120eafa2522f84ce74d2ce01defe35226061a5b | [
"MIT",
"BSD-3-Clause",
"Apache-2.0",
"Unlicense"
] | 1 | 2022-03-29T03:19:28.000Z | 2022-03-29T03:19:28.000Z | specification/sources/chapters/extensions/htc/htc_vive_focus3_controller_interaction.adoc | srivers8424/OpenXR-Docs | 1120eafa2522f84ce74d2ce01defe35226061a5b | [
"MIT",
"BSD-3-Clause",
"Apache-2.0",
"Unlicense"
] | null | null | null | specification/sources/chapters/extensions/htc/htc_vive_focus3_controller_interaction.adoc | srivers8424/OpenXR-Docs | 1120eafa2522f84ce74d2ce01defe35226061a5b | [
"MIT",
"BSD-3-Clause",
"Apache-2.0",
"Unlicense"
] | null | null | null | // Copyright (c) 2020 HTC
//
// SPDX-License-Identifier: CC-BY-4.0
include::../meta/XR_HTC_vive_focus3_controller_interaction.adoc[]
*Last Modified Date*::
2022-01-03
*IP Status*::
No known IP claims.
*Contributors*::
Ria Hsu, HTC
*Overview*
This extension defines a new interaction profile for the VIVE Focus 3
Controller.
*VIVE Focus 3 Controller interaction profile*
Interaction profile path:
* pathname:/interaction_profiles/htc/vive_focus3_controller
Valid for user paths:
* pathname:/user/hand/left
* pathname:/user/hand/right
This interaction profile represents the input sources and haptics on the
VIVE Focus 3 Controller.
Supported component paths:
* On pathname:/user/hand/left only:
** subpathname:/input/x/click
** subpathname:/input/y/click
** subpathname:/input/menu/click
* On pathname:/user/hand/right only:
** subpathname:/input/a/click
** subpathname:/input/b/click
** subpathname:/input/system/click (may: not be available for application
use)
* subpathname:/input/squeeze/click
* subpathname:/input/squeeze/touch
* subpathname:/input/trigger/click
* subpathname:/input/trigger/touch
* subpathname:/input/trigger/value
* subpathname:/input/thumbstick/x
* subpathname:/input/thumbstick/y
* subpathname:/input/thumbstick/click
* subpathname:/input/thumbstick/touch
* subpathname:/input/thumbrest/touch
* subpathname:/input/grip/pose
* subpathname:/input/aim/pose
* subpathname:/output/haptic
*New Object Types*
*New Flag Types*
*New Enum Constants*
*New Enums*
*New Structures*
*New Functions*
*Issues*
*Version History*
* Revision 1, 2022-01-03 (Ria Hsu)
** Initial extension description
| 21.337662 | 73 | 0.759586 |
6f5d79db71a51202a36f4fdbdc36a60add3b0204 | 27,025 | adoc | AsciiDoc | docs/DeveloperGuide.adoc | Kangwkk/main | 638c32019d445a4a0023c81e2c13f5d529c95994 | [
"MIT"
] | null | null | null | docs/DeveloperGuide.adoc | Kangwkk/main | 638c32019d445a4a0023c81e2c13f5d529c95994 | [
"MIT"
] | null | null | null | docs/DeveloperGuide.adoc | Kangwkk/main | 638c32019d445a4a0023c81e2c13f5d529c95994 | [
"MIT"
] | null | null | null | = NUSProductivity - Developer Guide
:site-section: DeveloperGuide
:toc:
:toc-title:
:toc-placement: preamble
:sectnums:
:imagesDir: images
:stylesDir: stylesheets
:xrefstyle: full
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:warning-caption: :warning:
endif::[]
:repoURL: https://github.com/AY1920S2-CS2103T-W16-4/main
By: `CS2103T-W16-4` Since: `Feb 2020` Licence: `MIT`
== Setting up
Refer to the guide <<SettingUp#, here>>.
== Design
[[Design-Architecture]]
=== Architecture
.Architecture Diagram
image::ArchitectureDiagram.png[]
The *_Architecture Diagram_* given above explains the high-level design of the App. Given below is a quick overview of each component.
[TIP]
The `.puml` files used to create diagrams in this document can be found in the link:{repoURL}/docs/diagrams/[diagrams] folder.
Refer to the <<UsingPlantUml#, Using PlantUML guide>> to learn how to create and edit diagrams.
`Main` has two classes called link:{repoURL}/src/main/java/seedu/address/Main.java[`Main`] and link:{repoURL}/src/main/java/seedu/address/MainApp.java[`MainApp`]. It is responsible for,
* At app launch: Initializes the components in the correct sequence, and connects them up with each other.
* At shut down: Shuts down the components and invokes cleanup method where necessary.
<<Design-Commons,*`Commons`*>> represents a collection of classes used by multiple other components.
The following class plays an important role at the architecture level:
* `LogsCenter` : Used by many classes to write log messages to the App's log file.
The rest of the App consists of four components.
* <<Design-Ui,*`UI`*>>: The UI of the App.
* <<Design-Logic,*`Logic`*>>: The command executor.
* <<Design-Model,*`Model`*>>: Holds the data of the App in-memory.
* <<Design-Storage,*`Storage`*>>: Reads data from, and writes data to, the hard disk.
Each of the four components
* Defines its _API_ in an `interface` with the same name as the Component.
* Exposes its functionality using a `{Component Name}Manager` class.
For example, the `Logic` component (see the class diagram given below) defines its API in the `Logic.java` interface and exposes its functionality using the `LogicManager.java` class.
.Class Diagram of the Logic Component
image::LogicClassDiagram.png[width="790"]
[discrete]
==== How the architecture components interact with each other
The _Sequence Diagram_ below shows how the components interact with each other for the scenario where the user issues the command `moduleAdd m/CS2103T`.
// tag::UIDiagram[]
.Component interactions for `moduleAdd m/CS2103T` command
image::ArchitectureSequenceDiagram.png[width="790"]
The sections below give more details of each component.
[[Design-Ui]]
=== UI component
.Structure of the UI Component
image::UiClassDiagram.png[width="790"]
// end::UIDiagram[]
*API* : link:{repoURL}/src/main/java/seedu/address/ui/Ui.java[`Ui.java`]
The UI consists of a `MainWindow` that is made up of parts e.g.`CommandBox`, `ResultDisplay`, `CalendarListPanel`, `StatusBarFooter` etc. All these, including the `MainWindow`, inherit from the abstract `UiPart` class.
The `UI` component uses JavaFx UI framework. The layout of these UI parts are defined in matching `.fxml` files that are in the `src/main/resources/view` folder. For example, the layout of the link:{repoURL}/src/main/java/seedu/address/ui/MainWindow.java[`MainWindow`] is specified in link:{repoURL}/src/main/resources/view/MainWindow.fxml[`MainWindow.fxml`]
The `UI` component,
* Executes user commands using the `Logic` component.
* Listens for changes to `Model` data so that the UI can be updated with the modified data.
[[Design-Logic]]
=== Logic component
[[fig-LogicClassDiagram]]
.Structure of the Logic Component
image::LogicClassDiagram.png[width="790"]
*API* :
link:{repoURL}/src/main/java/seedu/address/logic/Logic.java[`Logic.java`]
. `Logic` uses the `AddressBookParser` class to parse the user command.
. This results in a `Command` object which is executed by the `LogicManager`.
. The command execution can affect the `Model` (e.g. adding a person).
. The result of the command execution is encapsulated as a `CommandResult` object which is passed back to the `Ui`.
. In addition, the `CommandResult` object can also instruct the `Ui` to perform certain actions, such as displaying help to the user.
Given below is the Sequence Diagram for interactions within the `Logic` component for the `execute("delete 1")` API call.
.Interactions Inside the Logic Component for the `delete 1` Command
image::DeleteSequenceDiagram.png[width="790"]
NOTE: The lifeline for `DeleteCommandParser` should end at the destroy marker (X) but due to a limitation of PlantUML, the lifeline reaches the end of diagram.
[[Design-Model]]
=== Model component
.Part of the Model Component
image::ModelClassDiagram.png[width="790"]
*API* : link:{repoURL}/src/main/java/seedu/address/model/Model.java[`Model.java`]
The `Model`,
* stores a `UserPref` object that represents the user's preferences.
* stores the Address Book data.
* exposes an unmodifiable `ObservableList<Person>` that can be 'observed' e.g. the UI can be bound to this list so that the UI automatically updates when the data in the list change.
* does not depend on any of the other three components.
[NOTE]
As a more OOP model, we can store a `Tag` list in `Address Book`, which `Person` can reference. This would allow `Address Book` to only require one `Tag` object per unique `Tag`, instead of each `Person` needing their own `Tag` object. An example of how such a model may look like is given below. +
+
image:BetterModelClassDiagram.png[]
[[Design-Storage]]
=== Storage component
.Structure of the Storage Component
image::StorageClassDiagram.png[width="790"]
*API* : link:{repoURL}/src/main/java/seedu/address/storage/Storage.java[`Storage.java`]
The `Storage` component,
* can save `UserPref` objects in json format and read it back.
* can save the Address Book data in json format and read it back.
[[Design-Commons]]
=== Common classes
Classes used by multiple components are in the `seedu.addressbook.commons` package.
== Implementation
This section describes some noteworthy details on how certain features are implemented.
=== Module Search
image::SearchCommandUMLDiagram.png[width="790"]
*API* :
link:{repoURL}/src/main/java/seedu/address/searcher/Search.java[`Search.java`]
The Module Search function returns a `module` object that contains useful information about each module for the rest of the application to use.
The function first checks whether the information is available in the local cache and, if it isn't, pulls it from the NUSMods API.
The JSON object pulled from the web is then parsed into a `module` object.
This implementation means that a local cache of the added modules will be available even if the user is offline.
// tag::ModuleBook[]
=== Profile (Module Book) feature (Wangkai)
This profile feature allows users to manage the modules they have taken before or are taking now in NUS.
In detail, users are able to store the modules they have taken in the program, with the grade for each module stated if applicable, and
can also store tasks related to each module.
==== Implementation
- This feature is implemented using a panel on the main screen of the profile tab, with a list of modules that is updated with every command that
may affect the module list (such as add, delete or grade).
- The module book (profile) currently supports following features.
. Adds in or deletes modules and displays the list of modules in the profile tab.
. Updates the user's grades for each module and gets the CAP calculated immediately.
. Manages the tasks related to each module (module tasks) through the CLI.
. Any modification to module tasks will be updated in the Calendar tab and will also show a message on the result display panel.
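The CAP shown in the profile tab is a modular-credit-weighted average of grade points. The app computes this in Java; the awk sketch below only illustrates the arithmetic, and the modules, grades, and 5.0-scale grade points in it are hypothetical assumptions:

```shell
# CAP = sum(grade_point * MCs) / sum(MCs), on NUS's 5.0 grade-point scale.
awk 'BEGIN {
    # hypothetical record: CS2103T A- (4.5), CS2101 B+ (4.0), GER1000 A (5.0)
    points[1] = 4.5; mc[1] = 4
    points[2] = 4.0; mc[2] = 4
    points[3] = 5.0; mc[3] = 4
    for (i = 1; i <= 3; i++) { num += points[i] * mc[i]; den += mc[i] }
    printf "CAP = %.2f\n", num / den
}'
# prints "CAP = 4.50"
```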
.Class diagram of structure and relations of NusModule, ModuleBook and relevant classes.
image::NusModuleClassDiagram.png[width="790"]
- As shown in the class diagram above, modules are created by a class called `NusModule`. Every instance of `NusModule` contains a `ModuleCode` object, a
`Grade` object (optional) and a list of `ModuleTask` objects.
[NOTE]
The module book only accepts modules that are provided in NUS and will check whether the module code the user wants to add is valid by using the search feature mentioned above.
- All possible actions mentioned above such as creating modules, deleting modules and adding tasks to modules are implemented through
the `ModuleBook` class.
- The program will automatically save any modification to module book after each command is executed by calling the `saveModuleBook` method
in `Storage`.
- For example, modules are created with the `moduleAdd` command, followed by the module code and the grade (if applicable). +
Our program will check whether the module code is valid (using the search feature above) and whether the module has already been added to our module book.
And then call method `addModule` in `ModuleBook` to create the module as required. Finally, it will automatically save the module added just now.
- The _Sequence Diagram_ below shows how the components interact with each other for the scenario where the user wants to add a module in our program.
.Sequence diagram when moduleAdd command is executed
image::ModuleAddSequenceDiagram.png[width="950"]
.Relation between ModuleBook and Task
image::PartOfModelClassDiagramForProfile.png[width="400"]
- The program will synchronize the modification to module tasks in `ModuleBook` with that shown in Calendar tab through `ModelManager` as shown above.
i.e. any modification to module tasks will be updated in `Task`, the main class the Calendar feature depends on (see more details in the Calendar feature).
==== Example Use Scenario
Given below are example usage scenarios and how the Module Book feature behaves at each step.
[TIP]
User can manage their tasks in different ways.
*Example 1*: +
. The user execute `listModuleTasks` command.
. The program checks whether the module code provided has been recorded or not.
. Display the list of tasks.
Below is an activity diagram describing the events that will happen:
.Activity diagram for list module tasks command
image::ListModuleTasksActivityDiagram.png[width="790"]
*Example 2*: +
. The user execute `done` command.
. The program checks whether the input is valid or not.
. The task specified will be deleted accordingly.
. Synchronize between module book and calendar.
Below is an activity diagram describing the events that will happen:
.Activity diagram for done command
image::DoneCommandActivityDiagram.png[width="850"]
==== Design Considerations
*Aspect 1*: How the user add in a module into module book for future management ?
- *Current solution*: The user only needs to provide the module code to add a module, and the program fetches the information about the module automatically using the Module Search feature.
* *Pros*: Users don't need to provide any other information (such as modular credit of the module) for other functionality such as calculating the CAP.
* *Pros*: The module information will be cached locally after you add the module once, and this can be used for future development.
* *Cons*: Need Internet connection when you add in certain module for the first time.
* *Cons*: Depends heavily on the Module Search feature.
- *Alternative Solution*: Let the user enter all information required for each module when they add it in. (such as modular credit)
* *Pros*: More flexible, not depends on other features.
* *Cons*: Very tedious for users to add in lots of modules.
* *Cons*: Need to ask user to provide new information when more new functionality is added in the future.
*Reason for chosen implementation*: +
The current implementation is much more user friendly and has more potential for future development. The implementation works well
as long as the Module Search feature works properly.
*Aspect 2*: How the user manage their tasks for each module?
- *Current solution*: For each module added, it contains a list of `ModuleTask`. Also update the calendar when add task in.
* *Pros*: Users can either view their tasks for each module separately or view all the tasks shown in Calendar tab.
* *Pros*: Nicer-looking: the user can view all the deadlines on the calendar.
* *Cons*: Prone to bugs during the synchronization of module book and calendar.
- *Alternative solution*: Only store the list of `ModuleTask` in module book and do not update in Calendar tab.
* *Pros*: Easier to implement and can avoid some synchronization bugs.
* *Cons*: Users can not gain a view of the whole pictures with all tasks shown on calendar.
*Reason for chosen implementation*: +
The current implementation updates the module tasks onto the calendar and provides users with different ways
to manage their tasks (as a whole, or separately for each module).
// end::ModuleBook[]
// tag::Xinwei[]
=== Calendar Feature (Xinwei)
NUSProductivity consists of a calendar feature that provides an overarching view of all tasks, allowing users to view their uncompleted tasks and see whether a task is present on any given date.
The calendar feature allows users to add either a `deadline` or a `Module Task` to the calendar, both of which inherit from the super class `Task`.
==== Implementation
The implementation of the main Calendar tab is facilitated by a `SplitPane` in the MainWindow class consisting of 2 main classes, `CalenderListPanel` and `CalenderPanel`.
The `CalenderListPanel` on the right contains a list of `Task` objects added to the calendar, while the `CalenderPanel` shows the actual calendar view for the current month.
.Calender UI Class Diagram
image::CalenderUIClassDiagram.png[]
Upon initialisation of `CalenderPanel`, the `CalenderPanel` calls its two methods `setMonth()` and `setDate()` to create `CalenderDate` instances starting from the first day of the current month.
Then, upon initialisation of CalenderListPanel, it will create instances of `CalenderDeadline` by getting the `ObservableList<Task>` from `getDeadlineTaskList`.
This will call upon the inner class of `CalenderListPanel`, whose `DeadlineListViewCell updateItem` method allows the program to check whether there is any deadline due on any of the dates in `calenderDatesArrayList`.
If a `deadline` or a `Module Task` is found, `setPriorityColour()` and `setStatusColour()` will be invoked to update the Calendar display to change the colour of the dots based on the priority levels mentioned in the User Guide.
Every time a `Task` is modified, the `DeadlineListViewCell updateItem` method will be invoked to update any changes to the display.
==== Implementation logic
* Implementation: both `deadline` and `Module Task` inherit from the super class `Task`. A task is created when the `moduleTask` or `deadlineAdd` command is invoked.
* The _Sequence Diagram_ below shows how the components interact with each other for the scenario where the user wants to add a task to the program.
.Add task sequence diagram
image::AddTaskSequenceDiagram.png[]
The `addDeadlineTask` method will modify the `ObservableList<Task>` supplied to the `CalenderListPanel`, invoking the `updateItem` method, causing a change in the user display.
All other calendar functions work similarly to `addDeadlineTask`, as shown in the activity diagram below.
.Calendar Activity Diagram
image::CalendarActivityDiagram.png[]
==== Design Considerations:
Aspect 1: Method of displaying the dot indicator
* *Current solution*: Currently, the dot is shown by getting the `static HashMap` from `Task`, as this `HashMap` stores key-value pairs of date to tasks.
* By making changes to the `deadlineTaskList`, we also edit the `HashMap`. This allows every `updateItem` call to check whether a task is present and, if so, the priority of the task.
* *Alternative 1*: Store all tasks of the current date in `CalendarDate`.
** Pros: Allows for tasks to be accessed locally and not through a static variable from the main class `Task`.
** Cons: Implementation may be more complex as more parameters have to be passed to `CalenderDate` and also ensuring that the list of task passed in `CalendarDate` is up to date.
**Reason for chosen implementation:**
The current solution is easier to implement, as everything is done in the relevant functions such as `deadlineAdd` or `taskDelete`. The only thing the program needs to check is whether a date in the `HashMap` contains a task and, if so, the priority of the task. With the alternative implementation, we would need to pass in a `List` for each of the 31 dates, which may be very troublesome to keep track of, especially when editing the main task list. This ease of implementation is the deciding factor when choosing which method to implement.
=== Notes Feature (Xinwei)
==== Implementation
The notes feature allow users to access their desktop files and folders with commands.
This feature is implemented using a panel on the main window that lists the documents and folders in the specified directory.
Notes features includes `notesOpen`, `notesCreate`, `notesDelete` and `notesList`.
The diagram below shows the sequence diagram of a `notesOpen` command with the other methods working similarly to the stated method.
.Notes Open Sequence Diagram
image::notesOpenSequenceDiagram.png[width="600"]
.Notes List Activity Diagram
image::NotesList.png[width="600"]
.Notes Open Activity Diagram
image::NotesCreation.png[width="600"]
The notesCreate and notesDelete activity diagrams are similar to the notesOpen one.
==== Pathing
Our program allows the user to specify different pathing system, namely:
1. AbsolutePath
2. RelativePath
.Notes Pathing Diagram
image::absVSrel.png[width="600"]
AbsolutePath takes the full path, starting from `usr/`.
RelativePath is resolved against the folder the program currently has open, in this case `usr/Desktop/NUS Y2S2`.
The user is given the freedom to provide either of the two forms when using the `notesOpen`, `notesCreate`, `notesDelete` and `notesList` commands.
**AbsolutePath**:
*Benefits*:
This allows for more flexibility, as the user does not need to keep note of the current directory and will be able to access any folder/document on their system.
*Cons*:
Requires much more input from the user. For example, referring to the figure above,
accessing the CS2103T file with absolute pathing requires the user to input `loc/Desktop/NUS Y2S2/CS2103T`, as opposed to `loc/CS2103T` with relative pathing.
**RelativePath**:
*Benefits*:
Easier for the user to navigate through the current folder without keying in the whole folder path.
*Cons*:
Not as flexible. Referring to the above diagram,
Accessing the *Documents* folder requires the user to input `loc/../../Documents`, which may not be as intuitive to people with no programming background.
Using `loc/Documents abs/abs` will allow the user to access any folder from anywhere.
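The difference between the two schemes can be reproduced in a plain shell session. The sketch below is illustrative only: the directory layout mirroring the figure is created under a temporary folder, and the names are assumptions, not paths the app requires.

```shell
#!/bin/sh
# Recreate the figure's layout under a throwaway root.
root=$(mktemp -d)
mkdir -p "$root/Desktop/NUS Y2S2/CS2103T" "$root/Documents"

# The program currently has "Desktop/NUS Y2S2" open:
cd "$root/Desktop/NUS Y2S2"

ls -d CS2103T                            # relative: resolved against the open folder
ls -d ../../Documents                    # relative: "../.." climbs two levels
ls -d "$root/Desktop/NUS Y2S2/CS2103T"   # absolute: full path from the root
```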
// end::Xinwei[]
=== Diary
- `diaryAdd` and `diaryLog` extend the `Command` class
- `DiaryEntry` is another model which contains:
* Diary Entry
* Date
* Weather
* Mood
=== Logging
We are using `java.util.logging` package for logging. The `LogsCenter` class is used to manage the logging levels and logging destinations.
* The logging level can be controlled using the `logLevel` setting in the configuration file (See <<Implementation-Configuration>>)
* The `Logger` for a class can be obtained using `LogsCenter.getLogger(Class)` which will log messages according to the specified logging level
* Currently log messages are output through: `Console` and to a `.log` file.
*Logging Levels*
* `SEVERE` : Critical problem detected which may possibly cause the termination of the application
* `WARNING` : Can continue, but with caution
* `INFO` : Information showing the noteworthy actions by the App
* `FINE` : Details that is not usually noteworthy but may be useful in debugging e.g. print the actual list instead of just its size
[[Implementation-Configuration]]
=== Configuration
Certain properties of the application can be controlled (e.g user prefs file location, logging level) through the configuration file (default: `config.json`).
== Documentation
Refer to the guide <<Documentation#, here>>.
== Testing
Refer to the guide <<Testing#, here>>.
== Dev Ops
Refer to the guide <<DevOps#, here>>.
[appendix]
== Product Scope
*Target user profile*:
* has a need to manage a significant number of contacts
* has a need to manage deadlines and tasks
* has a need to manage module planning
* prefer desktop apps over other types
* prefers typing over mouse input
* prefers to have everything in one app
* can type fast
* is reasonably comfortable using CLI apps
* is studying in NUS
*Value proposition*: manage contacts faster than a typical mouse/GUI driven app
[appendix]
== User Stories
Priorities: High (must have) - `* * \*`, Medium (nice to have) - `* \*`, Low (unlikely to have) - `*`
|=======================================================================
|Priority |As a ... |I want to ... |So that I can...
|`* * *` |new user |see usage instructions |refer to instructions when I forget how to use the App
|`* * *` |user |add a new person |
|`* * *` |user |delete a person |remove entries that I no longer need
|`* * *` |user |find a person by name |locate details of persons without having to go through the entire list
|`* * *` |user who wants to improve time management |add deadlines |know when to complete tasks in todo list
|`* * *` |user |add event |know when and where is the event and who will going to participate in the event
|`* * *` |NUS student|add module to module plan |see modules I need to take to fulfill degree requirements
|`* * *` |NUS student|show module plan |see list of modules I need to take/have taken
|`* * *` |NUS student|write and save notes for each module I have taken/am taking |
|`* * *` |NUS student|write diaries for each day's summary |refer back to what I have done in the future
|`* *` |user |hide <<private-contact-detail,private contact details>> by default |minimize chance of someone else seeing them by accident
|`* *` |user |delete diary entry |
|`* *` |user |show diary entry list |
|`* *` |user |delete module from module plan |know which modules I have taken
|`* *` |NUS student|fetch module information |
|`* *` |NUS student|know current CAP |
|`* *` |user who wants to improve grades |calculate target CAP |know what grades to aim for to achieve my target CAP
|`* *` |user |sort deadlines |prioritize which tasks to finish first
|`* *` |user who has a short memory span |receive reminders about the deadlines |don't miss out any important tasks
|`*` |user with many persons in the address book |sort persons by name |locate a person easily
|`*` |user |create group chats |communicate with peers in the same module
|`*` |user |tag my diary with that day's weather |
|`*` |user |tag my diary with that day's emotion |I can filter my diaries with specific mood
|=======================================================================
_{More to be added}_
[appendix]
== Use Cases
(For all use cases below, the *System* is the `AddressBook` and the *Actor* is the `user`, unless specified otherwise)
[discrete]
=== Use case: Delete person
*MSS*
1. User requests to list persons
2. AddressBook shows a list of persons
3. User requests to delete a specific person in the list
4. AddressBook deletes the person
+
Use case ends.
*Extensions*
[none]
* 2a. The list is empty.
+
Use case ends.
* 3a. The given index is invalid.
+
[none]
** 3a1. AddressBook shows an error message.
+
Use case resumes at step 2.
[discrete]
=== Use case: Delete module
*MSS*
1. User requests to show module plan
2. AddressBook shows module plan
3. User requests to delete a module taken
4. AddressBook deletes module
5. AddressBook updates module plan
+
Use case ends.
*Extensions*
[none]
* 3a. The given module code is invalid.
+
[none]
** 3a1. AddressBook shows an error message.
+
Use case resumes at step 2.
_{More to be added}_
[appendix]
== Non Functional Requirements
. Should work on any <<mainstream-os,mainstream OS>> as long as it has Java `11` or above installed.
. Should be able to hold up to 1000 persons without a noticeable sluggishness in performance for typical usage.
. A user with above average typing speed for regular English text (i.e. not code, not system admin commands) should be able to accomplish most of the tasks faster using commands than using the mouse (e.g. fetch module information)
. Should respond within 2 seconds
. Should be easy to use for users who are novice at using technology
. User should be a current student in NUS
_{More to be added}_
[appendix]
== Glossary
[[mainstream-os]] Mainstream OS::
Windows, Linux, Unix, OS-X
[[private-contact-detail]] Private contact detail::
A contact detail that is not meant to be shared with others
[[NUS]]NUS::
National University of Singapore
[[CAP]]CAP::
The Cumulative Average Point is the weighted average grade point of the letter grades of all the modules taken by the students, according to NUS's grading system.
[[CLI]]CLI::
Command Line Interface
[appendix]
== Product Survey
*Product Name*
Author: ...
Pros:
* ...
* ...
Cons:
* ...
* ...
[appendix]
== Instructions for Manual Testing
Given below are instructions to test the app manually.
[NOTE]
These instructions only provide a starting point for testers to work on; testers are expected to do more _exploratory_ testing.
=== Launch and Shutdown
. Initial launch
.. Download the jar file and copy into an empty folder
.. Double-click the jar file +
Expected: Shows the GUI with a set of sample contacts. The window size may not be optimum.
. Saving window preferences
.. Resize the window to an optimum size. Move the window to a different location. Close the window.
.. Re-launch the app by double-clicking the jar file. +
Expected: The most recent window size and location is retained.
_{ more test cases ... }_
=== Deleting a person
. Deleting a person while all persons are listed
.. Prerequisites: List all persons using the `list` command. Multiple persons in the list.
.. Test case: `delete 1` +
Expected: First contact is deleted from the list. Details of the deleted contact shown in the status message. Timestamp in the status bar is updated.
.. Test case: `delete 0` +
Expected: No person is deleted. Error details shown in the status message. Status bar remains the same.
.. Other incorrect delete commands to try: `delete`, `delete x` (where x is larger than the list size) _{give more}_ +
Expected: Similar to previous.
_{ more test cases ... }_
=== Saving data
. Dealing with missing/corrupted data files
.. _{explain how to simulate a missing/corrupted file and the expected behavior}_
_{ more test cases ... }_
| 39.918759 | 550 | 0.757595 |
f35d8541b3cef8bc048c72456a43069d4ba35989 | 2,142 | adoc | AsciiDoc | README.adoc | cirosantilli/feathers-realworld-example-app | e25915ce1a03e06a0c76624ddac426981bcf59e1 | [
"MIT"
] | null | null | null | README.adoc | cirosantilli/feathers-realworld-example-app | e25915ce1a03e06a0c76624ddac426981bcf59e1 | [
"MIT"
] | null | null | null | README.adoc | cirosantilli/feathers-realworld-example-app | e25915ce1a03e06a0c76624ddac426981bcf59e1 | [
"MIT"
] | null | null | null | = RealWorld App in FeathersJS
I did not manage to get the authentication to work while porting feathers 4.5.11.
I tried copying the chat app as much as possible, but I can't get it to work, it is currently failing with:
....
[1] error: NotAuthenticated: Not authenticated
[1] at new NotAuthenticated (/home/ciro/git/feathers-realworld-example-app/node_modules/@feathersjs/errors/lib/index.js:93:17)
[1] at /home/ciro/git/feathers-realworld-example-app/node_modules/@feathersjs/authentication/lib/hooks/authenticate.js:54:19
[1] at Generator.next (<anonymous>)
[1] at /home/ciro/git/feathers-realworld-example-app/node_modules/@feathersjs/authentication/lib/hooks/authenticate.js:8:71
[1] at new Promise (<anonymous>)
[1] at __awaiter (/home/ciro/git/feathers-realworld-example-app/node_modules/@feathersjs/authentication/lib/hooks/authenticate.js:4:12)
[1] at Object.<anonymous> (/home/ciro/git/feathers-realworld-example-app/node_modules/@feathersjs/authentication/lib/hooks/authenticate.js:27:25)
[1] at /home/ciro/git/feathers-realworld-example-app/node_modules/@feathersjs/feathers/node_modules/@feathersjs/commons/lib/hooks.js:116:46
....
After you create an account and log in, this fails while fetching articles.
Run:
....
git clone --recursive https://github.com/cirosantilli/feathers-realworld-example-app
cd feathers-realworld-example-app/react-redux-realworld-example-app
npm install
cd ..
npm install
npm start
....
Forked from https://github.com/randyscotsmithey/feathers-realworld-example-app adding:
* a more direct integration with https://github.com/gothinkster/react-redux-realworld-example-app[]:
** `npm start` starts both front and backend properly linked without any further configuration
** clearer deployment instructions on how to extract the final built static components
** later on might add as well:
*** SSR
*** realtime functionality
* port from MongoDB to sequelize because:
** MongoDB has serious licencing burden, and was removed from several convenient distros
** sequelize runs on SQLite locally and on a real database remotelly, which dispenses the need for local server management which is a pain
| 49.813953 | 149 | 0.778245 |
42ffe8eceb3871090ae4a02cd70aa00657968237 | 325 | adoc | AsciiDoc | extensions/multitile/standard/requirements/tiles/requirements_class_cols-multitiles.adoc | ayoumans/ogcapi-tiles | 0f89870cdfa145a9f93202f83b514e669e4dea0e | [
"OML"
] | 11 | 2021-01-24T17:19:23.000Z | 2022-03-27T18:41:44.000Z | extensions/multitile/standard/requirements/tiles/requirements_class_cols-multitiles.adoc | ayoumans/ogcapi-tiles | 0f89870cdfa145a9f93202f83b514e669e4dea0e | [
"OML"
] | 64 | 2020-12-09T18:51:15.000Z | 2022-03-31T19:24:45.000Z | extensions/multitile/standard/requirements/tiles/requirements_class_cols-multitiles.adoc | ayoumans/ogcapi-tiles | 0f89870cdfa145a9f93202f83b514e669e4dea0e | [
"OML"
] | 7 | 2021-01-06T06:23:03.000Z | 2022-02-22T12:37:01.000Z | [[rc_tiles-cols-multitiles]]
[cols="1,4",width="90%"]
|===
2+|*Requirements Class*
2+|http://www.opengis.net/spec/ogcapi-tiles-1/1.0/req/cols-multitiles
|Target type |Web API
|Dependency |http://www.opengis.net/spec/ogcapi-tiles-1/1.0/req/core
|Dependency |http://www.opengis.net/spec/ogcapi-tiles-1/1.0/req/collections
|===
| 32.5 | 75 | 0.716923 |
9380447774599ebc47d250e5c4c441d8a794f7b4 | 131 | adoc | AsciiDoc | src/main/asciidoc/dataobjects.adoc | daniel-dona/piveau-hub | 889b3c57ea8520b788a0e38859fdac2875d58147 | [
"Apache-2.0"
] | null | null | null | src/main/asciidoc/dataobjects.adoc | daniel-dona/piveau-hub | 889b3c57ea8520b788a0e38859fdac2875d58147 | [
"Apache-2.0"
] | null | null | null | src/main/asciidoc/dataobjects.adoc | daniel-dona/piveau-hub | 889b3c57ea8520b788a0e38859fdac2875d58147 | [
"Apache-2.0"
] | 1 | 2020-10-09T23:53:51.000Z | 2020-10-09T23:53:51.000Z | = Cheatsheets
[[DatasetHelper]]
== DatasetHelper
[cols=">25%,25%,50%"]
[frame="topbot"]
|===
^|Name | Type ^| Description
|===
| 10.076923 | 28 | 0.587786 |
6fe4da2969a0f6ee53c74a3099f2bc238ffa6a30 | 1,815 | asciidoc | AsciiDoc | CobiGen.asciidoc | LarsReinken/devonfw-wiki-tools-cobigen | b7cee0c8e9315201c3cf061e6eed5a4a844b0c80 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | CobiGen.asciidoc | LarsReinken/devonfw-wiki-tools-cobigen | b7cee0c8e9315201c3cf061e6eed5a4a844b0c80 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | CobiGen.asciidoc | LarsReinken/devonfw-wiki-tools-cobigen | b7cee0c8e9315201c3cf061e6eed5a4a844b0c80 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | = CobiGen -- Code-based incremental Generator
:title-logo-image: images/logo/cobigen_logo.png
:leveloffset: 0
[preface]
== Document Description
This document contains the documentation of the CobiGen core module as well as all CobiGen plug-ins and the CobiGen eclipse integration.
**Current versions:**
* CobiGen - Eclipse Plug-in v4.4.1
* CobiGen - Maven Build Plug-in v4.1.0
---
* CobiGen v5.3.1
* CobiGen - Java Plug-in v2.1.0
* CobiGen - XML Plug-in v4.1.0
* CobiGen - TypeScript Plug-in v2.2.0
* CobiGen - Property Plug-in v2.0.0
* CobiGen - Text Merger v2.0.0
* CobiGen - JSON Plug-in v2.0.0
* CobiGen - HTML Plug-in v2.0.1
* CobiGen - Open API Plug-in v2.3.0
* CobiGen - FreeMarker Template Engine v2.0.0
* CobiGen - Velocity Template Engine v2.0.0
**Authors:**
* Malte Brunnlieb
* Jaime Diaz Gonzalez
* Steffen Holzer
* Ruben Diaz Martinez
* Joerg Hohwiller
* Fabian Kreis
* Lukas Goerlach
* Krati Shah
* Christian Richter
* Erik Grüner
* Mike Schumacher
* Marco Rose
[preface]
include::Guide-to-the-Reader[]
:leveloffset: 0
:toc:
:leveloffset: 0
include::Home[]
include::cobigen-usecases[]
:leveloffset: 0
= CobiGen
:leveloffset: 2
include::cobigen-core_configuration[]
:leveloffset: 0
=== Plug-ins
:leveloffset: 3
include::cobigen-javaplugin[]
include::cobigen-propertyplugin[]
include::cobigen-xmlplugin[]
include::cobigen-textmerger[]
include::cobigen-jsonplugin[]
include::cobigen-tsplugin[]
include::cobigen-htmlplugin[]
:leveloffset: 0
= Maven Build Integration
:leveloffset: 2
include::cobigen-maven_configuration[]
:leveloffset: 0
= Eclipse Integration
:leveloffset: 2
include::cobigen-eclipse_installation[]
include::cobigen-eclipse_usage[]
include::cobigen-eclipse_logging[]
:leveloffset: 0
= Template Development
:leveloffset: 2
include::cobigen-templates_helpful-links[]
| 19.308511 | 136 | 0.747658 |
e7a6a275b1f7d37a736f65af351c76dd3b2eadf2 | 1,733 | adoc | AsciiDoc | day00/groovy/michael-simons/README.adoc | clssn/aoc-2019 | a978e5235855be937e60a1e7f88d1ef9b541be15 | [
"MIT"
] | 22 | 2019-11-27T08:28:46.000Z | 2021-04-27T05:37:08.000Z | day00/groovy/michael-simons/README.adoc | clssn/aoc-2019 | a978e5235855be937e60a1e7f88d1ef9b541be15 | [
"MIT"
] | 77 | 2019-11-16T17:22:42.000Z | 2021-05-10T20:36:36.000Z | day00/groovy/michael-simons/README.adoc | clssn/aoc-2019 | a978e5235855be937e60a1e7f88d1ef9b541be15 | [
"MIT"
] | 43 | 2019-11-27T06:36:51.000Z | 2021-11-03T20:56:48.000Z | == Hello, World
This is an over-engineered "Hello, World" application written in Groovy, using Spring Boot and the reactive web stack.
Of course, one wouldn't need a framework for that, but since Ralf asked for it... ¯\\_(ツ)_/¯.
Day 0 doesn't correspond to any of the official AoC puzzles.
The application defines a `HelloWorldController` as a top level Spring bean:
[source, groovy, numbered]
....
include::solution.groovy[tags=helloWorld]
....
The solutions shall print out a greeting.
I will call the above endpoint via Springs reactive `WebClient` and subscribe to the flow.
This is done in a Spring Application listener, reacting on `ApplicationReadyEvent`.
`ApplicationReadyEvent` is fired when the application is ready to serve requests:
After the flow ends, I'll define a completable future to shutdown the application context.
That needs to happen on a new thread, as it contains some blocking calls, which are not allowed in a reactive flow.
[source, groovy, numbered]
....
include::solution.groovy[tags=callingTheGreeter]
....
I connect everything in a top-level class called `DemoApplication` and wire up the listener:
[source, groovy, numbered]
....
include::solution.groovy[tags=puttingItTogether]
....
By using Groovys "Grapes" (`@Grab('org.springframework.boot:spring-boot-starter-webflux:2.2.1.RELEASE')`), this gives a runnable Groovy script:
```
groovy -Dlogging.level.root=WARN -Dspring.main.banner-mode=OFF solution.groovy
```
I included some configuration variables to turn off logging and Springs Banner, so that the application outputs only the greeting.
Bear in mind that this solution fires up a complete web stack and therefore is slower than probably any other _Hello World_ program out there.
| 40.302326 | 143 | 0.77438 |
683a67efe6538d97c5ed19086d378367b517b959 | 5,047 | asciidoc | AsciiDoc | docs/static/breaking-changes.asciidoc | olagache/logstash | df2bd4e45b04a5a27ac47de7e0545d78114365e0 | [
"Apache-2.0"
] | null | null | null | docs/static/breaking-changes.asciidoc | olagache/logstash | df2bd4e45b04a5a27ac47de7e0545d78114365e0 | [
"Apache-2.0"
] | null | null | null | docs/static/breaking-changes.asciidoc | olagache/logstash | df2bd4e45b04a5a27ac47de7e0545d78114365e0 | [
"Apache-2.0"
] | null | null | null | [[breaking-changes]]
== Breaking changes
This section discusses the changes that you need to be aware of when migrating your application to Logstash {version}.
[float]
=== Changes in Logstash Core
* Logstash 5.0.0 requires Java 8
* **Application Settings:** Introduced a new way to configure application settings for Logstash through a settings.yml file. This file
is typically located in `LS_HOME/config`, or `/etc/logstash` when installed via packages. +
[IMPORTANT]
Logstash will not be able to start without this file, so please make sure to
pass in `--path.settings /etc/logstash` if you are starting Logstash manually
after installing it via a package (RPM, DEB).
* **Release Packages:** When Logstash is installed via DEB, RPM packages, it uses `/usr/share/logstash` and `/var/lib/logstash` to install binaries.
Previously it used to install in `/opt/logstash` directory. This change was done to make the user experience
consistent with other Elastic products. Full directory layout is described in <<dir-layout>>. The source of release packages
has changed from `packages.elastic.co` to `artifacts.elastic.co`. For example, 5.x and all the patch releases in this series
will available at `https://artifacts.elastic.co/packages/5.x/apt`
* **Default Logging Level:** Changed the default log severity level to INFO instead of WARN to match Elasticsearch. Existing logs
(in core and plugins) were too noisy at INFO level, so we had to audit log messages and switch some of them to DEBUG
level.
* **Command Line Interface:** Most of the long form <<command-line-flags,options>> have been renamed
to adhere to the yml dot notation to be used in the settings file. Short form options have not been changed.
* **Plugin Manager Renamed:** `bin/plugin` has been renamed to `bin/logstash-plugin`. This change was to mainly prevent `PATH` being polluted when
other components of the Elastic stack are installed on the same instance. Also, this provides a foundation
for future change which will allow Elastic Stack packs to be installed via this script.
[float]
=== Breaking Changes in Plugins
* **Elasticsearch Output Index Template:** The index template for 5.0 has been changed to reflect https://www.elastic.co/guide/en/elasticsearch/reference/5.0/breaking_50_mapping_changes.html[Elasticsearch's mapping changes]. Most
importantly, the subfield for string multi-fields has changed from `.raw` to `.keyword` to match Elasticsearch's default
behavior. The impact of this change to various user groups is detailed below:
** New Logstash 5.0 and Elasticsearch 5.0 users - subfields use `.keyword` from the outset. In Kibana, you can use
`field.keyword` to perform aggregations.
** Existing users with custom templates - most of you won't be impacted because you use a custom template.
** Existing users with default template - Logstash does not force you to upgrade templates if one already exists. If you
intend to move to the new template and want to use `.keyword`, you'll have to reindex existing data. Elasticsearch's
{ref}docs-reindex.html[reindexing API] can help move your data from using `.raw` subfields to `.keyword`.
* **Kafka Input/Output Configuration Changes:** This release added support for the new 0.10 consumer/producer API which supports security features introduced by Kafka.
A few Configuration options were renamed to make it consistent with Kafka consumer and producer settings.
Also, this plugin version will not work with Kafka 0.8 broker.
Please see the following specific plugin documentation for new configuration options:
* <<plugins-inputs-kafka, Kafka Input>>
* <<plugins-outputs-kafka, Kafka Output>>
* **File Input:** SinceDB file is now saved in `<path.data>/plugins/inputs/file` location, not user's home. If you have manually specified `sincedb_path`
configuration, this change will not affect you. If you are moving from 2.x to 5.x, and would like to use the existing SinceDB file, it
has to be copied over to `path.data` manually to use the save state.
[float]
=== Ruby Filter and Custom Plugin Developers
With the migration to the new <<event-api>>, we have changed how you can access internal data compared to previous release.
The Event object no longer returns a reference to the data. Instead, it returns a copy. This might change how you do manipulation of
your data, especially when working with nested hashes. When working with nested hashes, it’s recommended that you
use the `fieldref` syntax instead of using multiple brackets. Also note that we have introduced new Getter/Setter APIs
for accessing information in the Event object. Refer <<event-api>> for details.
**Examples:**
[source, js]
----------------------------------
filter {
ruby {
code => "event.set('[product][price]', 10)"
}
}
----------------------------------
Instead of:
[source, js]
----------------------------------
filter {
ruby {
code => "event['product']['price'] = 10"
}
}
----------------------------------
The above syntax is not supported, and will produce an error at run-time.
| 53.691489 | 230 | 0.747771 |
2ea9bff26ba9aedce9b5eb9bdc1ee5b48ac8f29c | 68 | adoc | AsciiDoc | docs/en-gb/modules/business-decisions/partials/number.adoc | plentymarkets/plenty-manual-docs | 65d179a8feb8fcf1b594ef45883e3437287d8e09 | [
"MIT"
] | null | null | null | docs/en-gb/modules/business-decisions/partials/number.adoc | plentymarkets/plenty-manual-docs | 65d179a8feb8fcf1b594ef45883e3437287d8e09 | [
"MIT"
] | 2 | 2022-01-05T10:31:24.000Z | 2022-03-11T11:56:07.000Z | docs/en-gb/modules/business-decisions/partials/number.adoc | plentymarkets/plenty-manual-docs | 65d179a8feb8fcf1b594ef45883e3437287d8e09 | [
"MIT"
] | 1 | 2021-03-01T09:12:18.000Z | 2021-03-01T09:12:18.000Z | {key-figure} is calculated on the basis of the number. {that-means}
| 34 | 67 | 0.75 |
8a9d8452322921f48eea8e29d45ba9431373690b | 3,224 | adoc | AsciiDoc | doc/teams.adoc | LiScI-Lab/Guardian-of-Times | 6855eff086cf7b6c2d054d18fe0b43ea506ea71b | [
"Apache-2.0"
] | 3 | 2018-11-21T09:38:00.000Z | 2021-07-28T11:11:22.000Z | doc/teams.adoc | LiScI-Lab/Guardian-of-Times | 6855eff086cf7b6c2d054d18fe0b43ea506ea71b | [
"Apache-2.0"
] | 3 | 2018-11-21T09:44:05.000Z | 2021-07-28T10:29:08.000Z | doc/teams.adoc | LiScI-Lab/Guardian-of-Times | 6855eff086cf7b6c2d054d18fe0b43ea506ea71b | [
"Apache-2.0"
] | 1 | 2019-07-30T11:01:21.000Z | 2019-07-30T11:01:21.000Z | == Teams
In GoT werden alle Aktivitäten in Teams verwaltet.
Ein Team besteht aus einem Besitzer (den Vorgesetzten) und mehreren Mitgliedern.
=== Teams erstellen
Teams können im _Meine Teams_ tab erstellt werden.
.Neue Teams erstellen
image::teams/creating-teams.png[]
Im sich öffnenden Formular kann nun ein _Name_, eine _Beschreibung_ und _Tags_ zugewiesen werden.
Die Sichtbarkeit bestimmt, wer das Team _sehen_ und _betreten_ kann:
Versteckt:: Das Team ist _unsichtbar_, nur eingeladene Benutzer können es sehen und betreten.
Privat:: Das Team ist _privat_, _jeder Benutzer_ kann es sehen und um Aufnahme _bitten_.
Es kann *nicht ohne* Einladung/Aufnahme betreten werden.
Öffentlich:: _Jeder_ kann das Team _sehen_ und jeder kann ihm _beitreten_.
.Team Formular
image::teams/new-team.png[]
=== Mitglieder einladen
Im _Mitglieder-Tab_ können neue Mitglieder eingeladen werden.
Sobald sich die Mitglieder in GoT anmelden, sehen sie eine neue Einladung unter ihren _Team Einladungen_.
Dort können diese dem Team beitreten und ab sofort ihre Stunden eintragen.
.Mitglieder-Tab
image::teams/member-tab.png[]
.Neue Mitglieder einladen
image::teams/invide-members.png[]
.Team einladungen
image::teams/invitations-tab.png[]
=== Arbeitszeiten eintragen
Arbeitszeiten können von Mitarbeitern im *Meine Fortschritte*-Tab eingetragen werden.
Entweder über den Shortcut *Fortschritt starten* oder über das grüne *+*.
Fortschritt starten::
Startet eine neue Arbeitszeit mit jetzigem Beginn. Kann danach über den _roten Pause_-Button beendet werden.
Hier können nur eine Beschreibung und Tags eintragen.
Mittels *+*::
Hier können auch Arbeitszeiten aus der Vergangenheit eingetragen werden. Im sich öffnenden Formular können
_Start und Enddatum_, sowie Beschreibung und Tags eingetragen werden.
.Arbeitszeiten-Tab
image::teams/progresses-tab.png[]
.Arbeitszeiten über das *+* eintragen
image::teams/progresses-form.png[]
=== Arbeitszeiten ändern
Über die _Options_ kann jede eingetragene Arbeitszeit nachträglich verändert, gelöscht oder dupliziert werden.
Die neueste Arbeitszeit des heutigen Tages kann fortgesetzt werden.
.Options für Arbeitszeiten
image::teams/progresses-options.png[]
. löscht das Enddatum und führt somit diese Arbeitszeit weiter.
. dupliziert diese Arbeitszeit: Startet eine neue Arbeitszeit ab _jetzt_ und übernimmt Beschreibung sowie Tags.
. editiert diese Arbeitszeit
. löscht diese Arbeitszeit
=== Rollen für Teammitglieder
GoT unterstützt Rollen zur Rechtevergabe innerhalb von Teams.
Im *Mitglieder-Tab* können die Rollen für jedes Teammitglied vom _Owner_ des Teams verändert werden.
Die Rollen im Überblick:
Owner::
Die Person ist der Besitzer des Teams und kann alle Arbeitszeiten einsehen und auch _verwalten/ändern_.
Responsible::
Die Person hat die gleichen Rechte wie der _Owner_.
Diese Rolle sollte nur in Ausnahmefällen vergeben werden.
(Sie wird meist' an Entwickler von GoT vergeben.)
Timekeeper::
Das Mitglied kann neben seinen Arbeitszeiten, auch die aller anderen einsehen.
Participant::
Das Mitglied ist ein Arbeitnehmer. Er kann seine eigenen Arbeitszeiten eintragen und einsehen.
Er sieht keine Arbeitszeiten von anderen Arbeitnehmern.
| 37.929412 | 111 | 0.80397 |
64a323fd227ae29e89570f725f75071009c0e2f8 | 480 | adoc | AsciiDoc | docs/xs2a_flows/ais-decoupled-approach.adoc | ztomic/xs2a-adapter | 724f48ba1fec4e43f9f39ccb26b4c45e0715f934 | [
"Apache-2.0"
] | 32 | 2019-05-09T12:28:56.000Z | 2022-03-18T22:56:36.000Z | docs/xs2a_flows/ais-decoupled-approach.adoc | ztomic/xs2a-adapter | 724f48ba1fec4e43f9f39ccb26b4c45e0715f934 | [
"Apache-2.0"
] | 54 | 2019-06-19T10:08:46.000Z | 2022-01-25T12:24:07.000Z | docs/xs2a_flows/ais-decoupled-approach.adoc | ztomic/xs2a-adapter | 724f48ba1fec4e43f9f39ccb26b4c45e0715f934 | [
"Apache-2.0"
] | 23 | 2019-02-18T16:14:28.000Z | 2022-03-17T12:03:00.000Z | = Get Account Information with Decoupled Approach
The transaction flow in the Decoupled SCA Approach is similar to the Redirect SCA Approach.
The difference is that the ASPSP is asking the PSU to authorise the payment e.g. via a
dedicated mobile app, or any other application or device which is independent from the online
banking frontend.
== Sequence Flow
The example with the explicit start of the authorization process
image::./images/ais-decoupled-approach.png[] | 40 | 95 | 0.791667 |
7149386f99cbe0c5dda6d0fb09df103dcc94dc5b | 5,608 | adoc | AsciiDoc | README.adoc | gzmuSoft/authorization-server | 52f0a96babe4eb4332b81b54b681fca5abafc3b0 | [
"MIT"
] | 24 | 2019-11-05T03:24:26.000Z | 2021-12-29T16:32:31.000Z | README.adoc | gzmuSoft/authorization-server | 52f0a96babe4eb4332b81b54b681fca5abafc3b0 | [
"MIT"
] | 22 | 2020-05-15T08:50:24.000Z | 2020-08-14T08:04:10.000Z | README.adoc | gzmuSoft/authorization-server | 52f0a96babe4eb4332b81b54b681fca5abafc3b0 | [
"MIT"
] | 15 | 2020-03-01T07:42:39.000Z | 2021-12-31T05:21:36.000Z |
= authorization-server
Doc Writer EchoCow <https://echocow.cn>
v1.0, 2019-06-19
:toc:
== 简述
很久以前就想让资源服务器、授权服务器、客户端完全分离的,年初的时候尝试过,但是由于那个时候能力有限,所以失败很多次,后来在考试系统项目初期,也再次尝试,可惜的是依旧失败,那时候真的很菜,不得已暂时将他们放在一个应用之中。最近由于毕业设计的需求的原因,再次需要将授权服务器分离,原本在 https://github.com/gzmuSoft/lesson-cloud[lesson-cloud] 系统内的授权服务器,现在将它独立出来,作为一个单独的应用授权服务器进行使用。花了大概一天的时间完成整个移植过程,觉得自己进步了很多。
目前授权服务器内置我校的 6 张数据表,包括
- `sys_user` 用户表
- `sys_role` 角色表
- `sys_res` 资源表
- `sys_user_role` 用户角色关联表
- `sys_role_res` 角色资源关联表
- `sys_data` 数据表
- `client_details` OAuth2 客户端信息
- `student` 学生信息表
- `teacher` 教师表
授权码模式示例
image::https://resources.echocow.cn/file/2020/01/20logout.gif[logout]
=== 任务
贵州民族大学授权服务器,专门用来为我校应用提供授权服务,使用 spring security oauth 进行完成,完成进度:
- [x] 密码模式、授权码模式
- [x] 手机验证码认证授权
- [x] 手机/邮箱验证码注册
- [x] 多应用认证授权服务器
- [x] jwt 信息非对称密钥加密
- [x] jwt 信息认证完整性填充
- [x] SSO 认证授权服务器
- [x] 微服务注册
- [ ] 高并发处理
- [ ] 集成测试完善
=== 授权服务器
授权服务器的示例参见 https://github.com/gzmuSoft/resource-server[resource-server] 项目,他演示了一个简单的授权服务器的搭建。
== 技术选型
- 构建工具: gradle 5
- 授权协议: OAuth2
- 核心框架: spring security oauth
- 服务发现: spring cloud consul
- 数据缓存: redis 5
- 邮件模板: thymeleaf
- 应用监控: actuator + druid
- 单元测试: junit 5
- 数据库: jpa + postgresql 8
== 基本认证授权
对于认证我们暂时提供三种认证方式,一种 `密码认证`,`手机验证码`,`授权码模式` 三种。提供以下端点:
- `/code/sms` 获取手机验证码端点。
- `/oauth/sms` 手机验证码认证端点。
- `/oauth/token` 授权模式、密码认证 与 刷新令牌 端点。
- `/oauth/authorize` 授权码模式授权端点。
- `/oauth/confirm_access` 用户确认授权提交端点。
- `/oauth/error` 授权服务错误信息端点。
- `/oauth/check_token` 用于资源服务访问的令牌解析端点。
- `/oauth/token_key` 提供公有密匙的端点,使用JWT令牌。
- `/.well-known/jwks.json` JWK 令牌端点。
=== Password authentication
Simply use the default password login provided by Spring Security OAuth2. The request interface is as follows:
- Request path: `/oauth/token`
- Request method: POST
- Request headers:
[cols="1,4,2", options="header"]
.Request headers
|===
|Parameter |Value |Description
|Authorization
|Basic bGVzc29uLWNsb3VkOmxlc3Nvbi1jbG91ZC1zZWNyZXQ=
|Base64 encoding of the OAuth client id and client secret
|===
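The `Authorization` value in the table above is a standard HTTP Basic credential, i.e. the Base64 encoding of `client_id:client_secret`. Assuming the demo pair `lesson-cloud` / `lesson-cloud-secret` (the pair the value shown above decodes to), it can be reproduced as:

```shell
# Build the HTTP Basic credential from "client_id:client_secret".
# "lesson-cloud" / "lesson-cloud-secret" is the pair that the header
# value in the table above decodes to.
printf 'lesson-cloud:lesson-cloud-secret' | base64
# → bGVzc29uLWNsb3VkOmxlc3Nvbi1jbG91ZC1zZWNyZXQ=
```

Note that `printf` is used instead of `echo` so that no trailing newline is included in the encoded bytes.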
- Request parameters:
[cols="1,1,2", options="header"]
.Request parameters
|===
|Parameter |Value |Description
|grant_type
|password
|Grant type
|scope
|all
|Requested scope
|username
|-
|Username
|password
|-
|Password
|===
- Successful response:
[cols="1,1", options="header"]
.Successful response
|===
|Property |Description
|access_token
|JWT-encoded token
|token_type
|Token type, `bearer` by default
|refresh_token
|Token used for refreshing
|expires_in
|Validity period
|scope
|Requested scope, `all` by default
|jti
|JWT ID
|===
- Error responses
[cols="1,2,2,2", options="header"]
.Error responses
|===
|Status code |Cause |Error (error) |Error message (error_message)
| 401
| The request header does not contain an Authorization attribute
| unauthorized
| Full authentication is required to access this resource
| 400
| Invalid grant_type parameter
| unsupported_grant_type
| Unsupported grant type: ...
| 400
| Invalid scope parameter
| invalid_scope
| Invalid scope:...
| 400
| Wrong username or password
| invalid_grant
| Wrong username or password
|===
I will not go into the internals here; this is implemented by Spring Security OAuth2, and you can read the source code if you are interested. Its core is the `org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter` filter.
=== SMS verification code authentication
SMS verification code authentication consists of two steps: first the verification code is sent out, then authentication is requested with the code and the phone number.
==== Obtaining the verification code
Since there is currently no real SMS provider, no actual text message is sent; instead the verification code defaults to 1234 and is stored in Redis.
- Request path: `/code/sms`
- Request method: GET
- Request headers:
[cols="1,1,2", options="header"]
.Request headers
|===
|Parameter |Value |Description
|sms
|-
|Phone number
|===
- Request parameters: none
- Successful response:
[cols="1,1", options="header"]
.Successful response
|===
|Status code | Response body
| 200
| none
|===
- Error responses:
[cols="1,2,2,2", options="header"]
.Error responses
|===
|Status code |Cause |Error (error) |Error message (error_message)
| 401
| The request header does not contain an sms attribute
| unauthorized
| No device number in the request
|===
==== SMS authentication
- Request path: `/oauth/sms`
- Request method: POST
- Request headers:
[cols="1,4,2", options="header"]
.Request headers
|===
|Parameter |Value |Description
|Authorization
|Basic bGVzc29uLWNsb3VkOmxlc3Nvbi1jbG91ZC1zZWNyZXQ=
|Base64 encoding of the OAuth client id and client secret
| sms
| -
| Phone number
| code
| -
| Verification code
|===
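Putting the table above together, here is a sketch of a complete SMS authentication request. The host and phone number are illustrative placeholders, not values from the original documentation; `1234` is the documented default verification code:

```http
POST /oauth/sms HTTP/1.1
Host: auth.example.com
Authorization: Basic bGVzc29uLWNsb3VkOmxlc3Nvbi1jbG91ZC1zZWNyZXQ=
sms: 13800000000
code: 1234
```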
- Successful response:
[cols="1,1", options="header"]
.Successful response
|===
|Property |Description
|access_token
|JWT-encoded token
|token_type
|Token type, `bearer` by default
|refresh_token
|Token used for refreshing
|expires_in
|Validity period
|scope
|Requested scope, `all` by default
|jti
|JWT ID
|===
- Error responses (not yet wrapped in a uniform format):
[cols="1,3,3,3", options="header"]
.Error responses
|===
|Status code |Cause |Error (error) |Error message (error_message)
| 400
| The request body does not contain an sms attribute, or code verification failed
| Failed to obtain the verification code, please resend it
| Failed to obtain the verification code, please resend it
| 400
| The request header does not contain an sms attribute
| No device number in the request
| No device number in the request
|===
==== How it works
Obtaining the SMS verification code is mainly implemented under `cn.edu.gzmu.authserver.validate.sms`; see `cn/edu/gzmu/authserver/validate/package-info.java` for details.
SMS authentication is mainly implemented in `cn.edu.gzmu.authserver.auth.sms`; see `cn/edu/gzmu/authserver/auth/sms/package-info.java` for details.
=== Refreshing the token
- Request path: `/oauth/token`
- Request method: POST
- Request headers:
[cols="1,4,2", options="header"]
.Request headers
|===
|Parameter |Value |Description
|Authorization
|Basic bGVzc29uLWNsb3VkOmxlc3Nvbi1jbG91ZC1zZWNyZXQ=
|Base64 encoding of the OAuth client id and client secret
|===
- Request body:
[cols="1,4,2", options="header"]
.Request body
|===
|Parameter |Value |Description
|grant_type
|refresh_token
|Refresh grant type
|refresh_token
|-
|The refresh_token obtained when the token was issued
|===
- Successful response:
[cols="1,1", options="header"]
.Successful response
|===
|Property |Description
|access_token
|JWT-encoded token
|token_type
|Token type, `bearer` by default
|refresh_token
|Token used for refreshing
|expires_in
|Validity period
|scope
|Requested scope, `all` by default
|jti
|JWT ID
|===
- Error responses:
[cols="1,2,2,2", options="header"]
.Error responses
|===
|Status code |Cause |Error (error) |Error message (error_message)
| 401
| The request header does not contain an Authorization attribute
| unauthorized
| Full authentication is required to access this resource
| 400
| Invalid grant_type parameter
| unsupported_grant_type
| Unsupported grant type: ...
| 400
| Invalid refresh_token
| invalid_grant
| Invalid refresh token:...
|===
=== Checking token information
- Request path: `/oauth/check_token`
- Request method: POST
- Request headers:
[cols="1,4,2", options="header"]
.Request headers
|===
|Parameter |Value |Description
|Authorization
|Basic bGVzc29uLWNsb3VkOmxlc3Nvbi1jbG91ZC1zZWNyZXQ=
|Base64 encoding of the OAuth client id and client secret
|===
- Request body:
[cols="1,4,2", options="header"]
.Request body
|===
|Parameter |Value |Description
|token
|-
|A valid token
|===
- Successful response:
[cols="1,1", options="header"]
.Successful response
|===
|Property |Description
|aud
|Name of the authorized resource server
|user_name
|Username
|scope
|Valid scopes
|active
|Whether the token is active
|exp
|Expiry time
|authorities
|Granted roles
|jti
|JWT ID
|client_id
|Client ID
|===
| 13.04186 | 263 | 0.680991 |
ed15d4f64da9a000c29822292537b10ff2252b9d | 2,726 | adoc | AsciiDoc | docs/modules/servers/partials/SMIMECheckSignature.adoc | vttranlina/james-project | 665261f3d6bbe134e2c2c02e71268f2fadecf659 | [
"Apache-2.0"
] | 634 | 2015-12-21T20:24:06.000Z | 2022-03-24T09:57:48.000Z | docs/modules/servers/partials/SMIMECheckSignature.adoc | vttranlina/james-project | 665261f3d6bbe134e2c2c02e71268f2fadecf659 | [
"Apache-2.0"
] | 4,148 | 2015-09-14T15:59:06.000Z | 2022-03-31T10:29:10.000Z | docs/modules/servers/partials/SMIMECheckSignature.adoc | vttranlina/james-project | 665261f3d6bbe134e2c2c02e71268f2fadecf659 | [
"Apache-2.0"
] | 392 | 2015-07-16T07:04:59.000Z | 2022-03-28T09:37:53.000Z | === SMIMECheckSignature
Verifies the s/mime signature of a message. The s/mime signing ensures that
the private key owner is the real sender of the message. To be checked by
this mailet the s/mime signature must contain the actual signature, the
signer's certificate and optionally a set of certificates that can be used to
create a chain of trust that starts from the signer's certificate and leads
to a known trusted certificate.
This check is composed of two steps: firstly it's ensured that the signature
is valid, then it's checked if a chain of trust starting from the signer
certificate and that leads to a trusted certificate can be created. The first
check verifies that the message has not been modified after the signature
was put and that the signer's certificate was valid at the time of the
signing. The latter should ensure that the signer is who he declares to be.
The results of the checks performed by this mailet are written to a mail
attribute whose default name is org.apache.james.SMIMECheckSignature (it can
be changed using the mailet parameter *mailAttribute*). After
the check this attribute will contain a list of SMIMESignerInfo objects, one
for each message's signer. These objects contain the signer's certificate and
the trust path.
Optionally, specifying the parameter *strip*, the signature of
the message can be stripped after the check. The message will become a
standard message without an attached s/mime signature.
The configuration parameters of this mailet are summarized below. The first
ones define the location, the format and the password of the keystore containing
the certificates that are considered trusted. Note: only the trusted certificate
entries are read, the key ones are not.
* keyStoreType (default: jks): Certificate store format. "jks" is the
standard Java certificate store format, but pkcs12 is also quite common and
compatible with standard email clients like Outlook Express and Thunderbird.
* keyStoreFileName (default: JAVA_HOME/jre/lib/security/cacerts): Certificate
store path.
* keyStorePassword (default: ""): Certificate store password.
Other parameters configure the behavior of the mailet:
* strip (default: false): Defines whether the s/mime signature of the message
has to be stripped after the check or not. Possible values are true and
false.
* mailAttribute (default: org.apache.james.SMIMECheckSignature):
specifies in which attribute the check results will be written.
* onlyTrusted (default: true): Usually a message signature, to be
considered authentic by this mailet, must be valid and trusted. Setting
this mailet parameter to "false" relaxes the latter condition and
"untrusted" signatures will also be considered authentic.
| 52.423077 | 80 | 0.805943 |
f83e541db0cf5a8d8b307426ae9d556dbf38e179 | 152 | adoc | AsciiDoc | cardreader.provider.usb.tactivo/doc/userguide/UTACCRP_API.adoc | gematik/ref-CardReaderProvider-USB-Tactivo-Android | e49e01f305cc3f24fe3544fbe9cdb3077cddcb6f | [
"Apache-2.0"
] | null | null | null | cardreader.provider.usb.tactivo/doc/userguide/UTACCRP_API.adoc | gematik/ref-CardReaderProvider-USB-Tactivo-Android | e49e01f305cc3f24fe3544fbe9cdb3077cddcb6f | [
"Apache-2.0"
] | null | null | null | cardreader.provider.usb.tactivo/doc/userguide/UTACCRP_API.adoc | gematik/ref-CardReaderProvider-USB-Tactivo-Android | e49e01f305cc3f24fe3544fbe9cdb3077cddcb6f | [
"Apache-2.0"
] | null | null | null | include::config.adoc[]
== API Documentation
Generated API docs are available at https://gematik.github.io/ref-CardReaderProvider-USB-Tactivo-Android.
| 25.333333 | 105 | 0.796053 |
5811c03e002962a3f11b5f87e88ec2aedc6e06ef | 52 | adoc | AsciiDoc | src/test/resources/com/github/fluorumlabs/asciidocj/tests/images/block-image-with-positioning-roles.adoc | paulroemer/asciidocj | 498f3faf8824314f8beaff4c26f9005d911a7b30 | [
"Apache-2.0"
] | 5 | 2018-12-05T12:13:01.000Z | 2021-07-15T13:42:33.000Z | src/test/resources/com/github/fluorumlabs/asciidocj/tests/images/block-image-with-positioning-roles.adoc | paulroemer/asciidocj | 498f3faf8824314f8beaff4c26f9005d911a7b30 | [
"Apache-2.0"
] | 3 | 2021-12-09T08:22:13.000Z | 2021-12-09T08:22:29.000Z | src/test/resources/com/github/fluorumlabs/asciidocj/tests/images/block-image-with-positioning-roles.adoc | paulroemer/asciidocj | 498f3faf8824314f8beaff4c26f9005d911a7b30 | [
"Apache-2.0"
] | 3 | 2020-04-28T07:43:29.000Z | 2021-12-09T06:58:45.000Z | [.right.text-center]
image::tiger.png[Tiger,200,200] | 26 | 31 | 0.75 |
045804e872e0b1f128d3826ba08e14bf2785c451 | 877 | asciidoc | AsciiDoc | docs/static/listing-a-plugin.asciidoc | 111andre111/logstash | f2a9656bc68f0726372ee475861ba4af76a60a31 | [
"Apache-2.0"
] | null | null | null | docs/static/listing-a-plugin.asciidoc | 111andre111/logstash | f2a9656bc68f0726372ee475861ba4af76a60a31 | [
"Apache-2.0"
] | null | null | null | docs/static/listing-a-plugin.asciidoc | 111andre111/logstash | f2a9656bc68f0726372ee475861ba4af76a60a31 | [
"Apache-2.0"
] | null | null | null | [[plugin-listing]]
=== List your plugin
The {logstash-ref}[Logstash Reference] is the first place {ls} users look for plugins and documentation.
If your plugin meets the <<plugin-acceptance,quality and acceptance guidelines>>, we may be able to list it in the guide.
The plugin source and readme will continue to live in your repo, and we will direct users there.
If you would like to have your plugin included in the {logstash-ref}[Logstash Reference]:
* verify that it meets our <<plugin-acceptance,quality and acceptance guidelines>>
* create a new https://github.com/elasticsearch/logstash/issues[issue] in the Logstash repository.
** Use `PluginListing: <yourpluginname>` as the title for the issue.
** Apply the `docs` label.
** In the body of the issue, explain the purpose and value your plugin offers, and describe how this plugin adheres to the guidelines.
| 54.8125 | 134 | 0.766249 |
bd8f45c15988f732d1dcd1fd32d31ecc6235e343 | 375 | adoc | AsciiDoc | examples/demo/domain/src/main/java/demoapp/dom/types/isisext/asciidocs/samples/IsisAsciiDocSamples-sample5.adoc | stitheridge/isis | 1a24310c01c6c4d52cd2a451c7a97d4e28f6dd59 | [
"Apache-2.0"
] | 665 | 2015-01-01T06:06:28.000Z | 2022-03-27T01:11:56.000Z | examples/demo/domain/src/main/java/demoapp/dom/types/isisext/asciidocs/samples/IsisAsciiDocSamples-sample5.adoc | stitheridge/isis | 1a24310c01c6c4d52cd2a451c7a97d4e28f6dd59 | [
"Apache-2.0"
] | 176 | 2015-02-07T11:29:36.000Z | 2022-03-25T04:43:12.000Z | examples/demo/domain/src/main/java/demoapp/dom/types/isisext/asciidocs/samples/IsisAsciiDocSamples-sample5.adoc | stitheridge/isis | 1a24310c01c6c4d52cd2a451c7a97d4e28f6dd59 | [
"Apache-2.0"
] | 337 | 2015-01-02T03:01:34.000Z | 2022-03-21T15:56:28.000Z | == We're back!
.Asciidoctor usage example, should contain 3 lines
[source, ruby]
----
doc = Asciidoctor::Document.new("*This* is it!", :header_footer => false)
puts doc.render
----
// FIXME: use ifdef to show output according to backend
Here's what it outputs (using the built-in templates):
....
<div class="paragraph">
<p><strong>This</strong> is it!</p>
</div>
.... | 19.736842 | 73 | 0.669333 |
a1e143f981e07a2745bfdd2c0c3189a70ccdc6ce | 387 | asciidoc | AsciiDoc | docs/glossary.asciidoc | rylnd/ecs | d66dc89988b9432d0a261debe7e2db048794c983 | [
"Apache-2.0"
] | 854 | 2018-05-24T14:52:14.000Z | 2022-03-31T02:47:18.000Z | docs/glossary.asciidoc | rylnd/ecs | d66dc89988b9432d0a261debe7e2db048794c983 | [
"Apache-2.0"
] | 1,102 | 2018-05-25T13:59:26.000Z | 2022-03-28T22:21:12.000Z | docs/glossary.asciidoc | rylnd/ecs | d66dc89988b9432d0a261debe7e2db048794c983 | [
"Apache-2.0"
] | 369 | 2018-05-24T14:52:06.000Z | 2022-03-31T12:19:50.000Z | [[ecs-glossary]]
=== Glossary of {ecs} Terms
[[ecs-glossary-ecs]]
ECS::
*Elastic Common Schema*. The Elastic Common Schema (ECS) is a document schema
for Elasticsearch, for use cases such as logging and metrics.
ECS defines a common set of fields, their datatype,
and gives guidance on their correct usage.
ECS is used to improve uniformity of event data coming from different sources.
| 35.181818 | 78 | 0.770026 |
18c6c78bb498a58f1f327fba5010b8e7051b276a | 508 | adoc | AsciiDoc | app-main-integrationTest/src/docs/asciidoc/fragments/common/Responses.adoc | FrancescoJo/spring-board-demo | 9a3860f780a1c994fcc7f2bed1525a0d73a3fa64 | [
"Beerware"
] | 3 | 2020-07-20T23:58:21.000Z | 2021-06-23T11:36:28.000Z | app-main-integrationTest/src/docs/asciidoc/fragments/common/Responses.adoc | FrancescoJo/spring-board-demo | 9a3860f780a1c994fcc7f2bed1525a0d73a3fa64 | [
"Beerware"
] | 56 | 2020-07-15T03:18:21.000Z | 2021-07-13T11:46:06.000Z | app-main-integrationTest/src/docs/asciidoc/fragments/common/Responses.adoc | FrancescoJo/spring-board-demo | 9a3860f780a1c994fcc7f2bed1525a0d73a3fa64 | [
"Beerware"
] | 1 | 2022-01-19T04:57:56.000Z | 2022-01-19T04:57:56.000Z | [[common-payloads-responses]]
== Common Response body format
[source,json]
----
{
"body" : ResponseBody<JsonObject>,
"timestamp" : <Number>,
"type" : "OK"
}
----
=== Response fields
|===
| Path | Type | Description
| `+body+`
| `+Object+`
| Wrapper object of actual response.
| `+timestamp+`
| `+Number+`
| UTC-based UNIX timestamp of when the server made this response.
| `+type+`
| `+String+`
| Reserved value to indicate whether this envelope includes a requested object(`OK`) or error(`ERROR`).
|===
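A concrete instance of this envelope could look like the following; the `body` payload here is a hypothetical example object, not part of the documented schema:

```json
{
  "body" : {
    "boardId" : 1,
    "title" : "General discussion"
  },
  "timestamp" : 1594821960000,
  "type" : "OK"
}
```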
| 17.517241 | 103 | 0.645669 |
76226082df374eca279d8d34f2cdeb0a4b7be135 | 722 | adoc | AsciiDoc | content/manual/adoc/en/deployment/app_dirs/log_dir.adoc | xmeng1/documentation | ed7fa6b5b9fd389278a710cd9531124ddd52dda1 | [
"CC-BY-4.0"
] | 33 | 2018-03-05T14:04:52.000Z | 2021-06-21T13:20:23.000Z | content/manual/adoc/en/deployment/app_dirs/log_dir.adoc | xmeng1/documentation | ed7fa6b5b9fd389278a710cd9531124ddd52dda1 | [
"CC-BY-4.0"
] | 744 | 2018-03-06T09:17:39.000Z | 2021-12-16T05:50:56.000Z | content/manual/adoc/en/deployment/app_dirs/log_dir.adoc | xmeng1/documentation | ed7fa6b5b9fd389278a710cd9531124ddd52dda1 | [
"CC-BY-4.0"
] | 58 | 2018-04-02T16:25:58.000Z | 2022-03-14T09:14:13.000Z | :sourcesdir: ../../../../source
[[log_dir]]
==== Log Directory
Log directory is a directory where the application creates log files. The location and content of log files is defined by the configuration of the *Logback* framework provided in the `logback.xml` file. See <<logging>> for more details.
Location of log files is usually specified relative to the <<app_home,application home>> directory, for example:
[source,xml]
----
include::{sourcesdir}/deployment/log_dir_1.xml[]
----
You should also specify the same directory as defined by `logback.xml` in the <<cuba.logDir,cuba.logDir>> application property. It will allow system administrators to view and load log files in the *Administration > Server Log* screen.
| 45.125 | 236 | 0.756233 |
e278b2d199617cb3db1065368bf4c540e4b4505e | 1,882 | adoc | AsciiDoc | documentation/ansible-playbook/modules/ansible-playbook-appendix-python3-crypto-pyghmi.adoc | mbpavan/baremetal-deploy | 73e8650a9300c1ea001967c2a32deaae90209447 | [
"Apache-2.0"
] | 119 | 2019-11-21T19:59:45.000Z | 2022-03-11T13:14:07.000Z | documentation/ansible-playbook/modules/ansible-playbook-appendix-python3-crypto-pyghmi.adoc | mbpavan/baremetal-deploy | 73e8650a9300c1ea001967c2a32deaae90209447 | [
"Apache-2.0"
] | 824 | 2019-11-28T14:06:54.000Z | 2022-03-30T11:03:25.000Z | documentation/ansible-playbook/modules/ansible-playbook-appendix-python3-crypto-pyghmi.adoc | mbpavan/baremetal-deploy | 73e8650a9300c1ea001967c2a32deaae90209447 | [
"Apache-2.0"
] | 119 | 2019-11-21T18:48:46.000Z | 2022-03-22T10:57:48.000Z | [id="ansible-playbook-appendix-appendix-python3-crypto-pyghmi"]
[[packages]]
[appendix]
== Installing `python3-crypto` and `python3-pyghmi`
The Ansible playbook uses the https://docs.ansible.com/ansible/latest/modules/ipmi_power_module.html[`ipmi_power`]
module to power off the OpenShift cluster nodes prior to deployment. This
particular module has a dependency on two packages:
`python3-crypto` and `python3-pyghmi`. When using Red Hat Enterprise Linux 8,
these packages reside in neither the BaseOS nor the AppStream repositories. If using
`subscription-manager`, they reside in the OpenStack repositories such as
`openstack-16-for-rhel-8-x86_64-rpms`. However, to simplify the installation
of these packages, the playbook uses the available versions from
`trunk.rdoproject.org`.
The playbook assumes that the rpm packages are manually installed on the
provision host. When the packages are
not already installed on the system, the following error can be expected:
```sh
TASK [node-prep : Install required packages] ************************************************************************************************
Thursday 07 May 2020 19:11:35 +0000 (0:00:00.161) 0:00:11.940 **********
fatal: [provisioner.example.com]: FAILED! => {"changed": false, "failures": ["No package python3-crypto available.", "No package python3-pyghmi available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
---
The `python3-crypto` and `python3-pyghmi` packages can be downloaded from the following
links for installation on an offline provision host, then transferred and installed locally:
- https://trunk.rdoproject.org/rhel8-master/deps/latest/Packages/python3-crypto-2.6.1-18.el8ost.x86_64.rpm[python3-crypto]
- https://trunk.rdoproject.org/rhel8-master/deps/latest/Packages/python3-pyghmi-1.0.22-2.el8ost.noarch.rpm[python3-pyghmi]
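Once downloaded, a local installation might look like the following session; this is a sketch (the package file names are the ones linked above and may change over time, and a local install needs no network access on the provision host):

```sh
# On a machine with internet access:
curl -O https://trunk.rdoproject.org/rhel8-master/deps/latest/Packages/python3-crypto-2.6.1-18.el8ost.x86_64.rpm
curl -O https://trunk.rdoproject.org/rhel8-master/deps/latest/Packages/python3-pyghmi-1.0.22-2.el8ost.noarch.rpm

# After copying both rpms to the offline provision host:
sudo dnf install ./python3-crypto-2.6.1-18.el8ost.x86_64.rpm \
                 ./python3-pyghmi-1.0.22-2.el8ost.noarch.rpm
```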
| 53.771429 | 240 | 0.731668 |
0b6022f7595d5db3a94c5fa1e4c2c98abbdb9020 | 483 | adoc | AsciiDoc | subprojects/documentation/src/docs/asciidoc/ref/attribute/qualifier.adoc | canoo/open-dolphin | 50166defc3ef2de473d7a7246c32dc04304d80ef | [
"Apache-2.0"
] | 38 | 2015-01-16T21:21:57.000Z | 2020-10-02T20:50:36.000Z | subprojects/documentation/src/docs/asciidoc/ref/attribute/qualifier.adoc | canoo/open-dolphin | 50166defc3ef2de473d7a7246c32dc04304d80ef | [
"Apache-2.0"
] | 21 | 2015-01-21T12:02:21.000Z | 2018-01-17T12:53:38.000Z | subprojects/documentation/src/docs/asciidoc/ref/attribute/qualifier.adoc | canoo/open-dolphin | 50166defc3ef2de473d7a7246c32dc04304d80ef | [
"Apache-2.0"
] | 31 | 2015-01-09T00:09:08.000Z | 2021-11-24T09:28:52.000Z | In addition to name and tag, dolphin attributes are also defined by a _qualifier_.
This property captures the fact that this attribute represents a qualified feature in the domain world.
Whenever the value of an attribute changes, instant update of all attributes that share the same qualifier will happen automatically.
Qualifiers are a prerequisite for stable bindings.
OpenDolphin provides methods on both the client and server side to find attributes by name, by tag, and by qualifier. | 60.375 | 133 | 0.824017 |
c1a1ee7e589520baa65018c96b27424a2ac0cbc7 | 931 | adoc | AsciiDoc | tests/README.adoc | ababi92/ansible | d9730766ca4d58c206dabf7ecaaf4c45eedf3616 | [
"Apache-2.0"
] | 1 | 2021-06-15T09:40:29.000Z | 2021-06-15T09:40:29.000Z | tests/README.adoc | ababi92/ansible | d9730766ca4d58c206dabf7ecaaf4c45eedf3616 | [
"Apache-2.0"
] | 8 | 2021-06-04T08:26:46.000Z | 2022-02-02T14:33:43.000Z | tests/README.adoc | ababi92/ansible | d9730766ca4d58c206dabf7ecaaf4c45eedf3616 | [
"Apache-2.0"
] | 2 | 2021-06-03T09:10:55.000Z | 2021-09-09T15:37:57.000Z | // Copyright (C) 2020, RTE (http://www.rte-france.com)
// SPDX-License-Identifier: CC-BY-4.0
Ansible roles tests
===================
== Introduction
These tests allow us to test our Ansible roles.
== Prerequisites
These tests require a cluster already deployed with the following parameters:
* an rbd Ceph pool called "rbd".
* a libvirt volume pool called "ceph" linked to the pool "rbd".
It also requires an inventory describing the cluster, the same as the one used
to deploy the cluster.
Finally, the test data must have been generated with the
"tests/generate_test_data.sh" script. This script requires qemu-img but can be called
with cqfd:
....
cqfd run tests/generate_test_data.sh
....
== Using tests
These tests are playbooks that can be used with ansible-playbook. They must be
launched from the root directory of the project. For example:
....
ansible-playbook -i my_inventory tests/test_import_disk.yaml
....
| 27.382353 | 78 | 0.745435 |
23e4c440ee70407b74953170707ed94e5e129379 | 1,195 | adoc | AsciiDoc | documentationtesting/src/test/docs/org/sfvl/doctesting/writer/_DocumentationBuilderTest.define_document_structure.approved.adoc | sfauvel/documentationtesting | 835bb7f5e202a1ebdd28348634df7aec25a5e9a5 | [
"MIT"
] | 5 | 2020-01-09T19:22:34.000Z | 2022-03-31T16:48:21.000Z | documentationtesting/src/test/docs/org/sfvl/doctesting/writer/_DocumentationBuilderTest.define_document_structure.approved.adoc | sfauvel/documentationtesting | 835bb7f5e202a1ebdd28348634df7aec25a5e9a5 | [
"MIT"
] | 4 | 2020-02-02T19:37:47.000Z | 2020-10-15T19:31:03.000Z | documentationtesting/src/test/docs/org/sfvl/doctesting/writer/_DocumentationBuilderTest.define_document_structure.approved.adoc | sfauvel/documentationtesting | 835bb7f5e202a1ebdd28348634df7aec25a5e9a5 | [
"MIT"
] | 2 | 2020-02-02T17:19:42.000Z | 2020-03-03T13:41:24.000Z | ifndef::ROOT_PATH[:ROOT_PATH: ../../../..]
[#org_sfvl_doctesting_writer_documentationbuildertest_define_document_structure]
= Define document structure
We can change document structure to organize it as we want.
In this example, we display only classes includes and we add text before and after them.
.Usage
[source, java, indent=0]
----
Class[] classesToAdd = {
org.sfvl.doctesting.sample.basic.FirstTest.class,
org.sfvl.docformatter.AsciidocFormatterTest.class
};
DocumentationBuilder builder = new DocumentationBuilder()
.withClassesToInclude(classesToAdd)
.withLocation(Paths.get("org", "sfvl", "docformatter"))
.withStructureBuilder(DocumentationBuilder.class,
b -> "Documentation of classes",
b -> b.includeClasses(),
b -> "This is my footer"
);
String document = builder.build();
----
.Document generated
----
Documentation of classes
\include::../doctesting/sample/basic/FirstTest.adoc[leveloffset=+1]
\include::AsciidocFormatterTest.adoc[leveloffset=+1]
This is my footer
---- | 30.641026 | 88 | 0.640167 |
3367a7bbe037b91f23d8cb80bfe46c7d89fb4df6 | 1,147 | adoc | AsciiDoc | golang/tools/env/README.adoc | russellgao/blog | 6a45e674ad7e14af7e3b64c9e7624b8661fb90d2 | [
"Apache-2.0"
] | 1 | 2020-05-20T01:52:02.000Z | 2020-05-20T01:52:02.000Z | golang/tools/env/README.adoc | russellgao/blog | 6a45e674ad7e14af7e3b64c9e7624b8661fb90d2 | [
"Apache-2.0"
] | null | null | null | golang/tools/env/README.adoc | russellgao/blog | 6a45e674ad7e14af7e3b64c9e7624b8661fb90d2 | [
"Apache-2.0"
] | null | null | null | = go env
:toc:
:toclevels: 5
:toc-title:
:sectnums:
== Purpose
The `go env` command is used to print environment information about the Go language.
The general Go environment information that the `go env` command can print:
|===
| Name | Description
| CGO_ENABLED | A flag indicating whether the cgo tool is available.
| GOARCH | The target computing architecture of the build environment.
| GOBIN | The absolute path of the directory that stores executable files.
| GOCHAR | The single-character identifier of the target computing architecture of the build environment.
| GOEXE | The suffix of executable files.
| GOHOSTARCH | The target computing architecture of the runtime environment.
| GOOS | The target operating system of the build environment.
| GOHOSTOS | The target operating system of the runtime environment.
| GOPATH | The absolute path of the workspace directory.
| GORACE | Options related to data race detection.
| GOROOT | The absolute path of the Go installation directory.
| GOTOOLDIR | The absolute path of the Go tools directory.
|===
=== GORACE
The value of GORACE contains options related to data race detection. Data races are among the most common and hardest-to-debug classes of bugs in concurrent programs. A data race occurs when multiple goroutines access the same variable concurrently and at least one of the goroutines is writing to that variable.
Data race detection needs to be enabled explicitly. Remember the `-race` flag? We can enable data race detection by adding this flag when executing some standard go commands. In that case, the value of GORACE is used. The standard go commands that support the `-race` flag are: `go test`, `go run`, `go build`, and `go install`.
The value of GORACE has the form "option1=val1 option2=val2", that is: an option name and its value are separated by an equals sign "=", and multiple options are separated by spaces " ". The data race detection options are log_path, exitcode, strip_path_prefix, and history_size. To set the value of GORACE, we need to set the environment variable GORACE. Alternatively, we can also set it temporarily when executing a go command, like this:
```
hc@ubt:~/golang/goc2p/src/cnet/ctcp$ GORACE="log_path=/home/hc/golang/goc2p/race/report strip_path_prefix=home/hc/golang/goc2p/" go test -race
```
== References
- https://wiki.jikexueyuan.com/project/go-command-tutorial/0.14.html
| 26.674419 | 190 | 0.787271 |
a94a3a09b41f2f230983a881b4e67d009aeb40ae | 1,886 | adoc | AsciiDoc | readme.adoc | arun-gupta/shades-of-trump | 5af196fee8770b1ae1546788a0c24356baf394b5 | [
"Apache-2.0"
] | null | null | null | readme.adoc | arun-gupta/shades-of-trump | 5af196fee8770b1ae1546788a0c24356baf394b5 | [
"Apache-2.0"
] | null | null | null | readme.adoc | arun-gupta/shades-of-trump | 5af196fee8770b1ae1546788a0c24356baf394b5 | [
"Apache-2.0"
] | null | null | null | = Shades of Donald Trump
I'm an independent voter. Voted for President Barack Obama in the past two elections and glad I did!
This is a no-emotions, fact-based look at the different shades of Republican presidential candidate Donald Trump. If you like Trump, add a bullet to #YesTrump. If you don't like Trump, add a bullet to #NoTrump.
Send a Pull Request if you'd like to add something to the list.
NOTE: Please do *not* include links from donaldjtrump.com. Instead include links from third-party sites.
== #YesTrump
. HERE
== #NoTrump
. https://www.reddit.com/r/EnoughTrumpSpam/comments/4r2yxs/a_final_response_to_the_tell_me_why_trump_is/[Donald Trump is racist]
.. https://www.washingtonpost.com/politics/trump-pushes-expanded-ban-on-muslims-and-other-foreigners/2016/06/13/c9988e96-317d-11e6-8ff7-7b6c1998b7a0_story.html[Ban Muslims from entering the United States]
.. http://www.newyorker.com/magazine/2015/09/07/the-death-and-life-of-atlantic-city[Ordered blacks to leave casino floor whenever him or wife arrives on property]
.. http://www.huffingtonpost.com/entry/9-outrageous-things-donald-trump-has-said-about-latinos_us_55e483a1e4b0c818f618904b[Called Latino immigrants "`criminals`" and "`rapists`"]
. http://www.thewrap.com/donald-trump-says-he-has-more-humility-than-you-would-think-video/[More humble than you can think]
. http://dailycaller.com/2016/07/31/trump-gets-asked-about-chelsea-and-ivankas-friendship-youve-got-to-see-what-was-said/[Wishes Ivanka and Chelsea are not friends]
. Global warming is created by and for the Chinese: https://twitter.com/realdonaldtrump/status/265895292191248385
. Chinese trade: http://thefederalist.com/2016/01/20/almost-everything-donald-trump-says-about-trade-with-china-is-wrong/
. http://www.politifact.com/truth-o-meter/article/2016/jul/26/how-trump-plans-build-wall-along-us-mexico-border/[Build a wall along US-Mexico border ]
| 65.034483 | 204 | 0.792153 |
92a7e74fa6d74fba47b46f74cb4af4028a97235a | 281 | asciidoc | AsciiDoc | vendor/github.com/elastic/beats/metricbeat/module/nginx/stubstatus/_meta/docs.asciidoc | PPACI/krakenbeat | e75ec8f006164acb8a57d0c9609bebe534955813 | [
"Apache-2.0"
] | 3 | 2018-01-04T19:15:26.000Z | 2020-02-20T03:35:27.000Z | vendor/github.com/elastic/beats/metricbeat/module/nginx/stubstatus/_meta/docs.asciidoc | PPACI/krakenbeat | e75ec8f006164acb8a57d0c9609bebe534955813 | [
"Apache-2.0"
] | null | null | null | vendor/github.com/elastic/beats/metricbeat/module/nginx/stubstatus/_meta/docs.asciidoc | PPACI/krakenbeat | e75ec8f006164acb8a57d0c9609bebe534955813 | [
"Apache-2.0"
] | 1 | 2020-10-11T14:57:48.000Z | 2020-10-11T14:57:48.000Z | === Nginx StubStatus Metricset
The Nginx `stubstatus` metricset collects data from the Nginx
http://nginx.org/en/docs/http/ngx_http_stub_status_module.html[ngx_http_stub_status] module. It
scrapes the server status data from the web page generated by ngx_http_stub_status.
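A typical module configuration enabling this metricset might look like the sketch below; the host and status path are assumptions about the local Nginx setup:

```yaml
- module: nginx
  metricsets: ["stubstatus"]
  period: 10s
  # Where ngx_http_stub_status serves its status page.
  hosts: ["http://127.0.0.1"]
  server_status_path: "server-status"
```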
| 40.142857 | 96 | 0.807829 |
ec1a02b06a215c61119fda4bd63474e2c1976456 | 7,452 | adoc | AsciiDoc | docs/src/main/asciidoc/other-topics.adoc | pmv/spring-cloud-vault | 79eb7c2b579fa98982c7e01a5fa6e96ed2521ea9 | [
"Apache-2.0"
] | null | null | null | docs/src/main/asciidoc/other-topics.adoc | pmv/spring-cloud-vault | 79eb7c2b579fa98982c7e01a5fa6e96ed2521ea9 | [
"Apache-2.0"
] | null | null | null | docs/src/main/asciidoc/other-topics.adoc | pmv/spring-cloud-vault | 79eb7c2b579fa98982c7e01a5fa6e96ed2521ea9 | [
"Apache-2.0"
] | null | null | null | == Service Registry Configuration
You can use a `DiscoveryClient` (such as from Spring Cloud Consul) to locate a Vault server by setting spring.cloud.vault.discovery.enabled=true (default `false`).
The net result of that is that your apps need an `application.yml` (or an environment variable) with the appropriate discovery configuration.
The benefit is that the Vault can change its co-ordinates, as long as the discovery service is a fixed point.
The default service id is `vault` but you can change that on the client with
`spring.cloud.vault.discovery.serviceId`.
The discovery client implementations all support some kind of metadata map (e.g. for Eureka we have eureka.instance.metadataMap).
Some additional properties of the service may need to be configured in its service registration metadata so that clients can connect correctly.
Service registries that do not provide details about transport layer security need to provide a `scheme` metadata entry to be set either to `https` or `http`.
If no scheme is configured and the service is not exposed as a secure service, then the configuration defaults to `spring.cloud.vault.scheme`, which is `https` when it's not set.
====
[source,yaml]
----
spring.cloud.vault.discovery:
enabled: true
service-id: my-vault-service
----
====
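For example, with Eureka the `scheme` entry could be advertised through the instance metadata map; the following is a sketch based on the `eureka.instance.metadataMap` property mentioned above, not an excerpt from the Eureka documentation:

```yaml
eureka:
  instance:
    metadataMap:
      scheme: https
```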
[[vault.config.fail-fast]]
== Vault Client Fail Fast
In some cases, it may be desirable to fail startup of a service if it cannot connect to the Vault Server.
If this is the desired behavior, set the bootstrap configuration property
`spring.cloud.vault.fail-fast=true` and the client will halt with an Exception.
====
[source,yaml]
----
spring.cloud.vault:
fail-fast: true
----
====
[[vault.config.namespaces]]
== Vault Enterprise Namespace Support
Vault Enterprise allows using namespaces to isolate multiple Vaults on a single Vault server.
Configuring a namespace by setting
`spring.cloud.vault.namespace=…` enables the namespace header
`X-Vault-Namespace` on every outgoing HTTP request when using the Vault
`RestTemplate` or `WebClient`.
Please note that this feature is not supported by Vault Community edition and has no effect on Vault operations.
====
[source,yaml]
----
spring.cloud.vault:
namespace: my-namespace
----
====
See also: https://www.vaultproject.io/docs/enterprise/namespaces/index.html[Vault Enterprise: Namespaces]
[[vault.config.ssl]]
== Vault Client SSL configuration
SSL can be configured declaratively by setting various properties.
You can set either `javax.net.ssl.trustStore` to configure JVM-wide SSL settings or `spring.cloud.vault.ssl.trust-store`
to set SSL settings only for Spring Cloud Vault Config.
====
[source,yaml]
----
spring.cloud.vault:
ssl:
trust-store: classpath:keystore.jks
trust-store-password: changeit
trust-store-type: JKS
----
====
* `trust-store` sets the resource for the trust-store.
SSL-secured Vault communication will validate the Vault SSL certificate with the specified trust-store.
* `trust-store-password` sets the trust-store password
* `trust-store-type` sets the trust-store type. Supported values are all supported `KeyStore` types including `PEM`.
Please note that configuring `spring.cloud.vault.ssl.*` can only be applied when either Apache Http Components or the OkHttp client is on your class-path.
[[vault-lease-renewal]]
== Lease lifecycle management (renewal and revocation)
With every secret, Vault creates a lease:
metadata containing information such as a time duration, renewability, and more.
Vault promises that the data will be valid for the given duration, or Time To Live (TTL).
Once the lease is expired, Vault can revoke the data, and the consumer of the secret can no longer be certain that it is valid.
Spring Cloud Vault maintains a lease lifecycle beyond the creation of login tokens and secrets.
That said, login tokens and secrets associated with a lease are scheduled for renewal just before the lease expires until terminal expiry.
Application shutdown revokes obtained login tokens and renewable leases.
Secret service and database backends (such as MongoDB or MySQL) usually generate a renewable lease so generated credentials will be disabled on application shutdown.
NOTE: Static tokens are not renewed or revoked.
Lease renewal and revocation is enabled by default and can be disabled by setting `spring.cloud.vault.config.lifecycle.enabled`
to `false`.
This is not recommended as leases can expire and Spring Cloud Vault can no longer access Vault or services using generated credentials, while valid credentials remain active after application shutdown.
====
[source,yaml]
----
spring.cloud.vault:
config.lifecycle:
enabled: true
min-renewal: 10s
expiry-threshold: 1m
lease-endpoints: Legacy
----
====
* `enabled` controls whether leases associated with secrets are considered to be renewed and expired secrets are rotated.
Enabled by default.
* `min-renewal` sets the duration that is at least required before renewing a lease.
This setting prevents renewals from happening too often.
* `expiry-threshold` sets the expiry threshold.
A lease is renewed the configured period of time before it expires.
* `lease-endpoints` sets the endpoints for renew and revoke.
Legacy for vault versions before 0.8 and SysLeases for later.
See also: https://www.vaultproject.io/docs/concepts/lease.html[Vault Documentation: Lease, Renew, and Revoke]
[[vault-session-lifecycle]]
== Session token lifecycle management (renewal, re-login and revocation)
A Vault session token (also referred to as `LoginToken`) is quite similar to a lease as it has a TTL, max TTL, and may expire.
Once a login token expires, it cannot be used anymore to interact with Vault.
Therefore, Spring Vault ships with a `SessionManager` API for imperative and reactive use.
Spring Cloud Vault maintains the session token lifecycle by default.
Session tokens are obtained lazily so the actual login is deferred until the first session-bound use of Vault.
Once Spring Cloud Vault obtains a session token, it retains it until expiry.
The next time a session-bound activity is used, Spring Cloud Vault re-logins into Vault and obtains a new session token.
On application shut down, Spring Cloud Vault revokes the token if it was still active to terminate the session.
Session lifecycle is enabled by default and can be disabled by setting `spring.cloud.vault.session.lifecycle.enabled`
to `false`.
Disabling is not recommended as session tokens can expire and Spring Cloud Vault can no longer access Vault.
====
[source,yaml]
----
spring.cloud.vault:
session.lifecycle:
enabled: true
refresh-before-expiry: 10s
expiry-threshold: 20s
----
====
* `enabled` controls whether session lifecycle management is enabled to renew session tokens.
Enabled by default.
* `refresh-before-expiry` controls the point in time when the session token gets renewed.
The refresh time is calculated by subtracting `refresh-before-expiry` from the token expiry time.
Defaults to `5 seconds`.
* `expiry-threshold` sets the expiry threshold.
The threshold represents a minimum TTL duration to consider a session token as valid.
Tokens with a shorter TTL are considered expired and are not used anymore.
Should be greater than `refresh-before-expiry` to prevent token expiry.
Defaults to `7 seconds`.
See also: https://www.vaultproject.io/api-docs/auth/token#renew-a-token-self[Vault Documentation: Token Renewal]
| 44.094675 | 198 | 0.776838 |
563d2eb517e3efad6875ef7659055d699976e912 | 845 | adoc | AsciiDoc | _includes/tutorials/rekeying/ksql/markup/dev/print-input-topic.adoc | lct45/kafka-tutorials | 104d85036930ad527aeb371773fd85840029a84e | [
"Apache-2.0"
] | 213 | 2019-08-08T17:47:00.000Z | 2022-03-30T04:00:51.000Z | _includes/tutorials/rekeying/ksql/markup/dev/print-input-topic.adoc | lct45/kafka-tutorials | 104d85036930ad527aeb371773fd85840029a84e | [
"Apache-2.0"
] | 832 | 2019-08-07T20:57:20.000Z | 2022-03-31T23:00:50.000Z | _includes/tutorials/rekeying/ksql/markup/dev/print-input-topic.adoc | lct45/kafka-tutorials | 104d85036930ad527aeb371773fd85840029a84e | [
"Apache-2.0"
] | 82 | 2019-08-07T23:57:41.000Z | 2022-03-25T08:59:55.000Z | Now that you have a stream, let's examine what key the Kafka messages have using the `PRINT` command:
+++++
<pre class="snippet"><code class="sql">{% include_raw tutorials/rekeying/ksql/code/tutorial-steps/dev/print-input-topic.sql %}</code></pre>
+++++
This should yield roughly the following output. `PRINT` pulls from all partitions of a topic. The order will be different depending on how the records were actually inserted:
+++++
<pre class="snippet"><code class="shell">{% include_raw tutorials/rekeying/ksql/code/tutorial-steps/dev/expected-print-input.log %}</code></pre>
+++++
Note that the key is `null` for every message. This means that ratings data for the same movie could be spread across multiple partitions. This is generally not good for scalability when you care about having the same "kind" of data in a single partition. | 65 | 255 | 0.75503 |
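The natural fix, which this tutorial builds toward, is to repartition the stream by the movie identifier; a hedged sketch (stream and column names are assumptions, not taken from the tutorial's include files):

```sql
-- Re-key the stream so all ratings for a movie land in one partition.
CREATE STREAM ratings_rekeyed
  WITH (KAFKA_TOPIC='ratings_keyed_by_id') AS
  SELECT *
  FROM ratings
  PARTITION BY movie_id;
```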
bc337da7a37c8aedc43ac4d41b6f398ef66deee1 | 1,624 | adoc | AsciiDoc | modules/nbde-using-tang-servers-for-disk-encryption.adoc | georgettica/openshift-docs | 728a069f9c8ecd73701ac84175374e7e596b0ee4 | [
"Apache-2.0"
] | 625 | 2015-01-07T02:53:02.000Z | 2022-03-29T06:07:57.000Z | modules/nbde-using-tang-servers-for-disk-encryption.adoc | georgettica/openshift-docs | 728a069f9c8ecd73701ac84175374e7e596b0ee4 | [
"Apache-2.0"
] | 21,851 | 2015-01-05T15:17:19.000Z | 2022-03-31T22:14:25.000Z | modules/nbde-using-tang-servers-for-disk-encryption.adoc | georgettica/openshift-docs | 728a069f9c8ecd73701ac84175374e7e596b0ee4 | [
"Apache-2.0"
] | 1,681 | 2015-01-06T21:10:24.000Z | 2022-03-28T06:44:50.000Z | // Module included in the following assemblies:
//
// security/nbde-implementation-guide.adoc
[id="nbde-using-tang-servers-for-disk-encryption_{context}"]
= Tang server disk encryption
The following components and technologies implement Network-Bound Disk Encryption (NBDE).
image::179_OpenShift_NBDE_implementation_0821_3.png[Network-Bound Disk Encryption (NBDE), Clevis framework, Tang server]
_Tang_ is a server for binding data to network presence. It makes a node containing the data available when the node is bound to a certain secure network. Tang is stateless and does not require Transport Layer Security (TLS) or authentication. Unlike escrow-based solutions, where the key server stores all encryption keys and has knowledge of every encryption key, Tang never interacts with any node keys, so it never gains any identifying information from the node.
_Clevis_ is a pluggable framework for automated decryption that provides automated unlocking of Linux Unified Key Setup-on-disk-format (LUKS) volumes. The Clevis package runs on the node and provides the client side of the feature.
A _Clevis pin_ is a plug-in into the Clevis framework. There are three pin types:
TPM2:: Binds the disk encryption to the TPM2.
Tang:: Binds the disk encryption to a Tang server to enable NBDE.
Shamir’s secret sharing (sss):: Allows more complex combinations of other pins. It allows more nuanced policies such as the following:
* Must be able to reach one of these three Tang servers
* Must be able to reach three of these five Tang servers
* Must be able to reach the TPM2 AND at least one of these three Tang servers
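As an illustration of the last policy, a Clevis binding might nest an sss pin so that unlocking requires the TPM2 and at least one of the listed Tang servers; the device path and URLs are placeholders, and the exact pin JSON should be checked against the Clevis documentation:

```sh
# Threshold t=2 over two shares: the tpm2 pin plus a nested sss pin
# that itself needs only one (t=1) of the Tang servers to respond.
clevis luks bind -d /dev/sda2 sss '{
  "t": 2,
  "pins": {
    "tpm2": {},
    "sss": {
      "t": 1,
      "pins": {
        "tang": [
          {"url": "http://tang1.example.com"},
          {"url": "http://tang2.example.com"}
        ]
      }
    }
  }
}'
```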
| 64.96 | 467 | 0.801108 |
1eda11a91143a2ee07e9f61f16680dd70d3ec003 | 701 | adoc | AsciiDoc | src/TechnicalDocumentation/pages/conditionals/activateConditionals.adoc | MayurMehta-anthem/nimbus-docs | 4356ff44799f1eadb76b07b7504bfaa80ed9c034 | [
"Apache-2.0"
] | null | null | null | src/TechnicalDocumentation/pages/conditionals/activateConditionals.adoc | MayurMehta-anthem/nimbus-docs | 4356ff44799f1eadb76b07b7504bfaa80ed9c034 | [
"Apache-2.0"
] | null | null | null | src/TechnicalDocumentation/pages/conditionals/activateConditionals.adoc | MayurMehta-anthem/nimbus-docs | 4356ff44799f1eadb76b07b7504bfaa80ed9c034 | [
"Apache-2.0"
] | null | null | null | `@ActivateConditionals` is a collection of `@ActivateConditional` calls.
.ActivateConditionals Attributes
[cols="4,^3,^3,10",options="header"]
|=========================================================
|Name | Type |Default |Description
|value |ActivateConditional[] | |
|=========================================================
[source,java,indent=0]
[subs="verbatim,attributes"]
.ActivateConditionals.java
----
@ActivateConditionals({
@ActivateConditional(when="state == 'A'", targetPath="/../q3Level1"),
@ActivateConditional(when="state == 'B'", targetPath="/../q3Level2")
})
private String q3;
private SampleCoreNestedEntity q3Level1;
private SampleCoreNestedEntity q3Level2;
----
| 28.04 | 72 | 0.60913 |
fa0712dde411ccccc6eb164a167736b14e746292 | 16,735 | adoc | AsciiDoc | solr/solr-ref-guide/src/shard-management.adoc | soleuu/lucene-solr | 2c8cfa678b27bb68912d5333ec2f0cb5a2d23017 | [
"Apache-2.0"
] | 2 | 2021-11-04T13:46:56.000Z | 2022-03-29T14:56:24.000Z | solr/solr-ref-guide/src/shard-management.adoc | soleuu/lucene-solr | 2c8cfa678b27bb68912d5333ec2f0cb5a2d23017 | [
"Apache-2.0"
] | null | null | null | solr/solr-ref-guide/src/shard-management.adoc | soleuu/lucene-solr | 2c8cfa678b27bb68912d5333ec2f0cb5a2d23017 | [
"Apache-2.0"
] | 1 | 2022-03-29T14:56:28.000Z | 2022-03-29T14:56:28.000Z | = Shard Management Commands
:toclevels: 1
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
In SolrCloud, a shard is a logical partition of a collection. This partition stores part of the entire index for a collection.
The number of shards you have helps to determine how many documents a single collection can contain in total, and also impacts search performance.
[[splitshard]]
== SPLITSHARD: Split a Shard
`/admin/collections?action=SPLITSHARD&collection=_name_&shard=_shardID_`
Splitting a shard will take an existing shard and break it into two pieces which are written to disk as two (new) shards. The original shard will continue to contain the same data as-is but it will start re-routing requests to the new shards. The new shards will have as many replicas as the original shard. A soft commit is automatically issued after splitting a shard so that documents are made visible on sub-shards. An explicit commit (hard or soft) is not necessary after a split operation because the index is automatically persisted to disk during the split operation.
This command allows for seamless splitting and requires no downtime. A shard being split will continue to accept query and indexing requests and will automatically start routing requests to the new shards once this operation is complete. This command can only be used for SolrCloud collections created with `numShards` parameter, meaning collections which rely on Solr's hash-based routing mechanism.
The split is performed by dividing the original shard's hash range into two equal partitions and dividing up the documents in the original shard according to the new sub-ranges. Two parameters discussed below, `ranges` and `split.key` provide further control over how the split occurs.
The newly created shards will have as many replicas as the parent shard, of the same replica types.
When using `splitMethod=rewrite` (default) you must ensure that the node running the leader of the parent shard has enough free disk space i.e., more than twice the index size, for the split to succeed. The API uses the Autoscaling framework to find nodes that can satisfy the disk requirements for the new replicas but only when an Autoscaling policy is configured. Refer to <<solrcloud-autoscaling-policy-preferences.adoc#solrcloud-autoscaling-policy-preferences,Autoscaling Policy and Preferences>> section for more details.
Also, the first replicas of resulting sub-shards will always be placed on the shard leader node, which may cause Autoscaling policy violations that need to be resolved either automatically (when appropriate triggers are in use) or manually.
Shard splitting can be a long running process. In order to avoid timeouts, you should run this as an <<collections-api.adoc#asynchronous-calls,asynchronous call>>.
=== SPLITSHARD Parameters
`collection`::
The name of the collection that includes the shard to be split. This parameter is required.
`shard`::
The name of the shard to be split. This parameter is required when `split.key` is not specified.
`ranges`::
A comma-separated list of hash ranges in hexadecimal, such as `ranges=0-1f4,1f5-3e8,3e9-5dc`.
+
This parameter can be used to divide the original shard's hash range into arbitrary hash range intervals specified in hexadecimal. For example, if the original hash range is `0-1500` then adding the parameter: `ranges=0-1f4,1f5-3e8,3e9-5dc` will divide the original shard into three shards with hash range `0-500`, `501-1000`, and `1001-1500` respectively.
`split.key`::
The key to use for splitting the index.
+
This parameter can be used to split a shard using a route key such that all documents of the specified route key end up in a single dedicated sub-shard. Providing the `shard` parameter is not required in this case because the route key is enough to figure out the right shard. A route key which spans more than one shard is not supported.
+
For example, suppose `split.key=A!` hashes to the range `12-15` and belongs to shard 'shard1' with range `0-20`. Splitting by this route key would yield three sub-shards with ranges `0-11`, `12-15` and `16-20`. Note that the sub-shard with the hash range of the route key may also contain documents for other route keys whose hash ranges overlap.
`numSubShards`::
The number of sub-shards to split the parent shard into. Allowed values for this are in the range of `2`-`8` and defaults to `2`.
+
This parameter can only be used when `ranges` or `split.key` are not specified.
`splitMethod`::
Currently two methods of shard splitting are supported:
* `splitMethod=rewrite` (default) after selecting documents to retain in each partition this method creates sub-indexes from
scratch, which is a lengthy CPU- and I/O-intensive process but results in optimally-sized sub-indexes that don't contain
any data from documents not belonging to each partition.
* `splitMethod=link` uses file system-level hard links for creating copies of the original index files and then only modifies the
file that contains the list of deleted documents in each partition. This method is many times quicker and lighter on resources than the
`rewrite` method but the resulting sub-indexes are still as large as the original index because they still contain data from documents not
belonging to the partition. This slows down the replication process and consumes more disk space on replica nodes (the multiple hard-linked
copies don't occupy additional disk space on the leader node, unless hard-linking is not supported).
`splitFuzz`::
A float value (default is 0.0f, must be smaller than 0.5f) that allows to vary the sub-shard ranges
by this percentage of total shard range, odd shards being larger and even shards being smaller.
`property._name_=_value_`::
Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
`waitForFinalState`::
If `true`, the request will complete only when all affected replicas become active. The default is `false`, which means that the API will return the status of the single action, which may be before the new replica is online and active.
`timing`::
If `true` then each stage of processing will be timed and a `timing` section will be included in response.
`async`::
Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>
`splitByPrefix`::
If `true`, the split point will be selected by taking into account the distribution of compositeId values in the shard.
A compositeId has the form `<prefix>!<suffix>`, where all documents with the same prefix are colocated on in the hash space.
If there are multiple prefixes in the shard being split, then the split point will be selected to divide up the prefixes into as equal sized shards as possible without splitting any prefix.
If there is only a single prefix in a shard, the range of the prefix will be divided in half.
+
The id field is usually scanned to determine the number of documents with each prefix.
As an optimization, if an optional field called `id_prefix` exists and has the document prefix indexed (including the !) for each document,
then that will be used to generate the counts.
+
One simple way to populate `id_prefix` is a copyField in the schema:
[source,xml]
----
<!-- OPTIONAL, for optimization used by splitByPrefix if it exists -->
<field name="id_prefix" type="composite_id_prefix" indexed="true" stored="false"/>
<copyField source="id" dest="id_prefix"/>
<fieldtype name="composite_id_prefix" class="solr.TextField">
<analyzer>
<tokenizer class="solr.PatternTokenizerFactory" pattern=".*!" group="0"/>
</analyzer>
</fieldtype>
----
Current implementation details and limitations:
* Prefix size is calculated using number of documents with the prefix.
* Only two level compositeIds are supported.
* The shard can only be split into two.
=== SPLITSHARD Response
The output will include the status of the request and the new shard names, which will use the original shard as their basis, adding an underscore and a number. For example, "shard1" will become "shard1_0" and "shard1_1". If the status is anything other than "success", an error message will explain why the request failed.
=== Examples using SPLITSHARD
*Input*
Split shard1 of the "anotherCollection" collection.
[source,text]
----
http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&shard=shard1&wt=xml
----
*Output*
[source,xml]
----
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">6120</int>
</lst>
<lst name="success">
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">3673</int>
</lst>
<str name="core">anotherCollection_shard1_1_replica1</str>
</lst>
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">3681</int>
</lst>
<str name="core">anotherCollection_shard1_0_replica1</str>
</lst>
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">6008</int>
</lst>
</lst>
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">6007</int>
</lst>
</lst>
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">71</int>
</lst>
</lst>
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">0</int>
</lst>
<str name="core">anotherCollection_shard1_1_replica1</str>
<str name="status">EMPTY_BUFFER</str>
</lst>
<lst>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">0</int>
</lst>
<str name="core">anotherCollection_shard1_0_replica1</str>
<str name="status">EMPTY_BUFFER</str>
</lst>
</lst>
</response>
----
[[createshard]]
== CREATESHARD: Create a Shard
Shards can only created with this API for collections that use the 'implicit' router (i.e., when the collection was created, `router.name=implicit`). A new shard with a name can be created for an existing 'implicit' collection.
Use SPLITSHARD for collections created with the 'compositeId' router (`router.key=compositeId`).
`/admin/collections?action=CREATESHARD&shard=_shardName_&collection=_name_`
The default values for `replicationFactor` or `nrtReplicas`, `tlogReplicas`, `pullReplicas` from the collection is used to determine the number of replicas to be created for the new shard. This can be customized by explicitly passing the corresponding parameters to the request.
The API uses the Autoscaling framework to find the best possible nodes in the cluster when an Autoscaling preferences or policy is configured. Refer to <<solrcloud-autoscaling-policy-preferences.adoc#solrcloud-autoscaling-policy-preferences,Autoscaling Policy and Preferences>> section for more details.
=== CREATESHARD Parameters
`collection`::
The name of the collection that includes the shard to be split. This parameter is required.
`shard`::
The name of the shard to be created. This parameter is required.
`createNodeSet`::
Allows defining the nodes to spread the new collection across. If not provided, the CREATESHARD operation will create shard-replica spread across all live Solr nodes.
+
The format is a comma-separated list of node_names, such as `localhost:8983_solr,localhost:8984_solr,localhost:8985_solr`.
`nrtReplicas`::
The number of `nrt` replicas that should be created for the new shard (optional, the defaults for the collection is used if omitted)
`tlogReplicas`::
The number of `tlog` replicas that should be created for the new shard (optional, the defaults for the collection is used if omitted)
`pullReplicas`::
The number of `pull` replicas that should be created for the new shard (optional, the defaults for the collection is used if omitted)
`property._name_=_value_`::
Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
`waitForFinalState`::
If `true`, the request will complete only when all affected replicas become active. The default is `false`, which means that the API will return the status of the single action, which may be before the new replica is online and active.
`async`::
Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
=== CREATESHARD Response
The output will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.
=== Examples using CREATESHARD
*Input*
Create 'shard-z' for the "anImplicitCollection" collection.
[source,text]
----
http://localhost:8983/solr/admin/collections?action=CREATESHARD&collection=anImplicitCollection&shard=shard-z&wt=xml
----
*Output*
[source,xml]
----
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">558</int>
</lst>
</response>
----
[[deleteshard]]
== DELETESHARD: Delete a Shard
Deleting a shard will unload all replicas of the shard, remove them from `clusterstate.json`, and (by default) delete the instanceDir and dataDir for each replica. It will only remove shards that are inactive, or which have no range given for custom sharding.
`/admin/collections?action=DELETESHARD&shard=_shardID_&collection=_name_`
=== DELETESHARD Parameters
`collection`::
The name of the collection that includes the shard to be deleted. This parameter is required.
`shard`::
The name of the shard to be deleted. This parameter is required.
`deleteInstanceDir`::
By default Solr will delete the entire instanceDir of each replica that is deleted. Set this to `false` to prevent the instance directory from being deleted.
`deleteDataDir`::
By default Solr will delete the dataDir of each replica that is deleted. Set this to `false` to prevent the data directory from being deleted.
`deleteIndex`::
By default Solr will delete the index of each replica that is deleted. Set this to `false` to prevent the index directory from being deleted.
`async`::
Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
=== DELETESHARD Response
The output will include the status of the request. If the status is anything other than "success", an error message will explain why the request failed.
=== Examples using DELETESHARD
*Input*
Delete 'shard1' of the "anotherCollection" collection.
[source,text]
----
http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=anotherCollection&shard=shard1&wt=xml
----
*Output*
[source,xml]
----
<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">558</int>
</lst>
<lst name="success">
<lst name="10.0.1.4:8983_solr">
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">27</int>
</lst>
</lst>
</lst>
</response>
----
[[forceleader]]
== FORCELEADER: Force Shard Leader
In the unlikely event of a shard losing its leader, this command can be invoked to force the election of a new leader.
`/admin/collections?action=FORCELEADER&collection=<collectionName>&shard=<shardName>`
=== FORCELEADER Parameters
`collection`::
The name of the collection. This parameter is required.
`shard`::
The name of the shard where leader election should occur. This parameter is required.
WARNING: This is an expert level command, and should be invoked only when regular leader election is not working. This may potentially lead to loss of data in the event that the new leader doesn't have certain updates, possibly recent ones, which were acknowledged by the old leader before going down.
| 47.814286 | 575 | 0.756259 |
cea6e13320989f7ab9828ec83003f654dbb654cf | 1,881 | adoc | AsciiDoc | src/docs/03_system_scope_and_context.adoc | Arquisoft/viade_es2a | 2337f53bdf6b65aadfd697ae948d600b96bd8279 | [
"BSD-2-Clause"
] | 5 | 2020-02-03T12:23:52.000Z | 2020-05-14T08:12:22.000Z | src/docs/03_system_scope_and_context.adoc | Arquisoft/viade_es2a | 2337f53bdf6b65aadfd697ae948d600b96bd8279 | [
"BSD-2-Clause"
] | 113 | 2020-02-03T13:46:05.000Z | 2021-03-12T12:12:44.000Z | src/docs/03_system_scope_and_context.adoc | Arquisoft/viade_es2a | 2337f53bdf6b65aadfd697ae948d600b96bd8279 | [
"BSD-2-Clause"
] | 10 | 2020-02-22T23:01:15.000Z | 2021-02-08T13:32:54.000Z | [[section-system-scope-and-context]]
== System Scope and Context
=== Business Context
image::bussiness.png[Diagram]
[cols="1,2" options="header"]
|===
| **Entity** | **Context**
| User | A user can create, save, and manage his own routes, he is also able to see the shared ones of another users he follows
| Friend | It's a user who follows another one (solid friend relationship), being able to share their routes with them
| Group | Composed by a set of users
| Route | It's the main element of the business, it is a set of geographical points and waypoints (which can be named), which represent a real life route or travel
| Waypoint |They're part of a route, optionally added, and used to highlight important places, they resemble a common route point, with name and description
|===
=== Technical Context
image::TechContDiagram.png[Diagram]
The system is based on the SOLID architecture created by Tim Berners-Lee, which focus on the decentrilzation of the web. Each user of the application
will have a SOLID POD linked to his account. In SOLID, a POD is the main unit of storage, it can be thought as a private webpage for each user. In your
POD you can store all your information and also choose whether you want each piece of data to be public or private, and you can even choose who you want
to share that information with.
So, as explained above, each user of the application has a POD of his own. The application has to be connected to the Internet in order to communicate with
that POD, as it is sort of an online server containing your data. Once you are online, you can start uploading routes to your profile (The POD) and then
choose which of your friends can view which route.
The web application itself will be written in Javascript using React, an open-source library that makes easier and more complete the process of setting up
an user interface. | 57 | 163 | 0.767677 |
24aefb91cb33c878f37cd7b243183850ba699b1b | 20 | adoc | AsciiDoc | _posts/2017-04-13-aaa.adoc | s-f-ek971/s-f-ek971.github.io | 5c1ea90c872201ded86e3716f0d49dcf77172e40 | [
"MIT"
] | null | null | null | _posts/2017-04-13-aaa.adoc | s-f-ek971/s-f-ek971.github.io | 5c1ea90c872201ded86e3716f0d49dcf77172e40 | [
"MIT"
] | null | null | null | _posts/2017-04-13-aaa.adoc | s-f-ek971/s-f-ek971.github.io | 5c1ea90c872201ded86e3716f0d49dcf77172e40 | [
"MIT"
] | null | null | null | # aaa
aaa
bbb
ccc | 2.857143 | 5 | 0.6 |
4051f3899287b43229a928f2a7dde7a785524fea | 595 | adoc | AsciiDoc | modules/deployments-starting-deployment.adoc | Pseudopooja/openshift-docs | 8a97ff222f827b8b023c89105833af10cdd6046a | [
"Apache-2.0"
] | 1 | 2020-03-12T21:22:34.000Z | 2020-03-12T21:22:34.000Z | modules/deployments-starting-deployment.adoc | Pseudopooja/openshift-docs | 8a97ff222f827b8b023c89105833af10cdd6046a | [
"Apache-2.0"
] | null | null | null | modules/deployments-starting-deployment.adoc | Pseudopooja/openshift-docs | 8a97ff222f827b8b023c89105833af10cdd6046a | [
"Apache-2.0"
] | 1 | 2021-07-12T08:48:49.000Z | 2021-07-12T08:48:49.000Z | // Module included in the following assemblies:
//
// * applications/deployments/managing-deployment-processes.adoc
[id="deployments-starting-a-deployment_{context}"]
= Starting a deployment
You can start a _rollout_ to begin the deployment process of your application.
.Procedure
. To start a new deployment process from an existing DeploymentConfig, run the
following command:
+
[source,terminal]
----
$ oc rollout latest dc/<name>
----
+
[NOTE]
====
If a deployment process is already in progress, the command displays a
message and a new ReplicationController will not be deployed.
====
| 23.8 | 78 | 0.763025 |
b96017bec458d6fbb6376ebf5c82f073bf982a2f | 3,118 | adoc | AsciiDoc | storage/persistent_storage/persistent-storage-aws.adoc | alebedev87/openshift-docs | b7ed96ce84670e2b286f51b4303c144a01764e2b | [
"Apache-2.0"
] | 625 | 2015-01-07T02:53:02.000Z | 2022-03-29T06:07:57.000Z | storage/persistent_storage/persistent-storage-aws.adoc | alebedev87/openshift-docs | b7ed96ce84670e2b286f51b4303c144a01764e2b | [
"Apache-2.0"
] | 21,851 | 2015-01-05T15:17:19.000Z | 2022-03-31T22:14:25.000Z | storage/persistent_storage/persistent-storage-aws.adoc | alebedev87/openshift-docs | b7ed96ce84670e2b286f51b4303c144a01764e2b | [
"Apache-2.0"
] | 1,681 | 2015-01-06T21:10:24.000Z | 2022-03-28T06:44:50.000Z | [id="persistent-storage-aws"]
= Persistent storage using AWS Elastic Block Store
include::modules/common-attributes.adoc[]
:context: persistent-storage-aws
toc::[]
{product-title} supports AWS Elastic Block Store volumes (EBS). You can
provision your {product-title} cluster with persistent storage by using link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html[Amazon EC2].
Some familiarity with Kubernetes and AWS is assumed.
The Kubernetes persistent volume framework allows administrators to provision a
cluster with persistent storage and gives users a way to request those
resources without having any knowledge of the underlying infrastructure.
AWS Elastic Block Store volumes can be provisioned dynamically.
Persistent volumes are not bound to a single project or namespace; they can be
shared across the {product-title} cluster.
Persistent volume claims are specific to a project or namespace and can be
requested by users.
[IMPORTANT]
====
{product-title} defaults to using an in-tree (non-CSI) plug-in to provision AWS EBS storage.
In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
====
[IMPORTANT]
====
High-availability of storage in the infrastructure is left to the underlying
storage provider.
====
For {product-title}, automatic migration from AWS EBS in-tree to the Container Storage Interface (CSI) driver is available as a Technology Preview (TP) feature.
With migration enabled, volumes provisioned using the existing in-tree driver are automatically migrated to use the AWS EBS CSI driver For more information, see xref:../container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration feature].
// Defining attributes required by the next module
:StorageClass: EBS
:Provisioner: kubernetes.io/aws-ebs
:CsiDriver: ebs.csi.aws.com
include::modules/storage-create-storage-class.adoc[leveloffset=+1]
include::modules/storage-persistent-storage-creating-volume-claim.adoc[leveloffset=+1]
:provider: AWS
include::modules/storage-persistent-storage-volume-format.adoc[leveloffset=+1]
include::modules/storage-persistent-storage-aws-maximum-volumes.adoc[leveloffset=+1]
[id="additional-resources_persistent-storage-aws"]
== Additional resources
* See xref:../../storage/container_storage_interface/persistent-storage-csi-ebs.adoc#persistent-storage-csi-ebs[AWS Elastic Block Store CSI Driver Operator] for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins.
| 53.758621 | 514 | 0.809814 |
cb1fcd649e71ef035f04cbadb890decc4727690c | 2,736 | adoc | AsciiDoc | doc/optimization.adoc | tani/generic-cl | dd35115707b7d8b643219600a78f29c8ba165827 | [
"MIT"
] | 99 | 2018-12-19T17:46:37.000Z | 2022-03-29T16:30:04.000Z | doc/optimization.adoc | tani/generic-cl | dd35115707b7d8b643219600a78f29c8ba165827 | [
"MIT"
] | 8 | 2019-03-16T18:49:41.000Z | 2020-11-23T08:52:16.000Z | doc/optimization.adoc | tani/generic-cl | dd35115707b7d8b643219600a78f29c8ba165827 | [
"MIT"
] | 7 | 2019-02-08T16:37:44.000Z | 2022-01-03T02:58:51.000Z | [[gf-optimization]]
== Optimization ==
There is an overhead associated with generic functions. Code making
use of the generic function interface will be slower than code which
calls the `CL` functions directly, due to the cost of dynamic method
dispatch. For most cases this will not result in a noticeable decrease
in performance, however for those cases where it does there is an
optimization.
This library is built on top of
https://github.com/alex-gutev/static-dispatch[STATIC-DISPATCH], which
is a library that allows generic-function dispatch to be performed
statically, at compile-time, rather than dynamically, at runtime. The
library allows a call to a generic function to be replaced with the
body of the appropriate method, or a call to an ordinary function
implementing the method, which is selected based on the type
declarations of its arguments.
For a generic function call to be inlined an `OPTIMIZE` declaration
with a `SPEED` level of `3` and with `SAFETY` and `DEBUG` levels less
than `3` has to be in place, in the environment of the
call. Additionally, it must also be possible to determine the types of
the arguments at compile-time. This means in general that the types of
variables and the return types of functions have to be declared. _See
https://github.com/alex-gutev/static-dispatch[STATIC-DISPATCH] and
https://github.com/alex-gutev/cl-form-types[CL-FORM-TYPES] for
information on how the types of the arguments are determined and how
to make the type information available_.
.Example
[source,lisp]
----
(let ((x 1))
(declare (optimize (speed 3) (safety 2))
(type number x))
(equalp x (the number (+ 3 4))))
----
This will result in the call to the `EQUALP` function being replaced
with the body of the `NUMBER NUMBER` method.
The n-argument equality, comparison and arithmetic functions also have
associated compiler-macros which replace the calls to the n-argument
functions with multiple inline calls to the binary functions, e.g. `(=
1 2 3)` is replaced with `(and (equalp 1 2) (equalp 1 3))`.
Thus the following should also result in the `EQUALP` function calls
being statically dispatched:
[source,lisp]
----
(let ((x 1))
(declare (optimize (speed 3) (safety 2))
(type number x))
(= x (the number (+ 3 4))))
----
IMPORTANT: STATIC-DISPATCH requires the ability to extract `TYPE` and
`INLINE` declarations from implementation specific environment
objects. This is provided by the
https://alex-gutev.github.io/cl-environments/[CL-ENVIRONMENTS]
library, which works in the general case however some
https://alex-gutev.github.io/cl-environments/#ensuring_code_walking[workarounds]
are necessary in order for it to work on all implementations in all
cases.
| 39.652174 | 81 | 0.762061 |
e5cad1da0fb746d49da089d2797ee940cdd9b775 | 183 | adoc | AsciiDoc | core/requirements/json/REQ_definition.adoc | pvretano/wps-rest-binding | a4e69435ccfb00a6e959eee3afabfc9331f5f3d2 | [
"OML"
] | 12 | 2021-01-29T16:24:43.000Z | 2022-02-23T14:15:35.000Z | core/requirements/json/REQ_definition.adoc | pvretano/wps-rest-binding | a4e69435ccfb00a6e959eee3afabfc9331f5f3d2 | [
"OML"
] | 134 | 2020-12-01T07:34:10.000Z | 2022-03-15T19:34:08.000Z | core/requirements/json/REQ_definition.adoc | pvretano/wps-rest-binding | a4e69435ccfb00a6e959eee3afabfc9331f5f3d2 | [
"OML"
] | 11 | 2020-12-15T17:35:52.000Z | 2021-11-12T13:50:23.000Z | [[req_json_definition]]
[requirement]
====
[%metadata]
label:: /req/json/definition
`200`-responses of the server SHALL support the following media type:
* `application/json`
====
| 15.25 | 69 | 0.715847 |
dfa01327f713998bddd58302ee8d3aaa186599cc | 5,043 | adoc | AsciiDoc | 3-Basic_Synchronization_Patterns/3.6-Barrier/readme.adoc | soniakeys/LittleBookOfSemaphores | a7cb094bccae34924b6bf5f9313775a1f682255b | [
"Unlicense"
] | 14 | 2018-02-07T06:34:30.000Z | 2021-11-02T17:52:13.000Z | 3-Basic_Synchronization_Patterns/3.6-Barrier/readme.adoc | Yiqing-Y/LittleBookOfSemaphores | a7cb094bccae34924b6bf5f9313775a1f682255b | [
"Unlicense"
] | 1 | 2018-01-30T19:10:07.000Z | 2018-01-30T19:10:07.000Z | 3-Basic_Synchronization_Patterns/3.6-Barrier/readme.adoc | Yiqing-Y/LittleBookOfSemaphores | a7cb094bccae34924b6bf5f9313775a1f682255b | [
"Unlicense"
] | 2 | 2019-10-26T11:53:28.000Z | 2019-11-21T20:47:20.000Z | # 3.6 Barrier
Quoting from the book, "Every thread should run the following code:"
.Barrier code
----
rendezvous
critical point
----
It's a little unclear what these two lines are supposed to be. Are they
referencing some specific already-introduced code implementing rendezvous and
critical point? Are they function names for functions that might do some
arbitrary application-specific task? Are they just general concepts?
I'm going to implement them as print statements, just indicating that some
code is being executed.
## book.go
This is the literal interpretation.
Output from an example run:
----
$ go run book.go
2018/01/27 13:23:43 gr 0 rendezvous
2018/01/27 13:23:43 gr 3 rendezvous
2018/01/27 13:23:43 gr 2 rendezvous
2018/01/27 13:23:43 gr 1 rendezvous
2018/01/27 13:23:43 gr 4 rendezvous
2018/01/27 13:23:43 gr 0 critical point
2018/01/27 13:23:43 gr 2 critical point
2018/01/27 13:23:43 gr 3 critical point
2018/01/27 13:23:43 gr 1 critical point
2018/01/27 13:23:43 gr 4 critical point
----
All goroutines execute their "rendezvous" code, wait for each other, then
execute their "critical point" code.
This use of the words rendezvous and critical point still seems confusing. How
about "before barrier" and "after barrier" instead?
Much more disturbing though is the note in the text,
____
It might seem dangerous to read the value of count outside the mutex. In
this case it is not a problem, but in general it is probably not a good idea.
____
It does seem dangerous. What does the Go race detector think?
----
$ go run -race book.go
2018/01/30 14:59:15 gr 0 rendezvous
2018/01/30 14:59:15 gr 1 rendezvous
==================
2018/01/30 14:59:15 gr 2 rendezvous
WARNING: DATA RACE
Write at 0x0000005d1140 by goroutine 7:
main.gr()
/home/sonia/go/src/github.com/soniakeys/LittleBookOfSemaphores/3-Basic_Synchronization_Patterns/3.6-Barrier/book.go:22 +0x166
Previous read at 0x0000005d1140 by goroutine 6:
main.gr()
/home/sonia/go/src/github.com/soniakeys/LittleBookOfSemaphores/3-Basic_Synchronization_Patterns/3.6-Barrier/book.go:24 +0x1c4
Goroutine 7 (running) created at:
2018/01/30 14:59:15 gr 3 rendezvous
main.main()
/home/sonia/go/src/github.com/soniakeys/LittleBookOfSemaphores/3-Basic_Synchronization_Patterns/3.6-Barrier/book.go:36 +0xa2
Goroutine 6 (running) created at:
main.main()
/home/sonia/go/src/github.com/soniakeys/LittleBookOfSemaphores/3-Basic_Synchronization_Patterns/3.6-Barrier/book.go:36 +0xa2
==================
2018/01/30 14:59:15 gr 4 rendezvous
2018/01/30 14:59:15 gr 0 critical point
2018/01/30 14:59:15 gr 1 critical point
2018/01/30 14:59:15 gr 2 critical point
2018/01/30 14:59:15 gr 3 critical point
2018/01/30 14:59:15 gr 4 critical point
Found 1 data race(s)
exit status 66
----
Why read the value of count outside the mutex? I don't know. I tried putting
it inside the mutex and the program seems to work okay and keep the race
detector happy.
## atomic.go
Here's a variant of mine still using the barrier semaphore but using Go's
sync/atomic for the count. The mutex-protected count didn't seem part of the
problem, and atomic also nicely avoids the sketchy, racy code where count
is tested outside of the mutex.
## close.go
A version without semaphores. It's a significant departure from the book
solution, but one I think still captures the idea of a barrier. Instead of
maintaining a count and relying on one of the goroutines to recognize that it
is last, it uses a WaitGroup to let a supervisory goroutine (main) wait for all
of the worker goroutines to finish their before-barrier work.
Main then uses a Go idiom for broadcasting a signal -- it closes a channel.
Closing a channel in Go causes it to thereafter send a zero value of the
channel type in response to any number of receive requests. Main creates
this channel and leaves it empty. Then at the "barrier", workers all block
trying to receive from the empty channel. Main waits for the WaitGroup
signalling all workers are at the barrier, then closes the channel.
All workers then succeed at getting a zero value from the channel and
proceed to their after-barrier work.
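The structure of close.go can be sketched roughly like this (the function and
variable names here are illustrative, not the actual file contents; a second
WaitGroup is used so the sketch can report that everyone got through):

```go
package main

import (
	"log"
	"sync"
	"sync/atomic"
)

// runBarrier releases n workers through a close-based barrier and
// returns how many of them executed their after-barrier section.
func runBarrier(n int) int {
	var arrived, finished sync.WaitGroup
	arrived.Add(n)
	finished.Add(n)
	barrier := make(chan struct{}) // stays empty; closing it is the broadcast
	var after int64

	for i := 0; i < n; i++ {
		go func(id int) {
			defer finished.Done()
			log.Printf("gr %d before barrier", id)
			arrived.Done() // report arrival at the barrier
			<-barrier      // block until main closes the channel
			log.Printf("gr %d after barrier", id)
			atomic.AddInt64(&after, 1)
		}(i)
	}

	arrived.Wait() // all workers are at the barrier
	close(barrier) // release them all at once
	finished.Wait()
	return int(atomic.LoadInt64(&after))
}

func main() {
	log.Printf("%d workers passed the barrier", runBarrier(5))
}
```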
Note that a couple of contrivances of the book solution are not needed here.
There is no explicit count, just that maintained by the WaitGroup. One
worker does not have to take up a supervisory role. And there is no
"turnstile" needed. The channel close idiom effectively releases all workers
at once.
----
$ go run -race close.go
2018/01/27 14:00:06 gr 0 rendezvous (before barrier)
2018/01/27 14:00:06 gr 3 rendezvous (before barrier)
2018/01/27 14:00:06 gr 1 rendezvous (before barrier)
2018/01/27 14:00:06 gr 2 rendezvous (before barrier)
2018/01/27 14:00:06 gr 4 rendezvous (before barrier)
2018/01/27 14:00:06 gr 1 critical point (after barrier)
2018/01/27 14:00:06 gr 4 critical point (after barrier)
2018/01/27 14:00:06 gr 0 critical point (after barrier)
2018/01/27 14:00:06 gr 2 critical point (after barrier)
2018/01/27 14:00:06 gr 3 critical point (after barrier)
----
= Staking and neuron management
This document specifies extensions of the Rosetta API enabling staking funds and managing governance "neurons" on the Internet Computer.
NOTE: Operations within a transaction are applied in order, so the order of operations is significant.
Transactions that contain idempotent operations provided by this API can be re-tried within the 24-hour window.
NOTE: Due to limitations of the governance canister smart contract, neuron management operations are not reflected on the chain.
If you look up transactions by the identifier returned from the `/construction/submit` endpoint, these transactions might not exist or might be missing neuron management operations.
Instead, `/construction/submit` returns the statuses of all the operations in the `metadata` field using the same format as `/block/transaction` would return.
== Deriving neuron address
[cols="1,1"]
|===
| Since version
| 1.3.0
|===
Call the `/construction/derive` endpoint with metadata field `account_type` set to `"neuron"` to compute the ledger address corresponding to the neuron controlled by the public key.
=== Request
[source,json]
----
{
"network_identifier": {
"blockchain": "Internet Computer",
"network": "00000000000000020101"
},
"public_key": {
"hex_bytes": "1b400d60aaf34eaf6dcbab9bba46001a23497886cf11066f7846933d30e5ad3f",
"curve_type": "edwards25519"
},
"metadata": {
"account_type": "neuron",
"neuron_index": 0
}
}
----
NOTE: Since version 1.3.0, you can control many neurons using the same key.
You can differentiate between neurons by specifying different values of the `neuron_index` metadata field.
The `neuron_index` field is supported by all neuron management operations and is equal to zero if not specified.
=== Response
[source,json]
----
{
"account_identifier": {
"address": "531b163cd9d6c1d88f867bdf16f1ede020be7bcd928d746f92fbf7e797c5526a"
}
}
----
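Any HTTP client can issue this call. A minimal Python sketch follows; the node URL is an assumption (point it at your own Rosetta node), and the payload simply mirrors the request shown above:

```python
"""Sketch of a client for the /construction/derive neuron call."""

import json
from urllib import request

ROSETTA_URL = "http://localhost:8081"  # assumption: your rosetta node's address


def neuron_derive_payload(public_key_hex: str, neuron_index: int = 0) -> dict:
    """Build the /construction/derive request body for a neuron account."""
    return {
        "network_identifier": {
            "blockchain": "Internet Computer",
            "network": "00000000000000020101",
        },
        "public_key": {
            "hex_bytes": public_key_hex,
            "curve_type": "edwards25519",
        },
        "metadata": {"account_type": "neuron", "neuron_index": neuron_index},
    }


def derive_neuron_address(public_key_hex: str, neuron_index: int = 0) -> str:
    """POST the payload and return the derived neuron ledger address."""
    body = json.dumps(neuron_derive_payload(public_key_hex, neuron_index)).encode()
    req = request.Request(
        ROSETTA_URL + "/construction/derive",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["account_identifier"]["address"]
```

Passing a different `neuron_index` for the same key yields a different neuron address, as described above.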
== Stake funds
[cols="1,1"]
|===
| Since version
| 1.0.5
| Idempotent?
| yes
|===
To stake funds, execute a transfer to the neuron address followed by a `STAKE` operation.
The only field that you must set for the `STAKE` operation is `account`, which should be equal to the ledger account of the neuron controller.
You can specify `neuron_index` field in the `metadata` field of the `STAKE` operation.
If you do specify the `neuron_index`, its value must be the same as you used to derive the neuron account identifier.
=== Request
[source,json]
----
{
"network_identifier": {
"blockchain": "Internet Computer",
    "network": "00000000000000020101"
},
"operations": [
{
"operation_identifier": { "index": 0 },
"type": "TRANSACTION",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"amount": {
"value": "-100000000",
"currency": { "symbol": "ICP", "decimals": 8 }
}
},
{
"operation_identifier": { "index": 1 },
"type": "TRANSACTION",
"account": { "address": "531b163cd9d6c1d88f867bdf16f1ede020be7bcd928d746f92fbf7e797c5526a" },
"amount": {
"value": "100000000",
"currency": { "symbol": "ICP", "decimals": 8 }
}
},
{
"operation_identifier": { "index": 2 },
"type": "FEE",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"amount": {
"value": "-10000",
"currency": { "symbol": "ICP", "decimals": 8 }
}
},
{
"operation_identifier": { "index": 3 },
"type": "STAKE",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"metadata": {
"neuron_index": 0
}
}
]
}
----
=== Response
[source,json]
----
{
"transaction_identifier": {
"hash": "2f23fd8cca835af21f3ac375bac601f97ead75f2e79143bdf71fe2c4be043e8f"
},
"metadata": {
"operations": [
{
"operation_identifier": { "index": 0 },
"type": "TRANSACTION",
"status": "COMPLETED",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"amount": {
"value": "-100000000",
"currency": { "symbol": "ICP", "decimals": 8 }
}
},
{
"operation_identifier": { "index": 1 },
"type": "TRANSACTION",
"status": "COMPLETED",
"account": { "address": "531b163cd9d6c1d88f867bdf16f1ede020be7bcd928d746f92fbf7e797c5526a" },
"amount": {
"value": "100000000",
"currency": { "symbol": "ICP", "decimals": 8 }
}
},
{
"operation_identifier": { "index": 2 },
"type": "FEE",
"status": "COMPLETED",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"amount": {
"value": "-10000",
"currency": { "symbol": "ICP", "decimals": 8 }
}
},
{
"operation_identifier": { "index": 3 },
"type": "STAKE",
"status": "COMPLETED",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"metadata": {
"neuron_index": 0
}
}
]
}
}
----
== Managing neurons
=== Setting dissolve timestamp
[cols="1,1"]
|===
| Since version
| 1.1.0
| Idempotent?
| yes
| Minimal access level
| controller
|===
This operation updates the time when the neuron can reach the `DISSOLVED` state.
Dissolve timestamp always increases monotonically.
* If the neuron is in the `DISSOLVING` state, this operation can move the dissolve timestamp further into the future.
* If the neuron is in the `NOT_DISSOLVING` state, invoking `SET_DISSOLVE_TIMESTAMP` with time T will attempt to increase the neuron's dissolve delay (the minimal time it will take to dissolve the neuron) to `T - current_time`.
* If the neuron is in the `DISSOLVED` state, invoking `SET_DISSOLVE_TIMESTAMP` will move it to the `NOT_DISSOLVING` state and will set the dissolve delay accordingly.
.Preconditions
* `account.address` is the ledger address of the neuron controller.
.Example
[source,json]
----
{
"operation_identifier": { "index": 4 },
"type": "SET_DISSOLVE_TIMESTAMP",
"account": {
"address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d"
},
"metadata": {
"neuron_index": 0,
"dissolve_time_utc_seconds": 1879939507
}
}
----
=== Start dissolving
[cols="1,1"]
|===
| Since version
| 1.1.0
| Idempotent?
| yes
| Minimal access level
| controller
|===
The `START_DISSOLVING` operation changes the state of the neuron to `DISSOLVING`.
.Preconditions
* `account.address` is the ledger address of the neuron contoller.
.Postconditions
* The neuron is in the `DISSOLVING` state.
.Example
[source,json]
----
{
"operation_identifier": { "index": 5 },
"type": "START_DISSOLVING",
"account": {
"address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d"
},
"metadata": {
"neuron_index": 0
}
}
----
=== Stop dissolving
[cols="1,1"]
|===
| Since version
| 1.1.0
| Idempotent?
| yes
| Minimal access level
| controller
|===
The `STOP_DISSOLVING` operation changes the state of the neuron to `NOT_DISSOLVING`.
.Preconditions
* `account.address` is the ledger address of the neuron controller.
.Postconditions
* The neuron is in `NOT_DISSOLVING` state.
.Example
[source,json]
----
{
"operation_identifier": { "index": 6 },
"type": "STOP_DISSOLVING",
"account": {
"address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d"
},
"metadata": {
"neuron_index": 0
}
}
----
=== Adding hotkeys
[cols="1,1"]
|===
| Since version
| 1.2.0
| Idempotent?
| yes
| Minimal access level
| controller
|===
The `ADD_HOTKEY` operation adds a hotkey to the neuron.
The Governance canister smart contract allows some non-critical operations to be signed with a hotkey instead of the controller's key (e.g., voting and querying maturity).
.Preconditions
* `account.address` is a ledger address of a neuron controller.
* The neuron has less than 10 hotkeys.
The command has two forms: one form accepts an https://smartcontracts.org/docs/interface-spec/index.html#principal[IC principal] as a hotkey, another form accepts a https://www.rosetta-api.org/docs/models/PublicKey.html[public key].
==== Add a principal as a hotkey
[source,json]
----
{
"operation_identifier": { "index": 0 },
"type": "ADD_HOTKEY",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"metadata": {
"neuron_index": 0,
"principal": "sp3em-jkiyw-tospm-2huim-jor4p-et4s7-ay35f-q7tnm-hi4k2-pyicb-xae"
}
}
----
==== Add a public key as a hotkey
[source,json]
----
{
"operation_identifier": { "index": 0 },
"type": "ADD_HOTKEY",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"metadata": {
"neuron_index": 0,
"public_key": {
"hex_bytes": "1b400d60aaf34eaf6dcbab9bba46001a23497886cf11066f7846933d30e5ad3f",
"curve_type": "edwards25519"
}
}
}
----
=== Spawn neurons
[cols="1,1"]
|===
| Since version
| 1.3.0
| Idempotent?
| yes
| Minimal access level
| controller
|===
The `SPAWN` operation creates a new neuron from an existing neuron with enough maturity.
This operation transfers all the maturity from the existing neuron to the staked amount of the newly spawned neuron.
.Preconditions
* `account.address` is a ledger address of a neuron controller.
* The parent neuron has at least 1 ICP worth of maturity.
.Postconditions
* Parent neuron maturity is set to `0`.
* A new neuron is spawned with a balance equal to the transferred maturity.
[source,json]
----
{
"operation_identifier": { "index": 0 },
"type": "SPAWN",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"metadata": {
"neuron_index": 0,
"controller": "sp3em-jkiyw-tospm-2huim-jor4p-et4s7-ay35f-q7tnm-hi4k2-pyicb-xae",
"spawned_neuron_index": 1
}
}
----
[NOTE]
====
* `controller` metadata field is optional and equal to the existing neuron controller by default.
* `spawned_neuron_index` metadata field is required.
The rosetta node uses this index to compute the subaccount for the spawned neuron.
All spawned neurons must have different values of `spawned_neuron_index`.
====
=== Merge neuron maturity
[cols="1,1"]
|===
| Since version
| 1.4.0
| Idempotent?
| no
| Minimal access level
| controller
|===
The `MERGE_MATURITY` operation merges the existing maturity of the neuron into its stake.
The percentage of maturity to merge can be specified, otherwise the entire maturity is merged.
.Preconditions
* `account.address` is the ledger address of the neuron controller.
* The neuron has non-zero maturity to merge.
.Postconditions
* Maturity decreased by the amount merged.
* Neuron stake increased by the amount merged.
.Example
[source,json]
----
{
"operation_identifier": { "index": 0 },
"type": "MERGE_MATURITY",
"account": { "address": "907ff6c714a545110b42982b72aa39c5b7742d610e234a9d40bf8cf624e7a70d" },
"metadata": {
"neuron_index": 0,
"percentage_to_merge": 14
}
}
----
NOTE: `percentage_to_merge` metadata field is optional and equal to 100 by default.
If specified, the value must be an integer between 1 and 100 (bounds included).
== Accessing neuron attributes
[cols="1,1"]
|===
| Since version
| 1.3.0
| Minimal access level
| public
|===
Call the `/account/balance` endpoint to access the staked amount and publicly available neuron metadata.
.Preconditions
* `public_key` contains the public key of a neuron's controller.
[NOTE]
====
* This operation is available only in online mode.
* The request should not specify any block identifier because the endpoint always returns the latest state of the neuron.
====
=== Request
[source,json]
----
{
"network_identifier": {
"blockchain": "Internet Computer",
"network": "00000000000000020101"
},
"account_identifier": {
"address": "a4ac33c6a25a102756e3aac64fe9d3267dbef25392d031cfb3d2185dba93b4c4"
},
"metadata": {
"account_type": "neuron",
"neuron_index": 0,
"public_key": {
"hex_bytes": "ba5242d02642aede88a5f9fe82482a9fd0b6dc25f38c729253116c6865384a9d",
"curve_type": "edwards25519"
}
}
}
----
=== Response
[source,json]
----
{
"block_identifier": {
"index": 1150,
"hash": "ca02e34bafa2f58b18a66073deb5f389271ee74bd59a024f9f7b176a890039b2"
},
"balances": [
{
"value": "100000000",
"currency": {
"symbol": "ICP",
"decimals": 8
}
}
],
"metadata": {
"verified_query": false,
"retrieved_at_timestamp_seconds": 1639670156,
"state": "DISSOLVING",
"age_seconds": 0,
"dissolve_delay_seconds": 240269355,
"voting_power": 195170955,
"created_timestamp_seconds": 1638802541
}
}
----
A resource bundle is a Java .properties file that contains locale-specific data.
Given this Resource Bundle:
[source, properties]
.src/main/resources/io/micronaut/docs/i18n/messages_en.properties
----
include::test-suite/src/test/resources/io/micronaut/docs/i18n/messages_en.properties[]
----
[source, properties]
.src/main/resources/io/micronaut/docs/i18n/messages_es.properties
----
include::test-suite/src/test/resources/io/micronaut/docs/i18n/messages_es.properties[]
----
You can use api:context.i18n.ResourceBundleMessageSource[], an implementation of api:context.MessageSource[] which eases accessing link:{javase}/java/util/ResourceBundle.html[Resource Bundles] and provides cache functionality, to access the previous messages.
WARNING: Do not instantiate a new `ResourceBundleMessageSource` each time you retrieve a message. Instantiate it once, for example in a factory.
snippet::io.micronaut.docs.i18n.MessageSourceFactory[tags="clazz",indent=0,title="MessageSource Factory Example"]
Then you can retrieve the messages supplying the locale:
snippet::io.micronaut.docs.i18n.I18nSpec[tags="test",indent=0,title="ResourceBundleMessageSource Example"]
== Vectorized Parquet Decoding (Reader)
*Vectorized Parquet Decoding* (aka *Vectorized Parquet Reader*) allows for reading datasets in parquet format in batches, i.e. rows are decoded in batches. That aims at improving memory locality and cache utilization.
Quoting https://issues.apache.org/jira/browse/SPARK-12854[SPARK-12854 Vectorize Parquet reader]:
> The parquet encodings are largely designed to decode faster in batches, column by column. This can speed up the decoding considerably.
Vectorized Parquet Decoding is used exclusively when `ParquetFileFormat` is requested for a <<spark-sql-ParquetFileFormat.adoc#buildReaderWithPartitionValues, data reader>> when <<spark.sql.parquet.enableVectorizedReader, spark.sql.parquet.enableVectorizedReader>> property is enabled (`true`) and the read schema uses <<spark-sql-DataType.adoc#AtomicType, AtomicTypes>> data types only.
Vectorized Parquet Decoding uses <<spark-sql-VectorizedParquetRecordReader.adoc#, VectorizedParquetRecordReader>> for vectorized decoding.
=== [[spark.sql.parquet.enableVectorizedReader]] spark.sql.parquet.enableVectorizedReader Configuration Property
link:spark-sql-properties.adoc#spark.sql.parquet.enableVectorizedReader[spark.sql.parquet.enableVectorizedReader] configuration property is on by default.
[source, scala]
----
val isParquetVectorizedReaderEnabled = spark.conf.get("spark.sql.parquet.enableVectorizedReader").toBoolean
assert(isParquetVectorizedReaderEnabled, "spark.sql.parquet.enableVectorizedReader should be enabled by default")
----
= Text Classifier
Write a Python program called `classify.py` that will report if a given input is:
* uppercase, e.g., "HELLO"
* lowercase, e.g., "hello"
* title case, e.g., "Hello"
* a digit, e.g., "10"
* a space, e.g., " " or the tab character "\t"
* none of the above, e.g., "1.2"
The program should work like so:
----
$ ./classify.py HELLO
HELLO is uppercase.
$ ./classify.py hello
hello is lowercase.
$ ./classify.py Hello
Hello is title case.
$ ./classify.py 10
10 is a digit.
$ ./classify.py " "
input is space.
$ ./classify.py "1.2"
1.2 is unclassified.
----
It should print a brief usage if provided with no arguments:
----
$ ./classify.py
usage: classify.py [-h] str
classify.py: error: the following arguments are required: str
----
And a longer usage for the `-h` or `--help` flags:
----
$ ./classify.py -h
usage: classify.py [-h] str
Classify a given string
positional arguments:
str Some text
optional arguments:
-h, --help show this help message and exit
----
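One possible solution sketch (not an official answer key; the helper name `classify` is my own) that satisfies this spec:

```python
#!/usr/bin/env python3
"""A sketch of classify.py using str's built-in classification methods."""

import argparse
import sys


def classify(text: str) -> str:
    """Return the classification message for the given text."""
    if text.isspace():
        return "input is space."
    if text.isupper():
        return f"{text} is uppercase."
    if text.islower():
        return f"{text} is lowercase."
    if text.istitle():
        return f"{text} is title case."
    if text.isdigit():
        return f"{text} is a digit."
    return f"{text} is unclassified."


def main(argv=None):
    parser = argparse.ArgumentParser(description="Classify a given string")
    parser.add_argument("str", help="Some text")
    args = parser.parse_args(argv)
    print(classify(args.str))


if __name__ == "__main__":
    # Fall back to a demo input so the sketch also runs without arguments.
    main(sys.argv[1:] or ["HELLO"])
```

Note the order of checks: `isspace()` must run first, since every other test is false for whitespace anyway, and the catch-all `unclassified` branch handles inputs like `1.2`.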
It should pass all the tests:
----
$ make test
pytest -xv test.py
============================= test session starts ==============================
...
collected 8 items
test.py::test_exists PASSED [ 12%]
test.py::test_usage PASSED [ 25%]
test.py::test_upper PASSED [ 37%]
test.py::test_lower PASSED [ 50%]
test.py::test_title PASSED [ 62%]
test.py::test_digit PASSED [ 75%]
test.py::test_space PASSED [ 87%]
test.py::test_unclassified PASSED [100%]
============================== 8 passed in 0.44s ===============================
----
This is the state metricset of the beat module.
// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-configuring-pci-passthrough.adoc
:_content-type: CONCEPT
[id="virt-about_pci-passthrough_{context}"]
= About preparing a host device for PCI passthrough
To prepare a host device for PCI passthrough by using the CLI, create a `MachineConfig` object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the `permittedHostDevices` field of the `HyperConverged` custom resource (CR). The `permittedHostDevices` list is empty when you first install the {VirtProductName} Operator.
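As a sketch, a `MachineConfig` object of the kind described above might look like the following; the name, role label, and Ignition version here are illustrative, so check the documentation for your cluster version before applying:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # apply to worker nodes
  name: 100-worker-iommu                             # illustrative name
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
    - intel_iommu=on   # enable the IOMMU (Intel hosts)
```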
To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the `HyperConverged` CR.
= Boost your crowdfunding campaign with these 10 tips
By Lyndsey Gilpin, December 4, 2014, 4:54 AM PST
**After the initial donations flow in, you need to gain more momentum for your crowdfunding campaign. Here are 10 ways to do it.**
You worked for months building your crowdfunding campaign. You made your prototype, shot a video, wrote your pitch,
practiced your spiel, and designed your website. The campaign finally goes live and your friends and family
immediately back you. But after a quick start, you're left wondering how to gain more momentum. After all, campaigns
go viral on Indiegogo and Kickstarter all the time. How do you wake up to 1,000 backers and an overfunded campaign?
In the spirit of full disclosure, I am currently running a crowdfunding campaign for a book project, so figuring out how to
find more backers consumes a lot of my thoughts. But it's important stuff, because promoting your campaign is the
most challenging aspect of this process.
So, here are 10 ways to boost your campaign.
== 1. Look at your pitch again
[Georgia Tech researchers](http://comp.social.gatech.edu/papers/cscw14.crowdfunding.mitra.pdf) looked at nine million phrases from
45,000 Kickstarter campaigns and found some of the most used phrases from campaigns that were fully funded vs.
more unsuccessful campaigns. Most of the successful campaigns used persuasive phrases, as you'd probably
imagine. Some of the popular phrases used: "good karma," "got you," "given the chance," "future is," "some help with,"
and, of course, "cats."
Some words that didn't work well? "Hope to get," provide us," "need one," and "we have lots."
== 2. Update
[According to Indiegogo](https://go.indiegogo.com/playbook/lifecyclephase/runningyourcampaign), projects that post at least three
updates throughout their campaign raise 239% more money than those who don't post updates. Really, the more
updates you send, the more money you raise. Send one every few days. Thank your contributors, tell them about new
news you have about your product, share your achievements, and let them know if there are any changes. Keep
engaging your community.
== 3. Run referral contests
Referral contests work well to motivate people to share your campaign. Recognize your backers who send people your
way or share your information on social media. Indiegogo has a help center for running a [referral contest](http://support.indiegogo.com/hc/enus/articles/527406HowtoRunaReferralContest) .
Typically, referral contests work better 30 days
into the campaign, or when 60% of the funding goal has already been reached, so you have your base audience.
== 4. Make a social media strategy
Obviously, this one is huge. But for good reason: your social media strategy determines whether your crowdfunding
campaign lives or dies. You're not going to get backers by relying on people to browse through campaigns on the
platforms. You have to reach out to them. Ask your most committed inner circle to share with at least five of their
friends, for instance. Personally reach out to people on Facebook and Twitter. Cross-promote. And instead of focusing
on making your campaign go viral, focus on what you can control. Be consistent with your audience, but switch up your
posts.
== 5. Buzzstream
When you install the [Buzzmarker Chrome extension](https://chrome.google.com/webstore/detail/buzzstream-buzzmarker/difjajjajoeppdgbcbbhbdblblmkcbhp?hl=enUS),
you just have to click on the page and "Buzzmark it" and it identifies
the contact email for you for that particular page or blog. [Buzzstream](http://www.buzzstream.com/) also has a "prospecting
list" feature that shows you where other blog links are on the site. Big time saver. There's also a free trial for two
weeks, which is just enough time for you to use it during your crowdfunding campaign.
== 6. Scout your competition
Guaranteed, your project has some competition on Kickstarter and Indiegogo, and probably on smaller crowdfunding
sites, too. But for starters, use Google to search campaigns like yours on Kickstarter and Indiegogo, find out who
covered them in the media, and reach out to those reporters or blogs. You can also use Google image search for this:
save a few images from the campaign, drag them into the Google image search tool, and backtrack to find out who covered
them when the images show up on the results page.
== 7. Buzzsumo
[Buzzsumo](http://buzzsumo.com/) allows you to search for Twitter influencers based on topic. Type in "crowdfunding" in the
search bar, and you'll find the most talked about campaigns and who has mentioned them. (Unfortunately, the [potato salad](https://app.buzzsumo.com/topcontent?type=articles&result_type=total&num_days=360&general_article&infographic&video&guest_post&giveaway&interview&q=crowdfunding&page=1) Kickstarter is still winning out
with this search). Buzzsumo is a great way to find who is talking about things related to your campaign. Then you can
personally tweet at them or engage them to pique their interest and start attracting a bigger audience.
== 8. Add perks
If you're doing well, add a perk or update the ones you have. To engage people in your campaign more, offer a live
stream or live video, or a Twitter chat to talk about the product. Make a hashtag and promote it. Invite influencers you
know on social media to help you out. You need to build that audience, and people want to see who is behind the
product they're looking at. Make yourself accessible by doing a video Q & A about your campaign, or offer incentives to
get people to contribute.
== 9. HARO
[Help a Reporter Out](http://www.helpareporter.com/) is a popular way for journalists to find sources for their stories. It's also a
pretty nice way to get the press interested in your campaign, if you act fast enough. If a reporter mentions something
related to your product or campaign and you think you might make a good source for their story, hit them up. They get
an example for their story, and you probably get a link to your campaign in the article. It's a win-win. But as a reporter
who likes HARO, heed my advice: don't waste your time contacting journalists who have nothing to do with your
product. Do your research.
== 10. Kicktraq
[Kicktraq](http://www.kicktraq.com/) mainly does analytics for Kickstarter, but it can also be used to search the internet for
mentions of your competitors' campaigns. Find out who covered them, and then pitch those members of the media. Or,
it may simply give you inspiration on how best to promote, update, or market your product during the rest of your
campaign.

= New Tab
Thor Andreas Rognan <thor.rognan@gmail.com>
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
_Custom Chrome extension_
= Developer guides
:docinfo: shared
:nofooter:
This section of the documentation is intended to get you started writing applications to work with the submissions API.
We'll cover everything you need to know, from authentication, to submission and validation.
We suggest that you start with <<guide_accounts_and_logging_in.adoc#,Setting up a user account and logging in>>, then
move on to <<guide_getting_started.adoc#,Getting started with the submissions API>>.
[[_service_accounts]]
==== Service Accounts
Each OIDC client has a built-in _service account_ which allows it to obtain an access token.
This is covered in the OAuth 2.0 specification under <<_client_credentials_grant,Client Credentials Grant>>.
To use this feature you must set the <<_access-type, Access Type>> of your client to `confidential`. When you do this,
the `Service Accounts Enabled` switch will appear. You need to turn on this switch. Also make sure that you have
configured your <<_client-credentials, client credentials>>.
In other words, you must have registered a valid `confidential` client and checked the `Service Accounts Enabled` switch for it in the {project_name} admin console.
In tab `Service Account Roles` you can configure the roles available to the service account retrieved on behalf of this client.
Remember that you must have the roles available in the Role Scope Mappings (tab `Scope`) of this client as well, unless you
have `Full Scope Allowed` on. As in a normal login, the roles in the access token are the intersection of:
* Role scope mappings of particular client combined with the role scope mappings inherited from linked client scopes
* Service account roles
The REST URL to invoke on is `/auth/realms/{realm-name}/protocol/openid-connect/token`.
Invoking on this URL is a POST request and requires you to post the client credentials.
By default, client credentials are represented by clientId and clientSecret of the client in `Authorization: Basic` header, but you can also authenticate the client with a signed JWT assertion or any other custom mechanism for client authentication.
You also need to use the parameter `grant_type=client_credentials` as per the OAuth2 specification.
For example the POST invocation to retrieve a service account can look like this:
[source]
----
POST /auth/realms/demo/protocol/openid-connect/token
Authorization: Basic cHJvZHVjdC1zYS1jbGllbnQ6cGFzc3dvcmQ=
Content-Type: application/x-www-form-urlencoded
grant_type=client_credentials
----
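The `Authorization: Basic` value in the example above is simply the base64-encoded `clientId:clientSecret` pair. A small shell sketch (using the example's illustrative credentials) shows how to derive it:

```shell
# Build the Basic auth header from clientId:clientSecret.
# "product-sa-client" / "password" are the illustrative values
# used in the example request above.
CLIENT_ID="product-sa-client"
CLIENT_SECRET="password"
TOKEN=$(printf '%s' "${CLIENT_ID}:${CLIENT_SECRET}" | base64)
echo "Authorization: Basic ${TOKEN}"
# → Authorization: Basic cHJvZHVjdC1zYS1jbGllbnQ6cGFzc3dvcmQ=
```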
The response would be this https://datatracker.ietf.org/doc/html/rfc6749#section-4.4.3[standard JSON document] from the OAuth 2.0 specification.
[source]
----
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache
{
"access_token":"2YotnFZFEjr1zCsicMWpAA",
"token_type":"bearer",
"expires_in":60
}
----
Only an access token is returned by default. No refresh token is returned, and no user session is created
on the {project_name} side upon successful authentication. Because there is no refresh token, the client needs to re-authenticate when the access token expires;
however, this does not add any overhead on the {project_name} server side, because sessions are not created by default.
Consequently, there is no need for logout; however, issued access tokens can be revoked by sending a request to the OAuth2 Revocation Endpoint described
in the <<_oidc-endpoints, OpenID Connect Endpoints>> section.
49d22ec7861b17beeb8ff59c438870f2c5e42fab | 2,048 | adoc | AsciiDoc | gravitee-apim-console-webui/README.adoc | mkajitansnyk/gravitee-gateway | d8c5e726514f95f39216cbc4ef4fca994fb3938a | [
"Apache-2.0"
] | 100 | 2021-07-09T06:46:26.000Z | 2022-03-31T07:43:51.000Z | gravitee-apim-console-webui/README.adoc | mkajitansnyk/gravitee-gateway | d8c5e726514f95f39216cbc4ef4fca994fb3938a | [
"Apache-2.0"
] | 261 | 2021-07-28T15:32:23.000Z | 2022-03-29T07:51:12.000Z | gravitee-apim-console-webui/README.adoc | mkajitansnyk/gravitee-gateway | d8c5e726514f95f39216cbc4ef4fca994fb3938a | [
"Apache-2.0"
] | 30 | 2021-09-08T09:06:15.000Z | 2022-03-31T02:44:29.000Z | == Gravitee API Management - Console
== Description
This repo contains the source code of APIM Console.
APIM Console is a client-side only Angular application and can be deployed on any HTTP server, such as Apache or Nginx.
For more information about installation and usage, see https://docs.gravitee.io/apim/3.x/apim_installguide_management_ui_install_zip.html[Gravitee.io Documentation Website].
== Contributing
=== Install
Prerequisites:
- Install https://github.com/nvm-sh/nvm[nvm]
- Use with `nvm use` or install with `nvm install` the version of Node.js declared in `.nvmrc`
- Then install dependencies with:
[source,bash]
----
npm install
----
=== Getting started
Here are the useful NPM scripts available when developing in APIM Console:
- `serve`: Start the app in dev mode (with hot reload) and proxy backend calls to `http://localhost:8083`
- `serve:nightly`: Start the app in dev mode (with hot reload) and proxy backend calls to `https://nightly.gravitee.io`
- `lint:eslint`: Run ESLint and Prettier
- `lint:eslint:fix`: Run ESLint in auto fix mode and Prettier in write mode
- `test`: Run unit tests with Jest
- `build:prod`: Build the app in production mode and output the result to `dist`
- `serve:prod`: Start the built app (from `dist` folder) and proxy backend calls to `http://localhost:8083`. Don't forget to run `npm run build:prod` to build the app before starting serving it.
=== About WIP dependencies
This project uses the https://github.com/gravitee-io/gravitee-ui-components[Gravitee UI Components] library, and sometimes changes need to be made in both projects at the same time. If you want to develop them in parallel, you can clone the repository and link it to the project.
[source,bash]
----
git clone git@github.com:gravitee-io/gravitee-ui-components.git
cd gravitee-ui-components
npm link
# Go to gravitee-apim-console-webui folder
npm link @gravitee/ui-components
----
⚠️ The npm link will be removed if you run `npm install`, and so you will need to rerun the previous snippet to link the library.
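Since `npm install` silently replaces the symlink, a quick check (illustrative, assuming the standard `node_modules` layout) tells you whether the link is still in place:

```shell
# Prints "linked" while node_modules still points at the locally
# linked package; rerun the npm link snippet above otherwise.
PKG="node_modules/@gravitee/ui-components"
if [ -L "$PKG" ]; then
  echo "linked"
else
  echo "not linked (rerun the npm link steps)"
fi
```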
:imagesdir: images
== Build a Docker Image
*PURPOSE*: This chapter explains how to create a Docker image.
As explained in <<Docker_Basics>>, a Docker image is the *build component* of Docker and a read-only template of the application's operating system.
=== Dockerfile
Docker builds images by reading instructions from a _Dockerfile_. A _Dockerfile_ is a text document that contains all the commands a user could call on the command line to assemble an image. The `docker build` command uses this file and executes all the commands in succession to create an image.
The `build` command is also passed a context that is used during image creation. This context can be a path on your local filesystem or a URL to a Git repository.
_Dockerfile_ is usually called _Dockerfile_. The complete list of commands that can be specified in this file are explained at https://docs.docker.com/reference/builder/. The common commands are listed below:
.Common commands for Dockerfile
[width="100%", options="header", cols="1,4,4"]
|==================
| Command | Purpose | Example
| FROM | First non-comment instruction in _Dockerfile_ | `FROM ubuntu`
| COPY | Copies multiple source files from the context to the file system of the container at the specified path | `COPY .bash_profile /home`
| ENV | Sets the environment variable | `ENV HOSTNAME=test`
| RUN | Executes a command | `RUN apt-get update`
| CMD | Defaults for an executing container | `CMD ["/bin/echo", "hello world"]`
| EXPOSE | Informs the network ports that the container will listen on | `EXPOSE 8093`
|==================
=== Create your first Docker image
Create a new directory.
Create a new text file, name it _Dockerfile_, and use the following contents:
[source, text]
----
FROM ubuntu
CMD ["/bin/echo", "hello world"]
----
This image uses `ubuntu` as the base image. `CMD` command defines the command that needs to run. It provides a different entry point of `/bin/echo` and gives the argument "`hello world`".
Build the image:
[source, text]
----
> docker build -t helloworld .
Sending build context to Docker daemon 2.048 kB
Step 0 : FROM ubuntu
Pulling repository docker.io/library/ubuntu
a5a467fddcb8: Download complete
3fd0c2ae8ed2: Download complete
9e19ac89d27c: Download complete
ac65c371c3a5: Download complete
Status: Downloaded newer image for ubuntu:latest
---> a5a467fddcb8
Step 1 : CMD /bin/echo hello world
---> Running in 132bb0bf823f
---> e81a394f71e3
Removing intermediate container 132bb0bf823f
Successfully built e81a394f71e3
----
`.` in this command is the context for `docker build`.
List the images available:
[source, text]
----
> docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
helloworld latest 9c0e7b56cbee 13 minutes ago 187.9 MB
----
Run the container:
docker run helloworld
to see the output:
hello world
If you do not see the expected output, check your Dockerfile, build the image again, and now run it!
Change the base image from `ubuntu` to `busybox` in `Dockerfile`. Build the image again:
docker build -t helloworld2 .
and view the images using `docker images` command:
[source, text]
----
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
helloworld latest e81a394f71e3 26 minutes ago 187.9 MB
helloworld2 latest c458787fadcf 3 seconds ago 1.113 MB
ubuntu latest a5a467fddcb8 2 days ago 187.9 MB
busybox latest 3d5bcd78e074 4 days ago 1.113 MB
----
Note how base images for Ubuntu and Busybox are downloaded.
=== Create your first Docker image using Java
==== Create a simple Java application
Create a new Java project:
[source, text]
----
mvn archetype:generate -DgroupId=org.examples.java -DartifactId=helloworld -DinteractiveMode=false
----
Build the project:
[source, text]
----
cd helloworld
mvn package
----
Run the Java class:
[source, text]
----
java -cp target/helloworld-1.0-SNAPSHOT.jar org.examples.java.App
----
This shows the output:
[source, text]
----
Hello World!
----
Let's package this application as a Docker image.
==== Java Docker image
Pull the latest Docker image for Java:
[source, text]
----
docker pull java
----
Run the container in an interactive manner:
[source, text]
----
docker run -it java
----
Check the version of Java:
[source, text]
----
root@44b355b45ab1:/# java -version
openjdk version "1.8.0_72-internal"
OpenJDK Runtime Environment (build 1.8.0_72-internal-b15)
OpenJDK 64-Bit Server VM (build 25.72-b15, mixed mode)
----
A different version may be seen in your case.
==== Package and Run Java application as Docker image
Create a new Dockerfile in `helloworld` directory:
[source, text]
----
FROM java
COPY target/helloworld-1.0-SNAPSHOT.jar /usr/src/helloworld-1.0-SNAPSHOT.jar
CMD java -cp /usr/src/helloworld-1.0-SNAPSHOT.jar org.examples.java.App
----
Build the image:
[source, text]
----
docker build -t hello-java .
----
Run the image:
[source, text]
----
docker run hello-java
----
This displays the output:
[source, text]
----
Hello World!
----
==== Package and Run Java Application using Docker Maven Plugin
https://github.com/fabric8io/docker-maven-plugin[Docker Maven Plugin] allows you to manage Docker images and containers using Maven. It comes with predefined goals:
[options="header"]
|====
|Goal | Description
| `docker:build` | Build images
| `docker:start` | Create and start containers
| `docker:stop` | Stop and destroy containers
| `docker:push` | Push images to a registry
| `docker:remove` | Remove images from local docker host
| `docker:logs` | Show container logs
|====
Clone the sample code from https://github.com/arun-gupta/docker-java-sample/.
Create the Docker image:
[source, text]
----
cd docker-java-sample
mvn package -Pdocker
----
This will show an output like:
[source, text]
----
[INFO] DOCKER> [hello-java] : Built image sha256:09ab7
----
The list of images can be checked:
[source, text]
----
docker images | grep hello-java
hello-java latest 09ab715ec59d 44 seconds ago 642.4 MB
----
Run the Docker container:
[source, text]
----
mvn install -Pdocker
----
This will show an output like:
[source, text]
----
[INFO] DOCKER> [hello-java] : Start container 11550a8dc086
[INFO] DOCKER> [hello-java] : Waited on log out 'Hello' 503 ms
[INFO]
[INFO] --- docker-maven-plugin:0.14.2:logs (docker:start) @ helloworld ---
11550a> Hello World!
----
This output is similar to running the container with the `docker run` command.
Only one change was required in the project to enable Docker packaging and running. A Maven profile is added in `pom.xml`:
[source, text]
----
<profiles>
<profile>
<id>docker</id>
<build>
<plugins>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.14.2</version>
<configuration>
<images>
<image>
<name>hello-java</name>
<build>
<from>java</from>
<assembly>
<descriptorRef>artifact</descriptorRef>
</assembly>
<cmd>java -cp maven/${project.name}-${project.version}.jar org.examples.java.App</cmd>
</build>
<run>
<wait>
<log>Hello</log>
</wait>
</run>
</image>
</images>
</configuration>
<executions>
<execution>
<id>docker:build</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
</execution>
<execution>
<id>docker:start</id>
<phase>install</phase>
<goals>
<goal>start</goal>
<goal>logs</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
----
=== Dockerfile Command Design Patterns
==== Difference between CMD and ENTRYPOINT
*TL;DR* `CMD` will work for most cases.
The default entry point for a container is `/bin/sh`, the default shell.
Running a container as `docker run -it ubuntu` uses that command and starts the default shell. The output is shown as:
```console
> docker run -it ubuntu
root@88976ddee107:/#
```
`ENTRYPOINT` allows you to override the entry point with some other command, and even customize it. For example, a container can be started as:
```console
> docker run -it --entrypoint=/bin/cat ubuntu /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
. . .
```
This command overrides the entry point to the container to `/bin/cat`. The argument(s) passed to the CLI are used by the entry point.
==== Difference between ADD and COPY
*TL;DR* `COPY` will work for most cases.
`ADD` has all capabilities of `COPY` and has the following additional features:
. Allows tar file auto-extraction in the image, for example, `ADD app.tar.gz /opt/var/myapp`.
. Allows files to be downloaded from a remote URL. However, the downloaded files become part of the image, which bloats its size. It's therefore recommended to use `curl` or `wget` to download the archive explicitly, extract it, and remove the archive.
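The recommended download-extract-remove pattern can be seen end to end with plain `tar` (a local stand-in for the `curl`/`wget` download inside a single Dockerfile `RUN` layer; all paths are illustrative):

```shell
# Stand-in for: RUN curl -o app.tar.gz <url> && tar xzf ... && rm app.tar.gz
mkdir -p /tmp/myapp-src /tmp/myapp-dst
echo "hello" > /tmp/myapp-src/app.txt
tar -czf /tmp/app.tar.gz -C /tmp/myapp-src app.txt  # simulate the downloaded archive
tar -xzf /tmp/app.tar.gz -C /tmp/myapp-dst          # extract it to the target directory
rm /tmp/app.tar.gz                                  # remove the archive so it never bloats a layer
cat /tmp/myapp-dst/app.txt
# → hello
```

Because all three steps happen in one `RUN` instruction, the intermediate archive never appears in any image layer.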
==== Import and export images
Docker images can be saved using `save` command to a .tar file:
docker save helloworld > helloworld.tar
These tar files can then be imported using `load` command:
docker load -i helloworld.tar
| 28.906336 | 291 | 0.624893 |
c5cea0aaed8f7957802d4021347de3a39d4e56ed | 1,326 | adoc | AsciiDoc | docs/projektbericht/ergebnisse/reflexion_theresa.adoc | mribrgr/StuRa-Mitgliederdatenbank | 87a261d66c279ff86056e315b05e6966b79df9fa | [
"MIT"
] | 8 | 2019-11-26T13:34:46.000Z | 2021-06-21T13:41:57.000Z | docs/projektbericht/ergebnisse/reflexion_theresa.adoc | mribrgr/StuRa-Mitgliederdatenbank | 87a261d66c279ff86056e315b05e6966b79df9fa | [
"MIT"
] | 93 | 2019-12-16T09:29:10.000Z | 2021-04-24T12:03:33.000Z | docs/projektbericht/ergebnisse/reflexion_theresa.adoc | mribrgr/StuRa-Mitgliederdatenbank | 87a261d66c279ff86056e315b05e6966b79df9fa | [
"MIT"
] | 2 | 2020-12-03T12:43:19.000Z | 2020-12-22T21:48:47.000Z | = Theresa Schüttig (TSc)
Durch das Projekt konnte ich meine Kenntnisse in Python, Javascript und Git vertiefen und erstmals lernen, mit den Frameworks Django und Materialize zu arbeiten. Zudem war es mir eine neue Erfahrung, gemeinsam in einem Team an einer umfangreichen Software zu arbeiten und zu sehen, welche Vor- und Nachteile dies bietet. Die gute und regelmäßige Kommunikation innerhalb des Teams, die gegenseitige Unterstützung bei Problemen sowie die Arbeitsteilung stachen in meinen Augen besonders als positive Faktoren heraus.
Stolz bin ich auf die von uns entwickelte funktionsfähige, intuitiv bedienbare und optisch ansprechende Webanwendung, welche dem StuRa viel unnötige mühsame Arbeit abnehmen sollte.
Im nächsten Projekt würde ich im Team eine Person für den Aufbau des GUI einsetzen, um sich als Entwickler mehr mit der Funktionalität befassen zu können. Zudem würde ich zu Zeiten arbeiten, zu denen auch andere Teammitglieder für Absprachen erreichbar sind, um Problematiken zeitnah lösen zu können. Vor der Implementierung würde ich mich mehr mit mir unbekannten Thematiken befassen, um nicht im Nachhinein festzustellen, dass eine andere Lösung besser geeignet wäre. Gleichzeitig würde ich mehr Wert darauf legen, den Code geeignet zu strukturieren, um z.B. bestimmte Funktionen auffinden zu können. | 189.428571 | 602 | 0.831825 |
baa2ec4c9e1c50c2d1acf2380ace062031a195c9 | 964 | adoc | AsciiDoc | documentation/modules/oauth/con-oauth-config.adoc | jpkrohling/strimzi-kafka-operator | 7f5dcd6c8e7aebd4ef1ccf8e957ae0cc45a8610f | [
"Apache-2.0"
] | null | null | null | documentation/modules/oauth/con-oauth-config.adoc | jpkrohling/strimzi-kafka-operator | 7f5dcd6c8e7aebd4ef1ccf8e957ae0cc45a8610f | [
"Apache-2.0"
] | 5 | 2020-04-23T20:30:41.000Z | 2021-12-14T21:39:00.000Z | documentation/modules/oauth/con-oauth-config.adoc | jpkrohling/strimzi-kafka-operator | 7f5dcd6c8e7aebd4ef1ccf8e957ae0cc45a8610f | [
"Apache-2.0"
] | 1 | 2020-01-02T09:39:33.000Z | 2020-01-02T09:39:33.000Z | // Module included in the following assemblies:
//
// assembly-oauth.adoc
[id='con-oauth-strimzi-config-{context}']
= Configuring {oauth} authentication
{oauth} is used for interaction between Kafka clients and {ProductName} components.
In order to use {oauth} for {ProductName}, you must:
. xref:proc-oauth-server-config-{context}[Deploy an authorization server and configure the deployment to integrate with {ProductName}]
. xref:proc-oauth-broker-config-{context}[Deploy or update the Kafka cluster with Kafka broker listeners configured to use {oauth}]
. xref:proc-oauth-client-config-{context}[Update your Java-based Kafka clients to use {oauth}]
. xref:proc-oauth-kafka-config-{context}[Update Kafka component clients to use {oauth}]
include::proc-oauth-server-config.adoc[leveloffset=+1]
include::proc-oauth-broker-config.adoc[leveloffset=+1]
include::proc-oauth-client-config.adoc[leveloffset=+1]
include::proc-oauth-kafka-config.adoc[leveloffset=+1]
| 45.904762 | 134 | 0.780083 |
df40dc5e76765297323c7852dd7294050db7f141 | 1,952 | adoc | AsciiDoc | spark-sql-StructField.adoc | yashwanth2804/mastering-spark-sql-book | e34670138b23a28bf9052d3a41e910d3df3637a4 | [
"Apache-2.0"
] | null | null | null | spark-sql-StructField.adoc | yashwanth2804/mastering-spark-sql-book | e34670138b23a28bf9052d3a41e910d3df3637a4 | [
"Apache-2.0"
] | null | null | null | spark-sql-StructField.adoc | yashwanth2804/mastering-spark-sql-book | e34670138b23a28bf9052d3a41e910d3df3637a4 | [
"Apache-2.0"
] | 1 | 2020-09-21T00:31:38.000Z | 2020-09-21T00:31:38.000Z | == [[StructField]] StructField -- Single Field in StructType
[[creating-instance]]
`StructField` describes a single field in a <<spark-sql-StructType.adoc#, StructType>> with the following:
* [[name]] Name
* [[dataType]] <<spark-sql-DataType.adoc#, DataType>>
* [[nullable]] `nullable` flag (enabled by default)
* [[metadata]] `Metadata` (empty by default)
A comment is part of metadata under `comment` key and is used to build a Hive column or when describing a table.
[source, scala]
----
scala> schemaTyped("a").getComment
res0: Option[String] = None
scala> schemaTyped("a").withComment("this is a comment").getComment
res1: Option[String] = Some(this is a comment)
----
As of Spark 2.4.0, `StructField` can be converted to DDL format using <<toDDL, toDDL>> method.
.Example: Using StructField.toDDL
[source, scala]
----
import org.apache.spark.sql.types.MetadataBuilder
val metadata = new MetadataBuilder()
.putString("comment", "this is a comment")
.build
import org.apache.spark.sql.types.{LongType, StructField}
val f = new StructField(name = "id", dataType = LongType, nullable = false, metadata)
scala> println(f.toDDL)
`id` BIGINT COMMENT 'this is a comment'
----
=== [[toDDL]] Converting to DDL Format -- `toDDL` Method
[source, scala]
----
toDDL: String
----
`toDDL` gives a text in the format:
```
[quoted name] [dataType][optional comment]
```
[NOTE]
====
`toDDL` is used when:
* `StructType` is requested to <<spark-sql-StructType.adoc#toDDL, convert itself to DDL format>>
* <<spark-sql-LogicalPlan-ShowCreateTableCommand.adoc#, ShowCreateTableCommand>> logical command is executed (and <<spark-sql-LogicalPlan-ShowCreateTableCommand.adoc#showHiveTableHeader, showHiveTableHeader>>, <<spark-sql-LogicalPlan-ShowCreateTableCommand.adoc#showHiveTableNonDataColumns, showHiveTableNonDataColumns>>, <<spark-sql-LogicalPlan-ShowCreateTableCommand.adoc#showDataSourceTableDataColumns, showDataSourceTableDataColumns>>)
====
| 33.655172 | 439 | 0.741803 |
6789810feb4463e45c8cb5bb6fc2110a9c0d9a78 | 7,607 | adoc | AsciiDoc | doc/WebSites.adoc | raandree/CommonTasks | 462f645025f3dc407ebf6027f08c32e197f4ef7a | [
"MIT"
] | null | null | null | doc/WebSites.adoc | raandree/CommonTasks | 462f645025f3dc407ebf6027f08c32e197f4ef7a | [
"MIT"
] | null | null | null | doc/WebSites.adoc | raandree/CommonTasks | 462f645025f3dc407ebf6027f08c32e197f4ef7a | [
"MIT"
] | null | null | null | // CommonTasks YAML Reference: WebSites
// ========================================
:YmlCategory: WebSites
[[dscyml_websites, {YmlCategory}]]
= DSC Resource 'WebSites'
// didn't work in production: = DSC Resource '{YmlCategory}'
[[dscyml_websites_abstract]]
.{YmlCategory} module is used to manage ISS web sites.
[cols="1,3a" options="autowidth" caption=]
|===
| Source | https://github.com/dsccommunity/CommonTasks/tree/dev/CommonTasks/DscResources/WebSites
| DSC Resource | https://github.com/dsccommunity/xWebAdministration[xWebAdministration]
| Documentation | https://github.com/dsccommunity/xWebAdministration#xwebsite[xWebSite]
|===
.Attributes of category '{YmlCategory}'
[cols="1,1,1,2a,1a" options="header"]
|===
| Parameter
| Attribute
| DataType
| Description
| Allowed Values
| [[dscyml_websites_items, {YmlCategory}/Items]]<<dscyml_websites_items_details, Items>>
|
| Hashtable[]
| IIS Web Sites
|
|===
[[dscyml_websites_items_details]]
.Attributes of category '<<dscyml_websites_items>>'
[cols="1,1,1,2a,1a" options="header"]
|===
| Parameter
| Attribute
| DataType
| Description
| Allowed Values
| Name
| Key
| String
| The desired name of the website.
|
| SiteId
|
| UInt32
| Optional. The desired IIS site Id for the website.
|
| PhysicalPath
|
| String
| The path to the files that compose the website.
|
| State
|
| String
| The state of the website
| - Started
- Stopped
| BindingInfo
|
| Hashtable[]
| Website's binding information in the form of an array of embedded instances of the MSFT_xWebBindingInformation CIM class that implements the following properties:
- *Protocol*: The protocol of the binding. This property is required. The acceptable values for this property are: http, https, msmq.formatname, net.msmq, net.pipe, net.tcp.
- *BindingInformation*: The binding information in the form a colon-delimited string that includes the IP address, port, and host name of the binding. This property is ignored for http and https bindings if at least one of the following properties is specified: IPAddress, Port, HostName.
- *IPAddress*: The IP address of the binding. This property is only applicable for http and https bindings. The default value is *.
- *Port*: The port of the binding. The value must be a positive integer between 1 and 65535. This property is only applicable for http (the default value is 80) and https (the default value is 443) bindings.
- *HostName*: The host name of the binding. This property is only applicable for http and https bindings.
- *CertificateThumbprint*: The thumbprint of the certificate. This property is only applicable for https bindings.
- *CertificateSubject*: The subject of the certificate if the thumbprint isn't known. This property is only applicable for https bindings.
- *CertificateStoreName*: The name of the certificate store where the certificate is located. This property is only applicable for https bindings. The acceptable values for this property are: My, WebHosting. The default value is My.
- *SslFlags*: The type of binding used for Secure Sockets Layer (SSL) certificates. This property is supported in IIS 8.0 or later, and is only applicable for https bindings. The acceptable values for this property are:
* 0: The default value. The secure connection be made using an IP/Port combination. Only one certificate can be bound to a combination of IP address and the port.
* 1: The secure connection be made using the port number and the host name obtained by using Server Name Indication (SNI). It allows multiple secure websites with different certificates to use the same IP address.
* 2: The secure connection be made using the Centralized Certificate Store without requiring a Server Name Indication.
* 3: The secure connection be made using the Centralized Certificate Store while requiring Server Name Indication.
|
| ApplicationPool
|
| String
| The name of the website’s application pool.
|
| DefaultPage
|
| String[]
| One or more names of files that will be set as Default Documents for this website.
|
| EnabledProtocols
|
| String
| The protocols that are enabled for the website.
|
| ServerAutoStart
|
| Boolean
| When set to $true this will enable Autostart on a Website
| - True
- False
| Ensure
|
| String
| Ensures that the website is Present or Absent.
| - *Present* (default)
- Absent
| PreloadEnabled
|
| Boolean
| When set to `True` this will allow WebSite to automatically start without a request
| - True
- False
| ServiceAutoStartEnabled
|
| Boolean
| When set to `True` this will enable application Autostart (application initalization without an initial request) on a Website
| - True
- False
| ServiceAutoStartProvider
|
| String
| Adds an AutostartProvider
|
| ApplicationType
|
| String
| Adds an AutostartProvider ApplicationType
|
| AuthenticationInfo
|
| Hashtable
| Website's authentication information in the form of an embedded instance of the MSFT_xWebAuthenticationInformation CIM class.
MSFT_xWebAuthenticationInformation takes the following properties:
- *Anonymous*: The acceptable values for this property are: $true, $false
- *Basic* The acceptable values for this property are: $true, $false
- *Digest*: The acceptable values for this property are: $true, $false
- *Windows*: The acceptable values for this property are: $true, $false
|
| LogPath
|
|
| The directory to be used for logfiles.
|
| LogFlags
|
|
| The W3C logging fields
The values that are allowed for this property are: Date,Time,ClientIP,UserName,SiteName,ComputerName,ServerIP,Method,UriStem,UriQuery,HttpStatus,Win32Status,BytesSent,BytesRecv,TimeTaken,ServerPort,UserAgent,Cookie,Referer,ProtocolVersion,Host,HttpSubStatus
|
| LogPeriod
|
|
| How often the log file should rollover.
The values that are allowed for this property are: Hourly,Daily,Weekly,Monthly,MaxSize
|
| LogTargetW3C
|
|
| Log Target of the W3C Logfiles.
| - File
- ETW
- File,ETW
| LogTruncateSize
|
|
| How large the file should be before it is truncated.
If this is set then LogPeriod will be ignored if passed in and set to MaxSize.
The value must be a valid integer between 1048576 (1MB) and 4294967295 (4GB).
| 1MB - 4GB
| LoglocalTimeRollover
|
| Boolean
| Use the localtime for file naming and rollover.
| - True
- False
| LogFormat
|
|
| Format of the Logfiles.
NOTE: Only `W3C` supports `LogFlags`.
| - IIS
- W3C
- NCSA
| LogCustomFields
|
| Hashtable[]
| Custom logging field information the form of an array of embedded instances of the MSFT_xLogCustomFieldInformation CIM class that implements the following properties:
- *LogFieldName*: Field name to identify the custom field within the log file. Please note that the field name cannot contain spaces.
- *SourceType*: The acceptable values for this property are: RequestHeader, ResponseHeader, or ServerVariable (note that enhanced logging cannot log a server variable with a name that contains lower-case characters - to include a server variable in the event log just make sure that its name consists of all upper-case characters).
- *SourceName*: Name of the HTTP header or server variable (depending on the Source Type you selected) that contains a value that you want to log.
|===
.Example
[source, yaml]
----
WebSites:
  Items:
    # Remove Default WebSite
    - Name: Default Web Site
      Ensure: Absent
    # Create New WebSite
    - Name: TestSite2
      ApplicationPool: TestAppPool2
      AuthenticationInfo:
        Anonymous: False
        Basic: True
        Digest: False
        Windows: True
----
////
This file is generated! See scripts/docs_collector.py
////
:edit_url: https://github.com/elastic/beats/edit/main/filebeat/module/postgresql/_meta/docs.asciidoc
[[filebeat-module-postgresql]]
:modulename: postgresql
:has-dashboards: true
== PostgreSQL module
include::{libbeat-dir}/shared/integration-link.asciidoc[]
The +{modulename}+ module collects and parses logs created by
https://www.postgresql.org/[PostgreSQL].
include::../include/what-happens.asciidoc[]
include::../include/gs-link.asciidoc[]
[float]
=== Compatibility
This module comes in two flavours: a parser of log files based on Linux distribution
defaults, and a CSV log parser, which you need to enable in the database configuration.
The +{modulename}+ module using `.log` was tested with logs from versions 9.5 on Ubuntu,
9.6 on Debian, and finally 10.11, 11.4 and 12.2 on Arch Linux 9.3.
The +{modulename}+ module using `.csv` was tested using versions 11 and 13 (distro is not relevant here).
[float]
=== Supported log formats
This module can collect any logs from PostgreSQL servers, but to be able to
better analyze their contents and extract more information, they should be
formatted in a specific way.
There are some settings to take into account for the log format.
Log lines should be prefixed with the timestamp in milliseconds, the process
ID, the user ID and the database name. This is usually the default in most
distributions, and is translated to this setting in the configuration file:
["source","sh"]
----------------------------
log_line_prefix = '%m [%p] %q%u@%d '
----------------------------
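As a quick illustration (not part of the Filebeat module itself), a log line produced with this prefix can be split into its fields with a small script. The sample line and regular expression below are assumptions based on the prefix format, so adjust them if your `log_line_prefix` differs:

```python
import re

# Hypothetical log line matching log_line_prefix = '%m [%p] %q%u@%d '
# (%m = timestamp with milliseconds, %p = process id, %u = user, %d = database).
LINE = "2021-06-14 12:34:56.789 UTC [12345] alice@mydb LOG:  duration: 12.3 ms  statement: SELECT 1"

# %q suppresses the user@database part for background processes, so it is optional.
PREFIX = re.compile(
    r"^(?P<ts>\S+ \S+ \S+) "                 # %m: date, time and time zone
    r"\[(?P<pid>\d+)\] "                     # [%p]: backend process id
    r"(?:(?P<user>[^@\s]+)@(?P<db>\S+) )?"   # %q%u@%d: optional session info
    r"(?P<message>.*)$"
)

def parse_line(line):
    """Return the prefix fields of one log line, or an empty dict."""
    match = PREFIX.match(line)
    return match.groupdict() if match else {}

fields = parse_line(LINE)
print(fields["pid"], fields["user"], fields["db"])  # → 12345 alice mydb
```

If the fields do not parse, compare your actual `log_line_prefix` with the one assumed here.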
PostgreSQL server can be configured to log statements and their durations and
this module is able to collect this information. To be able to correlate each
duration with their statements, they must be logged in the same line. This
happens when the following options are used:
["source","sh"]
----------------------------
log_duration = 'on'
log_statement = 'none'
log_min_duration_statement = 0
----------------------------
Setting a zero value in `log_min_duration_statement` will log all statements
executed by a client. You probably want to configure it to a higher value, so it
logs only slower statements. This value is configured in milliseconds.
When using `log_statement` and `log_duration` together, statements and durations
are logged in different lines, and {beatname_uc} is not able to correlate both
values, for this reason it is recommended to disable `log_statement`.
NOTE: The PostgreSQL module of Metricbeat is also able to collect information
about all statements executed in the server. You may choose which one better
fits your needs. An important difference is that the Metricbeat module
collects aggregated information when the statement is executed several times,
but cannot know when each statement was executed. This information can be
obtained from logs.
Other logging options that you may consider enabling are the following:
["source","sh"]
----------------------------
log_checkpoints = 'on';
log_connections = 'on';
log_disconnections = 'on';
log_lock_waits = 'on';
----------------------------
Both `log_connections` and `log_disconnections` can cause a lot of events if you
don't have persistent connections, so enable with care.
[float]
=== Using CSV logs
Since the PostgreSQL CSV log file is a well-defined format,
there is almost no configuration to be done in {beatname_uc}, just the filepath.
On the other hand, it's necessary to configure PostgreSQL to emit `.csv` logs.
The recommended parameters are:
["source","sh"]
----------------------------
logging_collector = 'on';
log_destination = 'csvlog';
----------------------------
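For illustration only, the leading columns of a csvlog row (`log_time`, `user_name`, `database_name`, `process_id`, ...) can be read with any CSV parser. The sample row below is an assumption; verify the exact column order against the documentation of your PostgreSQL version:

```python
import csv
import io

# Hypothetical csvlog row; only the first columns matter for this sketch.
ROW = '2021-06-14 12:34:56.789 UTC,alice,mydb,12345,"10.0.0.5:52314",rest-of-row\n'

reader = csv.reader(io.StringIO(ROW))
# Unpack the first four documented columns of the csvlog format.
log_time, user_name, database_name, process_id = next(reader)[:4]
print(user_name, database_name, process_id)  # → alice mydb 12345
```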
include::../include/configuring-intro.asciidoc[]
The following example shows how to set paths in the +modules.d/{modulename}.yml+
file to override the default paths for PostgreSQL logs:
["source","yaml",subs="attributes"]
-----
- module: postgresql
log:
enabled: true
var.paths: ["/path/to/log/postgres/*.log*"]
-----
To specify the same settings at the command line, you use:
["source","sh",subs="attributes"]
-----
-M "postgresql.log.var.paths=[/path/to/log/postgres/*.log*]"
-----
//set the fileset name used in the included example
:fileset_ex: log
include::../include/config-option-intro.asciidoc[]
[float]
==== `log` fileset settings
include::../include/var-paths.asciidoc[]
[float]
=== Example dashboards
This module comes with two sample dashboards.
The first dashboard is for regular logs.
[role="screenshot"]
image::./images/filebeat-postgresql-overview.png[]
The second one shows the slowlogs of PostgreSQL. If `log_min_duration_statement`
is not used, this dashboard will show incomplete or no data.
[role="screenshot"]
image::./images/filebeat-postgresql-slowlog-overview.png[]
:has-dashboards!:
:fileset_ex!:
:modulename!:
[float]
=== Fields
For a description of each field in the module, see the
<<exported-fields-postgresql,exported fields>> section.
:edit_url!:
// SPDX-License-Identifier: MIT
include::documents/gen/server-version.adoc[]
include::documents/config.adoc[]
= image:sechub-logo.png[sechub] SecHub Quickstart Guide
include::documents/shared/about_sechub.adoc[]
include::documents/shared/about_documentation_all.adoc[]
//--
== Guide
This guide describes how to get started with SecHub.
The following topics are covered:
* [x] Getting SecHub
* [x] Building SecHub
* [x] Starting SecHub server in Integration Test mode
* [x] Default passwords
* [x] Working with the REST API
* [x] Creating a project on SecHub server
* [x] Code scan with the SecHub client
* [x] Stopping the server
=== Requirements
* Java SDK
* Go
* Git
* cURL
* jq
[NOTE]
--
SecHub is compatible with Java 8 and will remain so. We aim to support all long-term support (LTS) versions of the JDK: Java 8, 11 and 17 when released.
SecHub can be built and runs with https://openjdk.java.net/groups/hotspot/[OpenJDK Hotspot] and https://www.eclipse.org/openj9/[Eclipse OpenJ9].
--
==== Alpine Linux
----
apk add openjdk11 go curl git bash jq
----
NOTE: Tested with Alpine Linux 3.12, 3.13 and 3.14.
WARNING: Cross-compiling the SecHub client on Alpine Linux 3.11 does not work (Go version go1.13.13 linux/amd64).
==== Debian
----
sudo apt install openjdk-11-jdk-headless golang git curl jq
----
NOTE: Tested with Debian 10 "Buster".
==== Fedora and CentOS
----
sudo dnf install java-11-openjdk-devel golang git curl jq
----
NOTE: Tested with Fedora 34 and CentOS 8.
==== Ubuntu
----
sudo apt install openjdk-11-jdk-headless golang git curl jq
----
NOTE: Tested with Ubuntu 18.04 "Bionic" and 20.04 "Focal" LTS.
=== Instructions
Let's start with:
. Cloning the repository
+
----
cd ~
git clone https://github.com/Daimler/sechub.git
cd sechub
----
+
[TIP]
--
**Proxy**: +
In case you have to connect to the internet via a proxy, please have a look at how to set up a proxy in the Gradle documentation: https://docs.gradle.org/current/userguide/build_environment.html#sec:accessing_the_web_via_a_proxy[Accessing the web through a HTTP proxy]
Example: +
Add these lines to your ~/.gradle/gradle.properties file:
----
systemProp.http.proxyHost=yourproxy.youcompany.com
systemProp.http.proxyPort=3128
systemProp.http.proxyUser=userid
systemProp.http.proxyPassword=password
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost
----
--
. Build SecHub
+
----
./buildExecutables
----
. Start SecHub server in Integration Test mode
+
----
./gradlew startIntegrationTestServer
----
+
WARNING: Do not use the Integration Test Server mode in production.
. Credentials
+
These are the initial credentials when starting SecHub server in `integration-test` mode:
+
SecHub Superadmin:
+
----
username: int-test_superadmin
password: int-test_superadmin-pwd
----
+
SecHub User Account:
+
----
username: int-test_onlyuser
password: int-test_onlyuser-pwd
----
. Environment variables
+
Set search path and environment variables for the SecHub client and `sechub-api.sh` script:
+
[source,bash]
----
export SECHUB_SERVER=https://localhost:8443
export SECHUB_USERID=int-test_superadmin
export SECHUB_APITOKEN=int-test_superadmin-pwd
export SECHUB_TRUSTALL=true
export PATH="`pwd`/sechub-cli/build/go/platform/linux-amd64:`pwd`/sechub-developertools/scripts:$PATH"
----
. Test: List all users as administrator
+
[NOTE]
`sechub-api.sh` is a helper Bash script based on `curl` that eases the use of the https://daimler.github.io/sechub/latest/sechub-restapi.html[SecHub server REST API]. We use it here to get a list of the users.
+
[source,bash]
----
sechub-api.sh user_list
----
+
Expected result:
+
[source,json]
----
[
"int-test_onlyuser",
"int-test_superadmin"
]
----
. Create a project on SecHub server
+
The output of the API calls are omitted here for better readability:
+
[source,bash]
----
# Create "testproject"
sechub-api.sh project_create testproject int-test_superadmin
# Assign "int-test_superadmin" as scan user to our project
sechub-api.sh project_assign_user testproject int-test_superadmin
# List project details
sechub-api.sh project_details testproject
----
. Scan with SecHub client
+
Let's do a scan of our SecHub code:
+
[source,bash]
----
sechub -project testproject -reportformat html scan
WARNING: Configured to trust all - means unknown service certificate is accepted. Don't use this in production!
_____ _ _ _
/ ___| | | | | | |
\ `--. ___ ___| |_| |_ _| |__
`--. \/ _ \/ __| _ | | | | '_ \
/\__/ / __/ (__| | | | |_| | |_) |
\____/ \___|\___\_| |_/\__,_|_.__/ Client Version 0.0.0-3e13084-dirty-20210622120507
2021-06-22 13:26:24 (+02:00) Creating new sechub job
2021-06-22 13:26:24 (+02:00) Zipping folder: . (/home/user/sechub)
2021-06-22 13:26:25 (+02:00) Uploading source zip file
2021-06-22 13:26:26 (+02:00) Approve sechub job
2021-06-22 13:26:26 (+02:00) Waiting for job 7045e25f-592b-46bf-9713-c31995d37e99 to be done
.
2021-06-22 13:26:30 (+02:00) Fetching result (format=html) for job 7045e25f-592b-46bf-9713-c31995d37e99
2021-06-22 13:26:30 (+02:00) SecHub report written to sechub_report_testproject_7045e25f-592b-46bf-9713-c31995d37e99.html
GREEN - no severe security vulnerabilities identified
----
+
_Congratulations! You have done your first SecHub code scan._ +
You can open the SecHub report file in your browser.
+
[NOTE]
In order to scan, you need a `sechub.json` config file. In our case, it is already in the repository so we can use it right away. +
+
For real results, you have to define an 'execution profile' with a scanner (via a product adapter) attached. Assign it to your project and you get real results. Have a look at the https://daimler.github.io/sechub/latest/sechub-operations.html#section-initial-profile-and-executors[SecHub operations documentation] for details.
. Stop SecHub integration test server
+
----
./gradlew stopIntegrationTestServer
----
==== Troubleshooting
===== Log files
Open the log file `./sechub-integrationtest/integrationtest-server.log` to get more details about the problem.
= Collaborative exams
Collaborative development of questionnaires
== Context
This project will allow teachers to create sets of questions and answers. These questions can be reused by other teachers and combined to create questionnaires or surveys. Students will be able to use these question sets to practice.
The application will leave room for the teachers' subjectivity, for example by allowing them to favor certain questions. To enable reuse, the application will clearly separate objective information (for example, the correct answer to a question) from subjective information (which depends on the teachers, for example the assessment of difficulty, the choice of answers...).
(You may draw inspiration from two https://github.com/oliviercailloux/Collaborative-exams-2016[existing] https://github.com/oliviercailloux/Collaborative-exams-2019[implementations].)
== To do
* Interfaces and objects for Question and associated objects. A question comprises a phrasing, a language, a set of possible answers (each associated with the information indicating whether that answer is correct), and an author. An author is identified by an e-mail address. Use a Person object for authors. A phrasing and an answer are plain texts. The set of answers can be replaced by a "Y/N", "T/F" or "free" marker, indicating a yes/no question, a true/false question, or a free-text question. For this purpose, create a MultipleChoiceQuestion interface that extends Question. The author can associate a personal identifier (a string) with a question. This is stored outside the Question object. This identifier can change. (The pair (author, identifier) must be unique.) These objects are immutable.
* Reading and writing a question in JSON format.
* Editing a new Y/N question through a graphical interface. The question is saved in a file named `Q1.json` in the current directory, or `Q2.json` if `Q1.json` exists, and so on.
* When the single-digit numbers run out, rename all files by prefixing the number with a zero (for example, `Q17.json` becomes `Q017.json`).
* Graphical interfaces for editing other types of questions.
* Creation of exams (in other words, questionnaires): an exam (`Exam`) groups an ordered set of questions; it has an author; it is immutable. A `MultipleChoiceExam` contains only questions of type `MultipleChoiceQuestion`. A graphical interface allows a user to select a subset of questions to include in an exam.
* Being able to take an exam and obtain one's grade afterwards (in the case of an MCE). The answers are also saved in the current directory so they can be viewed later; these answers have an author and identify the exam they respond to.
* It must not be possible to modify an exam after answers have been provided. Editing a question of an exam, or an exam that has an answer, creates a new object rather than modifying the existing one.
* The teacher can specify the coefficient associated with each question, or even the grade associated with each answer (negative points, ...).
* Store a "same ability" equivalence relation between questions, associated with a person. A person can indicate that, in their opinion, one question tests the same skill as another. (This could be used to avoid having those questions appear in the same exam.)
* Store an "improvement" relation between questions, associated with a person. A person can indicate that, in their opinion, one question is better than another while testing the same skills. Examples: fixing a spelling mistake; correcting a wrong answer. (This could be used to present only the questions with the best phrasings.) This implies "Same ability" and "At least as subtle as".
* A web server that returns a random question in JSON given a pair (author e-mail, personal question identifier).
== Other ideas
* Creating and viewing questions in asciidoc format. It must be possible to see the rendering in addition to the source code.
* Store an "at least as subtle as" relation between questions, associated with a person, and create an associated servlet. A person can indicate that (in their opinion) a question is at least as subtle as another. A question is at least as subtle as a variant iff knowing the answer to the first question necessarily implies knowing the answer to the second. Examples of equally subtle variants: a rewording in another literary style, or a translation. The relation is reflexive but not necessarily symmetric or complete.
* Each person can associate a personal identifier with each question (including questions the person did not author). The pair (person, identifier) must be unique. Each person can associate a set of topics with each question (example: Math, Java, Programming). These topics are personal. (So two people can give different topics for the same question.)
* It is possible to display what all users think of the relation between two questions.
* Retrieve all questions that bear the topic S given by user U. More generally, all questions that satisfy a certain query.
* A user can declare that, in their opinion, the questions marked by some user as being of some topic are of some (possibly different) topic. They can however exclude certain questions from this set. (Example: the topic "Java" groups, in my opinion, all questions marked "programming" by some user except questions q1 and q2.) This set adjusts when the followed user changes their opinion.
* A user can create a questionnaire template: they indicate how many questions must be drawn from which topics, possibly with a draw probability for each question within a given topic for this questionnaire.
* A user can use a questionnaire template to generate one or more questionnaires.
* A user can modify a questionnaire (created manually or generated).
* Display of a questionnaire (generated on the fly or previously) and collection of the student's answers.
* Display of the student's grade at the end of the questionnaire.
* Display of corrections at the end of the questionnaire.
* A user can indicate which other users they trust. This only affects the relations. A user always trusts themselves.
* Computation of resulting relations: the display shows the user, and takes into account, the relations that are either endorsed by at least 80% of users, or indicated by users they trust and contradicted by fewer than 20% of users.
* Export of a questionnaire to PDF.
* Possibility to create questions and questionnaires locally rather than online (via a desktop client), to prevent students from seeing the questions before the exam.
* Possibility to upload questions and questionnaires created locally.
* A user can indicate a subjective preference relation between two questions. In that case they do not claim that one is objectively better than the other, but they nevertheless want the weaker one to never be picked for a questionnaire.
| 165.73913 | 834 | 0.802859 |
0d147163d23730fb86b5a1ae389c0fac7dc57df5 | 1,260 | adoc | AsciiDoc | content/reporting/adoc/ru/execution_history/history_output_documents.adoc | Andrew-Archer/documentation | 2430e0d5e96b122079dc3fb05a1cf98aebfc608b | [
"CC-BY-4.0"
] | 14 | 2019-03-04T15:30:49.000Z | 2020-09-11T05:47:07.000Z | content/reporting/adoc/ru/execution_history/history_output_documents.adoc | Andrew-Archer/documentation | 2430e0d5e96b122079dc3fb05a1cf98aebfc608b | [
"CC-BY-4.0"
] | 83 | 2019-01-24T10:11:56.000Z | 2019-03-01T02:23:32.000Z | content/reporting/adoc/ru/execution_history/history_output_documents.adoc | Andrew-Archer/documentation | 2430e0d5e96b122079dc3fb05a1cf98aebfc608b | [
"CC-BY-4.0"
] | 5 | 2019-03-29T14:25:05.000Z | 2020-09-11T05:47:19.000Z | :sourcesdir: ../../../source
[[history_output_documents]]
=== Выходные документы
Механизм предусматривает возможность сохранения выходных документов – файлов результатов отчётов – в {main_man_url}/file_storage.html[хранилище файлов]. Эта функция использует дисковое пространство для хранения файлов; она настраивается отдельно и по умолчанию отключена. Для ее включения определите свойство <<reporting.executionHistory.saveOutputDocument,reporting.executionHistory.saveOutputDocument>> в экране *Administration > Application Properties*:
[source, properties]
----
reporting.executionHistory.saveOutputDocument = true
----
Теперь, если в таблице просмотра истории выбрана запись, кнопка *Download document* становится доступной. При нажатии на кнопку скачивается документ, представляющий собой файл результата отчёта.
Отчёты с типом вывода <<template_chart,chart>>, <<pivotTable_output,pivot table>> и <<table_output,table>> не имеют результирующих файлов, поэтому история исполнения таких отчётов не сохраняет никаких документов.
Если вы вызываете запуск отчёта программно с помощью метода `createAndSaveReport()`, он сохраняет другую копию того же выходного документа в хранилище файлов. Эти два файла помещаются в хранилище независимо друг от друга.
| 70 | 458 | 0.819841 |
fa5d2bc7fd283cd1f0adf2d5fcb0e33fbe095a79 | 362 | adoc | AsciiDoc | content/charts/adoc/en/preface/additional_info.adoc | crontabpy/documentation | 45c57a42ff729207b1967241003bd7b747361552 | [
"CC-BY-4.0"
] | null | null | null | content/charts/adoc/en/preface/additional_info.adoc | crontabpy/documentation | 45c57a42ff729207b1967241003bd7b747361552 | [
"CC-BY-4.0"
] | null | null | null | content/charts/adoc/en/preface/additional_info.adoc | crontabpy/documentation | 45c57a42ff729207b1967241003bd7b747361552 | [
"CC-BY-4.0"
] | null | null | null | :sourcesdir: ../../../source
[[additional_info]]
=== Additional Materials
This guide, as well as any other documentation on CUBA platform, is available at https://www.cuba-platform.com/manual.
CUBA charts display subsystem implementation is based on *AmCharts* library, therefore familiarity with this library may be beneficial. See http://www.amcharts.com.
| 36.2 | 164 | 0.770718 |
e13f6ac8cc746199271d10ee812cba55831ef38e | 190 | adoc | AsciiDoc | src/main/bash/etc/declarations/errorcodes/052.adoc | vdmeer/skb-framework | 2fe7e0b163654967dea70317c2153517d80049ba | [
"Apache-2.0"
] | null | null | null | src/main/bash/etc/declarations/errorcodes/052.adoc | vdmeer/skb-framework | 2fe7e0b163654967dea70317c2153517d80049ba | [
"Apache-2.0"
] | 1 | 2019-05-28T22:32:40.000Z | 2019-05-28T22:40:53.000Z | src/main/bash/etc/declarations/errorcodes/052.adoc | vdmeer/skb-framework | 2fe7e0b163654967dea70317c2153517d80049ba | [
"Apache-2.0"
] | null | null | null | A task has found an error in its command line.
This happens when a task is parsing the command line and detects one or more unknown options.
Detailed error messages should have been printed. | 63.333333 | 93 | 0.810526 |
86f8b3a4d6b37c18d59b30cda94b6f11f12497c3 | 3,727 | adoc | AsciiDoc | doc-content/drools-docs/src/main/asciidoc/DecisionEngine/cep-clock-ref.adoc | tiagodolphine/kie-docs | f24afcfaa538b1f74769ef9b2b526dd6d8ef371c | [
"Apache-2.0"
] | 35 | 2017-03-20T06:05:47.000Z | 2022-01-17T19:06:21.000Z | doc-content/drools-docs/src/main/asciidoc/DecisionEngine/cep-clock-ref.adoc | tiagodolphine/kie-docs | f24afcfaa538b1f74769ef9b2b526dd6d8ef371c | [
"Apache-2.0"
] | 2,306 | 2017-03-13T15:02:48.000Z | 2022-03-31T12:49:12.000Z | doc-content/drools-docs/src/main/asciidoc/DecisionEngine/cep-clock-ref.adoc | tiagodolphine/kie-docs | f24afcfaa538b1f74769ef9b2b526dd6d8ef371c | [
"Apache-2.0"
] | 170 | 2017-03-13T12:51:20.000Z | 2022-02-25T13:46:45.000Z | [id='cep-clock-ref_{context}']
= Session clock implementations in the {DECISION_ENGINE}
During complex event processing, events in the {DECISION_ENGINE} may have temporal constraints and therefore require a session clock that provides the current time. For example, if a rule needs to determine the average price of a given stock over the last 60 minutes, the {DECISION_ENGINE} must be able to compare the stock price event time stamp with the current time in the session clock.
The {DECISION_ENGINE} supports a real-time clock and a pseudo clock. You can use one or both clock types depending on the scenario:
* *Rules testing:* Testing requires a controlled environment, and when the tests include rules with temporal constraints, you must be able to control the input rules and facts and the flow of time.
* *Regular execution:* The {DECISION_ENGINE} reacts to events in real time and therefore requires a real-time clock.
* *Special environments:* Specific environments may have specific time control requirements. For example, clustered environments may require clock synchronization or Java Enterprise Edition (JEE) environments may require a clock provided by the application server.
* *Rules replay or simulation:* In order to replay or simulate scenarios, the application must be able to control the flow of time.
Consider your environment requirements as you decide whether to use a real-time clock or pseudo clock in the {DECISION_ENGINE}.
Real-time clock::
The real-time clock is the default clock implementation in the {DECISION_ENGINE} and uses the system clock to determine the current time for time stamps. To configure the {DECISION_ENGINE} to use the real-time clock, set the KIE session configuration parameter to `realtime`:
+
--
.Configure real-time clock in KIE session
[source,java]
----
import org.kie.api.KieServices.Factory;
import org.kie.api.runtime.conf.ClockTypeOption;
import org.kie.api.runtime.KieSessionConfiguration;
KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption(ClockTypeOption.get("realtime"));
----
--
Pseudo clock::
The pseudo clock implementation in the {DECISION_ENGINE} is helpful for testing temporal rules and it can be controlled by the application. To configure the {DECISION_ENGINE} to use the pseudo clock, set the KIE session configuration parameter to `pseudo`:
+
--
.Configure pseudo clock in KIE session
[source,java]
----
import org.kie.api.runtime.conf.ClockTypeOption;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.KieServices.Factory;
KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption(ClockTypeOption.get("pseudo"));
----
You can also use additional configurations and fact handlers to control the pseudo clock:
.Control pseudo clock behavior in KIE session
[source,java]
----
import java.util.concurrent.TimeUnit;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.KieServices.Factory;
import org.kie.api.runtime.KieSession;
import org.drools.core.time.SessionPseudoClock;
import org.kie.api.runtime.rule.FactHandle;
import org.kie.api.runtime.conf.ClockTypeOption;
KieSessionConfiguration conf = KieServices.Factory.get().newKieSessionConfiguration();
conf.setOption( ClockTypeOption.get("pseudo"));
KieSession session = kbase.newKieSession(conf, null);
SessionPseudoClock clock = session.getSessionClock();
// While inserting facts, advance the clock as necessary.
FactHandle handle1 = session.insert(tick1);
clock.advanceTime(10, TimeUnit.SECONDS);
FactHandle handle2 = session.insert(tick2);
clock.advanceTime(30, TimeUnit.SECONDS);
FactHandle handle3 = session.insert(tick3);
----
--
| 46.5875 | 390 | 0.800376 |
f4fa5789a6192c1ab1b9590ad6af489a178d2101 | 1,921 | adoc | AsciiDoc | install_config/topics/configuring_a_security_group.adoc | bbeaudoin/openshift-docs | ce56a876a91734fbd98f9f7a714736b956af7eb4 | [
"Apache-2.0"
] | 2 | 2019-08-21T04:51:15.000Z | 2019-08-21T04:52:22.000Z | install_config/topics/configuring_a_security_group.adoc | VonRosenchild/openshift-docs | ab671324bdeaba9d907ab7748105778fc3cd1290 | [
"Apache-2.0"
] | null | null | null | install_config/topics/configuring_a_security_group.adoc | VonRosenchild/openshift-docs | ab671324bdeaba9d907ab7748105778fc3cd1290 | [
"Apache-2.0"
] | 1 | 2019-08-21T04:50:51.000Z | 2019-08-21T04:50:51.000Z | ////
Configuring a Security Group
This module included in the following assemblies:
* install_config/configuring_aws.adoc
* install_config/configuring_openstack.adoc
////
These are some ports that you must have in your security
groups, without which the installation fails. You may need more depending on
the cluster configuration you want to install. For more information and to
adjust your security groups accordingly, see xref:../install_config/install/prerequisites.adoc#required-ports[Required Ports]
for more information.
[cols="h,2"]
|===
|All {product-title} Hosts
a|- tcp/22 from host running the installer/Ansible
|etcd Security Group
a|- tcp/2379 from masters
- tcp/2380 from etcd hosts
|Master Security Group
a|- tcp/8443 from 0.0.0.0/0
ifdef::openshift-origin[]
- tcp/53 from all {product-title} hosts for environments installed prior to or upgraded to 1.2
- udp/53 from all {product-title} hosts for environments installed prior to or upgraded to 1.2
- tcp/8053 from all {product-title} hosts for new environments installed with 1.2
- udp/8053 from all {product-title} hosts for new environments installed with 1.2
endif::[]
ifdef::openshift-enterprise[]
- tcp/53 from all {product-title} hosts for environments installed prior to or upgraded to 3.2
- udp/53 from all {product-title} hosts for environments installed prior to or upgraded to 3.2
- tcp/8053 from all {product-title} hosts for new environments installed with 3.2
- udp/8053 from all {product-title} hosts for new environments installed with 3.2
endif::[]
|Node Security Group
a|- tcp/10250 from masters
- udp/4789 from nodes
|Infrastructure Nodes
(ones that can host the {product-title} router)
a|- tcp/443 from 0.0.0.0/0
- tcp/80 from 0.0.0.0/0
|===
If configuring external load-balancers (ELBs) for load balancing the masters
and/or routers, you also need to configure Ingress and Egress security groups
for the ELBs appropriately.
= (global) Response mappings
A global response mapping replaces the result type of an endpoint in the API description, based
on its **content type**, with the given Java type.
It is defined as below and should be added to the `map/responses` section of the `mapping.yaml`,
which is a list of global response mappings.
A single global response mapping can have the following properties:
[source,yaml]
----
- content: {content type} => {target type}
generics:
- {a generic type}
- {another generic type}
----
* **content** is required.
** **{content type}** is the name of the content type of the endpoint response that should be replaced
by **{target type}**.
** **{target type}** is the fully qualified class name of the java type that should be used for all
endpoint content types **{content type}**.
* **generics** defines the list of types that should be used as generic type parameters to the
java type given by **{target type}**.
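For example, a response mapping that uses generic type parameters could look like the snippet below. Note that the `Page` target type and its generic parameter are just an illustration, not part of this project:

```yaml
map:
  responses:
    # map a vendor content type to a (hypothetical) generic wrapper type,
    # producing Page<String> as the endpoint result type
    - content: application/vnd.page => org.springframework.data.domain.Page
      generics:
        - java.lang.String
```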
[CAUTION]
====
Since the processor simply matches the content type string, take care that all responses with this
content type really should use the same type!
This is probably only useful for vendor content types. Globally mapping a common content type like
`application/json` does not look like a good idea.
====
== Example
Given the following (global) response mapping
[source,yaml]
----
map:
# list of global response mappings, mapped by content type
responses:
- content: application/vnd.something => io.openapiprocessor.Something
----
and an openapi.yaml with multiple endpoints returning their result as content type
`application/vnd.something`
[source,yaml]
----
openapi: 3.0.2
info:
title: global response content type mapping example
version: 1.0.0
paths:
/do-something:
get:
responses:
'200':
description: response
content:
application/vnd.something:
schema:
type: string
/do-something-else:
get:
responses:
'200':
description: response
content:
application/vnd.something:
schema:
type: string
----
the processor will use `io.openapiprocessor.Something` as java type for **all** responses with
the content type `application/vnd.something`.
== Loops
Loops give you control over the repetitive steps in your _control flow_. Python has two different kinds of loops we will talk about: *while* loops and *for* loops.
Either one can often be used interchangeably; but, as you will see, there are a couple of cases where using one over the other makes more sense.
The primary purpose of loops is to avoid having lots of repetitive code.
=== While Loop
Loop through a block of code (the body) WHILE a condition is true.
[source,python]
----
while (condition_is_true):
# execute the code statements
# in the loop body
pass
----
See the code below.
In this case, we start with a simple counter in x = 1. Then, after the loop starts, it checks to see if x < 6, and 1 is less than 6, so the loop body gets executed. We print out 1 and then increment x. Then we go to the top of the loop and check to see if x (now 2) is less than 6. That's true, so we print out 2 and increment x again. This continues three more times, printing 3, 4, and 5.
Then, x is incremented to 6, and the check is made again, 6 < 6 ... well, no that is false. So we don't execute the loop's body and we fall through to the last print line, and print out x.
[source,python]
----
x = 1
while (x < 6):
print(x)
x += 1
print("ending at", x) # ? what will print here ?
----
While loops work well in situations where the condition you are testing at the top
of the loop is one that may not be related to a simple number.
[source,python]
----
while (player[1].isAlive() == True):
player[1].takeTurn()
game.updateStatus(player[1])
----
This will keep letting player[1] take a turn in the game until the player dies. Another way to do something like this is with an _infinite loop_.
(No, infinite loops are not necessarily a bad thing, watch.)
We're going to use both *continue* and *break* in this example, and we will describe them better after we're done with loops.
[source,python]
----
player = game.newPlayer()
while (True): # <- notice right here, an infinite loop
    player.takeTurn()
    game.updateScores()
    game.advanceTime()
    if (player.isAlive() == True):
        continue # start at top of loop again.
    else:
        break # breaks out of loop and ends game.
game.sayToHuman("Game Over!")
----
Here, we are using the continue statement to force the flow of control to the top of the loop if the player is still alive after the 'take turn' code. We are also using the break statement to break out of the infinite loop when the player dies, letting us do other things after the player has 'died'.
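To see *continue* and *break* working together in something you can actually run, here is a small sketch (it is not part of the game example; the numbers are made up just for illustration):

```python
# Sum only the even numbers, stopping once the total passes 20.
total = 0
n = 0
while True:            # infinite loop, just like the game loop above
    n += 1
    if n % 2 == 1:
        continue       # odd number: skip back to the top of the loop
    total += n
    if total > 20:
        break          # total is big enough: leave the loop entirely
print(total)  # prints 30 (2 + 4 + 6 + 8 + 10)
```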
[[assertj-db-features]]
=== Features highlight
Before reading this page, it is recommended to be familiar with the <<assertj-db-concepts,concepts of AssertJ-DB>>.
The purpose of this page is to show the different features of AssertJ-DB.
* <<assertj-db-features-navigation,Navigation>>
** <<assertj-db-features-tableorrequestasroot,With a Table or a Request as root>>
*** <<assertj-db-features-tableorrequesttorow,To a Row>>
*** <<assertj-db-features-tableorrequesttocolumn,To a Column>>
*** <<assertj-db-features-tableorrequesttovalue,To a Value>>
** <<assertj-db-features-changesasroot,With Changes as root>>
*** <<assertj-db-features-changestochanges,To Changes>>
*** <<assertj-db-features-changestochange,To a Change>>
*** <<assertj-db-features-changestorow,To a Row>>
*** <<assertj-db-features-changestocolumn,To a Column>>
*** <<assertj-db-features-changestovalue,To a Value>>
* <<assertj-db-features-assertions,Assertions>>
** <<assertj-db-features-onchangetype,On the type of change>>
** <<assertj-db-features-oncolumnequality,On the equality with the values of a column>>
*** <<assertj-db-features-oncolumnequality-boolean,With Boolean>>
*** <<assertj-db-features-oncolumnequality-bytes,With Bytes>>
*** <<assertj-db-features-oncolumnequality-number,With Number>>
*** <<assertj-db-features-oncolumnequality-date,With Date>>
*** <<assertj-db-features-oncolumnequality-time,With Time>>
*** <<assertj-db-features-oncolumnequality-datetime,With Date/Time>>
*** <<assertj-db-features-oncolumnequality-string,With String>>
*** <<assertj-db-features-oncolumnequality-uuid,With UUID>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
*** <<assertj-db-features-oncolumnequality-character,With Character>> [.small]#(since in <<assertj-db-1-2-0-release-notes,1.2.0>>)#
** <<assertj-db-features-oncolumnname,On the name of a column>>
** <<assertj-db-features-oncolumnnullity,On the nullity of the values of a column>>
** <<assertj-db-features-onrownullity,On the nullity of the values of a row>> [.small]#(since in <<assertj-db-1-2-0-release-notes,1.2.0>>)#
** <<assertj-db-features-oncolumntype,On the type of column>>
** <<assertj-db-features-oncolumnclass,On the class of column>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
** <<assertj-db-features-oncolumncontent,On the content of column>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
** <<assertj-db-features-ondatatype,On the type of data>>
** <<assertj-db-features-onmodifiedcolumns,On the modified columns in a change>> [.small]#(modified in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
** <<assertj-db-features-onnumberchanges,On the number of changes>> [.small]#(modified in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
** <<assertj-db-features-onnumbercolumns,On the number of columns>> [.small]#(modified in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
** <<assertj-db-features-onnumberrows,On the number of rows>> [.small]#(modified in <<assertj-db-1-1-0-release-notes,1.1.0>> and <<assertj-db-1-2-0-release-notes,1.2.0>>)#
** <<assertj-db-features-onprimarykeys,On the primary keys>>
** <<assertj-db-features-onrowequality,On the equality with the values of a row>>
** <<assertj-db-features-onrowexistence,On the existence of a row in a change>>
** <<assertj-db-features-onvaluechronology,On the chronology of a value>>
** <<assertj-db-features-onvaluecomparison,On the comparison with a value>>
** <<assertj-db-features-onvaluecloseness,On the closeness of a value>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
** <<assertj-db-features-onvaluequality,On the equality with a value>>
*** <<assertj-db-features-onvaluequality-boolean,With Boolean>>
*** <<assertj-db-features-onvaluequality-bytes,With Bytes>>
*** <<assertj-db-features-onvaluequality-number,With Number>>
*** <<assertj-db-features-onvaluequality-date,With Date>>
*** <<assertj-db-features-onvaluequality-time,With Time>>
*** <<assertj-db-features-onvaluequality-datetime,With Date/Time>>
*** <<assertj-db-features-onvaluequality-string,With String>>
*** <<assertj-db-features-onvaluequality-uuid,With UUID>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
*** <<assertj-db-features-onvaluequality-character,With Character>> [.small]#(since in <<assertj-db-1-2-0-release-notes,1.2.0>>)#
** <<assertj-db-features-onvaluenonquality,On the non equality with a value>>
*** <<assertj-db-features-onvaluenonquality-boolean,With Boolean>>
*** <<assertj-db-features-onvaluenonquality-bytes,With Bytes>>
*** <<assertj-db-features-onvaluenonquality-number,With Number>>
*** <<assertj-db-features-onvaluenonquality-date,With Date>>
*** <<assertj-db-features-onvaluenonquality-time,With Time>>
*** <<assertj-db-features-onvaluenonquality-datetime,With Date/Time>>
*** <<assertj-db-features-onvaluenonquality-string,With String>>
*** <<assertj-db-features-onvaluenonquality-uuid,With UUID>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
*** <<assertj-db-features-onvaluenonquality-character,With Character>> [.small]#(since in <<assertj-db-1-2-0-release-notes,1.2.0>>)#
** <<assertj-db-features-onvaluenullity,On the nullity of a value>>
** <<assertj-db-features-onvaluetype,On the type of a value>>
** <<assertj-db-features-onvalueclass,On the class of a value>> [.small]#(since in <<assertj-db-1-1-0-release-notes,1.1.0>>)#
[[assertj-db-features-navigation]]
==== Navigation
[[assertj-db-features-tableorrequestasroot]]
===== With a Table or a Request as root
As shown in the <<assertj-db-concepts-tableorrequestasroot,concepts>> (knowing the concepts of AssertJ-DB makes this chapter easier to follow),
the `assertThat(...)` static method is used
to begin an assertion
on a https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/type/Table.html[Table]
or on a https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/type/Request.html[Request].
The navigation from a table and from a request is similar, so in most of the examples below a table will be used:
[source,java]
----
assertThat(tableOrRequest)...
----
If there is a difference, it will be specified.
All the navigation methods work from an origin point.
That means that if a method is executed from another point,
it is as if the execution happened from the point of view of the origin.
There are some recurring patterns in the different navigation methods:
* a method without parameter, which allows to navigate to the next element after the element reached on the last call
(if it is the first call, it navigates to the first element)
* a method with an `int` parameter (an index), which allows to navigate to the element
at the corresponding index
* a method with a `String` parameter (a column name), which allows to navigate to the element
corresponding to the column name
[[assertj-db-features-tableorrequesttorow]]
====== To a Row
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToRow.html[ToRow] interface.
The `row()` method allows to navigate to the next row after the row reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first row
assertThat(tableOrRequest).row()...
// It is possible to chain the calls to navigate to the next row
// after the first row (so the second row)
assertThat(tableOrRequest).row().row()...
----
The `row(int index)` method with `index` as parameter
allows to navigate to the row at the given index.
[source,java]
----
// Navigate to the row at index 2
assertThat(tableOrRequest).row(2)...
// It is possible to chain the calls to navigate to another row.
// Here row at index 6
assertThat(tableOrRequest).row(2).row(6)...
// It is possible to combine the calls to navigate to the next row
// after the row at index 2. Here row at index 3
assertThat(tableOrRequest).row(2).row()...
----
This picture shows from where it is possible to navigate to a row.
[ditaa, target="db-navigation-with-table-or-request-to-row", shadows=false, transparent=true]
....
+-----------+<--------+----------------------+
+--------->|On a Column| |On a Value Of a Column|
| +-----+-----+-------->+---------------------++
| | : |
+----------------+------+<--------+ | |
|On a Table Or a Request| navigate to a row |
+----+-----------+------+<--------+ | |
: | | v |
| | +-----+-----+<--------+-------------------+ |
| +--------->| On a Row | |On a Value Of a Row| |
| +--+----+---+-------->+----------+--------+ |
| ^ ^ : ^ : |
| navigate to a row | | | | | |
+----------------------+ +----+ +--------------------+----------+
navigate to a row navigate to a row
....
The origin point of the `row(...)` methods is the Table or the Request.
So if the method is executed from a row, from a column or from a value
it is as if the method was executed from the Table or the Request.
When the position is on a row, it is possible to return to the origin.
[source,java]
----
// Return to the table from a row of a table
assertThat(table).row().returnToTable()...
// Return to the request from a row of a request
assertThat(request).row().returnToRequest()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first row
// Return to the table from this row
// Navigate to the next row
assertThat(table).row().returnToTable().row()...
// The same thing is done but the return to the table is implicit
assertThat(table).row().row()...
----
[[assertj-db-features-tableorrequesttocolumn]]
====== To a Column
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToColumn.html[ToColumn] interface.
The `column()` method allows to navigate to the next column after the column reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first column
assertThat(tableOrRequest).column()...
// It is possible to chain the calls to navigate to the next column
// after the first column (so the second column)
assertThat(tableOrRequest).column().column()...
----
The `column(int index)` method with `index` as parameter
allows to navigate to the column at the given index.
[source,java]
----
// Navigate to the column at index 2
assertThat(tableOrRequest).column(2)...
// It is possible to chain the calls to navigate to another column.
// Here column at index 6
assertThat(tableOrRequest).column(2).column(6)...
// It is possible to combine the calls to navigate to the next column
// after the column at index 2. Here column at index 3
assertThat(tableOrRequest).column(2).column()...
// It is possible to combine the calls with other navigation methods
// Here first column
assertThat(tableOrRequest).row(2).column()...
// Here column at index 3
assertThat(tableOrRequest).row(2).column(3)...
// Here column at index 4 because the origin remembers the last navigation to a column
assertThat(tableOrRequest).column(3).row(2).column()...
----
The `column(String columnName)` method with `columnName` as parameter
allows to navigate to the column with the given column name.
[source,java]
----
// Navigate to the column with the name "SURNAME"
assertThat(tableOrRequest).column("surname")...
// Like for the other methods, it is possible to chain the calls
assertThat(tableOrRequest).column("surname").column().column(6).column("id")...
----
This picture shows from where it is possible to navigate to a column.
[ditaa, target="db-navigation-with-table-or-request-to-column", shadows=false, transparent=true]
....
navigate to a column navigate to a column navigate to a column
+----------------------+ +----+ +---------------------+-----------+
| | | | | | |
| v v : v | |
| +--+----+---+<--------+-----------+----------+|
| +--------->|On a Column| |On a Value Of a Column||
| | +-----+-----+-------->+---------------------++|
: | | ^ |
+----+-----------+------+<--------+ | |
|On a Table Or a Request| navigate to a column |
+----------------+------+<--------+ | |
| | : |
| +-----+-----+<--------+-------------------+ |
+--------->| On a Row | |On a Value Of a Row+-=-+
+-----------+-------->+-------------------+
....
The origin point of the `column(...)` methods is the Table or the Request.
So if the method is executed from a row, from a column or from a value
it is as if the method was executed from the Table or the Request.
When the position is on a column, it is possible to return to the origin.
[source,java]
----
// Return to the table from a column of a table
assertThat(table).column().returnToTable()...
// Return to the request from a column of a request
assertThat(request).column().returnToRequest()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first column
// Return to the table from this column
// Navigate to the next column
assertThat(table).column().returnToTable().column()...
// The same thing is done but the return to the table is implicit
assertThat(table).column().column()...
----
[[assertj-db-features-tableorrequesttovalue]]
====== To a Value
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToValue.html[ToValue]
and the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToValueFromRow.html[ToValueFromRow] interfaces.
The `value()` method allows to navigate to the next value after the value reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first value
assertThat(tableOrRequest).row().value()...
// It is possible to chain the calls to navigate to the next value
// after the first value (so the second value)
assertThat(tableOrRequest).column().value().value()...
----
The `value(int index)` method with `index` as parameter
allows to navigate to the value at the given index.
[source,java]
----
// Navigate to the value at index 2
assertThat(tableOrRequest).column().value(2)...
// It is possible to chain the calls to navigate to another value.
// Here value at index 6
assertThat(tableOrRequest).row(4).value(2).value(6)...
// It is possible to combine the calls to navigate to the next value
// after the value at index 2. Here value at index 3
assertThat(tableOrRequest).column(4).value(2).value()...
// Here value at index 4 because the origin remembers the last navigation to a column
assertThat(tableOrRequest).column().value(3).row(2).column(0).value()...
----
The `value(String columnName)` method with `columnName` as parameter (only available from a row)
allows to navigate to the value of the column with the given column name.
[source,java]
----
// Navigate to the value of the column with the name "SURNAME"
assertThat(tableOrRequest).row().value("surname")...
// Like for the other methods, it is possible to chain the calls
assertThat(tableOrRequest).row().value("surname").value().value(6).value("id")...
----
This picture shows from where it is possible to navigate to a value.
[ditaa, target="db-navigation-with-table-or-request-to-value", shadows=false, transparent=true]
....
+--------------------+
|navigate to a value |
: v
+-----+-----+<--+----------+-----------+=--+
+-->|On a Column| |On a Value Of a Column| |navigate to a value
| +-----+-----+-->+----------------------+<--+
| |
+----------------+------+<-+
|On a Table Or a Request|
+----------------+------+<-+
| |
| +-----+-----+<--+-------------------+=--+
+-->| On a Row | |On a Value Of a Row| |navigate to a value
+-----+-----+-->+----------+--------+<--+
: ^
|navigate to a value |
+--------------------+
....
The origin point of the `value(...)` methods is the Row or the Column.
So if the method is executed from a value
it is as if the method was executed from the Row or the Column.
When the position is on a value, it is possible to return to the origin.
[source,java]
----
// Return to the column from a value
assertThat(table).column().value().returnToColumn()...
// Return to the row from a value
assertThat(request).row().value().returnToRow()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first column
// Navigate to the first value
// Return to the column from this value
// Navigate to the next value
assertThat(table).column().value().returnToColumn().value()...
// The same thing is done but the return to the column is implicit
assertThat(table).column().value().value()...
----
[[assertj-db-features-changesasroot]]
===== With Changes as root
[[assertj-db-features-changestochanges]]
====== To Changes
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToChanges.html[ToChanges] interface.
The `ofCreation()` method allows to navigate to the changes of creation.
[source,java]
----
// Navigate to the changes of creation
assertThat(changes).ofCreation()...
----
The `ofCreationOnTable(String tableName)` method with `tableName` as parameter
allows to navigate to the changes of creation of a table.
[source,java]
----
// Navigate to the changes of creation on the "members" table
assertThat(changes).ofCreationOnTable("members")...
----
The `ofModification()` method allows to navigate to the changes of modification.
[source,java]
----
// Navigate to the changes of modification
assertThat(changes).ofModification()...
----
The `ofModificationOnTable(String tableName)` method with `tableName` as parameter
allows to navigate to the changes of modification of a table.
[source,java]
----
// Navigate to the changes of modification on the "members" table
assertThat(changes).ofModificationOnTable("members")...
----
The `ofDeletion()` method allows to navigate to the changes of deletion.
[source,java]
----
// Navigate to the changes of deletion
assertThat(changes).ofDeletion()...
----
The `ofDeletionOnTable(String tableName)` method with `tableName` as parameter
allows to navigate to the changes of deletion of a table.
[source,java]
----
// Navigate to the changes of deletion on the "members" table
assertThat(changes).ofDeletionOnTable("members")...
----
The `onTable(String tableName)` method with `tableName` as parameter
allows to navigate to the changes of a table.
[source,java]
----
// Navigate to all the changes on the "members" table
assertThat(changes).onTable("members")...
----
The `ofAll()` method allows to navigate to all the changes.
[source,java]
----
// Navigate to all the changes
assertThat(changes).ofAll()...
// The navigation can be chained
assertThat(changes).ofCreation().ofAll()...
----
This picture shows from where it is possible to navigate to changes.
[ditaa, target="db-navigation-with-changes-to-changes", shadows=false, transparent=true]
....
+--------------------------------+----------------------+
| navigate to changes | :
| +-----+-----+<---+-----------+----------+
|navigate to changes +-->|On a Column| |On a Value Of a Column|
+------------------+ | +-----+-----+--->+----------------------+
| | | |
navigate to changes v : | |
+------------=+-----+-----+<-----+-----+---+-+<------+
| |On Changes | |On a Change|
+------------>+-----+-----+----->+---------+-+<------+
^ | |
| | +-----+--+<---+-------------------+
| +-->|On a Row| |On a Value Of a Row|
| +-----+--+--->+-----------+-------+
| navigate to changes : |
+--------------------------------+-------------------+
....
The origin point of these methods is the Changes.
So if the method is executed from a change, a column, a row or a value
it is as if the method was executed from the Changes.
[[assertj-db-features-changestochange]]
====== To a Change
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToChange.html[ToChange] interface.
The `change()` method allows to navigate to the next change after the change reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first change
assertThat(changes).change()...
// It is possible to chain the calls to navigate to the next change
// after the first change (so the second change)
assertThat(changes).change().change()...
----
The `change(int index)` method with `index` as parameter
allows to navigate to the change at the given index.
[source,java]
----
// Navigate to the change at index 2
assertThat(changes).change().change(2)...
// It is possible to chain the calls to navigate to another change.
// Here change at index 7
assertThat(changes).change(6).change()...
----
The `changeOnTable(String tableName)` method with `tableName` as parameter
allows to navigate to the next change corresponding to the table name after the change corresponding to the table name reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first change on "members" table
assertThat(changes).changeOnTable("members")...
// It is possible to chain the calls to navigate to the next change on the "members" table
// after the first change on the "members" table (so the second change)
assertThat(changes).changeOnTable("members").changeOnTable("members")...
----
The `changeOnTable(String tableName, int index)` method with `tableName` and `index` as parameters
allows to navigate to the change at the given index among the changes on the table.
[source,java]
----
// Navigate to the change at index 2 of "members" table
assertThat(changes).changeOnTable("members").changeOnTable("members", 2)...
// It is possible to chain the calls to navigate to another change.
// Here change at index 7 of "members" table
assertThat(changes).changeOnTable("members", 6).changeOnTable("members")...
----
There are 12 other methods derived from the 4 methods above:
* `changeOfCreation()`, `changeOfModification()` and `changeOfDeletion()`
methods, which allow to navigate to the next change of creation, modification or deletion like the `change()` method
[source,java]
----
// If it is the first call, navigate to the first change of creation
assertThat(changes).changeOfCreation()...
// Navigate to the the first change of creation
// and after the second change of creation
assertThat(changes).changeOfCreation().changeOfCreation()...
----
* `changeOfCreation(int index)`, `changeOfModification(int index)` and `changeOfDeletion(int index)`
methods with `index` as parameter, which allow to navigate to the change of creation, modification or deletion at the given index like the `change(int index)` method
[source,java]
----
// Navigate to the change of modification at index 2
assertThat(changes).changeOfModification()
.changeOfModification(2)...
// It is possible to chain the calls
// to navigate to another change of modification.
// Here change of modification at index 5
assertThat(changes).changeOfModification(4)
.changeOfModification()...
----
* `changeOfCreationOnTable(String tableName)`, `changeOfModificationOnTable(String tableName)` and `changeOfDeletionOnTable(String tableName)`
methods with `tableName` as parameter, which allow to navigate to the next change of creation, modification or deletion corresponding to the table name like the `changeOnTable(String tableName)` method
[source,java]
----
// If it is the first call, navigate
// to the first change of creation on "members" table
assertThat(changes).changeOfCreationOnTable("members")...
// It is possible to chain the calls to navigate
// to the next change of creation on the "members" table
// after the first change of creation on the "members" table
// (so the second change of creation)
assertThat(changes).changeOfCreationOnTable("members")
.changeOfCreationOnTable("members")...
----
* `changeOfCreationOnTable(String tableName, int index)`, `changeOfModificationOnTable(String tableName, int index)` and `changeOfDeletionOnTable(String tableName, int index)`
methods with `tableName` and `index` as parameters, which allow to navigate to the change of creation, modification or deletion corresponding to the table name and the index like the `changeOnTable(String tableName, int index)` method
[source,java]
----
// Navigate to the change of deletion at index 2 of "members" table
assertThat(changes).changeOfDeletionOnTable("members")
.changeOfDeletionOnTable("members", 2)...
// It is possible to chain the calls
// to navigate to another change of deletion.
// Here change of deletion at index 7 of "members" table
assertThat(changes).changeOfDeletionOnTable("members", 6)
.changeOfDeletionOnTable("members")...
----
The `changeOnTableWithPks(String tableName, Object... pksValues)` method
allows to navigate to the change corresponding to the table and the primary keys.
[source,java]
----
// Navigate to the change with primary key 1 of "members" table
assertThat(changes).changeOnTableWithPks("members", 1)...
// It is possible to chain the calls to navigate to the next change
// after the change with primary key 1 of "members" table
assertThat(changes).changeOnTableWithPks("members", 1).change()...
----
This picture shows from where it is possible to navigate to a change.
[ditaa, target="db-navigation-with-changes-to-change", shadows=false, transparent=true]
....
navigate to change
+-------------+----------------------+
| | :
navigate to change | +-----+-----+<---+-----------+----------+
+------------------+ +-->|On a Column| |On a Value Of a Column|
| | | +-----+-----+--->+----------------------+
: v | |
+-----+-----+<-----+-----+---+-+<------+
|On Changes | |On a Change|
+-----------+----->+-----+---+-+<------+
| ^ | |
| | | +-----+--+<---+-------------------+
navigate to change| | +-->|On a Row| |On a Value Of a Row|
| | +-----+--+--->+-----------+-------+
| | : |
+-----+-------------+-------------------+
navigate to change
....
The origin point of the `change(...)` methods is the current Changes
and the origin point of other methods is the Changes of origin.
So if the method is executed from a change, a column, a row or a value
it is as if the method was executed from these origins.
That means there is an important difference.
[source,java]
----
// Navigate to the changes of deletion
// Navigate to the first change of this changes of deletion
assertThat(changes).ofDeletion().change()...
// Navigate to the changes of deletion
// Navigate to the first change of creation (among all the changes, not only the changes of deletion)
assertThat(changes).ofDeletion().changeOfCreation()...
// This is equivalent to
assertThat(changes).ofDeletion().ofAll().changeOfCreation()...
----
When the position is on a change, it is possible to return to the origin.
[source,java]
----
// Return to the changes from a change
assertThat(changes).change().returnToChanges()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first change
// Return to the changes
// Navigate to the next change
assertThat(changes).change().returnToChanges().change()...
// The same thing is done but the return to the changes is implicit
assertThat(changes).change().change()...
----
[[assertj-db-features-changestorow]]
====== To a Row
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToRowFromChange.html[ToRowFromChange] interface.
The `rowAtStartPoint()` and `rowAtEndPoint()` methods
allow navigating to the row at the start point and at the end point.
[source,java]
----
// Navigate to the row at the start point
assertThat(changes).change().rowAtStartPoint()...
// Navigate to the row at the end point (note that the methods can be chained)
assertThat(changes).change().rowAtStartPoint().rowAtEndPoint()...
----
This picture shows from where it is possible to navigate to a row.
[ditaa, target="db-navigation-with-changes-to-row", shadows=false, transparent=true]
....
+-----------+<---+----------------------+
+-->|On a Column| |On a Value Of a Column|
| +-----+--+--+--->+--------------------+-+
| | : :
+-----------+<-----+-----+-----+<--+ | |
|On Changes | |On a Change| navigate to a row |
+-----------+----->+-+---+-----+<--+ | |
: | | v |
navigate to a row| | +-----+--+<---+-------------------+ |
| +-->|On a Row| |On a Value Of a Row| |
+------>++--+--+-+--->+-----------+-------+ |
: ^ ^ | |
| | | | |
+--+ +------------------+-----------+
navigate to a row navigate to a row
....
The origin point of the `rowAtStartPoint()` and `rowAtEndPoint()` methods is the Change.
So if the method is executed from a row, from a column or from a value,
it is as if the method were executed from the Change.
When the position is on a row, it is possible to return to the origin.
[source,java]
----
// Return to the change from a row
assertThat(changes).change().rowAtStartPoint().returnToChange()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first change
// Navigate to the row at start point
// Return to the change from this row
// Navigate to the row at end point
assertThat(changes).change().rowAtStartPoint().returnToChange().rowAtEndPoint()...
// The same thing is done but the return to the change is implicit
assertThat(changes).change().rowAtStartPoint().rowAtEndPoint()...
----
[[assertj-db-features-changestocolumn]]
====== To a Column
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToColumn.html[ToColumn]
and https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToColumnFromChange.html[ToColumnFromChange] interfaces.
The `column()` method allows navigating to the next column after the column reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first column
assertThat(changes).change().column()...
// It is possible to chain the calls to navigate to the next column
// after the first column (so the second column)
assertThat(changes).change().column().column()...
----
The `column(int index)` method with `index` as parameter
allows navigating to the column at that index.
[source,java]
----
// Navigate to the column at index 2
assertThat(changes).change().column(2)...
// It is possible to chain the calls to navigate to another column.
// Here column at index 6
assertThat(changes).change().column(2).column(6)...
// It is possible to combine the calls to navigate to the next column
// after the column at index 2. Here column at index 3
assertThat(changes).change().column(2).column()...
// It is possible to combine the calls with other navigation methods
// Here first column
assertThat(changes).change().rowAtStartPoint().column()...
// Here column at index 3
assertThat(changes).change().rowAtEndPoint().column(3)...
// Here column at index 4 because the origin remembers the last navigation to a column
assertThat(changes).change().column(3).rowAtEndPoint().column()...
----
The `column(String columnName)` method with `columnName` as parameter
allows navigating to the column with that name.
[source,java]
----
// Navigate to the column with the name "SURNAME"
assertThat(changes).change().column("surname")...
// Like for the other methods, it is possible to chain the calls
assertThat(changes).change().column("surname").column().column(6).column("id")...
----
The `columnAmongTheModifiedOnes()` method allows navigating to the next column with modifications after the column reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first column with modifications
assertThat(changes).change().columnAmongTheModifiedOnes()...
// It is possible to chain the calls to navigate to the next column
// after the first column (so the second column with modifications)
assertThat(changes).change().columnAmongTheModifiedOnes()
.columnAmongTheModifiedOnes()...
----
The `columnAmongTheModifiedOnes(int index)` method with `index` as parameter allows navigating to the column with modifications at that index.
[source,java]
----
// Navigate to the column at index 2 (the third column with modifications)
assertThat(changes).change().columnAmongTheModifiedOnes(2)...
// It is possible to chain the calls to navigate to another column.
// Here column at index 0 (the first column with modifications)
assertThat(changes).change().columnAmongTheModifiedOnes(2)
.columnAmongTheModifiedOnes(0)...
----
The `columnAmongTheModifiedOnes(String columnName)` method with `columnName` as parameter
allows navigating to the column with modifications that has that name.
[source,java]
----
// Navigate to the column with modifications and the name "SURNAME"
assertThat(changes).change().columnAmongTheModifiedOnes("surname")...
// Like for the other methods, it is possible to chain the calls
assertThat(changes).change().column("surname").columnAmongTheModifiedOnes()
.column(6).columnAmongTheModifiedOnes("id")...
----
This picture shows from where it is possible to navigate to a column.
[ditaa, target="db-navigation-with-changes-to-column", shadows=false, transparent=true]
....
navigate to a column navigate to a column
+---+ +-----------------+-----------+
| | | | |
: v v | |
+------>+-+---+-----+<---+-----------+----------+|
navigate to a column| +-->|On a Column| |On a Value Of a Column||
| | +-----+-----+--->+----------------------+|
: | | ^ |
+-----------+<-----+-+---+-----+<--+ | |
|On Changes | |On a Change| navigate to a column |
+-----------+----->+-----+-----+<--+ | |
| | : |
| +-----+--+<---+-------------------+ |
+-->|On a Row| |On a Value Of a Row|=-----+
+--------+--->+-------------------+
....
The origin point of the `column(...)` methods is the Change.
So if the method is executed from a row, from a column or from a value,
it is as if the method were executed from the Change.
When the position is on a column, it is possible to return to the origin.
[source,java]
----
// Return to the change from a column
assertThat(changes).change().column().returnToChange()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first change
// Navigate to the first column
// Return to the change from this column
// Navigate to the next column
assertThat(changes).change().column().returnToChange().column()...
// The same thing is done but the return to the change is implicit
assertThat(changes).change().column().column()...
----
[[assertj-db-features-changestovalue]]
====== To a Value
These methods are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToValue.html[ToValue],
https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToValueFromColumn.html[ToValueFromColumn]
and https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/navigation/ToValueFromRow.html[ToValueFromRow] interfaces.
The `value()` method (only available from a row) allows navigating to the next value after the value reached on the last call.
[source,java]
----
// If it is the first call, navigate to the first value
assertThat(changes).change().rowAtEndPoint().value()...
// It is possible to chain the calls to navigate to the next value
// after the first value (so the second value)
assertThat(changes).change().rowAtEndPoint().value().value()...
----
The `value(int index)` method with `index` as parameter (only available from a row)
allows navigating to the value at that index.
[source,java]
----
// Navigate to the value at index 2
assertThat(changes).change().rowAtEndPoint().value(2)...
// It is possible to chain the calls to navigate to another value.
// Here value at index 6
assertThat(changes).change().rowAtEndPoint().value(2).value(6)...
// It is possible to combine the calls to navigate to the next value
// after the value at index 2. Here value at index 3
assertThat(changes).change().rowAtEndPoint().value(2).value()...
// Here value at index 4 because the origin remembers the last navigation to the row
assertThat(changes).change().rowAtEndPoint().value(3).column(2).rowAtEndPoint().value()...
----
The `value(String columnName)` method with `columnName` as parameter (only available from a row)
allows navigating to the value of the column with that name.
[source,java]
----
// Navigate to the value of the column with the name "SURNAME"
assertThat(changes).change().rowAtEndPoint().value("surname")...
// Like for the other methods, it is possible to chain the calls
assertThat(changes).change().rowAtEndPoint().value("surname").value().value(6).value("id")...
----
The `valueAtStartPoint()` and `valueAtEndPoint()` methods (only available from a column)
allow navigating to the value at the start point and at the end point.
[source,java]
----
// Navigate to the value at the start point of the row
assertThat(changes).change().column().valueAtStartPoint()...
// Navigate to the value at the end point of the row (note that the methods can be chained)
assertThat(changes).change().column().valueAtStartPoint().valueAtEndPoint()...
----
This picture shows from where it is possible to navigate to a value.
[ditaa, target="db-navigation-with-changes-to-value", shadows=false, transparent=true]
....
+-------------------+
|navigate to a value|
: v
+---+-------+<---+------+---------------+=-+
+-->|On a Column| |On a Value Of a Column| |navigate to a value
| +-----+-----+--->+----------------------+<-+
| |
+-----------+<-+-----+-----+<--+
|On Changes | |On a Change|
+-----------+->+-----+-----+<--+
| |
| +-----+--+<---+-------------------+=-+
+-->|On a Row| |On a Value Of a Row| |navigate to a value
+---+----+--->+---------+---------+<-+
: ^
|navigate to a value|
+-------------------+
....
The origin point of the `value(...)` methods is the Row or the Column.
So if the method is executed from a value,
it is as if the method were executed from the Row or the Column.
When the position is on a value, it is possible to return to the origin.
[source,java]
----
// Return to the column from a value
assertThat(changes).change().column().valueAtEndPoint().returnToColumn()...
// Return to the row from a value
assertThat(changes).change().rowAtEndPoint().value().returnToRow()...
----
That also means that the two navigations below are equivalent.
[source,java]
----
// Navigate to the first change
// Navigate to the row at end point
// Navigate to the first value
// Return to the column from this value
// Navigate to the next value
assertThat(changes).change().rowAtEndPoint().value().returnToRow().value()...
// The same thing is done but the return to the row is implicit
assertThat(changes).change().rowAtEndPoint().value().value()...
----
[[assertj-db-features-assertions]]
==== Assertions
[[assertj-db-features-onchangetype]]
===== On the type of change
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnChangeType.html[AssertOnChangeType] interface.
These assertions allow verifying the type of a change (the concept of change type is described <<assertj-db-concepts-changetype,here>>).
[source,java]
----
// Verify that the first change is a change of creation
assertThat(changes).change().isOfType(ChangeType.CREATION);
----
There are specific assertion methods for each type of change. For example, the assertion below is equivalent to the one above.
[source,java]
----
assertThat(changes).change().isCreation();
----
[[assertj-db-features-oncolumnequality]]
===== On the equality with the values of a column
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnEquality.html[AssertOnColumnEquality]
and the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnOfChangeEquality.html[AssertOnColumnOfChangeEquality] interfaces.
These assertions allow verifying the values of a column (the column of a table, of a request or of a change).
[[assertj-db-features-oncolumnequality-boolean]]
====== With Boolean
[source,java]
----
// Verify that the values of the column "live" of the request
// was equal to true, to false and then to true
assertThat(request).column("live").hasValues(true, false, true);
// Verify that the value of the first column of the first change
// was false at start point and is true at end point
assertThat(changes).change().column().hasValues(false, true);
// Verify that the value of the third column of the first change
// is not modified and is true
assertThat(changes).change().column(2).hasValues(true);
----
[[assertj-db-features-oncolumnequality-bytes]]
====== With Bytes
[source,java]
----
// Get bytes from a file and from a resource in the classpath
byte[] bytesFromFile = Assertions.bytesContentOf(file);
byte[] bytesFromClassPath = Assertions.bytesContentFromClassPathOf(resource);
// Verify that the values of the second column of the request
// was equal to the bytes from the file, to null and to bytes from the resource
assertThat(request).column(1).hasValues(bytesFromFile, null, bytesFromClassPath);
// Verify that the value of the first column of the first change
// was equal to bytes from the file at start point and to bytes from the resource at end point
assertThat(changes).change().column().hasValues(bytesFromFile, bytesFromClassPath);
----
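Under the hood, these helpers simply load the whole file or resource into a byte array. A minimal plain-JDK sketch of the same idea (the `bytesContentOf` name mirrors AssertJ-DB's helper; the body is an assumption about its behaviour, not the library's actual code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BytesContentSketch {
    // Rough equivalent of Assertions.bytesContentOf(file): read all bytes of a file
    static byte[] bytesContentOf(Path file) throws IOException {
        return Files.readAllBytes(file);
    }

    public static void main(String[] args) throws IOException {
        // Write a small file and read it back as bytes
        Path file = Files.createTempFile("blob", ".bin");
        Files.write(file, new byte[] {1, 2, 3});
        byte[] bytes = bytesContentOf(file);
        System.out.println(bytes.length); // prints 3
        Files.delete(file);
    }
}
```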
[[assertj-db-features-oncolumnequality-number]]
====== With Number
[source,java]
----
// Verify that the values of the first column of the table
// was equal to 5.9, 4 and 15000
assertThat(table).column().hasValues(5.9, 4, new BigInteger("15000"));
// Verify that the value of the first column of the first change
// is not modified and is equal to 5
assertThat(changes).change().column().hasValues(5);
----
[[assertj-db-features-oncolumnequality-date]]
====== With Date
[source,java]
----
// Verify that the values of the first column of the table
// was equal to December 23rd 2007 and May 19th 1975
assertThat(table).column()
.hasValues(LocalDate.of(2007, 12, 23),
LocalDate.of(1975, 5, 19));
// Verify that the value of the first column of the first change
// was equal to December 23rd 2007 at start point
// and is equal to May 19th 1975 at end point
assertThat(changes).change().column()
.hasValues(LocalDate.parse("2007-12-23"),
LocalDate.parse("1975-05-19"));
----
[[assertj-db-features-oncolumnequality-time]]
====== With Time
[source,java]
----
// Verify that the values of the first column of the table
// was equal to 09:01am and 05:30:50pm
assertThat(table).column()
.hasValues(LocalTime.of(9, 1),
LocalTime.of(17, 30, 50));
// Verify that the value of the first column of the first change
// was equal to 09:01am at start point
// and is equal to 05:30:50pm at end point
assertThat(changes).change().column()
.hasValues(LocalTime.parse("09:01"),
LocalTime.parse("17:30:50"));
----
[[assertj-db-features-oncolumnequality-datetime]]
====== With Date/Time
[source,java]
----
// Verify that the values of the first column of the table
// was equal to December 23rd 2007 09:01am and May 19th 1975
assertThat(table).column()
.hasValues(LocalDateTime.of(LocalDate.of(2007, 12, 23),
LocalTime.parse("09:01")),
LocalDateTime.of(LocalDate.of(1975, 5, 19),
LocalTime.MIDNIGHT));
// Verify that the value of the first column of the first change
// was equal to December 23rd 2007 09:01am at start point
// and is equal to May 19th 1975 at end point
assertThat(changes).change().column()
.hasValues(LocalDateTime.parse("2007-12-23T09:01"),
LocalDateTime.parse("1975-05-19T00:00"));
----
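The two styles used above are interchangeable: composing a `java.time.LocalDateTime` from its parts and parsing the ISO-8601 form produce equal values, and a date combined with `LocalTime.MIDNIGHT` is the `T00:00` form. This is plain JDK behaviour, independent of AssertJ-DB:

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;

public class DateTimeEquivalence {
    public static void main(String[] args) {
        // Composed vs parsed forms of December 23rd 2007 09:01am
        LocalDateTime composed = LocalDateTime.of(LocalDate.of(2007, 12, 23),
                                                  LocalTime.parse("09:01"));
        LocalDateTime parsed = LocalDateTime.parse("2007-12-23T09:01");
        System.out.println(composed.equals(parsed)); // prints true

        // A date at LocalTime.MIDNIGHT equals the "T00:00" parsed form
        LocalDateTime midnight = LocalDateTime.of(LocalDate.of(1975, 5, 19),
                                                  LocalTime.MIDNIGHT);
        System.out.println(midnight.equals(LocalDateTime.parse("1975-05-19T00:00"))); // prints true
    }
}
```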
[[assertj-db-features-oncolumnequality-string]]
====== With String
[source,java]
----
// Verify that values are equal to texts
assertThat(table).column("name")
.hasValues("Hewson",
"Evans",
"Clayton",
"Mullen");
// Verify that the value of the column "size" of the first change of modification
// is not modified and is equal to 1.75 by parsing
assertThat(changes).changeOfModification().column("size")
.hasValues("1.75");
// Verify that values are equal to dates, times or dates/times by parsing
assertThat(table).column()
                 .hasValues("2007-12-23T09:01",
                            "1975-05-19");
----
[[assertj-db-features-oncolumnequality-uuid]]
====== With UUID
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
[source,java]
----
// Verify that the values of the first column of the table
// was equal to 30B443AE-C0C9-4790-9BEC-CE1380808435, 0E2A1269-EFF0-4233-B87B-B53E8B6F164D
// and 2B0D1BDD-909E-4362-BA10-C930BA82718D
assertThat(table).column().hasValues(UUID.fromString("30B443AE-C0C9-4790-9BEC-CE1380808435"),
UUID.fromString("0E2A1269-EFF0-4233-B87B-B53E8B6F164D"),
UUID.fromString("2B0D1BDD-909E-4362-BA10-C930BA82718D"));
// Verify that the value of the first column of the first change
// is not modified and is equal to 399FFFCA-7874-4225-9903-E227C4E9DCC1
assertThat(changes).change()
.column().hasValues(UUID.fromString("399FFFCA-7874-4225-9903-E227C4E9DCC1"));
----
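Note that `java.util.UUID.fromString(...)` accepts the hex digits in either case and equality compares the underlying 128-bit value, so the uppercase literals above also match lowercase values coming back from the database (plain JDK behaviour):

```java
import java.util.UUID;

public class UuidCase {
    public static void main(String[] args) {
        // Same UUID written in upper and lower case
        UUID upper = UUID.fromString("30B443AE-C0C9-4790-9BEC-CE1380808435");
        UUID lower = UUID.fromString("30b443ae-c0c9-4790-9bec-ce1380808435");
        System.out.println(upper.equals(lower)); // prints true

        // toString() always renders the canonical lowercase form
        System.out.println(upper); // prints 30b443ae-c0c9-4790-9bec-ce1380808435
    }
}
```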
[[assertj-db-features-oncolumnequality-character]]
====== With Character
Since <<assertj-db-1-2-0-release-notes,1.2.0>>
[source,java]
----
// Verify that the values of the first column of the table
// was equal to 'T', 'e', 's' and 't'
assertThat(table).column().hasValues('T', 'e', 's', 't');
// Verify that the value of the first column of the first change
// is not modified and is equal to 'T'
assertThat(changes).change().column().hasValues('T');
----
[[assertj-db-features-oncolumnname]]
===== On the name of a column
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnName.html[AssertOnColumnName] interface.
This assertion allows verifying the name of a column (the column of a table, of a request or of a change).
[source,java]
----
// Verify that the fifth column of the table is called "firstname"
assertThat(table).column(4).hasColumnName("firstname");
// Verify that the third value of the second row of the request is in a column called "name"
assertThat(request).row(1).value(2).hasColumnName("name");
// Verify that the first column of the first change is called "id"
assertThat(changes).change().column().hasColumnName("id");
----
[[assertj-db-features-oncolumnnullity]]
===== On the nullity of the values of a column
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnNullity.html[AssertOnColumnNullity] interface.
These assertions allow verifying the nullity of the values of a column (the column of a table or of a request).
[source,java]
----
// Verify that the fifth column of the table has only null values
assertThat(table).column(4).hasOnlyNullValues();
// Verify that the column "name" has only not null values
assertThat(request).column("name").hasOnlyNotNullValues();
----
[[assertj-db-features-onrownullity]]
===== On the nullity of the values of a row
Since <<assertj-db-1-2-0-release-notes,1.2.0>>
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnRowNullity.html[AssertOnRowNullity] interface.
These assertions allow verifying the nullity of the values of a row (the row of a table or of a request).
[source,java]
----
// Verify that the fifth row of the table has only not null values
assertThat(table).row(4).hasOnlyNotNullValues();
// Verify that the first row has only not null values
assertThat(request).row().hasOnlyNotNullValues();
----
[[assertj-db-features-oncolumntype]]
===== On the type of column
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnType.html[AssertOnColumnType] interface.
These assertions allow verifying the type of the values of a column (a column from a table, from a request or from a change).
[source,java]
----
// Verify that the values of the column called "firstname"
// of the table are a text (null values are considered as wrong)
assertThat(table).column("firstname").isOfType(ValueType.TEXT, false);
// The same verification (with the specific method)
// on the third column of the request
assertThat(request).column(2).isText(false);
// Now the same verification again but with lenience for null values
// (the null values are not considered as wrong)
assertThat(request).column(2).isText(true);
// Verify that the values of the first column
// of the first change are either a date or a number
assertThat(changes).change().column()
.isOfAnyOfTypes(ValueType.DATE, ValueType.NUMBER);
----
[[assertj-db-features-oncolumnclass]]
===== On the class of column
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnClass.html[AssertOnColumnClass] interface.
This assertion allows verifying the class of the values of a column (a column from a table, from a request or from a change).
[source,java]
----
// Verify that the values of the column called "firstname"
// of the table are a String (null values are considered as wrong)
assertThat(table).column("firstname").isOfClass(String.class, false);
// Verify that the values of the first column
// of the first change are a Locale (null values are considered as right)
assertThat(changes).change().column().isOfClass(Locale.class, true);
----
[[assertj-db-features-oncolumncontent]]
===== On the content of column
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnColumnContent.html[AssertOnColumnContent] interface.
These assertions allow verifying the content of a column (a column from a table or from a request).
[source,java]
----
// Verify the content of the column called "name"
assertThat(table).column("name").containsValues("Hewson",
"Evans",
"Clayton",
"Mullen");
// This second assertion is equivalent because the order of the values is not important
assertThat(table).column("name").containsValues("Evans",
"Clayton",
"Hewson",
"Mullen");
----
[[assertj-db-features-ondatatype]]
===== On the type of data
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnDataType.html[AssertOnDataType] interface.
These assertions allow verifying the type of data on which a change was made.
[source,java]
----
// Verify that the change is on a table
assertThat(changes).change().isOnDataType(DataType.TABLE);
// The same verification (with the specific method)
assertThat(changes).change().isOnTable();
// Verify that the change is on the "members" table
assertThat(changes).change().isOnTable("members");
----
[[assertj-db-features-onmodifiedcolumns]]
===== On the modified columns in a change
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnModifiedColumn.html[AssertOnModifiedColumn]
and the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnModifiedColumns.html[AssertOnModifiedColumns] interfaces.
These assertions allow verifying whether a column of a change has been modified between the start point and the end point (see the <<assertj-db-concepts-changes,concept of changes>>).
[source,java]
----
// Verify that the first column of the change is not modified
// and the second column is modified
assertThat(changes).change().column().isNotModified().column().isModified();
// Verify that there are 2 modified columns in the change
assertThat(changes).change().hasNumberOfModifiedColumns(2);
// Verify that the modified columns in the change are at indexes 1 and 2
assertThat(changes).change().hasModifiedColumns(1, 2);
// Verify that the modified columns in the change are "name" and "firstname"
assertThat(changes).change().hasModifiedColumns("name", "firstname");
----
Since version <<assertj-db-1-1-0-release-notes,1.1.0>>, there are new assertions that allow comparing the number of modified columns to a given number.
[source,java]
----
// Verify that the number of modified columns in the first change is more than 5
assertThat(changes).change().hasNumberOfModifiedColumnsGreaterThan(5);
// Verify that the number of modified columns in the first change is at least 5
assertThat(changes).change().hasNumberOfModifiedColumnsGreaterThanOrEqualTo(5);
// Verify that the number of modified columns in the first change is less than 6
assertThat(changes).change().hasNumberOfModifiedColumnsLessThan(6);
// Verify that the number of modified columns in the first change is at most 6
assertThat(changes).change().hasNumberOfModifiedColumnsLessThanOrEqualTo(6);
----
[[assertj-db-features-onnumberchanges]]
===== On the number of changes
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnNumberOfChanges.html[AssertOnNumberOfChanges] interface.
This assertion allows verifying the number of changes.
[source,java]
----
// Verify that there are 4 changes
assertThat(changes).hasNumberOfChanges(4);
----
Since version <<assertj-db-1-1-0-release-notes,1.1.0>>, there are new assertions that allow comparing the number of changes to a given number.
[source,java]
----
// Verify that the number of changes is more than 5
assertThat(changes).hasNumberOfChangesGreaterThan(5);
// Verify that the number of changes is at least 5
assertThat(changes).hasNumberOfChangesGreaterThanOrEqualTo(5);
// Verify that the number of changes is less than 6
assertThat(changes).hasNumberOfChangesLessThan(6);
// Verify that the number of changes is at most 6
assertThat(changes).hasNumberOfChangesLessThanOrEqualTo(6);
----
[[assertj-db-features-onnumbercolumns]]
===== On the number of columns
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnNumberOfColumns.html[AssertOnNumberOfColumns] interface.
This assertion allows verifying the number of columns (columns from a table, from a request or from a change).
[source,java]
----
// Verify that there are 6 columns in the table
assertThat(table).hasNumberOfColumns(6);
// Verify that there are 4 columns in the change
assertThat(changes).change().hasNumberOfColumns(4);
----
Since version <<assertj-db-1-1-0-release-notes,1.1.0>>, there are new assertions that allow comparing the number of columns to a given number.
[source,java]
----
// Verify that the number of columns is more than 5
assertThat(table).hasNumberOfColumnsGreaterThan(5);
// Verify that the number of columns is at least 5
assertThat(request).hasNumberOfColumnsGreaterThanOrEqualTo(5);
// Verify that the number of columns is less than 6
assertThat(changes).change().hasNumberOfColumnsLessThan(6);
// Verify that the number of columns is at most 6
assertThat(changes).change().hasNumberOfColumnsLessThanOrEqualTo(6);
----
[[assertj-db-features-onnumberrows]]
===== On the number of rows
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnNumberOfRows.html[AssertOnNumberOfRows] interface.
This assertion allows verifying the number of rows (rows from a table or from a request).
[source,java]
----
// Verify that there are 7 rows in the table
assertThat(table).hasNumberOfRows(7);
----
Since version <<assertj-db-1-1-0-release-notes,1.1.0>>, there are new assertions that allow comparing the number of rows to a given number.
[source,java]
----
// Verify that the number of rows is more than 5
assertThat(table).hasNumberOfRowsGreaterThan(5);
// Verify that the number of rows is at least 5
assertThat(request).hasNumberOfRowsGreaterThanOrEqualTo(5);
// Verify that the number of rows is less than 6
assertThat(table).hasNumberOfRowsLessThan(6);
// Verify that the number of rows is at most 6
assertThat(request).hasNumberOfRowsLessThanOrEqualTo(6);
----
Since version <<assertj-db-1-2-0-release-notes,1.2.0>>, there is a new assertion that allows verifying that there are no rows (equivalent to `hasNumberOfRows(0)`).
[source,java]
----
// Verify that the table is empty
assertThat(table).isEmpty();
----
[[assertj-db-features-onprimarykeys]]
===== On the primary keys
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnPrimaryKey.html[AssertOnPrimaryKey] interface.
These assertions allow verifying the names and the values of the columns that compose the primary key of the rows of a change.
[source,java]
----
// Verify that the columns of the primary keys are "id" and "name"
assertThat(changes).change().hasPksNames("id", "name");
// Verify that the values of the primary keys are 1 and "HEWSON"
assertThat(changes).change().hasPksValues(1, "HEWSON");
----
[[assertj-db-features-onrowequality]]
===== On the equality with the values of a row
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnRowEquality.html[AssertOnRowEquality] interface.
This assertion allows verifying the values of a row (the row of a table, of a request or of a change).
[source,java]
----
// Verify the values of the row at index 1
assertThat(table).row(1)
.hasValues(2,
"Evans",
"David Howell",
"The Edge",
DateValue.of(1961, 8, 8),
1.77);
// Verify the values of the row at end point
assertThat(changes).change().rowAtEndPoint()
.hasValues(5,
"McGuiness",
"Paul",
null,
"1951-06-17",
null);
----
[[assertj-db-features-onrowexistence]]
===== On the existence of a row in a change
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnRowOfChangeExistence.html[AssertOnRowOfChangeExistence] interface.
These assertions allow verifying whether the row at the start point or at the end point of a change exists (for a creation, the row does not exist at the start point; for a deletion, it is the contrary: the row does not exist at the end point).
[source,java]
----
// Verify that the row at start point exists
assertThat(changes).change().rowAtStartPoint().exists();
// Verify that the row at end point does not exist
assertThat(changes).change().rowAtEndPoint().doesNotExist();
----
[[assertj-db-features-onvaluechronology]]
===== On the chronology of a value
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueChronology.html[AssertOnValueChronology] interface.
These assertions allow comparing a value (the value of a table, of a request or of a change) to a date, a time or a date/time.
[source,java]
----
// Compare the value with a date
assertThat(table).row(1).value("birthdate")
.isAfter(DateValue.of(1950, 8, 8));
// Verify the value is between two dates/times
assertThat(changes).change().column("release").valueAtEndPoint()
.isAfterOrEqualTo(DateTimeValue.parse("2014-09-08T23:30"))
.isBeforeOrEqualTo(DateTimeValue.parse("2014-09-09T05:30"));
----
[[assertj-db-features-onvaluecomparison]]
===== On the comparison with a value
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueComparison.html[AssertOnValueComparison] interface.
These assertions let you compare a value (the value of a table, of a request or of a change) to a number.
[source,java]
----
// Compare the value with a number
assertThat(table).row(1).value("size")
.isGreaterThan(1.5);
// Verify the value is between two numbers
assertThat(changes).change().column("size").valueAtEndPoint()
.isGreaterThanOrEqualTo(1.7)
.isLessThanOrEqualTo(1.8);
----
[[assertj-db-features-onvaluecloseness]]
===== On the closeness of a value
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueCloseness.html[AssertOnValueCloseness] interface.
These assertions let you verify whether a value (the value of a table, of a request or of a change) is close to another value.
[source,java]
----
// Verify if the value is close to 2 with a tolerance of 0.5
// So the values between 1.5 and 2.5 are right
assertThat(table).row(1).value("size")
.isCloseTo(2, 0.5);
// Verify the value is close to 05-10-1960 with a tolerance of two days
assertThat(changes).change().column("birth").valueAtEndPoint()
                 .isCloseTo(DateValue.of(1960, 5, 10),
                            DateValue.of(0, 0, 2));
----
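The tolerance in `isCloseTo` behaves like a closed interval around the expected value. A self-contained sketch of that semantics in plain Java (an illustration, not assertj-db code):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class ClosenessSketch {
    // A number is "close to" an expected value when it falls inside
    // [expected - tolerance, expected + tolerance].
    static boolean isCloseTo(double actual, double expected, double tolerance) {
        return Math.abs(actual - expected) <= tolerance;
    }

    // The same idea for dates, with the tolerance expressed in days.
    static boolean isCloseTo(LocalDate actual, LocalDate expected, long toleranceDays) {
        return Math.abs(ChronoUnit.DAYS.between(expected, actual)) <= toleranceDays;
    }

    public static void main(String[] args) {
        System.out.println(isCloseTo(1.77, 2, 0.5));  // true: 1.5 <= 1.77 <= 2.5
        System.out.println(isCloseTo(1.4, 2, 0.5));   // false: 1.4 < 1.5
        System.out.println(isCloseTo(LocalDate.of(1960, 5, 12),
                                     LocalDate.of(1960, 5, 10), 2)); // true: 2 days apart
    }
}
```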
[[assertj-db-features-onvaluequality]]
===== On the equality with a value
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueEquality.html[AssertOnValueEquality] interface.
These assertions let you verify that a value (the value of a table, of a request or of a change) is equal to another value (given as a parameter).
[[assertj-db-features-onvaluequality-boolean]]
====== With Boolean
[source,java]
----
// Verify that the value is equal to true
assertThat(table).row(3).value("live").isEqualTo(true);
// Do the same thing with the specific method
assertThat(table).row(3).value("live").isTrue();
----
[[assertj-db-features-onvaluequality-bytes]]
====== With Bytes
[source,java]
----
// Get bytes from a file
byte[] bytesFromFile = Assertions.bytesContentOf(file);
// Verify that the value at start point of the first column of the first change
// is equal to bytes from the file
assertThat(changes).change().column().valueAtStartPoint().isEqualTo(bytesFromFile);
----
[[assertj-db-features-onvaluequality-number]]
====== With Number
[source,java]
----
// Verify that the first value is equal to 1.77,
// the second is equal to 50 and the last is equal to zero
assertThat(request).column("size").value().isEqualTo(1.77)
.value().isEqualTo(50)
.value().isEqualTo(0).isZero();
----
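A JDBC driver may hand back the same column as an `Integer`, a `Long`, or a `BigDecimal` depending on the database, so an equality assertion like the one above has to compare by numeric value rather than by Java type. A plain-Java illustration of the difference (the `sameValue` helper is hypothetical, not part of assertj-db):

```java
import java.math.BigDecimal;

public class NumericEquality {
    // Compare two numbers by value, ignoring their concrete type and scale.
    static boolean sameValue(Number a, Number b) {
        return new BigDecimal(a.toString()).compareTo(new BigDecimal(b.toString())) == 0;
    }

    public static void main(String[] args) {
        // Object equality is sensitive to scale...
        System.out.println(new BigDecimal("50.0").equals(new BigDecimal("50"))); // false
        // ...while value-based comparison is not.
        System.out.println(sameValue(new BigDecimal("50.0"), 50));               // true
        System.out.println(sameValue(1.77d, new BigDecimal("1.77")));            // true
    }
}
```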
[[assertj-db-features-onvaluequality-date]]
====== With Date
[source,java]
----
// Verify that values are equal to dates
assertThat(changes).changeOfCreation()
.rowAtEndPoint()
.value("birthdate")
.isEqualTo(LocalDate.of(1951, 6, 17))
.changeOfModification()
.column("birthdate")
           .valueAtStartPoint()
.isNotEqualTo(LocalDate.parse("1960-05-10"))
.valueAtEndPoint()
.isEqualTo(LocalDate.of(1960, 5, 10));
----
[[assertj-db-features-onvaluequality-time]]
====== With Time
[source,java]
----
// Verify that the value is equal to a time
assertThat(table).row().value("duration").isEqualTo(LocalTime.of(9, 1));
----
[[assertj-db-features-onvaluequality-datetime]]
====== With Date/Time
[source,java]
----
// Verify that the value is equal to a date/time
assertThat(request).column().value()
           .isEqualTo(LocalDateTime.of(2007, 12, 23, 9, 1, 0))
.isEqualTo(LocalDateTime.parse("2007-12-23T09:01"));
----
[[assertj-db-features-onvaluequality-string]]
====== With String
[source,java]
----
// Verify that the values are equal to numbers, texts and dates
assertThat(table).row().value().isEqualTo("1")
.value().isEqualTo("Hewson")
.value().isEqualTo("Paul David")
.value().isEqualTo("Bono")
.value().isEqualTo("1960-05-10")
.value().isEqualTo("1.75");
----
[[assertj-db-features-onvaluequality-uuid]]
====== With UUID
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
[source,java]
----
// Verify that the values are equal to UUID
assertThat(table).column().value().isEqualTo(UUID.fromString("30B443AE-C0C9-4790-9BEC-CE1380808435"))
.value().isEqualTo(UUID.fromString("0E2A1269-EFF0-4233-B87B-B53E8B6F164D"))
.value().isEqualTo(UUID.fromString("2B0D1BDD-909E-4362-BA10-C930BA82718D"));
----
[[assertj-db-features-onvaluequality-character]]
====== With Character
Since <<assertj-db-1-2-0-release-notes,1.2.0>>
[source,java]
----
// Verify that the values are equal to Character
assertThat(table).column().value().isEqualTo('T')
.value().isEqualTo('e')
.value().isEqualTo('s')
.value().isEqualTo('t');
----
[[assertj-db-features-onvaluenonquality]]
===== On the non equality with a value
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueNonEquality.html[AssertOnValueNonEquality] interface.
These assertions let you verify that a value (the value of a table, of a request or of a change) is not equal to another value (given as a parameter).
[[assertj-db-features-onvaluenonquality-boolean]]
====== With Boolean
[source,java]
----
// Verify that the values (values "live" in the row at index 3 and index 5)
// are not equal to false
assertThat(table).row(3).value("live").isNotEqualTo(false)
.row(5).value("live").isNotEqualTo(false);
----
[[assertj-db-features-onvaluenonquality-bytes]]
====== With Bytes
[source,java]
----
// Get bytes from a resource in the classpath
byte[] bytesFromClassPath = Assertions.bytesContentFromClassPathOf(resource);
// Verify that the value at start point of the first column of the first change
// is not equal to bytes from the resource
assertThat(changes).change().column().valueAtStartPoint().isNotEqualTo(bytesFromClassPath);
----
[[assertj-db-features-onvaluenonquality-number]]
====== With Number
[source,java]
----
// Verify that the first value is not equal to 1.78,
// the second is not equal to 55 and the last is not equal to 15
assertThat(request).column("size").value().isNotEqualTo(1.78)
.value().isNotEqualTo(55)
.value().isNotEqualTo(15);
----
[[assertj-db-features-onvaluenonquality-date]]
====== With Date
[source,java]
----
// Verify that values are not equal to dates
assertThat(changes).changeOfCreation()
.rowAtEndPoint()
.value("birthdate")
.isNotEqualTo(LocalDate.of(1951, 6, 17))
.changeOfModification()
.column("birthdate")
.valueAtStartPoint()
.isNotEqualTo(LocalDate.parse("1960-05-10"))
.valueAtEndPoint()
.isNotEqualTo(LocalDate.of(1960, 5, 10));
----
[[assertj-db-features-onvaluenonquality-time]]
====== With Time
[source,java]
----
// Verify that the value is not equal to a time
assertThat(table).row().value("duration").isNotEqualTo(LocalTime.of(9, 1));
----
[[assertj-db-features-onvaluenonquality-datetime]]
====== With Date/Time
[source,java]
----
// Verify that the value is not equal to a date/time
assertThat(request).column().value()
           .isNotEqualTo(LocalDateTime.of(2015, 5, 26, 22, 46))
.isNotEqualTo(LocalDateTime.parse("2015-05-26T22:46"));
----
[[assertj-db-features-onvaluenonquality-string]]
====== With String
[source,java]
----
// Verify that the values are not equal to numbers, texts and dates
assertThat(table).row().value().isNotEqualTo("5")
.value().isNotEqualTo("McGuiness")
.value().isNotEqualTo("Paul")
.value("birthdate").isNotEqualTo("1951-06-17");
----
[[assertj-db-features-onvaluenonquality-uuid]]
====== With UUID
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
[source,java]
----
// Verify that the values are not equal to UUID
assertThat(table).column()
.value().isNotEqualTo(UUID.fromString("30B443AE-C0C9-4790-9BEC-CE1380808435"))
.value().isNotEqualTo(UUID.fromString("0E2A1269-EFF0-4233-B87B-B53E8B6F164D"))
.value().isNotEqualTo(UUID.fromString("2B0D1BDD-909E-4362-BA10-C930BA82718D"));
----
[[assertj-db-features-onvaluenonquality-character]]
====== With Character
Since <<assertj-db-1-2-0-release-notes,1.2.0>>
[source,java]
----
// Verify that the values are not equal to Character
assertThat(table).column()
.value().isNotEqualTo('T')
.value().isNotEqualTo('e')
.value().isNotEqualTo('s')
.value().isNotEqualTo('t');
----
[[assertj-db-features-onvaluenullity]]
===== On the nullity of a value
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueNullity.html[AssertOnValueNullity] interface.
These assertions let you verify whether or not a value (the value of a table, of a request or of a change) is null.
[source,java]
----
// Verify that the value at index 1 is null and the next value is not null
assertThat(table).column().value(1).isNull()
.value().isNotNull();
// Verify the value is not null
assertThat(changes).change().rowAtStartPoint().value("live")
.isNotNull();
----
[[assertj-db-features-onvaluetype]]
===== On the type of a value
These assertions are described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueType.html[AssertOnValueType] interface.
This assertion lets you verify the type of a value (a value from a table, from a request or from changes).
[source,java]
----
// Verify that the value of the column called "firstname"
// of the fifth row of the table is a text
assertThat(table).row(4).value("firstname").isOfType(ValueType.TEXT);
// The same verification (with the specific method)
// on the third value of the second row of the request
assertThat(request).row(1).value(2).isText();
// Verify that the value at start point of the first column
// of the first change is either a date or a number
assertThat(changes).change().column().valueAtStartPoint()
.isOfAnyOfTypes(ValueType.DATE, ValueType.NUMBER);
----
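A type check like this has to collapse the many Java classes a driver can return into a few coarse categories. A simplified sketch of the idea (the category names here only mirror assertj-db's `ValueType`; this is not its implementation):

```java
import java.math.BigDecimal;
import java.time.LocalDate;
import java.util.UUID;

public class ValueTypeSketch {
    // Map a raw column value to a coarse category name.
    static String categoryOf(Object value) {
        if (value == null) return "NOT_IDENTIFIED";
        if (value instanceof String) return "TEXT";
        if (value instanceof Number) return "NUMBER";
        if (value instanceof Boolean) return "BOOLEAN";
        if (value instanceof LocalDate) return "DATE";
        if (value instanceof UUID) return "UUID";
        if (value instanceof byte[]) return "BYTES";
        return "NOT_IDENTIFIED";
    }

    public static void main(String[] args) {
        System.out.println(categoryOf("Hewson"));                  // TEXT
        System.out.println(categoryOf(new BigDecimal("1.77")));    // NUMBER
        System.out.println(categoryOf(LocalDate.of(1960, 5, 10))); // DATE
    }
}
```

This is why `isOfAnyOfTypes(...)` is useful: several Java classes can legitimately land in the same database column.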
[[assertj-db-features-onvalueclass]]
===== On the class of a value
Since <<assertj-db-1-1-0-release-notes,1.1.0>>
This assertion is described in the https://www.javadoc.io/doc/org.assertj/assertj-db/latest/org/assertj/db/api/assertions/AssertOnValueClass.html[AssertOnValueClass] interface.
This assertion lets you verify the class of a value (a value from a table, from a request or from changes).
[source,java]
----
// Verify that the value of the column called "firstname"
// of the fifth row of the table is a String
assertThat(table).row(4).value("firstname").isOfClass(String.class);
// Verify that the value at start point of the first column
// of the first change is a Locale
assertThat(changes).change().column().valueAtStartPoint()
.isOfClass(Locale.class);
----
// source file: docs/src/main/asciidoc/_known-issues.adoc (repo: mreyes/camunda, license: Apache-2.0)
== Known issues
* The `camunda-bpm-spring-boot-starter-test` used to also reference `camunda-bpm-assert`. +
After switching to spring-boot v1.5.2, this was no longer possible, because v1.5.2 brings a dependency +
on assertj-core 2.6.0, which is no longer compatible with the outdated version used in the test extension. +
That's why we now include the bpm-assert classes directly and compile them with the starter. +
+
**Attention:** This means you must *not* add an additional dependency on `camunda-bpm-assert` when you +
use the `camunda-bpm-spring-boot-starter-test` in your test scope! This will hopefully change in future versions.
// source file: _includes/tutorials/produce-consume-lang/scala/markup/dev/2_1-settings-file.adoc (repo: lct45/kafka-tutorials, license: Apache-2.0)
Create the following Gradle settings file, named `settings.gradle`, for the project:
+++++
<pre class="snippet"><code class="groovy">{%
include_raw tutorials/produce-consume-lang/scala/code/settings.gradle
%}</code></pre>
+++++
// source file: spring-credhub-docs/src/docs/asciidoc/boot-configuration.adoc (repo: malston/spring-credhub, license: Apache-2.0)
:credhub-api-mtls: {credhub-api-home}version/2.0/#mutual-tls
:credhub-api-oauth: {credhub-api-home}version/2.0/#uaa-oauth2
:spring-boot-oauth: https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-security-oauth2
[[boot-configuration]]
== Spring Boot Configuration
When using the Spring CredHub starter dependency, Spring CredHub can be configured with https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html#boot-features-external-config-application-property-files[Spring Boot application properties].
With the proper configuration properties, Spring CredHub will auto-configure a connection to a CredHub server.
=== Mutual TLS Authentication
An application running on Cloud Foundry can authenticate to a CredHub server deployed to the same platform using mutual TLS.
Mutual TLS is the default authentication scheme when no other authentication credentials are provided.
To use mutual TLS authentication to a CredHub server, simply provide the URL of the CredHub server as an application property:
[source,properties,%autofit]
----
include::{examples-dir}config-minimal.yml[]
----
See the {credhub-api-mtls}[CredHub documentation] for more information on mutual TLS authentication.
An application running on Cloud Foundry can use the internal address `https://credhub.service.cf.internal:8844` to communicate with a CredHub server deployed to the same platform.
=== OAuth2 Authentication
OAuth2 can be used to authenticate via UAA to any CredHub server.
Spring CredHub supports client credentials grant tokens for authentication using the following Spring CredHub and Spring Security configuration:
[source,properties,%autofit]
----
include::{examples-dir}config-oauth2.yml[]
----
The ID provided in `spring.credhub.oauth2.registration-id` must refer to a client configured under `spring.security.oauth2.client.registration`.
See the {spring-boot-oauth}[Spring Boot documentation] for more information on Spring Boot OAuth2 client configuration.
The OAuth2 client specified in the Spring Security client registration must have CredHub scopes such as `credhub.read` or `credhub.write` to perform most operations.
See the {credhub-api-oauth}[CredHub documentation] for more information on OAuth2 authentication with UAA.
// source file: docs/src/reference/asciidoc/appendix/release-notes/6.5.2.adoc (repo: bigthinka/elasticsearch-hadoop, license: Apache-2.0)
[[eshadoop-6.5.2]]
== Elasticsearch for Apache Hadoop version 6.5.2
December 5, 2018
ES-Hadoop 6.5.2 is a version compatibility release, tested specifically against Elasticsearch 6.5.2.
// source file: docs/index.adoc (repo: ocraft/rl-sandbox, license: MIT)
:imagesdir: img
.Testbed
[plantuml, testbed, svg]
....
skinparam packageStyle Frame
package rlbox.core {
interface AgentProgram {
action call(observation)
}
interface Environment {
boolean done()
}
interface Actuator {
call(action, environment)
}
interface Sensor {
observation call(environment)
}
class Spec
class Space
note left of Spec: action spec and observation spec
class Agent
class Run
}
package rlbox.testbed.sink {
class Sink
class SharedMem
class Store
}
package rlbox.testbed {
interface Testbed {
run()
worker(runs, env_cfg, task, param)
plot()
}
}
Spec --> "1..*" Space
Environment --> Spec : provides >
AgentProgram --> Spec
Actuator --> Environment : executes action >
Sensor --> Environment : gets observation from >
Run --> Agent
Run --> Environment
Agent --o AgentProgram
Agent --o Actuator
Agent --o Sensor
Testbed --> "1..*" Run
Testbed --> Sink : dumps data to >
Sink --> SharedMem : provides >
Testbed --> Store : saves data in >
Sink --> Store : saves data in >
....
.Interfaces
AgentProgram:: defines behaviour of an agent, maps observation to action
Actuator:: maps agent decision to actual activiy in an environment
Sensor:: maps observation of an environment to an input for agent program
Environment:: the surroundings or conditions in which an agent operates
Testbed:: multiprocess experimental runs of the agents in an environment with data gathering and plotting
.Classes
Run:: series of an agent activities in an environment
Agent:: entity which observes through sensors and acts upon an environment using actuators
Sink, SharedMem:: multiprocess shared memory for statistical/performance data that are saved as HDF5 file
Sampler:: provides precomputed random samples of various distribution
Space:: holds information about any generic space
Spec:: convenience class to hold a list of spaces
Store:: wrapper for access to hdf5 data file
.N-Armed Bandit Testbed
[plantuml, narmedbandit, svg]
....
skinparam packageStyle Frame
package rlbox.core {
interface AgentProgram
interface Environment
}
package rlbox.agent.tab.epsilongreedy {
abstract class EpsilonGreedy
class SampleAverage
class WeightedAverage
class WeightedAverageNBias
}
package rlbox.agent.tab.ucb {
class Ucb1
}
package rlbox.agent.tab.gradient {
class GradientBandit
}
package rlbox.env.narmedbandit {
class Bandit
class NonstationaryBandit
class NArmedBanditEnv
}
package rlbox.testbed {
interface Testbed
}
package rlbox.testbed.narmedbandit {
class NArmedBanditTestbed
class NArmedBanditParamStudy
}
AgentProgram <|-- EpsilonGreedy
AgentProgram <|-- Ucb1
AgentProgram <|-- GradientBandit
EpsilonGreedy <|-- SampleAverage
EpsilonGreedy <|-- WeightedAverage
WeightedAverage <|-- WeightedAverageNBias
Environment <|-- NArmedBanditEnv
NArmedBanditEnv --> Bandit
NArmedBanditEnv --> NonstationaryBandit
Testbed <|-- NArmedBanditTestbed
NArmedBanditTestbed <|-- NArmedBanditParamStudy
....
// source file: doc/preface.adoc (repo: stetre/moonglmath, license: MIT)
== Preface
This is the reference manual of *MoonGLMATH*, which is a
http://www.lua.org[*Lua*] math library for
https://github.com/stetre/moongl[*MoonGL*].
footnote:[
This manual is written in
http://www.methods.co.nz/asciidoc/[AsciiDoc], rendered with
http://asciidoctor.org/[AsciiDoctor] and a CSS from the
https://github.com/asciidoctor/asciidoctor-stylesheet-factory[AsciiDoctor Stylesheet Factory].
The PDF version is produced with
https://github.com/asciidoctor/asciidoctor-pdf[AsciiDoctor-Pdf].]
It is assumed that the reader is familiar with the Lua programming language.
For convenience of reference, this document contains external (deep) links to the
http://www.lua.org/manual/5.3/manual.html[Lua Reference Manual].
=== Getting and installing
For installation instructions, refer to the README file in the
https://github.com/stetre/moonglmath[*MoonGLMATH official repository*]
on GitHub.
////
The *official repository* of MoonGLMATH is on GitHub at the following link:
*https://github.com/stetre/moonglmath* .
MoonGLMATH runs on GNU/Linux and requires
*http://www.lua.org[Lua]* version 5.3 or greater.
To install MoonGLMATH, download the
https://github.com/stetre/moonglmath/releases[latest release] and do the following:
[source,shell]
----
# ... download moonglmath-0.1.tar.gz ...
[ ]$ tar -zxpvf moonglmath-0.1.tar.gz
[ ]$ cd moonglmath-0.1
[moonglmath-0.1]$ make
[moonglmath-0.1]$ make check
[moonglmath-0.1]$ sudo make install
----
The _$make check_ command shows you what will be installed and where (please read
its output before executing _$make install_).
By default, MoonGLMATH installs its components in subdirectories of `/usr/local/`
(and creates such directories, if needed).
This behaviour can be changed by defining PREFIX with the desired alternative
base installation directory. For example, this will install the components
in `/home/joe/local`:
[source,shell]
----
[moonglmath-0.1]$ make
[moonglmath-0.1]$ make install PREFIX=/home/joe/local
----
////
=== Module organization
The MoonGLMATH module is loaded using Lua's
http://www.lua.org/manual/5.3/manual.html#pdf-require[require]() and
returns a table containing the functions it provides
(as usual with Lua modules). This manual assumes that such
table is named *glmath*, i.e. that it is loaded with:
[source,lua,indent=1]
----
glmath = require("moonglmath")
----
but nothing forbids the use of a different name.
=== Examples
A few examples can be found in the *examples/* directory of the release package.
=== License
MoonGLMATH is released under the *MIT/X11 license* (same as
http://www.lua.org/license.html[Lua], and with the same only requirement to give proper
credits to the original author).
The copyright notice is in the LICENSE file in the base directory
of the https://github.com/stetre/moonglmath[official repository] on GitHub.
[[see-also]]
=== See also
MoonGLMATH is part of https://github.com/stetre/moonlibs[MoonLibs], a collection of
Lua libraries for graphics and audio programming.
// source file: source/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/index.adoc (repo: anhhai986/ovirt-site, license: MIT)
:ovirt-doc:
include::../common/collateral_files/attributes.adoc[]
= Installing {virt-product-fullname} as a self-hosted engine using the Cockpit web interface
:context: SHE_cockpit_deploy
:SHE_cockpit_deploy:
// Make sure Jekyll displays a guide title
[discrete]
= Installing {virt-product-fullname} as a self-hosted engine using the Cockpit web interface
Self-hosted engine installation is automated using Ansible. The Cockpit web interface's installation wizard runs on an initial deployment host, and the {virt-product-fullname} {engine-name} (or "engine") is installed and configured on a virtual machine that is created on the deployment host. The {engine-name} and Data Warehouse databases are installed on the {engine-name} virtual machine, but can be migrated to a separate server post-installation if required.
Cockpit is available by default on {hypervisor-fullname}s, and can be installed on {enterprise-linux-host-fullname}s.
Hosts that can run the {engine-name} virtual machine are referred to as self-hosted engine nodes. At least two self-hosted engine nodes are required to support the high availability feature.
A storage domain dedicated to the {engine-name} virtual machine is referred to as the self-hosted engine storage domain. This storage domain is created by the installation script, so the underlying storage must be prepared before beginning the installation.
See the link:{URL_downstream_virt_product_docs}planning_and_prerequisites_guide/index[_Planning and Prerequisites Guide_] for information on environment options and recommended configuration. See link:{URL_downstream_virt_product_docs}planning_and_prerequisites_guide/index#self-hosted-engine-recommendations[Self-Hosted Engine Recommendations] for configuration specific to a self-hosted engine environment.
include::../common/arch/con-RHV_key_components.adoc[]
[discrete]
include::../common/arch/con-Self-hosted_Engine_Architecture.adoc[leveloffset=+1]
[id='Install_overview_SHE_cockpit_deploy']
== Installation Overview
The self-hosted engine installation uses Ansible and the {engine-appliance-name} (a pre-configured {engine-name} virtual machine image) to automate the following tasks:
* Configuring the first self-hosted engine node
* Installing a {enterprise-linux} virtual machine on that node
* Installing and configuring the {virt-product-fullname} {engine-name} on that virtual machine
* Configuring the self-hosted engine storage domain
include::../common/install/snip-rhvm-appliance-note.adoc[]
Installing a self-hosted engine environment involves the following steps:
. xref:Preparing_Storage_for_RHV_SHE_cockpit_deploy[Prepare storage to use for the self-hosted engine storage domain and for standard storage domains.] You can use one of the following storage types:
* xref:Preparing_NFS_Storage_SHE_cockpit_deploy[NFS]
* xref:Preparing_iSCSI_Storage_SHE_cockpit_deploy[iSCSI]
* xref:Preparing_FCP_Storage_SHE_cockpit_deploy[Fibre Channel (FCP)]
* xref:Preparing_Red_Hat_Gluster_Storage_SHE_cockpit_deploy[{gluster-storage-fullname}]
. xref:Installing_the_self-hosted_engine_deployment_host_SHE_cockpit_deploy[Install a deployment host to run the installation on.] This host will become the first self-hosted engine node. You can use either host type:
* xref:Installing_Red_Hat_Virtualization_Hosts_SHE_deployment_host[{hypervisor-fullname}]
* xref:Installing_Red_Hat_Enterprise_Linux_Hosts_SHE_deployment_host[{enterprise-linux}]
+
Cockpit is available by default on {hypervisor-fullname}s, and can be installed on {enterprise-linux-host-fullname}s.
. xref:Installing_the_Red_Hat_Virtualization_Manager_SHE_cockpit_deploy[Install and configure the {virt-product-fullname} {engine-name}:]
.. xref:Enabling-and-configuring-firewall_install_RHVM[Enabling and configuring the firewall]
.. xref:Deploying_the_Self-Hosted_Engine_Using_Cockpit_install_RHVM[Install the self-hosted engine through the deployment host's Cockpit web interface.]
.. xref:Enabling_the_Red_Hat_Virtualization_Manager_Repositories_install_RHVM[Enable the {virt-product-fullname} {engine-name} repositories.]
.. xref:Connecting_to_the_Administration_Portal_install_RHVM[Connect to the Administration Portal to add hosts and storage domains.]
. xref:Installing_Hosts_for_RHV_SHE_cockpit_deploy[Add more self-hosted engine nodes and standard hosts to the {engine-name}.] Self-hosted engine nodes can run the {engine-name} virtual machine and other virtual machines. Standard hosts can run all other virtual machines, but not the {engine-name} virtual machine.
.. Use either host type, or both:
* xref:Red_Hat_Virtualization_Hosts_SHE_cockpit_deploy[{hypervisor-fullname}]
* xref:Red_Hat_Enterprise_Linux_hosts_SHE_cockpit_deploy[{enterprise-linux}]
.. xref:Adding_self-hosted_engine_nodes_to_the_Manager_SHE_cockpit_deploy[Add hosts to the {engine-name} as self-hosted engine nodes.]
.. xref:Adding_standard_hosts_to_the_Manager_SHE_cockpit_deploy[Add hosts to the {engine-name} as standard hosts.]
. xref:Adding_Storage_Domains_to_RHV_SHE_cockpit_deploy[Add more storage domains to the {engine-name}.] The self-hosted engine storage domain is not recommended for use by anything other than the {engine-name} virtual machine.
. If you want to host any databases or services on a server separate from the {engine-name}, xref:Migrating_to_remote_servers_SHE_cockpit_deploy[you can migrate them after the installation is complete.]
[IMPORTANT]
====
Keep the environment up to date. Since bug fixes for known issues are frequently released, use scheduled tasks to update the hosts and the {engine-name}.
====
include::../common/prereqs/asm-Requirements.adoc[leveloffset=+1]
include::../common/storage/assembly-Preparing_Storage_for_RHV.adoc[leveloffset=+1]
include::../common/install/assembly-Installing_the_self-hosted_engine_deployment_host.adoc[leveloffset=+1]
// Adding context back after assembly
:context: SHE_cockpit_deploy
[id='Installing_the_Red_Hat_Virtualization_Manager_SHE_cockpit_deploy']
== Installing the {virt-product-fullname} {engine-name}
:context: install_RHVM
include::../common/she/proc_Manually_installing_the_appliance.adoc[leveloffset=+2]
include::common/network/proc_Enabling-and-configuring-firewall.adoc[leveloffset=+2]
include::../common/install/proc-Deploying_the_Self-Hosted_Engine_Using_Cockpit.adoc[leveloffset=+2]
The next step is to enable the {virt-product-fullname} {engine-name} repositories.
include::../common/install/proc-Enabling_the_Red_Hat_Virtualization_Manager_Repositories.adoc[leveloffset=+2]
Log in to the Administration Portal, where you can add hosts and storage to the environment:
include::../common/admin/proc-Connecting_to_the_Administration_Portal.adoc[leveloffset=+2]
//end sect
// Adding context back after assembly
:context: SHE_cockpit_deploy
include::../common/install/assembly-Installing_Hosts_for_RHV.adoc[leveloffset=+1]
include::../common/storage/assembly-Adding_Storage_Domains_to_RHV.adoc[leveloffset=+1]
:numbered!:
[appendix]
include::../common/she/assembly-Troubleshooting_a_self-hosted_engine_deployment.adoc[leveloffset=+1]
[appendix]
include::../common/install/proc-customizing_engine_vm_during_deployment_auto.adoc[leveloffset=+1]
[appendix]
[id='Migrating_to_remote_servers_SHE_cockpit_deploy']
== Migrating Databases and Services to a Remote Server
Although you cannot configure remote databases and services during the automated installation, you can migrate them to a remote server post-installation.
//ddacosta - removed the Migrating the Manager Database topics for BZ#1854791
//include::../common/database/proc-Migrating_the_self-hosted_engine_database_to_a_remote_server.adoc[leveloffset=+2]
// Adding context back after assembly
//:context: SHE_cockpit_deploy
include::../common/database/assembly-Migrating_Data_Warehouse_to_a_Separate_Machine.adoc[leveloffset=+2]
// Adding context back after assembly
:context: SHE_cockpit_deploy
// include::../common/admin/proc-Migrating_the_Websocket_Proxy_to_a_Separate_Machine.adoc[leveloffset=+2]
//end sect
[appendix]
include::../common/install/proc-Configuring_a_Host_for_PCI_Passthrough.adoc[leveloffset=+1]
ifdef::context[:parent-context: {context}]
:context: Install_nodes_RHVH
[appendix]
include::../common/install/proc-Preventing_Kernel_Modules_from_Loading_Automatically.adoc[leveloffset=+1]
ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
== Legal notice
Certain portions of this text first appeared in link:{URL_downstream_virt_product_docs}installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_cockpit_web_interface/[Red Hat Virtualization {vernum_rhv} Installing Red Hat Virtualization as a self-hosted engine using the Cockpit web interface]. Copyright © 2020 Red Hat, Inc. Licensed under a link:http://creativecommons.org/licenses/by-sa/3.0/[Creative Commons Attribution-ShareAlike 3.0 Unported License].
[id='adding-twitter-connections']
= Adding a Twitter connection to an integration
Before you can add a Twitter connection to an integration, you must create
the connection. If you have not already created a Twitter connection,
see <<create-twitter-connection>>.
You can add a connection to an integration only while you are creating
or updating that integration. If you need to, see
<<procedure-for-creating-an-integration>> or <<updating-integrations>>.
The instructions below
assume that {prodname} is prompting you to select a start connection, a
finish connection or a middle connection.
To add a Twitter connection to an integration:
. On the page that displays available connections, click the Twitter
connection that you want to add to the integration. When the integration
uses the selected connection to connect to Twitter, {prodname} uses the
credentials defined in that connection.
. Click the action that you want the selected connection to perform.
Each Twitter connection that you add to an integration performs only
the action you choose.
. Optionally, enter the configuration information that {prodname}
prompts for. For example, the *Search* action prompts you to specify
how often to search and keywords to search for.
. Click *Done* to add the connection to the integration.
:relative-path: ../../
include::{docdir}/variables.adoc[]
Usage of `MessagingService` is exactly the same as standalone usage. The only difference is that `MessagingService` is automatically created and injectable.
The following sample shows a Spring Web application that exposes one simple endpoint for sending SMS messages using Ogham. The sample shows several Ogham features at once:
* Using a text template (FreeMarker)
* Templates are located in a sub-folder, and the prefix for SMS templates is configured through an Ogham property
* The SMS template extension is configured globally in order to avoid hard-coding the extension in Java code
* Using a configuration property to define the sender phone number
* The SMPP server host, port and authentication are defined using properties
[role="tab-container no-max-height"]
SMS Sample with Spring Boot
[role=tab]
image:{images-dir}/icons/java-logo.png[width=16,height=30] Java
[source, java, role="collapse-lines:1-18,47-59 irrelevant-lines:1-18 highlight-lines:32,33,39-42"]
----
include::{spring-sms-samples-sourcedir}/TemplateSample.java[]
----
<1> Inject Ogham service
<2> Use the Ogham service to send an SMS
<3> Use a text template as SMS content
<4> Use any Java object for evaluating template variables
<5> The sender phone number that comes from request parameter
{spring-sms-samples-sourcedir-url}/TemplateSample.java?ts=4[Source code of the sample].
[role=tab]
image:{images-dir}/icons/freemarker-logo.png[width=60,height=24] Template
[source]
----
include::{spring-samples-resourcesdir}/sms/register.txt.ftl[]
----
{spring-samples-resourcesdir-url}/sms/register.txt.ftl?ts=4[Source code of the template]
[role=tab]
image:{images-dir}/icons/properties.png[width=37,height=30] Spring properties
[source, properties]
----
include::{spring-samples-resourcesdir}/application-sms-template.properties[]
----
<1> The SMPP host
<2> The SMPP port
<3> The SMPP system ID (account)
<4> The SMPP password
<5> The sender phone number that is declared globally
<6> The path prefix for SMS templates
<7> The path suffix for SMS templates (FreeMarker extension in this case)
{spring-samples-resourcesdir-url}/application-sms-template.properties?ts=4[Source code of the properties]
[role=tab-container-end]
-
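To make the interplay of these properties concrete, the following self-contained sketch shows how a template path prefix/suffix pair and a globally declared sender number combine, and how a sender passed with the request overrides the global default. The property keys and the helpers `resolveTemplatePath` and `resolveSender` are hypothetical illustrations, not part of the Ogham API — Ogham performs the equivalent resolution internally based on the properties shown in the tabs above.

```java
import java.util.Map;
import java.util.Optional;

public class OghamPropertySketch {
    // NOTE: illustrative placeholder keys, not the real Ogham property names;
    // see the "Spring properties" tab for the actual keys.
    static final String PREFIX_KEY = "sms.template.path-prefix";
    static final String SUFFIX_KEY = "sms.template.path-suffix";
    static final String FROM_KEY = "sms.from";

    /** Full template path = configured prefix + template name + configured suffix. */
    static String resolveTemplatePath(Map<String, String> props, String name) {
        return props.getOrDefault(PREFIX_KEY, "")
                + name
                + props.getOrDefault(SUFFIX_KEY, "");
    }

    /** A sender passed with the request wins; otherwise fall back to the global property. */
    static String resolveSender(Optional<String> requestParam, Map<String, String> props) {
        return requestParam.orElseGet(() -> props.get(FROM_KEY));
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
                PREFIX_KEY, "/sms/",
                SUFFIX_KEY, ".txt.ftl",
                FROM_KEY, "+33600000000");

        // "register" resolves to a path like the template shown in the Template tab.
        System.out.println(resolveTemplatePath(props, "register")); // /sms/register.txt.ftl
        // No request parameter: the globally configured sender is used.
        System.out.println(resolveSender(Optional.empty(), props)); // +33600000000
        // A request parameter takes precedence over the global default.
        System.out.println(resolveSender(Optional.of("+33699999999"), props)); // +33699999999
    }
}
```

This is only a mental model: in the real sample, the Java code passes the bare template name and the sender resolution happens inside Ogham, driven by the configuration.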
= Welcome to Red Hat Workshop - CI/CD & GitOps
:page-layout: home
:!sectids:
[.text-center.strong]
== Introduction
_Red Hat Workshop CI/CD & GitOps_ is a repository that contains a tutorial with a set of hands-on exercises for practicing Kubernetes-native CI/CD pipelines in a GitOps model.
The general idea behind this workshop is to provide an initial understanding of the following points:
- Deploy microservice-based applications in Kubernetes using ArgoCD and Helm
- Operate applications in Kubernetes using ArgoCD in a GitOps model
- Implement Kubernetes Native CI/CD Pipelines strategy with Tekton
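As a taste of the first two points, a deployment managed by ArgoCD is typically declared as an `Application` resource that points at a Helm chart in Git. The sketch below is illustrative only — the repository URL, chart path, and namespaces are hypothetical placeholders; the workshop materials define the actual values.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jump-app                     # hypothetical application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/jump-app-gitops.git   # placeholder GitOps repo
    targetRevision: main
    path: charts/jump-app                                     # placeholder Helm chart path
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: jump-app-dev
  syncPolicy:
    automated:       # GitOps model: ArgoCD applies changes from Git automatically
      prune: true
      selfHeal: true
```

With `syncPolicy.automated` set, any commit to the GitOps repository is reconciled into the cluster without manual intervention, which is the behavior exercised in the GitOps workflow step.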
[.text-center.strong]
== Getting Started
Before you start working on this tutorial, review your local machine prerequisites and make sure you can access the laboratory environment.
Please follow xref:01-setup.adoc[Getting Started] for more information.
[.text-center.strong]
== Tutorial Steps
=== Deployment Workflow
In this step, you will prepare a GitOps repository and deploy the _Jump App_ application in OpenShift using ArgoCD and Helm.
xref:02-deployment.adoc[Start this exercise...]
=== GitOps Workflow
In this step, you will make a change in the GitOps GitHub repository where the application settings are located, and ArgoCD will then apply this configuration automatically.
xref:03-gitops.adoc[Start this exercise...]
=== CI/CD Workflow
In this step, you will perform a set of tasks in order to understand and implement a CI/CD strategy:
* Fork a _Jump App_ microservice project in order to be able to make changes in the source code
* Change some _Jump App_ microservices settings in a GitOps fashion
* Deploy a CI/CD project in OpenShift
* Configure GitHub to notify your CI/CD project for changes
* Modify a microservice's source code and integrate the new release in a continuous integration and deployment fashion
xref:04-cicd.adoc[Start this exercise...] | 39.666667 | 187 | 0.790966 |