Instruction stringlengths 14 778 | input_code stringlengths 0 4.24k | output_code stringlengths 1 5.44k |
|---|---|---|
Remove tech preview for odo debugging | [id='debugging-applications-in-odo']
= Debugging applications in `{odo-title}`
include::modules/developer-cli-odo-attributes.adoc[]
include::modules/common-attributes.adoc[]
:context: debugging-applications-in-odo
toc::[]
:FeatureName: Interactive debugging in {odo-title}
include::modules/technology-preview.adoc[leveloffset=+1]
With `{odo-title}`, you can attach a debugger to remotely debug your application. This feature is only supported for NodeJS and Java components.
Components created with `{odo-title}` run in the debug mode by default. A debugger agent runs on the component, on a specific port. To start debugging your application, you must start port forwarding and attach the local debugger bundled in your Integrated development environment (IDE).
include::modules/developer-cli-odo-debugging-an-application.adoc[leveloffset=+1]
include::modules/developer-cli-odo-configuring-debugging-parameters.adoc[leveloffset=+1]
| [id='debugging-applications-in-odo']
= Debugging applications in `{odo-title}`
include::modules/developer-cli-odo-attributes.adoc[]
include::modules/common-attributes.adoc[]
:context: debugging-applications-in-odo
toc::[]
With `{odo-title}`, you can attach a debugger to remotely debug your application. This feature is only supported for NodeJS and Java components.
Components created with `{odo-title}` run in the debug mode by default. A debugger agent runs on the component, on a specific port. To start debugging your application, you must start port forwarding and attach the local debugger bundled in your Integrated development environment (IDE).
include::modules/developer-cli-odo-debugging-an-application.adoc[leveloffset=+1]
include::modules/developer-cli-odo-configuring-debugging-parameters.adoc[leveloffset=+1]
|
Update URL to Groovy Docs | This sample application uses Spring Boot and
http://beta.groovy-lang.org/docs/groovy-2.3.1/html/documentation/markup-template-engine.html[Groovy templates]
in the View layer. The templates for this app live in `classpath:/templates/`, which is
the conventional location for Spring Boot. External configuration is available via
``spring.groovy.template.*''.
| This sample application uses Spring Boot and
http://docs.groovy-lang.org/latest/html/documentation/#_the_markuptemplateengine[Groovy templates]
in the View layer. The templates for this app live in `classpath:/templates/`, which is
the conventional location for Spring Boot. External configuration is available via
``spring.groovy.template.*''.
|
Update to the new {index}/_mapping/{type} format | [[indices-get-mapping]]
== Get Mapping
The get mapping API allows to retrieve mapping definitions for an index or
index/type.
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/_mapping/tweet'
--------------------------------------------------
[float]
=== Multiple Indices and Types
The get mapping API can be used to get more than one index or type
mapping with a single call. General usage of the API follows the
following syntax: `host:port/{index}/{type}/_mapping` where both
`{index}` and `{type}` can accept a comma-separated list of names. To
get mappings for all indices you can use `_all` for `{index}`. The
following are some examples:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_mapping/twitter,kimchy'
curl -XGET 'http://localhost:9200/_all/_mapping/tweet,book'
--------------------------------------------------
If you want to get mappings of all indices and types then the following
two examples are equivalent:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_all/_mapping'
curl -XGET 'http://localhost:9200/_mapping'
--------------------------------------------------
| [[indices-get-mapping]]
== Get Mapping
The get mapping API allows to retrieve mapping definitions for an index or
index/type.
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/_mapping/tweet'
--------------------------------------------------
[float]
=== Multiple Indices and Types
The get mapping API can be used to get more than one index or type
mapping with a single call. General usage of the API follows the
following syntax: `host:port/{index}/_mapping/{type}` where both
`{index}` and `{type}` can accept a comma-separated list of names. To
get mappings for all indices you can use `_all` for `{index}`. The
following are some examples:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_mapping/twitter,kimchy'
curl -XGET 'http://localhost:9200/_all/_mapping/tweet,book'
--------------------------------------------------
If you want to get mappings of all indices and types then the following
two examples are equivalent:
[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/_all/_mapping'
curl -XGET 'http://localhost:9200/_mapping'
--------------------------------------------------
|
Remove link to in-source docs | = Mongo Client
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-mongo-client["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-mongo-client/"]
An asynchronous client for interacting with a MongoDB database
Please see the in source asciidoc link:vertx-mongo-client/src/main/asciidoc/index.adoc[documentation] or the main documentation on the http://vertx.io/docs/#data_access[web-site] for a full description
of Mongo service.
The following docker CLI can be used for running mongo tests:
```
docker run --rm --name vertx-mongo -p 27017:27017 mongo
```
| = Mongo Client
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-mongo-client["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-mongo-client/"]
An asynchronous client for interacting with a MongoDB database
Please see the main documentation on the web-site for a full description:
* https://vertx.io/docs/vertx-mongo-client/java/[Java documentation]
* https://vertx.io/docs/vertx-mongo-client/js/[JavaScript documentation]
* https://vertx.io/docs/vertx-mongo-client/kotlin/[Kotlin documentation]
* https://vertx.io/docs/vertx-mongo-client/groovy/[Groovy documentation]
* https://vertx.io/docs/vertx-mongo-client/ruby/[Ruby documentation]
The following docker CLI can be used for running mongo tests:
```
docker run --rm --name vertx-mongo -p 27017:27017 mongo
```
|
Add link to 2nd part of JAXenter tutorial | # Janitor image:https://travis-ci.org/techdev-solutions/janitor.svg?branch=master["Build Status",link="https://travis-ci.org/techdev-solutions/janitor"]
An application to perform cleanup work using the https://getpocket.com[Pocket API].
## API Documentation
The documentation for the Kotlin API bindings can be found https://techdev-solutions.github.io/janitor/pocket-api/[here].
## Tutorial (German)
- https://jaxenter.de/kotlin-tutorial-48156[Kotlin: Ein Tutorial fΓΌr Einsteiger @ JAXenter]
| # Janitor image:https://travis-ci.org/techdev-solutions/janitor.svg?branch=master["Build Status",link="https://travis-ci.org/techdev-solutions/janitor"]
An application to perform cleanup work using the https://getpocket.com[Pocket API].
## API Documentation
The documentation for the Kotlin API bindings can be found https://techdev-solutions.github.io/janitor/pocket-api/[here].
## Tutorial (German)
Kotlin: Ein Tutorial fΓΌr Einsteiger @ JAXenter
- https://jaxenter.de/kotlin-tutorial-48156[Part 1]
- https://jaxenter.de/kotlin-ein-tutorial-fuer-einsteiger-teil-2-48587[Part 2]
|
Update userguide: show more levels in the TOC | = Activiti User Guide
v 5.17.1-SNAPSHOT
:doctype: book
:toc: left
:icons: font
:numbered:
:source-highlighter: pygments
:pygments-css: class
:pygments-linenums-mode: table
:compat-mode:
include::ch01-Introduction.adoc[]
include::ch02-GettingStarted.adoc[]
include::ch03-Configuration.adoc[]
include::ch04-API.adoc[]
include::ch05-Spring.adoc[]
include::ch06-Deployment.adoc[]
include::ch07a-BPMN-Introduction.adoc[]
include::ch07b-BPMN-Constructs.adoc[]
include::ch08-Forms.adoc[]
include::ch09-JPA.adoc[]
include::ch10-History.adoc[]
include::ch11-Designer.adoc[]
include::ch12-Explorer.adoc[]
include::ch13-Modeler.adoc[]
include::ch14-REST.adoc[]
include::ch15-Cdi.adoc[]
include::ch16-Ldap.adoc[]
include::ch17-Advanced.adoc[]
include::ch18-Simulation.adoc[]
include::ch19-operation-control.adoc[]
| = Activiti User Guide
v 5.17.1-SNAPSHOT
:doctype: book
:toc: left
:toclevels: 5
:icons: font
:numbered:
:source-highlighter: pygments
:pygments-css: class
:pygments-linenums-mode: table
:compat-mode:
include::ch01-Introduction.adoc[]
include::ch02-GettingStarted.adoc[]
include::ch03-Configuration.adoc[]
include::ch04-API.adoc[]
include::ch05-Spring.adoc[]
include::ch06-Deployment.adoc[]
include::ch07a-BPMN-Introduction.adoc[]
include::ch07b-BPMN-Constructs.adoc[]
include::ch08-Forms.adoc[]
include::ch09-JPA.adoc[]
include::ch10-History.adoc[]
include::ch11-Designer.adoc[]
include::ch12-Explorer.adoc[]
include::ch13-Modeler.adoc[]
include::ch14-REST.adoc[]
include::ch15-Cdi.adoc[]
include::ch16-Ldap.adoc[]
include::ch17-Advanced.adoc[]
include::ch18-Simulation.adoc[]
include::ch19-operation-control.adoc[]
|
Fix markup of preformatted text. | = Tiny Git
A tiny model of Git, used for learning and demonstrating how Git
works. A series of version of Tiny Git is available, in increasing
order of complexity. The list of versions and their features is listed
below.
[options="header"]
|======
| Version | Description
| v0 | Only argument parsing
| v1 | Implements single file commits without history tracking
| v2 | Adds support for history tracking and logs
| v3 | Adds support for checking-out older revisions
| v4 | Adds support for creating branching
| v5 | Adds support for merging changes
|======
== Usage
The first step is to activate a particular version of Tiny Git. From
the top level directory of the project source `activate` and specify
the version no. to activate. For example, to activate version `v5`,
the following command can be used.
------
$ source activate v5
------
Type `tig` to get the available list of sub-commands.
------
$ tig
Usage:
tig init
tig commit <msg>
tig checkout <start-point> [-b <branch-name>]
tig diff
tig log
tig branch
tig merge <branch>
-----
== Slides: Building Git From Scratch
The link:docs/slides.asciidoc[] provide more information about the
various revsions and how to build Git incrementally. | = Tiny Git
A tiny model of Git, used for learning and demonstrating how Git
works. A series of version of Tiny Git is available, in increasing
order of complexity. The list of versions and their features is listed
below.
[options="header"]
|======
| Version | Description
| v0 | Only argument parsing
| v1 | Implements single file commits without history tracking
| v2 | Adds support for history tracking and logs
| v3 | Adds support for checking-out older revisions
| v4 | Adds support for creating branching
| v5 | Adds support for merging changes
|======
== Usage
The first step is to activate a particular version of Tiny Git. From
the top level directory of the project source `activate` and specify
the version no. to activate. For example, to activate version `v5`,
the following command can be used.
------
$ source activate v5
------
Type `tig` to get the available list of sub-commands.
------
$ tig
Usage:
tig init
tig commit <msg>
tig checkout <start-point> [-b <branch-name>]
tig diff
tig log
tig branch
tig merge <branch>
------
== Slides: Building Git From Scratch
The link:docs/slides.asciidoc[] provide more information about the
various revsions and how to build Git incrementally. |
Update What's New for 5.7 | [[new]]
= What's New in Spring Security 5.7
Spring Security 5.7 provides a number of new features.
Below are the highlights of the release.
[[whats-new-servlet]]
== Servlet
* Web
** Introduced xref:servlet/authentication/persistence.adoc#requestattributesecuritycontextrepository[`RequestAttributeSecurityContextRepository`]
** Introduced xref:servlet/authentication/persistence.adoc#securitycontextholderfilter[`SecurityContextHolderFilter`] - Ability to require explicit saving of the `SecurityContext`
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerOAuth2AuthorizedClientProvider`
[[whats-new-webflux]]
== WebFlux
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerReactiveOAuth2AuthorizedClientProvider`
| [[new]]
= What's New in Spring Security 5.7
Spring Security 5.7 provides a number of new features.
Below are the highlights of the release.
[[whats-new-servlet]]
== Servlet
* Web
** Introduced xref:servlet/authentication/persistence.adoc#requestattributesecuritycontextrepository[`RequestAttributeSecurityContextRepository`]
** Introduced xref:servlet/authentication/persistence.adoc#securitycontextholderfilter[`SecurityContextHolderFilter`] - Ability to require explicit saving of the `SecurityContext`
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerOAuth2AuthorizedClientProvider`
** Allow customizing claims on https://github.com/spring-projects/spring-security/issues/9855[JWT client assertions]
[[whats-new-webflux]]
== WebFlux
* OAuth 2.0 Client
** Allow configuring https://github.com/spring-projects/spring-security/issues/6548[PKCE for confidential clients]
** Allow configuring a https://github.com/spring-projects/spring-security/issues/9812[JWT assertion resolver] in `JwtBearerReactiveOAuth2AuthorizedClientProvider`
|
Fix XSD includes: use -1 as end of file indicator | include::common.adoc[]
= Ehcache XSDs
include::menu.adoc[]
== Core
[source,xsd,indent=0]
----
include::{sourcedir}/xml/src/main/resources/ehcache-core.xsd[lines=18..260]
----
== JSR-107 extension
[source,xsd,indent=0]
----
include::{sourcedir}/107/src/main/resources/ehcache-107ext.xsd[lines=18..44]
----
| include::common.adoc[]
= Ehcache XSDs
include::menu.adoc[]
== Core
[source,xsd,indent=0]
----
include::{sourcedir}/xml/src/main/resources/ehcache-core.xsd[lines=18..-1]
----
== JSR-107 extension
[source,xsd,indent=0]
----
include::{sourcedir}/107/src/main/resources/ehcache-107ext.xsd[lines=18..-1]
----
|
Remove link to nonexistent ILM API | [[index-lifecycle-management-api]]
== Index Lifecycle Management API
You can use the following APIs to manage policies on indices.
[float]
[[ilm-api-policy-endpoint]]
=== Policy Management APIs
* <<ilm-put-lifecycle,Create Lifecycle Policy>>
* <<ilm-get-lifecycle,Get Lifecycle Policy>>
* <<ilm-delete-lifecycle,Delete Lifecycle Policy>>
[float]
[[ilm-api-index-endpoint]]
=== Index Management APIs
* <<ilm-move-to-step,Move Index To Step>>
* <<ilm-set-policy,Set Policy On Index>>
* <<ilm-retry-policy,Retry Policy On Indices>>
[float]
[[ilm-api-management-endpoint]]
=== Operation Management APIs
* <<ilm-get-status,Get ILM Operation Mode>>
* <<ilm-start,Start ILM>>
* <<ilm-stop,Stop ILM>>
* <<ilm-explain,Explain API>>
include::put-lifecycle.asciidoc[]
include::get-lifecycle.asciidoc[]
include::delete-lifecycle.asciidoc[]
include::move-to-step.asciidoc[]
include::remove-policy.asciidoc[]
include::retry-policy.asciidoc[]
include::get-status.asciidoc[]
include::explain.asciidoc[]
include::start.asciidoc[]
include::stop.asciidoc[]
| [[index-lifecycle-management-api]]
== Index Lifecycle Management API
You can use the following APIs to manage policies on indices.
[float]
[[ilm-api-policy-endpoint]]
=== Policy Management APIs
* <<ilm-put-lifecycle,Create Lifecycle Policy>>
* <<ilm-get-lifecycle,Get Lifecycle Policy>>
* <<ilm-delete-lifecycle,Delete Lifecycle Policy>>
[float]
[[ilm-api-index-endpoint]]
=== Index Management APIs
* <<ilm-move-to-step,Move Index To Step>>
* <<ilm-retry-policy,Retry Policy On Indices>>
[float]
[[ilm-api-management-endpoint]]
=== Operation Management APIs
* <<ilm-get-status,Get ILM Operation Mode>>
* <<ilm-start,Start ILM>>
* <<ilm-stop,Stop ILM>>
* <<ilm-explain,Explain API>>
include::put-lifecycle.asciidoc[]
include::get-lifecycle.asciidoc[]
include::delete-lifecycle.asciidoc[]
include::move-to-step.asciidoc[]
include::remove-policy.asciidoc[]
include::retry-policy.asciidoc[]
include::get-status.asciidoc[]
include::explain.asciidoc[]
include::start.asciidoc[]
include::stop.asciidoc[]
|
Update What's New for 6.0 | [[new]]
= What's New in Spring Security 6.0
Spring Security 6.0 provides a number of new features.
Below are the highlights of the release.
== Breaking Changes
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.
Instead use data storage to encrypt values. | [[new]]
= What's New in Spring Security 6.0
Spring Security 6.0 provides a number of new features.
Below are the highlights of the release.
== Breaking Changes
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.
Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
|
Update link to puppet module and remove link to other RPM repo as we have our own. | [[misc]]
== Misc
* https://github.com/electrical/puppet-elasticsearch[Puppet]:
Elasticsearch puppet module.
* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:
Chef cookbook for Elasticsearch
* https://github.com/tavisto/elasticsearch-rpms[elasticsearch-rpms]:
RPMs for elasticsearch.
* http://www.github.com/neogenix/daikon[daikon]:
Daikon Elasticsearch CLI
* https://github.com/Aconex/scrutineer[Scrutineer]:
A high performance consistency checker to compare what you've indexed
with your source of truth content (e.g. DB)
| [[misc]]
== Misc
* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:
Elasticsearch puppet module.
* http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:
Chef cookbook for Elasticsearch
* http://www.github.com/neogenix/daikon[daikon]:
Daikon Elasticsearch CLI
* https://github.com/Aconex/scrutineer[Scrutineer]:
A high performance consistency checker to compare what you've indexed
with your source of truth content (e.g. DB)
|
Add Patreon badge to readme | = griffon-monitor-plugin
:linkattrs:
:project-name: griffon-monitor-plugin
image:http://img.shields.io/travis/griffon-plugins/{project-name}/master.svg["Build Status", link="https://travis-ci.org/griffon-plugins/{project-name}"]
image:http://img.shields.io/coveralls/griffon-plugins/{project-name}/master.svg["Coverage Status", link="https://coveralls.io/r/griffon-plugins/{project-name}"]
image:http://img.shields.io/badge/license-ASF2-blue.svg["Apache License 2", link="http://www.apache.org/licenses/LICENSE-2.0.txt"]
image:https://api.bintray.com/packages/griffon/griffon-plugins/{project-name}/images/download.svg[link="https://bintray.com/griffon/griffon-plugins/{project-name}/_latestVersion"]
---
Enables application statistics through JMX.
Refer to the link:http://griffon-plugins.github.io/{project-name}/[plugin guide, window="_blank"] for
further information on configuration and usage.
| = griffon-monitor-plugin
:linkattrs:
:project-name: griffon-monitor-plugin
image:http://img.shields.io/travis/griffon-plugins/{project-name}/master.svg["Build Status", link="https://travis-ci.org/griffon-plugins/{project-name}"]
image:http://img.shields.io/coveralls/griffon-plugins/{project-name}/master.svg["Coverage Status", link="https://coveralls.io/r/griffon-plugins/{project-name}"]
image:http://img.shields.io/badge/license-ASF2-blue.svg["Apache License 2", link="http://www.apache.org/licenses/LICENSE-2.0.txt"]
image:https://api.bintray.com/packages/griffon/griffon-plugins/{project-name}/images/download.svg[link="https://bintray.com/griffon/griffon-plugins/{project-name}/_latestVersion"]
---
image:https://img.shields.io/gitter/room/griffon/griffon.js.svg[link="https://gitter.im/griffon/griffon]
image:https://img.shields.io/badge/donations-Patreon-orange.svg[https://www.patreon.com/user?u=6609318]
---
Enables application statistics through JMX.
Refer to the link:http://griffon-plugins.github.io/{project-name}/[plugin guide, window="_blank"] for
further information on configuration and usage.
|
Document ExpectedExceptionSupport bug fix in release notes | [[release-notes-5.1.0-M1]]
=== 5.1.0-M1
*Date of Release:* β
*Scope:* β
For a complete list of all _closed_ issues and pull requests for this release, consult the
link:{junit5-repo}+/milestone/14?closed=1+[5.1 M1] milestone page in the JUnit repository
on GitHub.
[[release-notes-5.1.0-junit-platform]]
==== JUnit Platform
===== Bug Fixes
* β
===== Deprecations and Breaking Changes
* β
===== New Features and Improvements
* The `junit-platform-surefire-provider` now requires `maven-surefire-plugin` version
2.20.1 or higher.
[[release-notes-5.1.0-junit-jupiter]]
==== JUnit Jupiter
===== Bug Fixes
* β
===== Deprecations and Breaking Changes
* β
===== New Features and Improvements
* β
[[release-notes-5.1.0-junit-vintage]]
==== JUnit Vintage
===== Bug Fixes
* β
===== Deprecations and Breaking Changes
* β
===== New Features and Improvements
* β
| [[release-notes-5.1.0-M1]]
=== 5.1.0-M1
*Date of Release:* β
*Scope:* β
For a complete list of all _closed_ issues and pull requests for this release, consult the
link:{junit5-repo}+/milestone/14?closed=1+[5.1 M1] milestone page in the JUnit repository
on GitHub.
[[release-notes-5.1.0-junit-platform]]
==== JUnit Platform
===== Bug Fixes
* β
===== Deprecations and Breaking Changes
* β
===== New Features and Improvements
* The `junit-platform-surefire-provider` now requires `maven-surefire-plugin` version
2.20.1 or higher.
[[release-notes-5.1.0-junit-jupiter]]
==== JUnit Jupiter
===== Bug Fixes
* `ExpectedExceptionSupport` from the `junit-jupiter-migrationsupport` module no longer
swallows exceptions if the test class does not declare a JUnit 4 `ExpectedException`
rule.
- Consequently, `@EnableRuleMigrationSupport` and `ExpectedExceptionSupport` may now be
used without declaring an `ExpectedException` rule.
===== Deprecations and Breaking Changes
* β
===== New Features and Improvements
* β
[[release-notes-5.1.0-junit-vintage]]
==== JUnit Vintage
===== Bug Fixes
* β
===== Deprecations and Breaking Changes
* β
===== New Features and Improvements
* β
|
Add syskeygen to Elasticsearch Reference | [role="xpack"]
[[xpack-commands]]
= {xpack} Commands
[partintro]
--
{xpack} includes commands that help you configure security:
* <<certgen>>
* <<setup-passwords>>
* <<users-command>>
--
include::certgen.asciidoc[]
include::setup-passwords.asciidoc[]
include::users-command.asciidoc[]
| [role="xpack"]
[[xpack-commands]]
= {xpack} Commands
[partintro]
--
{xpack} includes commands that help you configure security:
* <<certgen>>
* <<setup-passwords>>
* <<syskeygen>>
* <<users-command>>
--
include::certgen.asciidoc[]
include::setup-passwords.asciidoc[]
include::syskeygen.asciidoc[]
include::users-command.asciidoc[]
|
Add blog post link to site | spray-kamon-metrics: Better Kamon metrics for your Spray services
=================================================================
Daniel Solano_GΓ³mez
:description: The spray-kamon-metrics library augments kamon-spray to make it provide more useful metrics, \
particularly by providing Spray can server metrics and better Spray service response metrics.
:keywords: kamon, metrics, spray, scala
include::../../README.asciidoc[tags=status-badges]
include::../../README.asciidoc[tags=preamble]
== Documentation
* The https://github.com/MonsantoCo/spray-kamon-metrics/blob/master/README.asciidoc[README] on GitHub contains the most
comprehensive documentation for using the library.
* API documentation
** link:api/latest[latest]
** link:api/0.1.2[0.1.2]
== Release notes
include::../../release-notes.asciidoc[tags=release-notes]
| spray-kamon-metrics: Better Kamon metrics for your Spray services
=================================================================
Daniel Solano_GΓ³mez
:description: The spray-kamon-metrics library augments kamon-spray to make it provide more useful metrics, \
particularly by providing Spray can server metrics and better Spray service response metrics.
:keywords: kamon, metrics, spray, scala
include::../../README.asciidoc[tags=status-badges]
include::../../README.asciidoc[tags=preamble]
== Documentation
* The https://github.com/MonsantoCo/spray-kamon-metrics/blob/master/README.asciidoc[README] on GitHub contains the most
comprehensive documentation for using the library.
* API documentation
** link:api/latest[latest]
** link:api/0.1.2[0.1.2]
* Blog post: http://engineering.monsanto.com/2015/09/24/better-spray-metrics-with-kamon/[Better Spray metrics with Kamon]
== Release notes
include::../../release-notes.asciidoc[tags=release-notes]
|
Include M5 document in list of release notes | [[release-notes]]
== Release Notes
:numbered!:
include::release-notes-5.0.0-ALPHA.adoc[]
include::release-notes-5.0.0-M1.adoc[]
include::release-notes-5.0.0-M2.adoc[]
include::release-notes-5.0.0-M3.adoc[]
include::release-notes-5.0.0-M4.adoc[]
:numbered:
| [[release-notes]]
== Release Notes
:numbered!:
include::release-notes-5.0.0-ALPHA.adoc[]
include::release-notes-5.0.0-M1.adoc[]
include::release-notes-5.0.0-M2.adoc[]
include::release-notes-5.0.0-M3.adoc[]
include::release-notes-5.0.0-M4.adoc[]
include::release-notes-5.0.0-M5.adoc[]
:numbered:
|
Fix Reactive Web link in the ref doc | [[spring-web]]
= Web
:doc-root: https://docs.spring.io
:api-spring-framework: {doc-root}/spring-framework/docs/{spring-version}/javadoc-api/org/springframework
:toc: left
:toclevels: 2
This part of the documentation covers support for web applications designed to run on a
traditional Servlet stack (Servlet API + Servlet container).
Chapters cover the Servlet-based <<mvc,Spring MVC>> web framework including <<mvc-view,Views>>,
<<mvc-cors,CORS>>, and <<websocket,WebSocket>> support.
Note that as of Spring Framework 5.0 web applications can also run on a
<<spring-web-reactive, reactive web stack>> (Reactive Streams API + non-blocking runtime).
include::web/webmvc.adoc[leveloffset=+1]
include::web/webmvc-view.adoc[leveloffset=+1]
include::web/webmvc-cors.adoc[leveloffset=+1]
include::web/websocket.adoc[leveloffset=+1]
| [[spring-web]]
= Web
:doc-root: https://docs.spring.io
:api-spring-framework: {doc-root}/spring-framework/docs/{spring-version}/javadoc-api/org/springframework
:toc: left
:toclevels: 2
This part of the documentation covers support for web applications designed to run on a
traditional Servlet stack (Servlet API + Servlet container).
Chapters cover the Servlet-based <<mvc,Spring MVC>> web framework including <<mvc-view,Views>>,
<<mvc-cors,CORS>>, and <<websocket,WebSocket>> support.
Note that as of Spring Framework 5.0 web applications can also run on a
<<reactive-web.adoc#spring-reactive-web, reactive web stack>> (Reactive Streams API + non-blocking runtime).
include::web/webmvc.adoc[leveloffset=+1]
include::web/webmvc-view.adoc[leveloffset=+1]
include::web/webmvc-cors.adoc[leveloffset=+1]
include::web/websocket.adoc[leveloffset=+1]
|
Remove a comma in doc to make example a valid json. | [[mapping-dynamic-mapping]]
== Dynamic Mapping
Default mappings allow to automatically apply generic mapping definition
to types that do not have mapping pre defined. This is mainly done
thanks to the fact that the
<<mapping-object-type,object mapping>> and
namely the <<mapping-root-object-type,root
object mapping>> allow for schema-less dynamic addition of unmapped
fields.
The default mapping definition is plain mapping definition that is
embedded within the distribution:
[source,js]
--------------------------------------------------
{
"_default_" : {
}
}
--------------------------------------------------
Pretty short, no? Basically, everything is defaulted, especially the
dynamic nature of the root object mapping. The default mapping
definition can be overridden in several manners. The simplest manner is
to simply define a file called `default-mapping.json` and placed it
under the `config` directory (which can be configured to exist in a
different location). It can also be explicitly set using the
`index.mapper.default_mapping_location` setting.
The dynamic creation of mappings for unmapped types can be completely
disabled by setting `index.mapper.dynamic` to `false`.
As an example, here is how we can change the default
<<mapping-date-format,date_formats>> used in the
root and inner object types:
[source,js]
--------------------------------------------------
{
"_default_" : {
"date_formats" : ["yyyy-MM-dd", "dd-MM-yyyy", "date_optional_time"],
}
}
--------------------------------------------------
| [[mapping-dynamic-mapping]]
== Dynamic Mapping
Default mappings allow to automatically apply generic mapping definition
to types that do not have mapping pre defined. This is mainly done
thanks to the fact that the
<<mapping-object-type,object mapping>> and
namely the <<mapping-root-object-type,root
object mapping>> allow for schema-less dynamic addition of unmapped
fields.
The default mapping definition is plain mapping definition that is
embedded within the distribution:
[source,js]
--------------------------------------------------
{
"_default_" : {
}
}
--------------------------------------------------
Pretty short, no? Basically, everything is defaulted, especially the
dynamic nature of the root object mapping. The default mapping
definition can be overridden in several manners. The simplest manner is
to simply define a file called `default-mapping.json` and placed it
under the `config` directory (which can be configured to exist in a
different location). It can also be explicitly set using the
`index.mapper.default_mapping_location` setting.
The dynamic creation of mappings for unmapped types can be completely
disabled by setting `index.mapper.dynamic` to `false`.
As an example, here is how we can change the default
<<mapping-date-format,date_formats>> used in the
root and inner object types:
[source,js]
--------------------------------------------------
{
"_default_" : {
"date_formats" : ["yyyy-MM-dd", "dd-MM-yyyy", "date_optional_time"]
}
}
--------------------------------------------------
|
Remove abbrevtitles for Asciidoctor migration | [[release-highlights]]
= {es} Release Highlights
++++
<titleabbrev>Release Highlights</titleabbrev>
++++
[partintro]
--
This section summarizes the most important changes in each release. For the
full list, see <<es-release-notes>> and <<breaking-changes>>.
* <<release-highlights-6.7.0>>
* <<release-highlights-6.6.0>>
* <<release-highlights-6.5.0>>
* <<release-highlights-6.4.0>>
* <<release-highlights-6.3.0>>
--
include::highlights-6.7.0.asciidoc[]
include::highlights-6.6.0.asciidoc[]
include::highlights-6.5.0.asciidoc[]
include::highlights-6.4.0.asciidoc[]
include::highlights-6.3.0.asciidoc[]
| [[release-highlights]]
= Release Highlights
[partintro]
--
This section summarizes the most important changes in each release. For the
full list, see <<es-release-notes>> and <<breaking-changes>>.
* <<release-highlights-6.7.0>>
* <<release-highlights-6.6.0>>
* <<release-highlights-6.5.0>>
* <<release-highlights-6.4.0>>
* <<release-highlights-6.3.0>>
--
include::highlights-6.7.0.asciidoc[]
include::highlights-6.6.0.asciidoc[]
include::highlights-6.5.0.asciidoc[]
include::highlights-6.4.0.asciidoc[]
include::highlights-6.3.0.asciidoc[]
|
Add note about link-local IP addresses | [id="understanding-networking"]
= Understanding networking
include::modules/common-attributes.adoc[]
:context: understanding-networking
toc::[]
Kubernetes ensures that Pods are able to network with each other, and allocates
each Pod an IP address from an internal network. This ensures all containers
within the Pod behave as if they were on the same host. Giving each Pod its own
IP address means that Pods can be treated like physical hosts or virtual
machines in terms of port allocation, networking, naming, service discovery,
load balancing, application configuration, and migration.
include::modules/nw-ne-openshift-dns.adoc[leveloffset=+1]
| [id="understanding-networking"]
= Understanding networking
include::modules/common-attributes.adoc[]
:context: understanding-networking
toc::[]
Kubernetes ensures that Pods are able to network with each other, and allocates
each Pod an IP address from an internal network. This ensures all containers
within the Pod behave as if they were on the same host. Giving each Pod its own
IP address means that Pods can be treated like physical hosts or virtual
machines in terms of port allocation, networking, naming, service discovery,
load balancing, application configuration, and migration.
[NOTE]
====
Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 `169.254.0.0/16` CIDR block.
This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the `spec.hostNetwork` field in the Pod spec to `true`.
If you allow a Pod host network access, you grant the Pod privileged access to the underlying network infrastructure.
====
include::modules/nw-ne-openshift-dns.adoc[leveloffset=+1]
|
Update What's New for 6.0 | [[new]]
= What's New in Spring Security 6.0
Spring Security 6.0 provides a number of new features.
Below are the highlights of the release.
== Breaking Changes
* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.
Use the OpenSaml 4 Support instead.
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.
Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`
| [[new]]
= What's New in Spring Security 6.0
Spring Security 6.0 provides a number of new features.
Below are the highlights of the release.
== Breaking Changes
* https://github.com/spring-projects/spring-security/issues/10556[gh-10556] - Remove EOL OpenSaml 3 Support.
Use the OpenSaml 4 Support instead.
* https://github.com/spring-projects/spring-security/issues/8980[gh-8980] - Remove unsafe/deprecated `Encryptors.querableText(CharSequence,CharSequence)`.
Instead use data storage to encrypt values.
* https://github.com/spring-projects/spring-security/issues/11520[gh-11520] - Remember Me uses SHA256 by default
* https://github.com/spring-projects/spring-security/issues/8819 - Move filters to web package
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/7349 - Move filter and token to appropriate packages
Reorganize imports
* https://github.com/spring-projects/spring-security/issues/11026[gh-11026] - Use `RequestAttributeSecurityContextRepository` instead of `NullSecurityContextRepository`
* https://github.com/spring-projects/spring-security/pull/11887[gh-11827] - Change default authority for `oauth2Login()`
|
Add note to etcd encryption section | // Module included in the following assemblies:
//
// * security/encrypting-etcd.adoc
// * post_installation_configuration/cluster-tasks.adoc
[id="about-etcd_{context}"]
= About etcd encryption
By default, etcd data is not encrypted in {product-title}. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties.
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:
* Secrets
* Config maps
* Routes
* OAuth access tokens
* OAuth authorize tokens
When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.
| // Module included in the following assemblies:
//
// * security/encrypting-etcd.adoc
// * post_installation_configuration/cluster-tasks.adoc
[id="about-etcd_{context}"]
= About etcd encryption
By default, etcd data is not encrypted in {product-title}. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties.
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:
* Secrets
* Config maps
* Routes
* OAuth access tokens
* OAuth authorize tokens
When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.
[NOTE]
====
Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
====
|
Fix link to milestone page | [[release-notes-5.7.0]]
== 5.7.0
*Date of Release:* β
*Scope:* β
For a complete list of all _closed_ issues and pull requests for this release, consult
the link:{junit5-repo}+/milestone/?closed=1+[5.7.0] milestone page in the JUnit repository
on GitHub.
[[release-notes-5.7.0-junit-platform]]
=== JUnit Platform
==== Bug Fixes
* β
==== Deprecations and Breaking Changes
* β
==== New Features and Improvements
* β
[[release-notes-5.7.0-junit-jupiter]]
=== JUnit Jupiter
==== Bug Fixes
* β
==== Deprecations and Breaking Changes
* β
==== New Features and Improvements
* β
[[release-notes-5.7.0-junit-vintage]]
=== JUnit Vintage
==== Bug Fixes
* β
==== Deprecations and Breaking Changes
* β
==== New Features and Improvements
* β
| [[release-notes-5.7.0]]
== 5.7.0
*Date of Release:* β
*Scope:* β
For a complete list of all _closed_ issues and pull requests for this release, consult
the link:{junit5-repo}+/milestone/50?closed=1+[5.7.0] milestone page in the JUnit repository
on GitHub.
[[release-notes-5.7.0-junit-platform]]
=== JUnit Platform
==== Bug Fixes
* β
==== Deprecations and Breaking Changes
* β
==== New Features and Improvements
* β
[[release-notes-5.7.0-junit-jupiter]]
=== JUnit Jupiter
==== Bug Fixes
* β
==== Deprecations and Breaking Changes
* β
==== New Features and Improvements
* β
[[release-notes-5.7.0-junit-vintage]]
=== JUnit Vintage
==== Bug Fixes
* β
==== Deprecations and Breaking Changes
* β
==== New Features and Improvements
* β
|
Fix BibText reference (the author tag didn't didn't compile well in latex) | = Research
:awestruct-description: Academic research for papers and articles.
:awestruct-layout: normalBase
:showtitle:
OptaPlanner is a good base for metaheuristics research. Read about some of the advantages in
http://www.orcomplete.com/research/geoffrey-de-smet/open-source-metaheuristics-research-on-drools-planner[this article].
Especially https://www.youtube.com/watch?v=JpcPEieU3Cg[OptaPlanner Benchmarker] and our big set of already implemented use cases,
make it easy to test your new algorithm objectively against the existing algorithms.
If you're doing academic or professional research on top of OptaPlanner, link:../community/team.html[let us know].
To reference OptaPlanner or the user manual, please use this BibTeX reference:
----
@manual{optaplanner,
author = {Geoffrey De Smet et al},
title = {OptaPlanner User Guide},
organization = {Red Hat and the community},
url = {https://www.optaplanner.org},
note = {OptaPlanner is an open source constraint satisfaction solver in Java}
}
----
| = Research
:awestruct-description: Academic research for papers and articles.
:awestruct-layout: normalBase
:showtitle:
OptaPlanner is a good base for metaheuristics research. Read about some of the advantages in
http://www.orcomplete.com/research/geoffrey-de-smet/open-source-metaheuristics-research-on-drools-planner[this article].
Especially https://www.youtube.com/watch?v=JpcPEieU3Cg[OptaPlanner Benchmarker] and our big set of already implemented use cases,
make it easy to test your new algorithm objectively against the existing algorithms.
If you're doing academic or professional research on top of OptaPlanner, link:../community/team.html[let us know].
To reference OptaPlanner or the user manual, please use this BibTeX reference:
----
@manual{optaplanner,
author = {De Smet, Geoffrey and open source contributors},
title = {OptaPlanner User Guide},
year = {2006},
organization = {Red Hat, Inc. or third-party contributors},
url = {https://www.optaplanner.org},
note = {OptaPlanner is an open source constraint solver in Java}
}
----
|
Add description to reload changes in poll-outages.xml |
// Allow GitHub image rendering
:imagesdir: ../../../images
[[ga-opennms-operation-daemon-config-files-pollerd]]
==== Pollerd
[options="header, autowidth"]
|===
| Internal Daemon Name | Reload Event
| _Pollerd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Pollerd'`
|===
.Pollerd configuration file overview
[options="header, autowidth"]
|===
| File | Restart Required | Reload Event | Description
| `poller-configuration.xml` | yes | yes | Restart is required in case new monitors are created or removed.
Reload Event loads changed configuration parameters of existing monitors.
| `response-graph.properties` | no | no | Graph definition for response time graphs from monitors
| `poll-outages.xml` | ? | ? | ?
|===
|
// Allow GitHub image rendering
:imagesdir: ../../../images
[[ga-opennms-operation-daemon-config-files-pollerd]]
==== Pollerd
[options="header, autowidth"]
|===
| Internal Daemon Name | Reload Event
| _Pollerd_ | `uei.opennms.org/internal/reloadDaemonConfig -p 'daemonName Pollerd'`
|===
.Pollerd configuration file overview
[options="header, autowidth"]
|===
| File | Restart Required | Reload Event | Description
| `poller-configuration.xml` | yes | yes | Restart is required in case new monitors are created or removed.
Reload Event loads changed configuration parameters of existing monitors.
| `response-graph.properties` | no | no | Graph definition for response time graphs from monitors
| `poll-outages.xml` | no | yes | Can be reloaded with `uei.opennms.org/internal/schedOutagesChanged`
|===
|
Introduce README and externalize common information via variables | = {project-name} - Using JWT RBAC
This guide explains how your {project-name} application can utilize MicroProfile JWT RBAC to provide
secured access to the JAX-RS endpoints.
== TODO
| = {project-name} - Using JWT RBAC
This guide explains how your {project-name} application can utilize MicroProfile JWT RBAC to provide
secured access to the JAX-RS endpoints.
[cols="<m,<m,<2",options="header"]
|===
|Property Name|Default|Description
|quarkus.jwt.enabled|true|Determine if the jwt extension is enabled.
|quarkus.jwt.realm-name|Quarkus-JWT|Name to use for security realm.
|quarkus.jwt.auth-mechanism|MP-JWT|Name to use for authentication mechanism
|===
== Solution
We recommend that you follow the instructions in the next sections and create the application step by step.
However, you can skip right to the completed example.
Clone the Git repository: `git clone `, or download an {quickstart-url}/archive/master.zip[archive].
The solution is located in the `using-opentracing` {quickstart-url}[directory].
|
Remove references to changelog and to highlights |
include::how-to.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
include::release-notes/highlights.asciidoc[]
include::release-notes.asciidoc[] |
include::how-to.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
|
Align title with docs.asciidoctor.org site requirements | = Asciidoctor Maven User Manual
The Asciidoctor Maven Plugin is the official way to convert your {uri-asciidoc}[AsciiDoc] documentation using {uri-asciidoctor}[Asciidoctor] from an {uri-maven}[Apache Maven] build.
The project main goal is to offer a thin layer on top of https://github.com/asciidoctor/asciidoctorj[Asciidoctorj] following as much as possible to common Maven practices.
The conversion can happen in 2 flavors:
. As a xref:plugin:introduction.adoc[Maven plugin]: AsciiDoc files are converted at full Asciidoctor power independently from Maven site,
. As a xref:plugin:introduction.adoc[Maven site module]: AsciiDoc files are integrated with https://maven.apache.org/doxia/[Maven Doxia] tools, however, with a few limitations.
| = Asciidoctor Maven Tools Documentation
:navtitle: Introduction
The Asciidoctor Maven Plugin is the official way to convert your {uri-asciidoc}[AsciiDoc] documentation using {uri-asciidoctor}[Asciidoctor] from an {uri-maven}[Apache Maven] build.
The project main goal is to offer a thin layer on top of https://github.com/asciidoctor/asciidoctorj[Asciidoctorj] following as much as possible to common Maven practices.
The conversion can happen in 2 flavors:
. As a xref:plugin:introduction.adoc[Maven plugin]: AsciiDoc files are converted at full Asciidoctor power independently from Maven site,
. As a xref:plugin:introduction.adoc[Maven site module]: AsciiDoc files are integrated with https://maven.apache.org/doxia/[Maven Doxia] tools, however, with a few limitations.
|
Use the guides without toc layout | ////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/master/docs/src/main/asciidoc
////
= Quarkus - All configuration options
include::./attributes.adoc[]
include::{generated-dir}/config/all-config.adoc[opts=optional]
| ---
layout: guides-configuration-reference
---
////
This guide is maintained in the main Quarkus repository
and pull requests should be submitted there:
https://github.com/quarkusio/quarkus/tree/master/docs/src/main/asciidoc
////
= Quarkus - All configuration options
include::./attributes.adoc[]
include::{generated-dir}/config/all-config.adoc[opts=optional]
|
Update documentation version and date | = tick
Malcolm Sparks <mal@juxt.pro>; Henry Widd; Johanna Antonelli <joa@juxt.pro>
0.4-1, 2018-02-09
:toc: left
:toclevels: 4
:docinfo: shared
:sectnums: true
:sectnumlevels: 2
:xrefstyle: short
:nofooter:
:leveloffset: +1
include::intro.adoc[]
include::setup.adoc[]
include::api.adoc[]
include::dates.adoc[]
include::durations.adoc[]
include::clocks.adoc[]
include::intervals.adoc[]
include::calendars.adoc[]
include::schedules.adoc[]
include::formatting.adoc[]
include::cookbook/index.adoc[]
include::bibliography.adoc[]
| = tick
Malcolm Sparks <mal@juxt.pro>; Henry Widd; Johanna Antonelli <joa@juxt.pro>
0.4.5-alpha, 2018-10-10
:toc: left
:toclevels: 4
:docinfo: shared
:sectnums: true
:sectnumlevels: 2
:xrefstyle: short
:nofooter:
:leveloffset: +1
include::intro.adoc[]
include::setup.adoc[]
include::api.adoc[]
include::dates.adoc[]
include::durations.adoc[]
include::clocks.adoc[]
include::intervals.adoc[]
include::calendars.adoc[]
include::schedules.adoc[]
include::formatting.adoc[]
include::cookbook/index.adoc[]
include::bibliography.adoc[]
|
Add docs for how to use the capsule command | = Creating an Uberjar
Edge provides a script for running pack to build an uberjar.
To use it, you can simply run the below from your project sub-directory (the same folder as your deps.edn).
[source,shell]
----
$ ../bin/onejar -A:prod --args '-m edge.main' project.jar
----
The `-A:prod` indicates an alias you would like to have its `:extra-deps` and `:paths` included in your resulting jar.
`--args` are default arguments to your jar, in this case we are specifying that the application should run edge.main, part of the edge production modules.
You can run this jar in production quite easily:
[source,shell]
----
$ java -Xmx1G -jar project.jar
----
We recommend that you specify the memory usage of your JVM, as the default on Java 6+ is usually insufficient for hosts run only this JVM process.
A rule of thumb is to use 2/3rds of the memory of your host.
| = Creating an Uberjar
== OneJar
Edge provides a script for running pack to build an uberjar.
To use it, you can simply run the below from your project sub-directory (the same folder as your deps.edn).
[source,shell]
----
$ ../bin/onejar -A:prod --args '-m edge.main' project.jar
----
The `-A:prod` indicates an alias you would like to have its `:extra-deps` and `:paths` included in your resulting jar.
`--args` are default arguments to your jar, in this case we are specifying that the application should run edge.main, part of the edge production modules.
== Capsule
Build a capsule uberjar using this command:
[source,shell]
----
$ ../bin/capsule -m edge.main -e 'target/prod' -A:prod project.jar
----
If you are using clojurescript in your project make sure you run this command first:
[source,shell]
----
$ clojure -A:build:build/once
----
It will build the clojurescript files and put them in the correct folder to be included in the uberjar.
== Running the jar
You can run the produced jar in production quite easily:
[source,shell]
----
$ java -Xmx1G -jar project.jar
----
We recommend that you specify the memory usage of your JVM, as the default on Java 6+ is usually insufficient for hosts run only this JVM process.
A rule of thumb is to use 2/3rds of the memory of your host.
|
Remove legacy LaTeX math delimiters in latexmath:[] | // .basic
[why]#chunky bacon#
// .emphasis
_chunky bacon_
// .emphasis-with-role
[why]_chunky bacon_
// .strong
*chunky bacon*
// .strong-with-role
[why]*chunky bacon*
// .monospaced
`hello world!`
// .monospaced-with-role
[why]`hello world!`
// .superscript
^super^chunky bacon
// .superscript-with-role
[why]^super^chunky bacon
// .subscript
~sub~chunky bacon
// .subscript-with-role
[why]~sub~chunky bacon
// .mark
#chunky bacon#
// .double
"`chunky bacon`"
// .double-with-role
[why]"`chunky bacon`"
// .single
'`chunky bacon`'
// .single-with-role
[why]'`chunky bacon`'
// .asciimath
asciimath:[sqrt(4) = 2]
// .latexmath
latexmath:[$C = \alpha + \beta Y^{\gamma} + \epsilon$]
// .with-id
[#why]_chunky bacon_
// .mixed-monospace-bold-italic
`*_monospace bold italic phrase_*` and le``**__tt__**``ers
| // .basic
[why]#chunky bacon#
// .emphasis
_chunky bacon_
// .emphasis-with-role
[why]_chunky bacon_
// .strong
*chunky bacon*
// .strong-with-role
[why]*chunky bacon*
// .monospaced
`hello world!`
// .monospaced-with-role
[why]`hello world!`
// .superscript
^super^chunky bacon
// .superscript-with-role
[why]^super^chunky bacon
// .subscript
~sub~chunky bacon
// .subscript-with-role
[why]~sub~chunky bacon
// .mark
#chunky bacon#
// .double
"`chunky bacon`"
// .double-with-role
[why]"`chunky bacon`"
// .single
'`chunky bacon`'
// .single-with-role
[why]'`chunky bacon`'
// .asciimath
asciimath:[sqrt(4) = 2]
// .latexmath
latexmath:[C = \alpha + \beta Y^{\gamma} + \epsilon]
// .with-id
[#why]_chunky bacon_
// .mixed-monospace-bold-italic
`*_monospace bold italic phrase_*` and le``**__tt__**``ers
|
Fix storage class CSI note in Creating SC-EBS/GCE | // Be sure to set the :StorageClass: and :Provisioner: value in each assembly
// on the line before the include statement for this module. For example, to
// set the StorageClass value to "AWS EBS", add the following line to the
// assembly:
// :StorageClass: AWS EBS
// Module included in the following assemblies:
//
// * storage/persistent_storage-aws.adoc
[id="storage-create-{StorageClass}-storage-class_{context}"]
= Creating the {StorageClass} storage class
Storage classes are used to differentiate and delineate storage levels and
usages. By defining a storage class, users can obtain dynamically provisioned
persistent volumes.
.Procedure
. In the {product-title} console, click *Storage* -> *Storage Classes*.
. In the storage class overview, click *Create Storage Class*.
. Define the desired options on the page that appears.
.. Enter a name to reference the storage class.
.. Enter an optional description.
.. Select the reclaim policy.
.. Select `{Provisioner}` from the drop down list.
+
[NOTE]
====
For Container Storage Interface (CSI) provisioning, select `ebs.csi.aws.com`.
====
.. Enter additional parameters for the storage class as desired.
. Click *Create* to create the storage class.
// Undefine {StorageClass} attribute, so that any mistakes are easily spotted
:!StorageClass:
| // Be sure to set the :StorageClass: and :Provisioner: value in each assembly
// on the line before the include statement for this module. For example, to
// set the StorageClass value to "AWS EBS", add the following line to the
// assembly:
// :StorageClass: AWS EBS
// Module included in the following assemblies:
//
// * storage/persistent_storage-aws.adoc
[id="storage-create-{StorageClass}-storage-class_{context}"]
= Creating the {StorageClass} storage class
Storage classes are used to differentiate and delineate storage levels and
usages. By defining a storage class, users can obtain dynamically provisioned
persistent volumes.
.Procedure
. In the {product-title} console, click *Storage* -> *Storage Classes*.
. In the storage class overview, click *Create Storage Class*.
. Define the desired options on the page that appears.
.. Enter a name to reference the storage class.
.. Enter an optional description.
.. Select the reclaim policy.
.. Select `{Provisioner}` from the drop down list.
.. Enter additional parameters for the storage class as desired.
. Click *Create* to create the storage class.
// Undefine {StorageClass} attribute, so that any mistakes are easily spotted
:!StorageClass:
|
Add Dave Syer to Authors | = Spring Security Reference
Ben Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings
:include-dir: _includes
:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/
:source-indent: 0
:tabsize: 4
:toc: left
// FIXME: Add links for authentication, authorization, common attacks
Spring Security is a framework that provides authentication, authorization, and protection against common attacks.
// FIXME: Add links for imperative and reactive applications
With first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.
include::{include-dir}/about/index.adoc[]
include::{include-dir}/servlet/index.adoc[]
include::{include-dir}/reactive/index.adoc[]
| = Spring Security Reference
Ben Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings; Dave Syer
:include-dir: _includes
:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/
:source-indent: 0
:tabsize: 4
:toc: left
// FIXME: Add links for authentication, authorization, common attacks
Spring Security is a framework that provides authentication, authorization, and protection against common attacks.
// FIXME: Add links for imperative and reactive applications
With first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.
include::{include-dir}/about/index.adoc[]
include::{include-dir}/servlet/index.adoc[]
include::{include-dir}/reactive/index.adoc[]
|
Include REST API docs that were already produced. | [[rest-api-node-properties]]
== Node properties ==
include::set-property-on-node.asciidoc[]
include::update-node-properties.asciidoc[]
include::get-properties-for-node.asciidoc[]
include::property-values-can-not-be-null.asciidoc[]
include::property-values-can-not-be-nested.asciidoc[]
include::delete-all-properties-from-node.asciidoc[]
include::delete-a-named-property-from-a-node.asciidoc[]
| [[rest-api-node-properties]]
== Node properties ==
include::set-property-on-node.asciidoc[]
include::update-node-properties.asciidoc[]
include::get-properties-for-node.asciidoc[]
include::get-property-for-node.asciidoc[]
include::property-values-can-not-be-null.asciidoc[]
include::property-values-can-not-be-nested.asciidoc[]
include::delete-all-properties-from-node.asciidoc[]
include::delete-a-named-property-from-a-node.asciidoc[]
|
Include shared/attributes.asciidoc directly from docs master | [[elasticsearch-reference]]
= Elasticsearch Reference
:include-xpack: true
:xes-repo-dir: {docdir}
:es-repo-dir: {docdir}/../../../../elasticsearch/docs
:es-test-dir: {docdir}/../../../../elasticsearch/docs/src/test
:plugins-examples-dir: {docdir}/../../../../elasticsearch/plugins/examples
:docs-dir: {docdir}/../../../../docs
include::{es-repo-dir}/Versions.asciidoc[]
include::{es-repo-dir}/reference/index-shared1.asciidoc[]
:edit_url!:
include::setup-xes.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared2.asciidoc[]
:edit_url!:
include::rest-api/index.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared3.asciidoc[]
| [[elasticsearch-reference]]
= Elasticsearch Reference
:include-xpack: true
:xes-repo-dir: {docdir}
:es-repo-dir: {docdir}/../../../../elasticsearch/docs
:es-test-dir: {docdir}/../../../../elasticsearch/docs/src/test
:plugins-examples-dir: {docdir}/../../../../elasticsearch/plugins/examples
include::{es-repo-dir}/Versions.asciidoc[]
include::{es-repo-dir}/reference/index-shared1.asciidoc[]
:edit_url!:
include::setup-xes.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared2.asciidoc[]
:edit_url!:
include::rest-api/index.asciidoc[]
:edit_url:
include::{es-repo-dir}/reference/index-shared3.asciidoc[]
|
Fix Java API documentation for indexed scripts | [[indexed-scripts]]
== Indexed Scripts API
The indexed script API allows one to interact with scripts and templates
stored in an elasticsearch index. It can be used to create, update, get,
and delete indexed scripts and templates.
[source,java]
--------------------------------------------------
PutIndexedScriptResponse = client.preparePutIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.setSource("_score * doc['my_numeric_field'].value")
.execute()
.actionGet();
GetIndexedScriptResponse = client.prepareGetIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
DeleteIndexedScriptResponse = client.prepareDeleteIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
--------------------------------------------------
To store templates simply use "mustache" for the scriptLang.
=== Script Language
The API allows one to set the language of the indexed script being
interacted with. If one is not provided the default scripting language
will be used. | [[indexed-scripts]]
== Indexed Scripts API
The indexed script API allows one to interact with scripts and templates
stored in an elasticsearch index. It can be used to create, update, get,
and delete indexed scripts and templates.
[source,java]
--------------------------------------------------
PutIndexedScriptResponse = client.preparePutIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.setSource("script", "_score * doc['my_numeric_field'].value")
.execute()
.actionGet();
GetIndexedScriptResponse = client.prepareGetIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
DeleteIndexedScriptResponse = client.prepareDeleteIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
--------------------------------------------------
To store templates simply use "mustache" for the scriptLang.
=== Script Language
The API allows one to set the language of the indexed script being
interacted with. If one is not provided the default scripting language
will be used. |
Add Link to DispatcherServlet in Filter Review Doc | [[servlet-filters-review]]
= A Review of ``Filter``s
Spring Security's Servlet support is based on Servlet ``Filter``s, so it is helpful to look at the role of ``Filter``s generally first.
The picture below shows the typical layering of the handlers for a single HTTP request.
.FilterChain
[[servlet-filterchain-figure]]
image::{figures}/filterchain.png[]
The client sends a request to the application, and the container creates a `FilterChain` which contains the ``Filter``s and `Servlet` that should process the `HttpServletRequest` based on the path of the request URI.
At most one `Servlet` can handle a single `HttpServletRequest` and `HttpServletResponse`.
However, more than one `Filter` can be used to:
* Prevent downstream ``Filter``s or the `Servlet` from being invoked.
In this instance the `Filter` will typically write the `HttpServletResponse`.
* Modify the `HttpServletRequest` or `HttpServletResponse` used by the downstream ``Filter``s and `Servlet`
The power of the `Filter` comes from the `FilterChain` that is passed into it.
.`FilterChain` Usage Example
===
[source,java]
----
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) {
// do something before the rest of the application
chain.doFilter(request, response); // invoke the rest of the application
// do something after the rest of the application
}
----
===
Since a `Filter` only impacts downstream ``Filter``s and the `Servlet`, the order each `Filter` is invoked is extremely important.
| [[servlet-filters-review]]
= A Review of ``Filter``s
Spring Security's Servlet support is based on Servlet ``Filter``s, so it is helpful to look at the role of ``Filter``s generally first.
The picture below shows the typical layering of the handlers for a single HTTP request.
.FilterChain
[[servlet-filterchain-figure]]
image::{figures}/filterchain.png[]
The client sends a request to the application, and the container creates a `FilterChain` which contains the ``Filter``s and `Servlet` that should process the `HttpServletRequest` based on the path of the request URI.
In a Spring MVC application the `Servlet` is an instance of https://docs.spring.io/spring-security/site/docs/current-SNAPSHOT/reference/html5/#servlet-filters-review[`DispatcherServlet`].
At most one `Servlet` can handle a single `HttpServletRequest` and `HttpServletResponse`.
However, more than one `Filter` can be used to:
* Prevent downstream ``Filter``s or the `Servlet` from being invoked.
In this instance the `Filter` will typically write the `HttpServletResponse`.
* Modify the `HttpServletRequest` or `HttpServletResponse` used by the downstream ``Filter``s and `Servlet`
The power of the `Filter` comes from the `FilterChain` that is passed into it.
.`FilterChain` Usage Example
===
[source,java]
----
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) {
// do something before the rest of the application
chain.doFilter(request, response); // invoke the rest of the application
// do something after the rest of the application
}
----
===
Since a `Filter` only impacts downstream ``Filter``s and the `Servlet`, the order each `Filter` is invoked is extremely important.
|
Add Eleftheria Stein to Reference Authors | = Spring Security Reference
Ben Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings; Dave Syer
:include-dir: _includes
:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/
:source-indent: 0
:tabsize: 4
:toc: left
// FIXME: Add links for authentication, authorization, common attacks
Spring Security is a framework that provides authentication, authorization, and protection against common attacks.
// FIXME: Add links for imperative and reactive applications
With first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.
include::{include-dir}/about/index.adoc[]
include::{include-dir}/servlet/index.adoc[]
include::{include-dir}/reactive/index.adoc[]
| = Spring Security Reference
Ben Alex; Luke Taylor; Rob Winch; Gunnar Hillert; Joe Grandja; Jay Bryant; EddΓΊ MelΓ©ndez; Josh Cummings; Dave Syer; Eleftheria Stein
:include-dir: _includes
:security-api-url: https://docs.spring.io/spring-security/site/docs/current/api/
:source-indent: 0
:tabsize: 4
:toc: left
// FIXME: Add links for authentication, authorization, common attacks
Spring Security is a framework that provides authentication, authorization, and protection against common attacks.
// FIXME: Add links for imperative and reactive applications
With first class support for both imperative and reactive applications, it is the de-facto standard for securing Spring-based applications.
include::{include-dir}/about/index.adoc[]
include::{include-dir}/servlet/index.adoc[]
include::{include-dir}/reactive/index.adoc[]
|
Fix index of a list item |
During its life a Wicket component goes through the following stages:
1. *Initialization:* a component is instantiated and initialized by Wicket.
2. *Rendering:* components are prepared for rendering and generate markup. If a component contains children (i.e. is a subclass of _MarkupContainer_) their rendering result is included in the resulting markup.
3. *Removed:* this stage is triggered when a component is explicitly removed from its component hierarchy, i.e. when its parent invokes _remove(component)_ on it. This stage is facultative and is never triggered for pages.
3. *Detached:* after request processing has ended all components are notified to detach any state that is no longer needed.
The following picture shows the state diagram of component lifecycle:
image::../img/component-lifecycle.png[]
Once a component has been removed it could be added again to a container, but the initialization stage won't be executed again - it is easier to just create a new component instance instead.
NOTE: If you read the JavaDoc of class _Component_ you will find a more detailed description of component lifecycle.
However this description introduces some advanced topics we didn't covered yet hence, to avoid confusion, in this chapter some details have been omitted and they will be covered later in the next chapters.
For now you can consider just the simplified version of the lifecycle described above.
|
During its life a Wicket component goes through the following stages:
1. *Initialization:* a component is instantiated and initialized by Wicket.
2. *Rendering:* components are prepared for rendering and generate markup. If a component contains children (i.e. is a subclass of _MarkupContainer_) their rendering result is included in the resulting markup.
3. *Removed:* this stage is triggered when a component is explicitly removed from its component hierarchy, i.e. when its parent invokes _remove(component)_ on it. This stage is facultative and is never triggered for pages.
4. *Detached:* after request processing has ended all components are notified to detach any state that is no longer needed.
The following picture shows the state diagram of component lifecycle:
image::../img/component-lifecycle.png[]
Once a component has been removed it could be added again to a container, but the initialization stage won't be executed again - it is easier to just create a new component instance instead.
NOTE: If you read the JavaDoc of class _Component_ you will find a more detailed description of component lifecycle.
However this description introduces some advanced topics we didn't covered yet hence, to avoid confusion, in this chapter some details have been omitted and they will be covered later in the next chapters.
For now you can consider just the simplified version of the lifecycle described above.
|
Fix Java API documentation for indexed scripts | [[indexed-scripts]]
== Indexed Scripts API
The indexed script API allows one to interact with scripts and templates
stored in an elasticsearch index. It can be used to create, update, get,
and delete indexed scripts and templates.
[source,java]
--------------------------------------------------
PutIndexedScriptResponse = client.preparePutIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.setSource("_score * doc['my_numeric_field'].value")
.execute()
.actionGet();
GetIndexedScriptResponse = client.prepareGetIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
DeleteIndexedScriptResponse = client.prepareDeleteIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
--------------------------------------------------
To store templates simply use "mustache" for the scriptLang.
=== Script Language
The API allows one to set the language of the indexed script being
interacted with. If one is not provided the default scripting language
will be used. | [[indexed-scripts]]
== Indexed Scripts API
The indexed script API allows one to interact with scripts and templates
stored in an elasticsearch index. It can be used to create, update, get,
and delete indexed scripts and templates.
[source,java]
--------------------------------------------------
PutIndexedScriptResponse = client.preparePutIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.setSource("script", "_score * doc['my_numeric_field'].value")
.execute()
.actionGet();
GetIndexedScriptResponse = client.prepareGetIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
DeleteIndexedScriptResponse = client.prepareDeleteIndexedScript()
.setScriptLang("groovy")
.setId("script1")
.execute()
.actionGet();
--------------------------------------------------
To store templates simply use "mustache" for the scriptLang.
=== Script Language
The API allows one to set the language of the indexed script being
interacted with. If one is not provided the default scripting language
will be used. |
Fix apt-get install command for Cassandra |
// Allow GitHub image rendering
:imagesdir: ../../images
[[gi-install-cassandra-debian]]
==== Installing on Debian-based systems
This section describes how to install the latest _Cassandra 2.1.x_ release on a _Debian_ based systems for _Newts_.
The first steps add the _DataStax_ community repository and install the required _GPG Key_ to verify the integrity of the _DEB packages_.
Installation of the package is done with _apt_ and the _Cassandra_ service is added to the run level configuration.
NOTE: This description is build on _Debian 8_ and _Ubuntu 14.04 LTS_.
.Add DataStax repository
[source, bash]
----
vi /etc/apt/sources.list.d/cassandra.sources.list
----
.Content of the cassandra.sources.list file
[source, bash]
----
deb http://debian.datastax.com/community stable main
----
.Install GPG key to verify DEB packages
[source, bash]
----
wget -O - http://debian.datastax.com/debian/repo_key | apt-key add -
----
.Install latest Cassandra 2.1.x package
[source, bash]
----
dsc21=2.1.8-1 cassandra=2.1.8
----
The _Cassandra_ service is added to the run level configuration and is automatically started after installing the package.
TIP: Verify if the _Cassandra_ service is automatically started after rebooting the server.
|
// Allow GitHub image rendering
:imagesdir: ../../images
[[gi-install-cassandra-debian]]
==== Installing on Debian-based systems
This section describes how to install the latest _Cassandra 2.1.x_ release on a _Debian_ based systems for _Newts_.
The first steps add the _DataStax_ community repository and install the required _GPG Key_ to verify the integrity of the _DEB packages_.
Installation of the package is done with _apt_ and the _Cassandra_ service is added to the run level configuration.
NOTE: This description is build on _Debian 8_ and _Ubuntu 14.04 LTS_.
.Add DataStax repository
[source, bash]
----
vi /etc/apt/sources.list.d/cassandra.sources.list
----
.Content of the cassandra.sources.list file
[source, bash]
----
deb http://debian.datastax.com/community stable main
----
.Install GPG key to verify DEB packages
[source, bash]
----
wget -O - http://debian.datastax.com/debian/repo_key | apt-key add -
----
.Install latest Cassandra 2.1.x package
[source, bash]
----
apt-get update
apt-get install dsc21=2.1.8-1 cassandra=2.1.8
----
The _Cassandra_ service is added to the run level configuration and is automatically started after installing the package.
TIP: Verify if the _Cassandra_ service is automatically started after rebooting the server.
|
Update Java documentation for 5.0 | [[java-aggs-bucket-histogram]]
==== Histogram Aggregation
Here is how you can use
{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram Aggregation]
with Java API.
===== Prepare aggregation request
Here is an example on how to create the aggregation request:
[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
AggregationBuilders
.histogram("agg")
.field("height")
.interval(1);
--------------------------------------------------
===== Use aggregation response
Import Aggregation definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");
// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
Double key = (Double) entry.getKey(); // Key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
}
--------------------------------------------------
| [[java-aggs-bucket-histogram]]
==== Histogram Aggregation
Here is how you can use
{ref}/search-aggregations-bucket-histogram-aggregation.html[Histogram Aggregation]
with Java API.
===== Prepare aggregation request
Here is an example on how to create the aggregation request:
[source,java]
--------------------------------------------------
AggregationBuilder aggregation =
AggregationBuilders
.histogram("agg")
.field("height")
.interval(1);
--------------------------------------------------
===== Use aggregation response
Import Aggregation definition classes:
[source,java]
--------------------------------------------------
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
--------------------------------------------------
[source,java]
--------------------------------------------------
// sr is here your SearchResponse object
Histogram agg = sr.getAggregations().get("agg");
// For each entry
for (Histogram.Bucket entry : agg.getBuckets()) {
Number key = (Number) entry.getKey(); // Key
long docCount = entry.getDocCount(); // Doc count
logger.info("key [{}], doc_count [{}]", key, docCount);
}
--------------------------------------------------
|
Remove XPackExtension break from 7.0 it's ported to 6.3 | [role="xpack"]
[[breaking-changes-xes]]
= {xpack} Breaking Changes
[partintro]
--
This section summarizes the changes that you need to be aware of when migrating
your application from one version of {xpack} to another.
* <<breaking-7.0.0-xes>>
See also:
* <<breaking-changes,{es} Breaking Changes>>
* {kibana-ref}/breaking-changes-xkb.html[{kib} {xpack} Breaking Changes]
* {logstash-ref}/breaking-changes-xls.html[Logstash {xpack} Breaking Changes]
--
[role="xpack"]
[[breaking-7.0.0-xes]]
== {xpack} Breaking changes in 7.0.0
Machine Learning::
* The `max_running_jobs` node property is removed in this release. Use the
`xpack.ml.max_open_jobs` setting instead. For more information, <<ml-settings>>.
Security::
* The fields returned as part of the mappings section by get index, get
mappings, get field mappings and field capabilities API are now only the ones
that the user is authorized to access in case field level security is enabled.
* The legacy `XPackExtension` extension mechanism has been removed and replaced
with an SPI based extension mechanism that is installed and built as an elasticsearch
plugin.
See also:
* <<breaking-changes-7.0,{es} Breaking Changes>>
| [role="xpack"]
[[breaking-changes-xes]]
= {xpack} Breaking Changes
[partintro]
--
This section summarizes the changes that you need to be aware of when migrating
your application from one version of {xpack} to another.
* <<breaking-7.0.0-xes>>
See also:
* <<breaking-changes,{es} Breaking Changes>>
* {kibana-ref}/breaking-changes-xkb.html[{kib} {xpack} Breaking Changes]
* {logstash-ref}/breaking-changes-xls.html[Logstash {xpack} Breaking Changes]
--
[role="xpack"]
[[breaking-7.0.0-xes]]
== {xpack} Breaking changes in 7.0.0
Machine Learning::
* The `max_running_jobs` node property is removed in this release. Use the
`xpack.ml.max_open_jobs` setting instead. For more information, <<ml-settings>>.
Security::
* The fields returned as part of the mappings section by get index, get
mappings, get field mappings and field capabilities API are now only the ones
that the user is authorized to access in case field level security is enabled.
See also:
* <<breaking-changes-7.0,{es} Breaking Changes>>
|
Comment out logo for now | = TCK Reference Guide for Jakarta Bean Validation
Emmanuel Bernard - Red Hat, Inc.; Hardy Ferentschik - Red Hat, Inc.; Gunnar Morling - Red Hat, Inc.
:doctype: book
:revdate: {docdate}
:revnumber: {tckVersion}
:revremark: Copyright {copyrightYear} - Red Hat, Inc. (Specification Lead)
:sectanchors:
:anchor:
:toc: left
:docinfodir: {docinfodir}
:docinfo:
:title-logo-image: image:beanvalidation_logo.png[align=left,pdfwidth=20%]
// PDF uses :title-logo-image: on first page, no need to repeat image later on
ifndef::backend-pdf[]
image::beanvalidation_logo_smaller.png[align="center"]
endif::[]
include::preface.asciidoc[]
:numbered:
include::introduction.asciidoc[]
include::appeals-process.asciidoc[]
include::installation.asciidoc[]
include::reporting.asciidoc[]
include::configuration.asciidoc[]
include::sigtest.asciidoc[] | = TCK Reference Guide for Jakarta Bean Validation
Emmanuel Bernard - Red Hat, Inc.; Hardy Ferentschik - Red Hat, Inc.; Gunnar Morling - Red Hat, Inc.
:doctype: book
:revdate: {docdate}
:revnumber: {tckVersion}
:revremark: Copyright {copyrightYear} - Red Hat, Inc. (Specification Lead)
:sectanchors:
:anchor:
:toc: left
:docinfodir: {docinfodir}
:docinfo:
// Comment out logo for final release for now
//:title-logo-image: image:beanvalidation_logo.png[align=left,pdfwidth=20%]
// PDF uses :title-logo-image: on first page, no need to repeat image later on
ifndef::backend-pdf[]
image::beanvalidation_logo_smaller.png[align="center"]
endif::[]
include::preface.asciidoc[]
:numbered:
include::introduction.asciidoc[]
include::appeals-process.asciidoc[]
include::installation.asciidoc[]
include::reporting.asciidoc[]
include::configuration.asciidoc[]
include::sigtest.asciidoc[] |
Correct and highlight port number, plus how-to-change. | = SpringBoot WebApp Demo
SpringBoot looks like a nice way to get started.
This is a trivial webapp created using SpringBoot.
== HowTo
mvn spring-boot:run
then connect to http://localhost:8080/notaservlet
== War Deployment
It seems that you can't have both the instant-deployment convenience of Spring Boot
AND the security of a full WAR deployment in the same pom file. You will need to
make several changes to deploy as a WAR file. See the section entitled
"Traditional Deployment"--"Create a deployable war file" in the
spring-boot reference manual (Section 73.1 in the current snapshot as of
this writing).
| = SpringBoot WebApp Demo
SpringBoot looks like a nice way to get started.
This is a trivial webapp created using SpringBoot.
== HowTo
mvn spring-boot:run
then connect to http://localhost:8000/notaservlet
Note that is 8000 not the usual 8080 to avoid conflicts.
Chante this in application.properties if you don't like it.
== War Deployment
It seems that you can't have both the instant-deployment convenience of Spring Boot
AND the security of a full WAR deployment in the same pom file. You will need to
make several changes to deploy as a WAR file. See the section entitled
"Traditional Deployment"--"Create a deployable war file" in the
spring-boot reference manual (Section 73.1 in the current snapshot as of
this writing).
|
Add links to supported connection pools | = flexy-pool
Author <mih_vlad@yahoo.com>
v1.0.0, 2014-02-25
:toc:
:imagesdir: images
:homepage: http://vladmihalcea.com/
== Introduction
The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.
This is very handy since most connection pools offer a limited set of dynamic configuration strategies.
== Features
* extensive connection pool support(Bitronix TM, C3PO, DBCP 1, DBCP 2, BoneCP, HikariCP)
* statistics support
** source connection acquiring time histogram
** total connection acquiring time histogram
** retries attempts histogram
** maximum CP size histogram
** connection request count histogram
** connection lease time histogram
https://github.com/vladmihalcea/flexy-pool/wiki/Flexy-Pool-User-Guide[User Guide]
== 1.0 TODO list
* explain all configuration settings
* explain jmx metrics
* add real-life case study
| = flexy-pool
Author <mih_vlad@yahoo.com>
v1.0.0, 2014-02-25
:toc:
:imagesdir: images
:homepage: http://vladmihalcea.com/
== Introduction
The flexy-pool library brings adaptability to a given Connection Pool, allowing it to resize on demand.
This is very handy since most connection pools offer a limited set of dynamic configuration strategies.
== Features
* extensive connection pool support
** http://docs.codehaus.org/display/BTM/Home[Bitronix Transaction Manager]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP]
** http://commons.apache.org/proper/commons-dbcp/[Apache DBCP2]
** http://www.mchange.com/projects/c3p0/[C3P0]
** http://jolbox.com/[BoneCP]
** http://brettwooldridge.github.io/HikariCP/[HikariCP]
* statistics support
** source connection acquiring time histogram
** total connection acquiring time histogram
** retries attempts histogram
** maximum CP size histogram
** connection request count histogram
** connection lease time histogram
== Documentation
https://github.com/vladmihalcea/flexy-pool/wiki/Flexy-Pool-User-Guide[Flexy Pool User Guide]
== 1.0 Release TODO list
* explain all configuration settings
* explain jmx metrics
* add real-life case study
|
Update the doc with the instructions to run | = Microservices libraries comparison
== Purpose
This project is the companion of this Blog article: https://cdelmas.github.io/xxxxxxx.
== Build
To build the servers, just run `gradlew make` in the command line. By the way, you need a JDK 8, but I guess you're up-to-date :)
== Run the perf tests
== Notes
There are no unit tests, and it is fully assumed.
== The missing guys (again)
Feel free to add competing frameworks, such as Restx, Payara and Swarm to the comparison.
| = Microservices libraries comparison
== Purpose
This project is the companion of this Blog article: https://cdelmas.github.io/xxxxxxx.
== Build
To build the servers, just run `gradlew make` in the command line. By the way, you need a JDK 8, but I guess you're up-to-date :)
== Run the perf tests
1. Move inside the `perf-runner` directory
1. Each server has its own url :
+
- http://localhost:8081/dropwizard
- http://localhost:8082/restlet
- http://localhost:8083/spark
- http://localhost:8084/spring
- http://localhost:8085/vertx
+
Set the `SERVICE_URI` environment variable with one of these values. Run the correspounding server with `java -jar <project>/build/lib/xxx.jar`.
1. Run the Gatling scenario:Β `./gradlew loadTest`
== Notes
There are no unit tests, and it is fully assumed.
== The missing guys (again)
Feel free to add competing frameworks, such as Restx, Payara and Swarm to the comparison.
|
Revert "Revert "Test trigger on push"" | = Infinispan Cluster Manager
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/"]
This is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].
Please see the in-source asciidoc documentation or the main documentation on the web-site for a full description
of this component:
* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]
* link:src/main/asciidoc/java/index.adoc[in-source docs]
| = Infinispan Cluster Manager
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/"]
This is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].
Please see the in-source asciidoc documentation or the main documentation on the web-site for a full description
of this component:
* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]
* link:src/main/asciidoc/java/index.adoc[in-source docs]
-- will remove --
|
Fix minor issues in delimited payload token filter docs | [[analysis-delimited-payload-tokenfilter]]
=== Delimited Payload Token Filter
Named `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.
Example: "the|1 quick|2 fox|3" is split per default int to tokens `fox`, `quick` and `the` with payloads `1`, `2` and `3` respectively.
Parameters:
`delimiter`::
Character used for splitting the tokens. Default is `|`.
`encoding`::
The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`. | [[analysis-delimited-payload-tokenfilter]]
=== Delimited Payload Token Filter
Named `delimited_payload_filter`. Splits tokens into tokens and payload whenever a delimiter character is found.
Example: "the|1 quick|2 fox|3" is split by default into tokens `the`, `quick`, and `fox` with payloads `1`, `2`, and `3` respectively.
Parameters:
`delimiter`::
Character used for splitting the tokens. Default is `|`.
`encoding`::
The type of the payload. `int` for integer, `float` for float and `identity` for characters. Default is `float`. |
Remove searching code that doesn't work on github | GRR Rapid Response documentation.
=================================
GRR is an Incident Response Framework focused on Remote Live Forensics.
Index
-----
1. link:user_manual.adoc[User Manual]
2. link:admin.adoc[Administration Documentation (Setup and Configuration)]
3. link:implementation.adoc[Developer and Implementation Documentation]
4. link:configuration.adoc[The GRR Configuration system]
5. link:https://code.google.com/p/grr/w/list[Wiki]
Search Wiki and Documentation
-----------------------------
++++
<script>
(function() {
var cx = '017727954489331006196:9wffyoayvxs';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<gcse:searchbox gname="docs"></gcse:searchbox>
++++
Search Code
-----------
++++
<script>
(function() {
var cx = '017727954489331006196:pfi9rtrihuq';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<gcse:searchbox gname="code"></gcse:searchbox>
++++
Results
-------
++++
<gcse:searchresults gname="docs"></gcse:searchresults>
<gcse:searchresults gname="code"></gcse:searchresults>
++++ | GRR Rapid Response documentation.
=================================
GRR is an Incident Response Framework focused on Remote Live Forensics.
Index
-----
1. link:user_manual.adoc[User Manual]
2. link:admin.adoc[Administration Documentation (Setup and Configuration)]
3. link:implementation.adoc[Developer and Implementation Documentation]
4. link:configuration.adoc[The GRR Configuration system]
5. link:https://code.google.com/p/grr/w/list[Wiki]
|
Update the Introduction and header | = Inform: A C library for information analysis of complex systems
Douglas G. Moore <doug@dglmoore.com>
v0.0.5, June 2017
:toc2:
:toclevels: 2
:source-highlighter: prettify
:stem: latexmath
Inform is a cross-platform C library designed for performing information
analysis of complex system.
1. The `inform_dist` struct provides discrete, emperical probability
distributions. These form the basis for all of the information-theoretic
measures.
2. A collection of information measures built upon the distribution struct
provide the core algorithms for the library, and are provided through the
`shannon.h` header.
3. A host of measures of the information dynamics on time series are built upon
the core information measures. Each measure is housed in its own header, e.g.
`active_info.h`.
In addition to the core components, a small collection of utilities are also
provided.
:leveloffset: 1
include::getting-started.adoc[]
include::distributions.adoc[]
include::shannon.adoc[]
| = Inform: A C library for information analysis of complex systems
Douglas G. Moore <douglas.g.moore@asu.edu>
v1.0.0, November 2017
:toc2:
:toclevels: 2
:source-highlighter: prettify
:stem: latexmath
image:https://travis-ci.org/ELIFE-ASU/Inform.svg?branch=master[Build Status (Travis CI),
link=https://travis-ci.org/ELIFE-ASU/Inform]
image:https://ci.appveyor.com/api/projects/status/7y015h6p7n0q7097/branch/master?svg=true[Build
Status (Appveyor), link=https://ci.appveyor.com/project/dglmoore/inform-vx977]
Inform is a cross-platform C library designed for performing information
analysis of complex system.
1. The `inform_dist` struct provides discrete, emperical probability
distributions. These form the basis for all of the information-theoretic
measures.
2. A collection of information measures built upon the distribution struct
provide the core algorithms for the library, and are provided through the
`shannon.h` header.
3. A host of measures of the information dynamics on time series are built upon
the core information measures. Each measure is housed in its own header, e.g.
`active_info.h`.
In addition to the core components, a small collection of utilities are also
provided.
:leveloffset: 1
include::getting-started.adoc[]
include::distributions.adoc[]
include::shannon.adoc[]
|
Update Release notes for v2.34.0 | = [.ebi-color]#Release notes#
:toc: auto
This pages contains links to release notes for DSP's REST API application.
[[section]]
== v2.33.0 Release notes
New features:
----------------
1. Add the possibility to set an additional checklist on a submittable for schema validation.
2. Add the possibility to set an additional checklist on a submittable when they uploaded with a spreadsheet.
3. Document checklist usage in API documentation.
----------------
Platform upgrades:
--------------
1. DSP's REST API application now runs on Java 11 (Open JDK 11).
--------------
| = [.ebi-color]#Release Notes#
:toc: auto
This pages contains links to release notes for DSP's REST API application.
[[section]]
== v2.34.0 Release Notes
New feautures:
----------------
1. DSP is going under maintenance from 8th March to 4th April 2020. For that period on DSP's welcome page displays a message regarding to this maintenance.
----------------
[[section]]
== v2.33.0 Release Notes
New features:
----------------
1. Add the possibility to set an additional checklist on a submittable for schema validation.
2. Add the possibility to set an additional checklist on a submittable when they uploaded with a spreadsheet.
3. Document checklist usage in API documentation.
----------------
Platform upgrades:
--------------
1. DSP's REST API application now runs on Java 11 (Open JDK 11).
--------------
|
Replace Travis badge by GitHub Actions badge | = Asciidoctor.js CLI
ifdef::env-github[]
image:https://img.shields.io/travis/asciidoctor/asciidoctor-cli.js/master.svg[Travis build status, link=https://travis-ci.org/asciidoctor/asciidoctor-cli.js]
image:https://img.shields.io/npm/v/@asciidoctor/cli.svg[npm version, link=https://www.npmjs.org/package/@asciidoctor/cli]
endif::[]
The Command Line Interface (CLI) for Asciidoctor.js.
Install Asciidoctor.js globally and you'll have access to the `asciidoctor` command anywhere on your system:
$ npm i -g asciidoctor
Type `asciidoctor --help` for more information.
| = Asciidoctor.js CLI
ifdef::env-github[]
image:https://github.com/asciidoctor/asciidoctor-cli.js/workflows/Build/badge.svg[GitHub Actions Status, link=https://github.com/asciidoctor/asciidoctor-cli.js/actions]
image:https://img.shields.io/npm/v/@asciidoctor/cli.svg[npm version, link=https://www.npmjs.org/package/@asciidoctor/cli]
endif::[]
The Command Line Interface (CLI) for Asciidoctor.js.
Install Asciidoctor.js globally and you'll have access to the `asciidoctor` command anywhere on your system:
$ npm i -g asciidoctor
Type `asciidoctor --help` for more information.
|
Use single-colon for image notation. | README
======
author::
grml solutions
version::
1.0.0
travis state::
image::https://api.travis-ci.org/meisterluk/screenshot-compare.svg?branch=master[]
What?
-----
You can compare two image files and it will show a difference score between 0 and 1.
Using transparent reference PNG images, you can also skip certain areas of the file.
Why?
----
We take screenshots of running live systems and check whether they are (visually) in a certain state.
Who?
----
Especially software developers testing software might find this software useful.
Consider that Mozilla Firefox's just got a link:https://developer.mozilla.org/en-US/Firefox/Headless_mode[headless mode] and soon you will be able to take screenshots.
Source Code
-----------
The source code is available at link:https://github.com/mika/screenshot-compare/issues[Github].
Issues
------
Please report any issues on the link:https://github.com/mika/screenshot-compare/issues[Github issues page].
License
-------
See link:LICENSE[the LICENSE file] (Hint: MIT license).
Changelog
---------
0.0.1::
first release: PNG only, transparency support
0.0.2::
goroutine support, timeout argument, slight performance improvement
1.0.0::
complete rewrite, `--wait` and `--timeout` parameters, `Y'UV` support
| README image:https://api.travis-ci.org/meisterluk/screenshot-compare.svg?branch=master[]
========================================================================================
author::
grml solutions
version::
1.0.0
What?
-----
You can compare two image files and it will show a difference score between 0 and 1.
Using transparent reference PNG images, you can also skip certain areas of the file.
Why?
----
We take screenshots of running live systems and check whether they are (visually) in a certain state.
Who?
----
Especially software developers testing software might find this software useful.
Consider that Mozilla Firefox's just got a link:https://developer.mozilla.org/en-US/Firefox/Headless_mode[headless mode] and soon you will be able to take screenshots.
Source Code
-----------
The source code is available at link:https://github.com/mika/screenshot-compare/issues[Github].
Issues
------
Please report any issues on the link:https://github.com/mika/screenshot-compare/issues[Github issues page].
License
-------
See link:LICENSE[the LICENSE file] (Hint: MIT license).
Changelog
---------
0.0.1::
first release: PNG only, transparency support
0.0.2::
goroutine support, timeout argument, slight performance improvement
1.0.0::
complete rewrite, `--wait` and `--timeout` parameters, `Y'UV` support
|
Include Hello World Youtube video | = Minimal-J
Java - but small.
image::doc/frontends.png[]
Minimal-J applications are
* Responsive to use on every device
* Straight forward to specify and implement and therefore
* Easy to plan and manage
=== Idea
Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.
Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.
== Technical Features
* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)
== Documentation
* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Informations for developers.
* link:doc/release_notes.adoc[Release Notes]
== Hello World
How to implement Hello World in Minimal-J:
video::[0VHz7gv6TpA][youtube]
=== Contact
* Bruno Eberhard, mailto:minimalj@hispeed.ch[minimalj@hispeed.ch] | = Minimal-J
Java - but small.
image::doc/frontends.png[]
Minimal-J applications are
* Responsive to use on every device
* Straight forward to specify and implement and therefore
* Easy to plan and manage
=== Idea
Business applications tend to get complex and complicated. Minimal-J prevents this by setting clear rules how an application should behave and how it should be implemented.
Minimal applications may not always look the same. But the UI concepts never change. There are no surprises for the user.
== Technical Features
* Independent of the used UI technology. Implementations for Web / Mobile / Desktop.
* ORM persistence layer for Maria DB or in memory DB. Transactions and Authorization supported.
* Small: The minimalj.jar is still < 1MB
* Very few dependencies
* Applications can run standalone (like SpringBoot)
== Documentation
* link:doc/user_guide/user_guide.adoc[Minimal user guide] User guide for Minimal-J applications.
* link:doc/topics.adoc[Tutorial and examples] Informations for developers.
* link:doc/release_notes.adoc[Release Notes]
== Hello World
How to implement Hello World in Minimal-J:
link:_includes/ex-video.adoc[0VHz7gv6TpA]
=== Contact
* Bruno Eberhard, mailto:minimalj@hispeed.ch[minimalj@hispeed.ch] |
Remove information admonition from readme | = Custom New Tab
Thor Andreas Rognan <thor.rognan@gmail.com>
:imagesdir: doc/assets/images
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
'''
NOTE: Chrome extension that creates a custom 'new tab'.
image::example.png[]
== Copyright and Licensing
Copyright (C) 2017- Thor Andreas Rognan
Free use of this software is granted under the terms of the MIT License.
See the <<LICENSE#,LICENSE>> file for details.
| = Custom New Tab
Thor Andreas Rognan <thor.rognan@gmail.com>
:imagesdir: doc/assets/images
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
'''
_Chrome extension that creates a custom 'new tab'._
image::example.png[]
== Copyright and Licensing
Copyright (C) 2017- Thor Andreas Rognan
Free use of this software is granted under the terms of the MIT License.
See the <<LICENSE#,LICENSE>> file for details.
|
Add notes about graduation date, scholarship | == School
I entered the Honours Computer Science Co-Op program at
link:http://mcmaster.ca[McMaster University] in Fall 2010. My original
graduation date would have been April 2014; however, due to my long-term
internship at Red Hat, this has been delayed by one year.
| == School
I entered the Honours Computer Science Co-Op program at
link:http://mcmaster.ca[McMaster University] in Fall 2010. My original
graduation date would have been April 2014; however, due to my long-term
internship at Red Hat, this has been delayed by one year. I'm on track
to complete my Bachelor's of Applied Science, Computer Science degree
in the Spring of 2015, with two advanced graduate level credits already
completed, and having earned the Louis J. Shein scholarship for "academic
excellence in Russian Language course" after first year.
|
Add a section on how to run Mozilla add-on linter | = Hacking
:uri-nodejs: http://nodejs.org
:uri-nvm: https://github.com/creationix/nvm
This guide will give you all the necessary information you need to become a successful code contributor!
== Setup
To build this project, you will need {uri-nodejs}[Node.js] >= 4 and `npm` (we recommend {uri-nvm}[nvm] to manage multiple active Node.js versions).
NOTE: If you feel more comfortable with `yarn`, you can use it as an alternative to `npm`
== Building
. Install all the dependencies
+
$ npm install
. Build the project
+
$ npm run build
+
This command will produce a zip file that can be load in Chrome: [.path]_dist/asciidoctor-browser-extension.zip_
. Run tests
+
$ npm run test
This project is using a code linter to enforce code consistency.
To make sure that the code you have contributed, follow the code rules, execute the following command:
$ npm run lint
| = Hacking
:uri-nodejs: http://nodejs.org
:uri-nvm: https://github.com/creationix/nvm
This guide will give you all the necessary information you need to become a successful code contributor!
== Setup
To build this project, you will need {uri-nodejs}[Node.js] >= 4 and `npm` (we recommend {uri-nvm}[nvm] to manage multiple active Node.js versions).
NOTE: If you feel more comfortable with `yarn`, you can use it as an alternative to `npm`
== Building
. Install all the dependencies
+
$ npm install
. Build the project
+
$ npm run build
+
This command will produce a zip file that can be load in Chrome: [.path]_dist/asciidoctor-browser-extension.zip_
. Run tests
+
$ npm run test
This project is using a code linter to enforce code consistency.
To make sure that the code you have contributed, follow the code rules, execute the following command:
$ npm run lint
== Add-on Linter
Mozilla provides a Node.js package to validate an add-on.
You can install the linter with `npm`:
$ npm install -g addons-linter
This command will install the package globally, so you can use the linter from any directory on your machine.
After installation, run the linter on the archive produced by `npm run build`:
$ addons-linter dist/asciidoctor-browser-extension.zip
|
Add link to top project page. | = EB4J xml2eb tool
:doctype: article
:docinfo:
:toc:
:toclevels: 2
:version: 1.99.0-SNAPSHOT
:project-name: xml2eb
This is a project page for xml2eb.
include::xml2eb.adoc[]
include::converter.adoc[]
include::reports.adoc[]
include::links.adoc[]
| = EB4J xml2eb tool
:doctype: article
:docinfo:
:toc:
:toclevels: 2
:version: 1.99.0-SNAPSHOT
:project-name: xml2eb
IMPORTANT: link:https://github.com/eb4j/xml2eb[View on GitHub]
| link:https://eb4j.github.io/[Top project page]
This is a project page for xml2eb.
include::xml2eb.adoc[]
include::converter.adoc[]
include::reports.adoc[]
include::links.adoc[]
|
Expand a bit on the typing | Goals
-----
GearScript aims to introduce strong static typing into TorqueScript, allowing for
a more consistent syntax while not changing semantics too much. Static type information
will initially be erased (compare to Java Generics) as part of the compilation process,
but that may change further down along the line. Types should be inferred to the extent
possible, in order to reduce the amount of typing needed.
Hierarchy
---------
GearScript's type system is hierarchical, but there is no true top type. Instead, there are six different top-level types: void, (float) numbers, strings, arrays, function references, and object references. This is because TorqueScript's syntax means that the primary way to do something often depends on which of these three types it belongs to. The most obvious example would be comparisons, which are done using `==` for number types and object reference comparison, but `$=` for strings. This ambiguity only arises when we have no type information available, and thus GearScript ought to resolve it transparently for the user.
Inference
---------
Types should be inferred for the user whenever possible. In most cases this is obvious from a variable's definition, since the literal syntax always unambiguously contains the type of a given value. The exception to this is function parameters, the types of these have to be inferred from how the parameter is used. If ultimately multiple different branches of types are valid for a value then a type must be defined explicitly. | Goals
-----
GearScript aims to introduce strong static typing into TorqueScript, allowing for a more consistent syntax while not changing semantics too much. Static type information will initially be erased (similarly to Java Generics) as part of the compilation process, but that may change further down along the line. Types should be inferred to the extent possible, in order to reduce the amount of typing needed.
Hierarchy
---------
GearScript's type system is hierarchical, but there is no true top type. Instead, there are six different top-level types: void, (float) numbers, strings, arrays, function references, and object references. This is because TorqueScript's syntax means that the primary way to do something often depends on which of these three types it belongs to. The most obvious example would be comparisons, which are done using `==` for number types and object reference comparison, but `$=` for strings. This ambiguity only arises when we have no type information available, and thus GearScript ought to resolve it transparently for the user.
Inference
---------
Types should be inferred for the user whenever possible. In most cases this is obvious from a variable's definition, since the literal syntax always unambiguously contains the type of a given value. The exception to this is function parameters, the types of these have to be inferred from how the parameter is used. If they are used as arguments for a function, inherit that type. If methods are called on them, look at what types we know about that expose those methods. If ultimately multiple different branches of types (like either an int or a string, or two different subclasses of the same class) are valid for a value then a type must be defined explicitly.
Custom Top Types
----------------
It should be possible for users to define new top-level types with unique behaviour that wrap other types. Use case: tagged strings.
|
Add Kerberos/SPNEGO Shield custom realm | [[security]]
== Security Plugins
Security plugins add a security layer to Elasticsearch.
[float]
=== Core security plugins
The core security plugins are:
link:/products/shield[Shield]::
Shield is the Elastic product that makes it easy for anyone to add
enterprise-grade security to their ELK stack. Designed to address the growing security
needs of thousands of enterprises using ELK today, Shield provides peace of
mind when it comes to protecting your data.
[float]
=== Community contributed security plugins
The following plugin has been contributed by our community:
* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:
High performance access control for Elasticsearch native REST API (by Simone Scarduzio)
This community plugin appears to have been abandoned:
* https://github.com/sonian/elasticsearch-jetty[Jetty HTTP transport plugin]:
Uses Jetty to provide SSL connections, basic authentication, and request logging (by Sonian Inc.) | [[security]]
== Security Plugins
Security plugins add a security layer to Elasticsearch.
[float]
=== Core security plugins
The core security plugins are:
link:/products/shield[Shield]::
Shield is the Elastic product that makes it easy for anyone to add
enterprise-grade security to their ELK stack. Designed to address the growing security
needs of thousands of enterprises using ELK today, Shield provides peace of
mind when it comes to protecting your data.
[float]
=== Community contributed security plugins
The following plugins have been contributed by our community:
* https://github.com/codecentric/elasticsearch-shield-kerberos-realm[Kerberos/SPNEGO Realm]:
Custom Shield realm to Authenticate HTTP and Transport requests via Kerberos/SPNEGO (by codecentric AG)
* https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin[Readonly REST]:
High performance access control for Elasticsearch native REST API (by Simone Scarduzio)
This community plugin appears to have been abandoned:
* https://github.com/sonian/elasticsearch-jetty[Jetty HTTP transport plugin]:
Uses Jetty to provide SSL connections, basic authentication, and request logging (by Sonian Inc.)
|
Add the good link to the blog article. | = Microservices libraries comparison
== Purpose
This project is the companion of this Blog article: http://cdelmas.github.io/.....
== Build
To build it, just run `gradle shadowJar` in the command line.
== Run
Then you can run each server using `java -jar <server>.jar`.
| = Microservices libraries comparison
== Purpose
This project is the companion of this Blog article: https://cdelmas.github.io/2015/11/01/A-comparison-of-Microservices-Frameworks.html.
== Build
To build it, just run `gradle shadowJar` in the command line.
== Run
Then you can run each server using `java -jar <server>.jar`.
|
Use block syntax for note/warning | :function-title: {{name}}
[#{{id}}]
=== {function-title}
{{#if deprecated}}
[CAUTION]
====
{{{deprecatedDocs}}}
====
{{/if}}
{{{brief}}}
++++
<pre class="highlightjs highlight"><code class="language-{{@root.sourceLanguage}} hljs" data-lang="{{@root.sourceLanguage}}">{{declaration}}</code></pre>
++++
{{#if return.description}}==== Return
{{return.description}}{{/if}}
{{{description}}}
{{#each note}}
NOTE: {{{this}}}
{{/each}}
{{#each warning}}
WARNING: {{{this}}}
{{/each}}
{{#if params}}
==== Parameters
[cols="1,3a", stripes="even"]
|===
|Name |Description
{{#each params}}
|``{{name}}``
|{{{description}}}
{{/each}}
|===
{{/if}}
| :function-title: {{name}}
[#{{id}}]
=== {function-title}
{{#if deprecated}}
[CAUTION]
====
{{{deprecatedDocs}}}
====
{{/if}}
{{{brief}}}
++++
<pre class="highlightjs highlight"><code class="language-{{@root.sourceLanguage}} hljs" data-lang="{{@root.sourceLanguage}}">{{declaration}}</code></pre>
++++
{{#if return.description}}==== Return
{{return.description}}{{/if}}
{{{description}}}
{{#each note}}
[NOTE]
====
{{{this}}}
====
{{/each}}
{{#each warning}}
[WARNING]
====
{{{this}}}
====
{{/each}}
{{#if params}}
==== Parameters
[cols="1,3a", stripes="even"]
|===
|Name |Description
{{#each params}}
|``{{name}}``
|{{{description}}}
{{/each}}
|===
{{/if}}
|
Add beat for readings stats from uWSGI | [[community-beats]]
== Community Beats
The open source community has been hard at work developing new Beats. You can check
out a few of them here:
[horizontal]
https://github.com/Ingensi/dockerbeat[dockerbeat]:: Reads docker container
statistics and indexes them in Elasticsearch
https://github.com/mrkschan/nginxbeat[nginxbeat]:: Reads status from Nginx
https://github.com/joshuar/pingbeat[pingbeat]:: Sends ICMP pings to a list
of targets and stores the round trip time (RTT) in Elasticsearch
Have you created a Beat that's not listed? Open a pull request to add your link
here: https://github.com/elastic/libbeat/blob/master/docs/communitybeats.asciidoc
NOTE: Elastic provides no warranty or support for community-sourced Beats.
[[contributing-beats]]
=== Contributing to Beats
Remember, you can be a Beats developer, too. <<new-beat, Learn how>>
| [[community-beats]]
== Community Beats
The open source community has been hard at work developing new Beats. You can check
out a few of them here:
[horizontal]
https://github.com/Ingensi/dockerbeat[dockerbeat]:: Reads docker container
statistics and indexes them in Elasticsearch
https://github.com/mrkschan/nginxbeat[nginxbeat]:: Reads status from Nginx
https://github.com/joshuar/pingbeat[pingbeat]:: Sends ICMP pings to a list
of targets and stores the round trip time (RTT) in Elasticsearch
https://github.com/mrkschan/uwsgibeat[uwsgibeat]:: Reads stats from uWSGI
Have you created a Beat that's not listed? Open a pull request to add your link
here: https://github.com/elastic/libbeat/blob/master/docs/communitybeats.asciidoc
NOTE: Elastic provides no warranty or support for community-sourced Beats.
[[contributing-beats]]
=== Contributing to Beats
Remember, you can be a Beats developer, too. <<new-beat, Learn how>>
|
Revert "[Docs] Fix base directory to include for put_mapping.asciidoc" | [[java-admin-indices-put-mapping]]
:base-dir: {docdir}/../../server/src/test/java/org/elasticsearch/action/admin/indices/create
==== Put Mapping
The PUT mapping API allows you to add a new type while creating an index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> It also adds a `tweet` mapping type.
The PUT mapping API also allows to add a new type to an existing index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Adds a `user` mapping type.
<3> This `user` has a predefined type
<4> type can be also provided within the source
You can use the same API to update an existing mapping:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Updates the `user` mapping type.
<3> This `user` has now a new field `user_name`
:base-dir!:
| [[java-admin-indices-put-mapping]]
:base-dir: {docdir}/../../core/src/test/java/org/elasticsearch/action/admin/indices/create
==== Put Mapping
The PUT mapping API allows you to add a new type while creating an index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> It also adds a `tweet` mapping type.
The PUT mapping API also allows to add a new type to an existing index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Adds a `user` mapping type.
<3> This `user` has a predefined type
<4> type can be also provided within the source
You can use the same API to update an existing mapping:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Updates the `user` mapping type.
<3> This `user` has now a new field `user_name`
:base-dir!: |
Fix wrong link (even if it's temporally) | = Artificial Intelligence for Spring Boot
:awestruct-description: Learn how to use OptaPlanner (open source, java) for Artificial Intelligence planning optimization on Spring Boot.
:awestruct-layout: compatibilityBase
:awestruct-priority: 1.0
:awestruct-related_tag: spring
:showtitle:
OptaPlanner has a Spring Boot Starter to get up and running quickly.
Usage is similar to the link:quarkus.html[Quarkus] extension, but without the performance benefits.
video::U2N02ReT9CI[youtube]
== Guide
**https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/blob/master/README.adoc[Read the OptaPlanner on Spring Boot guide.]**
== Quick start
Run the quick start yourself:
. Download https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/tree/master/complete[the source code].
. Run `./mvnw clean install`
. Open http://localhost:8080 in your browser
| = Artificial Intelligence for Spring Boot
:awestruct-description: Learn how to use OptaPlanner (open source, java) for Artificial Intelligence planning optimization on Spring Boot.
:awestruct-layout: compatibilityBase
:awestruct-priority: 1.0
:awestruct-related_tag: spring
:showtitle:
OptaPlanner has a Spring Boot Starter to get up and running quickly.
Usage is similar to the link:quarkus.html[Quarkus] extension, but without the performance benefits.
video::U2N02ReT9CI[youtube]
== Guide
**https://github.com/ge0ffrey/getting-started-guides/blob/gs-constraint-solving-ai-optaplanner/README.adoc[Read the OptaPlanner on Spring Boot guide.]**
== Quick start
Run the quick start yourself:
. Download https://github.com/ge0ffrey/gs-constraint-solving-ai-optaplanner-backup/tree/master/complete[the source code].
. Run `./mvnw clean install`
. Open http://localhost:8080 in your browser
|
Fix the mTLS entry in the feature matrix | // Module included in the following assemblies:
//
// * serverless/serverless-release-notes.adoc
:_content-type: REFERENCE
[id="serverless-deprecated-removed-features_{context}"]
= Deprecated and removed features
Some features available in previous releases have been deprecated or removed.
Deprecated functionality is still included in {ServerlessProductName} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {ServerlessProductName}, refer to the table below.
In the table, features are marked with the following statuses:
* *-*: _Not yet available_
* *TP*: _Technology Preview_
* *GA*: _General Availability_
* *DEP*: _Deprecated_
* *REM*: _Removed_
.Deprecated and removed features tracker
[cols="3,1,1,1",options="header"]
|====
|Feature |1.18|1.19|1.20
|`kn func emit` (`kn func invoke` in 1.21+)
|TP
|TP
|TP
|mTLS
|GA
|GA
|GA
|`kn func` TypeScript templates
|TP
|TP
|TP
|`kn func` Rust templates
|TP
|TP
|TP
|`emptyDir` volumes
|GA
|GA
|GA
|`KafkaBinding` API
|GA
|DEP
|DEP
|HTTPS redirection
|-
|GA
|GA
|Kafka broker
|-
|-
|TP
|====
| // Module included in the following assemblies:
//
// * serverless/serverless-release-notes.adoc
:_content-type: REFERENCE
[id="serverless-deprecated-removed-features_{context}"]
= Deprecated and removed features
Some features available in previous releases have been deprecated or removed.
Deprecated functionality is still included in {ServerlessProductName} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {ServerlessProductName}, refer to the table below.
In the table, features are marked with the following statuses:
* *-*: _Not yet available_
* *TP*: _Technology Preview_
* *GA*: _General Availability_
* *DEP*: _Deprecated_
* *REM*: _Removed_
.Deprecated and removed features tracker
[cols="3,1,1,1",options="header"]
|====
|Feature |1.18|1.19|1.20
|`kn func emit` (`kn func invoke` in 1.21+)
|TP
|TP
|TP
|Service Mesh mTLS
|GA
|GA
|GA
|`kn func` TypeScript templates
|TP
|TP
|TP
|`kn func` Rust templates
|TP
|TP
|TP
|`emptyDir` volumes
|GA
|GA
|GA
|`KafkaBinding` API
|GA
|DEP
|DEP
|HTTPS redirection
|-
|GA
|GA
|Kafka broker
|-
|-
|TP
|====
|
Remove lines added for testing CI setup | = Infinispan Cluster Manager
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/"]
This is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].
Please see the in-source asciidoc documentation or the main documentation on the web-site for a full description
of this component:
* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]
* link:src/main/asciidoc/java/index.adoc[in-source docs]
-- will remove --
| = Infinispan Cluster Manager
image:https://vertx.ci.cloudbees.com/buildStatus/icon?job=vert.x3-infinispan["Build Status",link="https://vertx.ci.cloudbees.com/view/vert.x-3/job/vert.x3-infinispan/"]
This is a cluster manager implementation for Vert.x that uses http://infinispan.org[Infinispan].
Please see the in-source asciidoc documentation or the main documentation on the web-site for a full description
of this component:
* link:http://vertx.io/docs/vertx-infinispan/java/[web-site docs]
* link:src/main/asciidoc/java/index.adoc[in-source docs]
|
Move shields right after the title | image:https://img.shields.io/github/release/heruan/humanize.svg[link=https://github.com/heruan/humanize/releases,title=Latest release]
image:https://img.shields.io/github/downloads/heruan/humanize/total.svg[link=https://github.com/heruan/humanize/archive/master.zip,title=GitHub]
image:https://img.shields.io/circleci/project/github/heruan/humanize.svg[link=https://circleci.com/gh/heruan/humanize,title=CricleCI]
image:https://img.shields.io/codecov/c/github/heruan/humanize.svg[link=https://codecov.io/gh/heruan/humanize,title=Codecov]
image:https://img.shields.io/github/license/heruan/humanize.svg[link=http://www.apache.org/licenses/LICENSE-2.0.html,title=Apache License 2.0]
= Humanization libraries for Java
== humanize-time
image:https://img.shields.io/maven-central/v/to.lova.humanize/humanize-time.svg[]
[source,java]
----
Temporal minutesAgo = ZonedDateTime.now().minus(5, ChronoUnit.MINUTES);
String relative = HumanizeTime.fromNow(minutesAgo);
// produces: "5 minutes ago"
----
| = Humanization libraries for Java
image:https://img.shields.io/github/release/heruan/humanize.svg[link=https://github.com/heruan/humanize/releases,title=Latest release]
image:https://img.shields.io/github/downloads/heruan/humanize/total.svg[link=https://github.com/heruan/humanize/archive/master.zip,title=GitHub]
image:https://img.shields.io/circleci/project/github/heruan/humanize.svg[link=https://circleci.com/gh/heruan/humanize,title=CricleCI]
image:https://img.shields.io/codecov/c/github/heruan/humanize.svg[link=https://codecov.io/gh/heruan/humanize,title=Codecov]
image:https://img.shields.io/github/license/heruan/humanize.svg[link=http://www.apache.org/licenses/LICENSE-2.0.html,title=Apache License 2.0]
== humanize-time
image:https://img.shields.io/maven-central/v/to.lova.humanize/humanize-time.svg[title=humanize-time]
[source,java]
----
Temporal minutesAgo = ZonedDateTime.now().minus(5, ChronoUnit.MINUTES);
String relative = HumanizeTime.fromNow(minutesAgo);
// produces: "5 minutes ago"
----
|
Revert "[Docs] Fix base directory to include for put_mapping.asciidoc" | [[java-admin-indices-put-mapping]]
:base-dir: {docdir}/../../server/src/test/java/org/elasticsearch/action/admin/indices/create
==== Put Mapping
The PUT mapping API allows you to add a new type while creating an index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> It also adds a `tweet` mapping type.
The PUT mapping API also allows to add a new type to an existing index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Adds a `user` mapping type.
<3> This `user` has a predefined type
<4> type can be also provided within the source
You can use the same API to update an existing mapping:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Updates the `user` mapping type.
<3> This `user` has now a new field `user_name`
:base-dir!:
| [[java-admin-indices-put-mapping]]
:base-dir: {docdir}/../../core/src/test/java/org/elasticsearch/action/admin/indices/create
==== Put Mapping
The PUT mapping API allows you to add a new type while creating an index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[addMapping-create-index-request]
--------------------------------------------------
<1> <<java-admin-indices-create-index,Creates an index>> called `twitter`
<2> It also adds a `tweet` mapping type.
The PUT mapping API also allows to add a new type to an existing index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Adds a `user` mapping type.
<3> This `user` has a predefined type
<4> type can be also provided within the source
You can use the same API to update an existing mapping:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{base-dir}/CreateIndexIT.java[putMapping-request-source-append]
--------------------------------------------------
<1> Puts a mapping on existing index called `twitter`
<2> Updates the `user` mapping type.
<3> This `user` has now a new field `user_name`
:base-dir!: |
Fix cross link in 4.3 and master | [id="persistent-storage-using-flexvolume"]
= Persistent storage using FlexVolume
include::modules/common-attributes.adoc[]
:context: persistent-storage-flexvolume
toc::[]
{product-title} supports FlexVolume, an out-of-tree plug-in that uses an executable model to interface with drivers.
To use storage from a back-end that does not have a built-in plug-in, you can extend {product-title} through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.
.Additional References
* link:expanding-persistent-volumes.adoc[Expanding persistent volumes]
include::modules/persistent-storage-flexvolume-drivers.adoc[leveloffset=+1]
include::modules/persistent-storage-flexvolume-driver-example.adoc[leveloffset=+1]
include::modules/persistent-storage-flexvolume-installing.adoc[leveloffset=+1]
include::modules/persistent-storage-flexvolume-consuming.adoc[leveloffset=+1]
| [id="persistent-storage-using-flexvolume"]
= Persistent storage using FlexVolume
include::modules/common-attributes.adoc[]
:context: persistent-storage-flexvolume
toc::[]
{product-title} supports FlexVolume, an out-of-tree plug-in that uses an executable model to interface with drivers.
To use storage from a back-end that does not have a built-in plug-in, you can extend {product-title} through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.
.Additional References
* link:../expanding-persistent-volumes.adoc#expanding-persistent-volumes_{context}[Expanding persistent volumes]
include::modules/persistent-storage-flexvolume-drivers.adoc[leveloffset=+1]
include::modules/persistent-storage-flexvolume-driver-example.adoc[leveloffset=+1]
include::modules/persistent-storage-flexvolume-installing.adoc[leveloffset=+1]
include::modules/persistent-storage-flexvolume-consuming.adoc[leveloffset=+1]
|
Add missing cd into build. | # Garage Client
## TODO
- Smarter upload process
- Handle trailing slash on URLs correctly
## Developing
Getting started:
sudo apt-get install build-essential cmake g++ libboost-dev libboost-program-options-dev libboost-filesystem-dev libboost-system-dev libcurl4-gnutls-dev clang clang-format-3.6 ninja-build
mkdir build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
./garage-push --help
Generate tags
make tags
Before commiting, run the pre-commit checks:
CTEST_OUTPUT_ON_FAILURE=1 make qa
This will reformat all the code with clang-format and run clang-check and the test suite.
Please follow the https://google.github.io/styleguide/cppguide.html[Google C++ Style Guide] coding standard.
## Dockerfile
A Dockerfile is provided to check that the list of package dependencies is
up-to-date. For day-to-day development it is recommended to build
`garage-push` in your native environment.
To build inside docker:
docker build -t garage-tools-build .
docker run -ti -v $PWD:/src garage-tools-build
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug /src
make qa
// vim: set tabstop=4 shiftwidth=4 expandtab:
| # Garage Client
## TODO
- Smarter upload process
- Handle trailing slash on URLs correctly
## Developing
Getting started:
sudo apt-get install build-essential cmake g++ libboost-dev libboost-program-options-dev libboost-filesystem-dev libboost-system-dev libcurl4-gnutls-dev clang clang-format-3.6 ninja-build
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make
./garage-push --help
Generate tags
make tags
Before commiting, run the pre-commit checks:
CTEST_OUTPUT_ON_FAILURE=1 make qa
This will reformat all the code with clang-format and run clang-check and the test suite.
Please follow the https://google.github.io/styleguide/cppguide.html[Google C++ Style Guide] coding standard.
## Dockerfile
A Dockerfile is provided to check that the list of package dependencies is
up-to-date. For day-to-day development it is recommended to build
`garage-push` in your native environment.
To build inside docker:
docker build -t garage-tools-build .
docker run -ti -v $PWD:/src garage-tools-build
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug /src
make qa
// vim: set tabstop=4 shiftwidth=4 expandtab:
|
Add addtional upgrade notes for 19.0.0. | [[releasenotes-19]]
== OpenNMS 19
=== Important Upgrade Notes
* *Simple Topology Provider*: The Simple Topology Provider has been removed. A new GraphML Topology Provider is introduced and should be used instead.
=== New Features
* GraphML Topology Provider: A new Topology Provider which reads GraphML formatted graphs and displays them in the the Topology UI. | [[releasenotes-19]]
== OpenNMS 19
=== Important Upgrade Notes
* *Simple Topology Provider*: The Simple Topology Provider has been removed. A new GraphML Topology Provider is introduced and should be used instead.
* *Cassandra JMX Metrics*: The default value for the `friendly-name` attribute on the `JMX-Cassandra` collection service has changed from `cassandra21x` to `cassandra`.
This changes the path in which the metrics are stored.
If you have already been collecting these metrics and wish to preserve them, you can ignore this change when merging your configuration.
* *Jetty 9.3.x Upgrade*: Jetty has been upgraded from `8.1.x` to the latest `9.3.x`.
If you have a custom `jetty.xml` in your `etc` folder, you will need to migrate your changes.
Use `etc/examples/jetty.xml` as a starting point.
* *Drools 6.4.0 Upgrade*: Drools has been upgraded from `6.0.1.Final` to `6.4.0.Final`.
If you have custom Drools rules, they may need to be revised.
The compiler used in `6.4.0` is stricter than the compiler in previous versions.
=== New Features
* GraphML Topology Provider: A new Topology Provider which reads GraphML formatted graphs and displays them in the the Topology UI.
|
Update format docs to cover the UNIX format | *-F, --format* '[json|text]'::
Set the output format for stdout. Defaults to "text". | *-F, --format* '[json|text|unix]'::
Set the output format for stdout. Defaults to "text".
+
*TEXT* output is human-friendly textual output, usually in table or
record-oriented format.
In some cases, *TEXT* format is intentionally kept simple to support naive use
of commands within a subshell, but it is not generally guaranteed to be
parseable.
+
*JSON* output will produce raw JSON data from underlying Globus services.
This data is useful for using the CLI as part of a programmatic process, or for
deciding on valid *--jmespath* queries.
+
*UNIX* output will produce a line-by-line serialization of the result data
(which would be visible with *JSON*).
This is very suitable for use in pipelines with `grep`, `sort`, and other
standard unix tools.
+
Whenever you request *--format=UNIX*, you should also be using a *--jmespath*
query to select the exact fields that you want.
This better guarantees the consistency of output contents and ordering.
|
Rename reference docs to Elasticsearch Reference | [[elasticsearch-reference]]
= Reference
:version: 1.5.2
:branch: 1.5
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
include::getting-started.asciidoc[]
include::setup.asciidoc[]
include::migration/index.asciidoc[]
include::api-conventions.asciidoc[]
include::docs.asciidoc[]
include::search.asciidoc[]
include::aggregations.asciidoc[]
include::indices.asciidoc[]
include::cat.asciidoc[]
include::cluster.asciidoc[]
include::query-dsl.asciidoc[]
include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
| [[elasticsearch-reference]]
= Elasticsearch Reference
:version: 1.5.2
:branch: 1.5
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
include::getting-started.asciidoc[]
include::setup.asciidoc[]
include::migration/index.asciidoc[]
include::api-conventions.asciidoc[]
include::docs.asciidoc[]
include::search.asciidoc[]
include::aggregations.asciidoc[]
include::indices.asciidoc[]
include::cat.asciidoc[]
include::cluster.asciidoc[]
include::query-dsl.asciidoc[]
include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::testing.asciidoc[]
include::glossary.asciidoc[]
|
Document Jackson serialization support for OAuth 2.0 Client | [[jackson]]
== Jackson Support
Spring Security has added Jackson Support for persisting Spring Security related classes.
This can improve the performance of serializing Spring Security related classes when working with distributed sessions (i.e. session replication, Spring Session, etc).
To use it, register the `SecurityJackson2Modules.getModules(ClassLoader)` as https://wiki.fasterxml.com/JacksonFeatureModules[Jackson Modules].
[source,java]
----
ObjectMapper mapper = new ObjectMapper();
ClassLoader loader = getClass().getClassLoader();
List<Module> modules = SecurityJackson2Modules.getModules(loader);
mapper.registerModules(modules);
// ... use ObjectMapper as normally ...
SecurityContext context = new SecurityContextImpl();
// ...
String json = mapper.writeValueAsString(context);
----
| [[jackson]]
== Jackson Support
Spring Security provides Jackson support for persisting Spring Security related classes.
This can improve the performance of serializing Spring Security related classes when working with distributed sessions (i.e. session replication, Spring Session, etc).
To use it, register the `SecurityJackson2Modules.getModules(ClassLoader)` with `ObjectMapper` (https://github.com/FasterXML/jackson-databind[jackson-databind]):
[source,java]
----
ObjectMapper mapper = new ObjectMapper();
ClassLoader loader = getClass().getClassLoader();
List<Module> modules = SecurityJackson2Modules.getModules(loader);
mapper.registerModules(modules);
// ... use ObjectMapper as normally ...
SecurityContext context = new SecurityContextImpl();
// ...
String json = mapper.writeValueAsString(context);
----
[NOTE]
====
The following Spring Security modules provide Jackson support:
- spring-security-core (`CoreJackson2Module`)
- spring-security-web (`WebJackson2Module`, `WebServletJackson2Module`, `WebServerJackson2Module`)
- <<oauth2client, spring-security-oauth2-client>> (`OAuth2ClientJackson2Module`)
- spring-security-cas (`CasJackson2Module`)
====
|
Add date to the changelog entry. | = Changelog
== Version 0.4.2
- Bug fix related to :iat param validating on jws. (thanks to @tvanhens)
== Version 0.4.1
Date: 2015-03-14
- Update nippy version from 2.7.1 to 2.8.0
- Update buddy-core from 0.4.0 to 0.4.2
- Update cats from 0.3.2 to 0.3.4
== Version 0.4.0
Date: 2015-02-22
- Add encode/decode functions to JWS/JWT implementation. Them instead of return
plain value, return a monadic either. That allows granular error reporting
instead something like nil that not very useful. The previous sign/unsign
are conserved for backward compatibility but maybe in future will be removed.
- Rename parameter `maxage` to `max-age` on jws implementation. This change
introduces a little backward incompatibility.
- Add "compact" signing implementation as replacemen of django based one.
- Django based generic signing is removed.
- Update buddy-core version to 0.4.0
== Version 0.3.0
Date: 2014-01-18
- First version splitted from monolitic buddy package.
- No changes from original version.
| = Changelog
== Version 0.4.2
Date: 2015-03-29
- Bug fix related to :iat param validating on jws. (thanks to @tvanhens)
== Version 0.4.1
Date: 2015-03-14
- Update nippy version from 2.7.1 to 2.8.0
- Update buddy-core from 0.4.0 to 0.4.2
- Update cats from 0.3.2 to 0.3.4
== Version 0.4.0
Date: 2015-02-22
- Add encode/decode functions to JWS/JWT implementation. Them instead of return
plain value, return a monadic either. That allows granular error reporting
instead something like nil that not very useful. The previous sign/unsign
are conserved for backward compatibility but maybe in future will be removed.
- Rename parameter `maxage` to `max-age` on jws implementation. This change
introduces a little backward incompatibility.
- Add "compact" signing implementation as replacemen of django based one.
- Django based generic signing is removed.
- Update buddy-core version to 0.4.0
== Version 0.3.0
Date: 2014-01-18
- First version splitted from monolitic buddy package.
- No changes from original version.
|
Fix link and snippet url | = Using the Kafka controller
:page-sidebar: apim_3_x_sidebar
:page-permalink: apim/3.x/apim_publishersme_using_kafka.html
:page-folder: apim/user-guide/publisher
:page-layout: apim3x
== Overview
This section describes the basic usage of the Kafka controller - producing and consuming messages.
=== Producing messages
Using the HTTP `POST` command to the example endpoint `https://apim-3-x-x-gateway.cloud.gravitee.io/kafka/messages`, you can send a message with the followig structure:
[source,json]
----
https://apim-3-13-x-gateway.cloud.gravitee.io/kafka/messages
{
"messages": [
{
"key": "key",
"value": {
"val1": "hello"
}
}
]
}
----
=== Consuming messages
Using the HTTP `GET` command to the example endpoint `https://apim-3-x-x-gateway.cloud.gravitee.io/kafka/messages`, you can receive any available messages. | = Using the Kafka controller
:page-sidebar: apim_3_x_sidebar
:page-permalink: apim/3.x/apim_publisherguide_using_kafka.html
:page-folder: apim/user-guide/publisher
:page-layout: apim3x
== Overview
This section describes the basic usage of the Kafka controller - producing and consuming messages.
=== Producing messages
Using the HTTP `POST` command to the example endpoint `https://api.company.com/kafka/messages`, you can send a message with the followig structure:
[source,json]
----
https://api.company.com/kafka/messages
{
"messages": [
{
"key": "key",
"value": {
"val1": "hello"
}
}
]
}
----
=== Consuming messages
Using the HTTP `GET` command to the example endpoint `https://api.company.com/kafka/messages`, you can receive any available messages. |
Update to reflect Ali Cloud TP status in 4.10 | // Module included in the following assemblies:
//
// * operators/operator-reference.adoc
[id="cluster-cloud-controller-manager-operator_{context}"]
= Cluster Cloud Controller Manager Operator
[discrete]
== Purpose
[NOTE]
====
This Operator is only fully supported for Alibaba Cloud, Microsoft Azure Stack Hub, and IBM Cloud.
It is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, {rh-openstack-first}, and VMware vSphere.
====
The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of {product-title}. The Operator is based on the Kubebuilder framework and `controller-runtime` libraries. It is installed via the Cluster Version Operator (CVO).
It contains the following components:
* Operator
* Cloud configuration observer
By default, the Operator exposes Prometheus metrics through the `metrics` service.
[discrete]
== Project
link:https://github.com/openshift/cluster-cloud-controller-manager-operator[cluster-cloud-controller-manager-operator]
| // Module included in the following assemblies:
//
// * operators/operator-reference.adoc
[id="cluster-cloud-controller-manager-operator_{context}"]
= Cluster Cloud Controller Manager Operator
[discrete]
== Purpose
[NOTE]
====
This Operator is only fully supported for Microsoft Azure Stack Hub and IBM Cloud.
It is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, {rh-openstack-first}, and VMware vSphere.
====
The Cluster Cloud Controller Manager Operator manages and updates the cloud controller managers deployed on top of {product-title}. The Operator is based on the Kubebuilder framework and `controller-runtime` libraries. It is installed via the Cluster Version Operator (CVO).
It contains the following components:
* Operator
* Cloud configuration observer
By default, the Operator exposes Prometheus metrics through the `metrics` service.
[discrete]
== Project
link:https://github.com/openshift/cluster-cloud-controller-manager-operator[cluster-cloud-controller-manager-operator]
|
Add new to-do items based on field testing from 04-14 | = To-Do List
- mock_cone_detector creates infinite area and overflows h
- (*DONE*) new waypoints shorter than old don't delete existing waypoints
- adjust waypoints for start position and cone position
- cone area goes down when very close to cone
- (*DONE*) parameterize throttle and steering PWM values
- (*DONE*) touch sensor does not work
- (*DONE*) cone detection in bright light does not work
- GUIDED mode does not work
- (*DONE*) Encode PWM values or range set to use in the waypoints file
- If waypoint encountered before cone is seen, rover goes into HOLD mode
with no recovery. Needs to be fixed.
== Possible To-Do
- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,
while WP_SPEED is not.)
- Have a way of manually triggering parameter reload
== Notes
MAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the
offset between the map origin and base_link (for /mavros/local_position/pose
and /mavros/local_position/odom).
| = To-Do List
- shouldn't need intermediate waypoint to trigger EXPECTING_CONE
- have to handle case where cone waypoint is achieved before seeing cone
- need to limit time of reverse
- have to handle case where in MANUAL mode and don't see cone
- mock_cone_detector creates infinite area and overflows h
- (*DONE*) new waypoints shorter than old don't delete existing waypoints
- adjust waypoints for start position and cone position
- cone area goes down when very close to cone
- (*DONE*) parameterize throttle and steering PWM values
- (*DONE*) touch sensor does not work
- (*DONE*) cone detection in bright light does not work
- GUIDED mode does not work
- (*DONE*) Encode PWM values or range set to use in the waypoints file
- If waypoint encountered before cone is seen, rover goes into HOLD mode
with no recovery. Needs to be fixed.
== Possible To-Do
- (*DONE*) Change from using WP_SPEED to CRUISE_SPEED. (Seems to be used by Vicky,
while WP_SPEED is not.)
- Have a way of manually triggering parameter reload
== Notes
MAV_CMD_DO_SET_HOME appears to reset the map origin, as well as zero the
offset between the map origin and base_link (for /mavros/local_position/pose
and /mavros/local_position/odom).
|
Convert wiki link to jenkinsio plugin link | [[ivy-plugin]]
= ivy-plugin
image:https://img.shields.io/jenkins/plugin/v/ivy.svg[Jenkins Plugin,link=https://plugins.jenkins.io/ivy]
image:https://img.shields.io/github/release/jenkinsci/ivy-plugin.svg?label=release[GitHub release,link=https://github.com/jenkinsci/ivy-plugin/releases/latest]
image:https://img.shields.io/jenkins/plugin/i/ivy.svg?color=blue[Jenkins Plugin Installs,link=https://plugins.jenkins.io/ivy]
image:https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/badge/icon[Build Status,link=https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/]
Provides Jenkins integration with http://ant.apache.org/ivy/[Apache Ivy].
See http://wiki.jenkins-ci.org/display/JENKINS/Ivy+Plugin[Ivy Plugin] on the Jenkins wiki for more information.
| [[ivy-plugin]]
= ivy-plugin
image:https://img.shields.io/jenkins/plugin/v/ivy.svg[Jenkins Plugin,link=https://plugins.jenkins.io/ivy]
image:https://img.shields.io/github/release/jenkinsci/ivy-plugin.svg?label=release[GitHub release,link=https://github.com/jenkinsci/ivy-plugin/releases/latest]
image:https://img.shields.io/jenkins/plugin/i/ivy.svg?color=blue[Jenkins Plugin Installs,link=https://plugins.jenkins.io/ivy]
image:https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/badge/icon[Build Status,link=https://ci.jenkins.io/job/Plugins/job/ivy-plugin/job/master/]
Provides Jenkins integration with http://ant.apache.org/ivy/[Apache Ivy].
See https://plugins.jenkins.io/ivy/[Ivy Plugin] on the Jenkins wiki for more information.
|
Add a note about completed example | To complete this guide you will need to checkout the source from Github and work through the steps presented by the guide.
To get started do the following:
* link:https://github.com/{githubSlug}/archive/master.zip[Download] and unzip the source or if you already have https://git-scm.com/[Git]: `git clone https://github.com/{githubSlug}.git`
* `cd` into `{githubSlug}/initial`
* Head on over to the next section
| To complete this guide you will need to checkout the source from Github and work through the steps presented by the guide.
To get started do the following:
* link:https://github.com/{githubSlug}/archive/master.zip[Download] and unzip the source or if you already have https://git-scm.com/[Git]: `git clone https://github.com/{githubSlug}.git`
* `cd` into `{githubSlug}/initial`
* Head on over to the next section
TIP: You can go right to the completed example if you `cd` into `{githubSlug}/complete`
|
Add doc push on snapshot release | :source-highlighter: pygments
= Muon Clojure
Muon Clojure is just awesome and clojurific
| ---
---
:title: Muon Clojure
:layout: documentation
:source-highlighter: pygments
:toc: right
= Muon Clojure docs
Muon Clojure is just awesome and clojurific
|
Fix nonsensical sentence in standard analyzer documentation so that it is more understandable | [[analysis-standard-analyzer]]
=== Standard Analyzer
An analyzer of type `standard` that is built of using
<<analysis-standard-tokenizer,Standard
Tokenizer>>, with
<<analysis-standard-tokenfilter,Standard
Token Filter>>,
<<analysis-lowercase-tokenfilter,Lower
Case Token Filter>>, and
<<analysis-stop-tokenfilter,Stop
Token Filter>>.
The following are settings that can be set for a `standard` analyzer
type:
[cols="<,<",options="header",]
|=======================================================================
|Setting |Description
|`stopwords` |A list of stopword to initialize the stop filter with.
Defaults to the english stop words.
|`max_token_length` |The maximum token length. If a token is seen that
exceeds this length then it is discarded. Defaults to `255`.
|=======================================================================
| [[analysis-standard-analyzer]]
=== Standard Analyzer
An analyzer of type `standard` is built using the
<<analysis-standard-tokenizer,Standard
Tokenizer>> with the
<<analysis-standard-tokenfilter,Standard
Token Filter>>,
<<analysis-lowercase-tokenfilter,Lower
Case Token Filter>>, and
<<analysis-stop-tokenfilter,Stop
Token Filter>>.
The following are settings that can be set for a `standard` analyzer
type:
[cols="<,<",options="header",]
|=======================================================================
|Setting |Description
|`stopwords` |A list of stopword to initialize the stop filter with.
Defaults to the english stop words.
|`max_token_length` |The maximum token length. If a token is seen that
exceeds this length then it is discarded. Defaults to `255`.
|=======================================================================
|
Document the -r <repository> option of the `kn func create` command | [id="serverless-create-func-kn_{context}"]
= Creating functions
You can create a basic serverless function using the `kn` CLI.
You can specify the path, runtime, and template as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal.
.Procedure
* Create a function project:
+
[source,terminal]
----
$ kn func create <path> -l <runtime> -t <template>
----
** Supported runtimes include `node`, `go`, `python`, `quarkus`, and `typescript`.
** Supported templates include `http` and `events`.
+
.Example command
[source,terminal]
----
$ kn func create -l typescript -t events examplefunc
----
+
.Example output
[source,terminal]
----
Project path: /home/user/demo/examplefunc
Function name: examplefunc
Runtime: typescript
Template: events
Writing events to /home/user/demo/examplefunc
----
| [id="serverless-create-func-kn_{context}"]
= Creating functions
You can create a basic serverless function using the `kn` CLI.
You can specify the path, runtime, template, and repository with the template as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal.
.Procedure
* Create a function project:
+
[source,terminal]
----
$ kn func create -r <repository> -l <runtime> -t <template> <path>
----
** Supported runtimes include `node`, `go`, `python`, `quarkus`, and `typescript`.
** Supported templates include `http` and `events`.
+
.Example command
[source,terminal]
----
$ kn func create -l typescript -t events examplefunc
----
+
.Example output
[source,terminal]
----
Project path: /home/user/demo/examplefunc
Function name: examplefunc
Runtime: typescript
Template: events
Writing events to /home/user/demo/examplefunc
----
+
** Alternatively, you can specify a repository that contains a custom template.
+
.Example command
[source,terminal]
----
$ kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc
----
+
.Example output
[source,terminal]
----
Project path: /home/user/demo/examplefunc
Function name: examplefunc
Runtime: node
Template: hello-world
Writing events to /home/user/demo/examplefunc
----
|
Update readme with latest stats output | libframetime
============
A preloadable library, able to dump the frame times of any OpenGL application on Linux, on
any driver.
By default, the timing is written into /tmp/libframetime.out, but you can specify an
alternate file with the LIBFRAMETIME_FILE env var.
Usage
-----
----
LD_PRELOAD=path/to/libframetime.so dota2
----
Or with a custom output file:
----
LIBFRAMETIME_FILE=/tmp/dota2.frametime LD_PRELOAD=path/to/libframetime.so dota2
----
The accompanying awk script can be used to calculate the usual stats:
----
$ stats.awk < libframetime.out
----
----
Min/avg/max frametimes (us): 166 / 625.626 / 5955
Min/avg/max FPS: 167.926 / 1598.4 / 6024.1
----
| libframetime
============
A preloadable library, able to dump the frame times of any OpenGL application on Linux, on
any driver.
By default, the timing is written into /tmp/libframetime.out, but you can specify an
alternate file with the LIBFRAMETIME_FILE env var.
Usage
-----
----
LD_PRELOAD=path/to/libframetime.so dota2
----
Or with a custom output file:
----
LIBFRAMETIME_FILE=/tmp/dota2.frametime LD_PRELOAD=path/to/libframetime.so dota2
----
The accompanying awk script can be used to calculate the usual stats:
----
$ stats.awk < libframetime.out
----
----
Min/avg/max frametimes (us): 166 / 625.626 / 5955
Min/avg/max FPS: 167.926 / 1598.4 / 6024.1
50/90/95/99 percentiles (us): 410 / 434 / 589 / 5018
----
|
Add a link to the docs in the release announcement | == New Check and 2 Fixes
:docname: 20180406-new-check-and-2-fixes
I fixed 3 issues for which there are releases of the following components:
* `revapi-basic-features-0.7.1` that contains a fix for https://github.com/revapi/revapi/issues/119[#119] which means
that the semver transform should no longer crash when there is no prior version of artifacts
* `revapi-java-spi-0.15.1` that includes the definition of the new `serialVersionUIDChanged` check (prompted by
https://github.com/revapi/revapi/issues/120[#120])
* `revapi-java-0.16.0` that contains the implementation of `serialVersionUIDChanged` and additionally contains
an important fix that could cause problems be reported on wrong elements in some cases.
* `revapi-maven-plugin-0.10.1` that bundles the latest revapi-basic-features version
You are urged to upgrade especially to `revapi-java-0.16.0` to avoid some head scratching when examining the Revapi
reports.
Thanks go out to Ricardo Ferreira for reporting https://github.com/revapi/revapi/issues/120[#120] and Matthew Kavanagh
for his analysis of https://github.com/revapi/revapi/issues/119[#119].
include::../util/disqus.adoc[]
| == New Check and 2 Fixes
:docname: 20180406-new-check-and-2-fixes
I fixed 3 issues for which there are releases of the following components:
* `revapi-basic-features-0.7.1` that contains a fix for https://github.com/revapi/revapi/issues/119[#119] which means
that the semver transform should no longer crash when there is no prior version of artifacts
* `revapi-java-spi-0.15.1` that includes the definition of the new `serialVersionUIDChanged`
https://revapi.org/modules/revapi-java/index.html#field_code_serialversionuid_code_changed[check] (prompted by
https://github.com/revapi/revapi/issues/120[#120])
* `revapi-java-0.16.0` that contains the implementation of `serialVersionUIDChanged` and additionally contains
an important fix that could cause problems be reported on wrong elements in some cases.
* `revapi-maven-plugin-0.10.1` that bundles the latest revapi-basic-features version
You are urged to upgrade especially to `revapi-java-0.16.0` to avoid some head scratching when examining the Revapi
reports.
Thanks go out to Ricardo Ferreira for reporting https://github.com/revapi/revapi/issues/120[#120] and Matthew Kavanagh
for his analysis of https://github.com/revapi/revapi/issues/119[#119].
include::../util/disqus.adoc[]
|
Update readme with blank info | = Vert.x Unit examples
Here you'll find some examples of how to use Vert.x unit to test your asynchronous applications.
Tests are located in the link:src/test/java/io/vertx/example/unit/test directory.
Examples can be run directly from the IDE.
== Vertx Unit Test
The link:src/test/java/io/vertx/example/unit/test/VertxUnitTest.java demonstrates how the Vert.x Unit API can be used to run tests using the Vert.x Unit test runner.
Run this example by running the `main` method.
== Junit
Vert.x Unit can be used with Junit:
* link:src/test/java/io/vertx/example/unit/test/MyJunitTest.java demonstrates how to use the Vert.x Unit Junit runner
to execute your tests
* link:src/test/java/io/vertx/example/unit/test/ParameterizedTest.java demonstrates how to inject parameters into
your Junit tests
* link:src/test/java/io/vertx/example/unit/test/RunOnContextTest.java demonstrates how to delegate Vert.x instance
creation to Vert.x Unit and how to use a rule to run the test methods on the event loop (caution: they must be non-blocking)
* link:src/test/java/io/vertx/example/unit/test/JUnitAndAssertJTest.java demonstrates how to use AssertJ in
combination with Vert.x Unit
* link:src/test/java/io/vertx/example/unit/test/JUnitAndHamcrestTest.java demonstrates how to use Hamcrest in
combination with Vert.x Unit
All the tests can be run from your IDE, or directly with Maven:
```
mvn clean test
```
| = Vert.x gRPC examples
todo
|
Add required "name" parameter in example | = SpringBoot WebApp Demo
SpringBoot looks like a nice way to get started.
This is a trivial webapp created using SpringBoot.
== HowTo
mvn spring-boot:run
then connect to http://localhost:8000/notaservlet
Note that is 8000 not the usual 8080 to avoid conflicts.
Change this in application.properties if you don't like it.
== War Deployment
It seems that you can't have both the instant-deployment convenience of Spring Boot
AND the security of a full WAR deployment in the same pom file. You will need to
make several changes to deploy as a WAR file. See the section entitled
"Traditional Deployment"--"Create a deployable war file" in the
spring-boot reference manual (Section 73.1 in the current snapshot as of
this writing).
| = SpringBoot WebApp Demo
SpringBoot looks like a nice way to get started.
This is a trivial webapp created using SpringBoot.
== HowTo
mvn spring-boot:run
then connect to
http://localhost:8000/notaservlet?name=Robin Smith
Note that is 8000 not the usual 8080 to avoid conflicts.
Change this in application.properties if you don't like it.
== War Deployment
It seems that you can't have both the instant-deployment convenience of Spring Boot
AND the security of a full WAR deployment in the same pom file. You will need to
make several changes to deploy as a WAR file. See the section entitled
"Traditional Deployment"--"Create a deployable war file" in the
spring-boot reference manual (Section 73.1 in the current snapshot as of
this writing).
|
Add deprecatio not on changelog. | = Changelog
== Version 0.4.0
Date: unreleased
- Add encode/decode functions to JWS/JWT implementation. Them instead of return
plain value, return a monadic either. That allows granular error reporting
instead something like nil that not very useful. The previous sign/unsign
are conserved for backward compatibility but maybe in future will be removed.
- Rename parameter `maxage` to `max-age` on jws implementation. This change
introduces a little backward incompatibility.
== Version 0.3.0
Date: 2014-01-18
- First version splitted from monolitic buddy package.
- No changes from original version.
| = Changelog
== Version 0.4.0
Date: unreleased
- Add encode/decode functions to JWS/JWT implementation. Them instead of return
plain value, return a monadic either. That allows granular error reporting
instead something like nil that not very useful. The previous sign/unsign
are conserved for backward compatibility but maybe in future will be removed.
- Rename parameter `maxage` to `max-age` on jws implementation. This change
introduces a little backward incompatibility.
- Django based generic signing is deprecated.
== Version 0.3.0
Date: 2014-01-18
- First version splitted from monolitic buddy package.
- No changes from original version.
|
Add Gradle check to testApp snippet | To run the tests:
[source, bash]
----
./grailsw
grails> test-app
grails> open test-report
----
| To run the tests:
[source, bash]
----
./grailsw
grails> test-app
grails> open test-report
----
or
[source, bash]
----
./gradlew check
open build/reports/tests/index.html
----
|
Add DataTables and links to readme | = Spring Boot and Two DataSources
This project demonstrates how to use two `DataSource` s with Spring Boot.
It utilizes:
* Spring Data JPA/REST
* Flyway migrations for the two `DataSource` s
* Separate Hibernate properties for each `DataSource`
* Application properties with YAML
* Thymeleaf
* Unit tests for components | = Spring Boot and Two DataSources
This project demonstrates how to use two `DataSource` s with Spring Boot.
It utilizes:
* Spring Data https://github.com/spring-projects/spring-data-jpa[JPA] / https://github.com/spring-projects/spring-data-rest[REST]
* https://github.com/flyway/flyway[Flyway] migrations for the two `DataSource` s
* Separate Hibernate properties for each `DataSource` defined in the application.yml
* https://github.com/thymeleaf/thymeleaf[Thymeleaf]
* https://github.com/DataTables/DataTablesSrc[DataTables]
* Unit tests for components |
Add a little more docs. More to come. | = Database Table Replicator (DbShadow)
:Author: David Thompson, Matt Conroy
:Email: <dthompsn1@gmail.com> <matt@conroy.cc>
:Revision: 0.0.1 2017-02-08
== Description
Have you ever had the need to copy data from one database to another? How about between to different types of
databases? How about just verifying that your database tables between replicated instances are in sync?
DbShadow is a Java command line application that can help you perform these actions. Using the power of JDBC, it
can read data from one database instance and then using primary keys compare, propogate, and locate differences within
a separate database instance.
== ReleaseNotes
0.0.1 - Base code
== Usage
| = Database Table Replicator (DbShadow)
:Author: David Thompson, Matt Conroy
:Email: <dthompsn1@gmail.com> <matt@conroy.cc>
:Revision: 0.0.1 2017-02-08
== Description
Have you ever had the need to copy data from one database to another? How about between to different types of
databases? How about just verifying that your database tables between replicated instances are in sync?
DbShadow is a Java command line application that can help you perform these actions. Using the power of JDBC, it
can read data from one database instance and then using primary keys compare, propogate, and locate differences within
a separate database instance.
== ReleaseNotes
0.0.1 - Base code
== Building
DbShadow uses Scala Build Tool (sbt) in order to build. Clone the repo and type
....
$ sbt clean compile
....
In order to package the system into a Java binary that's useful for somebody to use:
....
$ sbt clean universal:packageZipTarball
....
This will produce a tarball that can then be installed with a bash script for launching.
== Usage
Help usage is available with the command line.
$ dbshadow -- --help
More documentation to follow.
|
Add minor schema changes to Changelog. | === Apm-Server version HEAD
https://github.com/elastic/apm-server/compare/x...master[View commits]
==== Breaking changes
==== Bugfixes
==== Added
==== Deprecated
==== Known Issue
| === Apm-Server version HEAD
https://github.com/elastic/apm-server/compare/x...master[View commits]
==== Breaking changes
==== Bugfixes
*Bugfixes in fields.yml leading to ES schema changes*
- changed `context.system.title` to `context.system.process_title`, removed `transaction.context`, `trace.context` (already available on top level). {pull}10[10]
==== Added
==== Deprecated
==== Known Issue
|
Fix paragraph name in Johnzon docs for Gitbook | [[Johnzon-Johnzon]]
Johnzon
~~~~~~~
*Available as of Camel 2.18*
Johnzon is a link:data-format.html[Data Format] which uses the
http://johnzon.apache.org/[Johnzon Library]
[source,java]
-------------------------------
from("activemq:My.Queue").
marshal().json(JsonLibrary.Johnzon).
to("mqseries:Another.Queue");
-------------------------------
[[JacksonXML-Dependencies]]
Dependencies
^^^^^^^^^^^^
To use Johnzon in your camel routes you need to add the dependency
on *camel-johnzon* which implements this data format.
If you use maven you could just add the following to your pom.xml,
substituting the version number for the latest & greatest release (see
link:download.html[the download page for the latest versions]).
[source,xml]
----------------------------------------------------------
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-johnzon</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
----------------------------------------------------------
| [[Johnzon-Johnzon]]
Johnzon
~~~~~~~
*Available as of Camel 2.18*
Johnzon is a link:data-format.html[Data Format] which uses the
http://johnzon.apache.org/[Johnzon Library]
[source,java]
-------------------------------
from("activemq:My.Queue").
marshal().json(JsonLibrary.Johnzon).
to("mqseries:Another.Queue");
-------------------------------
[[Johnzon-Dependencies]]
Dependencies
^^^^^^^^^^^^
To use Johnzon in your camel routes you need to add the dependency
on *camel-johnzon* which implements this data format.
If you use maven you could just add the following to your pom.xml,
substituting the version number for the latest & greatest release (see
link:download.html[the download page for the latest versions]).
[source,xml]
----------------------------------------------------------
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-johnzon</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
----------------------------------------------------------
|
Relocate tip for Fleet APIs | [role="xpack"]
[[fleet-apis]]
TIP: For the {kib} {fleet} APIs, see the
{fleet-guide}/fleet-api-docs.html[`Fleet API Documentation`].
== Fleet APIs
The following APIs support {fleet}'s use of {es} as a data store for internal
agent and action data. These APIs are experimental and for internal use by
{fleet} only.
* <<get-global-checkpoints,Get global checkpoints>>
// top-level
include::get-global-checkpoints.asciidoc[]
| [role="xpack"]
[[fleet-apis]]
== Fleet APIs
TIP: For the {kib} {fleet} APIs, see the
{fleet-guide}/fleet-api-docs.html[Fleet API Documentation].
The following APIs support {fleet}'s use of {es} as a data store for internal
agent and action data. These APIs are experimental and for internal use by
{fleet} only.
* <<get-global-checkpoints,Get global checkpoints>>
// top-level
include::get-global-checkpoints.asciidoc[]
|
Add the known kanban boards | = Report an issue
:awestruct-layout: normalBase
:showtitle:
== Issue tracker
We welcome issue reports (bugs, improvements, new feature requests, ...) in our issue tracker:
*Show https://issues.jboss.org/browse/drools[the JIRA issue tracker].*
Log in and click on the button _Create Issue_ to report a bug, improvement or feature request.
== Pull requests on GitHub
Want to fix the issue yourself? Fork https://github.com/droolsjbpm[the git repository] and send in a pull request.
We usually process all pull requests within a few days.
| = Report an issue
:awestruct-layout: normalBase
:showtitle:
== Issue tracker
We welcome issue reports (bugs, improvements, new feature requests, ...) in our issue tracker:
*Show https://issues.jboss.org/browse/drools[the JIRA issue tracker].*
Log in and click on the button _Create Issue_ to report a bug, improvement or feature request.
== Pull requests on GitHub
Want to fix the issue yourself? Fork https://github.com/droolsjbpm[the git repository] and send in a pull request.
We usually process all pull requests within a few days.
== Kanban boards
* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=4016[Drools]
* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=3828[OptaPlanner]
* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=3972[jBPM]
* AppFormer (todo)
* https://issues.jboss.org/secure/RapidBoard.jspa?rapidView=3838[Designer.NEXT]
|
Remove definition of unnecessary ProductName attribute in docs | // set attributes usually set by Antora
ifndef::site-gen-antora[]
:moduledir: ..
:attachmentsdir: {moduledir}/assets/attachments
:examplesdir: {moduledir}/examples
:imagesdir: {moduledir}/assets/images
:partialsdir: {moduledir}/pages/_partials
endif::[]
:ProductName: Debezium
:debezium-version: 1.0.0.Final
:debezium-dev-version: 1.1
:debezium-kafka-version: 2.4.0
:debezium-docker-label: 1.0
:install-version: 1.0
:confluent-platform-version: 5.3.1
:strimzi-version: 0.13.0
:jira-url: https://issues.redhat.com
:prodname: Debezium
:assemblies: ../assemblies
:modules: ../../modules
:mysql-connector-plugin-download: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.0.0.Final/debezium-connector-mysql-1.0.0.Final-plugin.tar.gz
:mysql-version: 8.0
| // set attributes usually set by Antora
ifndef::site-gen-antora[]
:moduledir: ..
:attachmentsdir: {moduledir}/assets/attachments
:examplesdir: {moduledir}/examples
:imagesdir: {moduledir}/assets/images
:partialsdir: {moduledir}/pages/_partials
endif::[]
:debezium-version: 1.0.0.Final
:debezium-dev-version: 1.1
:debezium-kafka-version: 2.4.0
:debezium-docker-label: 1.0
:install-version: 1.0
:confluent-platform-version: 5.3.1
:strimzi-version: 0.13.0
:jira-url: https://issues.redhat.com
:prodname: Debezium
:assemblies: ../assemblies
:modules: ../../modules
:mysql-connector-plugin-download: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.0.0.Final/debezium-connector-mysql-1.0.0.Final-plugin.tar.gz
:mysql-version: 8.0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.