List<com.dropbox.core.v2.files.SearchMatchV2>
list of the file paths found. For more
diff --git a/camel-dsl-modeline.md b/camel-dsl-modeline.md
new file mode 100644
index 0000000000000000000000000000000000000000..3386c7322d7b8cf2b5c4ed5f5a17584f2b61ef0a
--- /dev/null
+++ b/camel-dsl-modeline.md
@@ -0,0 +1,24 @@
+# Dsl-modeline.md
+
+**Since Camel 3.16**
+
+# Camel K
+
+Support for Camel K style modelines when running Camel standalone, such
+as with Camel JBang.
+
+The following traits are supported:
+
+- dependency
+
+- env
+
+- name
+
+- property
+
+# Camel JBang
+
+There is also support for [JBang
+dependencies](https://www.jbang.dev/documentation/guide/latest/dependencies.html)
+using the `//DEPS` comment style.
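
To make this concrete, here is a sketch of what such modelines can look like at the top of an integration file (the Maven coordinates, names, and values below are made up for illustration):

```java
// camel-k: dependency=mvn:org.apache.commons:commons-lang3:3.14.0
// camel-k: name=greeting-route
// camel-k: property=greeting=Hello
// camel-k: env=MY_ENV=dev

// JBang style dependency declaration is also picked up:
//DEPS org.apache.commons:commons-lang3:3.14.0
```

Camel parses these comment lines before starting the routes, so the declared dependencies, properties, and environment variables are in place when the integration runs.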
diff --git a/camel-dsl.md b/camel-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..858b79b20593ae547697e0be2bb3439219f95db8
--- /dev/null
+++ b/camel-dsl.md
@@ -0,0 +1,7 @@
+# Dsl.md
+
+# DSL components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=DSL*,descriptionformat=description\]
diff --git a/camel-durable-subscriber.md b/camel-durable-subscriber.md
new file mode 100644
index 0000000000000000000000000000000000000000..9183813408b6483691900955dd8f2dbf75e801fb
--- /dev/null
+++ b/camel-durable-subscriber.md
@@ -0,0 +1,47 @@
+# Durable-subscriber.md
+
+Camel supports the [Durable
+Subscriber](https://www.enterpriseintegrationpatterns.com/patterns/messaging/DurableSubscription.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+Camel supports the Durable Subscriber from the EIP patterns using
+components, such as the [JMS](#ROOT:jms-component.adoc) or
+[Kafka](#ROOT:kafka-component.adoc) component, which support publish \&
+subscribe using topics, with both non-durable and durable
+subscribers.
+
+# Example
+
+Here is a simple example of creating durable subscribers to a JMS topic:
+
+Java
+
+    from("direct:start")
+        .to("activemq:topic:foo");
+
+    from("activemq:topic:foo?clientId=1&durableSubscriptionName=bar1")
+        .to("mock:result1");
+
+    from("activemq:topic:foo?clientId=2&durableSubscriptionName=bar2")
+        .to("mock:result2");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <to uri="activemq:topic:foo"/>
+    </route>
+
+    <route>
+        <from uri="activemq:topic:foo?clientId=1&amp;durableSubscriptionName=bar1"/>
+        <to uri="mock:result1"/>
+    </route>
+
+    <route>
+        <from uri="activemq:topic:foo?clientId=2&amp;durableSubscriptionName=bar2"/>
+        <to uri="mock:result2"/>
+    </route>
+
diff --git a/camel-dynamic-router.md b/camel-dynamic-router.md
index d684018562a38733abbc2387c3e472d8bd5fa2b1..9fd192f1c899f7d9e185c1a2033b502928ec2d7a 100644
--- a/camel-dynamic-router.md
+++ b/camel-dynamic-router.md
@@ -112,7 +112,7 @@ Spring XML
-# Dynamic Router EIP Component Use Cases
+# Examples
The benefit of the Dynamic Router EIP Component can best be seen,
perhaps, through looking at some use cases. These examples are not the
@@ -320,7 +320,7 @@ In the `camel-spring-boot-examples` project, the
this category that you can run and/or experiment with to get a practical
feel for how you might use this in your own multi-JVM application stack.
-# JMX Control and Monitoring Operations
+## JMX Control and Monitoring Operations
The Dynamic Router Control component supports some JMX operations that
allow you to control and monitor the component. It is beyond the scope
diff --git a/camel-dynamicRouter-eip.md b/camel-dynamicRouter-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..74be22c8e0118d84f5fdec963f58a127c69ad171
--- /dev/null
+++ b/camel-dynamicRouter-eip.md
@@ -0,0 +1,154 @@
+# DynamicRouter-eip.md
+
+The [Dynamic
+Router](http://www.enterpriseintegrationpatterns.com/DynamicRouter.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) allows
+you to route messages while avoiding the dependency of the router on all
+possible destinations while maintaining its efficiency.
+
+# Options
+
+# Exchange properties
+
+# Dynamic Router
+
+The Dynamic Router is similar to the [Routing
+Slip](#routingSlip-eip.adoc) EIP, but with the slip evaluated
+dynamically *on-the-fly*. The [Routing Slip](#routingSlip-eip.adoc), on
+the other hand, evaluates the slip only once, at the beginning.
+
+The Dynamic Router sets the exchange property (`Exchange.SLIP_ENDPOINT`)
+with the current slip. This allows you to know how far the exchange has
+progressed through the overall slip.
+
+# Example
+
+You can use the `dynamicRouter` as shown below:
+
+Java
+
+    from("direct:start")
+        // use a bean as the dynamic router
+        .dynamicRouter(method(MySlipBean.class, "slip"));
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <dynamicRouter>
+            <method beanType="com.foo.MySlipBean" method="slip"/>
+        </dynamicRouter>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:start
+        steps:
+          - dynamicRouter:
+              method:
+                beanType: com.foo.MySlipBean
+                method: slip
+
+This will call a [Bean Method](#languages:bean-language.adoc) to
+compute the slip *on-the-fly*. The bean could be implemented as follows:
+
+    // state kept between invocations (not thread-safe; see the next section)
+    private final List<String> bodies = new ArrayList<>();
+    private int invoked;
+
+    /**
+     * Use this method to compute dynamically where we should route next.
+     *
+     * @param body the message body
+     * @return endpoints to go to, or null to indicate the end
+     */
+    public String slip(String body) {
+        bodies.add(body);
+        invoked++;
+
+ if (invoked == 1) {
+ return "mock:a";
+ } else if (invoked == 2) {
+ return "mock:b,mock:c";
+ } else if (invoked == 3) {
+ return "direct:foo";
+ } else if (invoked == 4) {
+ return "mock:result";
+ }
+
+ // no more so return null
+ return null;
+ }
+
+**Beware:** You must ensure that the expression used for the
+`dynamicRouter`, such as a bean, eventually returns `null` to indicate
+the end. Otherwise, the `dynamicRouter` will keep repeating endlessly.
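
Stripped of all Camel machinery, this repeat-until-`null` behavior can be sketched in plain Java (a self-contained illustration with made-up class names, not Camel's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the dynamic router control flow: keep re-evaluating the
// slip until it returns null, collecting every endpoint along the way.
public class DynamicRouterSketch {

    private int invoked;

    // stand-in for the bean's slip(...) method shown above
    String slip(String body) {
        invoked++;
        switch (invoked) {
            case 1: return "mock:a";
            case 2: return "mock:b,mock:c";
            case 3: return "direct:foo";
            case 4: return "mock:result";
            default: return null; // null ends the routing
        }
    }

    // the "router": a real router would send the exchange to each endpoint
    public List<String> route(String body) {
        List<String> visited = new ArrayList<>();
        String next;
        while ((next = slip(body)) != null) {
            for (String endpoint : next.split(",")) {
                visited.add(endpoint);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        System.out.println(new DynamicRouterSketch().route("hello"));
        // prints [mock:a, mock:b, mock:c, direct:foo, mock:result]
    }
}
```

A slip that never returns `null` would make the `while` loop above spin forever, which is exactly the endless repetition warned about.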
+
+## Thread-safe beans
+
+Mind that this example is only for demonstration purposes. The current
+implementation is not thread-safe. You would have to store the state on
+the `Exchange` to ensure thread safety, as shown below:
+
+    /**
+     * Use this method to compute dynamically where we should route next.
+     *
+     * @param body the message body
+     * @param properties the exchange properties where we can store state between invocations
+     * @return endpoints to go to, or null to indicate the end
+     */
+    public String slip(String body, @ExchangeProperties Map<String, Object> properties) {
+ bodies.add(body);
+
+ // get the state from the exchange properties and keep track how many times
+ // we have been invoked
+ int invoked = 0;
+ Object current = properties.get("invoked");
+ if (current != null) {
+ invoked = Integer.parseInt(current.toString());
+ }
+ invoked++;
+ // and store the state back on the properties
+ properties.put("invoked", invoked);
+
+ if (invoked == 1) {
+ return "mock:a";
+ } else if (invoked == 2) {
+ return "mock:b,mock:c";
+ } else if (invoked == 3) {
+ return "direct:foo";
+ } else if (invoked == 4) {
+ return "mock:result";
+ }
+
+ // no more so return null
+ return null;
+ }
+
+You could also store state as message headers, but they are not
+guaranteed to be preserved during routing, whereas properties on the
+Exchange are.
+
+# @DynamicRouter annotation
+
+You can also use [Bean Integration](#manual::bean-integration.adoc) with
+the `@DynamicRouter` annotation, on a Java bean method.
+
+In the example below, the `route` method is invoked repeatedly
+as the message is processed dynamically. The idea is to return the URI
+of the next endpoint to go to, or `null` to end. You can return
+multiple endpoints if you like, just as with the [Routing
+Slip](#routingSlip-eip.adoc), with each endpoint separated by a
+comma.
+
+ public class MyDynamicRouter {
+
+ @Consume(uri = "activemq:foo")
+ @DynamicRouter
+ public String route(@XPath("/customer/id") String customerId, @Header("Location") String location, Document body) {
+ // Query a database to find the best match of the endpoint based on the input parameters
+ // return the next endpoint uri, where to go. Return null to indicate the end.
+ }
+ }
+
+The parameters on the `route` method are bound to information from the
+Exchange using [Bean Parameter Binding](#manual::bean-binding.adoc).
diff --git a/camel-ehcache.md b/camel-ehcache.md
index 29a93884244c8d9795800b801bbf2dc3f8cbf845..1182a79b1425bbba74c55b08d7adb25e130ed598 100644
--- a/camel-ehcache.md
+++ b/camel-ehcache.md
@@ -24,7 +24,9 @@ for this component:
ehcache://cacheName[?options]
-# Ehcache based idempotent repository example:
+# Examples
+
+## Ehcache based idempotent repository example:
CacheManager manager = CacheManagerBuilder.newCacheManager(new XmlConfiguration("ehcache.xml"));
EhcacheIdempotentRepository repo = new EhcacheIdempotentRepository(manager, "idempotent-cache");
@@ -33,7 +35,7 @@ for this component:
.idempotentConsumer(header("messageId"), idempotentRepo)
.to("mock:out");
-# Ehcache based aggregation repository example:
+## Ehcache based aggregation repository example:
public class EhcacheAggregationRepositoryRoutesTest extends CamelTestSupport {
private static final String ENDPOINT_MOCK = "mock:result";
diff --git a/camel-eip-exchangeProperties.md b/camel-eip-exchangeProperties.md
new file mode 100644
index 0000000000000000000000000000000000000000..8267c446954ea40953873f5f547a43c87e925990
--- /dev/null
+++ b/camel-eip-exchangeProperties.md
@@ -0,0 +1,5 @@
+# Eip-exchangeProperties.md
+
+\|util.description(value) \\ \|util.valueAsString(value.defaultValue) \\
+\|util.javaSimpleName(value.javaType)' :requires:
+*util=util/jsonpath-util.js*
diff --git a/camel-eip-options.md b/camel-eip-options.md
new file mode 100644
index 0000000000000000000000000000000000000000..599d530c3180fe3f2d66d969ceddeb74072c546e
--- /dev/null
+++ b/camel-eip-options.md
@@ -0,0 +1,5 @@
+# Eip-options.md
+
+\|util.description(value) \\ \|util.valueAsString(value.defaultValue) \\
+\|util.javaSimpleName(value.javaType)' :requires:
+*util=util/jsonpath-util.js*
diff --git a/camel-elasticsearch-rest-client.md b/camel-elasticsearch-rest-client.md
index 70bd94528a6ff97cb955d4434cb1f23d7a4877ce..8d07127e4b60e0af28a3386f5955cea59fc7a78b 100644
--- a/camel-elasticsearch-rest-client.md
+++ b/camel-elasticsearch-rest-client.md
@@ -34,14 +34,14 @@ The following operations are currently supported.
-
+
-
+
INDEX_OR_UPDATE
String,
byte[], Reader or InputStream
@@ -53,7 +53,7 @@ parameter option, or by providing a message header with the key
INDEX_NAME. When updating indexed content, you must provide
its id via a message header with the key ID .
-
+
GET_BY_ID
String id of content to
retrieve
@@ -65,7 +65,7 @@ message header with the key INDEX_NAME. You must provide
the index id of the content to retrieve either in the message body, or
via a message header with the key ID .
-
+
DELETE
String id of content to
delete
@@ -78,7 +78,7 @@ URI parameter option, or by providing a message header with the key
delete either in the message body, or via a message header with the key
ID .
-
+
CREATE_INDEX
Creates the specified
@@ -90,7 +90,7 @@ header with the key INDEX_NAME. You may also provide a
header with the key INDEX_SETTINGS where the value is a
JSON String representation of the index settings.
-
+
DELETE_INDEX
Deletes the specified
@@ -100,7 +100,7 @@ You can set the name of the target index to create from the
indexName URI parameter option, or by providing a message
header with the key INDEX_NAME.
-
+
SEARCH
Map (optional)
Search for content with either a
@@ -115,7 +115,9 @@ the query criteria.
-# Index Content Example
+# Examples
+
+## Index Content Example
To index some content.
@@ -131,13 +133,13 @@ the message body with the updated content.
.setBody().constant("{\"content\": \"ElasticSearch REST Client With Camel\"}")
.to("elasticsearch-rest-client://myCluster?operation=INDEX_OR_UPDATE&indexName=myIndex");
-# Get By ID Example
+## Get By ID Example
from("direct:getById")
.setHeader("ID").constant("1")
.to("elasticsearch-rest-client://myCluster?operation=GET_BY_ID&indexName=myIndex");
-# Delete Example
+## Delete Example
To delete indexed content, provide the `ID` message header.
@@ -145,7 +147,7 @@ To delete indexed content, provide the `ID` message header.
.setHeader("ID").constant("1")
.to("elasticsearch-rest-client://myCluster?operation=DELETE&indexName=myIndex");
-# Create Index Example
+## Create Index Example
To create a new index.
@@ -160,14 +162,14 @@ To create a new index with some custom settings.
.setHeader("INDEX_SETTINGS").constant(indexSettings)
.to("elasticsearch-rest-client://myCluster?operation=CREATE_INDEX&indexName=myIndex");
-# Delete Index Example
+## Delete Index Example
To delete an index.
from("direct:deleteIndex")
.to("elasticsearch-rest-client://myCluster?operation=DELETE_INDEX&indexName=myIndex");
-# Search Example
+## Search Example
Search with a JSON query.
diff --git a/camel-elasticsearch.md b/camel-elasticsearch.md
index d3fcb071cf1d86bf6176c0f688ff24c08e43df3b..cfc9e4666cf741b20ea4b31720ebb4421c6e31ca 100644
--- a/camel-elasticsearch.md
+++ b/camel-elasticsearch.md
@@ -36,14 +36,14 @@ parameters or the message body to be set.
-
+
-
+
Index
Map ,
String , byte[] ,
@@ -55,7 +55,7 @@ index by setting the message header with the key "indexName". You can
set the indexId by setting the message header with the key
"indexId".
-
+
GetById
String or
GetRequest.Builder index id of content to
@@ -66,7 +66,7 @@ body. You can set the name of the target index by setting the message
header with the key "indexName". You can set the type of document by
setting the message header with the key "documentClass".
-
+
Delete
String or
DeleteRequest.Builder index id of content to
@@ -75,7 +75,7 @@ delete
returns a Result object in the body. You can set the name of the target
index by setting the message header with the key "indexName".
-
+
DeleteIndex
String or
DeleteIndexRequest.Builder index name of the index to
@@ -84,7 +84,7 @@ delete
returns a status code in the body. You can set the name of the target
index by setting the message header with the key "indexName".
-
+
Bulk
Iterable or
BulkRequest.Builder of any type that is already
@@ -97,7 +97,7 @@ index and returns a List<BulkResponseItem> object in the body You
can set the name of the target index by setting the message header with
the key "indexName".
-
+
Search
Map ,
String or
@@ -109,13 +109,13 @@ to return by setting the message header with the key "size". You can set
the starting document offset by setting the message header with the key
"from".
-
+
MultiSearch
MsearchRequest.Builder
Multiple search in one
-
+
MultiGet
Iterable<String>
or MgetRequest.Builder the id of the document to
@@ -124,7 +124,7 @@ retrieve
You can set the name of the target index by setting the message
header with the key "indexName".
-
+
Exists
None
Check whether the index exists or not
@@ -132,7 +132,7 @@ and returns a Boolean flag in the body.
You must set the name of the target index by setting the message
header with the key "indexName".
-
+
Update
byte[] ,
InputStream , String ,
@@ -153,7 +153,7 @@ globally at the component level thanks to the option
enableDocumentOnlyMode or by request by setting the header
ElasticsearchConstants.PARAM_DOCUMENT_MODE to true.
-
+
Ping
None
Pings the Elasticsearch cluster and
@@ -162,7 +162,9 @@ returns true if the ping succeeded, false otherwise
-# Configure the component and enable basic authentication
+# Usage
+
+## Configure the component and enable basic authentication
To use the Elasticsearch component, it has to be configured with a
minimum configuration.
@@ -184,7 +186,55 @@ and SSL on the component like the example below
camelContext.addComponent("elasticsearch", elasticsearchComponent);
-# Index Example
+## Document type
+
+For all the search operations, it is possible to indicate the type of
+document to retrieve so that the result is already unmarshalled to the
+expected type.
+
+The document type can be set using the header "documentClass" or via the
+URI parameter of the same name.
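
For example (a sketch only; the endpoint and class names here are illustrative):

```java
from("direct:search")
    // unmarshal each hit to com.example.Product instead of the default ObjectNode
    .setHeader("documentClass").constant("com.example.Product")
    .to("elasticsearch://myCluster?operation=Search&indexName=products");
```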
+
+## Using Camel Elasticsearch with Spring Boot
+
+When you use `camel-elasticsearch-starter` with Spring Boot v2, you
+must declare the following dependency in your own `pom.xml`:
+
+    <dependency>
+        <groupId>jakarta.json</groupId>
+        <artifactId>jakarta.json-api</artifactId>
+        <version>2.0.2</version>
+    </dependency>
+
+This is needed because Spring Boot v2 provides jakarta.json-api:1.1.6,
+while Elasticsearch requires json-api v2.
+
+## Use RestClient provided by Spring Boot
+
+By default, Spring Boot will auto-configure an Elasticsearch RestClient
+that will be used by Camel. It is possible to customize the client with
+the following basic properties:
+
+ spring.elasticsearch.uris=myelkhost:9200
+ spring.elasticsearch.username=elkuser
+ spring.elasticsearch.password=secure!!
+
+More information can be found in
+[https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.elasticsearch.connection-timeout](https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.elasticsearch.connection-timeout)
+
+## Disable Sniffer when using Spring Boot
+
+When Spring Boot is on the classpath, the Sniffer client for
+Elasticsearch is enabled by default. This option can be disabled in the
+Spring Boot Configuration:
+
+ spring:
+ autoconfigure:
+ exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration
+
+# Examples
+
+## Index Example
Below is a simple INDEX example
@@ -205,7 +255,7 @@ the route. The result body contains the indexId created.
map.put("content", "test");
String indexId = template.requestBody("direct:index", map, String.class);
-# Search Example
+## Search Example
Searching on specific field(s) and values uses the `Search` operation.
Pass in the query as a JSON String or the Map
@@ -258,7 +308,7 @@ Search using Elasticsearch scroll api to fetch all results.
.to("mock:output")
.end();
-# MultiSearch Example
+## MultiSearch Example
MultiSearching on specific field(s) and values uses the
`MultiSearch` operation. Pass in the MultiSearchRequest instance
@@ -280,71 +330,25 @@ MultiSearch on specific field(s)
.body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build());
List> response = template.requestBody("direct:multiSearch", builder, List.class);
-# Document type
-
-For all the search operations, it is possible to indicate the type of
-document to retrieve to get the result already unmarshalled with the
-expected type.
-
-The document type can be set using the header "documentClass" or via the
-uri parameter of the same name.
-
-# Using Camel Elasticsearch with Spring Boot
-
-When you use `camel-elasticsearch-starter` with Spring Boot v2, then you
-must declare the following dependency in your own `pom.xml`.
-
-
- jakarta.json
- jakarta.json-api
- 2.0.2
-
-
-This is needed because Spring Boot v2 provides jakarta.json-api:1.1.6,
-and Elasticsearch requires to use json-api v2.
-
-## Use RestClient provided by Spring Boot
-
-By default, Spring Boot will auto configure an Elasticsearch RestClient
-that will be used by camel, it is possible to customize the client with
-the following basic properties:
-
- spring.elasticsearch.uris=myelkhost:9200
- spring.elasticsearch.username=elkuser
- spring.elasticsearch.password=secure!!
-
-More information can be found in
-[https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.elasticsearch.connection-timeout](https://docs.spring.io/spring-boot/docs/current/reference/html/application-properties.html#application-properties.data.spring.elasticsearch.connection-timeout)
-
-## Disable Sniffer when using Spring Boot
-
-When Spring Boot is on the classpath, the Sniffer client for
-Elasticsearch is enabled by default. This option can be disabled in the
-Spring Boot Configuration:
-
- spring:
- autoconfigure:
- exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration
-
## Component Configurations
|Name|Description|Default|Type|
|---|---|---|---|
-|connectionTimeout|The time in ms to wait before connection will timeout.|30000|integer|
-|enableDocumentOnlyMode|Indicates whether the body of the message contains only documents. By default, it is set to false to be able to do the same requests as what the Document API supports (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html for more details). To ease the migration of routes based on the legacy component camel-elasticsearch-rest, you should consider enabling the mode especially if your routes do update operations.|false|boolean|
+|connectionTimeout|The time in ms to wait before connection will time out.|30000|integer|
+|enableDocumentOnlyMode|Indicates whether the body of the message contains only documents. By default, it is set to false to be able to do the same requests as what the Document API supports (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html for more details). To ease the migration of routes based on the legacy component camel-elasticsearch-rest, you should consider enabling the mode, especially if your routes do update operations.|false|boolean|
|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead.||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|maxRetryTimeout|The time in ms before retry|30000|integer|
-|socketTimeout|The timeout in ms to wait before the socket will timeout.|30000|integer|
+|socketTimeout|The timeout in ms to wait before the socket will time out.|30000|integer|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
-|client|To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allow to customize the client with specific settings.||object|
-|enableSniffer|Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
+|client|To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allows customizing the client with specific settings.||object|
+|enableSniffer|Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot, then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer|
|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer|
|certificatePath|The path of the self-signed certificate to use to access to Elasticsearch.||string|
|enableSSL|Enable SSL|false|boolean|
-|password|Password for authenticate||string|
+|password|Password for authenticating||string|
|user|Basic authenticate user||string|
## Endpoint Configurations
@@ -353,9 +357,9 @@ Spring Boot Configuration:
|Name|Description|Default|Type|
|---|---|---|---|
|clusterName|Name of the cluster||string|
-|connectionTimeout|The time in ms to wait before connection will timeout.|30000|integer|
+|connectionTimeout|The time in ms to wait before connection will time out.|30000|integer|
|disconnect|Disconnect after it finish calling the producer|false|boolean|
-|enableDocumentOnlyMode|Indicates whether the body of the message contains only documents. By default, it is set to false to be able to do the same requests as what the Document API supports (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html for more details). To ease the migration of routes based on the legacy component camel-elasticsearch-rest, you should consider enabling the mode especially if your routes do update operations.|false|boolean|
+|enableDocumentOnlyMode|Indicates whether the body of the message contains only documents. By default, it is set to false to be able to do the same requests as what the Document API supports (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html for more details). To ease the migration of routes based on the legacy component camel-elasticsearch-rest, you should consider enabling the mode, especially if your routes do update operations.|false|boolean|
|from|Starting index of the response.||integer|
|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use.||string|
|indexName|The name of the index to act against||string|
@@ -363,12 +367,12 @@ Spring Boot Configuration:
|operation|What operation to perform||object|
|scrollKeepAliveMs|Time in ms during which elasticsearch will keep search context alive|60000|integer|
|size|Size of the response.||integer|
-|socketTimeout|The timeout in ms to wait before the socket will timeout.|30000|integer|
+|socketTimeout|The timeout in ms to wait before the socket will time out.|30000|integer|
|useScroll|Enable scroll usage|false|boolean|
|waitForActiveShards|Index creation waits for the write consistency number of shards to be available|1|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|documentClass|The class to use when deserializing the documents.|ObjectNode|string|
-|enableSniffer|Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
+|enableSniffer|Enable automatically discover nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot, then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer|
|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer|
|certificatePath|The certificate that can be used to access the ES Cluster. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
diff --git a/camel-elytron.md b/camel-elytron.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c416ca8e1359e645b16bfb84a8e540a7d3ca9c9
--- /dev/null
+++ b/camel-elytron.md
@@ -0,0 +1,75 @@
+# Elytron.md
+
+**Since Camel 3.1**
+
+The Elytron Security Provider provides Elytron security over the Camel
+Elytron component. It enables the Camel Elytron component to use Elytron
+security capabilities. To force Camel Elytron to use the Elytron
+security provider, add the Elytron security provider library to the
+classpath and provide an instance of `ElytronSecurityConfiguration` as
+the `securityConfiguration` parameter to the Camel Elytron component, or
+provide both `securityConfiguration` and `securityProvider` to the
+Camel Elytron component.
+
+Configuration has to provide all three security attributes:
+
+|Name|Description|Type|
+|---|---|---|
+|domainBuilder|Builder for the security domain.|SecurityDomain.Builder|
+|mechanismName|The mechanism name should be selected with regard to the default securityRealm. For example, to use bearer_token security, the mechanism name has to be BEARER_TOKEN and the realm has to be TokenSecurityRealm.|String|
+|elytronProvider|Instance of WildFlyElytronBaseProvider matching the mechanismName.|WildFlyElytronBaseProvider|
+
+Each exchange created by an Undertow endpoint with Elytron security
+contains the header `securityIdentity` whose value is the current
+Elytron security identity
+(`org.wildfly.security.auth.server.SecurityIdentity`); otherwise the
+request is rejected as *FORBIDDEN* (status code 403).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-elytron</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.x.x</version>
+    </dependency>
+
+# Other Elytron capabilities
+
+This security provider contains only the basic Elytron dependencies
+(without any transitive dependency from
+`org.wildfly.security:wildfly-elytron`). If you need the omitted
+libraries, add them to your application’s own dependencies.
diff --git a/camel-enrich-eip.md b/camel-enrich-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..de2eabbb7bdc838972775ecd3c1f3507d15d8bc3
--- /dev/null
+++ b/camel-enrich-eip.md
@@ -0,0 +1,219 @@
+# Enrich-eip.md
+
+Camel supports the [Content
+Enricher](http://www.enterpriseintegrationpatterns.com/DataEnricher.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+In Camel, the Content Enricher can be done in several ways:
+
+- Using [Enrich](#enrich-eip.adoc) EIP or [Poll
+ Enrich](#pollEnrich-eip.adoc) EIP
+
+- Using a [Message Translator](#message-translator.adoc)
+
+- Using a [Processor](#manual::processor.adoc) with the enrichment
+ programmed in Java
+
+- Using a [Bean](#bean-eip.adoc) EIP with the enrichment programmed in
+ Java
+
+The most natural Camel approach is using [Enrich](#enrich-eip.adoc) EIP,
+which comes in two kinds:
+
+- [Enrich](#enrich-eip.adoc) EIP: This is the most common content
+ enricher that uses a `Producer` to obtain the data. It is usually
+ used for [Request Reply](#requestReply-eip.adoc) messaging, for
+ instance, to invoke an external web service.
+
+- [Poll Enrich](#pollEnrich-eip.adoc) EIP: Uses a [Polling
+ Consumer](#polling-consumer.adoc) to obtain the additional data. It
+ is usually used for [Event Message](#event-message.adoc) messaging,
+ for instance, to read a file or download one using
+ [FTP](#ROOT:ftp-component.adoc).
+
+This page documents the Enrich EIP.
+
+# Exchange properties
+
+# Content enrichment using Enrich EIP
+
+Enrich EIP is the most common content enricher that uses a `Producer` to
+obtain the data.
+
+The content enricher (`enrich`) retrieves additional data from a
+*resource endpoint* to enrich an incoming message (contained in the
+*original exchange*).
+
+An `AggregationStrategy` is used to combine the original exchange and
+the *resource exchange*. The first parameter of the
+`AggregationStrategy.aggregate(Exchange, Exchange)` method corresponds
+to the original exchange, and the second to the resource exchange.
+
+Here’s an example of implementing an `AggregationStrategy`, which
+merges the two message bodies into a `String` with a colon separator:
+
+ public class ExampleAggregationStrategy implements AggregationStrategy {
+
+    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
+ // this is just an example, for real-world use-cases the
+ // aggregation strategy would be specific to the use-case
+
+ if (newExchange == null) {
+ return oldExchange;
+ }
+ Object oldBody = oldExchange.getIn().getBody();
+ Object newBody = newExchange.getIn().getBody();
+ oldExchange.getIn().setBody(oldBody + ":" + newBody);
+ return oldExchange;
+ }
+
+ }
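
The colon-merge rule in the strategy can be exercised without any Camel dependencies. The following stand-alone sketch (class name and sample values are illustrative, not from the Camel API) applies the same join the strategy performs on the two message bodies:

```java
// Stand-alone sketch of the merge rule used by the example strategy:
// the original body and the resource body are joined with a colon.
public class MergeSketch {

    static String merge(Object originalBody, Object resourceBody) {
        // mirrors oldExchange.getIn().setBody(oldBody + ":" + newBody)
        return originalBody + ":" + resourceBody;
    }

    public static void main(String[] args) {
        System.out.println(merge("order-123", "customer-data")); // order-123:customer-data
    }
}
```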
+
+In the example below, Camel calls the HTTP endpoint to collect some
+data, which is then merged with the original message using the
+`AggregationStrategy`:
+
+Java
+
+    AggregationStrategy aggregationStrategy = ...
+
+    from("direct:start")
+        .enrich("http:remoteserver/foo", aggregationStrategy)
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <enrich aggregationStrategy="myStrategy">
+            <constant>http:remoteserver/foo</constant>
+        </enrich>
+        <to uri="mock:result"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:start
+        steps:
+          - enrich:
+              expression:
+                constant: "http:remoteserver/foo"
+              aggregationStrategy: "#myStrategy"
+          - to:
+              uri: mock:result
+    - beans:
+      - name: myStrategy
+        type: com.foo.ExampleAggregationStrategy
+
+## Aggregation Strategy is optional
+
+The aggregation strategy is optional. If not provided, then Camel will
+just use the result exchange as the result.
+
+The following example:
+
+Java
+
+    from("direct:start")
+        .enrich("http:remoteserver/foo")
+        .to("direct:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <enrich>
+            <constant>http:remoteserver/foo</constant>
+        </enrich>
+        <to uri="direct:result"/>
+    </route>
+YAML
+
+    - from:
+        uri: direct:start
+        steps:
+          - enrich:
+              expression:
+                constant: "http:remoteserver/foo"
+          - to:
+              uri: mock:result
+
+Would be the same as using `to`:
+
+Java
+
+    from("direct:start")
+        .to("http:remoteserver/foo")
+        .to("direct:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <to uri="http:remoteserver/foo"/>
+        <to uri="direct:result"/>
+    </route>
+YAML
+
+    - from:
+        uri: direct:start
+        steps:
+          - to:
+              uri: http:remoteserver/foo
+          - to:
+              uri: mock:result
+
+## Using dynamic URIs
+
+Both `enrich` and `pollEnrich` support dynamic URIs computed from
+information on the current Exchange. For example, to enrich from an
+HTTP endpoint where the header with key `orderId` is used as part of
+the content path of the HTTP URL:
+
+Java
+
+    from("direct:start")
+        .enrich().simple("http:myserver/${header.orderId}/order")
+        .to("direct:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <enrich>
+            <simple>http:myserver/${header.orderId}/order</simple>
+        </enrich>
+        <to uri="direct:result"/>
+    </route>
+YAML
+
+    - from:
+        uri: direct:start
+        steps:
+          - enrich:
+              expression:
+                simple: "http:myserver/${header.orderId}/order"
+          - to:
+              uri: mock:result
+
+See the `cacheSize` option for more details on *how much cache* to use
+depending on how many or few unique endpoints are used.
+
+## Using out-of-the-box Aggregation Strategies
+
+The `org.apache.camel.builder.AggregationStrategies` is a builder that
+can be used for creating commonly used aggregation strategies without
+having to create a class.
+
+For example, the `ExampleAggregationStrategy` shown previously can be
+built as follows:
+
+ AggregationStrategy agg = AggregationStrategies.string(":");
+
+There are many other possibilities with the `AggregationStrategies`
+builder, and for more details see the [AggregationStrategies
+javadoc](https://www.javadoc.io/static/org.apache.camel/camel-core-model/3.12.0/org/apache/camel/builder/AggregationStrategies.html).
+
+# See More
+
+See [Poll Enrich](#pollEnrich-eip.adoc) EIP
diff --git a/camel-enterprise-integration-patterns.md b/camel-enterprise-integration-patterns.md
new file mode 100644
index 0000000000000000000000000000000000000000..d60f51549cc510d965880df9d399e236795df6ec
--- /dev/null
+++ b/camel-enterprise-integration-patterns.md
@@ -0,0 +1,739 @@
+# Enterprise-integration-patterns.md
+
+Camel supports most of the [Enterprise Integration
+Patterns](http://www.eaipatterns.com/toc.html) from the excellent book
+by Gregor Hohpe and Bobby Woolf.
+
+# Messaging Systems
+
+- **Message Channel**: How does one application communicate with
+  another using messaging?
+
+- **Message**: How can two applications connected by a message channel
+  exchange a piece of information?
+
+- **Pipes and Filters**: How can we perform complex processing on a
+  message while maintaining independence and flexibility?
+
+- **Message Router**: How can you decouple individual processing steps
+  so that messages can be passed to different filters depending on a
+  set of conditions?
+
+- **Message Translator**: How can systems using different data formats
+  communicate with each other using messaging?
+
+- **Message Endpoint**: How does an application connect to a messaging
+  channel to send and receive messages?
+
+# Messaging Channels
+
+- **Point to Point Channel**: How can the caller be sure that exactly
+  one receiver will receive the document or perform the call?
+
+- **Publish Subscribe Channel**: How can the sender broadcast an event
+  to all interested receivers?
+
+- **Dead Letter Channel**: What will the messaging system do with a
+  message it cannot deliver?
+
+- **Guaranteed Delivery**: How can the sender make sure that a message
+  will be delivered, even if the messaging system fails?
+
+- **Channel Adapter**: How can you connect an application to the
+  messaging system so that it can send and receive messages?
+
+- **Messaging Bridge**: How can multiple messaging systems be connected
+  so that messages available on one are also available on the others?
+
+- **Message Bus**: What is an architecture that enables separate
+  applications to work together, but in a de-coupled fashion such that
+  applications can be easily added or removed without affecting the
+  others?
+
+- **Change Data Capture**: Data synchronization by capturing changes
+  made to a database and applying those changes to another system.
+
+# Message Construction
+
+- **Event Message**: How can messaging be used to transmit events from
+  one application to another?
+
+- **Request Reply**: When an application sends a message, how can it
+  get a response from the receiver?
+
+- **Return Address**: How does a replier know where to send the reply?
+
+- **Correlation Identifier**: How does a requestor that has received a
+  reply know which request this is the reply for?
+
+- **Message Expiration**: How can a sender indicate when a message
+  should be considered stale and thus shouldn’t be processed?
+
+# Message Routing
+
+- **Content-Based Router**: How do we handle a situation where the
+  implementation of a single logical function (e.g., inventory check)
+  is spread across multiple physical systems?
+
+- **Message Filter**: How can a component avoid receiving
+  uninteresting messages?
+
+- **Dynamic Router**: How can you avoid the dependency of the router
+  on all possible destinations while maintaining its efficiency?
+
+- **Recipient List**: How do we route a message to a list of (static
+  or dynamically) specified recipients?
+
+- **Splitter**: How can we process a message if it contains multiple
+  elements, each of which may have to be processed in a different way?
+
+- **Aggregator**: How do we combine the results of individual, but
+  related, messages so that they can be processed as a whole?
+
+- **Resequencer**: How can we get a stream of related but
+  out-of-sequence messages back into the correct order?
+
+- **Composed Message Processor**: How can you maintain the overall
+  message flow when processing a message consisting of multiple
+  elements, each of which may require different processing?
+
+- **Scatter-Gather**: How do you maintain the overall message flow
+  when a message needs to be sent to multiple recipients, each of
+  which may send a reply?
+
+- **Routing Slip**: How do we route a message consecutively through a
+  series of processing steps when the sequence of steps is not known
+  at design-time and may vary for each message?
+
+- **Process Manager**: How do we route a message through multiple
+  processing steps when the required steps may not be known at
+  design-time and may not be sequential?
+
+- **Message Broker**: How can you decouple the destination of a
+  message from the sender and maintain central control over the flow
+  of messages?
+
+- **Threads**: How can I decouple the continued routing of a message
+  from the current thread?
+
+- **Throttler**: How can I throttle messages to ensure that a specific
+  endpoint does not get overloaded, or we don’t exceed an agreed SLA
+  with some external service?
+
+- **Sampling**: How can I sample one message out of many in a given
+  period, so that a downstream route does not get overloaded?
+
+- **Kamelet**: How can I call Kamelets (route templates)?
+
+- **Delayer**: How can I delay the sending of a message?
+
+- **Load Balancer**: How can I balance load across a number of
+  endpoints?
+
+- **Circuit Breaker**: How can I stop calling an external service if
+  the service is broken?
+
+- **Stop**: How can I stop routing a message?
+
+- **Service Call**: How can I call a remote service in a distributed
+  system where the service is looked up from a service registry of
+  some sort?
+
+- **Saga**: How can I define a series of related actions in a Camel
+  route that should be either completed successfully (all of them) or
+  not-executed/compensated?
+
+- **Multicast**: How can I route a message to a number of endpoints at
+  the same time?
+
+- **Loop**: How can I repeat processing a message in a loop?
+
+# Message Transformation
+
+- **Content Enricher**: How do we communicate with another system if
+  the message originator does not have all the required data items
+  available?
+
+- **Content Filter**: How do you simplify dealing with a large message
+  when you are interested only in a few data items?
+
+- **Claim Check**: How can we reduce the data volume of a message sent
+  across the system without sacrificing information content?
+
+- **Normalizer**: How do you process messages that are semantically
+  equivalent, but arrive in a different format?
+
+- **Sort**: How can I sort the body of a message?
+
+- **Script**: How do I execute a script which may not change the
+  message?
+
+- **Validate**: How can I validate a message?
+
+# Messaging Endpoints
+
+- **Messaging Mapper**: How do you move data between domain objects
+  and the messaging infrastructure while keeping the two independent
+  of each other?
+
+- **Event Driven Consumer**: How can an application automatically
+  consume messages as they become available?
+
+- **Polling Consumer**: How can an application consume a message when
+  the application is ready?
+
+- **Competing Consumers**: How can a messaging client process multiple
+  messages concurrently?
+
+- **Message Dispatcher**: How can multiple consumers on a single
+  channel coordinate their message processing?
+
+- **Selective Consumer**: How can a message consumer select which
+  messages it wishes to receive?
+
+- **Durable Subscriber**: How can a subscriber avoid missing messages
+  while it’s not listening for them?
+
+- **Idempotent Consumer**: How can a message receiver deal with
+  duplicate messages?
+
+- **Resumable Consumer**: How can a message receiver resume from the
+  last known offset?
+
+- **Transactional Client**: How can a client control its transactions
+  with the messaging system?
+
+- **Messaging Gateway**: How do you encapsulate access to the
+  messaging system from the rest of the application?
+
+- **Service Activator**: How can an application design a service to be
+  invoked both via various messaging technologies and via
+  non-messaging techniques?
+
+# System Management
+
+- **ControlBus**: How can we effectively administer a messaging system
+  distributed across multiple platforms and a wide geographic area?
+
+- **Detour**: How can you route a message through intermediate steps
+  to perform validation, testing or debugging functions?
+
+- **Wire Tap**: How do you inspect messages that travel on a
+  point-to-point channel?
+
+- **Message History**: How can we effectively analyze and debug the
+  flow of messages in a loosely coupled system?
+
+- **Log**: How can I log processing a message?
+
+- **Step**: Groups together a set of EIPs into a composite logical
+  unit for metrics and monitoring.
+
+# EIP Icons
+
+The EIP icons library is available as a Visio stencil file adapted to
+render the icons with the Camel color. Download it
+[here](#attachment$Hohpe_EIP_camel_20150622.zip) for your presentation,
+functional and technical analysis documents.
+
+The original EIP stencil is also available in [OpenOffice 3.x
+Draw](#attachment$Hohpe_EIP_camel_OpenOffice.zip), [Microsoft
+Visio](http://www.eaipatterns.com/download/EIP_Visio_stencil.zip), or
+[Omnigraffle](http://www.graffletopia.com/stencils/137).
diff --git a/camel-etcd3.md b/camel-etcd3.md
index e5508b4fbcbd19d0d21dd5d550642c456882b391..d4d572cc5e87f2b8ee8c1df7265ab1acc28b2800 100644
--- a/camel-etcd3.md
+++ b/camel-etcd3.md
@@ -21,7 +21,9 @@ for this component:
etcd3:path[?options]
-# Producer Operations (Since 3.20)
+# Usage
+
+## Producer Operations (Since 3.20)
Apache Camel supports different etcd operations.
@@ -36,7 +38,7 @@ To define the operation, set the exchange header with a key of
-
+
-
+
set
String value of the
key-value pair to put
@@ -58,7 +60,7 @@ setting the exchange header with the key
setting the exchange header with the key
CamelEtcdValueCharset.
-
+
get
None
GetResponse result of the
@@ -71,7 +73,7 @@ by setting the exchange header with the key
setting the exchange header with the key CamelEtcdIsPrefix
to true.
-
+
delete
None
DeleteResponse result of
@@ -83,7 +85,7 @@ by setting the exchange header with the key
CamelEtcdKeyCharset. You indicate if the key is a prefix by
setting the exchange header with the key CamelEtcdIsPrefix
to true.
-== Consumer (Since 3.20)
+=== Consumer (Since 3.20)
The consumer of the etcd components allows watching changes on the
matching key-value pair(s). One exchange is created per event with the
header CamelEtcdPath set to the path of the corresponding
@@ -96,10 +98,10 @@ prefix by setting the exchange header with the key
also possible to start watching events from a specific revision by
setting the option fromIndex to the expected starting
index.
-== AggregationRepository
+=== AggregationRepository
The Etcd v3 component provides an AggregationStrategy to
use etcd as the backend datastore.
-== RoutePolicy (Since 3.20)
+=== RoutePolicy (Since 3.20)
The Etcd v3 component provides a RoutePolicy to use etcd
as clustered lock.
diff --git a/camel-event-message.md b/camel-event-message.md
new file mode 100644
index 0000000000000000000000000000000000000000..b6132e28feff35bcb2ca4de48e8b2cceb7dd7f93
--- /dev/null
+++ b/camel-event-message.md
@@ -0,0 +1,113 @@
+# Event-message.md
+
+Camel supports the [Event
+Message](http://www.enterpriseintegrationpatterns.com/EventMessage.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How can messaging be used to transmit events from one application to
+another?
+
+
+
+
+
+Use an Event Message for reliable, asynchronous event notification
+between applications.
+
+Camel supports Event Message by the [Exchange
+Pattern](#manual::exchange-pattern.adoc) on a [Message](#message.adoc)
+which can be set to `InOnly` to indicate a one-way event message. Camel
+[Components](#ROOT:index.adoc) then implement this pattern using the
+underlying transport or protocols.
+
+The default behaviour of many [Components](#ROOT:index.adoc) is `InOnly`
+such as for [JMS](#ROOT:jms-component.adoc),
+[File](#ROOT:file-component.adoc) or [SEDA](#ROOT:seda-component.adoc).
+
+Some components support both `InOnly` and `InOut` and act accordingly.
+For example, the [JMS](#ROOT:jms-component.adoc) can send messages as
+one-way (`InOnly`) or use request/reply messaging (`InOut`).
+
+See the related [Request Reply](#requestReply-eip.adoc) message.
+
+# Using endpoint URI
+
+If you are using a component which defaults to `InOut` you can override
+the [Exchange Pattern](#manual::exchange-pattern.adoc) for a
+**consumer** endpoint using the `exchangePattern` property.
+
+    foo:bar?exchangePattern=InOnly
+
+This is only possible on endpoints used by consumers (i.e., in
+`<from>`).
+
+In the example below the message will be forced as an event message as
+the consumer is in `InOnly` mode.
+
+Java
+
+    from("mq:someQueue?exchangePattern=InOnly")
+        .to("activemq:queue:one-way");
+
+XML
+
+    <route>
+        <from uri="mq:someQueue?exchangePattern=InOnly"/>
+        <to uri="activemq:queue:one-way"/>
+    </route>
+
+# Using `setExchangePattern` EIP
+
+You can specify the [Exchange Pattern](#manual::exchange-pattern.adoc)
+using `setExchangePattern` in the DSL.
+
+Java
+
+    from("mq:someQueue")
+        .setExchangePattern(ExchangePattern.InOnly)
+        .to("activemq:queue:one-way");
+
+XML
+
+    <route>
+        <from uri="mq:someQueue"/>
+        <setExchangePattern pattern="InOnly"/>
+        <to uri="activemq:queue:one-way"/>
+    </route>
+
+When using `setExchangePattern` then the [Exchange
+Pattern](#manual::exchange-pattern.adoc) on the
+[Exchange](#manual::exchange.adoc) is changed from this point onwards in
+the route.
+
+This means you can change the pattern back again at a later point:
+
+    from("mq:someQueue")
+        .setExchangePattern(ExchangePattern.InOnly)
+        .to("activemq:queue:one-way")
+        .setExchangePattern(ExchangePattern.InOut)
+        .to("activemq:queue:in-and-out")
+        .log("InOut MEP received ${body}");
+
+Using `setExchangePattern` to change the [Exchange
+Pattern](#manual::exchange-pattern.adoc) is often only used in special
+use-cases where you must force either `InOnly` or `InOut` mode when
+using components that support both modes (such as messaging components
+like ActiveMQ, JMS, or RabbitMQ).
+
+# JMS component and InOnly vs. InOut
+
+When consuming messages from [JMS](#ROOT:jms-component.adoc), a Request
+Reply is indicated by the presence of the `JMSReplyTo` header. This
+means the JMS component automatically detects whether to use `InOnly`
+or `InOut` in the consumer.
+
+Likewise, the JMS producer will check the current [Exchange
+Pattern](#manual::exchange-pattern.adoc) on the
+[Exchange](#manual::exchange.adoc) to know whether to use `InOnly` or
+`InOut` mode (i.e., one-way vs. request/reply messaging).
+
+# Other Implementation Details
+
+There are concrete classes that implement the `Message` interface for
+each Camel-supported communications technology. For example, the
+`JmsMessage` class provides a JMS-specific implementation of the
+`Message` interface. The public API of the `Message` interface provides
+getter and setter methods to access the *message id*, *body* and
+individual *header* fields of a message.
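
As a rough, dependency-free illustration of that accessor style, here is a hypothetical stand-in (not the real `org.apache.camel.Message` interface) exposing the message id, body, and header accessors described above:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in illustrating the getter/setter style of the
// Message API; the real contract lives in org.apache.camel.Message.
public class SketchMessage {
    private String messageId;
    private Object body;
    private final Map<String, Object> headers = new HashMap<>();

    public String getMessageId() { return messageId; }
    public void setMessageId(String messageId) { this.messageId = messageId; }

    public Object getBody() { return body; }
    public void setBody(Object body) { this.body = body; }

    public Object getHeader(String name) { return headers.get(name); }
    public void setHeader(String name, Object value) { headers.put(name, value); }
}
```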
diff --git a/camel-eventDrivenConsumer-eip.md b/camel-eventDrivenConsumer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..2eaa7cada29b39340c455b53be9a49f5a8d13b14
--- /dev/null
+++ b/camel-eventDrivenConsumer-eip.md
@@ -0,0 +1,44 @@
+# EventDrivenConsumer-eip.md
+
+Camel supports the [Event Driven
+Consumer](http://www.enterpriseintegrationpatterns.com/EventDrivenConsumer.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+The default consumer model is event-based (i.e., asynchronous), which
+means that the Camel container can manage pooling, threading and
+concurrency for you in a declarative manner.
+
+The alternative consumer mode is [Polling
+Consumer](#polling-consumer.adoc).
+
+
+
+
+
+The Event Driven Consumer is implemented by consumers implementing the
+[Processor](http://javadoc.io/doc/org.apache.camel/camel-api/latest/org/apache/camel/Processor.html)
+interface which is invoked by the [Message
+Endpoint](#message-endpoint.adoc) when a [Message](#message.adoc) is
+available for processing.
+
+# Example
+
+The following demonstrates a [Bean](#bean-eip.adoc) being invoked when
+an event occurs from a [JMS](#ROOT:jms-component.adoc) queue.
+
+Java
+
+    from("jms:queue:foo")
+        .bean(MyBean.class);
+
+XML
+
+    <route>
+        <from uri="jms:queue:foo"/>
+        <bean beanType="com.foo.MyBean"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: jms:queue:foo
+        steps:
+          - bean:
+              beanType: com.foo.MyBean
diff --git a/camel-exchangeProperty-language.md b/camel-exchangeProperty-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..2fccd046e639c130ab47ca7b3a889332cc0f2e33
--- /dev/null
+++ b/camel-exchangeProperty-language.md
@@ -0,0 +1,30 @@
+# ExchangeProperty-language.md
+
+**Since Camel 2.0**
+
+The ExchangeProperty Expression Language allows you to extract values of
+named exchange properties.
+
+# Exchange Property Options
+
+# Example
+
+The `recipientList` EIP can utilize an exchangeProperty like:
+
+    <recipientList>
+        <exchangeProperty>myProperty</exchangeProperty>
+    </recipientList>
+
+
+In this case, the list of recipients is contained in the property
+*myProperty*.
+
+And the same example in Java DSL:
+
+ from("direct:a").recipientList(exchangeProperty("myProperty"));
+
+# Dependencies
+
+The ExchangeProperty language is part of **camel-core**.
diff --git a/camel-exec.md b/camel-exec.md
index 9970856ab78532f434c8b3ad6d187d1f65fbd243..2702673b8d9111328c8111335f7cc60136c532c4 100644
--- a/camel-exec.md
+++ b/camel-exec.md
@@ -27,7 +27,9 @@ Where `executable` is the name, or file path, of the system command that
will be executed. If executable name is used (e.g. `exec:java`), the
executable must in the system path.
-# Message body
+# Usage
+
+## Message body
If the component receives an `in` message body that is convertible to
`java.io.InputStream`, it is used to feed input to the executable via
@@ -47,26 +49,26 @@ convenience:
-
+
-
+
ExecResult
java.io.InputStream
-
+
ExecResult
String
-
+
ExecResult
byte []
-
+
ExecResult
org.w3c.dom.Document
@@ -81,7 +83,7 @@ then this component will convert the `stdout` of the process to the
target type. For more details, please refer to the [usage
examples](#exec-component.adoc) below.
-# Usage examples
+# Examples
## Executing word count (Linux)
diff --git a/camel-failoverLoadBalancer-eip.md b/camel-failoverLoadBalancer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..512c5e3ba61c12065ef82ac0961387f3b69e1e55
--- /dev/null
+++ b/camel-failoverLoadBalancer-eip.md
@@ -0,0 +1,137 @@
+# FailoverLoadBalancer-eip.md
+
+This EIP allows using fail-over (in case of failures, the exchange will
+be tried on the next endpoint) with the [Load
+Balancer](#loadBalance-eip.adoc) EIP.
+
+# Options
+
+# Exchange properties
+
+# Example
+
+In the example below, calling the three http services is done with the
+load balancer:
+
+Java
+
+    from("direct:start")
+        .loadBalance().failover()
+            .to("http:service1")
+            .to("http:service2")
+            .to("http:service3")
+        .end();
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <loadBalance>
+            <failoverLoadBalancer/>
+            <to uri="http:service1"/>
+            <to uri="http:service2"/>
+            <to uri="http:service3"/>
+        </loadBalance>
+    </route>
+
+In the default mode, the fail-over load balancer always starts with
+the first processor (i.e., "http:service1"). If this fails, it tries
+the next, until either one succeeds or all of them have failed. If all
+failed, then Camel will throw the caused exception, which means the
+Exchange has failed.
+
+## Using round-robin mode
+
+You can use the `roundRobin` mode to start again from the beginning,
+which will keep trying until one succeeds. To prevent endless retries,
+it is recommended to set a maximum number of fail-over attempts.
+
+Setting this in Java DSL is not *pretty* as there are three parameters:
+
+ from("direct:start")
+ .loadBalance().failover(10, false, true)
+ .to("http:service1")
+ .to("http:service2")
+ .to("http:service3")
+ .end();
+
+ .failover(10, false, true)
+
+Where `10` is the maximum number of fail-over attempts, `false` is a
+special feature related to inheriting the error handler, and the last
+parameter `true` enables round-robin mode.
+
+In XML, it is straightforward as shown:
+
+    <route>
+        <from uri="direct:start"/>
+        <loadBalance>
+            <failoverLoadBalancer roundRobin="true" maximumFailoverAttempts="10"/>
+            <to uri="http:service1"/>
+            <to uri="http:service2"/>
+            <to uri="http:service3"/>
+        </loadBalance>
+    </route>
+
+## Using sticky mode
+
+The sticky mode is used to remember the last known good endpoint, so
+the next exchange will start from there instead of from the beginning.
+
+For example, suppose that http:service1 is down, and that service2 is
+up. With sticky mode enabled, Camel will keep starting from service2
+until it fails, and then try service3.
+
+If sticky mode is not enabled (it’s disabled by default), then Camel
+will always start from the beginning, which means calling service1.
+
+Java
+Setting sticky mode in Java DSL is not *pretty* as there are four
+parameters.
+
+ from("direct:start")
+ .loadBalance().failover(10, false, true, true)
+ .to("http:service1")
+ .to("http:service2")
+ .to("http:service3")
+ .end();
+
+The last `true` argument is to enable sticky mode.
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <loadBalance>
+            <failoverLoadBalancer roundRobin="true" sticky="true" maximumFailoverAttempts="10"/>
+            <to uri="http:service1"/>
+            <to uri="http:service2"/>
+            <to uri="http:service3"/>
+        </loadBalance>
+    </route>
+## Fail-over on specific exceptions
+
+The fail-over load balancer can be configured to only apply for a
+specific set of exceptions. Suppose you only want to fail-over in case
+of `java.io.IOException` or `HttpOperationFailedException`, then you
+can do:
+
+ from("direct:start")
+ .loadBalance().failover(IOException.class, HttpOperationFailedException.class)
+ .to("http:service1")
+ .to("http:service2")
+ .to("http:service3")
+ .end();
+
+And in XML DSL:
+
+    <route>
+        <from uri="direct:start"/>
+        <loadBalance>
+            <failoverLoadBalancer>
+                <exception>java.io.IOException</exception>
+                <exception>org.apache.camel.http.base.HttpOperationFailedException</exception>
+            </failoverLoadBalancer>
+            <to uri="http:service1"/>
+            <to uri="http:service2"/>
+            <to uri="http:service3"/>
+        </loadBalance>
+    </route>
diff --git a/camel-fastjson-dataformat.md b/camel-fastjson-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..9951ef824a2502d336a7e5b2e7150c46d755d388
--- /dev/null
+++ b/camel-fastjson-dataformat.md
@@ -0,0 +1,27 @@
+# Fastjson-dataformat.md
+
+**Since Camel 2.20**
+
+Fastjson is a Data Format that uses the [Fastjson
+Library](https://github.com/alibaba/fastjson)
+
+ from("activemq:My.Queue").
+ marshal().json(JsonLibrary.Fastjson).
+ to("mqseries:Another.Queue");
+
+# Fastjson Options
+
+# Dependencies
+
+To use Fastjson in your camel routes, you need to add the dependency on
+**camel-fastjson** which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-fastjson</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-fault-tolerance-eip.md b/camel-fault-tolerance-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..929b9969a8c83bb1637b40a452d5b2e5b8686cf5
--- /dev/null
+++ b/camel-fault-tolerance-eip.md
@@ -0,0 +1,177 @@
+# Fault-tolerance-eip.md
+
+This component supports the [Circuit Breaker](#circuitBreaker-eip.adoc)
+EIP with the [MicroProfile Fault
+Tolerance](#others:microprofile-fault-tolerance.adoc) library.
+
+# Options
+
+The Fault Tolerance EIP supports two options which are listed below:
+
+| Name | Description | Type |
+|---|---|---|
+| faultToleranceConfiguration | Configure the Fault Tolerance EIP. When the configuration is complete, use `end()` to return to the Fault Tolerance EIP. | FaultToleranceConfigurationDefinition |
+| faultToleranceConfigurationRef | Refers to a Fault Tolerance configuration to use for configuring the Fault Tolerance EIP. | String |
+
+See [Fault Tolerance
+Configuration](#faultToleranceConfiguration-eip.adoc) for all the
+configuration options on the Fault Tolerance [Circuit
+Breaker](#circuitBreaker-eip.adoc).
+
+# Using Fault Tolerance EIP
+
+Below is an example route showing a Fault Tolerance EIP circuit breaker
+that protects against a downstream HTTP operation with fallback.
+
+Java
+
+    from("direct:start")
+        .circuitBreaker()
+            .to("http://fooservice.com/faulty")
+        .onFallback()
+            .transform().constant("Fallback message")
+        .end()
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <circuitBreaker>
+            <to uri="http://fooservice.com/faulty"/>
+            <onFallback>
+                <transform>
+                    <constant>Fallback message</constant>
+                </transform>
+            </onFallback>
+        </circuitBreaker>
+        <to uri="mock:result"/>
+    </route>
+
+If the call to the downstream HTTP service fails and an exception is
+thrown, the circuit breaker reacts and executes the fallback route
+instead.
+
+If there was no fallback, then the circuit breaker will throw an
+exception.
+
+For more information about fallback, see
+[onFallback](#onFallback-eip.adoc).
+
+## Configuring Fault Tolerance
+
+You can fine-tune the Fault Tolerance EIP by the many [Fault Tolerance
+Configuration](#faultToleranceConfiguration-eip.adoc) options.
+
+For example, to use a 2-second execution timeout, you can do as follows:
+
+Java
+
+    from("direct:start")
+        .circuitBreaker()
+            // use a 2-second timeout
+            .faultToleranceConfiguration().timeoutEnabled(true).timeoutDuration(2000).end()
+            .log("Fault Tolerance processing start: ${threadName}")
+            .to("http://fooservice.com/faulty")
+            .log("Fault Tolerance processing end: ${threadName}")
+        .end()
+        .log("After Fault Tolerance ${body}");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <circuitBreaker>
+            <faultToleranceConfiguration timeoutEnabled="true" timeoutDuration="2000"/>
+            <log message="Fault Tolerance processing start: ${threadName}"/>
+            <to uri="http://fooservice.com/faulty"/>
+            <log message="Fault Tolerance processing end: ${threadName}"/>
+        </circuitBreaker>
+        <log message="After Fault Tolerance ${body}"/>
+    </route>
+
+In this example, if calling the downstream service does not return a
+response within 2 seconds, a timeout is triggered, and the exchange
+will fail with a `TimeoutException`.
+
+# Camel’s Error Handler and Circuit Breaker EIP
+
+By default, the [Circuit Breaker](#circuitBreaker-eip.adoc) EIP handles
+errors by itself. This means that if the circuit breaker is open and
+the message fails, then Camel’s error handler does not also react.
+
+However, you can enable Camel’s error handler with the circuit breaker
+by enabling the `inheritErrorHandler` option, as shown:
+
+ // Camel's error handler that will attempt to redeliver the message 3 times
+ errorHandler(deadLetterChannel("mock:dead").maximumRedeliveries(3).redeliveryDelay(0));
+
+ from("direct:start")
+ .to("log:start")
+ // turn on Camel's error handler on circuit breaker so Camel can do redeliveries
+ .circuitBreaker().inheritErrorHandler(true)
+ .to("mock:a")
+ .throwException(new IllegalArgumentException("Forced"))
+ .end()
+ .to("log:result")
+ .to("mock:result");
+
+This example is from a test, where you can see the Circuit Breaker EIP
+block has been hardcoded to always fail by throwing an exception.
+Because the `inheritErrorHandler` has been enabled, Camel’s error
+handler will attempt to call the Circuit Breaker EIP block again.
+
+That means the `mock:a` endpoint will receive the message again, for a
+total of `1 + 3 = 4` messages (first time + 3 redeliveries).
+
+If we turn off the `inheritErrorHandler` option (the default), then the
+Circuit Breaker EIP will only be executed once because it handles the
+error itself.
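
The delivery count follows directly from the error-handler settings; as a trivial sketch of the arithmetic (names are illustrative):

```java
// First attempt plus each redelivery configured on the error handler.
public class DeliveryCount {

    static int totalDeliveries(int maximumRedeliveries) {
        return 1 + maximumRedeliveries;
    }

    public static void main(String[] args) {
        System.out.println(totalDeliveries(3)); // 4, as in the example above
    }
}
```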
+
+# Dependencies
+
+Camel provides the [Circuit Breaker](#circuitBreaker-eip.adoc) EIP in
+the route model, which allows plugging in different implementations.
+MicroProfile Fault Tolerance is one such implementation.
+
+Maven users will need to add the following dependency to their
+`pom.xml` to use this EIP:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-microprofile-fault-tolerance</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+## Using Fault Tolerance with Spring Boot
+
+This component does not support Spring Boot. Instead, it is supported in
+Standalone and with Camel Quarkus.
diff --git a/camel-faultToleranceConfiguration-eip.md b/camel-faultToleranceConfiguration-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..9a1344e4a77979f121c370018131689cc21732f7
--- /dev/null
+++ b/camel-faultToleranceConfiguration-eip.md
@@ -0,0 +1,11 @@
+# FaultToleranceConfiguration-eip.md
+
+This page documents all the specific options for the [Fault
+Tolerance](#fault-tolerance-eip.adoc) EIP.
+
+# Exchange properties
+
+# Example
+
+See [Fault Tolerance](#fault-tolerance-eip.adoc) EIP for details how to
+use this EIP.
diff --git a/camel-fhirJson-dataformat.md b/camel-fhirJson-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..510ac3a7017a88d3504de075a85c1cf7059861e6
--- /dev/null
+++ b/camel-fhirJson-dataformat.md
@@ -0,0 +1,10 @@
+# FhirJson-dataformat.md
+
+**Since Camel 2.21**
+
+The FHIR-JSON Data Format leverages
+[HAPI-FHIR’s](https://github.com/jamesagnew/hapi-fhir/blob/master/hapi-fhir-base/src/main/java/ca/uhn/fhir/parser/JsonParser.java)
+JSON parser to parse to/from JSON format to/from a HAPI-FHIR’s
+`IBaseResource`.
+
+# FHIR JSON Format Options
diff --git a/camel-fhirXml-dataformat.md b/camel-fhirXml-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..f7707439b7bc9ab3ea260fbf6d862deb1598a43c
--- /dev/null
+++ b/camel-fhirXml-dataformat.md
@@ -0,0 +1,10 @@
+# FhirXml-dataformat.md
+
+**Since Camel 2.21**
+
+The FHIR-XML Data Format leverages
+[HAPI-FHIR’s](https://github.com/jamesagnew/hapi-fhir/blob/master/hapi-fhir-base/src/main/java/ca/uhn/fhir/parser/XmlParser.java)
+XML parser to parse to/from XML format to/from a HAPI-FHIR’s
+`IBaseResource`.
+
+# FHIR XML Format Options
diff --git a/camel-file-language.md b/camel-file-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..4df3d7ad4b05218445297581757a1ac099485f53
--- /dev/null
+++ b/camel-file-language.md
@@ -0,0 +1,411 @@
+# File-language.md
+
+**Since Camel 1.1**
+
+The File Expression Language is an extension to the
+[Simple](#simple-language.adoc) language, adding file related
+capabilities. These capabilities are related to common use cases working
+with file path and names. The goal is to allow expressions to be used
+with the [File](#components::file-component.adoc) and
+[FTP](#components::ftp-component.adoc) components for setting dynamic
+file patterns for both consumer and producer.
+
+The file language is merged with [Simple](#simple-language.adoc)
+language, which means you can use all the file syntax directly within
+the simple language.
+
+# File Language options
+
+# Syntax
+
+This language is an **extension** to the [Simple](#simple-language.adoc)
+language, so the [Simple](#simple-language.adoc) syntax also applies. The
+table below therefore only lists the additional file-related functions.
+
+All the file tokens use the same expression name as the corresponding method
+on the `java.io.File` object, for instance `file:absolute` refers to the
+`java.io.File.isAbsolute()` method. Notice that not all expressions are
+supported by every component. For instance, the
+[FTP](#components::ftp-component.adoc) component supports only some of them,
+whereas the File component supports all of them.
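As a plain-JDK illustration (not Camel code), here is how the analogous `java.io.File` accessors behave; the class name and file path are made up for this example, and the printed separators are platform-dependent:

```java
import java.io.File;

public class FileTokenMapping {
    public static void main(String[] args) {
        // Hypothetical relative path, mirroring the examples later on this page
        File f = new File("filelanguage/test/hello.txt");

        System.out.println(f.getName());     // analogous to file:onlyname
        System.out.println(f.getParent());   // analogous to file:parent
        System.out.println(f.getPath());     // analogous to file:path
        System.out.println(f.isAbsolute());  // file:absolute (false here: relative path)
    }
}
```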
+
+
+| Expression | Type | File Consumer | File Producer | FTP Consumer | FTP Producer | Description |
+|---|---|---|---|---|---|---|
+| `file:name` | String | yes | no | yes | no | refers to the file name (is relative to the starting directory, see note below) |
+| `file:name.ext` | String | yes | no | yes | no | refers to the file extension only |
+| `file:name.ext.single` | String | yes | no | yes | no | refers to the file extension. If the file extension has multiple dots, then this expression strips and only returns the last part. |
+| `file:name.noext` | String | yes | no | yes | no | refers to the file name with no extension (is relative to the starting directory, see note below) |
+| `file:name.noext.single` | String | yes | no | yes | no | refers to the file name with no extension (is relative to the starting directory, see note below). If the file name has multiple dots, then this expression strips only the last part, and keeps the others. |
+| `file:onlyname` | String | yes | no | yes | no | refers to the file name only with no leading paths |
+| `file:onlyname.noext` | String | yes | no | yes | no | refers to the file name only with no extension and with no leading paths |
+| `file:onlyname.noext.single` | String | yes | no | yes | no | refers to the file name only with no extension and with no leading paths. If the file extension has multiple dots, then this expression strips only the last part, and keeps the others. |
+| `file:ext` | String | yes | no | yes | no | refers to the file extension only |
+| `file:parent` | String | yes | no | yes | no | refers to the file parent |
+| `file:path` | String | yes | no | yes | no | refers to the file path |
+| `file:absolute` | Boolean | yes | no | no | no | refers to whether the file is regarded as absolute or relative |
+| `file:absolute.path` | String | yes | no | no | no | refers to the absolute file path |
+| `file:length` | Long | yes | no | yes | no | refers to the file length returned as a Long type |
+| `file:size` | Long | yes | no | yes | no | refers to the file length returned as a Long type |
+| `file:modified` | Date | yes | no | yes | no | refers to the file last modified returned as a Date type |
+| `date:command:pattern` | String | yes | yes | yes | yes | for date formatting using the `java.text.SimpleDateFormat` patterns. Is an extension to the Simple language. Additional command is `file` (consumers only) for the last modified timestamp of the file. Notice: all the commands from the Simple language can also be used. |
+
+
+# File token example
+
+## Relative paths
+
+We have a `java.io.File` handle for the file `hello.txt` in the
+following **relative** directory: `./filelanguage/test`. And we
+configure our endpoint to use this starting directory `./filelanguage`.
+The file tokens will return as:
+
+
+| Expression | Returns |
+|---|---|
+| `file:name` | test/hello.txt |
+| `file:name.ext` | txt |
+| `file:name.noext` | test/hello |
+| `file:onlyname` | hello.txt |
+| `file:onlyname.noext` | hello |
+| `file:ext` | txt |
+| `file:parent` | filelanguage/test |
+| `file:path` | filelanguage/test/hello.txt |
+| `file:absolute` | false |
+| `file:absolute.path` | /workspace/camel/camel-core/target/filelanguage/test/hello.txt |
+
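The token semantics above can be pictured with plain string operations. This is a sketch of the documented behavior, not Camel's actual implementation, and the helper names are made up for this example:

```java
public class FileTokens {

    // file:onlyname — the file name with no leading paths
    public static String onlyName(String name) {
        int i = name.lastIndexOf('/');
        return i < 0 ? name : name.substring(i + 1);
    }

    // file:ext — everything after the first dot of the bare file name
    public static String ext(String name) {
        String n = onlyName(name);
        int i = n.indexOf('.');
        return i < 0 ? "" : n.substring(i + 1);
    }

    // file:ext.single — only the last part of a multi-dot extension
    public static String extSingle(String name) {
        String n = onlyName(name);
        int i = n.lastIndexOf('.');
        return i < 0 ? "" : n.substring(i + 1);
    }

    // file:onlyname.noext — bare name with the extension stripped
    public static String onlyNameNoExt(String name) {
        String n = onlyName(name);
        int i = n.indexOf('.');
        return i < 0 ? n : n.substring(0, i);
    }

    public static void main(String[] args) {
        System.out.println(onlyName("test/hello.tar.gz"));      // hello.tar.gz
        System.out.println(ext("test/hello.tar.gz"));           // tar.gz
        System.out.println(extSingle("test/hello.tar.gz"));     // gz
        System.out.println(onlyNameNoExt("test/hello.tar.gz")); // hello
    }
}
```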
+
+## Absolute paths
+
+We have a `java.io.File` handle for the file `hello.txt` in the
+following **absolute** directory:
+`/workspace/camel/camel-core/target/filelanguage/test`. And we configure
+our endpoint to use the absolute starting directory
+`/workspace/camel/camel-core/target/filelanguage`. The file tokens will
+return as:
+
+
+| Expression | Returns |
+|---|---|
+| `file:name` | test/hello.txt |
+| `file:name.ext` | txt |
+| `file:name.noext` | test/hello |
+| `file:onlyname` | hello.txt |
+| `file:onlyname.noext` | hello |
+| `file:ext` | txt |
+| `file:parent` | /workspace/camel/camel-core/target/filelanguage/test |
+| `file:path` | /workspace/camel/camel-core/target/filelanguage/test/hello.txt |
+| `file:absolute` | true |
+| `file:absolute.path` | /workspace/camel/camel-core/target/filelanguage/test/hello.txt |
+
+
+# Examples
+
+You can enter a fixed file name such as `myfile.txt`:
+
+ fileName="myfile.txt"
+
+Let’s assume we use the file consumer to read files and want to move the
+read files to a backup folder with the current date as a subfolder. This
+can be done using an expression like:
+
+ fileName="backup/${date:now:yyyyMMdd}/${file:name.noext}.bak"
+
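The `${date:now:yyyyMMdd}` part of the expression uses `java.text.SimpleDateFormat` patterns. A plain-JDK sketch of what the resolved file name looks like (the helper method here is hypothetical, not part of Camel):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class BackupFileName {

    // Builds the same shape of path as backup/${date:now:yyyyMMdd}/${file:name.noext}.bak
    public static String backupPath(Date now, String nameNoExt) {
        String day = new SimpleDateFormat("yyyyMMdd").format(now);
        return "backup/" + day + "/" + nameNoExt + ".bak";
    }

    public static void main(String[] args) {
        // e.g. backup/20240131/report.bak (depends on the current date)
        System.out.println(backupPath(new Date(), "report"));
    }
}
```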
+Relative folder names are also supported. Suppose the backup folder
+should be a sibling folder; then you can use `..` as shown:
+
+ fileName="../backup/${date:now:yyyyMMdd}/${file:name.noext}.bak"
+
+As this is an extension to the [Simple](#simple-language.adoc) language,
+we also have access to all of its features. In this use case, we want to
+use the `in.header.type` header as a parameter in the dynamic
+expression:
+
+ fileName="../backup/${date:now:yyyyMMdd}/type-${in.header.type}/backup-of-${file:name.noext}.bak"
+
+If you have a custom date you want to use in the expression, then Camel
+supports retrieving dates from the message header:
+
+ fileName="orders/order-${in.header.customerId}-${date:in.header.orderDate:yyyyMMdd}.xml"
+
+And finally, we can also use a bean expression to invoke a POJO class
+that generates some String output (or convertible to String) to be used:
+
+ fileName="uniquefile-${bean:myguidgenerator.generateid}.txt"
+
+Of course, all this can be combined in one expression where you can use
+the [File Language](#file-language.adoc), [Simple](#simple-language.adoc)
+and the [Bean](#components::bean-component.adoc) language in one
+combined expression. This is pretty powerful for those common file path
+patterns.
+
+# Dependencies
+
+The File language is part of **camel-core**.
diff --git a/camel-file-watch.md b/camel-file-watch.md
index b891586f9bf17ac4450b946f2742424838364ed5..69ce33d0bfb3c4b178cc8bee2c550380c79079ae 100644
--- a/camel-file-watch.md
+++ b/camel-file-watch.md
@@ -10,7 +10,7 @@ folder. It is based on the project
# URI Options
-# Examples:
+# Examples
## Recursive watch all events (file creation, file deletion, file modification):
diff --git a/camel-file.md b/camel-file.md
index c39e36ed733307c7ef9df63f9a65ab24342ddc90..986f0632a47b6b15169f26779dc511d9ec65dc83 100644
--- a/camel-file.md
+++ b/camel-file.md
@@ -40,7 +40,9 @@ directly](#File2-Consumingfilesfromfolderswhereothersdropfilesdirectly).
By default, it will override any existing file if one exists with the
same name.
-# Move, Pre Move and Delete operations
+# Usage
+
+## Move, Pre Move and Delete operations
By default, Camel will move consumed files to the `.camel` subfolder
relative to the directory where the file was consumed.
@@ -53,7 +55,7 @@ There is a sample [showing reading from a directory and the default move
operation](#File2-ReadingFromADirectoryAndTheDefaultMoveOperation)
below.
-## Move, Delete and the Routing process
+### Move, Delete and the Routing process
Any move or delete operations are executed after the routing has
completed. So, during processing of the `Exchange` the file is still
@@ -83,7 +85,7 @@ which we use to return the file name to be used. This can be either
relative or absolute. If relative, the directory is created as a
subfolder from within the folder where the file was consumed.
-## Move and Pre Move operations
+### Move and Pre Move operations
We have introduced a `preMove` operation to move files **before** they
are processed. This allows you to mark which files have been scanned as
@@ -98,7 +100,7 @@ You can combine the `preMove` and the regular `move`:
So in this situation, the file is in the `inprogress` folder when being
processed, and after it’s processed, it’s moved to the `.done` folder.
-## Fine-grained control over Move and PreMove option
+### Fine-grained control over Move and PreMove option
The `move` and `preMove` options are Expression-based, so we have the
full power of the [File Language](#languages:file-language.adoc) to do
@@ -117,7 +119,7 @@ as the pattern, we can do:
move=backup/${date:now:yyyyMMdd}/${file:name}
-## About moveFailed
+### About moveFailed
The `moveFailed` option allows you to move files that **could not** be
processed successfully to another location such as an error folder of
@@ -126,7 +128,7 @@ timestamp you can use
See more examples at [File Language](#languages:file-language.adoc)
-# Exchange Properties, file consumer only
+## Exchange Properties, file consumer only
As the file consumer implements the `BatchConsumer` it supports batching
the files it polls. By batching, we mean that Camel will add the
@@ -138,23 +140,23 @@ following additional properties to the Exchange:
-
+
-
+
CamelBatchSize
The total number of files that was
polled in this batch.
-
+
CamelBatchIndex
The current index of the batch. Starts
from 0.
-
+
CamelBatchComplete
A boolean value indicating
@@ -168,7 +170,7 @@ This allows you, for instance, to know how many files exist in this
batch and for instance, let the Aggregator2 aggregate this number of
files.
-# Using charset
+## Using charset
The `charset` option allows configuring the encoding of the files on
both the consumer and producer endpoints. For example, if you read utf-8
@@ -236,7 +238,7 @@ And the logs:
DEBUG GenericFileConverter - Read file /Users/davsclaus/workspace/camel/camel-core/target/charset/input/input.txt with charset utf-8
DEBUG FileOperations - Using Reader to write file: target/charset/output.txt with charset: iso-8859-1
-# Common gotchas with folder and filenames
+## Common gotchas with folder and filenames
When Camel is producing files (writing files), there are a few gotchas
affecting how to set a filename of your choice. By default, Camel will
@@ -266,14 +268,14 @@ And a syntax where we set the filename on the endpoint with the
from("direct:report").to("file:target/reports/?fileName=report.txt");
-# Filename Expression
+## Filename Expression
Filename can be set either using the **expression** option or as a
string-based [File Language](#languages:file-language.adoc) expression
in the `CamelFileName` header. See the [File
Language](#languages:file-language.adoc) for syntax and samples.
-# Consuming files from folders where others drop files directly
+## Consuming files from folders where others drop files directly
Beware if you consume files from a folder where other applications write
files too. Take a look at the different `readLock` options to see what
@@ -288,9 +290,9 @@ this. You may also want to look at the `doneFileName` option, which uses
a marker file (*done file*) to signal when a file is done and ready to
be consumed.
-# Done files
+## Done files
-## Using done files
+### Using done files
See also section [*writing done files*](#File2-WritingDoneFiles) below.
@@ -332,7 +334,7 @@ You can also use a prefix for the *done file*, such as:
- `ready-hello.txt`: is the associated `done` file
-## Writing done files
+### Writing done files
After you have written a file, you may want to write an additional *done
file* as a kind of marker, to indicate to others that the file is
@@ -368,7 +370,7 @@ File name without the extension
Will, for example, create a file named `foo.done` if the target file was
`foo.txt` in the same directory as the target file.
-# Using flatten
+## Using flatten
If you want to store the files in the `outputdir` directory in the same
directory, disregarding the source directory layout (e.g., to flatten
@@ -382,13 +384,13 @@ It will result in the following output layout:
outputdir/foo.txt
outputdir/bar.txt
-# Writing to files
+## Writing to files
Camel is also able to write files, i.e., produce files. In the sample
below, we receive some reports on the SEDA queue that we process before
they are being written to a directory.
-## Write to subdirectory using `Exchange.FILE_NAME`
+### Write to subdirectory using `Exchange.FILE_NAME`
Using a single route, it is possible to write a file to any number of
subdirectories. If you have a route setup as such:
@@ -407,7 +409,7 @@ as:
This allows you to have a single route to write files to multiple
destinations.
-## Writing file through the temporary directory relative to the final destination
+### Writing file through the temporary directory relative to the final destination
Sometimes you need to temporarily write the files to some directory
relative to the destination directory. Such a situation usually happens
after data transfer is done, they will be atomically moved to the
from("direct:start").
to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/");
-# Avoiding reading the same file more than once (idempotent consumer)
+## Avoiding reading the same file more than once (idempotent consumer)
Camel supports Idempotent Consumer directly within the component, so it
will skip already processed files. This feature can be enabled by
@@ -458,9 +460,9 @@ consumed before:
DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\idempotent\report.txt
-# Idempotent Repository
+## Idempotent Repository
-## Using a file-based idempotent repository
+### Using a file-based idempotent repository
In this section we will use the file-based idempotent repository
`org.apache.camel.processor.idempotent.FileIdempotentRepository` instead
@@ -479,7 +481,7 @@ idempotent repository and define our file consumer to use our repository
with the `idempotentRepository` using `#` sign to indicate Registry
lookup:
-## Using a JPA based idempotent repository
+### Using a JPA based idempotent repository
In this section, we will use the JPA based idempotent repository instead
of the in-memory based that is used as default.
@@ -522,17 +524,17 @@ option:
-# Filtering Strategies
+## Filtering Strategies
Camel supports pluggable filtering strategies. They are described below.
-## Filter using the `GenericFilter`
+### Filter using the `GenericFilter`
The `filter` option allows you to implement a custom filter in Java code
by implementing the `org.apache.camel.component.file.GenericFileFilter`
interface.
-### Implementing a GenericFilter
+#### Implementing a GenericFilter
The interface has an `accept` method that returns a boolean. The meaning
of the return values are:
@@ -545,7 +547,7 @@ There is also a `isDirectory` method on `GenericFile` to inform whether
the file is a directory. This allows you to filter unwanted directories,
to avoid traversing down unwanted directories.
-### Using the `GenericFilter`
+#### Using the `GenericFilter`
You can then configure the endpoint with such a filter to skip certain
files being processed.
@@ -565,7 +567,7 @@ spring XML file:
-## Filtering using ANT path matcher
+### Filtering using ANT path matcher
The ANT path matcher is based on
[AntPathMatcher](http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/util/AntPathMatcher.html).
@@ -586,11 +588,11 @@ The sample below demonstrates how to use it:
from("file://inbox?antInclude=**/*.txt").to("...");
-# Sorting Strategies
+## Sorting Strategies
Camel supports pluggable sorting strategies. They are described below.
-## Sorting using Comparator
+### Sorting using Comparator
This strategy is to use the built-in `java.util.Comparator` in Java. You
can then configure the endpoint with such a comparator and have Camel
@@ -618,7 +620,7 @@ Registry by prefixing the id with `#`. So writing `sorter=#mySorter`,
will instruct Camel to go look in the Registry for a bean with the ID,
`mySorter`.
-## Sorting using sortBy
+### Sorting using sortBy
Camel supports pluggable sorting strategies. This strategy uses the
[File Language](#languages:file-language.adoc) to configure the sorting.
@@ -676,7 +678,7 @@ per group, so we could reverse the file names:
sortBy=date:file:yyyyMMdd;reverse:file:name
-# Using GenericFileProcessStrategy
+## Using GenericFileProcessStrategy
The option `processStrategy` can be used to use a custom
`GenericFileProcessStrategy` that allows you to implement your own
@@ -699,7 +701,7 @@ this as:
- in the `commit()` method we can move the actual file and also delete
the *ready* file.
-# Using bridgeErrorHandler
+## Using bridgeErrorHandler
If you want to use the Camel Error Handler to deal with any exception
occurring in the file consumer, then you can enable the
@@ -726,12 +728,12 @@ When using bridgeErrorHandler, then `interceptors`, `OnCompletions` do
Handler, and does not allow prior actions such as interceptors,
onCompletion to take action.
-# Debug logging
+## Debug logging
This component has log level **TRACE** that can be helpful if you have
problems.
-# Samples
+# Examples
## Reading from a directory and the default move operation
@@ -832,11 +834,11 @@ See [File Language](#languages:file-language.adoc) for more samples.
|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
-|directoryMustExist|Similar to the startingDirectoryMustExist option but this applies during polling (after starting the consumer).|false|boolean|
+|directoryMustExist|Similar to the startingDirectoryMustExist option, but this applies during polling (after starting the consumer).|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|extendedAttributes|To define which file attributes of interest. Like posix:permissions,posix:owner,basic:lastAccessTime, it supports basic wildcard like posix:, basic:lastAccessTime||string|
-|includeHiddenDirs|Whether to accept hidden directories. Directories which names starts with dot is regarded as a hidden directory, and by default not included. Set this option to true to include hidden directories in the file consumer.|false|boolean|
+|includeHiddenDirs|Whether to accept hidden directories. Directories whose names start with a dot are regarded as hidden directories, and by default are not included. Set this option to true to include hidden directories in the file consumer.|false|boolean|
|includeHiddenFiles|Whether to accept hidden files. Files which names starts with dot is regarded as a hidden file, and by default not included. Set this option to true to include hidden files in the file consumer.|false|boolean|
|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used.||object|
|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory.||string|
@@ -845,7 +847,7 @@ See [File Language](#languages:file-language.adoc) for more samples.
|probeContentType|Whether to enable probing of the content type. If enable then the consumer uses Files#probeContentType(java.nio.file.Path) to determine the content-type of the file, and store that as a header with key Exchange#FILE\_CONTENT\_TYPE on the Message.|false|boolean|
|processStrategy|A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply.||object|
|startingDirectoryMustExist|Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will throw an exception if the directory doesn't exist.|false|boolean|
-|startingDirectoryMustHaveAccess|Whether the starting directory has access permissions. Mind that the startingDirectoryMustExist parameter must be set to true in order to verify that the directory exists. Will thrown an exception if the directory doesn't have read and write permissions.|false|boolean|
+|startingDirectoryMustHaveAccess|Whether the starting directory has access permissions. Mind that the startingDirectoryMustExist parameter must be set to true to verify that the directory exists. Will throw an exception if the directory doesn't have read and write permissions.|false|boolean|
|appendChars|Used to append characters (text) after writing files. This can for example be used to add new lines or other separators when writing and appending new files or existing files. To specify new-line (slash-n or slash-r) or tab (slash-t) characters then escape with an extra slash, eg slash-slash-n.||string|
|checksumFileAlgorithm|If provided, then Camel will write a checksum file when the original file has been written. The checksum file will contain the checksum created with the provided algorithm for the original file. The checksum file will always be written in the same folder as the original file.||string|
|fileExist|What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers.|Override|object|
@@ -855,17 +857,17 @@ See [File Language](#languages:file-language.adoc) for more samples.
|tempFileName|The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir.||string|
|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files.||string|
|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean|
-|chmod|Specify the file permissions which is sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it.||string|
-|chmodDirectory|Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it.||string|
+|chmod|Specify the file permissions that are sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755, we will ignore it.||string|
+|chmodDirectory|Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755, we will ignore it.||string|
|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation.|true|boolean|
-|forceWrites|Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example if writing to logs / audit logs etc; this would yield better performance.|true|boolean|
+|forceWrites|Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example, if writing to logs / audit logs etc.; this would yield better performance.|true|boolean|
|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|moveExistingFileStrategy|Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided||object|
|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean|
|bufferSize|Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files).|131072|integer|
-|copyAndDeleteOnRenameFail|Whether to fallback and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component.|true|boolean|
-|renameUsingCopy|Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g. across different file systems or networks). This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays.|false|boolean|
+|copyAndDeleteOnRenameFail|Whether to fall back and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component.|true|boolean|
+|renameUsingCopy|Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g., across different file systems or networks). This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays.|false|boolean|
|synchronous|Sets whether synchronous processing should be strictly used|false|boolean|
|antExclude|Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format.||string|
|antFilterCaseSensitive|Sets case sensitive flag on ant filter.|true|boolean|
diff --git a/camel-filter-eip.md b/camel-filter-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..ffec725f3a88140e2460dd0d93188c8bc5e80022
--- /dev/null
+++ b/camel-filter-eip.md
@@ -0,0 +1,155 @@
+# Filter-eip.md
+
+The [Message
+Filter](http://www.enterpriseintegrationpatterns.com/Filter.html) from
+the [EIP patterns](#enterprise-integration-patterns.adoc) allows you to
+filter messages.
+
+How can a component avoid receiving uninteresting messages?
+
+
+
+
+
+Use a special kind of Message Router, a Message Filter, to eliminate
+undesired messages from a channel based on a set of criteria.
+
+The message filter implemented in Camel is similar to
+`if (predicate) { block }` in Java. The filter will **include** the
+message only if the predicate evaluates to `true`.
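In plain Java terms, this corresponds to a predicate deciding which messages continue down the channel. The following is a rough sketch using JDK types only, unrelated to Camel's actual classes:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class MessageFilterSketch {

    // Only messages matching the predicate are passed on; the rest are dropped
    public static List<String> filter(List<String> messages, Predicate<String> predicate) {
        return messages.stream().filter(predicate).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> out = filter(List.of("bar", "baz", "bar"), m -> m.equals("bar"));
        System.out.println(out); // [bar, bar]
    }
}
```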
+
+# EIP options
+
+# Exchange properties
+
+# Example
+
+The Camel [Simple](#languages:simple-language.adoc) language is great to
+use with the Filter EIP when routing is based on the content of the
+message, such as checking message headers.
+
+Java
+
+    from("direct:a")
+        .filter(simple("${header.foo} == 'bar'"))
+            .to("direct:bar")
+        .end()
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <filter>
+            <simple>${header.foo} == 'bar'</simple>
+            <to uri="direct:bar"/>
+        </filter>
+        <to uri="direct:b"/>
+    </route>
+
+You can use many languages as the predicate, such as
+[XPath](#languages:xpath-language.adoc):
+
+Java
+
+    from("direct:start")
+        .filter().xpath("/person[@name='James']")
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <filter>
+            <xpath>/person[@name='James']</xpath>
+            <to uri="mock:result"/>
+        </filter>
+    </route>
+
+Here is another example of calling a [method on a
+bean](#languages:bean-language.adoc) to define the filter behavior:
+
+ from("direct:start")
+ .filter().method(MyBean.class, "isGoldCustomer")
+ .to("mock:gold")
+ .end()
+ .to("mock:all");
+
+The bean can then have a method that returns a `boolean` as the
+predicate:
+
+ public static class MyBean {
+
+ public boolean isGoldCustomer(@Header("level") String level) {
+ return level.equals("gold");
+ }
+
+ }
+
+And in XML, we can call the bean in `<method>` where we can specify the
+FQN class name of the bean as shown:
+
+    <route>
+        <from uri="direct:start"/>
+        <filter>
+            <method beanType="com.foo.MyBean" method="isGoldCustomer"/>
+            <to uri="mock:gold"/>
+        </filter>
+        <to uri="mock:all"/>
+    </route>
+
+## Filtering with status property
+
+To know whether an `Exchange` was filtered or not, you can specify the
+name of an exchange property to store the result (a boolean), using
+`statusPropertyName` as shown below:
+
+Java
+
+    from("direct:start")
+        .filter().method(MyBean.class, "isGoldCustomer").statusPropertyName("gold")
+            .to("mock:gold")
+        .end()
+        .to("mock:all");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <filter statusPropertyName="gold">
+            <method beanType="com.foo.MyBean" method="isGoldCustomer"/>
+            <to uri="mock:gold"/>
+        </filter>
+        <to uri="mock:all"/>
+    </route>
+In the example above, Camel will store an exchange property with the key
+`gold` holding the result of the filtering, whether it was `true` or
+`false`.
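A minimal plain-Java sketch of what `statusPropertyName` records (illustrative names, not the Camel API): the predicate result is stored under the given property key whether or not the message is included:

```java
import java.util.HashMap;
import java.util.Map;

public class StatusPropertySketch {

    // Evaluate the gold-customer predicate, store the boolean outcome as an
    // "exchange property" under statusPropertyName, and report inclusion.
    public static boolean filterWithStatus(Map<String, Object> headers,
                                           Map<String, Object> properties,
                                           String statusPropertyName) {
        boolean matched = "gold".equals(headers.get("level"));
        properties.put(statusPropertyName, matched);
        return matched;
    }

    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<>();
        filterWithStatus(Map.of("level", "silver"), properties, "gold");
        // Even a filtered-out message gets the status property recorded
        System.out.println(properties); // {gold=false}
    }
}
```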
+
+## Filtering and stopping
+
+When using the Message Filter EIP, it only applies to its children.
+
+For example, in the previous example:
+
+    <route>
+        <from uri="direct:start"/>
+        <filter>
+            <method beanType="com.foo.MyBean" method="isGoldCustomer"/>
+            <to uri="mock:gold"/>
+        </filter>
+        <to uri="mock:all"/>
+    </route>
+
+A message from a gold customer (predicate is `true`) will be routed to
+both `mock:gold` and `mock:all`. However, a non-gold message (predicate
+is `false`) will not be routed in the filter block, but will still be
+routed to `mock:all`.
+
+Sometimes you may want to stop routing for messages that were filtered.
+To do this, you can use the [Stop](#stop-eip.adoc) EIP as shown:
+
+    <route>
+        <from uri="direct:start"/>
+        <filter>
+            <method beanType="com.foo.MyBean" method="isGoldCustomer"/>
+            <to uri="mock:gold"/>
+            <stop/>
+        </filter>
+        <to uri="mock:all"/>
+    </route>
+
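The routing difference can be sketched in plain Java (illustrative names, not the Camel API): without the stop, every message continues to `mock:all`; with it, matching messages end inside the filter block:

```java
import java.util.ArrayList;
import java.util.List;

public class StopSketch {

    // Simulates the route: the filter block runs for gold customers; with
    // stopAfterGold=true, routing ends inside the filter (the Stop EIP).
    public static List<String> route(boolean isGold, boolean stopAfterGold) {
        List<String> destinations = new ArrayList<>();
        if (isGold) {
            destinations.add("mock:gold");
            if (stopAfterGold) {
                return destinations; // stop: no further routing
            }
        }
        destinations.add("mock:all");
        return destinations;
    }

    public static void main(String[] args) {
        System.out.println(route(true, false));  // [mock:gold, mock:all]
        System.out.println(route(false, false)); // [mock:all]
        System.out.println(route(true, true));   // [mock:gold]
    }
}
```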
diff --git a/camel-flatpack-dataformat.md b/camel-flatpack-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..418864bea94c19a63c548882461b0cb6bb22b579
--- /dev/null
+++ b/camel-flatpack-dataformat.md
@@ -0,0 +1,65 @@
+# Flatpack-dataformat.md
+
+**Since Camel 2.1**
+
+The [Flatpack](#ROOT:flatpack-component.adoc) component ships with the
+Flatpack data format that can be used to convert between fixed-width or
+delimited text messages and a `List` of rows as `Map`.
+
+- marshal = from `List<Map<String, Object>>` to `OutputStream` (can be
+  converted to `String`)
+
+- unmarshal = from `java.io.InputStream` (such as a `File` or
+ `String`) to a `java.util.List` as an
+ `org.apache.camel.component.flatpack.DataSetList` instance.
+ The result of the operation will contain all the data. If you need
+ to process each row one by one you can split the exchange, using
+ Splitter.
+
+**Notice:** The Flatpack library does not currently support headers and
+trailers for the marshal operation.
+
+# Options
+
+# Usage
+
+To use the data format, create an instance and invoke the marshal or
+unmarshal operation in the route builder:
+
+    FlatpackDataFormat df = new FlatpackDataFormat();
+    df.setDefinition(new ClassPathResource("INVENTORY-Delimited.pzmap.xml"));
+    ...
+    from("file:order/in").unmarshal(df).to("seda:queue:neworder");
+
+The sample above will read files from the `order/in` folder and
+unmarshal the input using the Flatpack configuration file
+`INVENTORY-Delimited.pzmap.xml` that configures the structure of the
+files. The result is a `DataSetList` object we store on the SEDA queue.
+
+ FlatpackDataFormat df = new FlatpackDataFormat();
+ df.setDefinition(new ClassPathResource("PEOPLE-FixedLength.pzmap.xml"));
+ df.setFixed(true);
+ df.setIgnoreFirstRecord(false);
+
+ from("seda:people").marshal(df).convertBodyTo(String.class).to("jms:queue:people");
+
+In the code above, we marshal the data from an Object representation as a
+`List` of rows as `Maps`. Each row `Map` contains the column name as
+the key, and the corresponding value. This structure can be created in
+Java code from e.g., a processor. We marshal the data according to the
+Flatpack format and convert the result as a `String` object and store it
+on a JMS queue.
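The row structure the data format works with can be sketched in plain Java. This shows only the shape of the data, not the Flatpack library itself; the delimiter and column names are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RowsSketch {

    // Marshal analogy: each row is a Map keyed by column name; joining the
    // values with the delimiter yields one delimited text line per row.
    public static String marshal(List<Map<String, Object>> rows, String delimiter) {
        return rows.stream()
                .map(row -> row.values().stream()
                        .map(String::valueOf)
                        .collect(Collectors.joining(delimiter)))
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps the column order stable
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("FIRSTNAME", "JOHN");
        row.put("LASTNAME", "DOE");
        System.out.println(marshal(List.of(row), ";")); // JOHN;DOE
    }
}
```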
+
+# Dependencies
+
+To use Flatpack in your Camel routes, you need to add a dependency on
+**camel-flatpack**, which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-flatpack</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-flatpack.md b/camel-flatpack.md
index 48bc6f723922a9e7eb1d0acb541878dfb649db93..8c9afaf8ac56c8bc138fe6de181f59348602371a 100644
--- a/camel-flatpack.md
+++ b/camel-flatpack.md
@@ -18,7 +18,7 @@ Or for a delimited file handler with no configuration file just use
flatpack:someName[?options]
-# Examples
+# Usage
- `flatpack:fixed:foo.pzmap.xml` creates a fixed-width endpoint using
the `foo.pzmap.xml` file configuration.
@@ -29,7 +29,7 @@ Or for a delimited file handler with no configuration file just use
- `flatpack:foo` creates a delimited endpoint called `foo` with no
file configuration.
-# Message Body
+## Message Body
The component delivers the data in the IN message as a
`org.apache.camel.component.flatpack.DataSetList` object that has
@@ -40,7 +40,7 @@ Usually you want the `Map` if you process one row at a time
Each `Map` contains the key for the column name and its corresponding
value.
-For example to get the firstname from the sample below:
+For example, to get the firstname from the sample below:
Map row = exchange.getIn().getBody(Map.class);
String firstName = row.get("FIRSTNAME");
@@ -52,7 +52,7 @@ However, you can also always get it as a `List` (even for
Map row = (Map)data.get(0);
String firstName = row.get("FIRSTNAME");
-# Header and Trailer records
+## Header and Trailer records
The header and trailer notions in Flatpack are supported. However, you
**must** use fixed record IDs:
@@ -81,7 +81,7 @@ trailer. You can omit one or both of them if not needed.
-# Using the endpoint
+## Using as an Endpoint
A common use case is sending a file to this endpoint for further
processing in a separate route. For example:
@@ -101,7 +101,7 @@ processing in a separate route. For example:
You can also convert the payload of each message created to a `Map` for
easy Bean Integration
-# Flatpack DataFormat
+## Flatpack DataFormat
The [Flatpack](#flatpack-component.adoc) component ships with the
Flatpack data format that can be used to format between fixed width or
@@ -120,7 +120,7 @@ delimited text messages to a `List` of rows as `Map`.
**Notice:** The Flatpack library does currently not support header and
trailers for the marshal operation.
-# Options
+### Options
The data format has the following options:
@@ -131,56 +131,56 @@ The data format has the following options:
| Option | Default | Description |
|---|---|---|
| `definition` | `null` | The flatpack pzmap configuration file. Can be omitted in simpler situations, but it's preferred to use the pzmap. |
| `fixed` | `false` | Delimited or fixed. |
| `ignoreFirstRecord` | `true` | Whether the first line is ignored for delimited files (for the column headers). |
| `textQualifier` | `"` | If the text is qualified with a char such as `"`. |
| `delimiter` | `,` | The delimiter char (could be `;`, `,` or similar). |
| `parserFactory` | `null` | Uses the default Flatpack parser factory. |
| `allowShortLines` | `false` | Allows for lines to be shorter than expected and ignores the extra characters. |
| `ignoreExtraColumns` | `false` | Allows for lines to be longer than
@@ -190,10 +190,10 @@ expected and ignores the extra characters.
-# Usage
+## Using the data format
-To use the data format, simply instantiate an instance and invoke the
-marshal or unmarshal operation in the route builder:
+To use the data format, instantiate an instance and invoke the marshal
+or unmarshal operation in the route builder:
FlatpackDataFormat fp = new FlatpackDataFormat();
fp.setDefinition(new ClassPathResource("INVENTORY-Delimited.pzmap.xml"));
@@ -212,21 +212,20 @@ files. The result is a `DataSetList` object we store on the SEDA queue.
from("seda:people").marshal(df).convertBodyTo(String.class).to("jms:queue:people");
-In the code above we marshal the data from a Object representation as a
+In the code above we marshal the data from an Object representation as a
`List` of rows as `Maps`. The rows as `Map` contains the column name as
the key, and the corresponding value. This structure can be created in
-Java code from e.g. a processor. We marshal the data according to the
+Java code from e.g., a processor. We marshal the data according to the
Flatpack format and convert the result as a `String` object and store it
on a JMS queue.
-# Dependencies
+## Dependencies
-To use Flatpack in your camel routes you need to add the a dependency on
+To use Flatpack in your camel routes, you need to add a dependency on
**camel-flatpack** which implements this data format.
-If you use maven you could just add the following to your pom.xml,
-substituting the version number for the latest \& greatest release (see
-the download page for the latest versions).
+If you use maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest release.
org.apache.camel
diff --git a/camel-flink.md b/camel-flink.md
index 054e7e83124c7444a3f55b92d978abb9f37eb518..133bb22f7c488345b5b04e7acb303c3734a6524a 100644
--- a/camel-flink.md
+++ b/camel-flink.md
@@ -30,7 +30,9 @@ DataSet, DataStream jobs.
flink:dataset?dataset=#myDataSet&dataSetCallback=#dataSetCallback
flink:datastream?datastream=#myDataStream&dataStreamCallback=#dataStreamCallback
-# Flink DataSet Callback
+# Examples
+
+## Flink DataSet Callback
@Bean
public DataSetCallback dataSetCallback() {
@@ -46,7 +48,7 @@ DataSet, DataStream jobs.
};
}
-# Flink DataStream Callback
+## Flink DataStream Callback
@Bean
public VoidDataStreamCallback dataStreamCallback() {
@@ -60,7 +62,7 @@ DataSet, DataStream jobs.
};
}
-# Camel-Flink Producer call
+## Camel-Flink Producer call
CamelContext camelContext = new SpringCamelContext(context);
diff --git a/camel-fop.md b/camel-fop.md
index 8a368040f9ee940978fd8752225fc057f7eaecb0..bc7c0aa96e686a3b19367524aa906f42d1723975 100644
--- a/camel-fop.md
+++ b/camel-fop.md
@@ -22,7 +22,9 @@ for this component:
fop://outputFormat?[options]
-# Output Formats
+# Usage
+
+## Output Formats
The primary output format is PDF, but other output
[formats](http://xmlgraphics.apache.org/fop/0.95/output.html) are also
@@ -35,59 +37,59 @@ supported:
| Name | Content type | Description |
|---|---|---|
| PDF | application/pdf | Portable Document Format |
| PS | application/postscript | Adobe Postscript |
| PCL | application/x-pcl | Printer Control Language |
| PNG | image/png | PNG images |
| JPEG | image/jpeg | JPEG images |
| SVG | image/svg+xml | Scalable Vector Graphics |
| XML | application/X-fop-areatree | Area tree representation |
| MIF | application/mif | FrameMaker's MIF |
| RTF | application/rtf | Rich Text Format |
| TXT | text/plain | Text |
@@ -98,7 +100,7 @@ supported:
The complete list of valid output formats can be found in the
`MimeConstants.java` source file.
-# Configuration file
+## Configuration file
The location of a configuration file with the following
[structure](http://xmlgraphics.apache.org/fop/1.0/configuration.html).
@@ -106,7 +108,7 @@ The file is loaded from the classpath by default. You can use `file:`,
or `classpath:` as prefix to load the resource from file or classpath.
In previous releases, the file is always loaded from the file system.
-# Message Operations
+## Message Operations
@@ -115,90 +117,101 @@ In previous releases, the file is always loaded from the file system.
-
+
-
-CamelFop.Output.Format
+
+CamelFop.Output.Format
Overrides the output format for that
message
-
-CamelFop.Encrypt.userPassword
+
+CamelFop.Encrypt.userPassword
PDF user password
-
-CamelFop.Encrypt.ownerPassword
+
+CamelFop.Encrypt.ownerPassword
PDF owner passoword
-
-CamelFop.Encrypt.allowPrint
-true
+
+CamelFop.Encrypt.allowPrint
+true
Allows printing the PDF
-
+
CamelFop.Encrypt.allowCopyContent
-true
+style="text-align: left;">CamelFop.Encrypt.allowCopyContent
+true
Allows copying content of the
PDF
-
+
CamelFop.Encrypt.allowEditContent
-true
+style="text-align: left;">CamelFop.Encrypt.allowEditContent
+true
Allows editing content of the
PDF
-
+
CamelFop.Encrypt.allowEditAnnotations
-true
+style="text-align: left;">CamelFop.Encrypt.allowEditAnnotations
+true
Allows editing annotation of the
PDF
-
-CamelFop.Render.producer
+
+CamelFop.Render.producer
Apache FOP
Metadata element for the
system/software that produces the document
-
-CamelFop.Render.creator
+
+CamelFop.Render.creator
Metadata element for the user that
created the document
-
-CamelFop.Render.creationDate
+
+CamelFop.Render.creationDate
Creation Date
-
-CamelFop.Render.author
+
+CamelFop.Render.author
Author of the content of the
document
-
-CamelFop.Render.title
+
+CamelFop.Render.title
Title of the document
-
-CamelFop.Render.subject
+
+CamelFop.Render.subject
Subject of the document
-
-CamelFop.Render.keywords
+
+CamelFop.Render.keywords
Set of keywords applicable to this
document
@@ -206,9 +219,9 @@ document
-# Example
+## Example
-Below is an example route that renders PDFs from xml data and xslt
+Below is an example route that renders PDFs from XML data and XSLT
template and saves the PDF files in the target folder:
from("file:source/data/xml")
diff --git a/camel-freemarker.md b/camel-freemarker.md
index fa44e7b1ad43288c9dd647844f939ee75fde6f81..4f277ba2f6e1d14d0f274a592dd4b97489d50f25 100644
--- a/camel-freemarker.md
+++ b/camel-freemarker.md
@@ -37,7 +37,9 @@ For example, set the header value of `fruit` in the FreeMarker template:
The header, `fruit`, is now accessible from the `message.out.headers`.
-# FreeMarker Context
+# Usage
+
+## FreeMarker Context
Camel will provide exchange information in the FreeMarker context (just
a `Map`). The `Exchange` is transferred as:
@@ -48,44 +50,44 @@ a `Map`). The `Exchange` is transferred as:
-
+
-
+
exchange
The Exchange
itself.
-
+
exchange.properties
The Exchange
properties.
-
+
variables
The variables
-
+
headers
The headers of the In message.
-
+
camelContext
The Camel Context.
-
+
request
The In message.
-
+
body
The In message body.
-
+
response
The Out message (only for InOut message
exchange pattern).
@@ -102,21 +104,21 @@ the key "**CamelFreemarkerDataModel**" just like this
variableMap.put("exchange", exchange);
exchange.getIn().setHeader("CamelFreemarkerDataModel", variableMap);
-# Hot reloading
+## Hot reloading
The FreeMarker template resource is by default **not** hot reloadable
for both file and classpath resources (expanded jar). If you set
`contentCache=false`, then Camel will not cache the resource and hot
reloading is thus enabled. This scenario can be used in development.
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
headers is set, then Camel uses this over the endpoint configured
resource. This allows you to provide a dynamic template at runtime.
-# Samples
+# Examples
For example, you could use something like:
@@ -153,7 +155,7 @@ dynamically via a header, so for example:
setHeader(FreemarkerConstants.FREEMARKER_RESOURCE_URI).constant("path/to/my/template.ftl").
to("freemarker:dummy?allowTemplateFromHeader=true");
-# The Email Sample
+## The Email Example
In this sample, we want to use FreeMarker templating for an order
confirmation email. The email template is laid out in FreeMarker as:
diff --git a/camel-from-eip.md b/camel-from-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b772273d1ca92d6db851a2b1eafc4316d27dfdd
--- /dev/null
+++ b/camel-from-eip.md
@@ -0,0 +1,35 @@
+# From-eip.md
+
+Every Camel [route](#manual::routes.adoc) starts from an
+[Endpoint](#manual::endpoint.adoc) as the input (source) to the route.
+
+The `from` EIP is the input.
+
+The Java DSL also provides a `fromF` EIP, which can be used to avoid
+concatenating route parameters and making the code harder to read.
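`fromF` builds the endpoint URI from `String.format`-style placeholders. The equivalent plain-Java formatting (a sketch, not the Camel API) is:

```java
public class FromFSketch {

    // fromF("file:%s?delete=%s", dir, flag) formats the endpoint URI the
    // same way String.format does before handing it to from(...).
    public static String endpointUri(String format, Object... args) {
        return String.format(format, args);
    }

    public static void main(String[] args) {
        System.out.println(endpointUri("file:%s?delete=%s", "inbox", true));
        // file:inbox?delete=true
    }
}
```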
+
+# Options
+
+# Exchange properties
+
+# Example
+
+In the route below, the route starts from a
+[File](#ROOT:file-component.adoc) endpoint.
+
+Java
+
+    from("file:inbox")
+        .to("log:inbox");
+
+XML
+
+    <route>
+        <from uri="file:inbox"/>
+        <to uri="log:inbox"/>
+    </route>
+YAML
+
+    - from:
+        uri: file:inbox
+        steps:
+          - to:
+              uri: log:inbox
diff --git a/camel-ftp.md b/camel-ftp.md
index 96781ddf02f4d8944564efd35e9548c9052b829b..6363bf71fbc60bdc1bd35eb7c49862275c7297cd 100644
--- a/camel-ftp.md
+++ b/camel-ftp.md
@@ -56,7 +56,9 @@ FTPS, also known as FTP Secure, is an extension to FTP that adds support
for the Transport Layer Security (TLS) and the Secure Sockets Layer
(SSL) cryptographic protocols.
-# FTPS component default trust store
+# Usage
+
+## FTPS component default trust store
When using the `ftpClient.` properties related to SSL with the FTPS
component, the trust store accepts all certificates. If you only want
@@ -103,7 +105,7 @@ url.
from("ftp://foo@myserver?password=secret&ftpClientConfig=#myConfig").to("bean:foo");
-# Concurrency
+## Concurrency
The FTP consumer (with the same endpoint) does not support concurrency
(the backing FTP client is not thread safe). You can use multiple FTP
@@ -112,7 +114,7 @@ that does not support concurrent consumers.
The FTP producer does **not** have this issue, it supports concurrency.
-# Default when consuming files
+## Default when consuming files
The FTP consumer will by default leave the consumed files untouched on
the remote FTP server. You have to configure it explicitly if you want
@@ -125,7 +127,7 @@ to a `.camel` sub directory. The reason Camel does **not** do this by
default for the FTP consumer is that it may lack permissions by default
to be able to move or delete files.
-## limitations
+### Limitations
The option `readLock` can be used to force Camel **not** to consume
files that are currently in the progress of being written. However, this
@@ -140,7 +142,7 @@ restricted to the FTP\_ROOT folder. That prevents you from moving files
outside the FTP area. If you want to move files to another area, you can
use soft links and move files into a soft linked folder.
-# Exchange Properties
+## Exchange Properties
Camel sets the following exchange properties
@@ -150,23 +152,23 @@ Camel sets the following exchange properties
| Exchange property | Description |
|---|---|
| `CamelBatchIndex` | The current index out of the total number of files being consumed in this batch. |
| `CamelBatchSize` | The total number of files being consumed in this batch. |
| `CamelBatchComplete` | `true` if there are no more
@@ -175,7 +177,7 @@ files in this batch.
-# About timeouts
+## About timeouts
The two sets of libraries (see top) have different API for setting
timeout. You can use the `connectTimeout` option for both of them to set
@@ -186,7 +188,7 @@ a timeout in millis to establish a network connection. An individual
for FTP/FTPS as the data timeout, which corresponds to the
`ftpClient.dataTimeout` value. All timeout values are in millis.
-# Using Local Work Directory
+## Using Local Work Directory
Camel supports consuming from remote FTP servers and downloading the
files directly into a local work directory. This avoids reading the
@@ -213,7 +215,7 @@ directly on the work file `java.io.File` handle and perform a
local work file, it can optimize and use a rename instead of a file
copy, as the work file is meant to be deleted anyway.
-# Stepwise changing directories
+## Stepwise changing directories
Camel FTP can operate in two modes in terms of traversing directories
when consuming files (e.g., downloading) or producing files (e.g.,
@@ -379,14 +381,14 @@ Stepwise Disabled
As you can see when not using stepwise, there is no CD operation invoked
at all.
-# Filtering Strategies
+## Filtering Strategies
Camel supports pluggable filtering strategies. They are described below.
See also the documentation for filtering strategies on the [File
component](#file-component.adoc).
-## Custom filtering
+### Custom filtering
Camel supports pluggable filtering strategies. This strategy is to use
the built-in `org.apache.camel.component.file.GenericFileFilter` in
@@ -408,7 +410,7 @@ spring XML file:
-## Filtering using ANT path matcher
+### Filtering using ANT path matcher
The ANT path matcher is a filter shipped out-of-the-box in the
**camel-spring** jar. So you need to depend on **camel-spring** if you
@@ -428,7 +430,7 @@ The sample below demonstrates how to use it:
from("ftp://admin@localhost:2222/public/camel?antInclude=**/*.txt").to("...");
-# Using a proxy with SFTP
+## Using a proxy with SFTP
To use an HTTP proxy to connect to your remote host, you can configure
your route in the following way:
@@ -448,7 +450,7 @@ You can also assign a username and password to the proxy, if necessary.
Please consult the documentation for `com.jcraft.jsch.Proxy` to discover
all options.
-# Setting preferred SFTP authentication method
+## Setting preferred SFTP authentication method
If you want to explicitly specify the list of authentication methods
that should be used by `sftp` component, use `preferredAuthentications`
@@ -460,7 +462,7 @@ following route configuration:
from("sftp://localhost:9999/root?username=admin&password=admin&preferredAuthentications=publickey,password").
to("bean:processFile");
-# Consuming a single file using a fixed name
+## Consuming a single file using a fixed name
When you want to download a single file and know the file name, you can
use `fileName=myFileName.txt` to tell Camel the name of the file to
@@ -489,12 +491,12 @@ a single file (if it exists) and grab the file content as a String type:
String data = template.retrieveBodyNoWait("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true", String.class);
-# Debug logging
+## Debug logging
This component has log level **TRACE** that can be helpful if you have
problems.
-# Samples
+# Examples
In the sample below, we set up Camel to download all the reports from
the FTP server once every hour (60 min) as BINARY content and store it
@@ -560,9 +562,9 @@ You can find additional samples and details on the File component page.
|fileName|Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids having to temporarily store the CamelFileName and restore it afterwards.||string|
|passiveMode|Sets passive mode connections. Default is active mode connections.|false|boolean|
|separator|Sets the path separator to be used. UNIX = Uses unix style path separator Windows = Uses windows style path separator Auto = (is default) Use existing path separator in file name|UNIX|object|
-|transferLoggingIntervalSeconds|Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations takes longer time.|5|integer|
+|transferLoggingIntervalSeconds|Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations take a longer time.|5|integer|
|transferLoggingLevel|Configure the logging level to use when logging the progress of upload and download operations.|DEBUG|object|
-|transferLoggingVerbose|Configures whether the perform verbose (fine grained) logging of the progress of upload and download operations.|false|boolean|
+|transferLoggingVerbose|Configures whether to perform verbose (fine-grained) logging of the progress of upload and download operations.|false|boolean|
|fastExistsCheck|If this option is set to true, camel-ftp will use the list file directly to check if the file exists. Since some FTP servers may not support listing the file directly, if the option is false, camel-ftp will use the old way to list the directory and check if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files.|false|boolean|
|delete|If true, the file will be deleted after it is processed successfully.|false|boolean|
|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again.||string|
@@ -570,14 +572,14 @@ You can find additional samples and details on the File component page.
|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order.||string|
|preSort|When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled.|false|boolean|
|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean|
-|resumeDownload|Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition the options localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads.|false|boolean|
+|resumeDownload|Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition, the options localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads.|false|boolean|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|streamDownload|Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. If enabling this option then you must set stepwise=false as both cannot be enabled at the same time.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|download|Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
-|handleDirectoryParserAbsoluteResult|Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in with absolute paths The reason for this is that some FTP servers may return file names with absolute paths, and if so then the FTP component needs to handle this by converting the returned path into a relative path.|false|boolean|
+|handleDirectoryParserAbsoluteResult|Allows you to set how the consumer will handle subfolders and files in the path if the directory parser returns absolute paths. The reason for this is that some FTP servers may return file names with absolute paths, and if so, then the FTP component needs to handle this by converting the returned path into a relative path.|false|boolean|
|ignoreFileNotFoundOrPermissionError|Whether to ignore a file or directory that does not exist, or a permission error (when trying to list files in directories or when downloading a file). By default, when a directory or file does not exist or there is insufficient permission, an exception is thrown. Setting this option to true allows ignoring that instead.|false|boolean|
|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used.||object|
|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory.||string|
@@ -593,14 +595,14 @@ You can find additional samples and details on the File component page.
|tempFileName|The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir.||string|
|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files.||string|
|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean|
-|chmod|Allows you to set chmod on the stored file. For example chmod=640.||string|
+|chmod|Allows you to set chmod on the stored file. For example, chmod=640.||string|
|disconnectOnBatchComplete|Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean|
|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exist while the temp file is being written. This ensures the target file is only deleted at the very last moment, just before the temp file is renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If the option copyAndDeleteOnRenameFail is false, then an exception will be thrown if an existing file existed; if it is true, then the existing file is deleted before the move operation.|true|boolean|
|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|moveExistingFileStrategy|Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided||object|
|sendNoop|Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default as a validation that the connection is still valid, which allows silently re-connecting to be able to upload the file. However, if this causes problems, you can turn this option off.|true|boolean|
-|activePortRange|Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, eg 10000-19999 to include all 1xxxx ports.||string|
+|activePortRange|Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, e.g., 10000-19999 to include all 1xxxx ports.||string|
|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean|
|bufferSize|Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files).|131072|integer|
|connectTimeout|Sets the connect timeout for waiting for a connection to be established Used by both FTPClient and JSCH|10000|duration|
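For intuition, the `chmod` value above is three octal digits encoding read/write/execute bits. A minimal plain-JDK sketch of what a value such as 640 means (illustration only, not part of the Camel FTP component's API; the class and method names are hypothetical):

```java
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch: decoding a three-digit octal chmod string (e.g. "640") into rwx notation.
class ChmodSketch {
    static String toRwx(String chmod) {
        StringBuilder sb = new StringBuilder();
        for (char c : chmod.toCharArray()) {
            int bits = c - '0';
            sb.append((bits & 4) != 0 ? 'r' : '-'); // read bit
            sb.append((bits & 2) != 0 ? 'w' : '-'); // write bit
            sb.append((bits & 1) != 0 ? 'x' : '-'); // execute bit
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String rwx = toRwx("640");
        // The JDK can turn the rwx form into a permission set usable with Files.setPosixFilePermissions
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(rwx);
        System.out.println(rwx + " (" + perms.size() + " permissions)");
    }
}
```

So `chmod=640` stores the file as `rw-r-----`: read/write for the owner, read-only for the group, nothing for others.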
diff --git a/camel-ftps.md b/camel-ftps.md
index c9a28227968d5cfac942717a91168d594901464b..d7a1a44374754970f5a8f2a0315e12ccf3569666 100644
--- a/camel-ftps.md
+++ b/camel-ftps.md
@@ -49,9 +49,9 @@ component](#ftp-component.adoc).
|fileName|Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can, for instance, consume today's file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids having to temporarily store and restore the CamelFileName header.||string|
|passiveMode|Sets passive mode connections. Default is active mode connections.|false|boolean|
|separator|Sets the path separator to be used. UNIX = Uses unix style path separator Windows = Uses windows style path separator Auto = (is default) Use existing path separator in file name|UNIX|object|
-|transferLoggingIntervalSeconds|Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations takes longer time.|5|integer|
+|transferLoggingIntervalSeconds|Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations take a longer time.|5|integer|
|transferLoggingLevel|Configure the logging level to use when logging the progress of upload and download operations.|DEBUG|object|
-|transferLoggingVerbose|Configures whether the perform verbose (fine grained) logging of the progress of upload and download operations.|false|boolean|
+|transferLoggingVerbose|Configures whether to perform verbose (fine-grained) logging of the progress of upload and download operations.|false|boolean|
|fastExistsCheck|If this option is set to true, camel-ftp will use the list file directly to check if the file exists. Since some FTP servers may not support listing the file directly, if the option is false, camel-ftp will use the old way to list the directory and check if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files.|false|boolean|
|delete|If true, the file will be deleted after it is processed successfully.|false|boolean|
|moveFailed|Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again.||string|
@@ -59,14 +59,14 @@ component](#ftp-component.adoc).
|preMove|Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order.||string|
|preSort|When pre-sort is enabled, the consumer will sort the file and directory names retrieved from the file system during polling. You may want to do this in case you need to operate on the files in sorted order. The pre-sort is executed before the consumer starts to filter and accept files to process by Camel. This option is disabled (false) by default.|false|boolean|
|recursive|If a directory, will look for files in all the sub-directories as well.|false|boolean|
-|resumeDownload|Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition the options localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads.|false|boolean|
+|resumeDownload|Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition, the options localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads.|false|boolean|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|streamDownload|Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. If enabling this option then you must set stepwise=false as both cannot be enabled at the same time.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|download|Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
-|handleDirectoryParserAbsoluteResult|Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in with absolute paths The reason for this is that some FTP servers may return file names with absolute paths, and if so then the FTP component needs to handle this by converting the returned path into a relative path.|false|boolean|
+|handleDirectoryParserAbsoluteResult|Allows you to set how the consumer will handle subfolders and files in the path if the directory parser returns absolute paths. The reason for this is that some FTP servers may return file names with absolute paths, and if so, the FTP component needs to handle this by converting the returned path into a relative path.|false|boolean|
|ignoreFileNotFoundOrPermissionError|Whether to ignore errors when trying to list files in a directory, or when downloading a file, that does not exist or fails due to a permission error. By default, when a directory or file does not exist or there is insufficient permission, an exception is thrown. Setting this option to true allows ignoring that instead.|false|boolean|
|inProgressRepository|A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to keep track of the current in-progress files being consumed. By default, a memory-based repository is used.||object|
|localWorkDirectory|When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory.||string|
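Options such as `localWorkDirectory`, `binary`, and `resumeDownload` are combined into a single endpoint URI as query parameters. A rough plain-Java sketch of that composition (hypothetical host and paths; the class is an illustration, Camel performs its own URI parsing at runtime):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: composing an FTP consumer URI from option key/value pairs.
class FtpUriSketch {
    static String build(String base, Map<String, String> options) {
        if (options.isEmpty()) return base;
        // Options are appended as key=value pairs joined with '&'
        return base + "?" + options.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("binary", "true");                       // required for resumeDownload
        opts.put("resumeDownload", "true");
        opts.put("localWorkDirectory", "/tmp/ftp-work");  // also required for resumeDownload
        System.out.println(build("ftps://user@host/orders", opts));
    }
}
```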
@@ -82,14 +82,14 @@ component](#ftp-component.adoc).
|tempFileName|The same as the tempPrefix option but offering more fine-grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example, if the fileName option includes a directory prefix: dir/finalFilename, then tempFileName is relative to that subdirectory dir.||string|
|tempPrefix|This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files.||string|
|allowNullBody|Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged.|false|boolean|
-|chmod|Allows you to set chmod on the stored file. For example chmod=640.||string|
+|chmod|Allows you to set chmod on the stored file. For example, chmod=640.||string|
|disconnectOnBatchComplete|Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server.|false|boolean|
|eagerDeleteTargetFile|Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example, you may write big files and want the target file to exist while the temp file is being written. This ensures the target file is only deleted at the very last moment, just before the temp file is renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If the option copyAndDeleteOnRenameFails is false, then an exception will be thrown if an existing file exists; if it is true, then the existing file is deleted before the move operation.|true|boolean|
|keepLastModified|Will keep the last modified timestamp from the source file (if any). Will use the FileConstants.FILE\_LAST\_MODIFIED header to locate the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled, it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|moveExistingFileStrategy|Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided||object|
|sendNoop|Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default as a validation that the connection is still valid, which allows silently re-connecting to be able to upload the file. However, if this causes problems, you can turn this option off.|true|boolean|
-|activePortRange|Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, eg 10000-19999 to include all 1xxxx ports.||string|
+|activePortRange|Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, e.g., 10000-19999 to include all 1xxxx ports.||string|
|autoCreate|Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to.|true|boolean|
|bufferSize|Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files).|131072|integer|
|connectTimeout|Sets the connect timeout for waiting for a connection to be established Used by both FTPClient and JSCH|10000|duration|
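The `activePortRange` syntax above (`minPort-maxPort`, both ends inclusive) can be sketched in plain Java as follows (illustration only; the class and method names are hypothetical, not the component's internal parsing code):

```java
// Sketch: validating the inclusive minPort-maxPort syntax used by activePortRange.
class PortRangeSketch {
    static boolean inRange(String range, int port) {
        String[] parts = range.split("-", 2);
        int min = Integer.parseInt(parts[0].trim());
        int max = Integer.parseInt(parts[1].trim());
        return port >= min && port <= max; // both bounds are inclusive
    }

    public static void main(String[] args) {
        // "10000-19999" covers all 1xxxx ports, including both endpoints
        System.out.println(inRange("10000-19999", 10000));
    }
}
```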
diff --git a/camel-geocoder.md b/camel-geocoder.md
index 4813e618b4f700508872b3e9852f36fec507fe1a..baf41a70bc978ec698f46d276d29214327b772c3 100644
--- a/camel-geocoder.md
+++ b/camel-geocoder.md
@@ -45,7 +45,7 @@ with a JSON representation of the current location.
If the option `headersOnly` is set to `true` then the message body is
left as-is, and only headers will be added to the Exchange.
-# Samples
+# Examples
In the example below, we get the latitude and longitude for Paris,
France
diff --git a/camel-git.md b/camel-git.md
index 83aca3fb8f83fea4fe16a74e04334b95baf2b650..2730470384e92cf4efc38bd736206173ee683751 100644
--- a/camel-git.md
+++ b/camel-git.md
@@ -23,7 +23,9 @@ The producer allows doing operations on a specific repository. The
consumer allows consuming commits, tags, and branches in a specific
repository.
-# Producer Example
+# Examples
+
+## Producer Example
Below is an example route of a producer that adds a file test.java to a
local repository, commits it with a specific message on the `main`
@@ -38,14 +40,14 @@ branch and then pushes it to remote repository.
.to("git:///tmp/testRepo?operation=createTag&tagName=myTag")
.to("git:///tmp/testRepo?operation=pushTag&tagName=myTag&remoteName=origin");
-# Consumer Example
+## Consumer Example
Below is an example route of a consumer that consumes commits:
from("git:///tmp/testRepo?type=commit")
.to(....)
-# Custom config file
+## Custom config file
By default, camel-git will load the `.gitconfig` file from the user home
folder. You can override this by providing your own `.gitconfig`
diff --git a/camel-github.md b/camel-github.md
index 3a5fb09e31f761c4a5cdb9386c0d6f000ffde6e9..7f160826abd66dd84d40d3cd697a093c1e218029 100644
--- a/camel-github.md
+++ b/camel-github.md
@@ -37,7 +37,9 @@ for this component:
github://endpoint[?options]
-# Configuring authentication
+# Usage
+
+## Configuring authentication
The GitHub component requires an authentication token to be configured
on either the component or endpoint level.
@@ -47,7 +49,7 @@ For example, to set it on the component:
GitHubComponent ghc = context.getComponent("github", GitHubComponent.class);
ghc.setOauthToken("mytoken");
-# Consumer Endpoints:
+## Consumer Endpoints:
@@ -56,20 +58,20 @@ For example, to set it on the component:
| Endpoint | Context | Body Type |
|----------|---------|-----------|
| pullRequest | polling | org.eclipse.egit.github.core.PullRequest |
| pullRequestComment | polling | org.eclipse.egit.github.core.Comment, or org.eclipse.egit.github.core.CommitComment (inline comment on a pull request diff) |
| tag | polling | org.eclipse.egit.github.core.RepositoryTag |
| commit | polling | org.eclipse.egit.github.core.RepositoryCommit |
-# Producer Endpoints:
+## Producer Endpoints:
@@ -102,14 +104,14 @@ style="text-align: left;">org.eclipse.egit.github.core.RepositoryCommit
-
+
-
+
pullRequestComment
String (comment text)
- GitHubPullRequest
@@ -118,13 +120,13 @@ style="text-align: left;">
org.eclipse.egit.github.core.RepositoryCommit
to another inline comment on the pull request diff. If left off, a
general comment on the pull request discussion is assumed.
-
+
closePullRequest
none
- GitHubPullRequest
(integer) (REQUIRED): Pull request number.
-
+
createIssue
String (issue body text)
- GitHubIssueTitle
diff --git a/camel-google-bigquery.md b/camel-google-bigquery.md
index 9e47fbec5b52a6b8508b93fd41b93ab2e872deaa..5332248e6ad9b4b38d5a44c080b616283a916712 100644
--- a/camel-google-bigquery.md
+++ b/camel-google-bigquery.md
@@ -57,7 +57,9 @@ Or by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS`
google-bigquery://project-id:datasetId[:tableId]?[options]
-# Producer Endpoints
+# Usage
+
+## Producer Endpoints
Producer endpoints can accept and deliver to BigQuery individual and
grouped exchanges alike. Grouped exchanges have
@@ -73,7 +75,7 @@ of maps. A payload containing a map will insert a single row, and a
payload containing a list of maps will insert a row for each entry in
the list.
-# Template tables
+## Template tables
Reference:
[https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables](https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables)
@@ -90,7 +92,7 @@ on a per-day basis:
Note it is recommended to use partitioning for this use case.
-# Partitioning
+## Partitioning
Reference:
[https://cloud.google.com/bigquery/docs/creating-partitioned-tables](https://cloud.google.com/bigquery/docs/creating-partitioned-tables)
@@ -100,7 +102,7 @@ automatically partitioned into separate tables. When inserting data a
specific partition can be specified by setting the
`GoogleBigQueryConstants.PARTITION_DECORATOR` header on the exchange.
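A partition decorator is simply the table name with a `$yyyyMMdd` suffix selecting one daily partition. A minimal plain-JDK sketch of building the value that would go into the `GoogleBigQueryConstants.PARTITION_DECORATOR` header (table name hypothetical):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch: building a BigQuery partition decorator such as "events$20240101".
// The "$yyyyMMdd" suffix addresses one daily partition of a partitioned table.
class PartitionDecoratorSketch {
    static String decorator(String table, LocalDate day) {
        return table + "$" + day.format(DateTimeFormatter.BASIC_ISO_DATE); // yyyyMMdd
    }

    public static void main(String[] args) {
        System.out.println(decorator("events", LocalDate.of(2024, 1, 1)));
    }
}
```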
-# Ensuring data consistency
+## Ensuring data consistency
Reference:
[https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency](https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency)
diff --git a/camel-google-pubsub-lite.md b/camel-google-pubsub-lite.md
index 08a89f21b5d790f44fe53f5c32393f211d38198b..406c0df53076628d659d59d155463ecd6213a09b 100644
--- a/camel-google-pubsub-lite.md
+++ b/camel-google-pubsub-lite.md
@@ -38,7 +38,9 @@ The Google PubSub Component uses the following URI format:
Destination Name can be either a topic or a subscription name.
-# Producer Endpoints
+# Usage
+
+## Producer Endpoints
Google PubSub Lite expects the payload to be a `byte[]` array. Producer
endpoints will send:
@@ -58,7 +60,7 @@ Lite orderingKey for the message. You can find more information on
[Using ordering
keys](https://cloud.google.com/pubsub/lite/docs/publishing#using_ordering_keys).
-# Consumer Endpoints
+## Consumer Endpoints
Google PubSub Lite will redeliver the message if it has not been
acknowledged within the time period set as a configuration option on the
@@ -67,7 +69,7 @@ subscription.
The component will acknowledge the message once exchange processing has
been completed.
-# Message Body
+## Message Body
The consumer endpoint returns the content of the message as `byte[]` -
exactly as the underlying system sends it. It is up to the route to
diff --git a/camel-google-pubsub.md b/camel-google-pubsub.md
index 99ee7928b6d63444d9813bac904bb8ec8876c4b1..e85b9c60da7f18aa73b4793dd903e21aba9d45fb 100644
--- a/camel-google-pubsub.md
+++ b/camel-google-pubsub.md
@@ -27,7 +27,9 @@ The Google Pubsub Component uses the following URI format:
Destination Name can be either a topic or a subscription name.
-# Producer Endpoints
+# Usage
+
+## Producer Endpoints
Producer endpoints can accept and deliver to PubSub individual and
grouped exchanges alike. Grouped exchanges have
@@ -60,7 +62,7 @@ messages](https://cloud.google.com/pubsub/docs/ordering).
Once the exchange has been delivered to PubSub, the PubSub Message ID
will be assigned to the header `GooglePubsubConstants.MESSAGE_ID`.
-# Consumer Endpoints
+## Consumer Endpoints
Google PubSub will redeliver the message if it has not been acknowledged
within the time period set as a configuration option on the
@@ -77,13 +79,13 @@ header `GooglePubsubConstants.ACK_ID`. If the header is removed or
tampered with, the ack will fail and the message will be redelivered
again after the ack deadline.
-# Message Body
+## Message Body
The consumer endpoint returns the content of the message as `byte[]`,
exactly as the underlying system sends it. It is up to the route to
convert/unmarshall the contents.
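Since the consumer hands over the raw bytes unchanged, the route typically converts them itself. A minimal plain-Java sketch of that conversion (assuming the publisher sent UTF-8 text; class name hypothetical):

```java
import java.nio.charset.StandardCharsets;

// Sketch: the kind of conversion a route must do itself, because the
// consumer delivers the raw byte[] payload exactly as published.
class PubsubBodySketch {
    static String toText(byte[] body) {
        // Assumes the publisher sent UTF-8 text; binary payloads need their own decoding
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(toText("hello".getBytes(StandardCharsets.UTF_8)));
    }
}
```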
-# Authentication Configuration
+## Authentication Configuration
By default, this component acquires credentials using
`GoogleCredentials.getApplicationDefault()`. This behavior can be
@@ -92,7 +94,7 @@ requests to Google API will be made without authentication details. This
is only desirable when developing against an emulator. This behavior can
be altered by supplying a path to a service account key file.
-# Rollback and Redelivery
+## Rollback and Redelivery
The rollback for Google PubSub relies on the idea of the Acknowledgement
Deadline - the time period where Google PubSub expects to receive the
@@ -113,7 +115,7 @@ acknowledgement deadline explicitly for the rollback by setting the
message header `GooglePubsubConstants.ACK_DEADLINE` to the value in
seconds.
-# Manual Acknowledgement
+## Manual Acknowledgement
By default, the PubSub consumer will acknowledge messages once the
exchange has been processed, or negative-acknowledge them if the
diff --git a/camel-google-secret-manager.md b/camel-google-secret-manager.md
index aaad199e36e28441246051e42b8b280138f050c1..d882260866f220d5bb52d6636bed44c4018b9b36 100644
--- a/camel-google-secret-manager.md
+++ b/camel-google-secret-manager.md
@@ -75,7 +75,12 @@ You can also configure the credentials in the `application.properties`
file such as:
camel.vault.gcp.useDefaultInstance = true
- camel.vault.aws.projectId = region
+ camel.vault.gcp.projectId = region
+
+`camel.vault.gcp` configuration only applies to the Google Secret
+Manager properties function (e.g., when resolving properties). When using
+the `operation` option to create, get, or list secrets etc., you should
+provide the usual options for connecting to GCP services.
At this point you’ll be able to reference a property in the following
way by using `gcp:` as prefix in the `{{ }}` syntax:
@@ -121,7 +126,7 @@ example:
-
+
@@ -133,7 +138,7 @@ is not present on GCP Secret Manager:
-
+
@@ -168,7 +173,7 @@ exist.
-
+
@@ -195,7 +200,7 @@ With Environment variables:
or as plain Camel main properties:
camel.vault.gcp.useDefaultInstance = true
- camel.vault.aws.projectId = projectId
+ camel.vault.gcp.projectId = projectId
Or by specifying a path to a service account key file, instead of using
the default instance.
@@ -234,6 +239,96 @@ permissions to do operation at secret management level, (for example,
accessing the secret payload, or being admin of secret manager service
and also have permission over the Pubsub service)
+## Automatic `CamelContext` reloading on Secret Refresh: required infrastructure creation
+
+You’ll need to install the gcloud CLI from
+[https://cloud.google.com/sdk/docs/install](https://cloud.google.com/sdk/docs/install)
+
+Once the CLI has been installed, we can proceed to log in and set up
+the project with the following commands:
+
+```
+gcloud auth login
+```
+
+and
+
+```
+gcloud projects create <project-id> --name="GCP Secret Manager Refresh"
+```
+
+The project will need a service identity for using the Secret Manager
+service, which we can obtain through the command:
+
+```
+gcloud beta services identity create --service "secretmanager.googleapis.com" --project <project-id>
+```
+
+The latter command will provide a service account name that we need to
+export:
+
+```
+export SM_SERVICE_ACCOUNT="service-…."
+```
+
+Since we want to have notifications about events related to a specific
+secret through a Google Pubsub topic we’ll need to create a topic for
+this purpose with the following command:
+
+```
+gcloud pubsub topics create "projects/<project-id>/topics/pubsub-gcp-sec-refresh"
+```
+
+The service account will need Secret Manager authorization to publish
+messages on the topic just created, so we’ll need to add an IAM policy
+binding with the following command:
+
+```
+gcloud pubsub topics add-iam-policy-binding "projects/<project-id>/topics/pubsub-gcp-sec-refresh" --member "serviceAccount:${SM_SERVICE_ACCOUNT}" --role "roles/pubsub.publisher"
+```
+
+We now need to create a subscription to the pubsub-gcp-sec-refresh
+topic just created; we’ll call it sub-gcp-sec-refresh:
+
+```
+gcloud pubsub subscriptions create "projects/<project-id>/subscriptions/sub-gcp-sec-refresh" --topic "projects/<project-id>/topics/pubsub-gcp-sec-refresh"
+```
+
+Now we need to create a service account for running our application:
+
+```
+gcloud iam service-accounts create gcp-sec-refresh-sa --description="GCP Sec Refresh SA" --project <project-id>
+```
+
+Let’s give the SA an owner role:
+
+```
+gcloud projects add-iam-policy-binding <project-id> --member="serviceAccount:gcp-sec-refresh-sa@<project-id>.iam.gserviceaccount.com" --role="roles/owner"
+```
+
+Now we should create a service account key file for the just-created SA:
+
+```
+gcloud iam service-accounts keys create <file-name>.json --iam-account=gcp-sec-refresh-sa@<project-id>.iam.gserviceaccount.com
+```
+
+Let’s enable the Secret Manager API for our project:
+
+```
+gcloud services enable secretmanager.googleapis.com --project <project-id>
+```
+
+The PubSub API also needs to be enabled:
+
+```
+gcloud services enable pubsub.googleapis.com --project <project-id>
+```
+
+If needed, also enable the Billing API.
+
+Now it’s time to create our secret, with topic notification:
+
+```
+gcloud secrets create <secret-name> --topics=projects/<project-id>/topics/pubsub-gcp-sec-refresh --project=<project-id>
+```
+
+And let’s add the value:
+
+```
+gcloud secrets versions add <secret-name> --data-file=<file-path> --project=<project-id>
+```
+
+You could now use the projectId and the service account json file to
+recover the secret.
+
## Google Secret Manager Producer operations
Google Functions component provides the following operation on the
diff --git a/camel-google-sheets-stream.md b/camel-google-sheets-stream.md
index 74d334e166eda1665467e995d1e7634f86da8428..3dead9368e97727b9f1db1cc0ad158d5d9f0a789 100644
--- a/camel-google-sheets-stream.md
+++ b/camel-google-sheets-stream.md
@@ -55,19 +55,19 @@ header, with one of the enum values:
-
+
Header
Enum
Description
-
+
CamelGoogleSheets.ValueInputOption
RAW
The values the user has entered will
not be parsed and will be stored as-is.
-
+
CamelGoogleSheets.ValueInputOption
USER_ENTERED
diff --git a/camel-google-sheets.md b/camel-google-sheets.md
index dc70e2dcdc0d60c8119677ba7048d861d7d6edfa..3a81a41c4c5664becb461f8ba92bb72c58252d57 100644
--- a/camel-google-sheets.md
+++ b/camel-google-sheets.md
@@ -61,19 +61,19 @@ header, with one of the enum values:
-
+
Header
Enum
Description
-
+
CamelGoogleSheets.ValueInputOption
RAW
The values the user has entered will
not be parsed and will be stored as-is.
-
+
CamelGoogleSheets.ValueInputOption
USER_ENTERED
diff --git a/camel-google-storage.md b/camel-google-storage.md
index 9466423ca66c5e96fb48070b940bb5f4797e8478..bda4aabc2e241a9fce38b2b0c2801eabb14174ff 100644
--- a/camel-google-storage.md
+++ b/camel-google-storage.md
@@ -202,14 +202,14 @@ time for the created link through the header
DOWNLOAD\_LINK\_EXPIRATION\_TIME. If not specified, by default it is 5
minutes.
-# Bucket Auto creation
+## Bucket Auto creation
With the option `autoCreateBucket`, users can avoid the automatic
creation of a bucket in case it doesn’t exist. The default for this
option is `true`. If set to false, any operation on a non-existent
bucket won’t be successful and an error will be returned.
-# MoveAfterRead consumer option
+## MoveAfterRead consumer option
In addition to `deleteAfterRead`, another option has been added:
`moveAfterRead`. With this option enabled, the consumed object will be
diff --git a/camel-google-summary.md b/camel-google-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..c509560ea0b20f44650e0b2ba87497f6debf6b0d
--- /dev/null
+++ b/camel-google-summary.md
@@ -0,0 +1,12 @@
+# Google-summary.md
+
+The **google-** components allow you to work with [G
+Suite](https://gsuite.google.co.in/). Google offers a great palette of
+different components, such as Calendar, Mail, Sheets, and Drive. The
+main reason to use Google is the G Suite features.
+
+# Google components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Google*,descriptionformat=description\]
diff --git a/camel-grok-dataformat.md b/camel-grok-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..58e408347272dba2c009f269ee2cd51617e5f2a2
--- /dev/null
+++ b/camel-grok-dataformat.md
@@ -0,0 +1,80 @@
+# Grok-dataformat.md
+
+**Since Camel 3.0**
+
+This component provides a data format for processing inputs with grok
+patterns. Grok patterns are used to process unstructured data into
+structured objects - `List<Map<String, Object>>`.
+
+This component is based on the [Java Grok
+library](https://github.com/thekrakken/java-grok).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-grok</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Usage
+
+**Extract all IP addresses from input**
+
+ from("direct:in")
+ .unmarshal().grok("%{IP:ip}")
+ .to("log:out");
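A grok pattern such as `%{IP:ip}` ultimately expands to a named regular expression. As a rough plain-JDK illustration of that idea (the class name and the simplified IPv4 pattern below are mine, not the Java Grok library's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IpExtractSketch {

    // Simplified stand-in for the pattern that %{IP:ip} resolves to
    private static final Pattern IP = Pattern.compile(
            "(?<ip>(?:\\d{1,3}\\.){3}\\d{1,3})");

    public static List<String> extractIps(String input) {
        List<String> ips = new ArrayList<>();
        Matcher m = IP.matcher(input);
        while (m.find()) {
            ips.add(m.group("ip")); // named capture, like the grok alias "ip"
        }
        return ips;
    }

    public static void main(String[] args) {
        System.out.println(extractIps("from 10.0.0.1 to 192.168.1.5"));
    }
}
```

The real library additionally composes patterns from a registry of named sub-patterns, which is what makes `%{IP:ip}` so much shorter than the raw regex.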
+
+**Parse Apache logs and process only 4xx responses**
+
+ from("file://apacheLogs")
+    .unmarshal().grok("%{COMBINEDAPACHELOG}")
+ .split(body()).filter(simple("${body[response]} starts with '4'"))
+ .to("log:4xx")
+
+## Preregistered patterns
+
+This component comes with preregistered patterns, which are based on
+Logstash patterns. All [Java Grok Default
+Patterns](https://github.com/thekrakken/java-grok/tree/master/src/main/resources/patterns/patterns)
+are preregistered and can therefore be used without manual registration.
+
+## Custom patterns
+
+Camel Grok DataFormat supports pluggable patterns, which are auto-loaded
+from the Camel registry. You can register patterns with the Java DSL and
+Spring DSL:
+
+Java DSL
+public class MyRouteBuilder extends RouteBuilder {
+
+ @Override
+ public void configure() throws Exception {
+ bindToRegistry("myCustomPatternBean", new GrokPattern("FOOBAR", "foo|bar"));
+
+ from("direct:in")
+ .unmarshal().grok("%{FOOBAR:fooBar}")
+ .to("log:out");
+ }
+ }
+
+Spring XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Grok Data format Options
diff --git a/camel-groovy-dsl.md b/camel-groovy-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..cff417079fa75a15eb7c99fd77ae733be3fde230
--- /dev/null
+++ b/camel-groovy-dsl.md
@@ -0,0 +1,5 @@
+# Groovy-dsl.md
+
+**Since Camel 3.9**
+
+See [DSL](#manual:ROOT:dsl.adoc)
diff --git a/camel-groovy-language.md b/camel-groovy-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f703dcdff3a087645a6bb37c4008ac9591c72cd
--- /dev/null
+++ b/camel-groovy-language.md
@@ -0,0 +1,171 @@
+# Groovy-language.md
+
+**Since Camel 1.3**
+
+Camel has support for using [Groovy](http://www.groovy-lang.org/).
+
+For example, you can use Groovy in a
+[Predicate](#manual::predicate.adoc) with the [Message
+Filter](#eips:filter-eip.adoc) EIP.
+
+ groovy("someGroovyExpression")
+
+# Groovy Options
+
+# Usage
+
+## Groovy Context
+
+Camel will provide exchange information in the Groovy context (just a
+`Map`). The `Exchange` is transferred as:
+
+
+
+
+
+
+
+
+
+
+
+body
+The message body.
+
+
+header
+The headers of the message.
+
+
+headers
+The headers of the message.
+
+
+variable
+The exchange variables
+
+
+variables
+The exchange variables
+
+
+exchangeProperty
+The exchange properties.
+
+
+exchangeProperties
+The exchange properties.
+
+
+exchange
+The Exchange
+itself.
+
+
+camelContext
+The Camel Context.
+
+
+exception
+If the exchange failed then this is the
+caused exception.
+
+
+request
+The message.
+
+
+response
+Deprecated The Out
+message (only for InOut message exchange pattern).
+
+
+log
+Can be used for logging purposes such
+as log.info('Using body: {}', body).
+
+
+
+
+## How to get the result from multiple statements script
+
+The Groovy script engine's evaluate method returns `null` when it runs
+a script with multiple statements. Camel therefore looks up the script
+result using the key `result` from the variable bindings. If your
+script has multiple statements, make sure to assign the intended return
+value to the `result` variable.
+
+ bar = "baz"
+ // some other statements ...
+ // camel take the result value as the script evaluation result
+ result = body * 2 + 1
+
+## Customizing Groovy Shell
+
+For very special use-cases you may need to use a custom `GroovyShell`
+instance in your Groovy expressions. To provide the custom
+`GroovyShell`, add an implementation of the
+`org.apache.camel.language.groovy.GroovyShellFactory` SPI interface to
+the Camel registry.
+
+    import groovy.lang.GroovyShell;
+    import org.apache.camel.Exchange;
+    import org.apache.camel.language.groovy.GroovyShellFactory;
+    import org.codehaus.groovy.control.CompilerConfiguration;
+    import org.codehaus.groovy.control.customizers.ImportCustomizer;
+
+    public class CustomGroovyShellFactory implements GroovyShellFactory {
+
+ public GroovyShell createGroovyShell(Exchange exchange) {
+ ImportCustomizer importCustomizer = new ImportCustomizer();
+ importCustomizer.addStaticStars("com.example.Utils");
+ CompilerConfiguration configuration = new CompilerConfiguration();
+ configuration.addCompilationCustomizers(importCustomizer);
+ return new GroovyShell(configuration);
+ }
+
+ }
+
+Camel will then use your custom GroovyShell instance (containing your
+custom static imports), instead of the default one.
+
+## Loading script from external resource
+
+You can externalize the script and have Camel load it from a resource
+such as `"classpath:"`, `"file:"`, or `"http:"`. This is done using the
+following syntax: `"resource:scheme:location"`, e.g., to refer to a file
+on the classpath you can do:
+
+ .setHeader("myHeader").groovy("resource:classpath:mygroovy.groovy")
+
+## Dependencies
+
+To use Groovy in your Camel routes, you need to add a
+dependency on **camel-groovy**.
+
+If you use Maven you could just add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release (see
+the download page for the latest versions).
+
+
+ org.apache.camel
+ camel-groovy
+ x.x.x
+
+
+# Examples
+
+In the example below, we use a Groovy script as a predicate in the
+message filter, to determine if any line items are over $100:
+
+Java
+from("queue:foo")
+.filter(groovy("body.lineItems.any { i -> i.value > 100 }"))
+.to("queue:bar")
+
+XML DSL
+
+
+
+body.lineItems.any { i -> i.value > 100 }
+
+
+
diff --git a/camel-grpc.md b/camel-grpc.md
index 1ef4c99fce0c5f32094bdbfca9fb62468bd7e084..d3cf68b852ed8fc6a283cbb94e7e4b3a9b11fc20 100644
--- a/camel-grpc.md
+++ b/camel-grpc.md
@@ -23,7 +23,9 @@ for this component:
grpc:host:port/service[?options]
-# Transport security and authentication support
+# Usage
+
+## Transport security and authentication support
The following [authentication](https://grpc.io/docs/guides/auth.html)
mechanisms are built-in to gRPC and available in this component:
@@ -52,7 +54,7 @@ combinations must be configured:
-
+
-
+
1
SSL/TLS
negotiationType
TLS
Required
-
+
keyCertChainResource
Required
-
+
keyResource
Required
-
+
keyPassword
Optional
-
+
trustCertCollectionResource
Optional
-
+
2
Token-based authentication with
Google API
@@ -104,21 +106,21 @@ Google API
GOOGLE
Required
-
+
negotiationType
TLS
Required
-
+
serviceAccountResource
Required
-
+
3
Custom JSON Web Token
implementation authentication
@@ -126,7 +128,7 @@ implementation authentication
JWT
Required
-
+
negotiationType
@@ -134,7 +136,7 @@ implementation authentication
Optional. TLS/SSL is not checked for
this type, but it is strongly recommended.
-
+
jwtAlgorithm
@@ -142,21 +144,21 @@ this type, but strongly recommended.
(HMAC384,HMAC512)
Optional
-
+
jwtSecret
Required
-
+
jwtIssuer
Optional
-
+
jwtSubject
@@ -166,7 +168,7 @@ this type, but strongly recommended.
-# gRPC producer resource type mapping
+## gRPC producer resource type mapping
The table below shows the types of objects in the message body,
depending on the types (simple or stream) of incoming and outgoing
@@ -183,7 +185,7 @@ incoming stream parameter in asynchronous style is not allowed.
-
+
-
+
synchronous
simple
simple
Object
Object
-
+
synchronous
simple
stream
Object
List<Object>
-
+
synchronous
stream
simple
not allowed
not allowed
-
+
synchronous
stream
stream
not allowed
not allowed
-
+
asynchronous
simple
simple
Object
List<Object>
-
+
asynchronous
simple
stream
Object
List<Object>
-
+
asynchronous
stream
simple
Object or List<Object>
List<Object>
-
+
asynchronous
stream
stream
@@ -251,14 +253,14 @@ incoming stream parameter in asynchronous style is not allowed.
-# gRPC Proxy
+## gRPC Proxy
It is not possible to create a universal proxy-route for all methods, so
you need to divide your gRPC service into several services by method’s
type: unary, server streaming, client streaming and bidirectional
streaming.
-## Unary
+### Unary
For unary requests, it is enough to write the following code:
diff --git a/camel-gson-dataformat.md b/camel-gson-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..14f8d7a4857546d63a54bf6ba988ca76ffda5351
--- /dev/null
+++ b/camel-gson-dataformat.md
@@ -0,0 +1,27 @@
+# Gson-dataformat.md
+
+**Since Camel 2.10**
+
+Gson is a Data Format that uses the [Gson
+Library](https://github.com/google/gson)
+
+ from("activemq:My.Queue").
+ marshal().json(JsonLibrary.Gson).
+ to("mqseries:Another.Queue");
+
+# Gson Options
+
+# Dependencies
+
+To use Gson in your Camel routes, you need to add a dependency on
+**camel-gson**, which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release.
+
+
+ org.apache.camel
+ camel-gson
+ x.x.x
+
+
diff --git a/camel-guaranteed-delivery.md b/camel-guaranteed-delivery.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a026b7a3ce4b9a927e621416622384b4dea11d5
--- /dev/null
+++ b/camel-guaranteed-delivery.md
@@ -0,0 +1,47 @@
+# Guaranteed-delivery.md
+
+Camel supports the [Guaranteed
+Delivery](http://www.enterpriseintegrationpatterns.com/GuaranteedMessaging.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) using
+among others the following components:
+
+- [File](#ROOT:file-component.adoc) for using file systems as a
+ persistent store of messages
+
+- [JMS](#ROOT:jms-component.adoc) when using persistent delivery, the
+ default, for working with JMS queues and topics for high
+ performance, clustering and load balancing
+
+- [Kafka](#ROOT:kafka-component.adoc) when using persistent delivery
+ for working with streaming events for high performance, clustering
+ and load balancing
+
+- [JPA](#ROOT:jpa-component.adoc) for using a database as a
+ persistence layer, or use any of the other database components such
+ as [SQL](#ROOT:sql-component.adoc),
+ [JDBC](#ROOT:jdbc-component.adoc), or
+ [MyBatis](#ROOT:mybatis-component.adoc)
+
+
+
+
+
+# Example
+
+The following example illustrates the use of [Guaranteed
+Delivery](http://www.enterpriseintegrationpatterns.com/GuaranteedMessaging.html)
+within the [JMS](#ROOT:jms-component.adoc) component.
+
+By default, a message is not considered successfully delivered until the
+recipient has persisted the message locally guaranteeing its receipt in
+the event the destination becomes unavailable.
+
+Java
+from("direct:start")
+.to("jms:queue:foo");
+
+XML
+
+
+
+
diff --git a/camel-guava-eventbus.md b/camel-guava-eventbus.md
index 5cb67f169e5e37a63001b4c2ebe0d8f225c2b1f5..a81abea73c7825becd211c90adf223cde5c5850b 100644
--- a/camel-guava-eventbus.md
+++ b/camel-guava-eventbus.md
@@ -70,7 +70,7 @@ forward body of the Camel exchanges to the Guava `EventBus` instance.
}
});
-# DeadEvent considerations
+## DeadEvent considerations
Keep in mind that due to the limitations caused by the design of the
Guava EventBus, you cannot specify event class to be received by the
@@ -110,7 +110,7 @@ follows.
from("guava-eventbus:busName?listenerInterface=com.example.CustomListener").to("seda:queue");
-# Consuming multiple types of events
+## Consuming multiple types of events
To define multiple types of events to be consumed by Guava EventBus
consumer use `listenerInterface` endpoint option, as listener interface
diff --git a/camel-gzipDeflater-dataformat.md b/camel-gzipDeflater-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..82b6933685c4082e1f4511cc7318ad02685f11f4
--- /dev/null
+++ b/camel-gzipDeflater-dataformat.md
@@ -0,0 +1,46 @@
+# GzipDeflater-dataformat.md
+
+**Since Camel 2.0**
+
+The GZip Deflater Data Format is a message compression and decompression
+format. It uses the same deflating algorithm used in the Zip data
+format, although some additional headers are provided. This format is
+produced by the popular `gzip`/`gunzip` tools. Messages marshalled using
+GZip compression can be unmarshalled using GZip decompression just prior
+to being consumed at the endpoint. The compression capability is quite
+useful when you deal with large XML and text-based payloads or when you
+read messages previously compressed using the `gzip` tool.
+
+This dataformat is not for working with gzip files such as uncompressing
+and building gzip files. Instead, use the
+[zipfile](#dataformats:zipFile-dataformat.adoc) dataformat.
+
+# Options
+
+# Marshal
+
+In this example, we marshal a regular text/XML payload to a compressed
+payload employing the gzip compression format and send it to an
+ActiveMQ queue called MY\_QUEUE.
+
+ from("direct:start").marshal().gzipDeflater().to("activemq:queue:MY_QUEUE");
+
+# Unmarshal
+
+In this example we unmarshal a gzipped payload from an ActiveMQ queue
+called MY\_QUEUE to its original format, and forward it for processing
+to the `UnGZippedMessageProcessor`.
+
+ from("activemq:queue:MY_QUEUE").unmarshal().gzipDeflater().process(new UnGZippedMessageProcessor());
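Under the hood, the marshal/unmarshal pair runs the payload through gzip streams. The JDK's own `java.util.zip` classes can sketch the round trip outside any Camel route (class and method names here are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // Compress, adding the gzip header and trailer (what marshal does)
    static byte[] gzip(byte[] plain) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(plain);
        }
        return bos.toByteArray();
    }

    // Decompress back to the original bytes (what unmarshal does)
    static byte[] gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "<order id=\"1\">large XML payload</order>".getBytes(StandardCharsets.UTF_8);
        byte[] packed = gzip(original);
        System.out.println(new String(gunzip(packed), StandardCharsets.UTF_8));
    }
}
```

The round trip restores the exact original bytes, which is why a gzipped payload can be unmarshalled just before the consuming endpoint with no loss.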
+
+# Dependencies
+
+If you use Maven you could add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release (see
+the download page for the latest versions).
+
+
+ org.apache.camel
+ camel-zip-deflater
+ x.x.x
+
diff --git a/camel-hashicorp-vault.md b/camel-hashicorp-vault.md
index 23b0cab57195ca205e32d7d29d82d6f96bab5658..5c00f59b36fd635148b5c98bd1c101c5e2a50b2a 100644
--- a/camel-hashicorp-vault.md
+++ b/camel-hashicorp-vault.md
@@ -16,6 +16,8 @@ Vault](https://www.vaultproject.io/).
+# Examples
+
## Using Hashicorp Vault Property Function
To use this function, you’ll need to provide credentials for Hashicorp
@@ -34,6 +36,11 @@ file such as:
camel.vault.hashicorp.port = port
camel.vault.hashicorp.scheme = scheme
+The `camel.vault.hashicorp` configuration only applies to the Hashicorp
+Vault properties function (e.g., when resolving properties). When using
+the `operation` option to create, get, list secrets etc., you should
+provide the `host`, `port`, `scheme` (if required) \& `token` options.
+
At this point, you’ll be able to reference a property in the following
way:
@@ -78,7 +85,7 @@ engine, like for example:
-
+
@@ -90,7 +97,7 @@ is not present on Hashicorp Vault instance, in the *secret* engine:
-
+
@@ -126,7 +133,7 @@ exist (in the *secret* engine).
-
+
diff --git a/camel-hazelcast-atomicvalue.md b/camel-hazelcast-atomicvalue.md
index a397e71ec74cd78553c9fe3b150831cb8e0c1262..68b398a98989b277431ef8e038ac2720ef5d70e9 100644
--- a/camel-hazelcast-atomicvalue.md
+++ b/camel-hazelcast-atomicvalue.md
@@ -27,7 +27,7 @@ The operations for this producer are:
- getAndAdd
-## Sample for **set**:
+## Example for **set**:
Java DSL
from("direct:set")
@@ -46,7 +46,7 @@ Spring XML
Provide the value to set inside the message body (here the value is 10):
`template.sendBody("direct:set", 10);`
-## Sample for **get**:
+## Example for **get**:
Java DSL
from("direct:get")
@@ -65,7 +65,7 @@ Spring XML
You can get the number with
`long body = template.requestBody("direct:get", null, Long.class);`.
-## Sample for **increment**:
+## Example for **increment**:
Java DSL
from("direct:increment")
@@ -84,7 +84,7 @@ Spring XML
The actual value (after increment) will be provided inside the message
body.
-## Sample for **decrement**:
+## Example for **decrement**:
Java DSL
from("direct:decrement")
@@ -103,7 +103,7 @@ Spring XML
The actual value (after decrement) will be provided inside the message
body.
-## Sample for **destroy**
+## Example for **destroy**
Java DSL
from("direct:destroy")
diff --git a/camel-hazelcast-list.md b/camel-hazelcast-list.md
index 0c6c0fd9766c855e92aef2e0090e051de29cdfcb..7e0c70c36940bd6e7d70c264f52a9f8e6cc550c1 100644
--- a/camel-hazelcast-list.md
+++ b/camel-hazelcast-list.md
@@ -30,26 +30,26 @@ The list producer provides eight operations:
- retainAll
-## Sample for **add**:
+## Example for **add**:
from("direct:add")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD))
.toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX);
-## Sample for **get**:
+## Example for **get**:
from("direct:get")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.GET))
.toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX)
.to("seda:out");
-## Sample for **setvalue**:
+## Example for **setvalue**:
from("direct:set")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.SET_VALUE))
.toF("hazelcast-%sbar", HazelcastConstants.LIST_PREFIX);
-## Sample for **removevalue**:
+## Example for **removevalue**:
from("direct:removevalue")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE))
diff --git a/camel-hazelcast-map.md b/camel-hazelcast-map.md
index fefc8815af2f0f28aafbaa568f046aafb2421ef7..15c9ebe11d7a79b54880f26515a24bfaaf7b0945 100644
--- a/camel-hazelcast-map.md
+++ b/camel-hazelcast-map.md
@@ -48,7 +48,7 @@ You can call the samples with:
template.sendBodyAndHeader("direct:[put|get|update|delete|query|evict]", "my-foo", HazelcastConstants.OBJECT_ID, "4711");
-## Sample for **put**:
+## Example for **put**:
Java DSL
from("direct:put")
@@ -88,7 +88,7 @@ Spring XML
-## Sample for **get**:
+## Example for **get**:
Java DSL
from("direct:get")
@@ -106,7 +106,7 @@ Spring XML
-## Sample for **update**:
+## Example for **update**:
Java DSL
from("direct:update")
@@ -122,7 +122,7 @@ Spring XML
-## Sample for **delete**:
+## Example for **delete**:
Java DSL
from("direct:delete")
@@ -138,7 +138,7 @@ Spring XML
-## Sample for **query**
+## Example for **query**
Java DSL
from("direct:query")
@@ -180,27 +180,27 @@ Header Variables inside the response message:
-
+
-
+
CamelHazelcastListenerTime
Long
time of the event in millis
-
+
CamelHazelcastListenerType
String
the map consumer sets here
"cachelistener"
-
+
CamelHazelcastListenerAction
String
@@ -208,20 +208,20 @@ style="text-align: left;">CamelHazelcastListenerAction
updated , envicted and
removed ).
-
+
CamelHazelcastObjectId
String
the oid of the object
-
+
CamelHazelcastCacheName
String
the name of the cache (e.g.,
"foo")
-
+
CamelHazelcastCacheType
String
diff --git a/camel-hazelcast-multimap.md b/camel-hazelcast-multimap.md
index 9535afe265c6d12778fee501e058ca036ade9bd3..0e1efd248024c5c3ef70623b60a49abf01bb4b87 100644
--- a/camel-hazelcast-multimap.md
+++ b/camel-hazelcast-multimap.md
@@ -32,7 +32,7 @@ The multimap producer provides eight operations:
- valueCount
-## Sample for **put**:
+## Example for **put**:
Java DSL
from("direct:put")
@@ -49,7 +49,7 @@ Spring XML
-## Sample for **removevalue**:
+## Example for **removevalue**:
Java DSL
from("direct:removevalue")
@@ -71,7 +71,7 @@ inside the message body. If you have a multimap object
`{key: "4711", values: {"my-foo", "my-bar"}}` you have to put
`my-foo` inside the message body to remove the `my-foo` value.
-## Sample for **get**:
+## Example for **get**:
Java DSL
from("direct:get")
@@ -90,7 +90,7 @@ Spring XML
-## Sample for **delete**:
+## Example for **delete**:
Java DSL
from("direct:delete")
diff --git a/camel-hazelcast-queue.md b/camel-hazelcast-queue.md
index 406cd8cd1e91df584f5cc1ab59655bde9114d84c..1254de03db90928ec42c13f73ef95c4254a1e25c 100644
--- a/camel-hazelcast-queue.md
+++ b/camel-hazelcast-queue.md
@@ -36,68 +36,68 @@ The queue producer provides 12 operations:
- retainAll
-## Sample for **add**:
+## Example for **add**:
from("direct:add")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.ADD))
.toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
-## Sample for **put**:
+## Example for **put**:
from("direct:put")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUT))
.toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
-## Sample for **poll**:
+## Example for **poll**:
from("direct:poll")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.POLL))
.toF("hazelcast:%sbar", HazelcastConstants.QUEUE_PREFIX);
-## Sample for **peek**:
+## Example for **peek**:
from("direct:peek")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PEEK))
.toF("hazelcast:%sbar", HazelcastConstants.QUEUE_PREFIX);
-## Sample for **offer**:
+## Example for **offer**:
from("direct:offer")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.OFFER))
.toF("hazelcast:%sbar", HazelcastConstants.QUEUE_PREFIX);
-## Sample for **removevalue**:
+## Example for **removevalue**:
from("direct:removevalue")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_VALUE))
.toF("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX);
-## Sample for **remaining capacity**:
+## Example for **remaining capacity**:
from("direct:remaining-capacity").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMAINING_CAPACITY)).to(
String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX));
-## Sample for **remove all**:
+## Example for **remove all**:
from("direct:removeAll").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_ALL)).to(
String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX));
-## Sample for **remove if**:
+## Example for **remove if**:
from("direct:removeIf").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.REMOVE_IF)).to(
String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX));
-## Sample for **drain to**:
+## Example for **drain to**:
from("direct:drainTo").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.DRAIN_TO)).to(
String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX));
-## Sample for **take**:
+## Example for **take**:
from("direct:take").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.TAKE)).to(
String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX));
-## Sample for **retain all**:
+## Example for **retain all**:
from("direct:retainAll").setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.RETAIN_ALL)).to(
String.format("hazelcast-%sbar", HazelcastConstants.QUEUE_PREFIX));
diff --git a/camel-hazelcast-replicatedmap.md b/camel-hazelcast-replicatedmap.md
index c4f59de3fead7ae4df19489f2c64f51d40a4dfd1..7da145b098c77580fbc6c2191e2417b7eb54d0d4 100644
--- a/camel-hazelcast-replicatedmap.md
+++ b/camel-hazelcast-replicatedmap.md
@@ -25,7 +25,7 @@ The replicatedmap producer provides 6 operations:
- containsValue
-## Sample for **put**:
+## Example for **put**:
Java DSL
from("direct:put")
@@ -42,7 +42,7 @@ Spring XML
-## Sample for **get**:
+## Example for **get**:
Java DSL
from("direct:get")
@@ -61,7 +61,7 @@ Spring XML
-## Sample for **delete**:
+## Example for **delete**:
Java DSL
from("direct:delete")
@@ -113,27 +113,27 @@ Header Variables inside the response message:
-
+
-
+
CamelHazelcastListenerTime
Long
time of the event in millis
-
+
CamelHazelcastListenerType
String
the map consumer sets here
"cachelistener"
-
+
CamelHazelcastListenerAction
String
@@ -141,20 +141,20 @@ style="text-align: left;">CamelHazelcastListenerAction
added and removed (and soon
envicted )
-
+
CamelHazelcastObjectId
String
the oid of the object
-
+
CamelHazelcastCacheName
String
the name of the cache (e.g.,
"foo")
-
+
CamelHazelcastCacheType
String
diff --git a/camel-hazelcast-ringbuffer.md b/camel-hazelcast-ringbuffer.md
index 27d95435f10ea2c7ce46180eb79c26ec99d07295..c475e890dd96ffd2b371e0f3596b22537fab0fb5 100644
--- a/camel-hazelcast-ringbuffer.md
+++ b/camel-hazelcast-ringbuffer.md
@@ -24,7 +24,7 @@ The ringbuffer producer provides 5 operations:
- capacity
-## Sample for **put**:
+## Example for **put**:
Java DSL
from("direct:put")
@@ -41,7 +41,7 @@ Spring XML
-## Sample for **readonce from head**:
+## Example for **readonce from head**:
Java DSL:
diff --git a/camel-hazelcast-summary.md b/camel-hazelcast-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..0801ba9c462f40aa6cd6a74263bc12b8b277c0fe
--- /dev/null
+++ b/camel-hazelcast-summary.md
@@ -0,0 +1,119 @@
+# Hazelcast-summary.md
+
+**Since Camel 2.7**
+
+The **hazelcast-** component allows you to work with the
+[Hazelcast](http://www.hazelcast.com) distributed data grid / cache.
+Hazelcast is an in-memory data grid, entirely written in Java (single
+jar). It offers a great palette of different data stores like map,
+multimap (same key, n values), queue, list and atomic number. The main
+reason to use Hazelcast is its simple cluster support. If you have
+enabled multicast on your network, you can run a cluster with a hundred
+nodes with no extra configuration. Hazelcast can simply be configured to
+add additional features like n copies between nodes (default is 1), cache
+persistence, network configuration (if needed), near cache, eviction,
+and so on. For more information, consult the Hazelcast documentation on
+[http://www.hazelcast.com/docs.jsp](http://www.hazelcast.com/docs.jsp).
+
+# Hazelcast components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Hazelcast*,descriptionformat=description\]
+
+# Installation
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+ org.apache.camel
+ camel-hazelcast
+ x.x.x
+
+
+
+# Using hazelcast reference
+
+## By its name
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ put
+
+
+
+
+
+
+
+ get
+
+
+
+
+
+
+## By instance
+
+
+
+
+
+
+
+
+ put
+
+
+
+
+
+
+
+ get
+
+
+
+
+
+
+## Configuring HazelcastInstance on component
+
+You can also configure the Hazelcast instance on the component, which
+will then be used by all Hazelcast endpoints. In the example below, we
+set this up for the Hazelcast map component and configure Hazelcast via
+a verbose configuration.
+
+
+
+
+
+
+
+ 1234
+
+
+
+
+
+
+
+
+
diff --git a/camel-hazelcast-topic.md b/camel-hazelcast-topic.md
index ca97414db298d0a2487fd5e3c41e13648c9522ba..399434d3245149d4ab78ba922255ad3a52ff6a8d 100644
--- a/camel-hazelcast-topic.md
+++ b/camel-hazelcast-topic.md
@@ -14,7 +14,7 @@ distributed topic.
The topic producer provides only one operation (publish).
-## Sample for **publish**:
+## Example for **publish**:
from("direct:add")
.setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUBLISH))
diff --git a/camel-header-language.md b/camel-header-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..cb273f5c84d134371ccccb578d6d74bbaac37d1f
--- /dev/null
+++ b/camel-header-language.md
@@ -0,0 +1,30 @@
+# Header-language.md
+
+**Since Camel 1.5**
+
+The Header Expression Language allows you to extract values of named
+headers.
+
+# Header Options
+
+# Example usage
+
+The `recipientList` EIP can utilize a header:
+
+
+
+
+
+
+
+
+In this case, the list of recipients are contained in the header
+*myHeader*.
+
+And the same example in Java DSL:
+
+ from("direct:a").recipientList(header("myHeader"));
+
+# Dependencies
+
+The Header language is part of **camel-core**.
diff --git a/camel-headersmap.md b/camel-headersmap.md
new file mode 100644
index 0000000000000000000000000000000000000000..a53bd3f3520d047505fb464d3cb90ee9144e11c4
--- /dev/null
+++ b/camel-headersmap.md
@@ -0,0 +1,17 @@
+# Headersmap.md
+
+**Since Camel 2.20**
+
+The Headersmap component is a faster implementation of a
+case-insensitive map, which can be plugged in and used by Camel at
+runtime to give slightly faster performance for Camel message headers.
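The semantics of such a case-insensitive map can be sketched with the JDK alone. This is illustration only, not the actual `camel-headersmap` implementation; a `TreeMap` lookup is O(log n), so the sketch shows the case-insensitive behaviour, not the performance gain:

```java
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveHeadersSketch {

    // A map whose keys compare equal regardless of case, as Camel
    // requires for message headers (e.g. Content-Type vs content-type)
    static Map<String, Object> newHeaders() {
        return new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
    }

    public static void main(String[] args) {
        Map<String, Object> headers = newHeaders();
        headers.put("Content-Type", "text/xml");
        // Any casing reaches the same entry
        System.out.println(headers.get("CONTENT-TYPE")); // text/xml
    }
}
```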
+
+# Usage
+
+## Auto-detection from classpath
+
+To use this implementation, all you need to do is add the
+`camel-headersmap` dependency to the classpath, and Camel should
+auto-detect it on startup and log as follows:
+
+ Detected and using HeadersMapFactory: camel-headersmap
diff --git a/camel-hl7-dataformat.md b/camel-hl7-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..bc67fc501a70075b4dc1e0dc63e12f2b6b484b90
--- /dev/null
+++ b/camel-hl7-dataformat.md
@@ -0,0 +1,419 @@
+# Hl7-dataformat.md
+
+**Since Camel 2.0**
+
+The HL7 component is used for working with the HL7 MLLP protocol and
+[HL7 v2
+messages](http://www.hl7.org/implement/standards/product_brief.cfm?product_id=185)
+using the [HAPI library](https://hapifhir.github.io/hapi-hl7v2/).
+
+This component supports the following:
+
+- HL7 MLLP codec for [Mina](#ROOT:mina-component.adoc)
+
+- HL7 MLLP codec for [Netty](#ROOT:netty-component.adoc)
+
+- Type Converter from/to HAPI and String
+
+- HL7 DataFormat using the HAPI library
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+ org.apache.camel
+ camel-hl7
+ x.x.x
+
+
+
+# HL7 MLLP protocol
+
+HL7 is often used with the HL7 MLLP protocol, a text-based protocol
+over TCP sockets. This component ships with Mina and Netty codecs that
+conform to the MLLP protocol, so you can easily expose an HL7 listener
+accepting HL7 requests over the TCP transport layer. To expose
+an HL7 listener service, the [camel-mina](#ROOT:mina-component.adoc) or
+[camel-netty](#ROOT:netty-component.adoc) component is used with the
+`HL7MLLPCodec` (mina) or `HL7MLLPNettyDecoder/HL7MLLPNettyEncoder`
+(Netty).
+
+HL7 MLLP codec can be configured as follows:
+
+
+
+
+
+
+
+
+
+
+
+
+startByte
+0x0b
+The start byte spanning the HL7
+payload.
+
+
+endByte1
+0x1c
+The first end byte spanning the HL7
+payload.
+
+
+endByte2
+0x0d
+The 2nd end byte spanning the HL7
+payload.
+
+
+charset
+JVM Default
+The encoding (a charset
+name ) to use for the codec. If not provided, Camel will use the JVM
+default Charset .
+
+
+produceString
+true
+If true, the codec creates a string
+using the defined charset. If false, the codec sends a plain byte array
+into the route, so that the HL7 Data Format can determine the actual
+charset from the HL7 message content.
+
+
+convertLFtoCR
+false
+Will convert \n to
+\r (0x0d, 13 decimal) as HL7 stipulates
+\r as segment terminators. The HAPI library requires the
+use of \r.
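Putting the default byte options together, an MLLP frame is simply `0x0b` + payload + `0x1c` `0x0d`. A minimal framing sketch in plain Java (independent of the Mina/Netty codecs; the class name is mine):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class MllpFrameSketch {

    static final byte START_BYTE = 0x0b; // codec option startByte
    static final byte END_BYTE_1 = 0x1c; // codec option endByte1
    static final byte END_BYTE_2 = 0x0d; // codec option endByte2

    // Wraps an HL7 payload in an MLLP frame: <0x0b> payload <0x1c><0x0d>
    static byte[] frame(String hl7Message) {
        byte[] payload = hl7Message.getBytes(StandardCharsets.ISO_8859_1);
        ByteArrayOutputStream out = new ByteArrayOutputStream(payload.length + 3);
        out.write(START_BYTE);
        out.write(payload, 0, payload.length);
        out.write(END_BYTE_1);
        out.write(END_BYTE_2);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] framed = frame("MSH|^~\\&|SND|RCV");
        System.out.println(framed[0] == 0x0b && framed[framed.length - 1] == 0x0d); // true
    }
}
```

A receiver does the inverse: it scans for the start byte, reads until the two end bytes, and hands the payload in between to the route (as a `String` or `byte[]`, depending on `produceString`).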
+
+
+
+
+## Exposing an HL7 listener using Mina
+
+In the Spring XML file, we configure a mina endpoint to listen for HL7
+requests using TCP on port `8888`:
+
+
+
+**sync=true** indicates that this listener is synchronous and therefore
+will return an HL7 response to the caller. The HL7 codec is set up with
+**codec=#hl7codec**. Note that `hl7codec` is just a Spring bean ID, so
+it could be named `mygreatcodecforhl7` or whatever. The codec is also
+set up in the Spring XML file:
+
+
+
+
+
+The endpoint **hl7MinaListener** can then be used in a route as a
+consumer, as this Java DSL example illustrates:
+
+ from("hl7MinaListener")
+ .bean("patientLookupService");
+
+This is a basic route that will listen for HL7 and route it to a service
+named **patientLookupService**. This is also a Spring bean ID, configured
+in the Spring XML as:
+
+
+
+The business logic can be implemented in POJO classes that do not depend
+on Camel, as shown here:
+
+ import ca.uhn.hl7v2.HL7Exception;
+ import ca.uhn.hl7v2.model.Message;
+ import ca.uhn.hl7v2.model.v24.segment.QRD;
+
+ public class PatientLookupService {
+ public Message lookupPatient(Message input) throws HL7Exception {
+ QRD qrd = (QRD)input.get("QRD");
+ String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue();
+
+ // find patient data based on the patient id and create a HL7 model object with the response
+            Message response = ...; // create and set response data
+            return response;
+        }
+    }
+
+## Exposing an HL7 listener using Netty (available from Camel 2.15 onwards)
+
+In the Spring XML file, we configure a netty endpoint to listen for HL7
+requests using TCP on port `8888`:
+
+
+
+**sync=true** indicates that this listener is synchronous and therefore
+will return an HL7 response to the caller. The HL7 codec is set up with
+**encoders=#hl7encoder** and **decoders=#hl7decoder**. Note that
+`hl7encoder` and `hl7decoder` are just bean IDs, so they could be named
+differently. The beans can be set in the Spring XML file:
+
+
+
+
+The endpoint **hl7NettyListener** can then be used in a route as a
+consumer, as this Java DSL example illustrates:
+
+ from("hl7NettyListener")
+ .bean("patientLookupService");
+
+# HL7 Model using java.lang.String or byte\[\]
+
+The HL7 MLLP codec uses plain String as its data format. Camel uses its
+Type Converter to convert between strings and the HAPI HL7 model objects,
+but you can use the plain String objects if you prefer, for instance, if
+you wish to parse the data yourself.
+
+You can also let both the Mina and Netty codecs use a plain `byte[]` as
+their data format by setting the `produceString` property to false. The
+Type Converter is also capable of converting the `byte[]` to/from HAPI
+HL7 model objects.
+
+# HL7v2 Model using HAPI
+
+The HL7v2 model uses Java objects from the HAPI library. Using this
+library, you can encode and decode from the EDI format (ER7) that is
+mostly used with HL7v2.
+
+The sample below is a request to look up a patient with the patient ID
+`0101701234`.
+
+ MSH|^~\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200||QRY^A19|1234|P|2.4
+ QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM||
+
+Using the HL7 model, you can work with a `ca.uhn.hl7v2.model.Message`
+object, e.g., to retrieve a patient ID:
+
+ Message msg = exchange.getIn().getBody(Message.class);
+ QRD qrd = (QRD)msg.get("QRD");
+ String patientId = qrd.getWhoSubjectFilter(0).getIDNumber().getValue(); // 0101701234
+
+This is powerful when combined with the HL7 listener, because you don’t
+have to work with `byte[]`, `String` or any other simple object formats.
+You can just use the HAPI HL7v2 model objects. If you know the message
+type in advance, you can be more type-safe:
+
+ QRY_A19 msg = exchange.getIn().getBody(QRY_A19.class);
+ String patientId = msg.getQRD().getWhoSubjectFilter(0).getIDNumber().getValue();
+
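For illustration of how the ER7 wire format above maps to numbered fields, here is a hedged plain-Java sketch that locates QRD-8 in the sample message by splitting on segment and field separators. This is illustrative only; real code should use the HAPI parser, and the helper name `Er7FieldSketch` is invented here. (Note that MSH field numbering is offset by one in a naive split, because MSH-1 is the field separator itself.)

```java
public class Er7FieldSketch {
    // Illustrative only -- use the HAPI parser in real code.
    // Returns field `index` of the first segment named `segment` in an
    // ER7-encoded message (segments separated by CR, fields by '|').
    public static String field(String er7, String segment, int index) {
        for (String line : er7.split("\r")) {
            String[] fields = line.split("\\|");
            if (fields[0].equals(segment) && index < fields.length) {
                return fields[index];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String msg = "MSH|^~\\&|MYSENDER|MYRECEIVER|MYAPPLICATION||200612211200||QRY^A19|1234|P|2.4\r"
                + "QRD|200612211200|R|I|GetPatient|||1^RD|0101701234|DEM||";
        System.out.println(field(msg, "QRD", 8)); // prints 0101701234
    }
}
```

The HAPI model classes and the Terser hide exactly this kind of positional bookkeeping, which is why the type-safe accessors above are preferable.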
+# HL7 DataFormat
+
+The `camel-hl7` JAR ships with an HL7 data format that can be used to
+marshal or unmarshal HL7 model objects.
+
+- `marshal` = from Message to byte stream (can be used when responding
+ using the HL7 MLLP codec)
+
+- `unmarshal` = from byte stream to Message (can be used when
+  receiving streamed data from the HL7 MLLP codec)
+
+To use the data format, create an instance and invoke the marshal or
+unmarshal operation in the route builder:
+
+ DataFormat hl7 = new HL7DataFormat();
+
+ from("direct:hl7in")
+ .marshal(hl7)
+ .to("jms:queue:hl7out");
+
+In the sample above, the HL7 message is marshalled from a HAPI Message
+object to a byte stream and put on a JMS queue.
+The next example is the opposite:
+
+ DataFormat hl7 = new HL7DataFormat();
+
+ from("jms:queue:hl7out")
+ .unmarshal(hl7)
+ .to("patientLookupService");
+
+Here we unmarshal the byte stream into a HAPI Message object that is
+passed to our patient lookup service.
+
+## Segment separators
+
+Unmarshalling no longer automatically fixes segment separators by
+converting `\n` to `\r`. If you need this conversion,
+`org.apache.camel.component.hl7.HL7#convertLFToCR` provides a handy
+`Expression` for this purpose.
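
Inside a route, the supported way is the `convertLFToCR` Expression; as a hedged plain-Java illustration of what the conversion amounts to (the class name here is invented for the sketch):

```java
public class LfToCrSketch {
    // HL7 requires CR (\r) as the segment separator, but data often
    // arrives with LF (\n) separators; the fix is a character swap.
    public static String convertLfToCr(String hl7) {
        return hl7.replace('\n', '\r');
    }

    public static void main(String[] args) {
        String fixed = convertLfToCr("MSH|^~\\&|A\nPID|1");
        System.out.println(fixed.contains("\n")); // false
    }
}
```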
+
+## Charset
+
+Both `marshal` and `unmarshal` evaluate the charset provided in the
+field `MSH-18`. If this field is empty, the charset contained in the
+corresponding Camel charset property/header is assumed by default. You
+can change this default behavior by overriding the `guessCharsetName`
+method when inheriting from the `HL7DataFormat` class.
+
+Camel has a shorthand syntax for well-known data formats that are
+commonly used, so you don't need to create an instance of the
+`HL7DataFormat` object:
+
+ from("direct:hl7in")
+ .marshal().hl7()
+ .to("jms:queue:hl7out");
+
+ from("jms:queue:hl7out")
+ .unmarshal().hl7()
+ .to("patientLookupService");
+
+# Message Headers
+
+The unmarshal operation adds these fields from the MSH segment as
+headers on the Camel message:
+
+|Header|MSH field|Example|
+|---|---|---|
+|CamelHL7SendingApplication|MSH-3|MYSERVER|
+|CamelHL7SendingFacility|MSH-4|MYSERVERAPP|
+|CamelHL7ReceivingApplication|MSH-5|MYCLIENT|
+|CamelHL7ReceivingFacility|MSH-6|MYCLIENTAPP|
+|CamelHL7Timestamp|MSH-7|20071231235900|
+|CamelHL7Security|MSH-8|null|
+|CamelHL7MessageType|MSH-9-1|ADT|
+|CamelHL7TriggerEvent|MSH-9-2|A01|
+|CamelHL7MessageControl|MSH-10|1234|
+|CamelHL7ProcessingId|MSH-11|P|
+|CamelHL7VersionId|MSH-12|2.4|
+|CamelHL7Context||contains the [HapiContext](https://hapifhir.github.io/hapi-hl7v2/base/apidocs/ca/uhn/hl7v2/HapiContext.html) that was used to parse the message|
+|CamelHL7Charset|MSH-18|UNICODE UTF-8|
+
+All headers except `CamelHL7Context` are `String` types. If a header
+value is missing, its value is `null`.
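
Because a header whose MSH field was empty comes back as `null`, it pays to read the headers defensively. A minimal sketch, with a plain `Map` standing in for the Camel message headers (the helper and class names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class Hl7HeaderSketch {
    // Returns the header value as a String, or a fallback when the
    // corresponding MSH field was empty and the header is null/absent.
    public static String headerOrDefault(Map<String, Object> headers, String name, String dflt) {
        Object v = headers.get(name);
        return v != null ? v.toString() : dflt;
    }

    public static void main(String[] args) {
        Map<String, Object> headers = new HashMap<>();
        headers.put("CamelHL7TriggerEvent", "A01");
        // MSH-8 was empty, so CamelHL7Security is null/absent
        System.out.println(headerOrDefault(headers, "CamelHL7TriggerEvent", "?")); // A01
        System.out.println(headerOrDefault(headers, "CamelHL7Security", "?"));     // ?
    }
}
```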
+
+# Dependencies
+
+To use HL7 in your Camel routes, you’ll need to add a dependency on
+**camel-hl7** listed above, which implements this data format.
+
+The HAPI library is split into a [base
+library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-base) and
+several structure libraries, one for each HL7v2 message version:
+
+- [v2.1 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v21)
+
+- [v2.2 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v22)
+
+- [v2.3 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v23)
+
+- [v2.3.1 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v231)
+
+- [v2.4 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v24)
+
+- [v2.5 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v25)
+
+- [v2.5.1 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v251)
+
+- [v2.6 structures
+ library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-structures-v26)
+
+By default `camel-hl7` only references the HAPI [base
+library](https://repo1.maven.org/maven2/ca/uhn/hapi/hapi-base).
+Applications are responsible for including structure libraries
+themselves. For example, if an application works with HL7v2 message
+versions 2.4 and 2.5, then the following dependencies must be added:
+
+    <dependency>
+        <groupId>ca.uhn.hapi</groupId>
+        <artifactId>hapi-structures-v24</artifactId>
+        <version>2.2</version>
+    </dependency>
+
+    <dependency>
+        <groupId>ca.uhn.hapi</groupId>
+        <artifactId>hapi-structures-v25</artifactId>
+        <version>2.2</version>
+    </dependency>
+
diff --git a/camel-hl7terser-language.md b/camel-hl7terser-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e652f7cc91434713ac9a7ed8e8d668e0c352fbe
--- /dev/null
+++ b/camel-hl7terser-language.md
@@ -0,0 +1,127 @@
+# Hl7terser-language.md
+
+**Since Camel 2.11**
+
+[HAPI](https://hapifhir.github.io/hapi-hl7v2/) provides a
+[Terser](https://hapifhir.github.io/hapi-hl7v2/base/apidocs/ca/uhn/hl7v2/util/Terser.html)
+class that provides access to fields using a commonly used terse
+location specification syntax. The HL7 Terser language allows using this
+syntax to extract values from HL7 messages and to use them as
+expressions and predicates for filtering, content-based routing, etc.
+
+# HL7 Terser Language options
+
+# Examples
+
+In the example below, we want to set a header with the patient ID from
+field QRD-8 in the QRY\_A19 message:
+
+ import static org.apache.camel.component.hl7.HL7.hl7terser;
+
+ // extract patient ID from field QRD-8 in the QRY_A19 message above and put into message header
+ from("direct:test1")
+ .setHeader("PATIENT_ID", hl7terser("QRD-8(0)-1"))
+ .to("mock:test1");
+
+    // continue processing if extracted field equals a message header
+    from("direct:test2")
+        .filter(hl7terser("QRD-8(0)-1").isEqualTo(header("PATIENT_ID")))
+        .to("mock:test2");
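
The terse location spec `QRD-8(0)-1` used above names segment QRD, field 8, repetition 0, component 1. As a hedged aside, a plain-Java sketch that splits such a spec into its parts (illustrative only; HAPI's Terser does the real resolution, and the class name here is invented):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TerserSpecSketch {
    // Parses specs of the shape SEGMENT-FIELD(REP)-COMPONENT, e.g. "QRD-8(0)-1".
    private static final Pattern SPEC =
            Pattern.compile("([A-Z0-9]{3})-(\\d+)\\((\\d+)\\)-(\\d+)");

    public static String[] parse(String spec) {
        Matcher m = SPEC.matcher(spec);
        if (!m.matches()) {
            throw new IllegalArgumentException("unsupported spec: " + spec);
        }
        // segment, field, repetition, component
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4) };
    }

    public static void main(String[] args) {
        String[] parts = parse("QRD-8(0)-1");
        System.out.println(String.join(",", parts)); // QRD,8,0,1
    }
}
```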
+
+## HL7 Validation
+
+Often it is preferable to first parse an HL7v2 message and validate it
+in a separate step against a HAPI
+[ValidationContext](https://hapifhir.github.io/hapi-hl7v2/base/apidocs/ca/uhn/hl7v2/validation/ValidationContext.html).
+
+The example below shows how to do that. Notice how we use the static
+method `messageConformsTo`, which validates that the message conforms
+to the given validation context.
+
+ import static org.apache.camel.component.hl7.HL7.messageConformsTo;
+ import ca.uhn.hl7v2.validation.impl.DefaultValidation;
+
+ // Use standard or define your own validation rules
+ ValidationContext defaultContext = new DefaultValidation();
+
+ // Throws PredicateValidationException if a message does not validate
+ from("direct:test1")
+ .validate(messageConformsTo(defaultContext))
+ .to("mock:test1");
+
+## HL7 Validation using the HapiContext
+
+The HAPI Context is always configured with a
+[ValidationContext](https://hapifhir.github.io/hapi-hl7v2/base/apidocs/ca/uhn/hl7v2/validation/ValidationContext.html)
+(or a
+[ValidationRuleBuilder](https://hapifhir.github.io/hapi-hl7v2/base/apidocs/ca/uhn/hl7v2/validation/builder/ValidationRuleBuilder.html)),
+so you can access the validation rules indirectly.
+
+Furthermore, when unmarshalling, the HL7 data format forwards the
+configured HAPI context in the `CamelHL7Context` header, so the
+validation rules of this context can be reused:
+
+ import static org.apache.camel.component.hl7.HL7.messageConformsTo;
+ import static org.apache.camel.component.hl7.HL7.messageConforms;
+
+ HapiContext hapiContext = new DefaultHapiContext();
+ hapiContext.getParserConfiguration().setValidating(false); // don't validate during parsing
+
+ // customize HapiContext some more ... e.g., enforce that PID-8 in ADT_A01 messages of version 2.4 is not empty
+ ValidationRuleBuilder builder = new ValidationRuleBuilder() {
+ @Override
+ protected void configure() {
+ forVersion(Version.V24)
+ .message("ADT", "A01")
+ .terser("PID-8", not(empty()));
+ }
+ };
+ hapiContext.setValidationRuleBuilder(builder);
+
+ HL7DataFormat hl7 = new HL7DataFormat();
+ hl7.setHapiContext(hapiContext);
+
+ from("direct:test1")
+ .unmarshal(hl7) // uses the GenericParser returned from the HapiContext
+ .validate(messageConforms()) // uses the validation rules returned from the HapiContext
+ // equivalent with .validate(messageConformsTo(hapiContext))
+ // route continues from here
+
+## HL7 Acknowledgement expression
+
+A common task in HL7v2 processing is to generate an acknowledgement
+message as a response to an incoming HL7v2 message, e.g., based on a
+validation result. The `ack` expression lets us accomplish this very
+elegantly:
+
+ import static org.apache.camel.component.hl7.HL7.messageConformsTo;
+ import static org.apache.camel.component.hl7.HL7.ack;
+ import ca.uhn.hl7v2.validation.impl.DefaultValidation;
+
+ // Use standard or define your own validation rules
+ ValidationContext defaultContext = new DefaultValidation();
+
+ from("direct:test1")
+ .onException(Exception.class)
+ .handled(true)
+ .transform(ack()) // auto-generates negative ack because of exception in Exchange
+ .end()
+ .validate(messageConformsTo(defaultContext))
+ // do something meaningful here
+
+ // acknowledgement
+ .transform(ack())
+
+## Custom Acknowledgement for MLLP
+
+In special situations, you may want to set a custom acknowledgement
+without using Exceptions. This can be achieved using the `ack`
+expression:
+
+ import org.apache.camel.component.mllp.MllpConstants;
+ import ca.uhn.hl7v2.AcknowledgmentCode;
+ import ca.uhn.hl7v2.ErrorCode;
+
+    // in a Processor
+    exchange.setProperty(MllpConstants.MLLP_ACKNOWLEDGEMENT,
+        ack(AcknowledgmentCode.AR, "Server didn't accept this message", ErrorCode.UNKNOWN_KEY_IDENTIFIER).evaluate(exchange, Object.class));
diff --git a/camel-http.md b/camel-http.md
index 2923b0f42aff9b9f87df954d7d05126706f11745..418d608b819e4a234b4e0cfd19c702943e9f8260 100644
--- a/camel-http.md
+++ b/camel-http.md
@@ -23,14 +23,16 @@ for this component:
Will by default use port 80 for HTTP and 443 for HTTPS.
-# Message Body
+# Usage
+
+## Message Body
Camel will store the HTTP response from the external server on the *OUT*
body. All headers from the *IN* message will be copied to the *OUT*
message, so headers are preserved during routing. Additionally, Camel
will add the HTTP response headers as well to the *OUT* message headers.
-# Using System Properties
+## Using System Properties
When setting useSystemProperties to true, the HTTP Client will look for
the following System Properties, and it will use it:
@@ -67,7 +69,7 @@ the following System Properties, and it will use it:
- `http.maxConnections`
-# Response code
+## Response code
Camel will handle, according to the HTTP response code:
@@ -88,7 +90,7 @@ The option, `throwExceptionOnFailure`, can be set to `false` to prevent
the `HttpOperationFailedException` from being thrown for failed response
codes. This allows you to get any response from the remote server.
-# Exceptions
+## Exceptions
`HttpOperationFailedException` exception contains the following
information:
@@ -102,7 +104,7 @@ information:
- Response body as a `java.lang.String`, if server provided a body as
response
-# Which HTTP method will be used
+## Which HTTP method will be used
The following algorithm is used to determine what HTTP method should be
used:
@@ -114,7 +116,7 @@ used:
5. `POST` if there is data to send (body is not `null`).
6. `GET` otherwise.
-# Configuring URI to call
+## Configuring URI to call
You can set the HTTP producer’s URI directly from the endpoint URI. In
the route below, Camel will call out to the external server, `oldhost`,
@@ -142,7 +144,7 @@ endpoint is configured with [http://oldhost](http://oldhost).
If the http endpoint is working in bridge mode, it will ignore the
message header of `Exchange.HTTP_URI`.
-# Configuring URI Parameters
+## Configuring URI Parameters
The **http** producer supports URI parameters to be sent to the HTTP
server. The URI parameters can either be set directly on the endpoint
@@ -157,7 +159,7 @@ Or options provided in a header:
.setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short"))
.to("http://oldhost");
-# How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer
+## How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer
The HTTP component provides a way to set the HTTP request method by
setting the message header. Here is an example:
@@ -182,13 +184,13 @@ And the equivalent XML DSL:
-# Using client timeout - SO\_TIMEOUT
+## Using client timeout - SO\_TIMEOUT
See the
[HttpSOTimeoutTest](https://github.com/apache/camel/blob/main/components/camel-http/src/test/java/org/apache/camel/component/http/HttpSOTimeoutTest.java)
unit test.
-# Configuring a Proxy
+## Configuring a Proxy
The HTTP component provides a way to configure a proxy.
@@ -198,7 +200,7 @@ The HTTP component provides a way to configure a proxy.
There is also support for proxy authentication via the
`proxyAuthUsername` and `proxyAuthPassword` options.
-## Using proxy settings outside of URI
+### Using proxy settings outside of URI
To avoid System properties conflicts, you can set proxy configuration
only from the CamelContext or URI.
@@ -223,14 +225,14 @@ override the system properties with the endpoint options.
There is also a `http.proxyScheme` property you can set to explicitly
configure the scheme to use.
-# Configuring charset
+## Configuring charset
If you are using `POST` to send data you can configure the `charset`
using the `Exchange` property:
exchange.setProperty(Exchange.CHARSET_NAME, "ISO-8859-1");
-## Sample with scheduled poll
+### Example with scheduled poll
This sample polls the Google homepage every 10 seconds and write the
page to the file `message.html`:
@@ -240,7 +242,7 @@ page to the file `message.html`:
.setHeader(FileComponent.HEADER_FILE_NAME, "message.html")
.to("file:target/google");
-## URI Parameters from the endpoint URI
+### URI Parameters from the endpoint URI
In this sample, we have the complete URI endpoint that is just what you
would have typed in a web browser. Multiple URI parameters can of course
@@ -250,7 +252,7 @@ web browser. Camel does no tricks here.
// we query for Camel at the Google page
template.sendBody("http://www.google.com/search?q=Camel", null);
-## URI Parameters from the Message
+### URI Parameters from the Message
Map headers = new HashMap();
headers.put(Exchange.HTTP_QUERY, "q=Camel&lr=lang_en");
@@ -260,7 +262,7 @@ web browser. Camel does no tricks here.
In the header value above notice that it should **not** be prefixed with
`?` and you can separate parameters as usual with the `&` char.
-## Getting the Response Code
+### Getting the Response Code
You can get the HTTP response code from the HTTP component by getting
the value from the Out message header with
@@ -274,20 +276,20 @@ the value from the Out message header with
Message out = exchange.getOut();
int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);
-# Disabling Cookies
+## Disabling Cookies
To disable cookies in the CookieStore, you can set the HTTP Client to
ignore cookies by adding this URI option:
`httpClient.cookieSpec=ignore`. This doesn’t affect cookies manually set
in the `Cookie` header
-# Basic auth with the streaming message body
+## Basic auth with the streaming message body
To avoid the `NonRepeatableRequestException`, you need to do the
Preemptive Basic Authentication by adding the option:
`authenticationPreemptive=true`
-# OAuth2 Support
+## OAuth2 Support
To get an access token from an Authorization Server and fill that in
Authorization header to do requests to protected services, you will need
@@ -314,13 +316,13 @@ Camel only provides support for OAuth2 client credentials flow
Camel does not perform any validation in access token. It’s up to the
underlying service to validate it.
-# Advanced Usage
+## Advanced Usage
If you need more control over the HTTP producer, you should use the
`HttpComponent` where you can set various classes to give you custom
behavior.
-## Setting up SSL for HTTP Client
+### Setting up SSL for HTTP Client
Using the JSSE Configuration Utility
@@ -495,7 +497,7 @@ we have two components, each using their own instance of
|clientConnectionManager|To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component.||object|
|connectionsPerRoute|The maximum number of connections per route.|20|integer|
|connectionStateDisabled|Disables connection state tracking|false|boolean|
-|connectionTimeToLive|The time for connection to live, the time unit is millisecond, the default value is always keep alive.||integer|
+|connectionTimeToLive|The time for connection to live, the time unit is millisecond, the default value is always keepAlive.||integer|
|contentCompressionDisabled|Disables automatic content decompression|false|boolean|
|cookieManagementDisabled|Disables state (cookie) management|false|boolean|
|defaultUserAgentDisabled|Disables the default user agent set by this builder if none has been provided by the user|false|boolean|
@@ -528,7 +530,7 @@ we have two components, each using their own instance of
|Name|Description|Default|Type|
|---|---|---|---|
|httpUri|The url of the HTTP endpoint to call.||string|
-|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean|
+|disableStreamCache|Determines whether or not the raw input stream is cached. The Camel consumer (camel-servlet, camel-jetty etc.) will by default cache the input stream to support reading it multiple times to ensure that Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The producer (camel-http) will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is (the stream can only be read once) as the message body.|false|boolean|
|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object|
|bridgeEndpoint|If the option is true, HttpProducer will ignore the Exchange.HTTP\_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back.|false|boolean|
|connectionClose|Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false.|false|boolean|
@@ -578,6 +580,7 @@ we have two components, each using their own instance of
|authUsername|Authentication username||string|
|oauth2ClientId|OAuth2 client id||string|
|oauth2ClientSecret|OAuth2 client secret||string|
+|oauth2Scope|OAuth2 scope||string|
|oauth2TokenEndpoint|OAuth2 Token endpoint||string|
|sslContextParameters|To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.util.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need.||object|
|x509HostnameVerifier|To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier||object|
diff --git a/camel-hwcloud-dms.md b/camel-hwcloud-dms.md
index 7e8ed14245140ffb974869555cf0390127eeb7f8..701785be0eb666599afd3150e41108da6307adec 100644
--- a/camel-hwcloud-dms.md
+++ b/camel-hwcloud-dms.md
@@ -34,128 +34,128 @@ for this component:
-
+
-
+
CamelHwCloudDmsOperation
String
Name of operation to invoke
-
+
CamelHwCloudDmsEngine
String
The message engine. Either kafka or
rabbitmq
-
+
CamelHwCloudDmsInstanceId
String
Instance ID to invoke operation
on
-
+
CamelHwCloudDmsName
String
The name of the instance for creating
and updating an instance
-
+
CamelHwCloudDmsEngineVersion
String
The version of the message
engine
-
+
CamelHwCloudDmsSpecification
String
The baseline bandwidth of a Kafka
instance
-
+
CamelHwCloudDmsStorageSpace
int
The message storage space
-
+
CamelHwCloudDmsPartitionNum
int
The maximum number of partitions in a
Kafka instance
-
+
CamelHwCloudDmsAccessUser
String
The username of a RabbitMQ
instance
-
+
CamelHwCloudDmsPassword
String
The password of a RabbitMQ
instance
-
+
CamelHwCloudDmsVpcId
String
The VPC ID
-
+
CamelHwCloudDmsSecurityGroupId
String
The security group which the instance
belongs to
-
+
CamelHwCloudDmsSubnetId
String
The subnet ID
-
+
CamelHwCloudDmsAvailableZones
List<String>
The ID of an available zone
-
+
CamelHwCloudDmsProductId
String
The product ID
-
+
CamelHwCloudDmsKafkaManagerUser
String
The username for logging in to the
Kafka Manager
-
+
CamelHwCloudDmsKafkaManagerPassword
String
The password for logging in to the
Kafka Manager
-
+
CamelHwCloudDmsStorageSpecCode
String
@@ -176,21 +176,21 @@ corresponding query parameter.
-
+
-
+
CamelHwCloudDmsInstanceDeleted
boolean
Set as true when the
deleteInstance operation is successful
-
+
CamelHwCloudDmsInstanceUpdated
boolean
@@ -200,7 +200,7 @@ updateInstance operation is successful
-# List of Supported DMS Operations
+## List of Supported DMS Operations
- createInstance
@@ -253,7 +253,7 @@ An example of how to do this is shown below:
.setBody("{\"name\":\"new-instance\",\"description\":\"description\"}") // add remaining options
.to("hwcloud-dms:updateInstance?instanceId=******®ion=cn-north-4&accessKey=********&secretKey=********&projectId=*******")
-# Using ServiceKey Configuration Bean
+## Using ServiceKey Configuration Bean
Access key and secret keys are required to authenticate against cloud
DMS service. You can avoid having them being exposed and scattered over
diff --git a/camel-hwcloud-frs.md b/camel-hwcloud-frs.md
index 09ec5435ae8532802be0223f666ee0f49ce98a6b..a59ea53ff4027e8370c3544a30cdd37fb307073d 100644
--- a/camel-hwcloud-frs.md
+++ b/camel-hwcloud-frs.md
@@ -37,14 +37,14 @@ RAW(base64\_value) to avoid encoding issue.
-
+
-
+
CamelHwCloudFrsImageBase64
String
@@ -52,7 +52,7 @@ style="text-align: left;">CamelHwCloudFrsImageBase64
from an image. This property can be used when the operation is
faceDetection or faceVerification.
-
+
CamelHwCloudFrsImageUrl
String
@@ -60,7 +60,7 @@ style="text-align: left;">CamelHwCloudFrsImageUrl
be used when the operation is faceDetection or
faceVerification.
-
+
CamelHwCloudFrsImageFilePath
String
@@ -68,7 +68,7 @@ style="text-align: left;">CamelHwCloudFrsImageFilePath
property can be used when the operation is faceDetection or
faceVerification.
-
+
CamelHwCloudFrsAnotherImageBase64
String
@@ -76,14 +76,14 @@ style="text-align: left;">CamelHwCloudFrsAnotherImageBase64
<
from another image. This property can be used when the operation is
faceVerification.
-
+
CamelHwCloudFrsAnotherImageUrl
String
The URL of another image. This property
can be used when the operation is faceVerification.
-
+
CamelHwCloudFrsAnotherImageFilePath
String
@@ -91,7 +91,7 @@ style="text-align: left;">CamelHwCloudFrsAnotherImageFilePath
-
+
CamelHwCloudFrsVideoBase64
String
@@ -99,28 +99,28 @@ style="text-align: left;">CamelHwCloudFrsVideoBase64
from a video. This property can be used when the operation is
faceLiveDetection.
-
+
CamelHwCloudFrsVideoUrl
String
The URL of a video. This property can
be used when the operation is faceLiveDetection.
-
+
CamelHwCloudFrsVideoFilePath
String
The local file path of a video. This
property can be used when the operation is faceLiveDetection.
-
+
CamelHwCloudFrsVideoActions
String
The action code sequence list. This
property can be used when the operation is faceLiveDetection.
-
+
CamelHwCloudFrsVideoActionTimes
String
@@ -130,7 +130,7 @@ used when the operation is faceLiveDetection.
-# List of Supported Operations
+## List of Supported Operations
- faceDetection - detect, locate, and analyze the face in an input
image, and output the key facial points and attributes.
@@ -142,9 +142,9 @@ used when the operation is faceLiveDetection.
by checking whether the person’s actions in the video are consistent
with those in the input action list
-# Inline Configuration of route
+## Inline Configuration of route
-## faceDetection
+### faceDetection
Java DSL
diff --git a/camel-hwcloud-functiongraph.md b/camel-hwcloud-functiongraph.md
index c4eb1ff2db35df146ce0f893c0319f0024eb9753..c5d5d4ba23004e331de70c89d509ad0d5f8ec2cc 100644
--- a/camel-hwcloud-functiongraph.md
+++ b/camel-hwcloud-functiongraph.md
@@ -33,33 +33,33 @@ for this component:
-
+
-
+
CamelHwCloudFgOperation
String
Name of operation to invoke
-
+
CamelHwCloudFgFunction
String
Name of function to invoke operation
on
-
+
CamelHwCloudFgPackage
String
Name of the function package
-
+
CamelHwCloudFgXCffLogType
String
@@ -81,14 +81,14 @@ override their corresponding query parameter.
-
+
-
+
CamelHwCloudFgXCffLogs
String
@@ -99,11 +99,11 @@ is set
-# List of Supported FunctionGraph Operations
+## List of Supported FunctionGraph Operations
- invokeFunction - to invoke a serverless function
-# Using ServiceKey Configuration Bean
+## Using ServiceKey Configuration Bean
Access key and secret keys are required to authenticate against cloud
FunctionGraph service. You can avoid having them being exposed and
diff --git a/camel-hwcloud-iam.md b/camel-hwcloud-iam.md
index 3b07f9bf860901ee00ec945a3e13a4d205349b84..b69437e4d32358a2f0ecdfead59a01b622ba6f21 100644
--- a/camel-hwcloud-iam.md
+++ b/camel-hwcloud-iam.md
@@ -34,26 +34,26 @@ for this component:
-
+
-
+
CamelHwCloudIamOperation
String
Name of operation to invoke
-
+
CamelHwCloudIamUserId
String
User ID to invoke operation on
-
+
CamelHwCloudIamGroupId
String
@@ -66,7 +66,7 @@ on
If any of the above properties are set, they will override their
corresponding query parameter.
-# List of Supported IAM Operations
+## List of Supported IAM Operations
- listUsers
@@ -111,7 +111,7 @@ KeystoneUpdateGroupOption object or a Json string:
.setBody("{\"name\":\"group\",\"description\":\"employees\",\"domain_id\":\"1234\"}")
.to("hwcloud-iam:updateUser?groupId=********®ion=cn-north-4&accessKey=********&secretKey=********")
-# Using ServiceKey Configuration Bean
+## Using ServiceKey Configuration Bean
Access key and secret keys are required to authenticate against cloud
IAM service. You can avoid having them being exposed and scattered over
diff --git a/camel-hwcloud-imagerecognition.md b/camel-hwcloud-imagerecognition.md
index 5b4b2275ad2c7ec7ec19849c2dd21f982bf40f74..ea318f40ebf094b4eb988900a6778756334d8b1d 100644
--- a/camel-hwcloud-imagerecognition.md
+++ b/camel-hwcloud-imagerecognition.md
@@ -37,41 +37,41 @@ RAW(image\_base64\_value) to avoid encoding issue.
-
+
-
+
CamelHwCloudImageContent
String
The Base64 character string converted
from the image
-
+
CamelHwCloudImageUrl
String
The URL of an image
-
+
CamelHwCloudImageTagLimit
Integer
The maximum number of the returned tags
when the operation is tagRecognition
-
+
CamelHwCloudImageTagLanguage
String
The language of the returned tags when
the operation is tagRecognition
-
+
CamelHwCloudImageThreshold
Integer
@@ -80,7 +80,7 @@ style="text-align: left;">CamelHwCloudImageThreshold
-# List of Supported Image Recognition Operations
+## List of Supported Image Recognition Operations
- celebrityRecognition - to analyze and identify the political
figures, stars and online celebrities contained in the picture, and
@@ -89,9 +89,9 @@ style="text-align: left;">CamelHwCloudImageThreshold
- tagRecognition - to recognize hundreds of scenes and thousands of
objects and their properties in natural images
-# Inline Configuration of route
+## Inline Configuration of route
-## celebrityRecognition
+### celebrityRecognition
Java DSL
diff --git a/camel-hwcloud-obs.md b/camel-hwcloud-obs.md
index 686cc619b6c3862bfee25d6a3469073d864822af..163ab57fbe34080a655a0e6f21b0d24d9b94ddc9 100644
--- a/camel-hwcloud-obs.md
+++ b/camel-hwcloud-obs.md
@@ -34,34 +34,34 @@ for this component:
-
+
-
+
CamelHwCloudObsOperation
String
Name of operation to invoke
-
+
CamelHwCloudObsBucketName
String
Bucket name to invoke operation
on
-
+
CamelHwCloudObsBucketLocation
String
Bucket location when creating a new
bucket
-
+
CamelHwCloudObsObjectName
String
@@ -84,14 +84,14 @@ corresponding query parameter.
-
+
-
+
CamelHwCloudObsBucketExists
boolean
@@ -101,7 +101,7 @@ style="text-align: left;">CamelHwCloudObsBucketExists
-# List of Supported OBS Operations
+## List of Supported OBS Operations
- listBuckets
@@ -161,7 +161,7 @@ endpoint uri.
.setBody("{\"bucketName\":\"Bucket name\",\"maxKeys\":1000"}")
.to("hwcloud-obs:listObjects?region=cn-north-4&accessKey=********&secretKey=********")
-# Using ServiceKey Configuration Bean
+## Using ServiceKey Configuration Bean
Access key and secret keys are required to authenticate against the OBS
cloud. You can avoid having them being exposed and scattered over in
diff --git a/camel-hwcloud-smn.md b/camel-hwcloud-smn.md
index 4e5e8bc623c4337992eead2d03674b34589fe4fc..d3a41263b4da2384b9da75b2b53b483e9212b532 100644
--- a/camel-hwcloud-smn.md
+++ b/camel-hwcloud-smn.md
@@ -36,35 +36,35 @@ To send a notification.
-
+
-
+
CamelHwCloudSmnSubject
String
Subject tag for the outgoing
notification
-
+
CamelHwCloudSmnTopic
String
Smn topic into which the message is to
be posted
-
+
CamelHwCloudSmnMessageTtl
Integer
Validity of the posted notification
message
-
+
CamelHwCloudSmnTemplateTags
Map<String, String>
and values when using operation
publishAsTemplatedMessage
-
+
CamelHwCloudSmnTemplateName
String
@@ -92,21 +92,21 @@ operation publishAsTemplatedMessage
-
+
-
+
CamelHwCloudSmnMesssageId
String
Unique message id returned by Simple
Message Notification server after processing the request
-
+
CamelHwCloudSmnRequestId
String
@@ -116,7 +116,7 @@ Message Notification server after processing the request
-# Supported list of smn services and corresponding operations
+## Supported list of smn services and corresponding operations
@@ -124,13 +124,13 @@ Message Notification server after processing the request
-
+
-
+
publishMessageService
publishAsTextMessage,
@@ -139,9 +139,9 @@ publishAsTemplatedMessage
-# Inline Configuration of route
+## Inline Configuration of route
-## publishAsTextMessage
+### publishAsTextMessage
Java DSL
@@ -179,7 +179,7 @@ Java DSL
.setProperty("CamelHwCloudSmnTemplateName", constant("hello-template"))
.to("hwcloud-smn:publishMessageService?operation=publishAsTemplatedMessage&accessKey=*********&secretKey=********&projectId=9071a38e7f6a4ba7b7bcbeb7d4ea6efc®ion=cn-north-4")
-# Using ServiceKey configuration Bean
+## Using ServiceKey configuration Bean
Access key and secret keys are required to authenticate against cloud
smn service. You can avoid having them being exposed and scattered over
diff --git a/camel-hwcloud-summary.md b/camel-hwcloud-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..a62e8c00f2a77894cefb492f1d5bae4705f92107
--- /dev/null
+++ b/camel-hwcloud-summary.md
@@ -0,0 +1,12 @@
+# Hwcloud-summary.md
+
+These are the Camel components for [Huawei Cloud
+Services](https://www.huaweicloud.com/intl/en-us/), which provide
+connectivity to Huawei Cloud services from Camel.
+
+# Huawei Cloud components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Huawei
+Cloud*,descriptionformat=description\]
diff --git a/camel-ical-dataformat.md b/camel-ical-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..a966dd77c25874f00186cfb5c83491f622170a1a
--- /dev/null
+++ b/camel-ical-dataformat.md
@@ -0,0 +1,47 @@
+# Ical-dataformat.md
+
+**Since Camel 2.12**
+
+The ICal dataformat is used for working with
+[iCalendar](http://en.wikipedia.org/wiki/ICalendar) messages.
+
+A typical iCalendar message looks like:
+
+ BEGIN:VCALENDAR
+ VERSION:2.0
+ PRODID:-//Events Calendar//iCal4j 1.0//EN
+ CALSCALE:GREGORIAN
+ BEGIN:VEVENT
+ DTSTAMP:20130324T180000Z
+ DTSTART:20130401T170000
+ DTEND:20130401T210000
+ SUMMARY:Progress Meeting
+ TZID:America/New_York
+ UID:00000000
+ ATTENDEE;ROLE=REQ-PARTICIPANT;CN=Developer 1:mailto:dev1@mycompany.com
+ ATTENDEE;ROLE=OPT-PARTICIPANT;CN=Developer 2:mailto:dev2@mycompany.com
+ END:VEVENT
+ END:VCALENDAR
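Camel delegates the actual parsing to the iCal4j library. Purely to illustrate the `NAME:VALUE` line structure shown above (where the name may carry `;`-separated parameters, as on the `ATTENDEE` lines), here is a minimal stdlib-only sketch; the class name and parsing approach are illustrative only, not how the dataformat works internally:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ICalSketch {

    // Parse simple NAME:VALUE lines inside the first VEVENT,
    // stripping any ;PARAM=... parts from the property name.
    static Map<String, String> parseEvent(String ical) {
        Map<String, String> props = new LinkedHashMap<>();
        boolean inEvent = false;
        for (String raw : ical.split("\\r?\\n")) {
            String line = raw.trim();
            if (line.equals("BEGIN:VEVENT")) { inEvent = true; continue; }
            if (line.equals("END:VEVENT")) { break; }
            if (!inEvent) continue;
            int colon = line.indexOf(':');
            if (colon < 0) continue;
            String name = line.substring(0, colon);
            int semi = name.indexOf(';');           // drop ;ROLE=... style parameters
            if (semi >= 0) name = name.substring(0, semi);
            props.put(name, line.substring(colon + 1));
        }
        return props;
    }

    public static void main(String[] args) {
        String sample = String.join("\n",
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "BEGIN:VEVENT",
            "DTSTAMP:20130324T180000Z",
            "SUMMARY:Progress Meeting",
            "ATTENDEE;ROLE=REQ-PARTICIPANT;CN=Developer 1:mailto:dev1@mycompany.com",
            "END:VEVENT",
            "END:VCALENDAR");
        System.out.println(parseEvent(sample).get("SUMMARY")); // Progress Meeting
    }
}
```

Real iCalendar handling (line folding, time zones, recurrence rules) is considerably more involved, which is why the dataformat builds on iCal4j.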
+
+# Options
+
+# Basic Usage
+
+To unmarshal and marshal the message shown above, your route will look
+like the following:
+
+ from("direct:ical-unmarshal")
+ .unmarshal("ical")
+ .to("mock:unmarshaled")
+ .marshal("ical")
+ .to("mock:marshaled");
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ical</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-idempotentConsumer-eip.md b/camel-idempotentConsumer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..af8744312e900bce16213f35267ca85e20582a11
--- /dev/null
+++ b/camel-idempotentConsumer-eip.md
@@ -0,0 +1,61 @@
+# IdempotentConsumer-eip.md
+
+The [Idempotent
+Consumer](http://www.enterpriseintegrationpatterns.com/IdempotentReceiver.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) is used
+to filter out duplicate messages.
+
+The Idempotent Consumer essentially acts like a [Message
+Filter](#filter-eip.adoc) to filter out duplicates.
+
+Camel will add the message id eagerly to the repository to detect
+duplication also for [Exchange](#manual::exchange.adoc)*s* currently in
+progress. On completion Camel will remove the message id from the
+repository if the [Exchange](#manual::exchange.adoc) failed, otherwise
+it stays there.
+
+# Options
+
+# Exchange properties
+
+# Idempotent Consumer implementations
+
+The idempotent consumer uses a pluggable repository, for which you can
+implement your own `org.apache.camel.spi.IdempotentRepository`.
+
+Camel provides the following Idempotent Consumer implementations:
+
+- MemoryIdempotentRepository from `camel-support` JAR
+
+- [CaffeineIdempotentRepository](#ROOT:caffeine-cache-component.adoc)
+
+- [CassandraIdempotentRepository](#ROOT:cql-component.adoc)
+ [NamedCassandraIdempotentRepository](#ROOT:cql-component.adoc)
+
+- [EHCacheIdempotentRepository](#ROOT:ehcache-component.adoc)
+
+- [HazelcastIdempotentRepository](#ROOT:hazelcast-summary.adoc)
+
+- [InfinispanIdempotentRepository](#ROOT:infinispan-component.adoc)
+ [InfinispanEmbeddedIdempotentRepository](#ROOT:infinispan-component.adoc)
+ [InfinispanRemoteIdempotentRepository](#ROOT:infinispan-component.adoc)
+
+- [JCacheIdempotentRepository](#ROOT:jcache-component.adoc)
+
+- [JpaMessageIdRepository](#ROOT:jpa-component.adoc)
+
+- [KafkaIdempotentRepository](#ROOT:kafka-component.adoc)
+
+- [MongoDbIdempotentRepository](#ROOT:mongodb-component.adoc)
+
+- [RedisIdempotentRepository](#ROOT:spring-redis-component.adoc)
+ [RedisStringIdempotentRepository](#ROOT:spring-redis-component.adoc)
+
+- [SpringCacheIdempotentRepository](#manual::spring.adoc)
+
+- [JdbcMessageIdRepository](#ROOT:sql-component.adoc)
+ [JdbcOrphanLockAwareIdempotentRepository](#ROOT:sql-component.adoc)
+
+# Example
+
+See the above implementations for examples and more details.
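The eager add / remove-on-failure semantics described earlier can be sketched in plain Java. This is a conceptual illustration only (the class and method names are made up, and it is not the Camel API or a thread-safe repository implementation):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

public class IdempotentSketch {

    private final Set<String> repository = new HashSet<>();

    // Returns true if the message was processed, false if it was a duplicate.
    boolean process(String messageId, Consumer<String> body) {
        if (!repository.add(messageId)) {   // eager add: duplicates, even in-flight ones, are filtered
            return false;
        }
        try {
            body.accept(messageId);
            return true;                    // success: the id stays in the repository
        } catch (RuntimeException e) {
            repository.remove(messageId);   // failure: remove the id so the message can be retried
            throw e;
        }
    }

    public static void main(String[] args) {
        IdempotentSketch consumer = new IdempotentSketch();
        System.out.println(consumer.process("order-1", id -> {})); // true, first time
        System.out.println(consumer.process("order-1", id -> {})); // false, duplicate
    }
}
```

In Camel itself you would express the same idea with the `idempotentConsumer` EIP and one of the repository implementations listed above.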
diff --git a/camel-ignite-compute.md b/camel-ignite-compute.md
index a477a89323d4fcd597805057cf69985684c7709d..2f4a79b2943c416f59a48f7811b73496c49f2656 100644
--- a/camel-ignite-compute.md
+++ b/camel-ignite-compute.md
@@ -28,42 +28,42 @@ Each operation expects the indicated types:
-
+
-
+
CALL
Collection of IgniteCallable, or a
single IgniteCallable.
-
+
BROADCAST
IgniteCallable, IgniteRunnable,
IgniteClosure.
-
+
APPLY
IgniteClosure.
-
+
EXECUTE
ComputeTask, Class<? extends
ComputeTask> or an object representing parameters if the taskName
option is not null.
-
+
RUN
A Collection of IgniteRunnables, or a
single IgniteRunnable.
-
+
AFFINITY_CALL
IgniteCallable.
-
+
AFFINITY_RUN
IgniteRunnable.
diff --git a/camel-ignite-summary.md b/camel-ignite-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..fbd8b8447ff52076e313a25c14bdca7efd775e54
--- /dev/null
+++ b/camel-ignite-summary.md
@@ -0,0 +1,132 @@
+# Ignite-summary.md
+
+**Since Camel 2.17**
+
+[Apache Ignite](https://ignite.apache.org/) In-Memory Data Fabric is a
+high performance, integrated and distributed in-memory platform for
+computing and transacting on large-scale data sets in real-time, orders
+of magnitude faster than possible with traditional disk-based or flash
+technologies. It is designed to deliver uncompromised performance for a
+wide set of in-memory computing use cases from high-performance
+computing, to the industry’s most advanced data grid, highly available
+service grid, and streaming. See all
+[features](https://ignite.apache.org/features.html).
+
+
+
+
+
+# Ignite components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Ignite*,descriptionformat=description\]
+
+# Installation
+
+To use this component, add the following dependency to your `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ignite</artifactId>
+        <version>${camel.version}</version>
+    </dependency>
+
+# Initializing the Ignite component
+
+Each instance of the Ignite component is associated with an underlying
+`org.apache.ignite.Ignite` instance. You can interact with two Ignite
+clusters by initializing two instances of the Ignite component and
+binding them to different `IgniteConfigurations`. There are three ways
+to initialize the Ignite component:
+
+- By passing in an existing `org.apache.ignite.Ignite` instance.
+ Here’s an example using Spring config:
+
+
+
+
+
+
+
+- By passing in an IgniteConfiguration, either constructed
+ programmatically or through inversion of control (e.g., Spring,
+ etc). Here’s an example using Spring config:
+
+
+
+
+
+
+ [...]
+
+
+
+
+- By passing in a URL, InputStream or String URL to a Spring-based
+ configuration file. In all three cases, you inject them in the same
+ property called configurationResource. Here’s an example using
+ Spring config:
+
+
+
+
+
+
+
+Additionally, if using Camel programmatically, there are several
+convenience static methods in IgniteComponent that return a component
+out of these configuration options:
+
+- `IgniteComponent#fromIgnite(Ignite)`
+
+- `IgniteComponent#fromConfiguration(IgniteConfiguration)`
+
+- `IgniteComponent#fromInputStream(InputStream)`
+
+- `IgniteComponent#fromUrl(URL)`
+
+- `IgniteComponent#fromLocation(String)`
+
+You may use those methods to quickly create an IgniteComponent with your
+chosen configuration technique.
+
+# General options
+
+All endpoints share the following options:
+
+| Option | Type | Default | Description |
+|---|---|---|---|
+| propagateIncomingBodyIfNoReturnValue | boolean | true | If the underlying Ignite operation returns void (no return type), this flag determines whether the producer will copy the IN body into the OUT body. |
+| treatCollectionsAsCacheObjects | boolean | false | Some Ignite operations can deal with multiple elements at once, if passed a Collection. Enabling this option will treat Collections as a single object, invoking the operation variant for cardinality 1. |
+
+
+
diff --git a/camel-infinispan-embedded.md b/camel-infinispan-embedded.md
index 35a993dda79d249e0ad9ae44c5b6aa9d99a50729..36ab9890837c934d613c5aab54d7d370a453531a 100644
--- a/camel-infinispan-embedded.md
+++ b/camel-infinispan-embedded.md
@@ -44,7 +44,9 @@ consumer allows listening for events from local infinispan cache.
If no cache configuration is provided, embedded cacheContainer is
created directly in the component.
-# Camel Operations
+# Usage
+
+## Camel Operations
This section lists all available operations, along with their header
information.
@@ -56,28 +58,28 @@ information.
-
+
-
+
InfinispanOperation.PUT
Put a key/value pair in the cache, optionally with expiration
-
+
InfinispanOperation.PUTASYNC
Asynchronously puts a key/value pair in the cache, optionally with expiration
-
+
InfinispanOperation.PUTIFABSENT
Put a key/value pair in the cache if it did not exist, optionally with expiration
-
+
InfinispanOperation.PUTIFABSENTASYNC
Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration
@@ -114,18 +116,18 @@ Put Operations
-
+
-
+
InfinispanOperation.PUTALL
Adds multiple entries to a cache, optionally with expiration
-
+
CamelInfinispanOperation.PUTALLASYNC
Asynchronously adds multiple entries to a cache, optionally with expiration
@@ -156,18 +158,18 @@ Put All Operations
-
+
-
+
InfinispanOperation.GET
Retrieve the value associated with a specific key from the cache
-
+
InfinispanOperation.GETORDEFAULT
Retrieves the value, or default value, associated with a specific key from the cache
@@ -188,13 +190,13 @@ Get Operations
-
+
-
+
InfinispanOperation.CONTAINSKEY
Determines whether a cache contains a specific key
@@ -219,13 +221,13 @@ Contains Key Operation
-
+
-
+
InfinispanOperation.CONTAINSVALUE
Determines whether a cache contains a specific value
@@ -246,18 +248,18 @@ Contains Value Operation
-
+
-
+
InfinispanOperation.REMOVE
Removes an entry from a cache, optionally only if the value matches a given one
-
+
InfinispanOperation.REMOVEASYNC
Asynchronously removes an entry from a cache, optionally only if the value matches a given one
@@ -286,18 +288,18 @@ Remove Operations
-
+
-
+
InfinispanOperation.REPLACE
Conditionally replaces an entry in the cache, optionally with expiration
-
+
InfinispanOperation.REPLACEASYNC
Asynchronously conditionally replaces an entry in the cache, optionally with expiration
@@ -336,18 +338,18 @@ Replace Operations
-
+
-
+
InfinispanOperation.CLEAR
Clears the cache
-
+
InfinispanOperation.CLEARASYNC
Asynchronously clears the cache
@@ -364,13 +366,13 @@ Clear Operations
-
+
-
+
InfinispanOperation.SIZE
Returns the number of entries in the cache
@@ -391,13 +393,13 @@ Size Operation
-
+
-
+
InfinispanOperation.STATS
Returns statistics about the cache
@@ -418,13 +420,13 @@ Stats Operation
-
+
-
+
InfinispanOperation.QUERY
Executes a query on the cache
@@ -500,7 +502,7 @@ previous value by default.
class and annotate the resulting class with `@Listener` which can be
found in the package `org.infinispan.notifications`.
-# Using the Infinispan based idempotent repository
+## Using the Infinispan based idempotent repository
In this section, we will use the Infinispan based idempotent repository.
@@ -552,7 +554,7 @@ XML
3. Set the repository to the route
-# Using the Infinispan based aggregation repository
+## Using the Infinispan based aggregation repository
In this section, we will use the Infinispan based aggregation
repository.
diff --git a/camel-infinispan.md b/camel-infinispan.md
index e17b2ba7404e52d7cc7e5549cac88de18bc8d82c..2c33356209337d2ce7661ad782e2bb5941cfe122 100644
--- a/camel-infinispan.md
+++ b/camel-infinispan.md
@@ -27,7 +27,9 @@ The producer allows sending messages to a remote cache using the HotRod
protocol. The consumer allows listening for events from a remote cache
using the HotRod protocol.
-# Camel Operations
+# Usage
+
+## Camel Operations
This section lists all available operations, along with their header
information.
@@ -39,28 +41,28 @@ information.
-
+
-
+
InfinispanOperation.PUT
Put a key/value pair in the cache, optionally with expiration
-
+
InfinispanOperation.PUTASYNC
Asynchronously puts a key/value pair in the cache, optionally with expiration
-
+
InfinispanOperation.PUTIFABSENT
Put a key/value pair in the cache if it did not exist, optionally with expiration
-
+
InfinispanOperation.PUTIFABSENTASYNC
Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration
@@ -97,18 +99,18 @@ Put Operations
-
+
-
+
InfinispanOperation.PUTALL
Adds multiple entries to a cache, optionally with expiration
-
+
CamelInfinispanOperation.PUTALLASYNC
Asynchronously adds multiple entries to a cache, optionally with expiration
@@ -139,18 +141,18 @@ Put All Operations
-
+
-
+
InfinispanOperation.GET
Retrieve the value associated with a specific key from the cache
-
+
InfinispanOperation.GETORDEFAULT
Retrieves the value, or default value, associated with a specific key from the cache
@@ -172,13 +174,13 @@ Get Operations
-
+
-
+
InfinispanOperation.CONTAINSKEY
Determines whether a cache contains a specific key
@@ -203,13 +205,13 @@ Contains Key Operation
-
+
-
+
InfinispanOperation.CONTAINSVALUE
Determines whether a cache contains a specific value
@@ -234,18 +236,18 @@ Contains Value Operation
-
+
-
+
InfinispanOperation.REMOVE
Removes an entry from a cache, optionally only if the value matches a given one
-
+
InfinispanOperation.REMOVEASYNC
Asynchronously removes an entry from a cache, optionally only if the value matches a given one
@@ -274,18 +276,18 @@ Remove Operations
-
+
-
+
InfinispanOperation.REPLACE
Conditionally replaces an entry in the cache, optionally with expiration
-
+
InfinispanOperation.REPLACEASYNC
Asynchronously conditionally replaces an entry in the cache, optionally with expiration
@@ -324,18 +326,18 @@ Replace Operations
-
+
-
+
InfinispanOperation.CLEAR
Clears the cache
-
+
InfinispanOperation.CLEARASYNC
Asynchronously clears the cache
@@ -352,13 +354,13 @@ Clear Operations
-
+
-
+
InfinispanOperation.SIZE
Returns the number of entries in the cache
@@ -379,13 +381,13 @@ Size Operation
-
+
-
+
InfinispanOperation.STATS
Returns statistics about the cache
@@ -406,13 +408,13 @@ Stats Operation
-
+
-
+
InfinispanOperation.QUERY
Executes a query on the cache
@@ -491,7 +493,7 @@ previous value by default.
can be found in the package
`org.infinispan.client.hotrod.annotation`.
-# Using the Infinispan based idempotent repository
+## Using the Infinispan based idempotent repository
In this section, we will use the Infinispan based idempotent repository.
@@ -543,7 +545,7 @@ XML
3. Set the repository to the route
-# Using the Infinispan based aggregation repository
+## Using the Infinispan based aggregation repository
In this section, we will use the Infinispan based aggregation
repository.
diff --git a/camel-intercept.md b/camel-intercept.md
new file mode 100644
index 0000000000000000000000000000000000000000..0fb8f4f9713a8490f61bc41e15b91e11778e944b
--- /dev/null
+++ b/camel-intercept.md
@@ -0,0 +1,425 @@
+# Intercept.md
+
+The intercept feature in Camel supports intercepting
+[Exchange](#manual::exchange.adoc)*s* while they are being routed.
+
+# Kinds of interceptors
+
+Camel supports three kinds of interceptors:
+
+- [`intercept`](#Intercept-Intercept) that intercepts every processing
+ step as they happen during routing
+
+- [`interceptFrom`](#Intercept-InterceptFrom) that intercepts only the
+ incoming step (i.e., [from](#from-eip.adoc))
+
+- [`interceptSendToEndpoint`](#Intercept-InterceptSendToEndpoint) that
+ intercepts only when an [Exchange](#manual::exchange.adoc) is about
+ to be sent to the given [endpoint](#message-endpoint.adoc).
+
+The `interceptSendToEndpoint` is dynamic, hence it will also trigger if
+a dynamic URI is constructed that Camel was not aware of at startup
+time.
+
+The `interceptFrom` is not dynamic: it only intercepts the routes known
+when Camel is starting. So if you construct a `Consumer` using the
+Camel Java API and consume messages from that endpoint, then
+`interceptFrom` is not triggered.
+
+## Interceptor scopes
+
+All the interceptors can be configured globally, or with [Route
+Configuration](#manual::route-configuration.adoc).
+
+## Common features of the interceptors
+
+All these interceptors support the following features:
+
+- [Predicate](#manual::predicate.adoc) using `when` to only trigger
+ the interceptor in certain conditions
+
+- `stop` stops routing the Exchange and marks it as completed
+  successfully (it’s actually the [Stop](#stop-eip.adoc) EIP).
+
+- `skip` when used with `interceptSendToEndpoint` will **skip**
+ sending the message to the original intended endpoint.
+
+- `afterUri` when used with `interceptSendToEndpoint` allows sending
+  the message to an [endpoint](#message-endpoint.adoc) afterward.
+
+- `interceptFrom` and `interceptSendToEndpoint` support endpoint URI
+  pattern matching by exact URI, wildcard, and regular expression. See
+  further below for more details.
+
+- The intercepted endpoint uri is stored as exchange property with the
+ key `Exchange.INTERCEPTED_ENDPOINT`.
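How the `when` predicate and `skip` flag combine for `interceptSendToEndpoint` can be sketched with plain Java (a conceptual illustration only; the names are made up and this is not Camel's implementation):

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

public class InterceptSendSketch {

    // Wraps a send operation: when the predicate matches, run the interceptor
    // and optionally skip sending to the original endpoint.
    static Consumer<String> interceptSend(Consumer<String> original,
                                          Predicate<String> when,
                                          Consumer<String> interceptor,
                                          boolean skip) {
        return body -> {
            if (when.test(body)) {
                interceptor.accept(body);
                if (skip) {
                    return;              // detour: the original endpoint never sees the message
                }
            }
            original.accept(body);
        };
    }

    public static void main(String[] args) {
        StringBuilder sent = new StringBuilder();
        Consumer<String> kafka = sent::append;   // stand-in for the real endpoint
        Consumer<String> send = interceptSend(kafka,
                body -> body.contains("TEST"),
                body -> System.out.println("intercepted: " + body),
                true);
        send.accept("TEST order");   // intercepted and skipped
        send.accept("real order");   // delivered to the stand-in endpoint
        System.out.println(sent);    // real order
    }
}
```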
+
+# Using `intercept`
+
+The `intercept` intercepts the [Exchange](#manual::exchange.adoc)
+at every processing step during routing.
+
+Given the following example:
+
+Java
+// global interceptor for all routes
+intercept().to("log:hello");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+The `Exchange` is intercepted before each processing step; that means
+it will be intercepted before
+
+- `.to("bean:validateOrder")`
+
+- `.to("bean:processOrder")`
+
+So in this example we intercept the `Exchange` twice.
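The "once per processing step" behavior can be sketched in plain Java: an interceptor run before each step of a two-step pipeline fires exactly twice (illustration only; the class and method names are made up):

```java
import java.util.List;

public class InterceptCountSketch {

    // Runs the interceptor before each step and returns how often it fired.
    static int runWithIntercept(List<Runnable> steps, Runnable interceptor) {
        int fired = 0;
        for (Runnable step : steps) {
            interceptor.run();   // intercepted before every processing step
            fired++;
            step.run();
        }
        return fired;
    }

    public static void main(String[] args) {
        int n = runWithIntercept(
            List.of(() -> {}, () -> {}),          // validateOrder / processOrder stand-ins
            () -> System.out.println("hello"));   // the log:hello interceptor
        System.out.println(n);   // 2: once per step
    }
}
```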
+
+## Controlling when to intercept using a predicate
+
+If you only want to intercept "sometimes", then you can use a
+[predicate](#manual::predicate.adoc).
+
+For instance, in the sample below, we only intercept if the message body
+contains the word Hello:
+
+Java
+intercept().when(body().contains("Hello")).to("mock:intercepted");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder");
+
+XML
+
+
+
+
+ ${in.body} contains 'Hello'
+
+
+
+
+
+
+
+
+
+
+
+
+## Stop routing after being intercepted
+
+It is also possible to stop routing after being intercepted. Suppose
+that when the message body contains the word Hello we want to log and
+stop; then we can do:
+
+Java
+intercept().when(body().contains("Hello"))
+.to("log:test")
+.stop(); // stop continue routing
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder");
+
+XML
+
+
+
+
+ ${body} contains 'Hello'
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Using `interceptFrom`
+
+The `interceptFrom` intercepts any incoming Exchange, in any route (it
+intercepts all the [`from`](#from-eip.adoc) EIPs).
+
+This allows you to apply custom behavior to received Exchanges. You can
+provide a specific URI for a given endpoint, in which case it only
+applies to that particular route.
+
+So let’s start with the logging example. We want to log all the incoming
+messages, so we use `interceptFrom` to route to the
+[Log](#ROOT:log-component.adoc) component.
+
+Java
+interceptFrom()
+.to("log:incoming");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+If you want to apply this only to specific endpoints, such as all JMS
+endpoints, you can do:
+
+Java
+interceptFrom("jms\*")
+.to("log:incoming");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder");
+
+ from("file:inbox")
+ .to("ftp:someserver/backup")
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+In this example, only messages from the JMS route are intercepted,
+because we specified the pattern `jms*` (a wildcard) in the
+`interceptFrom`.
+
+The pattern syntax is documented in more detail later.
+
+# Using `interceptSendToEndpoint`
+
+You can also intercept when Apache Camel is sending a message to an
+[endpoint](#message-endpoint.adoc).
+
+This can be used to do some custom processing before the message is sent
+to the intended destination.
+
+The interceptor can also be configured to not send to the destination
+(`skip`) which means the message is detoured instead.
+
+A [Predicate](#manual::predicate.adoc) can also be used to control when
+to intercept, which has been previously covered.
+
+The `afterUri` option is used when you need to process the response
+message from the intended destination. This functionality was added to
+the interceptor later, by way of sending to yet another
+[endpoint](#message-endpoint.adoc).
+
+Let’s start with a basic example, where we want to intercept when a
+message is being sent to [kafka](#ROOT:kafka-component.adoc):
+
+Java
+interceptSendToEndpoint("kafka\*")
+.to("bean:beforeKafka");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder")
+ .to("kafka:order");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+When you also want to process the message after it has been sent to the
+intended destination, then the example is slightly *odd* because you
+have to use the `afterUri` as shown:
+
+Java
+interceptSendToEndpoint("kafka\*")
+.to("bean:beforeKafka")
+.afterUri("bean:afterKafka");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder")
+ .to("kafka:order");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Skip sending to original endpoint
+
+Sometimes you want to **intercept and skip** sending messages to a
+specific endpoint.
+
+For example, to avoid sending any message to kafka, but detour them to a
+[mock](#ROOT:mock-component.adoc) endpoint, it can be done as follows:
+
+Java
+interceptSendToEndpoint("kafka\*").skipSendToOriginalEndpoint()
+.to("mock:kafka");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder")
+ .to("kafka:order");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Conditional skipping sending to endpoint
+
+You can combine both a [predicate](#manual::predicate.adoc) and skip
+sending to the original endpoint. For example, suppose you have some
+"test" messages that sometimes occur, and that you want to avoid sending
+these messages to a downstream kafka system, then this can be done as
+shown:
+
+Java
+interceptSendToEndpoint("kafka\*").skipSendToOriginalEndpoint()
+.when(simple("${header.biztype} == 'TEST'"))
+.log("TEST message detected - is NOT sent to kafka");
+
+ from("jms:queue:order")
+ .to("bean:validateOrder")
+ .to("bean:processOrder")
+ .to("kafka:order");
+
+XML
+
+
+
+ ${header.biztype} == 'TEST'
+
+
+
+
+
+
+
+
+
+
+
+
+# Intercepting endpoints using pattern matching
+
+The `interceptFrom` and `interceptSendToEndpoint` support endpoint
+pattern matching by the following rules in the given order:
+
+- match by exact URI name
+
+- match by wildcard
+
+- match by regular expression
+
+## Intercepting when matching by exact URI
+
+This matches only a specific endpoint with exactly the same URI.
+
+For example, to intercept messages being sent to a specific JMS queue,
+you can do:
+
+ interceptSendToEndpoint("jms:queue:cheese").to("log:smelly");
+
+## Intercepting when matching endpoints by wildcard
+
+Match by wildcard allows you to match a range of endpoints or all of a
+given type. For instance, `file:*` will match all
+[file-based](#ROOT:file-component.adoc) endpoints.
+
+ interceptFrom("file:*").to("log:from-file");
+
+Wildcard matching works as follows: the pattern ends with a `\*`, and a
+URI matches if it starts with the text before the `\*`.
+
+For example, you can be more specific, to only match for files from
+specific folders like:
+
+ interceptFrom("file:order/inbox/*").to("log:new-file-orders");
+
+## Intercepting when matching endpoints by regular expression
+
+Match by regular expression is just like match by wildcard but using
+regex instead. So if we want to intercept incoming messages from gold
+and silver JMS queues, we can do:
+
+ interceptFrom("jms:queue:(gold|silver)").to("seda:handleFast");
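The three matching rules, applied in the order listed above, can be sketched in plain Java (a simplified illustration; this is not Camel's actual endpoint-matching code, and the class name is made up):

```java
public class PatternMatchSketch {

    // Match by exact URI first, then by wildcard (pattern ends with '*'),
    // and finally by treating the pattern as a regular expression.
    static boolean matches(String pattern, String uri) {
        if (pattern.equals(uri)) {
            return true;                               // 1. exact URI
        }
        if (pattern.endsWith("*")) {
            String prefix = pattern.substring(0, pattern.length() - 1);
            return uri.startsWith(prefix);             // 2. wildcard
        }
        return uri.matches(pattern);                   // 3. regular expression
    }

    public static void main(String[] args) {
        System.out.println(matches("jms:queue:cheese", "jms:queue:cheese"));      // true (exact)
        System.out.println(matches("jms*", "jms:queue:order"));                   // true (wildcard)
        System.out.println(matches("jms:queue:(gold|silver)", "jms:queue:gold")); // true (regex)
        System.out.println(matches("file:*", "jms:queue:order"));                 // false
    }
}
```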
diff --git a/camel-irc.md b/camel-irc.md
index 94793f5f68fedfeba88eeb4a920a0a8e74fa25e4..65462ea68817871e98c6e6eaf84420afbbff9058 100644
--- a/camel-irc.md
+++ b/camel-irc.md
@@ -18,9 +18,11 @@ for this component:
-# SSL Support
+# Usage
-## Using the JSSE Configuration Utility
+## SSL Support
+
+### Using the JSSE Configuration Utility
The IRC component supports SSL/TLS configuration through the [Camel JSSE
Configuration Utility](#manual::camel-configuration-utilities.adoc).
@@ -76,7 +78,9 @@ If you need to provide your own custom trust manager, use the
ircs:host[:port]/#room?username=user&password=pass&trustManager=#referenceToMyTrustManagerBean
-# Using keys
+# Examples
+
+## Using keys
Some IRC rooms require you to provide a key to be able to join that
channel. The key is just a secret word.
@@ -86,7 +90,7 @@ key.
irc:nick@irc.server.org?channels=#chan1,#chan2,#chan3&keys=chan1Key,,chan3key
-# Getting a list of channel users
+## Getting a list of channel users
Using the `namesOnJoin` option one can invoke the IRC-`NAMES` command
after the component has joined a channel. The server will reply with
@@ -103,7 +107,7 @@ the channel:
.filter(header("irc.num").isEqualTo("353"))
.to("mock:result").stop();
-# Sending to a different channel or a person
+## Sending to a different channel or a person
If you need to send messages to a different channel (or a person) which
is not defined on IRC endpoint, you can specify a different destination
@@ -118,14 +122,14 @@ You can specify the destination in the following header:
-
+
-
+
irc.sendTo
String
The channel (or the person)
diff --git a/camel-ironmq.md b/camel-ironmq.md
index 0e1da3c819374beeb2f96f5ba6f8d25aef72bc65..acb541adeb9806c39e6e31de68b7b76c11c2de32 100644
--- a/camel-ironmq.md
+++ b/camel-ironmq.md
@@ -30,13 +30,17 @@ for this component:
Where `queueName` identifies the IronMQ queue you want to publish or
consume messages from.
-# Message Body
+# Usage
+
+## Message Body
The message body should be either a String or an array of Strings. In
the latter case, the batch of strings is sent to IronMQ as one request,
creating one message per element in the array.
-# Consumer example
+# Examples
+
+## Consumer example
Consume 50 messages per poll from the queue `testqueue` on aws eu, and
save the messages to files.
@@ -44,7 +48,7 @@ save the messages to files.
from("ironmq:testqueue?ironMQCloud=https://mq-aws-eu-west-1-1.iron.io&projectId=myIronMQProjectid&token=myIronMQToken&maxMessagesPerPoll=50")
.to("file:somefolder");
-# Producer example
+## Producer example
Dequeue from activemq jms and enqueue the messages on IronMQ.
@@ -71,9 +75,9 @@ Dequeue from activemq jms and enqueue the messages on IronMQ.
|ironMQCloud|IronMq Cloud url. Urls for public clusters: https://mq-aws-us-east-1-1.iron.io (US) and https://mq-aws-eu-west-1-1.iron.io (EU)|https://mq-aws-us-east-1-1.iron.io|string|
|preserveHeaders|Should message headers be preserved when publishing messages. This will add the Camel headers to the Iron MQ message as a json payload with a header list, and a message body. Useful when Camel is both consumer and producer.|false|boolean|
|projectId|IronMQ projectId||string|
-|batchDelete|Should messages be deleted in one batch. This will limit the number of api requests since messages are deleted in one request, instead of one pr. exchange. If enabled care should be taken that the consumer is idempotent when processing exchanges.|false|boolean|
+|batchDelete|Should messages be deleted in one batch. This will limit the number of api requests since messages are deleted in one request, instead of one per exchange. If enabled care should be taken that the consumer is idempotent when processing exchanges.|false|boolean|
|concurrentConsumers|The number of concurrent consumers.|1|integer|
-|maxMessagesPerPoll|Number of messages to poll pr. call. Maximum is 100.|1|integer|
+|maxMessagesPerPoll|Number of messages to poll per call. Maximum is 100.|1|integer|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|timeout|After timeout (in seconds), item will be placed back onto the queue.|60|integer|
|wait|Time in seconds to wait for a message to become available. This enables long polling. Default is 0 (does not wait), maximum is 30.||integer|
diff --git a/camel-jackson-dataformat.md b/camel-jackson-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a7e8a1709ab4e90b8fe64ae99b74e277d7052be
--- /dev/null
+++ b/camel-jackson-dataformat.md
@@ -0,0 +1,76 @@
+# Jackson-dataformat.md
+
+**Since Camel 2.0**
+
+Jackson is a Data Format that uses the [Jackson
+Library](https://github.com/FasterXML/jackson-core).
+
+ from("activemq:My.Queue").
+ marshal().json(JsonLibrary.Jackson).
+ to("mqseries:Another.Queue");
+
+# Jackson Options
+
+# Usage
+
+## 2 and 4 bytes characters
+
+By default, Jackson works with UTF-8 using an optimized JSON generator
+that only supports UTF-8. Users that need 2-byte or 4-byte characters
+(such as Japanese) need to turn on `useWriter=true` in the Camel
+dataformat, to use another JSON generator that lets `java.io.Writer`
+handle character encodings.
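To see why such text needs special handling, consider how many bytes different characters occupy once encoded (the 2-byte/4-byte wording above refers to UTF-16 code units; in UTF-8, Japanese characters typically take 3 bytes and supplementary characters 4). A quick stdlib check:

```java
import java.nio.charset.StandardCharsets;

public class Utf8LengthSketch {

    static int utf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("A"));            // 1 byte  (ASCII)
        System.out.println(utf8Length("\u3042"));       // 3 bytes (Japanese hiragana 'a')
        System.out.println(utf8Length("\uD83D\uDE00")); // 4 bytes (emoji, a surrogate pair in UTF-16)
    }
}
```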
+
+## Using custom ObjectMapper
+
+You can configure `JacksonDataFormat` to use a custom `ObjectMapper` in
+case you need more control of the mapping configuration.
+
+If you set up a single `ObjectMapper` in the registry, then Camel will
+automatically look it up and use it. For example, if you use Spring
+Boot with Spring MVC enabled, Spring Boot can provide a default
+`ObjectMapper` for you. This allows Camel to detect that there is one
+bean of `ObjectMapper` class type in the Spring Boot bean registry and
+then use it. When this happens, you should see an `INFO` log from
+Camel.
+
+## Using Jackson for automatic type conversion
+
+The `camel-jackson` module allows integrating Jackson as a [Type
+Converter](#manual::type-converter.adoc).
+
+This gives a set of out-of-the-box converters to/from the Jackson type
+`JSonNode`, such as converting from `JSonNode` to `String` or vice
+versa.
+
+### Enabling more type converters and support for POJOs
+
+POJO conversion support for `camel-jackson` must be enabled explicitly,
+which is done by setting the following options on the `CamelContext`
+global options, as shown:
+
+ // Enable Jackson JSON type converter for more types.
+ camelContext.getGlobalOptions().put("CamelJacksonEnableTypeConverter", "true");
+ // Allow Jackson JSON to convert to pojo types also
+ // (by default, Jackson only converts to String and other simple types)
+ getContext().getGlobalOptions().put("CamelJacksonTypeConverterToPojo", "true");
+
+The `camel-jackson` type converter integrates with
+[JAXB](#dataformats:jaxb-dataformat.adoc) which means you can annotate
+POJO class with `JAXB` annotations that Jackson can use. You can also
+use Jackson’s own annotations in your POJO classes.
+
+# Dependencies
+
+To use Jackson in your Camel routes, you need to add the dependency on
+**camel-jackson**, which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest \& greatest release:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jackson</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
diff --git a/camel-jacksonXml-dataformat.md b/camel-jacksonXml-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..6548ff4e1368f2110149b0a8933102833e2b259d
--- /dev/null
+++ b/camel-jacksonXml-dataformat.md
@@ -0,0 +1,277 @@
+# JacksonXml-dataformat.md
+
+**Since Camel 2.16**
+
+Jackson XML is a Data Format that uses the [Jackson
+library](https://github.com/FasterXML/jackson/) with the [XMLMapper
+extension](https://github.com/FasterXML/jackson-dataformat-xml) to
+unmarshal an XML payload into Java objects or to marshal Java objects
+into an XML payload.
+
+If you are familiar with Jackson, this XML data format behaves in the
+same way as its JSON counterpart, and thus can be used with classes
+annotated for JSON serialization/deserialization.
+
+This extension also mimics [JAXB’s "Code first"
+approach](https://github.com/FasterXML/jackson-dataformat-xml/blob/master/README.md).
+
+This data format relies on
+[Woodstox](https://github.com/FasterXML/Woodstox) (especially for
+features like pretty printing), a fast and efficient XML processor.
+
+ from("activemq:My.Queue").
+ unmarshal().jacksonXml().
+ to("mqseries:Another.Queue");
+
+# JacksonXML Options
+
+# Usage
+
+## 2 and 4 bytes characters
+
+By default, Jackson works with UTF-8, using an optimized generator that
+only supports UTF-8. Users who need 2-byte or 4-byte characters (such as
+Japanese) need to turn on `useWriter=true` in the Camel data format, to
+use another generator that lets `java.io.Writer` handle character
+encodings.
+
+## Using Jackson XML in Spring DSL
+
+When using Data Format in Spring DSL, you need to declare the data
+formats first. This is done in the `dataFormats` XML tag:
+
+    <dataFormats>
+        <jacksonXml id="jack"/>
+    </dataFormats>
+And then you can refer to this id in the route:
+
+    <route>
+        <from uri="direct:start"/>
+        <marshal>
+            <custom ref="jack"/>
+        </marshal>
+        <to uri="mock:result"/>
+    </route>
+
+## Excluding POJO fields from marshalling
+
+When marshalling a POJO to XML, you might want to exclude certain fields
+from the XML output. With Jackson, you can use [JSON
+views](https://github.com/FasterXML/jackson-annotations/blob/master/src/main/java/com/fasterxml/jackson/annotation/JsonView.java)
+to accomplish this. First, create one or more marker classes.
+
+Use the marker classes with the `@JsonView` annotation to
+include/exclude certain fields. The annotation also works on getters.
+
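+A minimal sketch of such marker classes and an annotated POJO (the class
+and field names are assumptions for illustration):
+
+```java
+import com.fasterxml.jackson.annotation.JsonView;
+
+// marker classes used only to select fields for serialization
+class Views {
+    static class Age {}
+    static class Weight {}
+}
+
+class TestPojoView {
+    // included when marshalling with the Views.Age view
+    @JsonView(Views.Age.class)
+    int age = 30;
+
+    // included only with the Views.Weight view, excluded otherwise
+    @JsonView(Views.Weight.class)
+    int weight = 70;
+}
+```
+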
+Finally, use the Camel `JacksonXMLDataFormat` to marshall the above POJO
+to XML.
+
+Note that the `weight` field will be missing in the resulting XML, as it
+is excluded from the chosen view.
+## Include/exclude fields using the `jsonView` attribute with `JacksonXMLDataFormat`
+
+As an example of using this attribute, instead of:
+
+ JacksonXMLDataFormat ageViewFormat = new JacksonXMLDataFormat(TestPojoView.class, Views.Age.class);
+
+ from("direct:inPojoAgeView")
+ .marshal(ageViewFormat);
+
+Directly specify your [JSON
+view](https://github.com/FasterXML/jackson-annotations/blob/master/src/main/java/com/fasterxml/jackson/annotation/JsonView.java)
+inside the Java DSL as:
+
+ from("direct:inPojoAgeView")
+ .marshal().jacksonXml(TestPojoView.class, Views.Age.class);
+
+And the same in XML DSL:
+
+    <route>
+        <from uri="direct:inPojoAgeView"/>
+        <marshal>
+            <jacksonXml unmarshalType="org.apache.camel.component.jacksonxml.TestPojoView"
+                        jsonView="org.apache.camel.component.jacksonxml.Views$Age"/>
+        </marshal>
+    </route>
+
+## Setting serialization include option
+
+If you want to marshal a POJO to XML, and the POJO has some fields with
+null values that you want to skip, then you can either set an annotation
+on the POJO:
+
+ @JsonInclude(Include.NON_NULL)
+ public class MyPojo {
+ ...
+ }
+
+However, this requires you to include that annotation in your POJO
+source code. Alternatively, you can configure the Camel
+`JacksonXMLDataFormat` to set the `include` option, as shown below:
+
+ JacksonXMLDataFormat format = new JacksonXMLDataFormat();
+ format.setInclude("NON_NULL");
+
+Or from XML DSL you configure this as:
+
+    <dataFormats>
+        <jacksonXml id="jacksonxml" include="NON_NULL"/>
+    </dataFormats>
+
+## Unmarshalling from XML to POJO with dynamic class name
+
+If you use Jackson to unmarshal XML to a POJO, then you can specify a
+header in the message that indicates which class name to unmarshal to.
+The header key is `CamelJacksonUnmarshalType`. If that header is present
+in the message, then Jackson will use its value as the fully qualified
+class name of the POJO to unmarshal the XML payload to.
+
+For JMS end users, there is the `JMSType` header from the JMS spec that
+can indicate this as well. To enable support for `JMSType`, you need to
+turn it on in the Jackson data format, as shown:
+
+    JacksonXMLDataFormat format = new JacksonXMLDataFormat();
+    format.setAllowJmsType(true);
+
+Or from XML DSL you configure this as:
+
+    <dataFormats>
+        <jacksonXml id="jacksonxml" allowJmsType="true"/>
+    </dataFormats>
+
+## Unmarshalling from XML to `List<Map>` or `List<Pojo>`
+
+If you are using Jackson to unmarshal XML to a list of map/POJO, you can
+now specify this by setting `useList="true"` or use the
+`org.apache.camel.component.jacksonxml.ListJacksonXMLDataFormat`. For
+example, with Java, you can do as shown below:
+
+ JacksonXMLDataFormat format = new ListJacksonXMLDataFormat();
+ // or
+ JacksonXMLDataFormat format = new JacksonXMLDataFormat();
+ format.useList();
+ // and you can specify the POJO class type also
+ format.setUnmarshalType(MyPojo.class);
+
+And if you use XML DSL then you configure to use a list using `useList`
+attribute as shown below:
+
+    <unmarshal>
+        <jacksonXml useList="true"/>
+    </unmarshal>
+
+And you can specify the POJO type also:
+
+    <unmarshal>
+        <jacksonXml useList="true" unmarshalType="com.foo.MyPojo"/>
+    </unmarshal>
+
+## Using custom Jackson modules
+
+You can use custom Jackson modules by specifying their class names using
+the `moduleClassNames` option, as shown below.
+
+    <dataFormats>
+        <jacksonXml id="jacksonxml" moduleClassNames="com.foo.MyModule"/>
+    </dataFormats>
+
+When using `moduleClassNames`, the custom Jackson modules are not
+configured, but created using the default constructor and used as-is. If
+a custom module needs any custom configuration, then an instance of the
+module can be created and configured, and then use `moduleRefs` to refer
+to the module as shown below:
+
+    <bean id="myJacksonModule" class="com.foo.MyModule">
+        <!-- configure the module as you want -->
+    </bean>
+
+    <dataFormats>
+        <jacksonXml id="jacksonxml" moduleRefs="myJacksonModule"/>
+    </dataFormats>
+
+
+Multiple modules can be specified separated by comma, such as
+`moduleRefs="myJacksonModule,myOtherModule"`.
+
+## Enabling or disabling features using Jackson
+
+Jackson XML has a number of features that you can enable or disable,
+which its `XmlMapper` uses. For example, to disable failing on unknown
+properties when unmarshalling, you can configure this using the
+`disableFeatures` option:
+
+    <dataFormats>
+        <jacksonXml id="jacksonxml" disableFeatures="FAIL_ON_UNKNOWN_PROPERTIES"/>
+    </dataFormats>
+
+You can disable multiple features by separating the values with commas.
+The values for the features must be the names of the enums from the
+following Jackson enum classes:
+
+- `com.fasterxml.jackson.databind.SerializationFeature`
+
+- `com.fasterxml.jackson.databind.DeserializationFeature`
+
+- `com.fasterxml.jackson.databind.MapperFeature`
+
+- `com.fasterxml.jackson.dataformat.xml.deser.FromXmlParser.Feature`
+
+To enable a feature, use the `enableFeatures` option instead.
+
+From Java code, you can use the type-safe methods from the
+camel-jacksonxml module:
+
+    JacksonXMLDataFormat df = new JacksonXMLDataFormat(MyPojo.class);
+    df.disableFeature(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
+    df.disableFeature(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES);
+
+## Converting Maps to POJO using Jackson
+
+Jackson's `XmlMapper` can be used to convert maps to POJO objects. The
+Jackson component comes with a data converter that can be used to
+convert `java.util.Map` instances to non-String, non-primitive and
+non-Number objects.
+
+    Map<String, Object> invoiceData = new HashMap<>();
+    invoiceData.put("netValue", 500);
+    producerTemplate.sendBody("direct:mapToInvoice", invoiceData);
+    ...
+    // Later in the processor
+    Invoice invoice = exchange.getIn().getBody(Invoice.class);
+
+If there is a single `XmlMapper` instance available in the Camel
+registry, it will be used by the converter to perform the conversion.
+Otherwise, the default mapper will be used.
+
+## Formatted XML marshalling (pretty-printing)
+
+Using the `prettyPrint` option one can output a well-formatted XML while
+marshalling:
+
+    <dataFormats>
+        <jacksonXml id="jack" prettyPrint="true"/>
+    </dataFormats>
+
+And in Java DSL:
+
+ from("direct:inPretty").marshal().jacksonXml(true);
+
+Please note that there are 5 different overloaded `jacksonXml()` DSL
+methods which support the `prettyPrint` option in combination with other
+settings for `unmarshalType`, `jsonView` etc.
+
+# Dependencies
+
+To use Jackson XML in your Camel routes, you need to add the dependency
+on **camel-jacksonxml** which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest \& greatest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jacksonxml</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
diff --git a/camel-jasypt.md b/camel-jasypt.md
new file mode 100644
index 0000000000000000000000000000000000000000..babafbbad49983c338437b14925c520368c0aa14
--- /dev/null
+++ b/camel-jasypt.md
@@ -0,0 +1,197 @@
+# Jasypt.md
+
+**Since Camel 2.5**
+
+[Jasypt](http://www.jasypt.org/) is a simplified encryption library that
+makes encryption and decryption easy. Camel integrates with Jasypt to
+allow sensitive information in
+[Properties](#ROOT:properties-component.adoc) files to be encrypted. By
+dropping **`camel-jasypt`** on the classpath those encrypted values will
+automatically be decrypted on-the-fly by Camel. This ensures that human
+eyes can’t easily spot sensitive information such as usernames and
+passwords.
+
+If you are using Maven, you need to add the following dependency to your
+`pom.xml` for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jasypt</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+
+# Tooling
+
+The Jasypt component is a runnable JAR that provides a command line
+utility to encrypt or decrypt values.
+
+The usage documentation can be output to the console to describe the
+syntax and options it provides:
+
+    Apache Camel Jasypt takes the following options
+
+    -h or -help = Displays the help screen
+    -c or -command = Command either encrypt or decrypt
+    -p or -password = Password to use
+    -i or -input = Text to encrypt or decrypt
+    -a or -algorithm = Optional algorithm to use
+    -rsga or -randomSaltGeneratorAlgorithm = Optional random salt generator algorithm to use
+    -riga or -randomIvGeneratorAlgorithm = Optional random iv generator algorithm to use
+
+A simple way of running the tool is with
+[JBang](https://www.jbang.dev/).
+
+For example, to encrypt the value `tiger`, you can use the following
+parameters. Make sure to specify the version of camel-jasypt that you
+want to use.
+
+ $ jbang org.apache.camel:camel-jasypt: -c encrypt -p secret -i tiger
+
+Which outputs the following result:
+
+ Encrypted text: qaEEacuW7BUti8LcMgyjKw==
+
+This means the encrypted representation `qaEEacuW7BUti8LcMgyjKw==` can
+be decrypted back to `tiger` if you know the *master* password which was
+`secret`.
+If you run the tool again, the encryption will produce a different
+result each time. But decrypting any of these values will always return
+the correct original value.
+
+You can test decrypting the value by running the tooling using the
+following parameters:
+
+ $ jbang org.apache.camel:camel-jasypt: -c decrypt -p secret -i qaEEacuW7BUti8LcMgyjKw==
+
+Which outputs the following result:
+
+ Decrypted text: tiger
+
+The idea is to then use the encrypted values in your
+[Properties](#ROOT:properties-component.adoc) files. For example:
+
+ # Encrypted value for 'tiger'
+ my.secret = ENC(qaEEacuW7BUti8LcMgyjKw==)
+
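+The placeholder is then used like any other property; a minimal sketch
+(the endpoint URI is an assumption for illustration):
+
+```xml
+<route>
+  <from uri="direct:start"/>
+  <!-- {{my.secret}} is decrypted on-the-fly by camel-jasypt -->
+  <to uri="ftp://myserver/inbox?password={{my.secret}}"/>
+</route>
+```
+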
+# Protecting the master password
+
+The *master* password used by Jasypt must be provided, so that it is
+capable of decrypting the values. However, having this *master* password
+out in the open may not be an ideal solution. Therefore, you can provide
+it as a JVM system property or as an OS environment setting. If you
+decide to do so, then the `password` option supports prefixes that
+dictate this:
+
+- `sysenv:` means to look up the OS system environment with the given
+ key.
+
+- `sys:` means to look up a JVM system property.
+
+For example, you could provide the password before you start the
+application:
+
+ $ export CAMEL_ENCRYPTION_PASSWORD=secret
+
+Then start the application, such as running the start script.
+
+When the application is up and running, you can unset the environment
+
+ $ unset CAMEL_ENCRYPTION_PASSWORD
+
+On runtimes like Spring Boot and Quarkus, you can configure a password
+property in `application.properties` as follows.
+
+ password=sysenv:CAMEL_ENCRYPTION_PASSWORD
+
+Or if configuring `JasyptPropertiesParser` manually, you can set the
+password like this.
+
+ jasyptPropertiesParser.setPassword("sysenv:CAMEL_ENCRYPTION_PASSWORD");
+
+# Example configuration
+
+**Java**
+On the Spring Boot and Quarkus runtimes, Camel Jasypt can be configured
+via configuration properties. Refer to their respective documentation
+pages for more information.
+
+Else, in Java DSL you need to configure Jasypt as a
+`JasyptPropertiesParser` instance and set it on the
+[Properties](#ROOT:properties-component.adoc) component as shown below:
+
+ // create the jasypt properties parser
+ JasyptPropertiesParser jasypt = new JasyptPropertiesParser();
+ // set the master password (see above for how to do this in a secure way)
+ jasypt.setPassword("secret");
+
+ // create the properties' component
+ PropertiesComponent pc = new PropertiesComponent();
+ pc.setLocation("classpath:org/apache/camel/component/jasypt/secret.properties");
+ // and use the jasypt properties parser, so we can decrypt values
+ pc.setPropertiesParser(jasypt);
+    // and enable nested placeholder support
+ pc.setNestedPlaceholder(true);
+
+ // add properties component to camel context
+ context.setPropertiesComponent(pc);
+
+It is possible to configure custom algorithms on the
+`JasyptPropertiesParser` like this:
+
+ JasyptPropertiesParser jasyptPropertiesParser = new JasyptPropertiesParser();
+
+ jasyptPropertiesParser.setAlgorithm("PBEWithHmacSHA256AndAES_256");
+ jasyptPropertiesParser.setRandomSaltGeneratorAlgorithm("PKCS11");
+ jasyptPropertiesParser.setRandomIvGeneratorAlgorithm("PKCS11");
+
+The properties file `secret.properties` will contain your encrypted
+configuration values, such as shown below. Notice how the password value
+is encrypted and wrapped as `ENC(value here)`.
+
+ my.secret.password=ENC(bsW9uV37gQ0QHFu7KO03Ww==)
+
+**XML (Spring)**
+In Spring XML, you need to configure the `JasyptPropertiesParser`, which
+is shown below. Then the Camel
+[Properties](#ROOT:properties-component.adoc) component is told to use
+`jasypt` as the properties parser, which means Jasypt has its chance to
+decrypt values looked up in the properties file.
+
+    <!-- define the jasypt properties parser with the given password to be used -->
+    <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser">
+        <property name="password" value="secret"/>
+    </bean>
+
+    <!-- define the Camel properties component -->
+    <bean id="properties" class="org.apache.camel.component.properties.PropertiesComponent">
+        <property name="location" value="classpath:org/apache/camel/component/jasypt/secret.properties"/>
+        <!-- and use the jasypt properties parser, so we can decrypt values -->
+        <property name="propertiesParser" ref="jasypt"/>
+    </bean>
+
+The [Properties](#ROOT:properties-component.adoc) component can also be
+inlined inside the `<camelContext>` tag, which is shown below. Notice how
+we use the `propertiesParserRef` attribute to refer to Jasypt.
+
+    <!-- define the jasypt properties parser with the given password to be used -->
+    <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser">
+        <property name="password" value="secret"/>
+    </bean>
+
+    <camelContext xmlns="http://camel.apache.org/schema/spring">
+        <!-- define the Camel properties placeholder, and let it use jasypt -->
+        <propertyPlaceholder id="properties"
+                             location="classpath:org/apache/camel/component/jasypt/secret.properties"
+                             propertiesParserRef="jasypt"/>
+    </camelContext>
+
diff --git a/camel-java-joor-dsl.md b/camel-java-joor-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..043bc0e72169151371a2f307aec9eb836cbfadcf
--- /dev/null
+++ b/camel-java-joor-dsl.md
@@ -0,0 +1,44 @@
+# Java-joor-dsl.md
+
+**Since Camel 3.9**
+
+The `java-joor-dsl` is used for runtime compiling Java routes in an
+existing running Camel integration. This was invented for Camel K and
+later ported to Apache Camel.
+
+This means that Camel will load the `.java` source during startup and
+compile it to Java byte code as `.class`, which is then loaded via a
+class loader and behaves like regular compiled Java routes.
+
+# Example
+
+The following `MyRoute.java` source file:
+
+**MyRoute.java**
+
+ import org.apache.camel.builder.RouteBuilder;
+
+ public class MyRoute extends RouteBuilder {
+
+ @Override
+ public void configure() throws Exception {
+ from("timer:tick")
+ .setBody()
+ .constant("Hello Camel K!")
+ .to("log:info");
+ }
+ }
+
+Can then be loaded and run with Camel CLI or Camel K.
+
+**Running with Camel K**
+
+ kamel run MyRoute.java
+
+**Running with Camel CLI**
+
+ camel run MyRoute.java
+
+# See Also
+
+See [DSL](#manual:ROOT:dsl.adoc)
diff --git a/camel-java-language.md b/camel-java-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..96b6e7523cdb21badef18574de68370bbfffd202
--- /dev/null
+++ b/camel-java-language.md
@@ -0,0 +1,385 @@
+# Java-language.md
+
+**Since Camel 4.3**
+
+The Java language (uses jOOR library to compile Java code) allows using
+Java code in your Camel expression, with some limitations.
+
+The jOOR library integrates with the Java compiler and performs runtime
+compilation of Java code.
+
+# Java Options
+
+# Usage
+
+## Variables
+
+The Java language allows the following variables to be used in the
+script:
+
+| Variable | Type | Description |
+|----------|------|-------------|
+| context | CamelContext | The CamelContext |
+| exchange | Exchange | The Camel Exchange |
+| message | Message | The Camel message |
+| body | Object | The message body |
+
+## Functions
+
+The Java language allows the following functions to be used in the
+script:
+
+| Function | Description |
+|----------|-------------|
+| bodyAs(type) | To convert the body to the given type. |
+| headerAs(name, type) | To convert the header with the name to the given type. |
+| headerAs(name, defaultValue, type) | To convert the header with the name to the given type. If no header exists, then use the given default value. |
+| exchangePropertyAs(name, type) | To convert the exchange property with the name to the given type. |
+| exchangePropertyAs(name, defaultValue, type) | To convert the exchange property with the name to the given type. If no exchange property exists, then use the given default value. |
+| optionalBodyAs(type) | To convert the body to the given type, returned wrapped in `java.util.Optional`. |
+| optionalHeaderAs(name, type) | To convert the header with the name to the given type, returned wrapped in `java.util.Optional`. |
+| optionalExchangePropertyAs(name, type) | To convert the exchange property with the name to the given type, returned wrapped in `java.util.Optional`. |
+
+These functions are convenient for getting the message body, header or
+exchange properties as a specific Java type.
+
+For example, to get the message body as a `com.foo.MyUser` type, we can
+do as follows:
+
+ var user = bodyAs(com.foo.MyUser.class);
+
+You can omit *.class* to make the function a little smaller:
+
+ var user = bodyAs(com.foo.MyUser);
+
+The type must be a fully qualified class type, but that can be
+inconvenient to type all the time. In such a situation, you can
+configure an import in the `camel-joor.properties` file as shown below:
+
+ import com.foo.MyUser;
+
+And then the function can be shortened:
+
+ var user = bodyAs(MyUser);
+
+## Dependency Injection
+
+The Camel Java language allows dependency injection by referring to
+beans by their id from the Camel registry. For optimization purposes,
+the beans are injected once in the constructor and are scoped as
+*singletons*. This requires the injected beans to be *thread-safe*, as
+they will be reused for all processing.
+
+In the Java code you declare the injected beans using the syntax
+`#bean:beanId`.
+
+For example, suppose we have the following bean:
+
+ public class MyEchoBean {
+
+ public String echo(String str) {
+ return str + str;
+ }
+
+ public String greet() {
+ return "Hello ";
+ }
+ }
+
+And this bean is registered with the name `myEcho` in the Camel
+registry.
+
+The Java code can then inject this bean directly in the script where the
+bean is in use:
+
+ from("direct:start")
+ .transform().java("'Hello ' + #bean:myEcho.echo(bodyAs(String))")
+ .to("mock:result");
+
+Now this code may seem a bit magic, but what happens is that the
+`myEcho` bean is injected via a constructor, and then called directly in
+the script, so it is as fast as possible.
+
+Under the hood, Camel Java generates the following source code compiled
+once:
+
+ public class JoorScript1 implements org.apache.camel.language.joor.JoorMethod {
+
+ private MyEchoBean myEcho;
+
+ public JoorScript1(CamelContext context) throws Exception {
+ myEcho = context.getRegistry().lookupByNameAndType("myEcho", MyEchoBean.class);
+ }
+
+ @Override
+ public Object evaluate(CamelContext context, Exchange exchange, Message message, Object body, Optional optionalBody) throws Exception {
+ return "Hello " + myEcho.echo(bodyAs(exchange, String.class));
+ }
+ }
+
+You can also store a reference to the bean in a variable, which more
+closely resembles how you would code in Java:
+
+ from("direct:start")
+ .transform().java("var bean = #bean:myEcho; return 'Hello ' + bean.echo(bodyAs(String))")
+ .to("mock:result");
+
+Notice how we declare the bean as if it is a local variable via
+`var bean = #bean:myEcho`. When doing this we must use a different name
+as `myEcho` is the variable used by the dependency injection. Therefore,
+we use *bean* as name in the script.
+
+## Auto imports
+
+The Java language will automatically import from:
+
+ import java.util.*;
+ import java.util.concurrent.*;
+ import java.util.stream.*;
+ import org.apache.camel.*;
+ import org.apache.camel.util.*;
+
+## Configuration file
+
+You can configure the jOOR language in the `camel-joor.properties` file
+which by default is loaded from the root classpath. You can specify a
+different location with the `configResource` option on the Java
+language.
+
+For example, you can add additional imports in the
+`camel-joor.properties` file by adding:
+
+ import com.foo.MyUser;
+ import com.bar.*;
+ import static com.foo.MyHelper.*;
+
+You can also add aliases (`key=value`) where an alias will be used as a
+shorthand replacement in the code.
+
+ echo()=bodyAs(String) + bodyAs(String)
+
+Which allows using `echo()` in the jOOR language script such as:
+
+ from("direct:hello")
+ .transform(java("'Hello ' + echo()"))
+ .log("You said ${body}");
+
+The `echo()` alias will be replaced with its value resulting in a script
+as:
+
+ .transform(java("'Hello ' + bodyAs(String) + bodyAs(String)"))
+
+You can configure a custom configuration location for the
+`camel-joor.properties` file, or reference a bean in the registry:
+
+    JavaLanguage java = (JavaLanguage) context.resolveLanguage("java");
+    java.setConfigResource("ref:MyJoorConfig");
+
+And then register a bean in the registry with the id `MyJoorConfig` that
+is a String value with the configuration content:
+
+ String config = "....";
+ camelContext.getRegistry().put("MyJoorConfig", config);
+
+# Example
+
+For example, to transform the message to upper case using the jOOR
+language:
+
+ from("seda:orders")
+ .transform().java("message.getBody(String.class).toUpperCase()")
+ .to("seda:upper");
+
+And in XML DSL:
+
+    <route>
+        <from uri="seda:orders"/>
+        <transform>
+            <java>message.getBody(String.class).toUpperCase()</java>
+        </transform>
+        <to uri="seda:upper"/>
+    </route>
+
+
+## Multi statements
+
+It is possible to include multiple statements. The code below shows an
+example where the `user` header is retrieved in a first statement. And
+then, in a second statement we return a value whether the user is `null`
+or not.
+
+ from("seda:orders")
+ .transform().java("var user = message.getHeader(\"user\"); return user != null ? \"User: \" + user : \"No user exists\";")
+ .to("seda:user");
+
+Notice how we have to quote strings in strings, and that is annoying, so
+instead we can use single quotes:
+
+ from("seda:orders")
+ .transform().java("var user = message.getHeader('user'); return user != null ? 'User: ' + user : 'No user exists';")
+ .to("seda:user");
+
+## Hot re-load
+
+You can turn off pre-compilation for the Java language and then Camel
+will recompile the script for each message. You can externalize the code
+into a resource file, which will be reloaded on each message as shown:
+
+ JavaLanguage java = (JavaLanguage) context.resolveLanguage("java");
+ java.setPreCompile(false);
+
+ from("jms:incoming")
+ .transform().java("resource:file:src/main/resources/orders.java")
+ .to("jms:orders");
+
+Here the Java code is externalized into the file
+`src/main/resources/orders.java` which allows you to edit this source
+file while running the Camel application and try the changes with
+hot-reloading.
+
+In XML DSL it’s easier, because you can turn off pre-compilation with
+the `preCompile` attribute on the `<java>` XML element:
+
+    <route>
+        <from uri="jms:incoming"/>
+        <transform>
+            <java preCompile="false">resource:file:src/main/resources/orders.java</java>
+        </transform>
+        <to uri="jms:orders"/>
+    </route>
+
+
+## Lambda-based AggregationStrategy
+
+The Java language has special support for defining an
+`org.apache.camel.AggregationStrategy` as a lambda expression. This is
+useful when using EIP patterns that use aggregation such as the
+Aggregator, Splitter, Recipient List, Enrich, and others.
+
+To use this, then the Java language script must be in the following
+syntax:
+
+ (e1, e2) -> { }
+
+Where `e1` and `e2` are the *old* Exchange and *new* Exchange from the
+`aggregate` method in the `AggregationStrategy`. The returned value is
+used as the aggregated message body, or use `null` to skip this.
+
+The lambda syntax represents a Java
+`BiFunction<Exchange, Exchange, Object>` type.
+
+For example, to aggregate message bodies together, we can do this as
+shown:
+
+ (e1, e2) -> {
+ String b1 = e1.getMessage().getBody(String.class);
+ String b2 = e2.getMessage().getBody(String.class);
+ return b1 + ',' + b2;
+ }
+
+## Limitations
+
+The Java Camel language is only supported as a block of Java code that
+gets compiled into a Java class with a single method. The code that you
+can write is therefore limited to a number of Java statements.
+
+The supported runtime is intended for Java standalone, Spring Boot,
+Camel Quarkus and other microservices runtimes. It is not supported on
+any kind of Java Application Server runtime.
+
+Java does not support runtime compilation with Spring Boot using *fat
+jar* packaging ([https://github.com/jOOQ/jOOR/issues/69](https://github.com/jOOQ/jOOR/issues/69));
+it only works with an exploded classpath.
+
+# Dependencies
+
+To use the Java (jOOR) language in your Camel routes, you need to add a
+dependency on **camel-joor**.
+
+If you use Maven you could add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-joor</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-java-xml-io-dsl.md b/camel-java-xml-io-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..abf5486964c7fcc484763d6e891b922858b325b8
--- /dev/null
+++ b/camel-java-xml-io-dsl.md
@@ -0,0 +1,289 @@
+# Java-xml-io-dsl.md
+
+**Since Camel 3.9**
+
+The `xml-io-dsl` is the Camel-optimized XML DSL with a very fast and
+low-overhead XML parser. The classic XML DSL is loaded via JAXB, which
+is heavy and adds overhead.
+
+The JAXB parser is generic and can be used for parsing any XML. However,
+the `xml-io-dsl` is a source-code-generated parser that is Camel
+specific and can only parse Camel `.xml` route files (not classic Spring
+`<beans>` XML files).
+
+If you are using the Camel XML DSL, then it is recommended to use
+`xml-io-dsl` instead of `xml-jaxb-dsl`. You can use it in all of Camel’s
+runtimes, such as Spring Boot, Quarkus, Camel Main, and Camel K.
+
+# Example
+
+The following `my-route.xml` source file:
+
+**my-route.xml**
+
+    <routes xmlns="http://camel.apache.org/schema/spring">
+        <route>
+            <from uri="timer:tick"/>
+            <setBody>
+                <constant>Hello Camel K!</constant>
+            </setBody>
+            <to uri="log:info"/>
+        </route>
+    </routes>
+
+
+You can omit the `xmlns` namespace. And if there is only a single route,
+you can use `` as the root XML tag.
+
+Can then be loaded and run with Camel CLI or Camel K.
+
+**Running with Camel K**
+
+ kamel run my-route.xml
+
+**Running with Camel CLI**
+
+ camel run my-route.xml
+
+**Since Camel 4.0.0**
+
+It is now possible with `xml-io-dsl` to declare beans to be bound to the
+[Camel Registry](#manual::registry.adoc), in a similar way as with the
+[YAML DSL](#yaml-dsl.adoc). Beans may be declared in XML and have their
+properties (also nested) defined. For example:
+
+    <camel>
+        <bean name="beanFromProps" type="com.acme.MyBean">
+            <properties>
+                <property key="field1" value="f1_p"/>
+                <property key="field2" value="f2_p"/>
+                <property key="nested.field1" value="nf1_p"/>
+                <property key="nested.field2" value="nf2_p"/>
+            </properties>
+        </bean>
+    </camel>
+
+While keeping all the benefits of fast XML parser used by `xml-io-dsl`,
+Camel can also process XML elements declared in other XML namespaces and
+process them separately. With this mechanism it is possible to include
+XML elements using Spring’s
+`http://www.springframework.org/schema/beans` namespace.
+
+This brings the flexibility of Spring Beans into [Camel
+Main](#components:others:main.adoc) without actually running any Spring
+Application Context (or Spring Boot). When elements from Spring
+namespace are found, they are used to populate and configure an instance
+of
+`org.springframework.beans.factory.support.DefaultListableBeanFactory`
+and leverage Spring dependency injection to wire the beans together.
+These beans are then exposed through normal [Camel
+Registry](#manual::registry.adoc) and may be used by Camel routes.
+
+Here’s an example `camel.xml` file, which defines both the routes and
+beans used (referred to) by the route definition:
+
+**camel.xml**
+
+    <camel xmlns:s="http://www.springframework.org/schema/beans">
+
+        <s:beans>
+            <s:bean id="greeter" class="com.acme.GreeterBean">
+                <s:property name="greeting" value="Spring Bean"/>
+            </s:bean>
+        </s:beans>
+
+        <route id="my-route">
+            <from uri="direct:start"/>
+            <bean ref="greeter"/>
+            <to uri="mock:result"/>
+        </route>
+    </camel>
+
+
+The `my-route` route refers to the `greeter` bean, which is defined
+using the Spring `<bean>` element.
+
+More examples can be found in [Camel
+JBang](#manual:ROOT:camel-jbang.adoc#_using_spring_beans_in_camel_xml_dsl)
+page.
+
+## Using bean with constructors
+
+When beans must be created with constructor arguments, this is made
+easier from Camel 4.1 onwards.
+
+For example as shown below:
+
+    <camel>
+        <bean name="beanFromProps" type="com.acme.MyBean">
+            <constructors>
+                <constructor index="0" value="true"/>
+                <constructor index="1" value="Hello World"/>
+            </constructors>
+            <properties>
+                <property key="field1" value="f1_p"/>
+                <property key="field2" value="f2_p"/>
+            </properties>
+        </bean>
+    </camel>
+
+
+If you use Camel 4.0, then constructor arguments must be defined in the
+`type` attribute:
+
+    <camel>
+        <bean name="beanFromProps" type="com.acme.MyBean(true, 'Hello World')">
+            <properties>
+                <property key="field1" value="f1_p"/>
+                <property key="field2" value="f2_p"/>
+            </properties>
+        </bean>
+    </camel>
+
+
+## Creating beans from factory method
+
+A bean can also be created from a factory method (public static) as
+shown below:
+
+    <camel>
+        <bean name="myBean" type="com.acme.MyBean" factoryMethod="createMyBean">
+            <constructors>
+                <constructor index="0" value="true"/>
+                <constructor index="1" value="Hello World"/>
+            </constructors>
+        </bean>
+    </camel>
+
+
+When using `factoryMethod`, the arguments to this method are taken from
+`constructors`. So in the example above, this means that the class
+`com.acme.MyBean` should be as follows:
+
+ public class MyBean {
+
+ public static MyBean createMyBean(boolean important, String message) {
+ MyBean answer = ...
+ // create and configure the bean
+ return answer;
+ }
+ }
+
+The factory method must be `public static` and from the same class as
+the created class itself.
+
+## Creating beans from builder classes
+
+A bean can also be created from another builder class as shown below:
+
+    <camel>
+        <bean name="myBean" type="com.acme.MyBean"
+              builderClass="com.acme.MyBeanBuilder" builderMethod="build">
+            <properties>
+                <property key="field1" value="f1_p"/>
+            </properties>
+        </bean>
+    </camel>
+
+The builder class must be `public` and have a no-arg default
+constructor.
+
+The builder class is then used to create the actual bean by using fluent
+builder style configuration. So the properties will be set on the
+builder class, and the bean is created by invoking the `builderMethod`
+at the end. The invocation of this method is done via Java reflection.
+
+## Creating beans from factory bean
+
+A bean can also be created from a factory bean as shown below:
+
+    <camel>
+        <bean name="myBean" type="com.acme.MyBean"
+              factoryBean="com.acme.MyHelper" factoryMethod="createMyBean">
+            <constructors>
+                <constructor index="0" value="true"/>
+                <constructor index="1" value="Hello World"/>
+            </constructors>
+        </bean>
+    </camel>
+
+
+`factoryBean` can also refer to an existing bean by bean id instead of
+FQN classname.
+
+When using `factoryBean` and `factoryMethod`, the arguments to this
+method are taken from `constructors`. So in the example above, this
+means that the class `com.acme.MyHelper` should be as follows:
+
+ public class MyHelper {
+
+ public static MyBean createMyBean(boolean important, String message) {
+ MyBean answer = ...
+ // create and configure the bean
+ return answer;
+ }
+ }
+
+The factory method must be `public static`.
+
+## Creating beans using script language
+
+For advanced use-cases then Camel allows to inline a script language,
+such as groovy, java, javascript, etc, to create the bean. This gives
+flexibility to use a bit of programming to create and configure the
+bean.
+
+    <camel>
+        <bean name="myBean" type="com.acme.MyBean" scriptLanguage="groovy">
+            <script>
+                // a bit of groovy code to create and configure the bean
+            </script>
+        </bean>
+    </camel>
+
+When using `script`, the constructors and factory bean/method are not in
+use.
+
+## Using init and destroy methods on beans
+
+Sometimes beans need to do some initialization and cleanup work before
+they are ready to be used. For this, you can use `initMethod` and
+`destroyMethod`, which Camel triggers accordingly.
+
+Those methods must be `public void` and take no arguments, as shown
+below:
+
+ public class MyBean {
+
+ public void initMe() {
+ // do init work here
+ }
+
+ public void destroyMe() {
+ // do cleanup work here
+ }
+
+ }
+
+You then have to declare those methods in XML DSL as follows:
+
+
+
+
+
+
+
+
+The init and destroy methods are optional, so a bean does not have to
+have both; for example, you may only have a destroy method.
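For the `MyBean` class above, the XML DSL declaration might look like this (a sketch; the bean id is an assumption):

```xml
<bean name="myBean" type="com.acme.MyBean"
      initMethod="initMe" destroyMethod="destroyMe"/>
```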
+
+# See Also
+
+See [DSL](#manual:ROOT:dsl.adoc)
diff --git a/camel-java-xml-jaxb-dsl.md b/camel-java-xml-jaxb-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c2f8c95845b9135768b15f8828826df2b423c2e
--- /dev/null
+++ b/camel-java-xml-jaxb-dsl.md
@@ -0,0 +1,23 @@
+# Java-xml-jaxb-dsl.md
+
+**Since Camel 3.9**
+
+The `xml-jaxb-dsl` is the original Camel XML DSL, which is loaded via
+JAXB and is therefore heavyweight and carries overhead.
+
+The JAXB parser is generic and can be used for parsing any XML. However,
+the `xml-io-dsl` is a source code generated parser that is Camel
+specific and can only parse Camel `.xml` route files (not classic Spring
+`` XML files).
+
+If you are using the Camel XML DSL, then it is recommended to use
+`xml-io-dsl` instead of `xml-jaxb-dsl`. You can use it in all of Camel’s
+runtimes, such as Spring Boot, Quarkus, Camel Main, and Camel K.
+
+If you use classic Spring `` XML files, then you must use the
+`xml-jaxb-dsl`, which comes out of the box when using
+`camel-spring-xml`.
+
+# See Also
+
+See [DSL](#manual:ROOT:dsl.adoc)
diff --git a/camel-jaxb-dataformat.md b/camel-jaxb-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..761e07eb5daf7fabba7ac0ffe8a8dd031712e1a8
--- /dev/null
+++ b/camel-jaxb-dataformat.md
@@ -0,0 +1,275 @@
+# Jaxb-dataformat.md
+
+**Since Camel 1.0**
+
+JAXB is a Data Format which uses the JAXB XML marshalling standard to
+unmarshal an XML payload into Java objects or to marshal Java objects
+into an XML payload.
+
+# Options
+
+# Usage
+
+## Using the Java DSL
+
+The following example uses a named DataFormat of `jaxb` which is
+configured with a Java package name to initialize the
+[JAXBContext](https://jakarta.ee/specifications/xml-binding/2.3/apidocs/javax/xml/bind/jaxbcontext).
+
+ DataFormat jaxb = new JaxbDataFormat("com.acme.model");
+
+ from("activemq:My.Queue").
+ unmarshal(jaxb).
+ to("mqseries:Another.Queue");
+
+You can, if you prefer, use a named reference to a data format, which
+can then be defined in your Registry, such as via your Spring XML file:
+
+ from("activemq:My.Queue").
+ unmarshal("myJaxbDataType").
+ to("mqseries:Another.Queue");
+
+## Using Spring XML
+
+The following example shows how to configure the `JaxbDataFormat` and
+use it in multiple routes.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Multiple context paths
+
+It is possible to use this data format with more than one context path.
+You can specify multiple context paths using `:` as a separator, for
+example `com.mycompany:com.mycompany2`.
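For example, in the XML DSL (a sketch; the package names are illustrative):

```xml
<unmarshal>
    <jaxb contextPath="com.mycompany:com.mycompany2"/>
</unmarshal>
```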
+
+## Partial marshalling / unmarshalling
+
+JAXB 2 supports marshalling and unmarshalling XML tree fragments. By
+default, JAXB looks for the `@XmlRootElement` annotation on a given
+class to operate on the whole XML tree. This is useful, but not always.
+Sometimes the generated code does not have the `@XmlRootElement`
+annotation, and sometimes you need to unmarshal only part of the tree.
+
+In that case, you can use partial unmarshalling. To enable this
+behavior, you need to set the `partClass` property on the `JaxbDataFormat`.
+Camel will pass this class to the JAXB unmarshaller. If
+`JaxbConstants.JAXB_PART_CLASS` is set as one of the exchange headers,
+its value is used to override the `partClass` property on the
+`JaxbDataFormat`.
+
+For marshalling you have to add the `partNamespace` attribute with the
+`QName` of the destination namespace.
+
+If `JaxbConstants.JAXB_PART_NAMESPACE` is set as one of the exchange
+headers, its value is used to override the `partNamespace` property on
+the `JaxbDataFormat`.
+
+When setting `partNamespace` through
+`JaxbConstants.JAXB_PART_NAMESPACE`, note that you need to
+specify its value in the format `{namespaceUri}localPart`, as per the
+example below.
+
+ .setHeader(JaxbConstants.JAXB_PART_NAMESPACE, constant("{http://www.camel.apache.org/jaxb/example/address/1}address"));
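The `{namespaceUri}localPart` syntax is the standard string form of `javax.xml.namespace.QName`, so the header value above can be parsed as in this sketch (the class name here is hypothetical):

```java
import javax.xml.namespace.QName;

class PartNamespaceFormat {
    // QName.valueOf understands the "{namespaceUri}localPart" form used
    // for the JaxbConstants.JAXB_PART_NAMESPACE header value
    static QName parse(String value) {
        return QName.valueOf(value);
    }
}
```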
+
+## Fragment
+
+`JaxbDataFormat` has a property named `fragment` which can set the
+`Marshaller.JAXB_FRAGMENT` property on the JAXB Marshaller. If you don’t
+want the JAXB Marshaller to generate the XML declaration, you can set
+this option to be `true`. The default value of this property is `false`.
+
+## Ignoring Non-XML Characters
+
+`JaxbDataFormat` supports ignoring [Non-XML
+Characters](https://www.w3.org/TR/xml/#NT-Char); you need to set the
+`filterNonXmlChars` property to `true`. The `JaxbDataFormat` will then
+replace any non-XML character with a space character (`" "`) during
+message marshalling or unmarshalling. You can also set the Exchange
+property `Exchange.FILTER_NON_XML_CHARS`.
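Conceptually, the filtering replaces anything outside the XML 1.0 `Char` production with a space. A self-contained sketch of that behavior (not Camel's actual implementation; the class name is hypothetical):

```java
class NonXmlCharFilter {
    // XML 1.0 Char production: #x9 | #xA | #xD | [#x20-#xD7FF]
    // | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    static boolean isXmlChar(int c) {
        return c == 0x9 || c == 0xA || c == 0xD
                || (c >= 0x20 && c <= 0xD7FF)
                || (c >= 0xE000 && c <= 0xFFFD)
                || (c >= 0x10000 && c <= 0x10FFFF);
    }

    // Replace every non-XML character with a single space, mirroring
    // what filterNonXmlChars=true does to the payload
    static String filter(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        s.codePoints().forEach(cp ->
                sb.appendCodePoint(isXmlChar(cp) ? cp : ' '));
        return sb.toString();
    }
}
```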
+
+
+| | | |
+|---|---|---|
+|Filtering in use|StAX API and implementation|No|
+|Filtering not in use|StAX API only|No|
+
+This feature has been tested with Woodstox 3.2.9 and Sun JDK 1.6 StAX
+implementation.
+
+`JaxbDataFormat` now allows you to customize the `XMLStreamWriter` used
+to marshal the stream to XML. Using this configuration, you can add your
+own stream writer to completely remove, escape, or replace non-XML
+characters.
+
+ JaxbDataFormat customWriterFormat = new JaxbDataFormat("org.apache.camel.foo.bar");
+ customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter());
+
+The following example shows using the Spring DSL and also enabling
+Camel’s non-XML filtering:
+
+
+
+
+## Working with the `ObjectFactory`
+
+If you use XJC to create the Java classes from the schema, you will get
+an `ObjectFactory` for your JAXB context. Since the `ObjectFactory` uses
+[JAXBElement](https://jakarta.ee/specifications/xml-binding/2.3/apidocs/javax/xml/bind/jaxbelement)
+to hold the reference of the schema and element instance value,
+`JaxbDataformat` will ignore the `JAXBElement` by default, and you will
+get the element instance value instead of the `JAXBElement` object from
+the unmarshaled message body.
+
+If you want to get the `JAXBElement` object from the unmarshaled message
+body, you need to set the `JaxbDataFormat` `ignoreJAXBElement` property
+to be `false`.
+
+## Setting the encoding
+
+You can set the `encoding` option on the `JaxbDataFormat` to configure
+the `Marshaller.JAXB_ENCODING` encoding property on the JAXB Marshaller.
+
+You can set up which encoding to use when you declare the
+`JaxbDataFormat`. You can also provide the encoding in the Exchange
+property `Exchange.CHARSET_NAME`. This property will override the
+encoding set on the `JaxbDataFormat`.
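For example, in the XML DSL (a sketch; the context path and charset are illustrative):

```xml
<marshal>
    <jaxb contextPath="com.acme.model" encoding="ISO-8859-1"/>
</marshal>
```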
+
+## Controlling namespace prefix mapping
+
+When marshalling using [JAXB](#jaxb-dataformat.adoc) or
+[SOAP](#soap-dataformat.adoc), the JAXB implementation will
+automatically assign namespace prefixes, such as `ns2`, `ns3`, `ns4`
+etc. To control this mapping, Camel allows you to refer to a map which
+contains the desired mapping.
+
+For example, in Spring XML we can define a `Map` with the mapping. In
+the mapping file below, we map the SOAP namespace to use `soap` as its
+prefix, while our custom namespace `http://www.mycompany.com/foo/2` does
+not use any prefix.
+
+
+
+
+
+
+
+To use this in JAXB or SOAP data formats, you refer to this map, using
+the `namespacePrefixRef` attribute as shown below. Then Camel will look
+up in the Registry a `java.util.Map` with the id `myMap`, which was what
+we defined above.
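A hedged sketch of such a Spring XML configuration (the map definition uses Spring's `util` namespace; the entries and context path are illustrative assumptions):

```xml
<!-- mapping: the SOAP namespace uses the "soap" prefix; an empty value
     means the namespace becomes the default namespace (no prefix) -->
<util:map id="myMap">
    <entry key="http://schemas.xmlsoap.org/soap/envelope/" value="soap"/>
    <entry key="http://www.mycompany.com/foo/2" value=""/>
</util:map>

<!-- refer to the map from the data format -->
<marshal>
    <jaxb contextPath="com.acme.model" namespacePrefixRef="myMap"/>
</marshal>
```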
+
+
+
+
+
+## Schema validation
+
+The `JaxbDataFormat` supports validation by marshalling and
+unmarshalling from / to XML. You can use the prefix `classpath:`,
+`file:` or `http:` to specify how the resource should be resolved. You
+can separate multiple schema files by using the `,` character.
+
+If the XSD schema files import or access other files, then you need to
+enable the file protocol (or other protocols, to allow access).
+
+Using the Java DSL, you can configure it in the following way:
+
+ JaxbDataFormat jaxbDataFormat = new JaxbDataFormat();
+ jaxbDataFormat.setContextPath(Person.class.getPackage().getName());
+ jaxbDataFormat.setSchema("classpath:person.xsd,classpath:address.xsd");
+ jaxbDataFormat.setAccessExternalSchemaProtocols("file");
+
+You can do the same using the XML DSL:
+
+
+
+
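A hedged XML DSL sketch equivalent to the Java configuration above (the context path is illustrative):

```xml
<marshal>
    <jaxb contextPath="com.acme.model"
          schema="classpath:person.xsd,classpath:address.xsd"
          accessExternalSchemaProtocols="file"/>
</marshal>
```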
+
+## Schema Location
+
+The `JaxbDataFormat` supports specifying the `SchemaLocation` to use
+when marshalling the XML.
+
+Using the Java DSL, you can configure it in the following way:
+
+ JaxbDataFormat jaxbDataFormat = new JaxbDataFormat();
+ jaxbDataFormat.setContextPath(Person.class.getPackage().getName());
+ jaxbDataFormat.setSchemaLocation("schema/person.xsd");
+
+You can do the same using the XML DSL:
+
+
+
+
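Likewise in the XML DSL (a sketch; the context path is an assumption):

```xml
<marshal>
    <jaxb contextPath="com.acme.model" schemaLocation="schema/person.xsd"/>
</marshal>
```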
+
+## Marshal data that is already XML
+
+The JAXB marshaller requires that the message body is JAXB compatible,
+e.g., it is a `JAXBElement`, a Java instance that has JAXB annotations,
+or extends `JAXBElement`. There can be situations where the message body
+is already in XML, e.g., from a `String` type.
+
+`JaxbDataFormat` has an option named `mustBeJAXBElement`, which you can
+set to `false` to relax this check, so the JAXB marshaller only
+attempts marshalling when the body is a JAXB element
+(`javax.xml.bind.JAXBIntrospector#isElement` returns `true`). Otherwise,
+the marshaller will fall back to marshalling the message body
+as-is.
+
+# Dependencies
+
+To use JAXB in your Camel routes, you need to add a dependency on
+**camel-jaxb**, which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jaxb</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-jcache.md b/camel-jcache.md
index ec0ff4b20f85cbcc831ab95c83859bc81f418ff4..a3085616944415ab03af52fe36b2daeaad46f53e 100644
--- a/camel-jcache.md
+++ b/camel-jcache.md
@@ -11,7 +11,9 @@ JSR107/JCache as cache implementation.
jcache:cacheName[?options]
-# JCache Policy
+# Usage
+
+## JCache Policy
The JCachePolicy is an interceptor around a route that caches the
"result of the route" (the message body) after the route is completed.
@@ -41,7 +43,7 @@ parameters set.
Similar caching solution is available, for example, in Spring using the
@Cacheable annotation.
-# JCachePolicy Fields
+## JCachePolicy Fields
@@ -51,7 +53,7 @@ Similar caching solution is available, for example, in Spring using the
-
+
-
+
cache
The Cache to use to store the cached
values. If this value is set, cacheManager,
@@ -68,7 +70,7 @@ ignored.
Cache
-
+
cacheManager
The CacheManager to use to look up or
create the Cache. Used only if cache is not set.
@@ -77,7 +79,7 @@ in the CamelContext registry or calls the standard JCache
Caching.getCachingProvider().getCacheManager().
CacheManager
-
+
cacheName
Name of the cache. Get the Cache from
cacheManager or create a new one if it doesn’t
@@ -85,7 +87,7 @@ exist.
RouteId of the route.
String
-
+
cacheConfiguration
JCache cache configuration to use if a
@@ -94,14 +96,14 @@ new Cache is created
MutableConfiguration object.
CacheConfiguration
-
+
keyExpression
An Expression to evaluate to determine
the cache key.
Exchange body
Expression
-
+
enabled
If the policy is not enabled, no
wrapper processor is added to the route. It has impact only during
@@ -113,9 +115,9 @@ caching from properties.
-# How to determine cache to use?
+## How to determine cache to use?
-# Set cache
+## Set cache
The cache used by the policy can be set directly. This means you have to
configure the cache yourself and get a JCache Cache object, but this
@@ -137,7 +139,7 @@ possible to use the standard Caching API as below:
.log("Getting order with id: ${body}")
.bean(OrderService.class,"findOrderById(${body})");
-# Set cacheManager
+## Set cacheManager
If the `cache` is not set, the policy will try to look up or create the
cache automatically. If the `cacheManager` is set on the policy, it will
@@ -155,7 +157,7 @@ using the `cacheConfiguration` (new MutableConfiguration by default).
jcachePolicy.setCacheManager(cacheManager);
jcachePolicy.setCacheName("items")
-# Find cacheManager
+## Find cacheManager
If `cacheManager` (and the `cache`) is not set, the policy will try to
find a JCache CacheManager object:
@@ -174,7 +176,7 @@ find a JCache CacheManager object:
.log("Getting order with id: ${body}")
.bean(OrderService.class,"findOrderById(${body})");
-# Partially wrapped route
+## Partially wrapped route
In the examples above, the whole route was executed or skipped. A policy
can be used to wrap only a segment of the route instead of all
@@ -192,7 +194,7 @@ The `.log()` at the beginning and at the end of the route is always
called, but the section inside `.policy()` and `.end()` is executed
based on the cache.
-# KeyExpression
+## KeyExpression
By default, the policy uses the received Exchange body as the *key*, so
the default expression is like `simple("${body\}")`. We can set a
@@ -220,7 +222,7 @@ to store the *value* in cache at the end of the route.
.log("Getting order with id: ${header.orderId}")
.bean(OrderService.class,"findOrderById(${header.orderId})");
-# BypassExpression
+## BypassExpression
The `JCachePolicy` can be configured with an `Expression` that can per
`Exchange` determine whether to look up the value from the cache or
@@ -228,9 +230,9 @@ bypass. If the expression is evaluated to `false` then the route is
executed as normal, and the returned value is inserted into the cache
for future lookup.
-# Camel XML DSL examples
+## Camel XML DSL examples
-# Use JCachePolicy in an XML route
+## Use JCachePolicy in an XML route
In Camel XML DSL, we need a named reference to the JCachePolicy instance
(registered in CamelContext or simply in Spring). We have to wrap the
@@ -263,7 +265,7 @@ See this example when only a part of the route is wrapped:
-# Define CachePolicy in Spring
+## Define CachePolicy in Spring
It’s more convenient to create a JCachePolicy in Java, especially within
a RouteBuilder using the Camel DSL expressions, but see this example to
@@ -278,7 +280,7 @@ define it in a Spring XML:
-# Create Cache from XML
+## Create Cache from XML
It’s not strictly speaking related to Camel XML DSL, but JCache
providers usually have a way to configure the cache in an XML file. For
@@ -299,7 +301,7 @@ configure the cache "spring" used in the example above.
-# Special scenarios and error handling
+## Special scenarios and error handling
If the Cache used by the policy is closed (can be done dynamically), the
whole caching functionality is skipped, the route will be executed every
diff --git a/camel-jdbc.md b/camel-jdbc.md
index f1a908766019de558ba4cd38c1b76aee07b7b4cc..466cdba32ddc0d6deed890a8c220003a0122855a 100644
--- a/camel-jdbc.md
+++ b/camel-jdbc.md
@@ -30,7 +30,9 @@ means that you cannot use the JDBC component in a `from()` statement.
jdbc:dataSourceName[?options]
-# Result
+# Usage
+
+## Result
By default, the result is returned in the OUT body as an
`ArrayList>`. The `List` object contains the
@@ -41,7 +43,7 @@ the result.
**Note:** This component fetches `ResultSetMetaData` to be able to
return the column name as the key in the `Map`.
-# Generated keys
+## Generated keys
If you insert data using SQL INSERT, then the RDBMS may support auto
generated keys. You can instruct the [JDBC](#jdbc-component.adoc)
@@ -52,7 +54,7 @@ table above.
Using generated keys does not work together with named parameters.
-# Using named parameters
+## Using named parameters
In the given route below, we want to get all the projects from the
`projects` table. Notice the SQL query has two named parameters, `:?lic`
@@ -69,7 +71,7 @@ value for the named parameters:
You can also store the header values in a `java.util.Map` and store the
map on the headers with the key `CamelJdbcParameters`.
-# Samples
+# Examples
In the following example, we set up the DataSource that camel-jdbc
requires. First we register our datasource in the Camel registry as
diff --git a/camel-jetty.md b/camel-jetty.md
index bd67f848c513ac17e20a95c1ac88f193d4e86d1c..af97135795a183bb6fbaabca4a3b709be00819e5 100644
--- a/camel-jetty.md
+++ b/camel-jetty.md
@@ -52,7 +52,7 @@ from Get Method, but also other HTTP methods.
The Jetty component supports consumer endpoints.
-# Consumer Example
+## Consumer
In this sample we define a route that exposes an HTTP service at
`\http://localhost:8080/myapp/myservice`:
@@ -71,13 +71,13 @@ If you need to expose a Jetty endpoint on all network interfaces, the
To listen across an entire URI prefix, see [How do I let Jetty match
wildcards](#manual:faq:how-do-i-let-jetty-match-wildcards.adoc).
-# Servlets
+## Servlets
If you actually want to expose routes by HTTP and already have a
Servlet, you should instead refer to the [Servlet
Transport](#servlet-component.adoc).
-# HTTP Request Parameters
+## HTTP Request Parameters
So if a client sends the HTTP request, `\http://serverUri?one=hello`,
the Jetty component will copy the HTTP request parameter, `one` to the
@@ -88,7 +88,7 @@ to another. If we used a language more powerful than
[OGNL](#languages:ognl-language.adoc)), we could also test for the
parameter value and do routing based on the header value as well.
-# Session Support
+## Session Support
The session support option, `sessionSupport`, can be used to enable a
`HttpSession` object and access the session object while processing the
@@ -111,7 +111,7 @@ follows:
...
}
-# SSL Support (HTTPS)
+## SSL Support (HTTPS)
Using the JSSE Configuration Utility
@@ -153,35 +153,6 @@ Spring DSL based configuration of endpoint
-Blueprint based configuration of endpoint
-
-Global configuration of sslContextParameters in a dedicated Blueprint
-XML file
-
-
-
-
-
-
-
-
-
-
-
-
-Use of the global configuration in other Blueprint XML files with route
-definitions
-
- ...
-
-
-
-
-
- ...
-
Configuring Jetty Directly
Jetty provides SSL support out of the box. To enable Jetty to run in SSL
@@ -237,7 +208,7 @@ client doesn’t need a certificate but can have one.
The value you use as keys in the above map is the port you configure
Jetty to listen to.
-## Configuring general SSL properties
+### Configuring general SSL properties
Instead of a per-port number specific SSL socket connector (as shown
above), you can now configure general properties that apply for all SSL
@@ -256,7 +227,7 @@ port number as entry).
-## How to obtain reference to the X509Certificate
+### How to obtain reference to the X509Certificate
Jetty stores a reference to the certificate in the HttpServletRequest
which you can access from code as follows:
@@ -264,7 +235,7 @@ which you can access from code as follows:
HttpServletRequest req = exchange.getIn().getBody(HttpServletRequest.class);
X509Certificate cert = (X509Certificate) req.getAttribute("javax.servlet.request.X509Certificate")
-## Configuring general HTTP properties
+### Configuring general HTTP properties
Instead of a per-port number specific HTTP socket connector (as shown
above), you can now configure general properties that apply for all HTTP
@@ -280,7 +251,7 @@ port number as entry).
-## Obtaining X-Forwarded-For header with HttpServletRequest.getRemoteAddr()
+### Obtaining X-Forwarded-For header with HttpServletRequest.getRemoteAddr()
If the HTTP requests are handled by an Apache server and forwarded to
jetty with mod\_proxy, the original client IP address is in the
@@ -309,7 +280,7 @@ This is particularly useful when an existing Apache server handles TLS
connections for a domain and proxies them to application servers
internally.
-# Default behavior for returning HTTP status codes
+## Default behavior for returning HTTP status codes
The default behavior of HTTP status codes is defined by the
`org.apache.camel.component.http.DefaultHttpBinding` class, which
@@ -322,7 +293,7 @@ returned, and the stacktrace is returned in the body. If you want to
specify which HTTP status code to return, set the code in the
`Exchange.HTTP_RESPONSE_CODE` header of the OUT message.
-# Customizing HttpBinding
+## Customizing HttpBinding
By default, Camel uses the
`org.apache.camel.component.http.DefaultHttpBinding` to handle how a
@@ -345,7 +316,7 @@ And then we can reference this binding when we define the route:
-# Jetty handlers and security configuration
+## Jetty handlers and security configuration
You can configure a list of Jetty handlers on the endpoint, which can be
useful for enabling advanced Jetty security features. These handlers are
@@ -398,71 +369,12 @@ You can configure a list of Jetty handlers as follows:
You can then define the endpoint as:
- from("jetty:http://0.0.0.0:9080/myservice?handlers=securityHandler")
+ from("jetty:http://0.0.0.0:9080/myservice?handlers=securityHandler");
If you need more handlers, set the `handlers` option equal to a
comma-separated list of bean IDs.
-Blueprint-based definition of basic authentication (based on Jetty 12):
-
-
-
-
-
-
-
-
- rolename1
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- ...
-
-The `roles.properties` files contain
-
- username1=password1,rolename1
- username2=password2,rolename1
-
-This file is located in the `etc` folder and will be reloaded when
-changed. The endpoint:
-
- http://0.0.0.0/path
-
-It is now secured with basic authentication. Only `username1` with
-`password1` and `username2` with `password2` are able to access the
-endpoint.
-
-# How to return a custom HTTP 500 reply message
+## How to return a custom HTTP 500 reply message
You may want to return a custom reply message when something goes wrong,
instead of the default reply message Camel
@@ -472,7 +384,7 @@ be easier to use Camel’s Exception Clause to construct the custom reply
message. For example, as show here, where we return
`Dude something went wrong` with HTTP error code 500:
-# Multipart Form support
+## Multipart Form support
The camel-jetty component supports multipart form post out of the box.
The submitted form-data are mapped into the message header. Camel Jetty
@@ -480,7 +392,7 @@ creates an attachment for each uploaded file. The file name is mapped to
the name of the attachment. The content type is set as the content type
of the attachment file name. You can find the example here.
-# Jetty JMX support
+## Jetty JMX support
The camel-jetty component supports the enabling of Jetty’s JMX
capabilities at the component and endpoint level with the endpoint
@@ -556,7 +468,7 @@ collisions when registering Jetty MBeans.
|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object|
|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object|
|chunked|If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response|true|boolean|
-|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean|
+|disableStreamCache|Determines whether the raw input stream is cached. The Camel consumer (camel-servlet, camel-jetty etc.) will by default cache the input stream to support reading it multiple times, to ensure that Camel can retrieve all data from the stream. However, you can set this option to true when you, for example, need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The producer (camel-http) will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is (the stream can only be read once) as the message body.|false|boolean|
|transferException|If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean|
|async|Configure the consumer to work in async mode|false|boolean|
|continuationTimeout|Allows to set a timeout in millis when using Jetty as consumer (server). By default Jetty uses 30000. You can use a value of = 0 to never expire. If a timeout occurs then the request will be expired and Jetty will return back a http error 503 to the client. This option is only in use when using Jetty with the Asynchronous Routing Engine.|30000|integer|
diff --git a/camel-jfr.md b/camel-jfr.md
new file mode 100644
index 0000000000000000000000000000000000000000..a6773193cb4cee79ca52da0e1c3fd3b78f4a6b7b
--- /dev/null
+++ b/camel-jfr.md
@@ -0,0 +1,30 @@
+# Jfr.md
+
+**Since Camel 3.8**
+
+The Camel Java Flight Recorder (JFR) component is used for integrating
+Camel with Java Flight Recorder (JFR).
+
+This allows you to monitor and troubleshoot your Camel applications with
+JFR.
+
+The camel-jfr component emits lifecycle events for startup to JFR. This
+can, for example, be used to pinpoint which Camel routes may be slow to
+start up.
+
+See the *startupRecorder* options from [Camel
+Main](#components:others:main.adoc).
+
+# Example
+
+To enable it, you just need to add `camel-jfr` to the classpath and
+enable JFR recording.
+
+JFR recordings can be started in two ways:
+
+- When running the JVM, using JVM arguments.
+
+- When starting Camel, by setting
+  `camel.main.startup-recorder-recording=true`.
+
+See the `flight-recorder` example in the Camel Examples.
diff --git a/camel-jgroups.md b/camel-jgroups.md
index 1649f59604df63122c197439626d7953dcd6cb79..0145148efcf56ce52d588226f5898824452905cd 100644
--- a/camel-jgroups.md
+++ b/camel-jgroups.md
@@ -44,7 +44,7 @@ endpoint.
// Send a message to the cluster named 'clusterName'
from("direct:start").to("jgroups:clusterName");
-# Predefined filters
+## Predefined filters
JGroups component comes with predefined filters factory class named
`JGroupsFilters.`
@@ -64,7 +64,7 @@ node.
filter(dropNonCoordinatorViews()).
to("seda:masterNodeEventsQueue");
-# Predefined expressions
+## Predefined expressions
JGroups component comes with predefined expressions factory class named
`JGroupsExpressions.`
diff --git a/camel-jira.md b/camel-jira.md
index 87078a5510c9bd596c724c6e98af3d8d1b8396b0..07405b0ca5c85bf0a8666639f4d20ef413109100 100644
--- a/camel-jira.md
+++ b/camel-jira.md
@@ -78,13 +78,15 @@ As Jira is fully customizable, you must ensure the field IDs exist for
the project and workflow, as they can change between different Jira
servers.
-# Client Factory
+# Usage
+
+## Client Factory
You can bind the `JiraRestClientFactory` with name
**JiraRestClientFactory** in the registry to have it automatically set
in the Jira endpoint.
-# Authentication
+## Authentication
Camel-jira supports the following forms of authentication:
@@ -100,11 +102,11 @@ Camel-jira supports the following forms of authentication:
We recommend using OAuth or Personal token whenever possible, as it
provides the best security for your users and system.
-## Basic authentication requirements:
+### Basic authentication requirements:
- A username and a password.
-## OAuth authentication requirements:
+### OAuth authentication requirements:
Follow the tutorial in [Jira OAuth
documentation](https://developer.atlassian.com/cloud/jira/platform/jira-rest-api-oauth-authentication/)
@@ -119,7 +121,7 @@ access token.
- An access token, generated by Jira server.
-## Personal access token authentication requirements:
+### Personal access token authentication requirements:
Follow the tutorial to generate the [Personal
Token](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html).
@@ -127,7 +129,7 @@ Token](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-
- You have to set only the personal token in the `access-token`
parameter.
-# JQL:
+## JQL
The JQL URI option is used by both consumer endpoints. Theoretically,
items like the "project key", etc. could be URI options themselves.
@@ -152,14 +154,14 @@ index every single issue in the project.
Another note is that, similarly, the newComments consumer will have to
index every single issue **and** comment on the project. Therefore, for
large projects, it’s **vital** to optimize the JQL expression as much as
-possible. For example, the JIRA Toolkit Plugin includes a "Number of
-comments" custom field — use *"Number of comments" \> 0* in your
-query. Also try to minimize based on state (status=Open), increase the
-polling delay, etc. Example:
+possible. For example, the JIRA Toolkit Plugin includes a
+`"Number of comments"` custom field; use `'"Number of comments" > 0'` in
+your query. Also try to minimize based on state (`status=Open`),
+increase the polling delay, etc. Example:
jira://[type]?[required options]&jql=RAW(project=[project key] AND status in (Open, \"Coding In Progress\") AND \"Number of comments\">0)"
-# Operations
+## Operations
See a list of required headers to set when using the Jira operations.
The author field for the producers is automatically set to the
@@ -172,7 +174,7 @@ There are operations that requires `id` for fields such as the issue
type, priority, transition. Check the valid `id` on your jira project as
they may differ on a jira installation and project workflow.
-# AddIssue
+## AddIssue
Required:
@@ -201,7 +203,7 @@ Optional:
- `IssueDescription`: The description of the issue.
-# AddComment
+## AddComment
Required:
@@ -209,7 +211,7 @@ Required:
- the body of the exchange is the description.
-# Attach
+## Attach
Only one file should attach per invocation.
@@ -219,13 +221,13 @@ Required:
- body of the exchange should be of type `File`
-# DeleteIssue
+## DeleteIssue
Required:
- `IssueKey`: The issue key identifier.
-# TransitionIssue
+## TransitionIssue
Required:
@@ -235,7 +237,7 @@ Required:
- the body of the exchange is the description.
-# UpdateIssue
+## UpdateIssue
- `IssueKey`: The issue key identifier.
@@ -257,7 +259,7 @@ Required:
- `IssueDescription`: The description of the issue.
-# Watcher
+## Watcher
- `IssueKey`: The issue key identifier.
@@ -267,7 +269,7 @@ Required:
- `IssueWatchersRemove`: A list of strings with the usernames to
remove from the watcher list.
-# WatchUpdates (consumer)
+## WatchUpdates (consumer)
- `watchedFields` Comma separated list of fields to watch for changes
i.e. `Status,Priority,Assignee,Components` etc.
@@ -291,7 +293,7 @@ about the change:
|Name|Description|Default|Type|
|---|---|---|---|
|delay|Time in milliseconds to elapse for the next poll.|6000|integer|
-|jiraUrl|The Jira server url, example: http://my\_jira.com:8081||string|
+|jiraUrl|The Jira server url, for example http://my_jira.com:8081.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
@@ -312,15 +314,30 @@ about the change:
|---|---|---|---|
|type|Operation to perform. Consumers: NewIssues, NewComments. Producers: AddIssue, AttachFile, DeleteIssue, TransitionIssue, UpdateIssue, Watchers. See this class javadoc description for more information.||object|
|delay|Time in milliseconds to elapse for the next poll.|6000|integer|
-|jiraUrl|The Jira server url, example: http://my\_jira.com:8081||string|
+|jiraUrl|The Jira server url, for example http://my_jira.com:8081.||string|
|jql|JQL is the query language from JIRA which allows you to retrieve the data you want. For example jql=project=MyProject Where MyProject is the product key in Jira. It is important to use the RAW() and set the JQL inside it to prevent camel parsing it, example: RAW(project in (MYP, COM) AND resolution = Unresolved)||string|
|maxResults|Max number of issues to search for|50|integer|
+|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|sendOnlyUpdatedField|Indicator for sending only changed fields in exchange body or issue object. By default consumer sends only changed fields.|true|boolean|
|watchedFields|Comma separated list of fields to watch for changes. Status,Priority are the defaults.|Status,Priority|string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer|
+|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer|
+|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
+|greedy|If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages.|false|boolean|
+|initialDelay|Milliseconds before the first poll starts.|1000|integer|
+|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.|0|integer|
+|runLoggingLevel|The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.|TRACE|object|
+|scheduledExecutorService|Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.||object|
+|scheduler|To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler|none|object|
+|schedulerProperties|To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler.||object|
+|startScheduler|Whether the scheduler should be auto started.|true|boolean|
+|timeUnit|Time unit for initialDelay and delay options.|MILLISECONDS|object|
+|useFixedDelay|Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.|true|boolean|
|accessToken|(OAuth or Personal Access Token authentication) The access token generated by the Jira server.||string|
|consumerKey|(OAuth only) The consumer key from Jira settings.||string|
|password|(Basic authentication only) The password or the API Token to authenticate to the Jira server. Use only if username basic authentication is used.||string|
diff --git a/camel-jms.md b/camel-jms.md
index 3110ac6ad1eab5e5b28720525350de98a2d5980c..ae9c7ecf91accfbbc015bb2414f133e1ea3e8521 100644
--- a/camel-jms.md
+++ b/camel-jms.md
@@ -139,13 +139,6 @@ valid Java identifiers. One benefit of doing this is that you can then
use your headers inside a JMS Selector (whose SQL92 syntax mandates Java
identifier syntax for headers).
-A simple strategy for mapping header names is used by default. The
-strategy is to replace any dots and hyphens in the header name as shown
-below and to reverse the replacement when the header name is restored
-from a JMS message sent over the wire. What does this mean? No more
-losing method names to invoke on a bean component, no more losing the
-filename header for the File Component, and so on.
-
The current header name strategy for accepting header names in Camel is
as follows:
@@ -155,8 +148,34 @@ as follows:
- Hyphen is replaced by `_HYPHEN_` and the replacement is reversed
when Camel consumes the message
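The replacement scheme can be illustrated with a small self-contained sketch (the class and method names are illustrative, not the actual Camel implementation):

```java
// Sketch of the dot/hyphen replacement applied to JMS header names.
// Illustrative names; not the actual Camel classes.
public class JmsHeaderNameCodec {

    // Dots and hyphens are not valid in Java identifiers, so encode them.
    static String encode(String name) {
        return name.replace(".", "_DOT_").replace("-", "_HYPHEN_");
    }

    // Reverse the replacement when the header name is restored.
    static String decode(String name) {
        return name.replace("_DOT_", ".").replace("_HYPHEN_", "-");
    }

    public static void main(String[] args) {
        String encoded = encode("org.apache.camel.FileName");
        System.out.println(encoded);          // org_DOT_apache_DOT_camel_DOT_FileName
        System.out.println(decode(encoded));  // org.apache.camel.FileName
    }
}
```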
-You can configure many different properties on the JMS endpoint, which
-map to properties on the `JMSConfiguration` object.
+Camel comes with two implementations of `HeaderFilterStrategy`:
+
+- `org.apache.camel.component.jms.ClassicJmsHeaderFilterStrategy` -
+ classic strategy used until Camel 4.8.
+
+- `org.apache.camel.component.jms.JmsHeaderFilterStrategy` - newer
+ default strategy from Camel 4.9 onwards.
+
+### ClassicJmsHeaderFilterStrategy
+
+A classic strategy for mapping header names is used in Camel 4.8 or
+older.
+
+This strategy also includes Camel internal headers such as
+`CamelFileName` and `CamelBeanMethodName`, which means that you can send
+Camel messages over JMS to another Camel instance and preserve this
+information. However, this also means that JMS messages contain
+properties with `Camel...` keys. This is not always desirable, and
+therefore the default was changed from Camel 4.9 onwards.
+
+You can always configure a custom `HeaderFilterStrategy` to remove all
+`Camel...` headers in Camel 4.8 or older.
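A sketch of referencing a custom strategy from the registry via the `headerFilterStrategy` endpoint option (the bean id `myStrategy` is an assumption for illustration):

```java
// Sketch: bind a custom HeaderFilterStrategy and reference it on the endpoint.
camelContext.getRegistry().bind("myStrategy", new ClassicJmsHeaderFilterStrategy());

from("direct:start")
    .to("jms:queue:foo?headerFilterStrategy=#myStrategy");
```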
+
+### JmsHeaderFilterStrategy
+
+The new default strategy from Camel 4.9 onwards behaves similarly to
+other components: `Camel...` headers are removed, and only explicit
+end-user headers are allowed.
**Mapping to Spring JMS**
@@ -164,7 +183,7 @@ Many of these properties map to properties on Spring JMS, which Camel
uses for sending and receiving messages. So you can get more information
about these properties by consulting the relevant Spring documentation.
-# Samples
+# Examples
JMS is used in many examples for other components as well. But we
provide a few samples below to get started.
@@ -200,7 +219,7 @@ Camel also has annotations, so you can use [POJO
Consuming](#manual::pojo-consuming.adoc) and [POJO
Producing](#manual::pojo-producing.adoc).
-## Spring DSL sample
+## Spring DSL Example
The preceding examples use the Java DSL. Camel also supports Spring XML
DSL. Here is the big spender sample using Spring DSL:
@@ -213,7 +232,7 @@ DSL. Here is the big spender sample using Spring DSL:
-## Other samples
+## Other Examples
JMS appears in many of the examples for other components and EIP
patterns, as well in this Camel documentation. So feel free to browse
@@ -264,7 +283,9 @@ Here we only store the original cause error message in the transform.
You can, however, use any Expression to send whatever you like. For
example, you can invoke a method on a Bean or use a custom processor.
-# Message Mapping between JMS and Camel
+# Usage
+
+## Message Mapping between JMS and Camel
Camel automatically maps messages between `javax.jms.Message` and
`org.apache.camel.Message`.
@@ -279,65 +300,65 @@ following JMS message types:
|Body Type|JMS Message|Comment|
|---|---|---|
|String|javax.jms.TextMessage||
|org.w3c.dom.Node|javax.jms.TextMessage|The DOM will be converted to String.|
|Map|javax.jms.MapMessage||
|java.io.Serializable|javax.jms.ObjectMessage||
|byte[]|javax.jms.BytesMessage||
|java.io.File|javax.jms.BytesMessage||
|java.io.Reader|javax.jms.BytesMessage||
|java.io.InputStream|javax.jms.BytesMessage||
|java.nio.ByteBuffer|javax.jms.BytesMessage||

In the other direction, a received JMS message is mapped to a Camel body as follows:

|JMS Message|Body Type|
|---|---|
|javax.jms.TextMessage|String|
|javax.jms.BytesMessage|byte[]|
|javax.jms.MapMessage|Map<String, Object>|
|javax.jms.ObjectMessage|Object|
@@ -428,7 +449,7 @@ the header with the key `CamelJmsMessageType`. For example:
The possible values are defined in the `enum` class,
`org.apache.camel.jms.JmsMessageType`.
-# Message format when sending
+## Message format when sending
The exchange sent over the JMS wire must conform to the [JMS Message
spec](http://java.sun.com/j2ee/1.4/docs/api/javax/jms/Message.html).
@@ -466,7 +487,7 @@ at **DEBUG** level if it drops a given header value. For example:
2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding
- Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}
-# Message format when receiving
+## Message format when receiving
Camel adds the following properties to the `Exchange` when it receives a
message:
@@ -478,14 +499,14 @@ message:
|Property / Header|Type|Description|
|---|---|---|
|org.apache.camel.jms.replyDestination|||
|JMSCorrelationID|String|The JMS correlation ID.|
|JMSDeliveryMode|int|The JMS delivery mode.|
|JMSDestination|javax.jms.Destination|The JMS destination.|
|JMSExpiration|long|The JMS expiration.|
|JMSMessageID|String|The JMS unique message ID.|
|JMSPriority|int|The JMS priority (with 0 as the lowest priority and 9 as the highest).|
|JMSRedelivered|boolean|Whether the JMS message is redelivered.|
|JMSReplyTo|javax.jms.Destination|The JMS reply-to destination.|
|JMSTimestamp|long|The JMS timestamp.|
|JMSType|String|The JMS type.|
|JMSXGroupID|String|The JMS group ID.|
@@ -578,7 +599,7 @@ As all the above information is standard JMS, you can check the [JMS
documentation](http://java.sun.com/javaee/5/docs/api/javax/jms/Message.html)
for further details.
-# About using Camel to send and receive messages and JMSReplyTo
+## About using Camel to send and receive messages and JMSReplyTo
The JMS component is complex, and you have to pay close attention to how
it works in some cases. So this is a short summary of some
@@ -599,7 +620,7 @@ following conditions:
All this can be a tad complex to understand and configure to support
your use case.
-## JmsProducer
+### JmsProducer
The `JmsProducer` behaves as follows, depending on configuration:
@@ -610,14 +631,14 @@ The `JmsProducer` behaves as follows, depending on configuration:
|Exchange Pattern|JMSReplyTo header|Behavior|
|---|---|---|
|InOut|-|Camel will expect a reply, set a temporary JMSReplyTo, and after sending the message, it will start to listen for the reply message on the temporary queue.|
|InOut|JMSReplyTo is set|Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified JMSReplyTo queue.|
|InOnly|-|Camel will send the message and not expect a reply.|
|InOnly|JMSReplyTo is set|By default, Camel discards the
@@ -655,7 +676,7 @@ thus continue after sending the message.
-## JmsConsumer
+### JmsConsumer
The `JmsConsumer` behaves as follows, depending on configuration:
@@ -666,26 +687,26 @@ The `JmsConsumer` behaves as follows, depending on configuration:
|Exchange Pattern|Other options|Behavior|
|---|---|---|
|InOut|-|Camel will send the reply back to the JMSReplyTo queue.|
|InOnly|-|Camel will not send a reply back, as the pattern is InOnly.|
|-|disableReplyTo=true|
.to(ExchangePattern.InOnly, "activemq:topic:order")
.to("bean:handleOrder");
-# Reuse endpoint and send to different destinations computed at runtime
+## Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different JMS destinations, it
makes sense to reuse a JMS endpoint and specify the real destination in
@@ -723,21 +744,21 @@ You can specify the destination in the following headers:
|Header|Type|Description|
|---|---|---|
|CamelJmsDestination|javax.jms.Destination|A destination object.|
|CamelJmsDestinationName|String|
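A sketch of computing the destination at runtime via a header (the endpoint's own destination `dummy` is only a placeholder that the header overrides; the `type` header is illustrative):

```java
// Sketch: the CamelJmsDestinationName header overrides the endpoint destination.
from("direct:start")
    .setHeader("CamelJmsDestinationName", simple("order-${header.type}"))
    .to("jms:queue:dummy");
```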
@@ -777,7 +798,7 @@ them to the created JMS message to avoid the accidental loops in the
routes (in scenarios when the message will be forwarded to another JMS
endpoint).
-# Configuring different JMS providers
+## Configuring different JMS providers
You can configure your JMS provider in Spring XML as follows:
@@ -796,7 +817,7 @@ This works by the SpringCamelContext lazily fetching components from the
spring context for the scheme name you use for Endpoint URIs and having
the Component resolve the endpoint URIs.
-## Using JNDI to find the ConnectionFactory
+### Using JNDI to find the ConnectionFactory
If you are using a J2EE container, you might need to look up JNDI to
find the JMS `ConnectionFactory` rather than use the usual ``
@@ -814,7 +835,7 @@ schema](http://static.springsource.org/spring/docs/3.0.x/spring-framework-refere
in the Spring reference documentation for more details about JNDI
lookup.
-# Concurrent Consuming
+## Concurrent Consuming
A common requirement with JMS is to consume messages concurrently in
multiple threads to make an application more responsive. You can set the
@@ -833,7 +854,7 @@ You can configure this option in one of the following ways:
- By invoking `setConcurrentConsumers()` directly on the
`JmsEndpoint`.
-## Concurrent Consuming with async consumer
+### Concurrent Consuming with async consumer
Notice that each concurrent consumer will only pick up the next
available message from the JMS broker, when the current message has been
@@ -846,7 +867,7 @@ Engine). See more details in the table on top of the page about the
from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true").
bean(MyClass.class);
-# Request-reply over JMS
+## Request-reply over JMS
Camel supports Request Reply over JMS. In essence the MEP of the
Exchange should be `InOut` when you send a message to a JMS queue.
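A minimal request-reply call can be sketched with a `ProducerTemplate` (the `inbox` queue name is an assumption):

```java
// Sketch: requestBody uses the InOut MEP, so Camel waits for a reply.
Object reply = template.requestBody("jms:queue:inbox", "Hello World");
```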
@@ -863,7 +884,7 @@ summaries the options.
**Temporary** (performance: Fast; works in a cluster: Yes)
@@ -881,7 +902,7 @@ queue, and automatic created by Camel. To use this, do
can optionally configure replyToType=Temporary to make it
stand out that temporary queues are in use.
**Shared** (performance: Slow; works in a cluster: Yes)
@@ -899,7 +920,7 @@ therefore not as fast as Temporary or
Exclusive queues. See further below how to tweak this for
better performance.
**Exclusive** (performance: Fast; works in a cluster: No (*Yes))
@@ -922,7 +943,7 @@ a unique name per node, then you can run this in a clustered
environment. As then the reply message will be sent back to that queue
for the given node that awaits the reply message.
**concurrentConsumers** (performance: Fast)
@@ -934,7 +955,7 @@ a range using the concurrentConsumers and
That using Shared reply queues may not work as well with
concurrent listeners, so use this option with care.
**maxConcurrentConsumers** (performance: Fast)
@@ -972,7 +993,7 @@ this in Camel as shown below:
In this route, we instruct Camel to route replies asynchronously using a
thread pool with five threads.
-## Request-reply over JMS and using a shared fixed reply queue
+### Request-reply over JMS and using a shared fixed reply queue
If you use a fixed reply queue when doing Request Reply over JMS as
shown in the example below, then pay attention.
@@ -999,7 +1020,7 @@ Notice this will cause the Camel to send pull requests to the message
broker more frequently, and thus require more network traffic.
It is generally recommended to use temporary queues if possible.
-## Request-reply over JMS and using an exclusive fixed reply queue
+### Request-reply over JMS and using an exclusive fixed reply queue
In the previous example, Camel would anticipate the fixed reply queue
named "bar" was shared, and thus it uses a `JMSSelector` to only consume
@@ -1038,7 +1059,7 @@ node in the cluster may pick up messages intended as a reply on another
node. For clustered environments, it’s recommended to use shared reply
queues instead.
-# Synchronizing clocks between senders and receivers
+## Synchronizing clocks between senders and receivers
When doing messaging between systems, it is desirable that the systems
have synchronized clocks. For example, when sending a
@@ -1053,7 +1074,7 @@ use the [timestamp
plugin](http://activemq.apache.org/timestampplugin.html) to synchronize
clocks.
-# About time to live
+## About time to live
Read first above about synchronized clocks.
@@ -1113,7 +1134,7 @@ For example, to indicate a 5 sec., you set `timeToLive=5000`. The option
also for InOnly messaging. The `requestTimeout` option is not being used
for InOnly messaging.
-# Enabling Transacted Consumption
+## Enabling Transacted Consumption
A common requirement is to consume from a queue in a transaction and
then process the message using the Camel route. To do this, just ensure
@@ -1159,7 +1180,7 @@ more details about this kind of setup, see
and
[here](http://forum.springsource.org/showthread.php?123631-JMS-DMLC-not-caching%20connection-when-using-TX-despite-cacheLevel-CACHE_CONSUMER&p=403530&posted=1#post403530).
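A minimal transacted consumer can be sketched as follows (assuming the JMS component is configured with a transaction manager; `OrderService` is a hypothetical bean):

```java
// Sketch: consume in a transaction; a failure rolls the message back to the queue.
from("jms:queue:orders?transacted=true")
    .bean(OrderService.class);
```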
-# Using JMSReplyTo for late replies
+## Using JMSReplyTo for late replies
When using Camel as a JMS listener, it sets an Exchange property with
the value of the ReplyTo `javax.jms.Destination` object, having the key
@@ -1189,13 +1210,13 @@ For example:
}
}
-# Using a request timeout
+## Using a request timeout
In the sample below we send a Request Reply style message Exchange (we
use the `requestBody` method = `InOut`) to the slow queue for further
processing in Camel, and we wait for a return reply:
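Such a call might be sketched as follows (the `slow` queue and the 20-second timeout are illustrative; `requestTimeout` is in milliseconds):

```java
// Sketch: request-reply (InOut) with a per-endpoint request timeout.
Object reply = template.requestBody("jms:queue:slow?requestTimeout=20000", "Hello");
```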
-# Sending an InOnly message and keeping the JMSReplyTo header
+## Sending an InOnly message and keeping the JMSReplyTo header
When sending to a [JMS](#jms-component.adoc) destination using
**camel-jms**, the producer will use the MEP to detect if it is `InOnly`
@@ -1217,7 +1238,7 @@ For example, to send an `InOnly` message to the foo queue, but with a
Notice we use `preserveMessageQos=true` to instruct Camel to keep the
`JMSReplyTo` header.
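A sketch of such a send, using the queue names described above:

```java
// Sketch: send InOnly but keep the JMSReplyTo header by preserving message QoS.
template.send("jms:queue:foo?preserveMessageQos=true", exchange -> {
    exchange.getIn().setBody("World");
    exchange.getIn().setHeader("JMSReplyTo", "bar");
});
```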
-# Setting JMS provider options on the destination
+## Setting JMS provider options on the destination
Some JMS providers, like IBM’s WebSphere MQ, need options to be set on
the JMS destination. For example, you may need to specify the
diff --git a/camel-jolt.md b/camel-jolt.md
index 9a6c808149e1b6b3eb57652339b2369be33dc1ca..af08a4d82d6b858a106266917b3f2fc31bcdda96 100644
--- a/camel-jolt.md
+++ b/camel-jolt.md
@@ -26,7 +26,7 @@ Where `specName` is the classpath-local URI of the specification to
invoke; or the complete URL of the remote specification (e.g.:
`\file://folder/myfile.vm`).
-# Samples
+# Examples
For example, you could use something like
diff --git a/camel-jooq.md b/camel-jooq.md
index 0ca833f6d8c385d4684bc46ab99917dbaeef2e8e..28b1cfc2afc39fdd871e460eb9e9bebcafcc9f06 100644
--- a/camel-jooq.md
+++ b/camel-jooq.md
@@ -7,11 +7,13 @@
The JOOQ component enables you to store and retrieve Java objects from
persistent storage using JOOQ library.
+# Usage
+
JOOQ provides DSL to create queries. There are two types of queries:
-1. org.jooq.Query - can be executed
+1. `org.jooq.Query`: can be executed
-2. org.jooq.ResultQuery - can return results
+2. `org.jooq.ResultQuery`: can return results
For example:
@@ -23,7 +25,7 @@ For example:
ResultQuery resultQuery = create.resultQuery("SELECT * FROM BOOK");
Result result = resultQuery.fetch();
-# Plain SQL
+## Plain SQL
SQL could be executed using JOOQ’s objects "Query" or "ResultQuery".
Also, the SQL query could be specified inside URI:
@@ -32,7 +34,7 @@ Also, the SQL query could be specified inside URI:
See the examples below.
-# Consuming from endpoint
+## Consuming from endpoint
Consuming messages from a JOOQ consumer endpoint removes (or updates)
entity beans in the database. This allows you to use a database table as
@@ -52,22 +54,22 @@ When using jooq as a producer you can use any of the following
|Operation|Description|
|---|---|
|none|Execute a query (default)|
|execute|Execute a query with no expected results|
|fetch|Execute a query and the result of the query is stored as the new message body|
@@ -75,9 +77,9 @@ query is stored as the new message body
-## Example:
+# Example
-JOOQ configuration:
+**JOOQ configuration:**
@@ -119,7 +121,7 @@ JOOQ configuration:
-Camel context configuration:
+**Camel context configuration:**
@@ -199,7 +201,7 @@ Camel context configuration:
-Sample bean:
+**Sample bean:**
@Component
public class BookStoreRecordBean {
diff --git a/camel-joor-language.md b/camel-joor-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..86fffbdd1cd8e88a2356e64b57d8131235949c58
--- /dev/null
+++ b/camel-joor-language.md
@@ -0,0 +1,392 @@
+# Joor-language.md
+
+**Since Camel 3.7**
+
+The jOOR language allows using Java code in your Camel expression, with
+some limitations.
+
+The jOOR library integrates with the Java compiler and performs runtime
+compilation of Java code.
+
+The jOOR language is actually Java code and therefore `joor` has been
+deprecated in favour of using `java` as the language name in Camel.
+
+Java 8 is not supported. Java 11 is required.
+
+# jOOR Options
+
+# Usage
+
+## Variables
+
+The jOOR language allows the following variables to be used in the
+script:
+
+|Variable|Type|Description|
+|---|---|---|
+|context|Context|The CamelContext|
+|exchange|Exchange|The Camel Exchange|
+|message|Message|The Camel message|
+|body|Object|The message body|
+
+## Functions
+
+The jOOR language allows the following functions to be used in the
+script:
+
+|Function|Description|
+|---|---|
+|bodyAs(type)|To convert the body to the given type.|
+|headerAs(name, type)|To convert the header with the name to the given type.|
+|headerAs(name, defaultValue, type)|To convert the header with the name to the given type. If no header exists, then use the given default value.|
+|exchangePropertyAs(name, type)|To convert the exchange property with the name to the given type.|
+|exchangePropertyAs(name, defaultValue, type)|To convert the exchange property with the name to the given type. If no exchange property exists, then use the given default value.|
+|optionalBodyAs(type)|To convert the body to the given type, returned wrapped in java.util.Optional.|
+|optionalHeaderAs(name, type)|To convert the header with the name to the given type, returned wrapped in java.util.Optional.|
+|optionalExchangePropertyAs(name, type)|To convert the exchange property with the name to the given type, returned wrapped in java.util.Optional.|
+
+These functions are convenient for getting the message body, header or
+exchange properties as a specific Java type.
+
+For example, to get the message body as a `com.foo.MyUser` type, we can
+do as follows:
+
+ var user = bodyAs(com.foo.MyUser.class);
+
+You can omit *.class* to make the function a little smaller:
+
+ var user = bodyAs(com.foo.MyUser);
+
+The type must be a fully qualified class type, but that can be
+inconvenient to type all the time. In such a situation, you can
+configure an import in the `camel-joor.properties` file as shown below:
+
+ import com.foo.MyUser;
+
+And then the function can be shortened:
+
+ var user = bodyAs(MyUser);
+
+## Dependency Injection
+
+The Camel jOOR language allows dependency injection by referring to
+beans by their id from the Camel registry. For optimization purposes,
+the beans are injected once in the constructor and scoped as
+*singleton*. This requires the injected beans to be *thread safe* as
+they will be reused for all processing.
+
+In the jOOR script you declare the injected beans using the syntax
+`#bean:beanId`.
+
+For example, suppose we have the following bean
+
+ public class MyEchoBean {
+
+ public String echo(String str) {
+ return str + str;
+ }
+
+ public String greet() {
+ return "Hello ";
+ }
+ }
+
+And this bean is registered with the name `myEcho` in the Camel
+registry.
+
+The jOOR script can then inject this bean directly in the script where
+the bean is in use:
+
+ from("direct:start")
+ .transform().joor("'Hello ' + #bean:myEcho.echo(bodyAs(String))")
+ .to("mock:result");
+
+Now this code may seem a bit magic, but what happens is that the
+`myEcho` bean is injected via a constructor, and then called directly in
+the script, so it is as fast as possible.
+
+Under the hood, Camel jOOR generates the following source code compiled
+once:
+
+ public class JoorScript1 implements org.apache.camel.language.joor.JoorMethod {
+
+ private MyEchoBean myEcho;
+
+ public JoorScript1(CamelContext context) throws Exception {
+ myEcho = context.getRegistry().lookupByNameAndType("myEcho", MyEchoBean.class);
+ }
+
+ @Override
+ public Object evaluate(CamelContext context, Exchange exchange, Message message, Object body, Optional optionalBody) throws Exception {
+ return "Hello " + myEcho.echo(bodyAs(exchange, String.class));
+ }
+ }
+
+You can also store a reference to the bean in a variable, which more
+closely resembles how you would code in plain Java:
+
+ from("direct:start")
+ .transform().joor("var bean = #bean:myEcho; return 'Hello ' + bean.echo(bodyAs(String))")
+ .to("mock:result");
+
+Notice how we declare the bean as if it is a local variable via
+`var bean = #bean:myEcho`. When doing this, we must use a different name
+as `myEcho` is the variable used by the dependency injection. Therefore,
+we use *bean* as name in the script.
+
+## Auto imports
+
+The jOOR language will automatically import from:
+
+ import java.util.*;
+ import java.util.concurrent.*;
+ import java.util.stream.*;
+ import org.apache.camel.*;
+ import org.apache.camel.util.*;
+
+## Configuration file
+
+You can configure the jOOR language in the `camel-joor.properties` file
+which by default is loaded from the root classpath. You can specify a
+different location with the `configResource` option on the jOOR
+language.
+
+For example, you can add additional imports in the
+`camel-joor.properties` file by adding:
+
+ import com.foo.MyUser;
+ import com.bar.*;
+ import static com.foo.MyHelper.*;
+
+You can also add aliases (`key=value`) where an alias will be used as a
+shorthand replacement in the code.
+
+ echo()=bodyAs(String) + bodyAs(String)
+
+Which allows using `echo()` in the jOOR language script such as:
+
+ from("direct:hello")
+ .transform(joor("'Hello ' + echo()"))
+ .log("You said ${body}");
+
+The `echo()` alias will be replaced with its value resulting in a script
+as:
+
+ .transform(joor("'Hello ' + bodyAs(String) + bodyAs(String)"))
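Conceptually, the alias expansion is a plain text substitution of each configured key by its value, which can be sketched as follows (illustrative code, not the actual Camel implementation):

```java
import java.util.Map;

// Sketch of the alias mechanism: each key=value pair from
// camel-joor.properties is applied as a literal text replacement.
public class AliasExpansion {

    public static String expand(String script, Map<String, String> aliases) {
        for (Map.Entry<String, String> e : aliases.entrySet()) {
            script = script.replace(e.getKey(), e.getValue());
        }
        return script;
    }

    public static void main(String[] args) {
        Map<String, String> aliases = Map.of("echo()", "bodyAs(String) + bodyAs(String)");
        System.out.println(expand("'Hello ' + echo()", aliases));
        // 'Hello ' + bodyAs(String) + bodyAs(String)
    }
}
```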
+
+You can configure a custom configuration location for the
+`camel-joor.properties` file or reference to a bean in the registry:
+
+ JoorLanguage joor = (JoorLanguage) context.resolveLanguage("joor");
+ joor.setConfigResource("ref:MyJoorConfig");
+
+And then register a bean in the registry with id `MyJoorConfig` that is
+a String value with the content.
+
+ String config = "....";
+ camelContext.getRegistry().put("MyJoorConfig", config);
+
+# Example
+
+For example, to transform the message using jOOR language to the upper
+case
+
+Java
+
+    from("seda:orders")
+        .transform().joor("message.getBody(String.class).toUpperCase()")
+        .to("seda:upper");
+
+XML DSL
+
+    <route>
+        <from uri="seda:orders"/>
+        <transform>
+            <joor>message.getBody(String.class).toUpperCase()</joor>
+        </transform>
+        <to uri="seda:upper"/>
+    </route>
+## Multi statements
+
+It is possible to include multiple statements. The code below shows an
+example where the `user` header is retrieved in a first statement. And
+then, in a second statement we return a value whether the user is `null`
+or not.
+
+ from("seda:orders")
+ .transform().joor("var user = message.getHeader(\"user\"); return user != null ? \"User: \" + user : \"No user exists\";")
+ .to("seda:user");
+
+Notice how we have to quote strings in strings, and that is annoying, so
+instead we can use single quotes:
+
+ from("seda:orders")
+ .transform().joor("var user = message.getHeader('user'); return user != null ? 'User: ' + user : 'No user exists';")
+ .to("seda:user");
+
+## Hot re-load
+
+You can turn off pre-compilation for the jOOR language and then Camel
+will recompile the script for each message. You can externalize the code
+into a resource file, which will be reloaded on each message as shown:
+
+Java
+
+    JoorLanguage joor = (JoorLanguage) context.resolveLanguage("joor");
+    joor.setPreCompile(false);
+
+ from("jms:incoming")
+ .transform().joor("resource:file:src/main/resources/orders.joor")
+ .to("jms:orders");
+
+Here the jOOR script is externalized into the file
+`src/main/resources/orders.joor` which allows you to edit this source
+file while running the Camel application and try the changes with
+hot-reloading.
+
+XML
+
+In XML DSL it’s easier because you can turn off pre-compilation with the
+`preCompile` attribute on the `<joor>` XML element:
+
+    <route>
+        <from uri="jms:incoming"/>
+        <transform>
+            <joor preCompile="false">resource:file:src/main/resources/orders.joor</joor>
+        </transform>
+        <to uri="jms:orders"/>
+    </route>
+
+## Lambda-based AggregationStrategy
+
+The jOOR language has special support for defining an
+`org.apache.camel.AggregationStrategy` as a lambda expression. This is
+useful when using EIP patterns that use aggregation such as the
+Aggregator, Splitter, Recipient List, Enrich, and others.
+
+To use this, the jOOR language script must use the following syntax:
+
+ (e1, e2) -> { }
+
+Where `e1` and `e2` are the *old* Exchange and *new* Exchange from the
+`aggregate` method in the `AggregationStrategy`. The returned value is
+used as the aggregated message body, or use `null` to skip this.
+
+The lambda syntax corresponds to the Java
+`java.util.function.BiFunction<Exchange, Exchange, Object>` type.
+
+For example, to aggregate message bodies together, we can do this as
+shown:
+
+ (e1, e2) -> {
+ String b1 = e1.getMessage().getBody(String.class);
+ String b2 = e2.getMessage().getBody(String.class);
+ return b1 + ',' + b2;
+ }
+
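The `BiFunction` shape can be sketched in plain Java without any Camel
dependency. In the sketch below the two Exchanges are stood in for by
their `String` bodies, and the `null` check mirrors the first message in
a group (an assumption made only for illustration; the class and field
names are hypothetical):

```java
import java.util.function.BiFunction;

public class AggregationSketch {

    // Mirrors the (e1, e2) -> { ... } jOOR lambda, but operating directly
    // on String bodies instead of Camel Exchanges.
    static final BiFunction<String, String, String> AGGREGATE =
            (b1, b2) -> b1 == null ? b2 : b1 + ',' + b2;

    public static void main(String[] args) {
        // First message of the group: nothing aggregated yet.
        String acc = AGGREGATE.apply(null, "order1");
        // Subsequent messages are appended with a comma separator.
        acc = AGGREGATE.apply(acc, "order2");
        System.out.println(acc); // prints order1,order2
    }
}
```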
+## Limitations
+
+The jOOR Camel language is only supported as a block of Java code that
+gets compiled into a Java class with a single method. The code that you
+can write is therefore limited to a number of Java statements.
+
+The supported runtime is intended for Java standalone, Spring Boot,
+Camel Quarkus and other microservices runtimes. It is not supported on
+any kind of Java Application Server runtime.
+
+jOOR does not support runtime compilation with Spring Boot using *fat
+jar* packaging ([https://github.com/jOOQ/jOOR/issues/69](https://github.com/jOOQ/jOOR/issues/69)); it works with
+an exploded classpath.
+
+# Dependencies
+
+To use the jOOR language in your Camel routes, you need to add a
+dependency on **camel-joor**.
+
+If you use Maven you could add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-joor</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-jpa.md b/camel-jpa.md
index 042d2b380afec3dc1482b733f93434ddd65f20ed..21ded72b3733ff8e99264717e1e4a9a602e236f0 100644
--- a/camel-jpa.md
+++ b/camel-jpa.md
@@ -19,7 +19,20 @@ for this component:
-# Sending to the endpoint
+# URI format
+
+ jpa:entityClassName[?options]
+
+For sending to the endpoint, the *entityClassName* is optional. If
+specified, it helps the [Type
+Converter](http://camel.apache.org/type-converter.html) to ensure the
+body is of the correct type.
+
+For consuming, the *entityClassName* is mandatory.
+
+# Usage
+
+## Sending to the endpoint
You can store a Java entity bean in a database by sending it to a JPA
producer endpoint. The body of the *In* message is assumed to be an
@@ -45,7 +58,7 @@ note that you need to specify `useExecuteUpdate` to `true` if you
execute `UPDATE`/`DELETE` with `namedQuery` as Camel doesn’t look into
the named query unlike `query` and `nativeQuery`.
-# Consuming from the endpoint
+## Consuming from the endpoint
Consuming messages from a JPA consumer endpoint removes (or updates)
entity beans in the database. This allows you to use a database table as
@@ -71,18 +84,7 @@ which will be invoked on your entity bean before it has been processed
If you are consuming a lot of rows (100K+) and experience `OutOfMemory`
problems, you should set the `maximumResults` to a sensible value.
-# URI format
-
- jpa:entityClassName[?options]
-
-For sending to the endpoint, the *entityClassName* is optional. If
-specified, it helps the [Type
-Converter](http://camel.apache.org/type-converter.html) to ensure the
-body is of the correct type.
-
-For consuming, the *entityClassName* is mandatory.
-
-# Configuring EntityManagerFactory
+## Configuring EntityManagerFactory
It’s strongly advised to configure the JPA component to use a specific
`EntityManagerFactory` instance. If failed to do so each `JpaEndpoint`
@@ -101,7 +103,7 @@ from the Registry which means you do not need to configure this on the
`JpaComponent` as shown above. You only need to do so if there is
ambiguity, in which case Camel will log a WARN.
-# Configuring TransactionStrategy
+## Configuring TransactionStrategy
The `TransactionStrategy` is a vendor neutral abstraction that allows
`camel-jpa` to easily plug in and work with Spring `TransactionManager`
@@ -127,7 +129,7 @@ explicitly configure a JPA component that references the
-# Using a consumer with a named query
+## Using a consumer with a named query
For consuming only selected entities, you can use the `namedQuery` URI
query option. First, you have to define the named query in the JPA
@@ -144,7 +146,7 @@ After that, you can define a consumer uri like this one:
from("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1")
.to("bean:myBusinessLogic");
-# Using a consumer with a query
+## Using a consumer with a query
For consuming only selected entities, you can use the `query` URI query
option. You only have to define the query option:
@@ -152,7 +154,7 @@ option. You only have to define the query option:
from("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1")
.to("bean:myBusinessLogic");
-# Using a consumer with a native query
+## Using a consumer with a native query
For consuming only selected entities, you can use the `nativeQuery` URI
query option. You only have to define the native query option:
@@ -163,7 +165,7 @@ query option. You only have to define the native query option:
If you use the native query option, you will receive an object array in
the message body.
-# Using a producer with a named query
+## Using a producer with a named query
For retrieving selected entities or execute bulk update/delete, you can
use the `namedQuery` URI query option. First, you have to define the
@@ -183,7 +185,7 @@ After that, you can define a producer uri like this one:
Note that you need to specify `useExecuteUpdate` option to `true` to
execute `UPDATE`/`DELETE` statement as a named query.
-# Using a producer with a query
+## Using a producer with a query
For retrieving selected entities or execute bulk update/delete, you can
use the `query` URI query option. You only have to define the query
@@ -192,7 +194,7 @@ option:
from("direct:query")
.to("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1");
-# Using a producer with a native query
+## Using a producer with a native query
For retrieving selected entities or execute bulk update/delete, you can
use the `nativeQuery` URI query option. You only have to define the
@@ -204,7 +206,7 @@ native query option:
If you use the native query option without specifying `resultClass`, you
will receive an object array in the message body.
-# Using the JPA-Based Idempotent Repository
+## Using the JPA-Based Idempotent Repository
The Idempotent Consumer from the [EIP
patterns](http://camel.apache.org/enterprise-integration-patterns.html)
@@ -239,10 +241,10 @@ To use the JPA based idempotent repository.
-**When running this Camel component tests inside your IDE**
+# Important Development Notes
If you run the [tests of this
-component](https://svn.apache.org/repos/asf/camel/trunk/components/camel-jpa/src/test)
+component](https://github.com/apache/camel/tree/main/components/camel-jpa/src/test)
directly inside your IDE, and not through Maven, then you could see
exceptions like these:
@@ -260,7 +262,7 @@ exceptions like these:
The problem here is that the source has been compiled or recompiled
through your IDE and not through Maven, which would [enhance the
byte-code at build
-time](https://svn.apache.org/repos/asf/camel/trunk/components/camel-jpa/pom.xml).
+time](https://github.com/apache/camel/blob/main/components/camel-jpa/pom.xml).
To overcome this, you need to enable [dynamic byte-code enhancement of
OpenJPA](http://openjpa.apache.org/entity-enhancement.html#dynamic-enhancement).
For example, assuming the current OpenJPA version being used in Camel is
@@ -291,39 +293,39 @@ following argument to the JVM:
|Name|Description|Default|Type|
|---|---|---|---|
|entityType|Entity class name||string|
-|joinTransaction|The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL\_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints.|true|boolean|
+|joinTransaction|The camel-jpa component will join transaction by default. You can use this option to turn this off, for example, if you use LOCAL\_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints.|true|boolean|
|maximumResults|Set the maximum number of results to retrieve on the Query.|-1|integer|
|namedQuery|To use a named query.||string|
|nativeQuery|To use a custom native query. You may want to use the option resultClass also when using native queries.||string|
|persistenceUnit|The JPA persistence unit used by default.|camel|string|
|query|To use a custom query.||string|
-|resultClass|Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an affect when using in conjunction with native query when consuming data.||string|
+|resultClass|Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an effect when using in conjunction with a native query when consuming data.||string|
|consumeDelete|If true, the entity is deleted after it is consumed; if false, the entity is not deleted.|true|boolean|
-|consumeLockEntity|Specifies whether or not to set an exclusive lock on each entity bean while processing the results from polling.|true|boolean|
+|consumeLockEntity|Specifies whether to set an exclusive lock on each entity bean while processing the results from polling.|true|boolean|
|deleteHandler|To use a custom DeleteHandler to delete the row after the consumer is done processing the exchange||object|
|lockModeType|To configure the lock mode on the consumer.|PESSIMISTIC\_WRITE|object|
-|maxMessagesPerPoll|An integer value to define the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to avoid polling many thousands of messages when starting up the server. Set a value of 0 or negative to disable.||integer|
+|maxMessagesPerPoll|An integer value to define the maximum number of messages to gather per poll. By default, no maximum is set. It can be used to avoid polling many thousands of messages when starting up the server. Set a value of 0 or negative to disable.||integer|
|preDeleteHandler|To use a custom Pre-DeleteHandler to delete the row after the consumer has read the entity.||object|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|skipLockedEntity|To configure whether to use NOWAIT on lock and silently skip the entity.|false|boolean|
-|transacted|Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only rollback the last failed message.|false|boolean|
+|transacted|Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only roll back the last failed message.|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|parameters|This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, header and etc.||object|
|pollStrategy|A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.||object|
-|findEntity|If enabled then the producer will find a single entity by using the message body as key and entityType as the class type. This can be used instead of a query to find a single entity.|false|boolean|
+|findEntity|If enabled, then the producer will find a single entity by using the message body as a key and entityType as the class type. This can be used instead of a query to find a single entity.|false|boolean|
|firstResult|Set the position of the first result to retrieve.|-1|integer|
|flushOnSend|Flushes the EntityManager after the entity bean has been persisted.|true|boolean|
|outputTarget|To put the query (or find) result in a header or property instead of the body. If the value starts with the prefix property:, put the result into the so named property, otherwise into the header.||string|
|remove|Indicates to use entityManager.remove(entity).|false|boolean|
|singleResult|If enabled, a query or a find which would return no results or more than one result, will throw an exception instead.|false|boolean|
-|useExecuteUpdate|To configure whether to use executeUpdate() when producer executes a query. When you use INSERT, UPDATE or DELETE statement as a named query, you need to specify this option to 'true'.||boolean|
+|useExecuteUpdate|To configure whether to use executeUpdate() when producer executes a query. When you use INSERT, UPDATE or a DELETE statement as a named query, you need to specify this option to 'true'.||boolean|
|usePersist|Indicates to use entityManager.persist(entity) instead of entityManager.merge(entity). Note: entityManager.persist(entity) doesn't work for detached entities (where the EntityManager has to execute an UPDATE instead of an INSERT query)!|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|usePassedInEntityManager|If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY\_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use.|false|boolean|
|entityManagerProperties|Additional properties for the entity manager to use.||object|
-|sharedEntityManager|Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager.|false|boolean|
+|sharedEntityManager|Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases, joinTransaction should be set to false as this is not an EXTENDED EntityManager.|false|boolean|
|backoffErrorThreshold|The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.||integer|
|backoffIdleThreshold|The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.||integer|
|backoffMultiplier|To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.||integer|
diff --git a/camel-jq-language.md b/camel-jq-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..2f3f3c7bf21b2454a7eb882a15502a3f4081b436
--- /dev/null
+++ b/camel-jq-language.md
@@ -0,0 +1,116 @@
+# Jq-language.md
+
+**Since Camel 3.18**
+
+Camel supports [JQ](https://jqlang.github.io/jq/) to allow using
+[Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) on JSON messages.
+
+# JQ Options
+
+# Usage
+
+## Message body types
+
+Camel JQ leverages `camel-jackson` for type conversion. To enable
+camel-jackson POJO type conversion, refer to the Camel Jackson
+documentation.
+
+## Using header as input
+
+By default, JQ uses the message body as the input source. However, you
+can also use a header as input by specifying the `headerName` option.
+
+For example, to count the number of books from a JSON document that was
+stored in a header named `books` you can do:
+
+ from("direct:start")
+ .setHeader("numberOfBooks")
+ .jq(".store.books | length", int.class, "books")
+ .to("mock:result");
+
+## Camel supplied JQ Functions
+
+JQ comes with about a hundred built-in functions, and you can see many
+examples in the [JQ](https://jqlang.github.io/jq/) documentation.
+
+`camel-jq` adds the following functions:
+
+- `header`: allows accessing the Message header in a JQ expression.
+
+- `property`: allows accessing the Exchange property in a JQ
+  expression.
+
+- `constant`: allows using a constant value as-is in a JQ expression.
+
+For example, to set the property `foo` to the value of the message
+header `MyHeader`:
+
+ from("direct:start")
+ .transform()
+ .jq(".foo = header(\"MyHeader\")")
+ .to("mock:result");
+
+Or from the exchange property:
+
+ from("direct:start")
+ .transform()
+ .jq(".foo = property(\"MyProperty\")")
+ .to("mock:result");
+
+And using a constant value
+
+ from("direct:start")
+ .transform()
+ .jq(".foo = constant(\"Hello World\")")
+ .to("mock:result");
+
+## Transforming a JSON message
+
+For a basic JSON transformation where you have a fixed structure, you
+can represent it with a combination of the Camel simple and JQ
+languages:
+
+ {
+ "company": "${jq(.customer.name)}",
+ "location": "${jq(.customer.address.country)}",
+ "gold": ${jq(.customer.orders[] | length > 5)}
+ }
+
+Here we use the simple language to define the structure and use JQ as
+inlined functions via the `${jq(exp)}` syntax.
+
+This makes it possible to use simple as a template language to define a
+basic structure and then JQ to grab the data from an incoming JSON
+message. The output of the transformation is also JSON, but with simple
+you could also make it XML or plain text based:
+
+    <customer>
+      <name>${jq(.customer.name)}</name>
+      <country>${jq(.customer.address.country)}</country>
+    </customer>
+
+# Examples
+
+For example, you can use JQ in a [Predicate](#manual::predicate.adoc)
+with the [Content-Based Router](#eips:choice-eip.adoc) EIP.
+
+    from("queue:books.new")
+        .choice()
+            .when().jq(".store.book.price < 10")
+                .to("jms:queue:book.cheap")
+            .when().jq(".store.book.price < 30")
+                .to("jms:queue:book.average")
+            .otherwise()
+                .to("jms:queue:book.expensive");
+
+# Dependencies
+
+If you use Maven you could just add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release (see
+the download page for the latest versions).
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-jq</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-js-dsl.md b/camel-js-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..0824bec8bc13262f7a60fd52147a0e769ac8aea0
--- /dev/null
+++ b/camel-js-dsl.md
@@ -0,0 +1,41 @@
+# Js-dsl.md
+
+**Since Camel 3.9**
+
+This DSL is deprecated, has experimental support level, and is not
+recommended for production use.
+
+The `js-dsl` is used for runtime compiling JavaScript routes in an
+existing running Camel integration. This was invented for Camel K and
+later ported to Apache Camel.
+
+This means that Camel will load the `.js` source during startup and via
+the JavaScript compiler transform this into Camel routes.
+
+# Example
+
+The following `hello.js` source file:
+
+**hello.js**
+
+ function proc(e) {
+ e.getIn().setBody('Hello Camel K!')
+ }
+
+ from('timer:tick')
+ .process(proc)
+ .to('log:info')
+
+Can then be loaded and run with Camel CLI or Camel K.
+
+**Running with Camel K**
+
+ kamel run hello.js
+
+**Running with Camel CLI**
+
+ camel run hello.js
+
+# See Also
+
+See [DSL](#manual:ROOT:dsl.adoc)
diff --git a/camel-js-language.md b/camel-js-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..cf1fdfad3ef214ee83957b782a2ebd4f125d8935
--- /dev/null
+++ b/camel-js-language.md
@@ -0,0 +1,88 @@
+# Js-language.md
+
+**Since Camel 3.20**
+
+Camel allows [JavaScript](https://www.graalvm.org/javascript/) to be
+used as an [Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) in Camel routes.
+
+For example, you can use JavaScript in a
+[Predicate](#manual::predicate.adoc) with the [Content-Based
+Router](#eips:choice-eip.adoc) EIP.
+
+# JavaScript Options
+
+# Variables
+
+|Name|Type|Description|
+|---|---|---|
+|this|Exchange|the Exchange is the root object|
+|context|CamelContext|the CamelContext|
+|exchange|Exchange|the Exchange|
+|exchangeId|String|the exchange id|
+|message|Message|the message|
+|body|Message|the message body|
+|headers|Map|the message headers|
+|properties|Map|the exchange properties|
+
+# Dependencies
+
+To use JavaScript in your Camel routes, you need to add the dependency
+on **camel-javascript**, which implements the JavaScript language
+(JavaScript with GraalVM).
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-javascript</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-jsh-dsl.md b/camel-jsh-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..dd0261dbadad1d16badd84f4d003752a74e34027
--- /dev/null
+++ b/camel-jsh-dsl.md
@@ -0,0 +1,38 @@
+# Jsh-dsl.md
+
+**Since Camel 3.15**
+
+This DSL is deprecated, has experimental support level, and is not
+recommended for production use.
+
+The `jsh-dsl` is used for runtime compiling JavaShell routes in an
+existing running Camel integration. This was invented for Camel K and
+later ported to Apache Camel.
+
+This means that Camel will load the `.jsh` source during startup and use
+the JavaShell compiler to transform this into Camel routes.
+
+# Example
+
+The following `example.jsh` source file:
+
+**example.jsh**
+
+ builder.from("timer:tick")
+ .setBody()
+ .constant("Hello Camel K!")
+ .to("log:info");
+
+Can then be loaded and run with Camel CLI or Camel K.
+
+**Running with Camel K**
+
+ kamel run example.jsh
+
+**Running with Camel CLI**
+
+ camel run example.jsh
+
+# See Also
+
+See [DSL](#manual:ROOT:dsl.adoc)
diff --git a/camel-jslt.md b/camel-jslt.md
index d644b801c05c0032271b07658eeb0649ce892968..7d7b34e6501fcb0e41f317d078e75d8653e6ae43 100644
--- a/camel-jslt.md
+++ b/camel-jslt.md
@@ -26,7 +26,9 @@ Where **specName** is the classpath-local URI of the specification to
invoke; or the complete URL of the remote specification (e.g.:
`\file://folder/myfile.vm`).
-# Passing values to JSLT
+# Usage
+
+## Passing values to JSLT
Camel can supply exchange information as variables when applying a JSLT
expression on the body. The available variables from the **Exchange**
@@ -38,22 +40,22 @@ are:
-
+
-
+
headers
The headers of the In message as a json
object
-
+
variables
The variables
-
+
exchange.properties
The Exchange
properties as a json object. exchange is the name of the
@@ -75,7 +77,7 @@ For example, the header named `type` and the exchange property
"instance": $exchange.properties.instance
}
-# Samples
+# Examples
For example, you could use something like:
diff --git a/camel-json-validator.md b/camel-json-validator.md
index 96ba748f7e15602c79619f1c8b19d02456b4e1e9..fddec9b2582214ec5861cdf04c1eb918bbd27cd5 100644
--- a/camel-json-validator.md
+++ b/camel-json-validator.md
@@ -99,6 +99,7 @@ for an example.
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|objectMapper|To use a custom ObjectMapper||object|
## Endpoint Configurations
@@ -115,4 +116,5 @@ for an example.
|disabledDeserializationFeatures|Comma-separated list of Jackson DeserializationFeature enum values which will be disabled for parsing exchange body||string|
|enabledDeserializationFeatures|Comma-separated list of Jackson DeserializationFeature enum values which will be enabled for parsing exchange body||string|
|errorHandler|To use a custom ValidatorErrorHandler. The default error handler captures the errors and throws an exception.||object|
+|objectMapper|The used Jackson object mapper||object|
|uriSchemaLoader|To use a custom schema loader allowing for adding custom format validation. The default implementation will create a schema loader that tries to determine the schema version from the $schema property of the specified schema.||object|
diff --git a/camel-jsonApi-dataformat.md b/camel-jsonApi-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..feb1700f6d069eb73a425f3b14be30c78dc29c04
--- /dev/null
+++ b/camel-jsonApi-dataformat.md
@@ -0,0 +1,18 @@
+# JsonApi-dataformat.md
+
+**Since Camel 3.0**
+
+# Dependencies
+
+To use JsonAPI in your Camel routes, you need to add the dependency on
+**camel-jsonapi** which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-jsonapi</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-jsonata.md b/camel-jsonata.md
index 44e8eb94daeed5e27014e5a943537d1ee3c709b0..04a4bbd3f77bbde5bd6e558986044b7efd9ab1f6 100644
--- a/camel-jsonata.md
+++ b/camel-jsonata.md
@@ -26,7 +26,9 @@ Where **specName** is the classpath-local URI of the specification to
invoke; or the complete URL of the remote specification (e.g.:
`\file://folder/myfile.vm`).
-# Samples
+# Examples
+
+## Basic
For example, you could use something like:
@@ -39,6 +41,26 @@ And a file-based resource:
to("jsonata:file://myfolder/MyResponse.json?contentCache=true").
to("activemq:Another.Queue");
+## Frame bindings
+
+It is possible to configure custom functions that can be called from
+Jsonata. For example, you might want to be able to inject environment
+variables:
+
+ from("activemq:My.Queue").
+ to("jsonata:file://myfolder/MyResponse.json?contentCache=true&frameBinding=#customBindings").
+ to("activemq:Another.Queue");
+
+A custom binding might look like the following:
+
+ @NoArgsConstructor
+ public class CustomJsonataFrameBinding implements JsonataFrameBinding {
+ @Override
+ public void bindToFrame(Jsonata.Frame frame) {
+ frame.bind("env", (String s) -> System.getenv(s));
+ }
+ }
+
## Component Configurations
@@ -46,6 +68,7 @@ And a file-based resource:
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|frameBinding|To configure custom frame bindings and inject user functions.||object|
## Endpoint Configurations
@@ -58,3 +81,4 @@ And a file-based resource:
|inputType|Specifies if the input should be Jackson JsonNode or a JSON String.|Jackson|object|
|outputType|Specifies if the output should be Jackson JsonNode or a JSON String.|Jackson|object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|frameBinding|To configure the Jsonata frame binding. Allows custom functions to be added.||object|
diff --git a/camel-jsonb-dataformat.md b/camel-jsonb-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..a0d705612a5c1f65c45fbd1ecee454c113a553fa
--- /dev/null
+++ b/camel-jsonb-dataformat.md
@@ -0,0 +1,38 @@
+# Jsonb-dataformat.md
+
+**Since Camel 3.7**
+
+JSON-B is a Data Format that uses the standard (javax) JSON-B library.
+
+ from("activemq:My.Queue").
+ marshal().json(JsonLibrary.Jsonb).
+ to("mqseries:Another.Queue");
+
+# JSON-B Options
+
+# Dependencies
+
+To use JSON-B in your Camel routes, you need to add the dependency on
+**camel-jsonb** that implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-jsonb</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+You also have to add a dependency on an **implementation** of the
+JSON-B specification.
+
+If you want to use the Johnzon implementation and you are using Maven,
+add the following to your `pom.xml`:
+
+
+    <dependency>
+        <groupId>org.apache.johnzon</groupId>
+        <artifactId>johnzon-jsonb</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-jsonpath-language.md b/camel-jsonpath-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..469148b51e6d07459cee44687a11b9b477809440
--- /dev/null
+++ b/camel-jsonpath-language.md
@@ -0,0 +1,324 @@
+# Jsonpath-language.md
+
+**Since Camel 2.13**
+
+Camel supports [JSONPath](https://github.com/json-path/JsonPath/) to
+allow using [Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) on JSON messages.
+
+# JSONPath Options
+
+# Usage
+
+## JSONPath Syntax
+
+The JSONPath syntax takes some time to learn, even for basic
+predicates. For example, to find all the cheap books, you have to
+write:
+
+ $.store.book[?(@.price < 20)]
+
+## Easy JSONPath Syntax
+
+However, what if you could just write it as:
+
+ store.book.price < 20
+
+And you can omit the path if you just want to look at nodes with a price
+key:
+
+ price < 20
+
+To support this, there is an `EasyPredicateParser` which kicks in if
+you have defined the predicate using a basic style. That means the
+predicate must not start with the `$` sign and must only include one
+operator.
+
+The easy syntax is:
+
+ left OP right
+
+You can use the Camel Simple language as the right operand, e.g.:
+
+ store.book.price < ${header.limit}
+
+See the [JSONPath](https://github.com/json-path/JsonPath) project page
+for more syntax examples.
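
The easy-syntax rewrite can be sketched in plain Java. The helper below is a hypothetical illustration of the idea only, not Camel's actual `EasyPredicateParser`; it assumes the predicate has the shape `left OP right`:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: rewrite an "easy" predicate such as
// "store.book.price < 20" into full JSONPath filter syntax.
public class EasyPredicateSketch {

    private static final Pattern EASY =
            Pattern.compile("^(\\S+)\\s*(==|!=|<=|>=|<|>)\\s*(.+)$");

    public static String rewrite(String easy) {
        Matcher m = EASY.matcher(easy.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Not an easy predicate: " + easy);
        }
        String left = m.group(1);   // e.g. store.book.price
        String op = m.group(2);     // e.g. <
        String right = m.group(3);  // e.g. 20
        int lastDot = left.lastIndexOf('.');
        // the last path segment becomes the filter key, the rest the path
        String path = lastDot < 0 ? "" : "." + left.substring(0, lastDot);
        String key = lastDot < 0 ? left : left.substring(lastDot + 1);
        return "$" + path + "[?(@." + key + " " + op + " " + right + ")]";
    }

    public static void main(String[] args) {
        System.out.println(rewrite("store.book.price < 20"));
        // $.store.book[?(@.price < 20)]
    }
}
```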
+
+# Examples
+
+For example, you can use JSONPath in a
+[Predicate](#manual::predicate.adoc) with the [Content-Based
+Router](#eips:choice-eip.adoc) EIP.
+
+Java
+
+    from("queue:books.new")
+        .choice()
+            .when().jsonpath("$.store.book[?(@.price < 10)]")
+                .to("jms:queue:book.cheap")
+            .when().jsonpath("$.store.book[?(@.price < 30)]")
+                .to("jms:queue:book.average")
+            .otherwise()
+                .to("jms:queue:book.expensive");
+
+XML DSL
+
+    <route>
+        <from uri="queue:books.new"/>
+        <choice>
+            <when>
+                <jsonpath>$.store.book[?(@.price &lt; 10)]</jsonpath>
+                <to uri="jms:queue:book.cheap"/>
+            </when>
+            <when>
+                <jsonpath>$.store.book[?(@.price &lt; 30)]</jsonpath>
+                <to uri="jms:queue:book.average"/>
+            </when>
+            <otherwise>
+                <to uri="jms:queue:book.expensive"/>
+            </otherwise>
+        </choice>
+    </route>
+
+## Supported message body types
+
+Camel JSONPath supports message bodies of the following types:
+
+|Type|Comment|
+|---|---|
+|File|Reading from files|
+|String|Plain strings|
+|Map|Message bodies as `java.util.Map` types|
+|List|Message bodies as `java.util.List` types|
+|POJO|**Optional** If Jackson is on the classpath, then camel-jsonpath is able to use Jackson to read the message body as a POJO and convert it to `java.util.Map`, which is supported by JSONPath. For example, you can add `camel-jackson` as a dependency to include Jackson.|
+|InputStream|If none of the above types matches, then Camel will attempt to read the message body as a `java.io.InputStream`.|
+
+If a message body is of an unsupported type, then an exception is
+thrown by default. However, you can configure JSONPath to suppress
+exceptions (see below).
+
+## Suppressing exceptions
+
+By default, jsonpath will throw an exception if the JSON payload does
+not have a valid path according to the configured jsonpath expression.
+In some use-cases, you may want to ignore this when the JSON payload
+contains optional data. Therefore, you can set the option
+`suppressExceptions` to `true` to ignore this as shown:
+
+Java
+from("direct:start")
+.choice()
+// use true to suppress exceptions
+.when().jsonpath("person.middlename", true)
+.to("mock:middle")
+.otherwise()
+.to("mock:other");
+
+XML DSL
+
+    <route>
+        <from uri="direct:start"/>
+        <choice>
+            <when>
+                <jsonpath suppressExceptions="true">person.middlename</jsonpath>
+                <to uri="mock:middle"/>
+            </when>
+            <otherwise>
+                <to uri="mock:other"/>
+            </otherwise>
+        </choice>
+    </route>
+
+This option is also available on the `@JsonPath` annotation.
+
+## Inline Simple expressions
+
+It’s possible to inline the [Simple](#languages:simple-language.adoc)
+language in the JSONPath expression using the simple syntax `${xxx}`.
+
+An example is shown below:
+
+Java
+
+    from("direct:start")
+        .choice()
+            .when().jsonpath("$.store.book[?(@.price < ${header.cheap})]")
+                .to("mock:cheap")
+            .when().jsonpath("$.store.book[?(@.price < ${header.average})]")
+                .to("mock:average")
+            .otherwise()
+                .to("mock:expensive");
+
+XML DSL
+
+    <route>
+        <from uri="direct:start"/>
+        <choice>
+            <when>
+                <jsonpath>$.store.book[?(@.price &lt; ${header.cheap})]</jsonpath>
+                <to uri="mock:cheap"/>
+            </when>
+            <when>
+                <jsonpath>$.store.book[?(@.price &lt; ${header.average})]</jsonpath>
+                <to uri="mock:average"/>
+            </when>
+            <otherwise>
+                <to uri="mock:expensive"/>
+            </otherwise>
+        </choice>
+    </route>
+
+You can turn off support for inlined Simple expression by setting the
+option `allowSimple` to `false` as shown:
+
+Java
+
+    .when().jsonpath("$.store.book[?(@.price < 10)]", false, false)
+
+XML DSL
+
+    <jsonpath allowSimple="false">$.store.book[?(@.price &lt; 10)]</jsonpath>
+
+## JSONPath injection
+
+You can use [Bean Integration](#manual::bean-integration.adoc) to invoke
+a method on a bean and use various languages such as JSONPath (via the
+`@JsonPath` annotation) to extract a value from the message and bind it
+to a method parameter, as shown below:
+
+ public class Foo {
+
+ @Consume("activemq:queue:books.new")
+ public void doSomething(@JsonPath("$.store.book[*].author") String author, @Body String json) {
+ // process the inbound message here
+ }
+ }
+
+## Encoding Detection
+
+The encoding of the JSON document is detected automatically if the
+document is encoded in Unicode (UTF-8, UTF-16LE, UTF-16BE, UTF-32LE,
+UTF-32BE) as specified in RFC 4627. If the encoding is a non-Unicode
+encoding, you can either make sure that you pass the document in String
+format to JSONPath, or you can specify the encoding in the header
+`CamelJsonPathJsonEncoding`, which is defined as a constant in
+`JsonpathConstants.HEADER_JSON_ENCODING`.
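
The RFC 4627 detection rule itself can be illustrated in plain Java: the first two characters of a JSON text are always ASCII, so the pattern of zero bytes among the first four octets reveals the Unicode encoding. This is only a sketch of the rule, not Camel's internal code:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Sketch of RFC 4627 encoding detection from the first four octets.
public class JsonEncodingSniffer {

    public static Charset detect(byte[] b) {
        if (b.length >= 4) {
            // 00 00 00 xx -> UTF-32BE, xx 00 00 00 -> UTF-32LE
            if (b[0] == 0 && b[1] == 0 && b[2] == 0) return Charset.forName("UTF-32BE");
            if (b[1] == 0 && b[2] == 0 && b[3] == 0) return Charset.forName("UTF-32LE");
        }
        if (b.length >= 2) {
            // 00 xx -> UTF-16BE, xx 00 -> UTF-16LE
            if (b[0] == 0) return StandardCharsets.UTF_16BE;
            if (b[1] == 0) return StandardCharsets.UTF_16LE;
        }
        return StandardCharsets.UTF_8; // default per RFC 4627
    }

    public static void main(String[] args) {
        System.out.println(detect("{}".getBytes(StandardCharsets.UTF_8)));     // UTF-8
        System.out.println(detect("{}".getBytes(StandardCharsets.UTF_16LE))); // UTF-16LE
    }
}
```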
+
+## Split JSON data into sub rows as JSON
+
+You can use JSONPath to split a JSON document, such as:
+
+ from("direct:start")
+ .split().jsonpath("$.store.book[*]", List.class)
+ .to("log:book");
+
+Notice how we specify `List.class` as the result type. This is because
+if there is only a single element (only one book), then jsonpath will
+return the single entity as a `Map` instead of a `List`. Therefore,
+we tell Camel that the result should always be a `List`, and Camel will
+then automatically wrap the single element into a new `List` object.
+
+Then each book is logged; however, the message body is a `Map`
+instance. Sometimes you may want to output this as a plain String JSON
+value instead, which can be done with the `writeAsString` option as
+shown:
+
+ from("direct:start")
+ .split().jsonpathWriteAsString("$.store.book[*]", List.class)
+ .to("log:book");
+
+Then each book is logged as a String JSON value.
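
The single-element wrapping described earlier in this section can be sketched in plain Java. This is an illustration of the idea only, not Camel's type converter:

```java
import java.util.List;
import java.util.Map;

// Sketch of the List result-type coercion: a single Map result (one
// book) is wrapped into a List so downstream code always sees a List.
public class ResultCoercion {

    @SuppressWarnings("unchecked")
    public static List<Object> asList(Object result) {
        if (result instanceof List) {
            return (List<Object>) result;
        }
        return List.of(result); // wrap the single element
    }

    public static void main(String[] args) {
        Object single = Map.of("title", "Camel in Action");
        System.out.println(asList(single).size());            // 1
        System.out.println(asList(List.of("a", "b")).size()); // 2
    }
}
```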
+
+## Unpack a single-element array into an object
+
+It is possible to unpack a single-element array into an object:
+
+ from("direct:start")
+ .setBody().jsonpathUnpack("$.store.book", Book.class)
+ .to("log:book");
+
+If a book array contains only one book, it will be converted into a Book
+object.
+
+## Using header as input
+
+By default, JSONPath uses the message body as the input source. However,
+you can also use a header as input by specifying the `headerName`
+option.
+
+For example, to count the number of books from a JSON document that was
+stored in a header named `books` you can do:
+
+ from("direct:start")
+ .setHeader("numberOfBooks")
+ .jsonpath("$..store.book.length()", false, int.class, "books")
+ .to("mock:result");
+
+In the `jsonpath` expression above, we specify the name of the header
+as `books`, and we also specify that we want the result to be
+converted to an integer via `int.class`.
+
+The same example in XML DSL would be:
+
+    <route>
+        <from uri="direct:start"/>
+        <setHeader name="numberOfBooks">
+            <jsonpath headerName="books" resultType="int">$..store.book.length()</jsonpath>
+        </setHeader>
+        <to uri="mock:result"/>
+    </route>
+
+## Transforming a JSON message
+
+For basic JSON transformation where you have a fixed structure, you can
+represent it with a combination of the Camel Simple and JSONPath
+languages as:
+
+ {
+ "company": "${jsonpath($.customer.name)}",
+ "location": "${jsonpath($.customer.address.country)}",
+ "gold": ${jsonpath($.customer.orders.length() > 5)}
+ }
+
+Here we use the Simple language to define the structure and JSONPath
+as inlined functions via the `${jsonpath(exp)}` syntax.
+
+This makes it possible to use Simple as a template language to define a
+basic structure and then JSONPath to grab the data from an incoming
+JSON message. The output of the transformation is also JSON, but with
+Simple you could also make it XML or plain text based:
+
+
+    <customer>
+        <company>${jsonpath($.customer.name)}</company>
+        <location>${jsonpath($.customer.address.country)}</location>
+    </customer>
+
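
The substitution mechanics behind `${jsonpath(exp)}` can be sketched in plain Java. The snippet below fakes the JSONPath evaluation with a lookup map to show the placeholder replacement; it is a hypothetical illustration, not how Camel's Simple language is implemented, and it assumes the expression contains no nested parentheses:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: replace ${jsonpath(exp)} placeholders in a template with
// pre-computed values keyed by the expression text.
public class TemplateSketch {

    private static final Pattern FUNC = Pattern.compile("\\$\\{jsonpath\\(([^)]*)\\)\\}");

    public static String render(String template, Map<String, String> results) {
        Matcher m = FUNC.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // group(1) is the JSONPath expression inside jsonpath(...)
            m.appendReplacement(out, Matcher.quoteReplacement(results.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String tmpl = "{ \"company\": \"${jsonpath($.customer.name)}\" }";
        System.out.println(render(tmpl, Map.of("$.customer.name", "Acme")));
        // { "company": "Acme" }
    }
}
```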
diff --git a/camel-jt400.md b/camel-jt400.md
index 12be3556a2d4e520e424e41ac8a8ed25de6cad35..20265aea204ef4c0497394181932fe4d35deae5a 100644
--- a/camel-jt400.md
+++ b/camel-jt400.md
@@ -55,12 +55,12 @@ are presumed to be text and sent to the queue as an informational
message. Inquiry messages or messages requiring a message ID are not
supported.
-# Connection pool
+## Connection pool
You can explicitly configure a connection pool on the Jt400Component, or
as an uri option on the endpoint.
-# Program call
+## Program call
This endpoint expects the input to be an `Object[]`, whose object types
are `int`, `long`, `CharSequence` (such as `String`), or `byte[]`. All
diff --git a/camel-jta.md b/camel-jta.md
new file mode 100644
index 0000000000000000000000000000000000000000..edbf55ef043d20983e5c1ae4bdbd5c5b33446209
--- /dev/null
+++ b/camel-jta.md
@@ -0,0 +1,9 @@
+# Jta.md
+
+**Since Camel 3.4**
+
+The `camel-jta` component is used for integrating Camel’s transaction
+support with JTA.
+
+See more details in the [Transactional
+Client](#eips:transactional-client.adoc) documentation.
diff --git a/camel-jte.md b/camel-jte.md
index 6ecbe361922e614e02aa873c00b875b16f4e275c..e05ce5184f0d0089e22c0c3d4e0595bac8582f05 100644
--- a/camel-jte.md
+++ b/camel-jte.md
@@ -29,7 +29,9 @@ Where **templateName** is the classpath-local URI of the template to
invoke; or the complete URL of the remote template (e.g.:
`\file://folder/myfile.jte`).
-# JTE Context
+# Usage
+
+## JTE Context
Camel will provide exchange information in the JTE context, as a
`org.apache.camel.component.jte.Model` class with the following
@@ -41,38 +43,38 @@ information:
-
+
-
+
exchange
The Exchange itself (only
if allowContextMapAll=true).
-
+
headers
The headers of the message as
java.util.Map.
-
+
body
The message body as
Object.
-
+
strBody()
The message body converted to a
String
-
+
header("key")
Message header with the given key
converted to a String value.
-
+
exchangeProperty("key")
Exchange property with the given key
@@ -84,14 +86,14 @@ converted to a String value (only if allowContextMapAll=true).
You can set up your custom JTE data model in the message header with the
key "**CamelJteDataModel**" just like this
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
headers is set, then Camel uses this over the endpoint configured
resource. This allows you to provide a dynamic template at runtime.
-# Samples
+# Examples
For example, you could use something like:
diff --git a/camel-kafka.md b/camel-kafka.md
index 1c51bf29cdd378f03c881fc4917afe5e187e9583..3c353d66ec34489ef4807b629141e57b63072236 100644
--- a/camel-kafka.md
+++ b/camel-kafka.md
@@ -30,7 +30,9 @@ If you want to send a message to a dynamic topic then use
`KafkaConstants.OVERRIDE_TOPIC` as it is used as a one-time header that
is not sent along the message, and actually is removed in the producer.
-# Consumer error handling
+# Usage
+
+## Consumer error handling
While the Kafka consumer is polling messages from the Kafka broker,
errors can happen. This section describes what happens and what you can
@@ -69,7 +71,7 @@ For advanced control a custom implementation of
configured on the component level, which allows controlling which of the
strategies to use for each exception.
-# Consumer error handling (advanced)
+## Consumer error handling (advanced)
By default, Camel will poll using the **ERROR\_HANDLER** to process
exceptions. How Camel handles a message that results in an exception can
@@ -87,127 +89,7 @@ It is recommended that you read the section below "Using manual commit
with Kafka consumer" to understand how `breakOnFirstError` will work
based on the `CommitManager` that is configured.
-# Samples
-
-## Consuming messages from Kafka
-
-Here is the minimal route you need to read messages from Kafka.
-
- from("kafka:test?brokers=localhost:9092")
- .log("Message received from Kafka : ${body}")
- .log(" on the topic ${headers[kafka.TOPIC]}")
- .log(" on the partition ${headers[kafka.PARTITION]}")
- .log(" with the offset ${headers[kafka.OFFSET]}")
- .log(" with the key ${headers[kafka.KEY]}")
-
-If you need to consume messages from multiple topics, you can use a
-comma separated list of topic names.
-
- from("kafka:test,test1,test2?brokers=localhost:9092")
- .log("Message received from Kafka : ${body}")
- .log(" on the topic ${headers[kafka.TOPIC]}")
- .log(" on the partition ${headers[kafka.PARTITION]}")
- .log(" with the offset ${headers[kafka.OFFSET]}")
- .log(" with the key ${headers[kafka.KEY]}")
-
-It’s also possible to subscribe to multiple topics giving a pattern as
-the topic name and using the `topicIsPattern` option.
-
- from("kafka:test.*?brokers=localhost:9092&topicIsPattern=true")
- .log("Message received from Kafka : ${body}")
- .log(" on the topic ${headers[kafka.TOPIC]}")
- .log(" on the partition ${headers[kafka.PARTITION]}")
- .log(" with the offset ${headers[kafka.OFFSET]}")
- .log(" with the key ${headers[kafka.KEY]}")
-
-When consuming messages from Kafka, you can use your own offset
-management and not delegate this management to Kafka. To keep the
-offsets, the component needs a `StateRepository` implementation such as
-`FileStateRepository`. This bean should be available in the registry.
-Here how to use it :
-
- // Create the repository in which the Kafka offsets will be persisted
- FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat"));
-
- // Bind this repository into the Camel registry
- Registry registry = createCamelRegistry();
- registry.bind("offsetRepo", repository);
-
- // Configure the camel context
- DefaultCamelContext camelContext = new DefaultCamelContext(registry);
- camelContext.addRoutes(new RouteBuilder() {
- @Override
- public void configure() throws Exception {
- from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
- // Set up the topic and broker address
- "&groupId=A" +
- // The consumer processor group ID
- "&autoOffsetReset=earliest" +
- // Ask to start from the beginning if we have unknown offset
- "&offsetRepository=#offsetRepo")
- // Keep the offsets in the previously configured repository
- .to("mock:result");
- }
- });
-
-## Producing messages to Kafka
-
-Here is the minimal route you need in order to write messages to Kafka.
-
- from("direct:start")
- .setBody(constant("Message from Camel")) // Message to send
- .setHeader(KafkaConstants.KEY, constant("Camel")) // Key of the message
- .to("kafka:test?brokers=localhost:9092");
-
-# SSL configuration
-
-You have 2 different ways to configure the SSL communication on the
-Kafka component.
-
-The first way is through the many SSL endpoint parameters:
-
- from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
- "&groupId=A" +
- "&sslKeystoreLocation=/path/to/keystore.jks" +
- "&sslKeystorePassword=changeit" +
- "&sslKeyPassword=changeit" +
- "&securityProtocol=SSL")
- .to("mock:result");
-
-The second way is to use the `sslContextParameters` endpoint parameter:
-
- // Configure the SSLContextParameters object
- KeyStoreParameters ksp = new KeyStoreParameters();
- ksp.setResource("/path/to/keystore.jks");
- ksp.setPassword("changeit");
- KeyManagersParameters kmp = new KeyManagersParameters();
- kmp.setKeyStore(ksp);
- kmp.setKeyPassword("changeit");
- SSLContextParameters scp = new SSLContextParameters();
- scp.setKeyManagers(kmp);
-
- // Bind this SSLContextParameters into the Camel registry
- Registry registry = createCamelRegistry();
- registry.bind("ssl", scp);
-
- // Configure the camel context
- DefaultCamelContext camelContext = new DefaultCamelContext(registry);
- camelContext.addRoutes(new RouteBuilder() {
- @Override
- public void configure() throws Exception {
- from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
- // Set up the topic and broker address
- "&groupId=A" +
- // The consumer processor group ID
- "&sslContextParameters=#ssl" +
- // The security protocol
- "&securityProtocol=SSL)
- // Reference the SSL configuration
- .to("mock:result");
- }
- });
-
-# Using the Kafka idempotent repository
+## The Kafka idempotent repository
The `camel-kafka` library provides a Kafka topic-based idempotent
repository. This repository broadcasts all changes to idempotent
@@ -241,20 +123,20 @@ A `KafkaIdempotentRepository` has the following properties:
-
+
-
+
topic
Required The name of
the Kafka topic to use to broadcast changes. (required)
-
+
bootstrapServers
Required The
@@ -264,25 +146,25 @@ and consumer. Use this as shorthand if not setting
this component will apply sensible default configurations for the
producer and consumer.
-
+
groupId
The groupId to assign to the idempotent
consumer.
-
+
startupOnly
false
Whether to sync on startup only, or to
continue syncing while Camel is running.
-
+
maxCacheSize
1000
How many of the most recently used keys
should be stored in memory (default 1000).
-
+
pollDurationMs
100
The poll duration of the Kafka
@@ -300,7 +182,7 @@ sent on the topic, there exists a possibility that the cache cannot be
warmed up and will operate in an inconsistent state relative to its
peers until it catches up.
-
+
producerConfig
Sets the properties that will be used
@@ -308,7 +190,7 @@ by the Kafka producer that broadcasts changes. Overrides
bootstrapServers, so must define the Kafka
bootstrap.servers property itself
-
+
consumerConfig
Sets the properties that will be used
@@ -323,8 +205,8 @@ The repository can be instantiated by defining the `topic` and
`bootstrapServers`, or the `producerConfig` and `consumerConfig`
property sets can be explicitly defined to enable features such as
SSL/SASL. To use, this repository must be placed in the Camel registry,
-either manually or by registration as a bean in Spring/Blueprint, as it
-is `CamelContext` aware.
+either manually or by registration as a bean in Spring, as it is
+`CamelContext` aware.
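
The `maxCacheSize` behavior from the property table can be sketched with a plain `LinkedHashMap`-based LRU cache. This is illustrative only, not `KafkaIdempotentRepository`'s actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: keep only the most recently used keys in memory, evicting
// the eldest entry once the configured limit is exceeded.
public class LruKeyCache extends LinkedHashMap<String, Boolean> {

    private final int maxCacheSize;

    public LruKeyCache(int maxCacheSize) {
        super(16, 0.75f, true); // access-order gives LRU behavior
        this.maxCacheSize = maxCacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
        return size() > maxCacheSize;
    }

    /** Returns true if the key was not seen before (message is new). */
    public boolean add(String key) {
        return put(key, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        LruKeyCache cache = new LruKeyCache(2);
        System.out.println(cache.add("a")); // true, first time
        System.out.println(cache.add("a")); // false, duplicate
        cache.add("b");
        cache.add("c");                     // evicts "a"
        System.out.println(cache.add("a")); // true again after eviction
    }
}
```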
Sample usage is as follows:
@@ -408,7 +290,7 @@ Lastly, it is also possible to do so in a processor:
.idempotentRepository("kafkaIdempotentRepository")
.to(to);
-# Using manual commit with Kafka consumer
+## Manual commits with the Kafka consumer
By default, the Kafka consumer will use auto commit, where the offset
will be committed automatically in the background using a given
@@ -423,7 +305,7 @@ endpoint, for example:
KafkaComponent kafka = new KafkaComponent();
kafka.setAutoCommitEnable(false);
kafka.setAllowManualCommit(true);
- ...
+ // ...
camelContext.addComponent("kafka", kafka);
By default, it uses the `NoopCommitManager` behind the scenes. To commit
@@ -483,7 +365,7 @@ operations.
**Note 2**: this is mostly useful with aggregation’s completion timeout
strategies.
-# Pausable Consumers
+## Pausable Consumers
The Kafka component supports pausable consumers. This type of consumer
can pause consuming data based on conditions external to the component
@@ -509,7 +391,7 @@ most users should prefer using the
[RoutePolicy](#manual::route-policy.adoc), which offers better control
of the route.
-# Kafka Headers propagation
+## Kafka Headers propagation
When consuming messages from Kafka, headers will be propagated to Camel
exchange headers automatically. Producing flow backed by same
@@ -541,9 +423,9 @@ and `from` routes:
`myStrategy` object should be a subclass of `HeaderFilterStrategy` and
must be placed in the Camel registry, either manually or by registration
-as a bean in Spring/Blueprint, as it is `CamelContext` aware.
+as a bean in Spring, as it is `CamelContext` aware.
-# Kafka Transaction
+## Kafka Transaction
You need to add `transactional.id`, `enable.idempotence` and `retries`
in `additional-properties` to enable kafka transaction with the
@@ -568,15 +450,15 @@ transaction has been committed before and there is no chance to roll
back the changes since the kafka transaction does not support JTA/XA
spec. There is still a risk with the data consistency.
-# Setting Kerberos config file
+## Setting Kerberos config file
-Configure the *krb5.conf* file directly through the API
+Configure the *krb5.conf* file directly through the API:
static {
KafkaComponent.setKerberosConfigLocation("path/to/config/file");
}
-# Batching Consumer
+## Batching Consumer
To use a Kafka batching consumer with Camel, an application has to set
the configuration `batching` to `true`.
@@ -592,7 +474,7 @@ fill the batch, it is possible to use the `pollTimeoutMs` option to set
a timeout for the polling. In this case, the batch may contain less
messages than set in the `maxPollRecords`.
-## Automatic Commits
+### Automatic Commits
By default, Camel uses automatic commits when using batch processing. In
this case, Camel automatically commits the records after they have been
@@ -621,7 +503,7 @@ The code below provides an example of this approach:
}).to(KafkaTestUtil.MOCK_RESULT);
}
-### Handling Errors with Automatic Commits
+#### Handling Errors with Automatic Commits
When using automatic commits, Camel will not commit records if there is
a failure in processing. Because of this, there is a risk that records
@@ -721,7 +603,7 @@ and other Kafka operations are not abruptly aborted. For instance:
// route setup ...
}
-# Custom Subscription Adapters
+## Custom Subscription Adapters
Applications with complex subscription logic may provide a custom bean
to handle the subscription process. To do so, it is necessary to implement
@@ -747,9 +629,9 @@ Then, it is necessary to add it as named bean instance to the registry:
context.getRegistry().bind(KafkaConstants.KAFKA_SUBSCRIBE_ADAPTER, new CustomSubscribeAdapter());
-# Interoperability
+## Interoperability
-## JMS
+### JMS
When interoperating Kafka and JMS, it may be necessary to coerce the JMS
headers into their expected type.
@@ -781,6 +663,126 @@ option. For example:
from("kafka:topic?headerDeserializer=#class:org.apache.camel.component.kafka.consumer.support.interop.JMSDeserializer")
.to("...");
+# Examples
+
+## Consuming messages from Kafka
+
+Here is the minimal route you need to read messages from Kafka.
+
+ from("kafka:test?brokers=localhost:9092")
+ .log("Message received from Kafka : ${body}")
+ .log(" on the topic ${headers[kafka.TOPIC]}")
+ .log(" on the partition ${headers[kafka.PARTITION]}")
+ .log(" with the offset ${headers[kafka.OFFSET]}")
+ .log(" with the key ${headers[kafka.KEY]}")
+
+If you need to consume messages from multiple topics, you can use a
+comma-separated list of topic names.
+
+ from("kafka:test,test1,test2?brokers=localhost:9092")
+ .log("Message received from Kafka : ${body}")
+ .log(" on the topic ${headers[kafka.TOPIC]}")
+ .log(" on the partition ${headers[kafka.PARTITION]}")
+ .log(" with the offset ${headers[kafka.OFFSET]}")
+ .log(" with the key ${headers[kafka.KEY]}")
+
+It’s also possible to subscribe to multiple topics by giving a pattern
+as the topic name and setting the `topicIsPattern` option.
+
+ from("kafka:test.*?brokers=localhost:9092&topicIsPattern=true")
+ .log("Message received from Kafka : ${body}")
+ .log(" on the topic ${headers[kafka.TOPIC]}")
+ .log(" on the partition ${headers[kafka.PARTITION]}")
+ .log(" with the offset ${headers[kafka.OFFSET]}")
+ .log(" with the key ${headers[kafka.KEY]}")
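
The pattern-based selection can be illustrated with plain Java regex matching against topic names. The real matching happens inside the Kafka client; this is only a sketch of the idea:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Sketch: the topic value is treated as a regular expression and
// matched against the available topic names.
public class TopicPatternDemo {

    public static List<String> matching(String pattern, List<String> topics) {
        Pattern p = Pattern.compile(pattern);
        return topics.stream()
                .filter(t -> p.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(matching("test.*", List.of("test", "test1", "other")));
        // [test, test1]
    }
}
```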
+
+When consuming messages from Kafka, you can use your own offset
+management and not delegate this management to Kafka. To keep the
+offsets, the component needs a `StateRepository` implementation such as
+`FileStateRepository`. This bean should be available in the registry.
+Here is how to use it:
+
+ // Create the repository in which the Kafka offsets will be persisted
+ FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat"));
+
+ // Bind this repository into the Camel registry
+ Registry registry = createCamelRegistry();
+ registry.bind("offsetRepo", repository);
+
+ // Configure the camel context
+ DefaultCamelContext camelContext = new DefaultCamelContext(registry);
+ camelContext.addRoutes(new RouteBuilder() {
+ @Override
+ public void configure() throws Exception {
+ fromF("kafka:%s?brokers=localhost:{{kafkaPort}}" +
+ // Set up the topic and broker address
+ "&groupId=A" +
+ // The consumer processor group ID
+ "&autoOffsetReset=earliest" +
+ // Ask to start from the beginning if we have unknown offset
+ "&offsetRepository=#offsetRepo", TOPIC)
+ // Keep the offsets in the previously configured repository
+ .to("mock:result");
+ }
+ });
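
The role of a file-based state repository can be illustrated with a minimal plain-Java key/value store persisted to disk. This is an illustration of the concept only, not Camel's `FileStateRepository` implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch: persist key -> offset pairs to a file so a consumer can
// resume from the stored offsets after a restart.
public class FileOffsetStore {

    private final Path file;
    private final Properties state = new Properties();

    public FileOffsetStore(Path file) throws IOException {
        this.file = file;
        if (Files.exists(file)) {
            try (InputStream in = Files.newInputStream(file)) {
                state.load(in); // reload previously persisted offsets
            }
        }
    }

    public void setState(String key, String offset) throws IOException {
        state.setProperty(key, offset);
        try (OutputStream out = Files.newOutputStream(file)) {
            state.store(out, null); // flush every update to disk
        }
    }

    public String getState(String key) {
        return state.getProperty(key);
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("offsets", ".dat");
        FileOffsetStore store = new FileOffsetStore(p);
        store.setState("test/0", "42");
        // a new instance reloads the persisted offset
        System.out.println(new FileOffsetStore(p).getState("test/0")); // 42
    }
}
```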
+
+## Producing messages to Kafka
+
+Here is the minimal route you need to produce messages to Kafka.
+
+ from("direct:start")
+ .setBody(constant("Message from Camel")) // Message to send
+ .setHeader(KafkaConstants.KEY, constant("Camel")) // Key of the message
+ .to("kafka:test?brokers=localhost:9092");
+
+## SSL configuration
+
+You have two different ways to configure the SSL communication on the
+Kafka component.
+
+The first way is through the many SSL endpoint parameters:
+
+ from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
+ "&groupId=A" +
+ "&sslKeystoreLocation=/path/to/keystore.jks" +
+ "&sslKeystorePassword=changeit" +
+ "&sslKeyPassword=changeit" +
+ "&securityProtocol=SSL")
+ .to("mock:result");
+
+The second way is to use the `sslContextParameters` endpoint parameter:
+
+ // Configure the SSLContextParameters object
+ KeyStoreParameters ksp = new KeyStoreParameters();
+ ksp.setResource("/path/to/keystore.jks");
+ ksp.setPassword("changeit");
+ KeyManagersParameters kmp = new KeyManagersParameters();
+ kmp.setKeyStore(ksp);
+ kmp.setKeyPassword("changeit");
+ SSLContextParameters scp = new SSLContextParameters();
+ scp.setKeyManagers(kmp);
+
+ // Bind this SSLContextParameters into the Camel registry
+ Registry registry = createCamelRegistry();
+ registry.bind("ssl", scp);
+
+ // Configure the camel context
+ DefaultCamelContext camelContext = new DefaultCamelContext(registry);
+ camelContext.addRoutes(new RouteBuilder() {
+ @Override
+ public void configure() throws Exception {
+            from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
+                // Set up the topic and broker address
+                "&groupId=A" +
+                // The consumer processor group ID
+                "&sslContextParameters=#ssl" +
+                // Reference the SSL configuration
+                "&securityProtocol=SSL")
+                // The security protocol
+ .to("mock:result");
+ }
+ });
+
## Component Configurations
@@ -806,9 +808,9 @@ option. For example:
|commitTimeoutMs|The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete|5000|duration|
|consumerRequestTimeoutMs|The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapsed, the client will resend the request if necessary or fail the request if retries are exhausted.|30000|integer|
|consumersCount|The number of consumers that connect to kafka server. Each consumer is run on a separate thread that retrieves and process the incoming data.|1|integer|
-|fetchMaxBytes|The maximum amount of data the server should return for a fetch request This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.|52428800|integer|
+|fetchMaxBytes|The maximum amount of data the server should return for a fetch request. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.|52428800|integer|
|fetchMinBytes|The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering the request.|1|integer|
-|fetchWaitMaxMs|The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes|500|integer|
+|fetchWaitMaxMs|The maximum amount of time the server will block before answering the fetch request if there isn't enough data to immediately satisfy fetch.min.bytes|500|integer|
|groupId|A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes can indicate that they are all part of the same consumer group. This option is required for consumers.||string|
|groupInstanceId|A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g., process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.||string|
|headerDeserializer|To use a custom KafkaHeaderDeserializer to deserialize kafka headers values||object|
@@ -844,7 +846,7 @@ option. For example:
|key|The record key (or null if no key is specified). If this option has been configured then it takes precedence over header KafkaConstants#KEY||string|
|keySerializer|The serializer class for keys (defaults to the same as for messages if nothing is given).|org.apache.kafka.common.serialization.StringSerializer|string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
-|lingerMs|The producer groups together any records that arrive in between request transmissions into a single, batched, request. Normally, this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances, the client may want to reduce the number of requests even under a moderate load. This setting accomplishes this by adding a small amount of artificial delay. That is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that they can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition, it will be sent immediately regardless of this setting, however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e., no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.|0|integer|
+|lingerMs|The producer groups together any records that arrive in between request transmissions into a single, batched, request. Normally, this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances, the client may want to reduce the number of requests even under a moderate load. This setting achieves this by adding a small amount of artificial delay. That is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that they can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition, it will be sent immediately regardless of this setting, however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e., no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.|0|integer|
|maxBlockMs|The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may time out if the transaction coordinator could not be discovered or did not respond within the timeout.|60000|integer|
|maxInFlightRequest|The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).|5|integer|
|maxRequestSize|The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.|1048576|integer|
@@ -862,7 +864,7 @@ option. For example:
|recordMetadata|Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List of RecordMetadata. The list is stored on a header with the key KafkaConstants#KAFKA\_RECORDMETA|true|boolean|
|requestRequiredAcks|The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero, then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retry configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case should the leader fail immediately after acknowledging the record, but before the followers have replicated it, then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.|all|string|
|requestTimeoutMs|The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client.|30000|integer|
-|retries|Setting a value greater than zero will cause the client to resend any record that has failed to be sent due to a potentially transient error. Note that this retry is no different from if the client re-sending the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.||integer|
+|retries|Setting a value greater than zero will cause the client to resend any record that has failed to be sent due to a potentially transient error. Note that this retry is no different from the client re-sending the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to 1 will potentially change the ordering of records, because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.||integer|
|sendBufferBytes|Socket write buffer size|131072|integer|
|useIterator|Sets whether sending to kafka should send the message body as a single record, or use a java.util.Iterator to send multiple records to kafka (if the message body can be iterated).|true|boolean|
|valueSerializer|The serializer class for messages.|org.apache.kafka.common.serialization.StringSerializer|string|
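As a hedged illustration of how a few of the producer options above are used in practice, the following Camel endpoint URI (topic name and broker address are hypothetical) combines options from this table:

```
kafka:mytopic?brokers=localhost:9092&requestRequiredAcks=all&lingerMs=5&retries=3
```

Each query parameter corresponds to a row in the table; options left unset keep the listed defaults.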
@@ -927,9 +929,9 @@ option. For example:
|commitTimeoutMs|The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete|5000|duration|
|consumerRequestTimeoutMs|The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.|30000|integer|
|consumersCount|The number of consumers that connect to the kafka server. Each consumer is run on a separate thread that retrieves and processes the incoming data.|1|integer|
-|fetchMaxBytes|The maximum amount of data the server should return for a fetch request This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.|52428800|integer|
+|fetchMaxBytes|The maximum amount of data the server should return for a fetch request. This is not an absolute maximum; if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.|52428800|integer|
|fetchMinBytes|The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering the request.|1|integer|
-|fetchWaitMaxMs|The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes|500|integer|
+|fetchWaitMaxMs|The maximum amount of time the server will block before answering the fetch request if there isn't enough data to immediately satisfy fetch.min.bytes|500|integer|
|groupId|A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes can indicate that they are all part of the same consumer group. This option is required for consumers.||string|
|groupInstanceId|A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g., process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.||string|
|headerDeserializer|To use a custom KafkaHeaderDeserializer to deserialize kafka headers values||object|
@@ -962,7 +964,7 @@ option. For example:
|headerSerializer|To use a custom KafkaHeaderSerializer to serialize kafka headers values||object|
|key|The record key (or null if no key is specified). If this option has been configured then it takes precedence over the header KafkaConstants#KEY||string|
|keySerializer|The serializer class for keys (defaults to the same as for messages if nothing is given).|org.apache.kafka.common.serialization.StringSerializer|string|
-|lingerMs|The producer groups together any records that arrive in between request transmissions into a single, batched, request. Normally, this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances, the client may want to reduce the number of requests even under a moderate load. This setting accomplishes this by adding a small amount of artificial delay. That is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that they can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition, it will be sent immediately regardless of this setting, however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e., no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.|0|integer|
+|lingerMs|The producer groups together any records that arrive in between request transmissions into a single, batched, request. Normally, this occurs only under load when records arrive faster than they can be sent out. However, in some circumstances, the client may want to reduce the number of requests even under a moderate load. This setting achieves this by adding a small amount of artificial delay. That is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that they can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition, it will be sent immediately regardless of this setting, however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e., no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.|0|integer|
|maxBlockMs|The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may time out if the transaction coordinator could not be discovered or did not respond within the timeout.|60000|integer|
|maxInFlightRequest|The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).|5|integer|
|maxRequestSize|The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.|1048576|integer|
@@ -980,7 +982,7 @@ option. For example:
|recordMetadata|Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List of RecordMetadata. The list is stored on a header with the key KafkaConstants#KAFKA\_RECORDMETA|true|boolean|
|requestRequiredAcks|The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero, then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retry configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case should the leader fail immediately after acknowledging the record, but before the followers have replicated it, then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.|all|string|
|requestTimeoutMs|The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client.|30000|integer|
-|retries|Setting a value greater than zero will cause the client to resend any record that has failed to be sent due to a potentially transient error. Note that this retry is no different from if the client re-sending the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.||integer|
+|retries|Setting a value greater than zero will cause the client to resend any record that has failed to be sent due to a potentially transient error. Note that this retry is no different from the client re-sending the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to 1 will potentially change the ordering of records, because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.||integer|
|sendBufferBytes|Socket write buffer size|131072|integer|
|useIterator|Sets whether sending to kafka should send the message body as a single record, or use a java.util.Iterator to send multiple records to kafka (if the message body can be iterated).|true|boolean|
|valueSerializer|The serializer class for messages.|org.apache.kafka.common.serialization.StringSerializer|string|
diff --git a/camel-kamelet-eip.md b/camel-kamelet-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..76c4783a193c9725aa278b7a72dab8baaa3c678e
--- /dev/null
+++ b/camel-kamelet-eip.md
@@ -0,0 +1,97 @@
+# Kamelet-eip.md
+
+Kamelets (Kamel route snippets) allow users to connect to external
+systems via a simplified interface, hiding all the low-level details
+about how those connections are implemented.
+
+By default, calling kamelets should be done as
+[endpoints](#message-endpoint.adoc) with the
+[kamelet](#components::kamelet-component.adoc) component, such as
+`to("kamelet:mykamelet")`.
+
+The Kamelet EIP allows calling Kamelets (i.e., [Route
+Template](#manual::route-template.adoc)), **for special use-cases**.
+
+A Kamelet may be designed for a special use-case, such as aggregating
+messages and returning a response message only when a group of
+aggregated messages is completed. In other words, the Kamelet does not
+return a response message for every incoming message. In special
+situations like these, you **must** use the Kamelet EIP instead of
+the [kamelet](#components::kamelet-component.adoc) component.
+
+Given the following Kamelet (as a route template):
+
+ routeTemplate("my-aggregate")
+ .templateParameter("count")
+ .from("kamelet:source")
+ .aggregate(constant(true))
+ .completionSize("{{count}}")
+ .aggregationStrategy(AggregationStrategies.string(","))
+ .to("log:aggregate")
+ .to("kamelet:sink")
+ .end();
+
+Note how the route template above uses *kamelet:sink* as a special
+endpoint to send out a result message. This is only done when the
+[Aggregate EIP](#aggregate-eip.adoc) has completed a group of messages.
+
+And the following route using the kamelet:
+
+ from("direct:start")
+ // this is not possible, you must use Kamelet EIP instead
+ .to("kamelet:my-aggregate?count=5")
+ .to("log:info")
+ .to("mock:result");
+
+This does not work; instead, you **must** use the Kamelet EIP:
+
+ from("direct:start")
+ .kamelet("my-aggregate?count=5")
+ .to("log:info")
+ .to("mock:result");
+
+When calling a Kamelet, you may refer to the name (template id) of the
+Kamelet in the EIP as shown below:
+
+# Options
+
+# Exchange properties
+
+# Using Kamelet EIP
+
+Java
+
+    from("direct:start")
+        .kamelet("foo")
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <kamelet name="foo"/>
+        <to uri="mock:result"/>
+    </route>
+
+Camel will then, when starting:
+
+- Lookup the [Route Template](#manual::route-template.adoc) with the
+  given id (in the example above, it is `foo`) from the `CamelContext`
+
+- Create a new route based on the [Route
+ Template](#manual::route-template.adoc)
+
+# Dependency
+
+The implementation of the Kamelet EIP is located in the `camel-kamelet`
+JAR, so you should add the following dependency:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-kamelet</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
+# See Also
+
+See the example
+[camel-example-kamelet](https://github.com/apache/camel-examples/tree/main/kamelet).
diff --git a/camel-kamelet-main.md b/camel-kamelet-main.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad51e92ef85deb89c3d4172c3ab3c39a0341cd0a
--- /dev/null
+++ b/camel-kamelet-main.md
@@ -0,0 +1,42 @@
+# Kamelet-main.md
+
+**Since Camel 3.11**
+
+A `main` class that is opinionated to bootstrap and run Camel standalone
+with Kamelets (or plain YAML routes) for development and demo purposes.
+
+# Initial configuration
+
+The `KameletMain` is pre-configured with the following properties:
+
+ camel.component.kamelet.location = classpath:kamelets,github:apache:camel-kamelets/kamelets
+ camel.component.rest.consumerComponentName = platform-http
+ camel.component.rest.producerComponentName = vertx-http
+ camel.main.jmxUpdateRouteEnabled = true
+
+These settings can be overridden by configuration in
+`application.properties`.
+
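As a minimal sketch, overriding one of the pre-configured values in `application.properties` could look like this (the overriding values shown are hypothetical):

```properties
# Use only local classpath kamelets instead of the github fallback
camel.component.kamelet.location = classpath:kamelets
# Disable updating routes over JMX
camel.main.jmxUpdateRouteEnabled = false
```

Any property not overridden keeps the pre-configured value listed above.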
+# Automatic dependencies downloading
+
+The Kamelet Main can automatically download Kamelet YAML files from a
+remote location over HTTP/HTTPS, as well as from GitHub.
+
+The official Kamelets from the Apache Camel Kamelet Catalog are stored
+on GitHub and can be used out of the box as-is.
+
+For example, a Camel route can be *coded* in YAML using the
+Earthquake Kamelet from the catalog, as shown below:
+
+ - route:
+ from: "kamelet:earthquake-source"
+ steps:
+ - unmarshal:
+ json: {}
+ - log: "Earthquake with magnitude ${body[properties][mag]} at ${body[properties][place]}"
+
+In this use-case, the earthquake kamelet will be downloaded from
+GitHub, along with its required dependencies.
+
+You can find an example with this at
+[kamelet-main](https://github.com/apache/camel-examples/tree/main/kamelet-main).
diff --git a/camel-kamelet.md b/camel-kamelet.md
index daa2f2a95ff5d8294cd1dde37bc762f2a516a190..5c25689a814d47e176e850805b70aec31488bbd0 100644
--- a/camel-kamelet.md
+++ b/camel-kamelet.md
@@ -17,7 +17,9 @@ accepts additional parameters that are passed to the [Route
Template](#manual::route-template.adoc) engine and consumed upon route
materialization.
-# Discovery
+# Usage
+
+## Discovery
If a [Route Template](#manual::route-template.adoc) is not found, the
**kamelet** endpoint tries to load the related **kamelet** definition
@@ -25,7 +27,7 @@ from the file system (by default `classpath:kamelets`). The default
resolution mechanism expects *Kamelets* files to have the extension
`.kamelet.yaml`.
-# Samples
+# Examples
*Kamelets* can be used as if they were standard Camel components. For
example, suppose that we have created a Route Template as follows:
diff --git a/camel-knative-http.md b/camel-knative-http.md
new file mode 100644
index 0000000000000000000000000000000000000000..c4625d178f67368da3a80b7aef564eece8b2b406
--- /dev/null
+++ b/camel-knative-http.md
@@ -0,0 +1,5 @@
+# Knative-http.md
+
+**Since Camel 3.15**
+
+HTTP transport for the camel-knative component.
diff --git a/camel-knative.md b/camel-knative.md
index 206c68963177dd70ee70fe6a5d31c86488637be7..c8b00a9776a127afa16f75f735011874a13598cc 100644
--- a/camel-knative.md
+++ b/camel-knative.md
@@ -27,22 +27,24 @@ You can append query options to the URI in the following format:
# Options
-# Supported Knative resources
+# Usage
+
+## Supported Knative resources
The component supports the following Knative resources, which you can target
or expose using the `type` path parameter:
-- **channel**: allow producing or consuming events to or from a
+- `channel`: allow producing or consuming events to or from a
[**Knative Channel**](https://knative.dev/docs/eventing/channels/)
-- **endpoint**: allow exposing or targeting serverless workloads using
+- `endpoint`: allow exposing or targeting serverless workloads using
[**Knative
Services**](https://knative.dev/docs/serving/spec/knative-api-specification-1.0/#service)
-- **event**: allow producing or consuming events to or from a
- [**Knative Broker**](https://knative.dev/docs/eventing/broker)
+- `event`: allow producing or consuming events to or from a [**Knative
+ Broker**](https://knative.dev/docs/eventing/broker)
-# Knative Environment
+## Knative Environment
As the Knative component hides the technical details of how to
communicate with Knative services to the user (protocols, addresses,
@@ -52,6 +54,8 @@ so called `Knative Environment`, which is essence is a Json document
made by a number of `service` elements which looks like the below
example:
+**Example**
+
{
"services": [
{
@@ -88,28 +92,28 @@ The `metadata` fields has some additional advanced fields:
filter.
The prefix to define filters to be applied to the incoming message headers.
`filter.ce.source=my-source`

knative.kind
The type of the k8s resource referenced by the endpoint.
`knative.kind=InMemoryChannel`

knative.apiVersion
The version of the k8s resource
@@ -117,13 +121,13 @@ referenced by the endpoint
`knative.apiVersion=messaging.knative.dev/v1beta1`

knative.reply
If the consumer should construct a full reply to the knative request.
`knative.reply=false`

ce.override.
The prefix to define CloudEvents values
@@ -133,7 +137,7 @@ style="text-align: left;">```ce.override.ce-type=MyType```
-# Example
+**Example**
CamelContext context = new DefaultCamelContext();
@@ -149,7 +153,7 @@ style="text-align: left;">```ce.override.ce-type=MyType```
- expose knative service
-# Using custom Knative Transport
+## Using custom Knative Transport
As of today, the component only supports `http` as transport, since it is
the only supported protocol on the Knative side, but the transport is pluggable
@@ -181,13 +185,13 @@ by implementing the following interface:
KnativeEnvironment.KnativeServiceDefinition service, Processor processor);
}
-# Using ProducerTemplate
+## Using ProducerTemplate
When using the Knative producer with a `ProducerTemplate`, it is necessary
to specify a value for the CloudEvent source by setting the header
*CamelCloudEventSource*.
-## Example
+**Example**
producerTemplate.sendBodyAndHeader("knative:event/broker-test", body, CloudEvent.CAMEL_CLOUD_EVENT_SOURCE, "my-source-name");
diff --git a/camel-kubernetes-config-maps.md b/camel-kubernetes-config-maps.md
index d190d2ccbf6e667821d6c2ae91958f98e6efbd31..f0c3658f625f12e20c6bb63de6768857a6854d52 100644
--- a/camel-kubernetes-config-maps.md
+++ b/camel-kubernetes-config-maps.md
@@ -9,23 +9,27 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes ConfigMap operations and a consumer to consume events
related to ConfigMap objects.
-# Supported producer operation
+# Usage
-- listConfigMaps
+## Supported producer operation
-- listConfigMapsByLabels
+- `listConfigMaps`
-- getConfigMap
+- `listConfigMapsByLabels`
-- createConfigMap
+- `getConfigMap`
-- updateConfigMap
+- `createConfigMap`
-- deleteConfigMap
+- `updateConfigMap`
-# Kubernetes ConfigMaps Producer Examples
+- `deleteConfigMap`
-- listConfigMaps: this operation lists the configmaps
+# Examples
+
+## Kubernetes ConfigMaps Producer Examples
+
+- `listConfigMaps`: this operation lists the configmaps
@@ -35,8 +39,8 @@ related to ConfigMap objects.
This operation returns a List of ConfigMaps from your cluster
-- listConfigMapsByLabels: this operation lists the configmaps selected
- by label
+- `listConfigMapsByLabels`: this operation lists the configmaps
+ selected by label
@@ -56,7 +60,7 @@ This operation returns a List of ConfigMaps from your cluster
This operation returns a List of ConfigMaps from your cluster, using a
label selector (with key1 and key2, with value value1 and value2)
-# Kubernetes ConfigMaps Consumer Example
+## Kubernetes ConfigMaps Consumer Example
fromF("kubernetes-config-maps://%s?oauthToken=%s", host, authToken)
.setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default"))
@@ -75,6 +79,55 @@ label selector (with key1 and key2, with value value1 and value2)
This consumer will return a list of events on the namespace default for
the config map test.
+# Using configmap properties function with Kubernetes
+
+The `camel-kubernetes` component includes the following configmap-related
+functions:
+
+- `configmap` - A function to lookup the string property from
+ Kubernetes ConfigMaps.
+
+- `configmap-binary` - A function to lookup the binary property from
+ Kubernetes ConfigMaps.
+
+Camel reads ConfigMaps from the Kubernetes API Server. When RBAC is
+enabled on the cluster, the ServiceAccount used to run the
+application needs the proper permissions for such access.
+
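For instance, a minimal RBAC grant for read access to configmaps could look like the following sketch (the role and service account names are hypothetical; adjust the namespace to your deployment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: camel-configmap-reader
  namespace: default
rules:
  - apiGroups: [""]           # core API group, where ConfigMaps live
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: camel-configmap-reader-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-camel-app        # the ServiceAccount running the Camel application
    namespace: default
roleRef:
  kind: Role
  name: camel-configmap-reader
  apiGroup: rbac.authorization.k8s.io
```

Binding the Role to the ServiceAccount that runs the pod lets the configmap functions read from the API server.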
+Before the Kubernetes property placeholder functions can be used, they
+need to be configured with either (or both) of the following:
+
+- path - A *mount path* that must be mounted to the running pod, to
+ load the configmaps or secrets from local disk.
+
+- kubernetes client - **Autowired** An
+ `io.fabric8.kubernetes.client.KubernetesClient` instance to use for
+ connecting to the Kubernetes API server.
+
+Camel will first use *mount paths* (if configured) for the lookup, and
+then fall back to using the `KubernetesClient`.
+
+## Using configmap with Kubernetes
+
+Given a configmap named `myconfig` in Kubernetes that has two entries:
+
+ drink = beer
+ first = Carlsberg
+
+Then these values can be used in your Camel routes such as:
+
+    <camelContext>
+        <route>
+            <from uri="direct:start"/>
+            <log message="What to drink: {{configmap:myconfig/drink}}"/>
+            <log message="Which brand: {{configmap:myconfig/first}}"/>
+        </route>
+    </camelContext>
+
+You can also provide a default value in case a key does not exist:
+
+    <log message="What to drink: {{configmap:myconfig/drink:tea}}"/>
+
## Component Configurations
diff --git a/camel-kubernetes-cronjob.md b/camel-kubernetes-cronjob.md
index 8005fb925f6a11b97510a82700c4945e8fe44e0a..db9ac2050aa65fcedf554338d261d3c89676ce85 100644
--- a/camel-kubernetes-cronjob.md
+++ b/camel-kubernetes-cronjob.md
@@ -10,17 +10,17 @@ execute kubernetes CronJob operations.
# Supported producer operation
-- listCronJob
+- `listCronJob`
-- listCronJobByLabels
+- `listCronJobByLabels`
-- getCronJob
+- `getCronJob`
-- createCronJob
+- `createCronJob`
-- updateCronJob
+- `updateCronJob`
-- deleteCronJob
+- `deleteCronJob`
## Component Configurations
diff --git a/camel-kubernetes-custom-resources.md b/camel-kubernetes-custom-resources.md
index 2592875600a83427240b2a985cd195f1bd8bf2aa..8080c0e26b94707dba91fcd7552212543f2fded3 100644
--- a/camel-kubernetes-custom-resources.md
+++ b/camel-kubernetes-custom-resources.md
@@ -11,17 +11,17 @@ events related to Node objects.
# Supported producer operation
-- listCustomResources
+- `listCustomResources`
-- listCustomResourcesByLabels
+- `listCustomResourcesByLabels`
-- getCustomResource
+- `getCustomResource`
-- deleteCustomResource
+- `deleteCustomResource`
-- createCustomResource
+- `createCustomResource`
-- updateCustomResource
+- `updateCustomResource`
## Component Configurations
diff --git a/camel-kubernetes-deployments.md b/camel-kubernetes-deployments.md
index 80cf5f410340c918ca6e1a7bda4b1a3cb549f082..e708ede2a68e7f844cc8cfad5b3c406c592d4bda 100644
--- a/camel-kubernetes-deployments.md
+++ b/camel-kubernetes-deployments.md
@@ -9,26 +9,30 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Deployments operations and a consumer to consume
events related to Deployments objects.
-# Supported producer operation
+# Usage
-- listDeployments
+## Supported producer operation
-- listDeploymentsByLabels
+- `listDeployments`
-- getDeployment
+- `listDeploymentsByLabels`
-- createDeployment
+- `getDeployment`
-- updateDeployment
+- `createDeployment`
-- deleteDeployment
+- `updateDeployment`
-- scaleDeployment
+- `deleteDeployment`
-# Kubernetes Deployments Producer Examples
+- `scaleDeployment`
-- listDeployments: this operation list the deployments on a kubernetes
- cluster
+# Examples
+
+## Kubernetes Deployments Producer Examples
+
+- `listDeployments`: this operation lists the deployments on a
+ kubernetes cluster
@@ -36,9 +40,9 @@ events related to Deployments objects.
toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeployments").
to("mock:result");
-This operation return a List of Deployment from your cluster
+This operation returns a list of deployments from your cluster
-- listDeploymentsByLabels: this operation list the deployments by
+- `listDeploymentsByLabels`: this operation lists the deployments by
labels on a kubernetes cluster
@@ -55,10 +59,10 @@ This operation return a List of Deployment from your cluster
toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeploymentsByLabels").
to("mock:result");
-This operation return a List of Deployments from your cluster, using a
+This operation returns a list of deployments from your cluster, using a
label selector (with key1 and key2, with value value1 and value2)
-# Kubernetes Deployments Consumer Example
+## Kubernetes Deployments Consumer Example
fromF("kubernetes-deployments://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result");
public class KubernertesProcessor implements Processor {
diff --git a/camel-kubernetes-events.md b/camel-kubernetes-events.md
index f02c557de1aa55541123e219096586ea7d2de270..54c81a8ac413c01920635274f548cc5346e95b1b 100644
--- a/camel-kubernetes-events.md
+++ b/camel-kubernetes-events.md
@@ -9,23 +9,27 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Event operations and a consumer to consume events
related to Event objects.
-# Supported producer operation
+# Usage
-- listEvents
+## Supported producer operation
-- listEventsByLabels
+- `listEvents`
-- getEvent
+- `listEventsByLabels`
-- createEvent
+- `getEvent`
-- updateEvent
+- `createEvent`
-- deleteEvent
+- `updateEvent`
-# Kubernetes Events Producer Examples
+- `deleteEvent`
-- listEvents: this operation lists the events
+# Examples
+
+## Kubernetes Events Producer Examples
+
+- `listEvents`: this operation lists the events
@@ -40,7 +44,7 @@ To indicate from which namespace, the events are expected, it is
possible to set the message header `CamelKubernetesNamespaceName`. By
default, the events of all namespaces are returned.
-- listEventsByLabels: this operation lists the events selected by
+- `listEventsByLabels`: this operation lists the events selected by
labels
@@ -68,7 +72,7 @@ This operation expects the message header `CamelKubernetesEventsLabels`
to be set to a `Map` where the key-value pairs represent
the expected label names and values.
-- getEvent: this operation gives a specific event
+- `getEvent`: this operation gives a specific event
@@ -94,7 +98,7 @@ needs to be set to the target name of event.
If no matching event could be found, `null` is returned.
-- createEvent: this operation creates a new event
+- `createEvent`: this operation creates a new event
@@ -146,12 +150,12 @@ representing a prefilled builder to use when creating the event. Please
note that the labels, name of event and name of namespace are always set
from the message headers, even when the builder is provided.
-- updateEvent: this operation updates an existing event
+- `updateEvent`: this operation updates an existing event
The behavior is exactly the same as `createEvent`, only the name of the
operation is different.
-- deleteEvent: this operation deletes an existing event.
+- `deleteEvent`: this operation deletes an existing event.
@@ -174,7 +178,7 @@ This operation expects two message headers which are
one needs to be set to the name of the target namespace and second one
needs to be set to the target name of event.
-# Kubernetes Events Consumer Example
+## Kubernetes Events Consumer Example
fromF("kubernetes-events://%s?oauthToken=%s", host, authToken)
.setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default"))
diff --git a/camel-kubernetes-hpa.md b/camel-kubernetes-hpa.md
index 31805e552cfca4fe202876f342fe0ca5344d7a6f..d1ae0666f3a49b13c0ba1c5e397f13f74b2767de 100644
--- a/camel-kubernetes-hpa.md
+++ b/camel-kubernetes-hpa.md
@@ -9,23 +9,27 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute kubernetes Horizontal Pod Autoscaler operations and a consumer
to consume events related to Horizontal Pod Autoscaler objects.
-# Supported producer operation
+# Usage
-- listHPA
+## Supported producer operation
-- listHPAByLabels
+- `listHPA`
-- getHPA
+- `listHPAByLabels`
-- createHPA
+- `getHPA`
-- updateHPA
+- `createHPA`
-- deleteHPA
+- `updateHPA`
-# Kubernetes HPA Producer Examples
+- `deleteHPA`
-- listHPA: this operation lists the HPAs on a kubernetes cluster
+# Examples
+
+## Kubernetes HPA Producer Examples
+
+- `listHPA`: this operation lists the HPAs on a kubernetes cluster
@@ -33,10 +37,10 @@ to consume events related to Horizontal Pod Autoscaler objects.
toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPA").
to("mock:result");
-This operation returns a List of HPAs from your cluster
+This operation returns a list of HPAs from your cluster
-- listDeploymentsByLabels: this operation lists the HPAs by labels on
- a kubernetes cluster
+- `listHPAByLabels`: this operation lists the HPAs by labels
+  on a kubernetes cluster
@@ -52,10 +56,10 @@ This operation returns a List of HPAs from your cluster
toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPAByLabels").
to("mock:result");
-This operation returns a List of HPAs from your cluster, using a label
+This operation returns a list of HPAs from your cluster using a label
selector (with key1 and key2, with value value1 and value2)
-# Kubernetes HPA Consumer Example
+## Kubernetes HPA Consumer Example
fromF("kubernetes-hpa://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result");
public class KubernertesProcessor implements Processor {
diff --git a/camel-kubernetes-job.md b/camel-kubernetes-job.md
index 5f4f796c4bf52fd9fc58e9da5f7e725b2477abc3..20a920d36a4da6aab7055680463e9e7bccbbe240 100644
--- a/camel-kubernetes-job.md
+++ b/camel-kubernetes-job.md
@@ -8,23 +8,27 @@ The Kubernetes Job component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
execute kubernetes Job operations.
-# Supported producer operation
+# Usage
-- listJob
+## Supported producer operation
-- listJobByLabels
+- `listJob`
-- getJob
+- `listJobByLabels`
-- createJob
+- `getJob`
-- updateJob
+- `createJob`
-- deleteJob
+- `updateJob`
-# Kubernetes Job Producer Examples
+- `deleteJob`
-- listJob: this operation lists the jobs on a kubernetes cluster
+# Examples
+
+## Kubernetes Job Producer Examples
+
+- `listJob`: this operation lists the jobs on a kubernetes cluster
@@ -32,9 +36,9 @@ execute kubernetes Job operations.
toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob").
to("mock:result");
-This operation returns a List of Jobs from your cluster
+This operation returns a list of jobs from your cluster
-- listJobByLabels: this operation lists the jobs by labels on a
+- `listJobByLabels`: this operation lists the jobs by labels on a
kubernetes cluster
@@ -51,10 +55,10 @@ This operation returns a List of Jobs from your cluster
toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels").
to("mock:result");
-This operation returns a List of Jobs from your cluster, using a label
+This operation returns a list of jobs from your cluster, using a label
selector (with key1 and key2, with value value1 and value2)
-- createJob: This operation creates a job on a Kubernetes Cluster
+- `createJob`: This operation creates a job on a Kubernetes Cluster
We have a wonderful example of this operation thanks to [Emmerson
Miranda](https://github.com/Emmerson-Miranda) from this [Java
diff --git a/camel-kubernetes-namespaces.md b/camel-kubernetes-namespaces.md
index 7880891ee7707fc31f0885502e47d9a8ce672c8d..c4d70c862413a915fbd09e4c0b1bd9b5dc3e663b 100644
--- a/camel-kubernetes-namespaces.md
+++ b/camel-kubernetes-namespaces.md
@@ -9,24 +9,28 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Namespace operations and a consumer to consume events
related to Namespace events.
-# Supported producer operation
+# Usage
-- listNamespaces
+## Supported producer operation
-- listNamespacesByLabels
+- `listNamespaces`
-- getNamespace
+- `listNamespacesByLabels`
-- createNamespace
+- `getNamespace`
-- updateNamespace
+- `createNamespace`
-- deleteNamespace
+- `updateNamespace`
-# Kubernetes Namespaces Producer Examples
+- `deleteNamespace`
-- listNamespaces: this operation lists the namespaces on a kubernetes
- cluster
+# Examples
+
+## Kubernetes Namespaces Producer Examples
+
+- `listNamespaces`: this operation lists the namespaces on a
+ kubernetes cluster
@@ -34,9 +38,9 @@ related to Namespace events.
toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listNamespaces").
to("mock:result");
-This operation returns a List of namespaces from your cluster
+This operation returns a list of namespaces from your cluster
-- listNamespacesByLabels: this operation lists the namespaces by
+- `listNamespacesByLabels`: this operation lists the namespaces by
labels on a kubernetes cluster
@@ -53,10 +57,10 @@ This operation returns a List of namespaces from your cluster
toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listNamespacesByLabels").
to("mock:result");
-This operation returns a List of Namespaces from your cluster, using a
+This operation returns a list of namespaces from your cluster, using a
label selector (with key1 and key2, with value value1 and value2)
-# Kubernetes Namespaces Consumer Example
+## Kubernetes Namespaces Consumer Example
fromF("kubernetes-namespaces://%s?oauthToken=%s&namespace=default", host, authToken).process(new KubernertesProcessor()).to("mock:result");
public class KubernertesProcessor implements Processor {
diff --git a/camel-kubernetes-nodes.md b/camel-kubernetes-nodes.md
index 75957ad36e86a05fa0164d5d0dec064bde4dafc9..3a2055f8c77b0337fc91ad5eab25aa8ab109edbb 100644
--- a/camel-kubernetes-nodes.md
+++ b/camel-kubernetes-nodes.md
@@ -9,23 +9,27 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Node operations and a consumer to consume events
related to Node objects.
-# Supported producer operation
+# Usage
-- listNodes
+## Supported producer operation
-- listNodesByLabels
+- `listNodes`
-- getNode
+- `listNodesByLabels`
-- createNode
+- `getNode`
-- updateNode
+- `createNode`
-- deleteNode
+- `updateNode`
-# Kubernetes Nodes Producer Examples
+- `deleteNode`
-- listNodes: this operation lists the nodes on a kubernetes cluster
+# Examples
+
+## Kubernetes Nodes Producer Examples
+
+- `listNodes`: this operation lists the nodes on a kubernetes cluster
@@ -35,7 +39,7 @@ related to Node objects.
This operation returns a List of Nodes from your cluster
-- listNodesByLabels: this operation lists the nodes by labels on a
+- `listNodesByLabels`: this operation lists the nodes by labels on a
kubernetes cluster
@@ -52,10 +56,10 @@ This operation returns a List of Nodes from your cluster
toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels").
to("mock:result");
-This operation returns a List of Nodes from your cluster, using a label
+This operation returns a list of nodes from your cluster, using a label
selector (with key1 and key2, with value value1 and value2)
-# Kubernetes Nodes Consumer Example
+## Kubernetes Nodes Consumer Example
fromF("kubernetes-nodes://%s?oauthToken=%s&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result");
public class KubernertesProcessor implements Processor {
diff --git a/camel-kubernetes-persistent-volumes-claims.md b/camel-kubernetes-persistent-volumes-claims.md
index 6be044677fd5b0ac3ddf8e9edd3de0bf5f5be0b0..d8e84c4b58cc6ac6aff30971f446dfab99565652 100644
--- a/camel-kubernetes-persistent-volumes-claims.md
+++ b/camel-kubernetes-persistent-volumes-claims.md
@@ -6,25 +6,29 @@
The Kubernetes Persistent Volume Claim component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
-execute Kubernetes Persistent Volume Claims operations.
+execute Kubernetes Persistent Volume Claims (PVC) operations.
-# Supported producer operation
+# Usage
-- listPersistentVolumesClaims
+## Supported producer operation
-- listPersistentVolumesClaimsByLabels
+- `listPersistentVolumesClaims`
-- getPersistentVolumeClaim
+- `listPersistentVolumesClaimsByLabels`
-- createPersistentVolumeClaim
+- `getPersistentVolumeClaim`
-- updatePersistentVolumeClaim
+- `createPersistentVolumeClaim`
-- deletePersistentVolumeClaim
+- `updatePersistentVolumeClaim`
-# Kubernetes Persistent Volume Claims Producer Examples
+- `deletePersistentVolumeClaim`
-- listPersistentVolumesClaims: this operation lists the pvc on a
+# Example
+
+## Kubernetes Persistent Volume Claims Producer Examples
+
+- `listPersistentVolumesClaims`: this operation lists the PVCs on a
kubernetes cluster
@@ -33,10 +37,10 @@ execute Kubernetes Persistent Volume Claims operations.
toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims").
to("mock:result");
-This operation returns a List of pvc from your cluster
+This operation returns a list of PVCs from your cluster
-- listPersistentVolumesClaimsByLabels: this operation lists the pvc by
- labels on a kubernetes cluster
+- `listPersistentVolumesClaimsByLabels`: this operation lists the PVCs
+ by labels on a kubernetes cluster
@@ -52,7 +56,7 @@ This operation returns a List of pvc from your cluster
toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels").
to("mock:result");
-This operation returns a List of pvc from your cluster, using a label
+This operation returns a list of PVCs from your cluster using a label
selector (with key1 and key2, with value value1 and value2)
## Component Configurations
diff --git a/camel-kubernetes-persistent-volumes.md b/camel-kubernetes-persistent-volumes.md
index 4f644c3a731bfa436e56e722b262003a13e3bf0c..c202a89997e9b3c2179b7b757ca478d6fd92bf89 100644
--- a/camel-kubernetes-persistent-volumes.md
+++ b/camel-kubernetes-persistent-volumes.md
@@ -6,20 +6,24 @@
The Kubernetes Persistent Volume component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
-execute Kubernetes Persistent Volume operations.
+execute Kubernetes Persistent Volume (PV) operations.
-# Supported producer operation
+# Usage
-- listPersistentVolumes
+## Supported producer operation
-- listPersistentVolumesByLabels
+- `listPersistentVolumes`
-- getPersistentVolume
+- `listPersistentVolumesByLabels`
-# Kubernetes Persistent Volumes Producer Examples
+- `getPersistentVolume`
-- listPersistentVolumes: this operation lists the pv on a kubernetes
- cluster
+# Examples
+
+## Kubernetes Persistent Volumes Producer Examples
+
+- `listPersistentVolumes`: this operation lists the PVs on a
+ kubernetes cluster
@@ -27,10 +31,10 @@ execute Kubernetes Persistent Volume operations.
toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumes").
to("mock:result");
-This operation returns a List of pv from your cluster
+This operation returns a list of PVs from your cluster
-- listPersistentVolumesByLabels: this operation lists the pv by labels
- on a kubernetes cluster
+- `listPersistentVolumesByLabels`: this operation lists the PVs by
+ labels on a kubernetes cluster
@@ -46,7 +50,7 @@ This operation returns a List of pv from your cluster
toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesByLabels").
to("mock:result");
-This operation returns a List of pv from your cluster, using a label
+This operation returns a list of PVs from your cluster using a label
selector (with key1 and key2, with value value1 and value2)
## Component Configurations
diff --git a/camel-kubernetes-pods.md b/camel-kubernetes-pods.md
index 2cefdd8c85ecb81878874eb3c1890e56126922a2..a24dab92b6bae320870028619f0b6a9b8d727eeb 100644
--- a/camel-kubernetes-pods.md
+++ b/camel-kubernetes-pods.md
@@ -9,23 +9,27 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Pods operations and a consumer to consume events
related to Pod Objects.
-# Supported producer operation
+# Usage
-- listPods
+## Supported producer operation
-- listPodsByLabels
+- `listPods`
-- getPod
+- `listPodsByLabels`
-- createPod
+- `getPod`
-- updatePod
+- `createPod`
-- deletePod
+- `updatePod`
-# Kubernetes Pods Producer Examples
+- `deletePod`
-- listPods: this operation lists the pods on a kubernetes cluster
+# Examples
+
+## Kubernetes Pods Producer Examples
+
+- `listPods`: this operation lists the pods on a kubernetes cluster
@@ -33,9 +37,9 @@ related to Pod Objects.
toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods").
to("mock:result");
-This operation returns a List of Pods from your cluster
+This operation returns a list of pods from your cluster
-- listPodsByLabels: this operation lists the pods by labels on a
+- `listPodsByLabels`: this operation lists the pods by labels on a
kubernetes cluster
@@ -52,10 +56,10 @@ This operation returns a List of Pods from your cluster
toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels").
to("mock:result");
-This operation returns a List of Pods from your cluster, using a label
+This operation returns a list of pods from your cluster using a label
selector (with key1 and key2, with value value1 and value2)
-# Kubernetes Pods Consumer Example
+## Kubernetes Pods Consumer Example
fromF("kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result");
public class KubernertesProcessor implements Processor {
diff --git a/camel-kubernetes-replication-controllers.md b/camel-kubernetes-replication-controllers.md
index c7af1d805f177b65ce99039569c8303baa3c70f5..50bfbf7b83b7b92fec03ca0ad4ab2b34cf52b450 100644
--- a/camel-kubernetes-replication-controllers.md
+++ b/camel-kubernetes-replication-controllers.md
@@ -7,27 +7,31 @@
The Kubernetes Replication Controller component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Replication controller operations and a consumer to
-consume events related to Replication Controller objects.
+consume events related to Replication Controller (RC) objects.
-# Supported producer operation
+# Usage
-- listReplicationControllers
+## Supported producer operation
-- listReplicationControllersByLabels
+- `listReplicationControllers`
-- getReplicationController
+- `listReplicationControllersByLabels`
-- createReplicationController
+- `getReplicationController`
-- updateReplicationController
+- `createReplicationController`
-- deleteReplicationController
+- `updateReplicationController`
-- scaleReplicationController
+- `deleteReplicationController`
-# Kubernetes Replication Controllers Producer Examples
+- `scaleReplicationController`
-- listReplicationControllers: this operation lists the RCs on a
+# Examples
+
+## Kubernetes Replication Controllers Producer Examples
+
+- `listReplicationControllers`: this operation lists the RCs on a
kubernetes cluster
@@ -36,10 +40,10 @@ consume events related to Replication Controller objects.
toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllers").
to("mock:result");
-This operation returns a List of RCs from your cluster
+This operation returns a list of RCs from your cluster
-- listReplicationControllersByLabels: this operation lists the RCs by
- labels on a kubernetes cluster
+- `listReplicationControllersByLabels`: this operation lists the RCs
+ by labels on a kubernetes cluster
@@ -55,10 +59,10 @@ This operation returns a List of RCs from your cluster
toF("kubernetes-replication-controllers:///?kubernetesClient=#kubernetesClient&operation=listReplicationControllersByLabels").
to("mock:result");
-This operation returns a List of RCs from your cluster, using a label
+This operation returns a list of RCs from your cluster using a label
selector (with key1 and key2, with value value1 and value2)
-# Kubernetes Replication Controllers Consumer Example
+## Kubernetes Replication Controllers Consumer Example
fromF("kubernetes-replication-controllers://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result");
public class KubernertesProcessor implements Processor {
diff --git a/camel-kubernetes-resources-quota.md b/camel-kubernetes-resources-quota.md
index 1de863fd49ef38e0053b70ccc77681a96ed3e924..d960f7032ddaac76746c6689b622d92df9acf371 100644
--- a/camel-kubernetes-resources-quota.md
+++ b/camel-kubernetes-resources-quota.md
@@ -8,23 +8,27 @@ The Kubernetes Resources Quota component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Resource Quota operations.
-# Supported producer operation
+# Usage
-- listResourcesQuota
+## Supported producer operation
-- listResourcesQuotaByLabels
+- `listResourcesQuota`
-- getResourcesQuota
+- `listResourcesQuotaByLabels`
-- createResourcesQuota
+- `getResourcesQuota`
-- updateResourceQuota
+- `createResourcesQuota`
-- deleteResourcesQuota
+- `updateResourceQuota`
-# Kubernetes Resource Quota Producer Examples
+- `deleteResourcesQuota`
-- listResourcesQuota: this operation lists the Resource Quotas on a
+# Examples
+
+## Kubernetes Resource Quota Producer Examples
+
+- `listResourcesQuota`: this operation lists the resource quotas on a
kubernetes cluster
@@ -33,10 +37,10 @@ execute Kubernetes Resource Quota operations.
toF("kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuota").
to("mock:result");
-This operation returns a List of Resource Quotas from your cluster
+This operation returns a list of resource quotas from your cluster
-- listResourcesQuotaByLabels: this operation lists the Resource Quotas
- by labels on a kubernetes cluster
+- `listResourcesQuotaByLabels`: this operation lists the resource
+ quotas by labels on a kubernetes cluster
@@ -52,9 +56,8 @@ This operation returns a List of Resource Quotas from your cluster
toF("kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuotaByLabels").
to("mock:result");
-This operation returns a List of Resource Quotas from your cluster,
-using a label selector (with key1 and key2, with value value1 and
-value2)
+This operation returns a list of resource quotas from your cluster using
+a label selector (with key1 and key2, with value value1 and value2)
## Component Configurations
diff --git a/camel-kubernetes-secrets.md b/camel-kubernetes-secrets.md
index e0fae2a2e9756226bd84c08e7c521c8578aa490e..17354b6f8fbacc4ebe68c9ed627d125e13d62027 100644
--- a/camel-kubernetes-secrets.md
+++ b/camel-kubernetes-secrets.md
@@ -8,23 +8,27 @@ The Kubernetes Secrets component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Secrets operations.
-# Supported producer operation
+# Usage
-- listSecrets
+## Supported producer operation
-- listSecretsByLabels
+- `listSecrets`
-- getSecret
+- `listSecretsByLabels`
-- createSecret
+- `getSecret`
-- updateSecret
+- `createSecret`
-- deleteSecret
+- `updateSecret`
-# Kubernetes Secrets Producer Examples
+- `deleteSecret`
-- listSecrets: this operation lists the secrets on a kubernetes
+# Example
+
+## Kubernetes Secrets Producer Examples
+
+- `listSecrets`: this operation lists the secrets on a kubernetes
cluster
@@ -33,10 +37,10 @@ execute Kubernetes Secrets operations.
toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecrets").
to("mock:result");
-This operation returns a List of secrets from your cluster
+This operation returns a list of secrets from your cluster
-- listSecretsByLabels: this operation lists the Secrets by labels on a
- kubernetes cluster
+- `listSecretsByLabels`: this operation lists the secrets by labels on
+ a kubernetes cluster
@@ -52,8 +56,101 @@ This operation returns a List of secrets from your cluster
toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecretsByLabels").
to("mock:result");
-This operation returns a List of Secrets from your cluster, using a
-label selector (with key1 and key2, with value value1 and value2)
+This operation returns a list of secrets from your cluster using a label
+selector (with key1 and key2, with value value1 and value2)
+
+# Using secrets properties function with Kubernetes
+
+The `camel-kubernetes` component includes the following secret-related
+functions:
+
+- `secret` - A function to lookup the string property from Kubernetes
+ Secrets.
+
+- `secret-binary` - A function to lookup the binary property from
+ Kubernetes Secrets.
+
+Camel reads Secrets from the Kubernetes API Server. And when RBAC is
+enabled on the cluster, the ServiceAccount that is used to run the
+application needs to have the proper permissions for such access.
+
+Before the Kubernetes property placeholder functions can be used, they
+need to be configured with either (or both) of the following:
+
+- path - A *mount path* that must be mounted to the running pod, to
+ load the configmaps or secrets from local disk.
+
+- kubernetes client - **Autowired** An
+ `io.fabric8.kubernetes.client.KubernetesClient` instance to use for
+ connecting to the Kubernetes API server.
+
+Camel will first use the *mount paths* (if configured) for the lookup,
+and then fall back to using the `KubernetesClient`.
+
+A secret named `mydb` could contain the username and password used to
+connect to a database, such as:
+
+ myhost = killroy
+ myport = 5555
+ myuser = scott
+ mypass = tiger
+
+This can be used in Camel, for example, with the PostgreSQL Sink Kamelet:
+
+
+
+
+
+ { "username":"oscerd", "city":"Rome"}
+
+
+
+
+
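As a sketch, a YAML DSL route could feed that JSON body to the Kamelet while resolving the connection details with the `secret` function (the timer endpoint, the `databaseName` value, the `query`, and the exact Kamelet parameter names are assumptions for illustration):

```yaml
- from:
    uri: "timer:insert?repeatCount=1"
    steps:
      - setBody:
          constant: '{ "username":"oscerd", "city":"Rome"}'
      - to:
          uri: "kamelet:postgresql-sink"
          parameters:
            serverName: "{{secret:mydb/myhost}}"
            serverPort: "{{secret:mydb/myport}}"
            username: "{{secret:mydb/myuser}}"
            password: "{{secret:mydb/mypass}}"
            databaseName: "mydatabase"
            query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)"
```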
+The `postgresql-sink` Kamelet can also be configured in
+`application.properties`, which reduces the configuration in the route
+above:
+
+    camel.component.kamelet.postgresql-sink.serverName={{secret:mydb/myhost}}
+ camel.component.kamelet.postgresql-sink.serverPort={{secret:mydb/myport}}
+ camel.component.kamelet.postgresql-sink.username={{secret:mydb/myuser}}
+ camel.component.kamelet.postgresql-sink.password={{secret:mydb/mypass}}
+
+Which reduces the route to:
+
+
+
+
+
+ { "username":"oscerd", "city":"Rome"}
+
+
+
+
+
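With those properties in place, a sketch of the reduced route in YAML DSL (the timer endpoint is illustrative):

```yaml
- from:
    uri: "timer:insert?repeatCount=1"
    steps:
      - setBody:
          constant: '{ "username":"oscerd", "city":"Rome"}'
      - to: "kamelet:postgresql-sink"
```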
+# Automatic Camel context reloading on Secret Refresh
+
+Reloading the Camel context on a secret refresh can be enabled by
+specifying the following properties:
+
+ camel.vault.kubernetes.refreshEnabled=true
+ camel.vault.kubernetes.secrets=Secret
+ camel.main.context-reload-enabled = true
+
+where `camel.vault.kubernetes.refreshEnabled` enables the automatic
+context reload, and `camel.vault.kubernetes.secrets` is a regex or a
+comma-separated list of the secrets to track for updates.
+
+Whenever a secret listed in the property is updated in the namespace of
+the running application, the Camel context is reloaded, refreshing the
+secret value.
## Component Configurations
diff --git a/camel-kubernetes-service-accounts.md b/camel-kubernetes-service-accounts.md
index 8e4baeeb954ba6d65ca084b7506fa372a1233102..e1f2f797cc9a61afe967f786bcf9a4b6c106855c 100644
--- a/camel-kubernetes-service-accounts.md
+++ b/camel-kubernetes-service-accounts.md
@@ -6,25 +6,29 @@
The Kubernetes Service Account component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
-execute Kubernetes Service Account operations.
+execute Kubernetes Service Account (SA) operations.
-# Supported producer operation
+# Usage
-- listServiceAccounts
+## Supported producer operation
-- listServiceAccountsByLabels
+- `listServiceAccounts`
-- getServiceAccount
+- `listServiceAccountsByLabels`
-- createServiceAccount
+- `getServiceAccount`
-- updateServiceAccount
+- `createServiceAccount`
-- deleteServiceAccount
+- `updateServiceAccount`
-# Kubernetes ServiceAccounts Produce Examples
+- `deleteServiceAccount`
-- listServiceAccounts: this operation lists the sa on a kubernetes
+# Examples
+
+## Kubernetes ServiceAccounts Producer Examples
+
+- `listServiceAccounts`: this operation lists the SAs on a kubernetes
cluster
@@ -33,10 +37,10 @@ execute Kubernetes Service Account operations.
toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts").
to("mock:result");
-This operation returns a List of services from your cluster
+This operation returns a list of service accounts from your cluster
-- listServiceAccountsByLabels: this operation lists the sa by labels
- on a kubernetes cluster
+- `listServiceAccountsByLabels`: this operation lists the SAs by
+ labels on a kubernetes cluster
@@ -52,7 +56,7 @@ This operation returns a List of services from your cluster
toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels").
to("mock:result");
-This operation returns a List of Services from your cluster, using a
+This operation returns a list of service accounts from your cluster using a
label selector (with key1 and key2, with value value1 and value2)
## Component Configurations
diff --git a/camel-kubernetes-services.md b/camel-kubernetes-services.md
index a4ce0b7e224db456f99d40a85ecc52ba305720c4..33fbc9ba17e1f3066922835023d32a26b7d57002 100644
--- a/camel-kubernetes-services.md
+++ b/camel-kubernetes-services.md
@@ -9,21 +9,25 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Kubernetes Service operations and a consumer to consume events
related to Service objects.
-# Supported producer operation
+# Usage
-- listServices
+## Supported producer operation
-- listServicesByLabels
+- `listServices`
-- getService
+- `listServicesByLabels`
-- createService
+- `getService`
-- deleteService
+- `createService`
-# Kubernetes Services Producer Examples
+- `deleteService`
-- listServices: this operation list the services on a kubernetes
+# Examples
+
+## Kubernetes Services Producer Examples
+
+- `listServices`: this operation lists the services on a kubernetes
cluster
@@ -32,10 +36,10 @@ related to Service objects.
toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServices").
to("mock:result");
-This operation return a List of services from your cluster
+This operation returns a list of services from your cluster
-- listServicesByLabels: this operation list the deployments by labels
- on a kubernetes cluster
+- `listServicesByLabels`: this operation lists the services by
+  labels on a kubernetes cluster
@@ -51,10 +55,10 @@ This operation return a List of services from your cluster
toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServicesByLabels").
to("mock:result");
-This operation return a List of Services from your cluster, using a
+This operation returns a list of services from your cluster using a
label selector (with key1 and key2, with value value1 and value2)
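The label-selector semantics above (a resource matches only when it carries every key/value pair of the selector) can be sketched in plain Java. The `Map`-based shapes below are illustrative only, not the Kubernetes client API:

```java
import java.util.List;
import java.util.Map;

public class LabelSelectorSketch {
    // A resource matches when it carries every key/value pair of the selector.
    static boolean matches(Map<String, String> labels, Map<String, String> selector) {
        return selector.entrySet().stream()
                .allMatch(e -> e.getValue().equals(labels.get(e.getKey())));
    }

    public static void main(String[] args) {
        Map<String, String> selector = Map.of("key1", "value1", "key2", "value2");
        List<Map<String, String>> serviceLabels = List.of(
                Map.of("key1", "value1", "key2", "value2", "app", "web"),
                Map.of("key1", "value1"));
        long matching = serviceLabels.stream()
                .filter(labels -> matches(labels, selector)).count();
        // Only the first service carries both selector labels.
        System.out.println(matching + " service(s) matched");
    }
}
```

Extra labels on a resource do not prevent a match; only the selector's own pairs are checked.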
-# Kubernetes Services Consumer Example
+## Kubernetes Services Consumer Example
fromF("kubernetes-services://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result");
diff --git a/camel-kubernetes-summary.md b/camel-kubernetes-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed222ebb90b19944b6d785205d0fecad7adfb55d
--- /dev/null
+++ b/camel-kubernetes-summary.md
@@ -0,0 +1,55 @@
+# Kubernetes-summary.md
+
+**Since Camel 2.17**
+
+The Kubernetes components integrate your application with Kubernetes
+standalone or on top of OpenShift.
+
+# Kubernetes components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Kubernetes*,descriptionformat=description\]
+
+# Installation
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-kubernetes</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Usage
+
+## Producer examples
+
+Here are some examples of using camel-kubernetes producers.
+
+### Create a pod
+
+ from("direct:createPod")
+ .toF("kubernetes-pods://%s?oauthToken=%s&operation=createPod", host, authToken);
+
+By using the `KubernetesConstants.KUBERNETES_POD_SPEC` header, you can
+specify your PodSpec and pass it to this operation.
+
+### Delete a pod
+
+ from("direct:deletePod")
+ .toF("kubernetes-pods://%s?oauthToken=%s&operation=deletePod", host, authToken);
+
+By using the `KubernetesConstants.KUBERNETES_POD_NAME` header, you can
+specify your Pod name and pass it to this operation.
+
+# Using Kubernetes ConfigMaps and Secrets
+
+The `camel-kubernetes` component also provides [Property
+Placeholder](#manual:ROOT:using-propertyplaceholder.adoc) functions that
+load the property values from Kubernetes *ConfigMaps* or *Secrets*.
+
+For more information, see [Property
+Placeholder](#manual:ROOT:using-propertyplaceholder.adoc).
diff --git a/camel-kudu.md b/camel-kudu.md
index c55a693df5b8d254a34d60752d53893e702d41bc..f94c30d47d1f87255435e727f02d25aff376acca 100644
--- a/camel-kudu.md
+++ b/camel-kudu.md
@@ -23,24 +23,25 @@ for this component:
You must have a valid Kudu instance running. More information is
available at [Apache Kudu](https://kudu.apache.org/).
-# Input Body formats
+# Usage
-## Insert, delete, update, and upsert
+## Input Body formats
-The input body format has to be a java.util.Map\.
-This map will represent a row of the table whose elements are columns,
-where the key is the column name and the value is the value of the
-column.
+### Insert, delete, update, and upsert
-# Output Body formats
+The input body format has to be a `java.util.Map<String, Object>`. This
+map will represent a row of the table whose elements are columns, where
+the key is the column name and the value is the value of the column.
-## Scan
+## Output Body formats
+
+### Scan
The output body format will be a
-java.util.List\\>. Each element
-of the list will be a different row of the table. Each row is a
-Map\ whose elements will be each pair of column
-name and column value for that row.
+`java.util.List<java.util.Map<String, Object>>`. Each element of the
+list will be a different row of the table. Each row is a
+`Map<String, Object>` whose elements will be each pair of column name
+and column value for that row.
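As a rough illustration of the body shapes described above (the column names are invented for the example, not taken from any real table):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KuduBodySketch {
    // Summarize a scan result: number of rows and columns in the first row.
    static String describe(List<Map<String, Object>> scanResult) {
        return scanResult.size() + " row(s), " + scanResult.get(0).size() + " column(s)";
    }

    public static void main(String[] args) {
        // Input body for insert/update/upsert/delete: one row as a
        // column-name -> value map.
        Map<String, Object> row = new HashMap<>();
        row.put("id", 1);              // hypothetical column names
        row.put("name", "Alice");

        // Output body of a scan: a list of such row maps.
        List<Map<String, Object>> scanResult = new ArrayList<>();
        scanResult.add(row);

        System.out.println(describe(scanResult));
    }
}
```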
## Component Configurations
diff --git a/camel-langchain4j-chat.md b/camel-langchain4j-chat.md
index 6abeac425ca5ccc41c00f53924a209ea81ef4c94..dab0d64a1812469cc71103cebf04872bb0b0fe7a 100644
--- a/camel-langchain4j-chat.md
+++ b/camel-langchain4j-chat.md
@@ -2,10 +2,11 @@
**Since Camel 4.5**
-**Only producer is supported**
+**Both producer and consumer are supported**
-The LangChain4j Chat Component allows you to integrate with any LLM
-supported by [LangChain4j](https://github.com/langchain4j/langchain4j).
+The LangChain4j Chat Component allows you to integrate with any Large
+Language Model (LLM) supported by
+[LangChain4j](https://github.com/langchain4j/langchain4j).
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -23,26 +24,31 @@ for this component:
Where **chatId** can be any string to uniquely identify the endpoint
-# Using a specific Chat Model
+# Usage
+
+## Using a specific Chat Model
The Camel LangChain4j chat component provides an abstraction for
interacting with various types of Large Language Models (LLMs) supported
by [LangChain4j](https://github.com/langchain4j/langchain4j).
-To integrate with a specific Large Language Model, users should follow
-these steps:
+### Integrating with a specific LLM
-## Example of Integrating with OpenAI
+To integrate with a specific LLM, users should follow the steps
+described below, which explain how to integrate with OpenAI.
Add the dependency for LangChain4j OpenAI support:
+**Example**
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-open-ai</artifactId>
+        <version>x.x.x</version>
+    </dependency>
-Init the OpenAI Chat Language Model, and add it to the Camel Registry:
+Initialize the OpenAI Chat Language Model, and add it to the Camel
+Registry:
ChatLanguageModel model = OpenAiChatModel.builder()
.apiKey(openApiKey)
@@ -62,19 +68,19 @@ dependency, replace the `langchain4j-open-ai` dependency with the
appropriate dependency for the desired model. Update the initialization
parameters accordingly in the code snippet provided above.
-# Send a prompt with variables
+## Send a prompt with variables
To send a prompt with variables, use the Operation type
`LangChain4jChatOperations.CHAT_SINGLE_MESSAGE_WITH_PROMPT`. This
operation allows you to send a single prompt message with dynamic
variables, which will be replaced with values provided in the request.
-Example of route :
+**Route example:**
from("direct:chat")
.to("langchain4j-chat:test?chatModel=#chatModel&chatOperation=CHAT_SINGLE_MESSAGE_WITH_PROMPT")
-Example of usage:
+**Usage example:**
var promptTemplate = "Create a recipe for a {{dishType}} with the following ingredients: {{ingredients}}";
@@ -85,7 +91,7 @@ Example of usage:
String response = template.requestBodyAndHeader("direct:chat", variables,
LangChain4jChat.Headers.PROMPT_TEMPLATE, promptTemplate, String.class);
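The `{{variable}}` substitution performed for `CHAT_SINGLE_MESSAGE_WITH_PROMPT` can be mimicked in plain Java; this is a sketch of the templating idea only, not the LangChain4j implementation:

```java
import java.util.Map;

public class PromptTemplateSketch {
    // Replace every {{name}} placeholder with its value from the variables map.
    static String render(String template, Map<String, Object> variables) {
        String result = template;
        for (Map.Entry<String, Object> e : variables.entrySet()) {
            result = result.replace("{{" + e.getKey() + "}}", String.valueOf(e.getValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "Create a recipe for a {{dishType}} with the following ingredients: {{ingredients}}";
        Map<String, Object> variables = Map.of(
                "dishType", "oven dish",
                "ingredients", "potato, tomato");
        System.out.println(render(template, variables));
    }
}
```

Placeholders with no matching variable are simply left in place, which mirrors why all referenced variables must be provided in the request.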
-# Chat with history
+## Chat with history
You can send a new prompt along with the chat message history by passing
all messages in a list of type
@@ -94,12 +100,12 @@ all messages in a list of type
allows you to continue the conversation with the context of previous
messages.
-Example of route :
+**Route example:**
from("direct:chat")
.to("langchain4j-chat:test?chatModel=#chatModel&chatOperation=CHAT_MULTIPLE_MESSAGES")
-Example of usage:
+**Usage example:**
List<ChatMessage> messages = new ArrayList<>();
messages.add(new SystemMessage("You are asked to provide recommendations for a restaurant based on user reviews."));
@@ -107,7 +113,10 @@ Example of usage:
String response = template.requestBody("direct:send-multiple", messages, String.class);
-# Chat with Tool
+## Chat with Tool
+
+As of Camel 4.8.0, this feature is deprecated. Users should use the
+[LangChain4j Tools component](#langchain4j-tools-component.adoc).
The Camel langchain4j-chat component, as a consumer, can be used to
implement a LangChain tool. Right now, tools are supported only via the
@@ -115,12 +124,12 @@ OpenAiChatModel backed by OpenAI APIs.
Tool Input parameter can be defined as an Endpoint multiValue option in
the form of `parameter.=`, or via the endpoint option
-camelToolParameter for a programmatic approach. The parameters can be
+`camelToolParameter` for a programmatic approach. The parameters can be
found as headers in the consumer route, in particular, if you define
`parameter.userId=5`, in the consumer route `${header.userId}` can be
used.
-Example of a producer and a consumer:
+**Producer and consumer example:**
from("direct:test")
.to("langchain4j-chat:test1?chatOperation=CHAT_MULTIPLE_MESSAGES");
@@ -128,7 +137,7 @@ Example of a producer and a consumer:
from("langchain4j-chat:test1?description=Query user database by number&parameter.number=integer")
.to("sql:SELECT name FROM users WHERE id = :#number");
-Example of usage:
+**Usage example:**
List<ChatMessage> messages = new ArrayList<>();
messages.add(new SystemMessage("""
@@ -140,6 +149,72 @@ Example of usage:
Exchange message = fluentTemplate.to("direct:test").withBody(messages).request(Exchange.class);
+## Retrieval Augmented Generation (RAG)
+
+Use the RAG feature to enrich exchanges with data retrieved from any
+type of Camel endpoint. The feature is compatible with all LangChain4j
+Chat operations and is ideal for orchestrating the RAG workflow,
+utilizing the extensive library of components and Enterprise Integration
+Patterns (EIPs) available in Apache Camel.
+
+There are two ways to use the RAG feature:
+
+### Using RAG with Content Enricher and LangChain4jRagAggregatorStrategy
+
+Enrich the exchange by retrieving a list of strings using any Camel
+producer. The `LangChain4jRagAggregatorStrategy` is specifically
+designed to augment data within LangChain4j chat producers.
+
+**Usage example:**
+
+ // Create an instance of the RAG aggregator strategy
+ LangChain4jRagAggregatorStrategy aggregatorStrategy = new LangChain4jRagAggregatorStrategy();
+
+ from("direct:test")
+ .enrich("direct:rag", aggregatorStrategy)
+ .to("langchain4j-chat:test1?chatOperation=CHAT_SIMPLE_MESSAGE");
+
+ from("direct:rag")
+ .process(exchange -> {
+ List<String> augmentedData = List.of("data 1", "data 2");
+ exchange.getIn().setBody(augmentedData);
+ });
+
+This method leverages a separate Camel route to fetch and process the
+augmented data.
+
+It is possible to enrich the message from multiple sources within the
+same exchange.
+
+**Usage example:**
+
+ // Create an instance of the RAG aggregator strategy
+ LangChain4jRagAggregatorStrategy aggregatorStrategy = new LangChain4jRagAggregatorStrategy();
+
+ from("direct:test")
+ .enrich("direct:rag-from-source-1", aggregatorStrategy)
+ .enrich("direct:rag-from-source-2", aggregatorStrategy)
+ .to("langchain4j-chat:test1?chatOperation=CHAT_SIMPLE_MESSAGE");
+
+### Using RAG with headers
+
+Directly add augmented data into the header. This method is particularly
+efficient for straightforward use cases where the augmented data is
+predefined or static. You must add augmented data as a List of
+`dev.langchain4j.rag.content.Content` directly inside the header
+`CamelLangChain4jChatAugmentedData`.
+
+**Usage example:**
+
+ import dev.langchain4j.rag.content.Content;
+
+ ...
+
+ Content augmentedContent = new Content("data test");
+ List<Content> contents = List.of(augmentedContent);
+
+ String response = template.requestBodyAndHeader("direct:send-multiple", messages, LangChain4jChat.Headers.AUGMENTED_DATA, contents, String.class);
+
## Component Configurations
@@ -147,6 +222,7 @@ Example of usage:
|---|---|---|---|
|chatOperation|Operation in case of Endpoint of type CHAT. The value is one of the values of org.apache.camel.component.langchain4j.chat.LangChain4jChatOperations|CHAT\_SINGLE\_MESSAGE|object|
|configuration|The configuration.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|chatModel|Chat Language Model of type dev.langchain4j.model.chat.ChatLanguageModel||object|
@@ -158,5 +234,11 @@ Example of usage:
|---|---|---|---|
|chatId|The id||string|
|chatOperation|Operation in case of Endpoint of type CHAT. The value is one of the values of org.apache.camel.component.langchain4j.chat.LangChain4jChatOperations|CHAT\_SINGLE\_MESSAGE|object|
+|description|Tool description||string|
+|parameters|List of Tool parameters in the form of parameter.\<name\>=\<type\>||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|camelToolParameter|Tool's Camel Parameters, programmatically define Tool description and parameters||object|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|chatModel|Chat Language Model of type dev.langchain4j.model.chat.ChatLanguageModel||object|
diff --git a/camel-langchain4j-tokenizer.md b/camel-langchain4j-tokenizer.md
new file mode 100644
index 0000000000000000000000000000000000000000..f6232dad92541f8ab0b0833483646d721ddc52b4
--- /dev/null
+++ b/camel-langchain4j-tokenizer.md
@@ -0,0 +1,101 @@
+# Langchain4j-tokenizer
+
+**Since Camel 4.8**
+
+The LangChain4j tokenizer component provides support to tokenize (chunk)
+larger blocks of texts into text segments that can be used when
+interacting with LLMs. Tokenization is particularly helpful when used
+with [vector databases](https://en.wikipedia.org/wiki/Vector_database)
+to provide better and more contextual search results for
+[retrieval-augmented generation
+(RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation).
+
+This component uses the [LangChain4j document
+splitter](https://docs.langchain4j.dev/tutorials/rag/#document-splitter)
+to handle chunking.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-langchain4j-tokenizer</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+# Usage
+
+## Chunking DSL
+
+The tokenization process is done in route, using a DSL that handles the
+parameters of the tokenization:
+
+Java
+
+    from("direct:start")
+        .tokenize(tokenizer()
+            .byParagraph()
+            .maxTokens(1024)
+            .maxOverlap(10)
+            .using(LangChain4jTokenizerDefinition.TokenizerType.OPEN_AI)
+            .end())
+        .split().body()
+        .to("mock:result");
+
+The tokenization creates a composite message (i.e., an array of
+Strings). This composite message can then be split using the [Split
+EIP](#eips:split-eip.adoc) so that each text segment is sent separately
+to an endpoint. Alternatively, the contents of the composite message may
+be passed through a processor so that invalid data is filtered.
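A minimal sketch of the chunking idea, using a word-count budget with overlap between consecutive segments. Real document splitters count model tokens and respect paragraph or sentence boundaries, so this is only an approximation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkingSketch {
    // Naive chunker: each segment holds at most maxWords words and
    // repeats the last `overlap` words of the previous segment.
    static List<String> chunk(String text, int maxWords, int overlap) {
        String[] words = text.split("\\s+");
        List<String> segments = new ArrayList<>();
        for (int start = 0; start < words.length; start += maxWords - overlap) {
            int end = Math.min(start + maxWords, words.length);
            segments.add(String.join(" ", Arrays.copyOfRange(words, start, end)));
            if (end == words.length) {
                break;
            }
        }
        return segments;
    }

    public static void main(String[] args) {
        // Overlap of 1 word between consecutive segments.
        System.out.println(chunk("one two three four five six seven eight", 4, 1));
    }
}
```

The overlap keeps a little shared context between neighbouring segments, which is the same purpose `maxOverlap` serves in the DSL above.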
+
+## Supported Splitters
+
+The following types of splitters are supported:
+
+- By paragraph: using the DSL `tokenizer().byParagraph()`
+
+- By sentence: using the DSL `tokenizer().bySentence()`
+
+- By word: using the DSL `tokenizer().byWord()`
+
+- By line: using the DSL `tokenizer().byLine()`
+
+- By character: using the DSL `tokenizer().byCharacter()`
+
+## Supported Tokenizers
+
+The following tokenizers are supported:
+
+- OpenAI: using `LangChain4jTokenizerDefinition.TokenizerType.OPEN_AI`
+
+- Azure: using `LangChain4jTokenizerDefinition.TokenizerType.AZURE`
+
+- Qwen: using `LangChain4jTokenizerDefinition.TokenizerType.QWEN`
+
+The application must provide the specific implementation of the
+tokenizer from LangChain4j. At this moment, they are:
+
+**Open AI**
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-open-ai</artifactId>
+        <version>${langchain4j-version}</version>
+    </dependency>
+
+**Azure**
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-azure-open-ai</artifactId>
+        <version>${langchain4j-version}</version>
+    </dependency>
+
+**Qwen**
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-dashscope</artifactId>
+        <version>${langchain4j-version}</version>
+    </dependency>
+
+## Component Configurations
+
+There are no configurations for this component.
+
+## Endpoint Configurations
+
+There are no configurations for this component.
diff --git a/camel-langchain4j-tools.md b/camel-langchain4j-tools.md
new file mode 100644
index 0000000000000000000000000000000000000000..e94a18ad6e65a188b1cd7df21894a7dca2a4ae53
--- /dev/null
+++ b/camel-langchain4j-tools.md
@@ -0,0 +1,174 @@
+# Langchain4j-tools
+
+**Since Camel 4.8**
+
+**Both producer and consumer are supported**
+
+The LangChain4j Tools Component allows you to use function calling
+features from Large Language Models (LLMs) supported by
+[LangChain4j](https://github.com/langchain4j/langchain4j).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-langchain4j-tools</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+# URI format
+
+**Producer**
+
+ langchain4j-tools:toolSet[?options]
+
+**Consumer**
+
+ langchain4j-tools:toolSet[?options]
+
+Where **toolSet** can be any string to uniquely identify the endpoint.
+
+# Usage
+
+This component helps to use function-calling features from LLMs so that
+models can decide what functions (routes, in the case of Camel) can be
+called (i.e., routed).
+
+Consider, for instance, two consumer routes capable of querying a user
+database by user ID or by social security number (SSN).
+
+**Queries user by ID**
+
+ from("langchain4j-tools:userInfo?tags=users&description=Query database by user ID")
+ .to("sql:SELECT name FROM users WHERE id = :#number");
+
+**Queries user by SSN**
+
+ from("langchain4j-tools:userInfo?tags=users&description=Query database by user social security ID")
+ .to("sql:SELECT name FROM users WHERE ssn = :#ssn");
+
+Now, consider a producer route that receives unstructured data as input.
+Such a route could consume this data, pass it to an LLM with
+function-calling capabilities (such as
+[llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B),
+[Granite Code 20b function calling,
+etc](https://huggingface.co/ibm-granite/granite-20b-functioncalling))
+and have the model decide which route to call.
+
+Such a route could receive questions in English such as:
+
+- *"What is the name of the user with user ID 1?"*
+
+- *"What is the name of the user with SSN 34.400.96?"*
+
+**Produce**
+
+ from(source)
+ .to("langchain4j-tool:userInfo?tags=users");
+
+## Tool Tags
+
+Consumer routes must define tags that group them
+[together](https://en.wikipedia.org/wiki/Set_theory). The aforementioned
+routes would have the `users` tag. The `users` tag has two routes:
+`queryById` and `queryBySSN`.
+
+## Parameters
+
+The Tool Input parameter can be defined as an Endpoint multiValue option
+in the form of `parameter.<name>=<type>`, or via the endpoint option
+`camelToolParameter` for a programmatic approach. The parameters can be
+found as headers in the consumer route, in particular, if you define
+`parameter.userId=5`, in the consumer route `${header.userId}` can be
+used.
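How such `parameter.*` endpoint options could map to header names can be sketched as follows; this is a plain-Java illustration of the mapping, not the component's actual parsing code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ToolParameterSketch {
    // Turn endpoint options like "parameter.userId=integer" into a
    // header-name -> declared-type map, mirroring how the consumer
    // route later reads ${header.userId}.
    static Map<String, String> parseParameters(Map<String, String> endpointOptions) {
        Map<String, String> parameters = new LinkedHashMap<>();
        endpointOptions.forEach((key, value) -> {
            if (key.startsWith("parameter.")) {
                parameters.put(key.substring("parameter.".length()), value);
            }
        });
        return parameters;
    }

    public static void main(String[] args) {
        Map<String, String> options = Map.of(
                "tags", "users",
                "parameter.userId", "integer");
        System.out.println(parseParameters(options));
    }
}
```

Non-`parameter.*` options (such as `tags`) are left out of the map, which is why only the declared parameters surface as headers in the consumer route.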
+
+**Producer and consumer example:**
+
+ from("direct:test")
+ .to("langchain4j-tools:test1?tags=users");
+
+ from("langchain4j-tools:test1?tags=users&description=Query user database by user ID&parameter.userId=integer")
+ .to("sql:SELECT name FROM users WHERE id = :#userId");
+
+**Usage example:**
+
+ List<ChatMessage> messages = new ArrayList<>();
+ messages.add(new SystemMessage("""
+ You provide information about specific user name querying the database given a number.
+ """));
+ messages.add(new UserMessage("""
+ What is the name of the user 1?
+ """));
+
+ Exchange message = fluentTemplate.to("direct:test").withBody(messages).request(Exchange.class);
+
+## Using a specific Model
+
+The Camel LangChain4j tools component provides an abstraction for
+interacting with various types of Large Language Models (LLMs) supported
+by [LangChain4j](https://github.com/langchain4j/langchain4j).
+
+### Integrating with a specific LLM
+
+To integrate with a specific LLM, users should follow the steps
+described below, which explain how to integrate with OpenAI.
+
+Add the dependency for LangChain4j OpenAI support:
+
+**Example**
+
+
+    <dependency>
+        <groupId>dev.langchain4j</groupId>
+        <artifactId>langchain4j-open-ai</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+Initialize the OpenAI Chat Language Model, and add it to the Camel
+Registry:
+
+ ChatLanguageModel model = OpenAiChatModel.builder()
+ .apiKey("NO_API_KEY")
+ .modelName("llama3.1:latest")
+ .temperature(0.0)
+ .timeout(ofSeconds(60000))
+ .build();
+ context.getRegistry().bind("chatModel", model);
+
+Use the model in the Camel LangChain4j Tools Producer:
+
+ from("direct:chat")
+ .to("langchain4j-tools:test?tags=users&chatModel=#chatModel");
+
+To switch to another Large Language Model and its corresponding
+dependency, replace the `langchain4j-open-ai` dependency with the
+appropriate dependency for the desired model. Update the initialization
+parameters accordingly in the code snippet provided above.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|configuration|The configuration.||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|chatModel|Chat Language Model of type dev.langchain4j.model.chat.ChatLanguageModel||object|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|toolId|The tool name||string|
+|tags|The tags for the tools||string|
+|description|Tool description||string|
+|parameters|List of Tool parameters in the form of parameter.\<name\>=\<type\>||object|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|camelToolParameter|Tool's Camel Parameters, programmatically define Tool description and parameters||object|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|chatModel|Chat Language Model of type dev.langchain4j.model.chat.ChatLanguageModel||object|
diff --git a/camel-langchain4j-web-search.md b/camel-langchain4j-web-search.md
index 91febb0c388f00429307594ad0a8167d6968433d..207d34b0473b67a51e751d2cfc3717f6e2a1b6a8 100644
--- a/camel-langchain4j-web-search.md
+++ b/camel-langchain4j-web-search.md
@@ -24,16 +24,17 @@ for this component:
Where **searchId** can be any string to uniquely identify the endpoint
-# Using a specific Web Search Engine
+# Usage
+
+## Using a specific Web Search Engine
The Camel LangChain4j web search component provides an abstraction for
interacting with various types of Web Search Engines supported by
[LangChain4j](https://github.com/langchain4j/langchain4j).
-To integrate with a specific Web Search Engine, users should follow
-these steps:
-
-## Example of integrating with Tavily
+To integrate with a specific Web Search Engine, users should follow the
+steps described below, which explain how to integrate with
+[Tavily](https://tavily.com/).
Add the dependency for LangChain4j Tavily Web Search Engine support:
@@ -43,10 +44,11 @@ Add the dependency for LangChain4j Tavily Web Search Engine support :
x.x.x
-Init the Tavily Web Search Engine, and add it to the Camel Registry:
-Initialize the Tavily Web Search Engine, and bind it to the Camel
+Initialize the Web Search Engine instance, and bind it to the Camel
Registry:
+**Example:**
+
@BindToRegistry("web-search-engine")
WebSearchEngine tavilyWebSearchEngine = TavilyWebSearchEngine.builder()
.apiKey(tavilyApiKey)
@@ -57,7 +59,7 @@ The web search engine will be autowired automatically if its bound name
is `web-search-engine`. Otherwise, it should be added as a configured
parameter to the Camel route.
-Example of route:
+**Example:**
from("direct:web-search")
.to("langchain4j-web-search:test?webSearchEngine=#web-search-engine-test")
@@ -68,36 +70,39 @@ appropriate dependency for the desired web search engine. Update the
initialization parameters accordingly in the code snippet provided
above.
-# Customizing Web Search Results
+## Customizing Web Search Results
By default, the `maxResults` property is set to 1. You can adjust this
value to retrieve a list of results.
-## Retrieving single result or list of strings
+### Retrieving a single result or a list of strings
When `maxResults` is set to 1, you can by default retrieve
-the content as a single string. Example:
+the content as a single string.
+
+**Example:**
String response = template.requestBody("langchain4j-web-search:test", "Who won the European Cup in 2024?", String.class);
When `maxResults` is greater than 1, you can retrieve a list of strings.
-Example:
+
+**Example:**
List<String> responses = template.requestBody("langchain4j-web-search:test?maxResults=3", "Who won the European Cup in 2024?", List.class);
-## Retrieve different types of Results
+## Retrieving different types of Results
-You can get different type of Results.
+You can get different types of Results.
-When `resultType` = SNIPPET, you will get a single or list (depending of
+When `resultType` = SNIPPET, you will get a single or list (depending on
`maxResults` value) of Strings containing the snippets.
When `resultType` = LANGCHAIN4J\_WEB\_SEARCH\_ORGANIC\_RESULT, you will
-get a single or list (depending of `maxResults` value) of Objects of
+get a single or list (depending on `maxResults` value) of Objects of
type `WebSearchOrganicResult` containing all the response created under
the hood by Langchain4j Web Search.
-# Advanced usage of WebSearchRequest
+## Advanced usage of WebSearchRequest
When defining a WebSearchRequest, the Camel LangChain4j web search
component will default to this request, instead of creating one from the
@@ -108,7 +113,7 @@ will be ignored. Use this parameter with caution.
A WebSearchRequest should be bound to the registry.
-Example of binding the request to the registry.
+**Example of binding the request to the registry.**
@BindToRegistry("web-search-request")
WebSearchRequest request = WebSearchRequest.builder()
@@ -120,11 +125,33 @@ The request will be autowired automatically if its bound name is
`web-search-request`. Otherwise, it should be added as a configured
parameter to the Camel route.
-Example of route:
+**Example of route:**
from("direct:web-search")
.to("langchain4j-web-search:test?webSearchRequest=#searchRequestTest");
-## Component ConfigurationsThere are no configurations for this component
-
-## Endpoint ConfigurationsThere are no configurations for this component
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|searchId|The id||string|
+|additionalParams|The additional parameters for the search request: a map of key-value pairs.||object|
+|geoLocation|The geoLocation is the desired geolocation for search results. Each search engine may have a different set of supported geolocations.||string|
+|language|The language is the desired language for search results. The expected values may vary depending on the search engine.||string|
+|maxResults|The maxResults is the expected number of results to be found if the search request were made. Each search engine may have a different limit for the maximum number of results that can be returned.|1|integer|
+|resultType|The resultType is the result type of the request. Valid values are LANGCHAIN4J\_WEB\_SEARCH\_ORGANIC\_RESULT, CONTENT, or SNIPPET. CONTENT is the default value; it will return a list of String. You can also specify to return either the Langchain4j Web Search Organic Result object (using LANGCHAIN4J\_WEB\_SEARCH\_ORGANIC\_RESULT) or the snippet (using SNIPPET) for each result. If maxResults is equal to 1, the response will be a single object instead of a list.|CONTENT|object|
+|safeSearch|The safeSearch is the safe search flag, indicating whether to enable or disable safe search.||boolean|
+|startIndex|The startIndex is the start index for search results, which may vary depending on the search engine.||integer|
+|startPage|The startPage is the start page number for search results.||integer|
+|webSearchEngine|The WebSearchEngine engine to use. This is mandatory. Use one of the implementations from Langchain4j web search engines.||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|webSearchRequest|The webSearchRequest is the custom WebSearchRequest - advanced||object|
diff --git a/camel-ldap.md b/camel-ldap.md
index a2b9c08db71865bd12ed76a7e0c79027ed7aa3eb..e5405e1aab09727da9470c4906c2da22dd3f29e7 100644
--- a/camel-ldap.md
+++ b/camel-ldap.md
@@ -30,12 +30,14 @@ bean in the registry. The LDAP component only supports producer
endpoints, which means that an `ldap` URI cannot appear in the `from` at
the start of a route.
-# Result
+# Usage
+
+## Result
The result is returned to Out body as a
`List` object.
-# DirContext
+## DirContext
The URI, `ldap:ldapserver`, references a bean with the ID `ldapserver`.
The `ldapserver` bean may be defined as follows:
@@ -84,7 +86,7 @@ concurrency guarantees as Spring’s `prototype` scope. This ensures that
each part of your application interacts with a separate and isolated
`DirContext` instance, preventing unintended thread interference.
-# Security concerns related to LDAP injection
+## Security concerns related to LDAP injection
The camel-ldap component uses the message body to filter the search
results. Therefore, the message body should be protected from LDAP
@@ -95,7 +97,7 @@ method(s) to escape string values to be LDAP injection safe.
See the following link for information about [LDAP
Injection](https://cheatsheetseries.owasp.org/cheatsheets/LDAP_Injection_Prevention_Cheat_Sheet.html).
-# Samples
+# Examples
Following on from the configuration above, the code sample below sends
an LDAP request to filter search a group for a member. The Common Name
@@ -118,9 +120,9 @@ is then extracted from the response.
// ...
}
-If no specific filter is required - for example, you just need to look
-up a single entry - specify a wildcard filter expression. For example,
-if the LDAP entry has a Common Name, use a filter expression like:
+If no specific filter is required (for example, you need to look up a
+single entry), specify a wildcard filter expression. If the LDAP entry
+has a Common Name, use a filter expression like:
(cn=*)
@@ -170,7 +172,7 @@ server using credentials.
# Configuring SSL
All that is required is to create a custom socket factory and reference
-it in the InitialDirContext bean - see below sample.
+it in the `InitialDirContext` bean. See the sample below.
**SSL Configuration**
diff --git a/camel-ldif.md b/camel-ldif.md
index 66740abfb7c33e3934b9099558090ac074033dcb..3b5f77a4b760b2bbebd025a4642d9d731a729658 100644
--- a/camel-ldif.md
+++ b/camel-ldif.md
@@ -39,7 +39,9 @@ the start of a route.
For SSL configuration, refer to the `camel-ldap` component where there
is an example of setting up a custom SocketFactory instance.
-# Body types:
+# Usage
+
+## Body types:
The body can be a URL to an LDIF file or an inline LDIF file. To signify
the difference in body types, an inline LDIF must start with:
@@ -48,13 +50,13 @@ the difference in body types, an inline LDIF must start with:
If not, the component will try to parse the body as a URL.
-# Result
+## Result
The result is returned in the Out body as a
-`ArrayList` object. This contains either "success" or
-an Exception message for each LDIF entry.
+`ArrayList` object. This contains either `success`
+or an Exception message for each LDIF entry.
-# LdapConnection
+## LdapConnection
The URI, `ldif:ldapConnectionName`, references a bean with the ID,
`ldapConnectionName`. The ldapConnection can be configured using a
@@ -79,24 +81,7 @@ The `LdapConnection` bean may be defined as follows in Spring XML:
-or in a OSGi blueprint.xml:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Samples
+# Examples
Following on from the Spring configuration above, the code sample below
sends an LDAP request to filter search a group for a member. The Common
diff --git a/camel-leveldb.md b/camel-leveldb.md
new file mode 100644
index 0000000000000000000000000000000000000000..021134e5c587095c3b8faca8ffa333eca144f35a
--- /dev/null
+++ b/camel-leveldb.md
@@ -0,0 +1,233 @@
+# Leveldb.md
+
+**Since Camel 2.10**
+
+[LevelDB](https://github.com/google/leveldb) is a very lightweight and
+embeddable key-value database. Together with Camel, it provides
+persistent support for various Camel features such as the Aggregator.
+
+Current features it provides:
+
+- `LevelDBAggregationRepository`
+
+# Using LevelDBAggregationRepository
+
+`LevelDBAggregationRepository` is an `AggregationRepository` which on
+the fly persists the aggregated messages. This ensures that you will not
+lose messages, as the default aggregator will use an in-memory only
+`AggregationRepository`.
+
+It has the following options:
+
+|Option|Type|Description|
+|---|---|---|
+|repositoryName|String|A mandatory repository name. Allows you to use a shared LevelDBFile for multiple repositories.|
+|persistentFileName|String|Filename for the persistent storage. If no file exists on startup, a new file is created.|
+|levelDBFile|LevelDBFile|Use an existing configured org.apache.camel.component.leveldb.LevelDBFile instance.|
+|sync|boolean|Whether the LevelDBFile should sync on writing or not. Default is false. Syncing on write ensures that all writes are spooled to disk, so updates are not lost. See the LevelDB docs for more details about async vs. sync writes.|
+|returnOldExchange|boolean|Whether the get operation should return the old existing Exchange, if any. By default, this option is false, as we do not need the old exchange when aggregating.|
+|useRecovery|boolean|Whether recovery is enabled. This option is by default true. When enabled, the Camel Aggregator automatically recovers failed aggregated exchanges and resubmits them.|
+|recoveryInterval|long|If recovery is enabled, a background task runs periodically to scan for failed exchanges to recover and resubmit. By default, this interval is 5000 milliseconds.|
+|maximumRedeliveries|int|Allows you to limit the maximum number of redelivery attempts for a recovered exchange. If enabled, the Exchange is moved to the dead letter channel if all redelivery attempts fail. By default, this option is disabled. If this option is used, then the deadLetterUri option must also be provided.|
+|deadLetterUri|String|An endpoint uri for a Dead Letter Channel where exhausted recovered Exchanges will be moved. If this option is used, then the maximumRedeliveries option must also be provided.|
+
+The `repositoryName` option must be provided. Then either the
+`persistentFileName` or the `levelDBFile` must be provided.
+
+## What is preserved when persisting
+
+`LevelDBAggregationRepository` will only preserve `Serializable`
+compatible message body data types. Message headers must be primitives,
+strings, numbers, etc. If a data type is not such a type, it is dropped
+and a `WARN` is logged. Only the `Message` body and the `Message`
+headers are persisted; the `Exchange` properties are **not** persisted.
+
+## Recovery
+
+The `LevelDBAggregationRepository` will by default recover any failed
+Exchange. It does this by having a background task that scans for failed
+Exchanges in the persistent store. You can use the `checkInterval`
+option to set how often this task runs. The recovery works in a
+transactional manner, which ensures that Camel will try to recover and
+redeliver the failed Exchange. Any recovered Exchange will be restored
+from the persistent store, resubmitted, and sent out again.
+
+The following headers are set when an Exchange is being
+recovered/redelivered:
+
+|Header|Type|Description|
+|---|---|---|
+|Exchange.REDELIVERED|Boolean|Set to true to indicate that the Exchange is being redelivered.|
+|Exchange.REDELIVERY_COUNTER|Integer|The redelivery attempt, starting from 1.|
+
+Only when an Exchange has been successfully processed will it be marked
+as complete, which happens when the `confirm` method is invoked on the
+`AggregationRepository`. This means that if the same Exchange fails
+again, it will be retried until it succeeds.
+
+You can use option `maximumRedeliveries` to limit the maximum number of
+redelivery attempts for a given recovered Exchange. You must also set
+the `deadLetterUri` option so Camel knows where to send the Exchange
+when the `maximumRedeliveries` was hit.
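
A minimal configuration sketch combining the recovery options above (the repository name, file path, and dead-letter endpoint are illustrative, not taken from the original text):

```java
import org.apache.camel.component.leveldb.LevelDBAggregationRepository;

// Sketch: persist aggregated exchanges in a local LevelDB file
LevelDBAggregationRepository repo =
        new LevelDBAggregationRepository("myRepo", "target/data/leveldb.dat");
repo.setUseRecovery(true);            // background recovery scan (default: true)
repo.setRecoveryInterval(5000);       // scan every 5000 millis (the default)
repo.setMaximumRedeliveries(3);       // after 3 failed redeliveries...
repo.setDeadLetterUri("direct:dead"); // ...move the exchange here
```
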
+
+You can see some examples in the unit tests of camel-leveldb, for
+example, [this
+test](https://github.com/apache/camel/blob/main/components/camel-leveldb/src/test/java/org/apache/camel/component/leveldb/LevelDBAggregateRecoverTest.java).
+
+## Serialization mechanism
+
+The component serializes using the Java serialization mechanism by
+default.
+
+Alternatively, you can serialize via Jackson (using JSON). Jackson
+serialization brings better performance, but also several limitations.
+
+Example of Jackson serialization:
+
+ LevelDBAggregationRepository repo = ...; //initialization of repository
+ repo.setSerializer(new JacksonLevelDBSerializer());
+
+The Jackson serializer's limitations concern binary data:
+
+- If the payload is raw data (byte\[\]), it is saved into the DB
+  without using Jackson.
+
+- If the payload contains objects with binary fields, those fields
+  won't be serialized/deserialized correctly. A customized serializer
+  can be used to solve this problem: provide a custom serializer to
+  Jackson via your own Module:
+
+
+
+ SimpleModule simpleModule = new SimpleModule();
+ simpleModule.addSerializer(ObjectWithBinaryField.class, new ObjectWithBinaryFieldSerializer()); //custom serializer
+ simpleModule.addDeserializer(ObjectWithBinaryField.class, new ObjectWithBinaryFieldDeserializer()); //custom deserializer
+
+ repo.setSerializer(new JacksonLevelDBSerializer(simpleModule));
+
+# Using LevelDBAggregationRepository in Java DSL
+
+In this example we want to persist aggregated messages in the
+`target/data/leveldb.dat` file.
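
The route itself is not shown in this extract; a sketch of what it can look like (the correlation header, aggregation strategy, and completion size are illustrative assumptions):

```java
from("direct:start")
    // MyAggregationStrategy is a hypothetical strategy combining the bodies
    .aggregate(header("id"), new MyAggregationStrategy())
        .aggregationRepository(new LevelDBAggregationRepository("repo", "target/data/leveldb.dat"))
        .completionSize(5)
        .to("mock:aggregated");
```
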
+
+# Using LevelDBAggregationRepository in Spring XML
+
+The same example but using Spring XML instead:
+
+# Dependencies
+
+To use LevelDB in your Camel routes, you need to add a dependency on
+**camel-leveldb**.
+
+If you use Maven, you could add the following to your pom.xml,
+substituting the version number for the latest release:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-leveldb</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
diff --git a/camel-loadBalance-eip.md b/camel-loadBalance-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..477b37fe7846c1c16d083f5a39977b561c302641
--- /dev/null
+++ b/camel-loadBalance-eip.md
@@ -0,0 +1,70 @@
+# LoadBalance-eip.md
+
+The Load Balancer Pattern allows you to delegate to one of a number of
+endpoints using a variety of different load balancing policies.
+
+# Built-in load balancing policies
+
+Camel provides the following policies out-of-the-box:
+
+
diff --git a/camel-log-eip.md b/camel-log-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..934adeef5cc8fb2317b0b256908e37c91630f8ec
--- /dev/null
+++ b/camel-log-eip.md
@@ -0,0 +1,224 @@
+# Log-eip.md
+
+How can I log the processing of a [Message](#message.adoc)?
+
+Camel provides many ways to log the fact that you are processing a
+message. Here are just a few examples:
+
+- You can use the [Log](#ROOT:log-component.adoc) component which logs
+ the Message content.
+
+- You can use the [Tracer](#manual::tracer.adoc) which traces the
+  message flow.
+
+- You can also use a [Processor](#manual::processor.adoc) or
+ [Bean](#manual::bean-binding.adoc) and log from Java code.
+
+- You can use this log EIP.
+
+# Options
+
+# Exchange properties
+
+## Difference between Log EIP and Log component
+
+This log EIP is much lighter and meant for logging human logs such as
+`Starting to do ...` etc. It can only log a message based on the
+[Simple](#languages:simple-language.adoc) language.
+
+The [log](#ROOT:log-component.adoc) component is meant for logging the
+message content (body, headers, etc). There are many options on the log
+component to configure what content to log.
+
+# Example
+
+You can use the log EIP which allows you to use
+[Simple](#languages:simple-language.adoc) language to construct a
+dynamic message which gets logged.
+
+For example, you can do
+
+Java
+
+    from("direct:start")
+        .log("Processing ${id}")
+        .to("bean:foo");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <log message="Processing ${id}"/>
+        <to uri="bean:foo"/>
+    </route>
+
+This will be evaluated using the
+[Simple](#languages:simple-language.adoc) language to construct the
+`String` containing the message to be logged.
+
+## Logging message body with streaming
+
+If the message body is stream based, then logging the message body may
+cause the message body to be empty afterward. See this
+[FAQ](#manual:faq:why-is-my-message-body-empty.adoc). For streamed
+messages, you can use Stream caching to allow logging the message body
+and be able to read the message body afterward again.
+
+The log DSL has overloaded methods to set the logging level and/or name
+as well.
+
+ from("direct:start")
+ .log(LoggingLevel.DEBUG, "Processing ${id}")
+ .to("bean:foo");
+
+and to set a logger name
+
+ from("direct:start")
+ .log(LoggingLevel.DEBUG, "com.mycompany.MyCoolRoute", "Processing ${id}")
+ .to("bean:foo");
+
+The logger instance may be used as well:
+
+ from("direct:start")
+ .log(LoggingLevel.DEBUG, org.slf4j.LoggerFactory.getLogger("com.mycompany.mylogger"), "Processing ${id}")
+ .to("bean:foo");
+
+For example, you can use this to log the file name being processed if
+you consume files.
+
+ from("file://target/files")
+ .log(LoggingLevel.DEBUG, "Processing file ${file:name}")
+ .to("bean:foo");
+
+In XML DSL, it is also easy to use log DSL as shown below:
+
+    <route>
+        <from uri="file://target/files"/>
+        <log message="Processing file ${file:name}"/>
+        <to uri="bean:foo"/>
+    </route>
+
+The log tag has attributes to set the message, loggingLevel and logName.
+For example:
+
+    <route>
+        <from uri="direct:start"/>
+        <log message="Processing ${id}" loggingLevel="DEBUG" logName="com.mycompany.MyCoolRoute"/>
+        <to uri="bean:foo"/>
+    </route>
+
+# Using custom logger
+
+It is possible to reference an existing logger instance. For example:
+
+    <bean id="myLogger" class="org.slf4j.LoggerFactory" factory-method="getLogger">
+        <constructor-arg value="com.mycompany.mylogger"/>
+    </bean>
+
+    <route>
+        <from uri="direct:start"/>
+        <log message="Processing ${id}" loggerRef="myLogger"/>
+        <to uri="bean:foo"/>
+    </route>
+
+## Configuring logging name
+
+The log message will be logged at `INFO` level using the route id as the
+logger name (or source name:line if source location is enabled, see TIP
+below). So for example, if you have not assigned an id to the route,
+then Camel will use `route-1`, `route-2` as the logger name.
+
+To use "fooRoute" as the route id, you can do:
+
+Java
+
+    from("direct:start").routeId("fooRoute")
+        .log("Processing ${id}")
+        .to("bean:foo");
+
+XML
+
+    <route id="fooRoute">
+        <from uri="direct:start"/>
+        <log message="Processing ${id}"/>
+        <to uri="bean:foo"/>
+    </route>
+
+If you enable `sourceLocationEnabled=true` on `CamelContext`, then Camel
+will use the source file:line as the logger name instead of the route
+id. This is, for example, what `camel-jbang` does, to make it easy to
+see where in the source code the log is located.
+
+### Using custom logger from the Registry
+
+If the Log EIP has not been configured with a specific logger to use,
+then Camel will look up in the [Registry](#manual::registry.adoc) if
+there is a single instance of `org.slf4j.Logger`.
+
+If such an instance exists, then this logger is used; if not, the
+behavior defaults to creating a new logger instance.
+
+## Configuring logging name globally
+
+You can configure a global log name that is used instead of the route
+id, by setting the global option on the `CamelContext`.
+
+In Java, you can do:
+
+Java
+
+    camelContext.getGlobalOptions().put(Exchange.LOG_EIP_NAME, "com.foo.myapp");
+
+XML
+
+    <camelContext>
+        <globalOptions>
+            <globalOption key="CamelLogEipName" value="com.foo.myapp"/>
+        </globalOptions>
+    </camelContext>
+
+# Masking sensitive information like password
+
+You can enable security masking for logging by setting `logMask` flag to
+`true`. Note that this option also affects the
+[Log](#ROOT:log-component.adoc) component.
+
+To enable mask in Java DSL at CamelContext level:
+
+Java
+
+    camelContext.setLogMask(true);
+
+XML
+**And in XML you set the option on `<camelContext>`:**
+
+    <camelContext logMask="true">
+        <!-- routes here -->
+    </camelContext>
+
+You can also turn it on or off at route level. To enable masking at
+route level:
+
+Java
+
+    from("direct:start").logMask()
+        .log("Processing ${id}")
+        .to("bean:foo");
+
+XML
+
+    <route logMask="true">
+        <from uri="direct:start"/>
+        <log message="Processing ${id}"/>
+        <to uri="bean:foo"/>
+    </route>
+
+## Using custom masking formatter
+
+`org.apache.camel.support.processor.DefaultMaskingFormatter` is used for
+the masking by default. If you want to use a custom masking formatter,
+put it into registry with the name `CamelCustomLogMask`. Note that the
+masking formatter must implement
+`org.apache.camel.spi.MaskingFormatter`.
+
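A minimal formatter sketch (the class name and masking rule are illustrative; only the `org.apache.camel.spi.MaskingFormatter` interface is from the text):

```java
import org.apache.camel.spi.MaskingFormatter;

// Hypothetical formatter that blanks out password values in log lines
public class PasswordMaskingFormatter implements MaskingFormatter {
    @Override
    public String format(String source) {
        return source.replaceAll("(?i)(password=)\\S+", "$1xxxxx");
    }
}
```

It would then be bound into the registry under the name `CamelCustomLogMask`, as described above.
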
+The known set of keywords to mask is gathered from all the different
+component options that are marked as secret. The list is generated into
+the source code in `org.apache.camel.util.SensitiveUtils`. At the time
+of writing, there are more than 65 different keywords.
+
+Custom keywords can be added as shown:
+
+ DefaultMaskingFormatter formatter = new DefaultMaskingFormatter();
+ formatter.addKeyword("mySpecialKeyword");
+ formatter.addKeyword("verySecret");
+
+ camelContext.getRegistry().bind(MaskingFormatter.CUSTOM_LOG_MASK_REF, formatter);
diff --git a/camel-log.md b/camel-log.md
index 02169c6806fca0f08a2ebb7b5bf8fae037849a58..b1332fd1d50a6b2fe1c14f5519294169b402af8c 100644
--- a/camel-log.md
+++ b/camel-log.md
@@ -46,7 +46,9 @@ There is also a `log` directly in the DSL, but it has a different
purpose. It’s meant for lightweight and human logs. See more details at
[LogEIP](#eips:log-eip.adoc).
-# Regular logger sample
+# Examples
+
+## Regular logger example
In the route below we log the incoming orders at `DEBUG` level before
the order is processed:
@@ -61,7 +63,7 @@ Or using Spring XML to define the route:
-# Regular logger with formatter sample
+## Regular logger with formatter example
In the route below we log the incoming orders at `INFO` level before the
order is processed.
@@ -69,7 +71,7 @@ order is processed.
from("activemq:orders").
to("log:com.mycompany.order?showAll=true&multiline=true").to("bean:processOrder");
-# Throughput logger with groupSize sample
+## Throughput logger with groupSize example
In the route below we log the throughput of the incoming orders at
`DEBUG` level grouped by 10 messages.
@@ -77,7 +79,7 @@ In the route below we log the throughput of the incoming orders at
from("activemq:orders").
to("log:com.mycompany.order?level=DEBUG&groupSize=10").to("bean:processOrder");
-# Throughput logger with groupInterval sample
+## Throughput logger with groupInterval example
This route will result in message stats logged every 10s, with an
initial 60s delay, and stats should be displayed even if there isn’t any
@@ -90,7 +92,7 @@ The following will be logged:
"Received: 1000 new messages, with total 2000 so far. Last group took: 10000 millis which is: 100 messages per second. average: 100"
-# Masking sensitive information like password
+## Masking sensitive information like password
You can enable security masking for logging by setting `logMask` flag to
`true`. Note that this option also affects Log EIP.
@@ -122,7 +124,7 @@ put it into registry with the name `CamelCustomLogMask`. Note that the
masking formatter must implement
`org.apache.camel.spi.MaskingFormatter`.
-# Full customization of the logging output
+## Full customization of the logging output
With the options outlined in the [#Formatting](#log-component.adoc)
section, you can control much of the output of the logger. However, log
@@ -164,7 +166,7 @@ in either of two ways:
-## Convention over configuration:\*
+### Convention over configuration
Simply by registering a bean with the name `logFormatter`; the Log
Component is intelligent enough to pick it up automatically.
diff --git a/camel-loop-eip.md b/camel-loop-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..2609479a5295b41b294167169a5961f8ba050caa
--- /dev/null
+++ b/camel-loop-eip.md
@@ -0,0 +1,136 @@
+# Loop-eip.md
+
+The Loop EIP allows for processing a message a number of times, possibly
+in a different way for each iteration.
+
+# Options
+
+# Exchange properties
+
+# Looping modes
+
+The Loop EIP can run in three modes: default, copy, or while mode.
+
+In default mode the Loop EIP uses the same `Exchange` instance
+throughout the looping. So the result from the previous iteration will
+be used for the next iteration.
+
+In copy mode, the Loop EIP uses a copy of the original `Exchange` in
+each iteration. So the result from the previous iteration will **not**
+be used for the next iteration.
+
+In while mode, the Loop EIP will keep looping until the expression
+evaluates to `false` or `null`.
+
+# Example
+
+The following example shows how to take a request from the `direct:x`
+endpoint, then send the message repetitively to `mock:result`.
+
+The number of times the message is sent is either passed as an argument
+to `loop`, or determined at runtime by evaluating an expression.
+
+The [Expression](#manual::expression.adoc) **must** evaluate to an
+`int`, otherwise a `RuntimeCamelException` is thrown.
+
+Pass loop count as an argument:
+
+Java
+
+    from("direct:a")
+        .loop(8)
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <loop>
+            <constant>8</constant>
+            <to uri="mock:result"/>
+        </loop>
+    </route>
+
+Use expression to determine loop count:
+
+Java
+
+    from("direct:b")
+        .loop(header("loop"))
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:b"/>
+        <loop>
+            <header>loop</header>
+            <to uri="mock:result"/>
+        </loop>
+    </route>
+
+And with the [XPath](#languages:xpath-language.adoc) language:
+
+ from("direct:c")
+ .loop(xpath("/hello/@times"))
+ .to("mock:result");
+
+# Using copy mode
+
+Now suppose we send a message containing the letter A to the
+direct:start endpoint. When processing this route, each mock:loop
+endpoint will receive AB as the message.
+
+Java
+
+    from("direct:start")
+        // instruct loop to use copy mode, which means it will use a copy of the input exchange
+        // for each loop iteration, instead of reusing the same exchange throughout
+        .loop(3).copy()
+            .transform(body().append("B"))
+            .to("mock:loop")
+        .end() // end loop
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <!-- use copy mode so each iteration gets a copy of the original exchange -->
+        <loop copy="true">
+            <constant>3</constant>
+            <transform>
+                <simple>${body}B</simple>
+            </transform>
+            <to uri="mock:loop"/>
+        </loop>
+        <to uri="mock:result"/>
+    </route>
+
+However, if we do **not** enable copy mode, then mock:loop will receive
+`AB`, `ABB`, `ABBB`, etc. as messages.
+
+# Looping using while
+
+The loop can act like a while loop that loops until the expression
+evaluates to `false` or `null`.
+
+For example, the route below loops while the length of the message body
+is five or fewer characters. Notice that the DSL uses `loopDoWhile`.
+
+Java
+
+    from("direct:start")
+        .loopDoWhile(simple("${body.length} <= 5"))
+            .to("mock:loop")
+            .transform(body().append("A"))
+        .end() // end loop
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <loop doWhile="true">
+            <simple>${body.length} &lt;= 5</simple>
+            <to uri="mock:loop"/>
+            <transform>
+                <simple>A${body}</simple>
+            </transform>
+        </loop>
+        <to uri="mock:result"/>
+    </route>
+
+
+Notice that the while loop is turned on using the `doWhile` attribute.
diff --git a/camel-lpr.md b/camel-lpr.md
index 1ae318ad94acb8040eff84ee717393cb2375fe31..81052c21a91ab2a5faebb2ea3d052b0255b6ffba 100644
--- a/camel-lpr.md
+++ b/camel-lpr.md
@@ -33,17 +33,24 @@ the scheme.
lpr://localhost/default[?options]
lpr://remotehost:port/path/to/printer[?options]
-# Sending Messages to a Printer
+# Usage
-## Printer Producer
+## Sending Messages to a Printer
+
+### Printer Producer
+Sending data to the printer is very straightforward: you create a
+producer endpoint to which message exchanges can be sent in a route.
-# Usage Samples
+# Examples
+
+Usage samples.
+
+## Printing text-based payloads
-## Example 1: Printing text-based payloads on a Default printer using letter stationary and one-sided mode
+**Printing text-based payloads on a Default printer using letter
+stationery and one-sided mode**
RouteBuilder builder = new RouteBuilder() {
public void configure() {
@@ -55,7 +62,10 @@ route.
"&sides=one-sided");
}};
-## Example 2: Printing GIF-based payloads on a remote printer using A4 stationary and one-sided mode
+## Printing GIF-based payloads
+
+**Printing GIF-based payloads on a remote printer using A4 stationery
+and one-sided mode**
RouteBuilder builder = new RouteBuilder() {
public void configure() {
@@ -66,7 +76,10 @@ route.
"&flavor=DocFlavor.INPUT_STREAM");
}};
-## Example 3: Printing JPEG-based payloads on a remote printer using Japanese Postcard stationary and one-sided mode
+## Printing JPEG-based payloads
+
+**Printing JPEG-based payloads on a remote printer using Japanese
+Postcard stationery and one-sided mode**
RouteBuilder builder = new RouteBuilder() {
public void configure() {
diff --git a/camel-lra.md b/camel-lra.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed13716b2cfbb36e13ef9ff9e551c2448de3af24
--- /dev/null
+++ b/camel-lra.md
@@ -0,0 +1,17 @@
+# Lra.md
+
+**Since Camel 2.21**
+
+The LRA module provides bindings of the [Saga EIP](#eips:saga-eip.adoc)
+with any [MicroProfile compatible LRA
+Coordinator](https://github.com/eclipse/microprofile-lra).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-lra</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
diff --git a/camel-lucene.md b/camel-lucene.md
index 8721af2544d795e9a8427c6039abdfcdfe530796..17bbf52ac2bbf7c20f7edbbb420f710b04dbfcef 100644
--- a/camel-lucene.md
+++ b/camel-lucene.md
@@ -37,7 +37,7 @@ for this component:
lucene:searcherName:insert[?options]
lucene:searcherName:query[?options]
-# Sending/Receiving Messages to/from the cache
+# Usage
## Lucene Producers
@@ -58,7 +58,9 @@ syntax](https://lucene.apache.org/core/8_4_1/queryparser/org/apache/lucene/query
There is a processor called LuceneQueryProcessor available to perform
queries against lucene without the need to create a producer.
-# Lucene Usage Samples
+# Examples
+
+Lucene usage samples.
## Example 1: Creating a Lucene index
diff --git a/camel-lumberjack.md b/camel-lumberjack.md
index 4bb3624964b6deb7bbcb1cf47875b8de85991bd7..182ddfce15ed324558135dcb7cda577689e5e899 100644
--- a/camel-lumberjack.md
+++ b/camel-lumberjack.md
@@ -6,8 +6,8 @@
The Lumberjack component retrieves logs sent over the network using the
Lumberjack protocol, from
-[Filebeat](https://www.elastic.co/fr/products/beats/filebeat), for
-instance. The network communication can be secured with SSL.
+[Filebeat](https://www.elastic.co/beats/filebeat/), for instance. The
+network communication can be secured with SSL.
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -24,11 +24,13 @@ for this component:
lumberjack:host
lumberjack:host:port
-# Result
+# Usage
The result body is a `Map` object.
-# Lumberjack Usage Samples
+# Examples
+
+Lumberjack usage samples.
## Example 1: Streaming the log messages
diff --git a/camel-lzf-dataformat.md b/camel-lzf-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..6cdb10b911b785de15e2bcdc221bfe81de924510
--- /dev/null
+++ b/camel-lzf-dataformat.md
@@ -0,0 +1,44 @@
+# Lzf-dataformat.md
+
+**Since Camel 2.17**
+
+The LZF [Data Format](#manual::data-format.adoc) is a message
+compression and decompression format. It uses the LZF deflate algorithm.
+Messages marshalled using LZF compression can be unmarshalled using LZF
+decompression just prior to being consumed at the endpoint. The
+compression capability is quite useful when you deal with large XML and
+text-based payloads or when you read messages previously compressed
+using the LZF algorithm.
+
+# Options
+
+# Marshal
+
+In this example, we marshal a regular text/XML payload to a compressed
+payload employing the LZF compression format and send it to an ActiveMQ
+queue called MY\_QUEUE.
+
+ from("direct:start").marshal().lzf().to("activemq:queue:MY_QUEUE");
+
+# Unmarshal
+
+In this example, we unmarshal an LZF payload from an ActiveMQ queue
+called MY\_QUEUE to its original format, and forward it for processing
+to the `UnCompressedMessageProcessor`.
+
+ from("activemq:queue:MY_QUEUE").unmarshal().lzf().process(new UnCompressedMessageProcessor());
+
+# Dependencies
+
+To use LZF compression in your Camel routes, you need to add a
+dependency on **camel-lzf** which implements this data format.
+
+If you use Maven you can add the following to your `pom.xml`,
+substituting the version number for the latest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-lzf</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
diff --git a/camel-mail-microsoft-oauth.md b/camel-mail-microsoft-oauth.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd0939f66b2b307c5e2d7d5546f1fd3d40d5c599
--- /dev/null
+++ b/camel-mail-microsoft-oauth.md
@@ -0,0 +1,51 @@
+# Mail-microsoft-oauth.md
+
+**Since Camel 3.18.4**
+
+The Mail Microsoft OAuth2 component provides an implementation of
+`org.apache.camel.component.mail.MailAuthenticator` to authenticate
+IMAP/POP/SMTP connections and access to email via Spring’s Mail support
+and the underlying JavaMail system.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-mail-microsoft-oauth</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+
+Importing `camel-mail-microsoft-oauth` will automatically import the
+camel-mail component.
+
+# Usage
+
+## Microsoft Exchange Online OAuth2 Mail Authenticator IMAP example
+
+To use OAuth, an application must be registered with Azure Active
+Directory. Follow the instructions listed in [Register an application
+with the Microsoft identity
+platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app)
+guide to register a new application.
+Then enable the application to access Exchange mailboxes via the client
+credentials flow, following the instructions
+[here](https://learn.microsoft.com/en-us/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth).
+Once everything is set up, declare and register in the registry an
+instance of
+`org.apache.camel.component.mail.MicrosoftExchangeOnlineOAuth2MailAuthenticator`.
+For example, in a Spring Boot application:
+
+ @BindToRegistry("auth")
+ public MicrosoftExchangeOnlineOAuth2MailAuthenticator exchangeAuthenticator(){
+ return new MicrosoftExchangeOnlineOAuth2MailAuthenticator(tenantId, clientId, clientSecret, "jon@doe.com");
+ }
+
+and then reference it in the Camel URI:
+
+ from("imaps://outlook.office365.com:993"
+ + "?authenticator=#auth"
+ + "&mail.imaps.auth.mechanisms=XOAUTH2"
+ + "&debugMode=true"
+ + "&delete=false")
diff --git a/camel-mail.md b/camel-mail.md
index 0a8ae4bc51cd8b83aad916d42672f61aedd6d92d..1b00231f8ec9e3e9090a1ec93af68ff66e879e33 100644
--- a/camel-mail.md
+++ b/camel-mail.md
@@ -48,11 +48,11 @@ the scheme:
pop3s://[username@]host[:port][?options]
imaps://[username@]host[:port][?options]
-## Sample endpoints
+# Usage
Typically, you specify a URI with login credentials as follows:
-**SMTP example**
+**SMTP endpoint example**
smtp://[username@]host[:port][?password=somepwd]
@@ -67,17 +67,17 @@ For example:
## Component alias names
-- IMAP
+- `IMAP`
-- IMAPs
+- `IMAPs`
-- POP3s
+- `POP3s`
-- POP3s
+- `POP3s`
-- SMTP
+- `SMTP`
-- SMTPs
+- `SMTPs`
## Default ports
@@ -90,40 +90,40 @@ determines the port number to use based on the protocol.
-
+
-
+
SMTP
25
-
+
SMTPS
465
-
+
POP3
110
-
+
POP3S
995
-
+
IMAP
143
-
+
IMAPS
993
-# SSL support
+## SSL support
The underlying mail framework is responsible for providing SSL support.
You may either configure SSL/TLS support by completely specifying the
@@ -131,7 +131,7 @@ necessary Java Mail API configuration options, or you may provide a
configured SSLContextParameters through the component or endpoint
configuration.
-## Using the JSSE Configuration Utility
+### Using the JSSE Configuration Utility
The mail component supports SSL/TLS configuration through the [Camel
JSSE Configuration
@@ -167,7 +167,7 @@ Spring DSL based configuration of endpoint
...
...
-## Configuring JavaMail Directly
+### Configuring JavaMail Directly
Camel uses Jakarta JavaMail, which only trusts certificates issued by
well-known Certificate Authorities (the default JVM trust
@@ -176,7 +176,7 @@ the CA certificates into the JVM’s Java trust/key store files, override
the default JVM trust/key store files (see `SSLNOTES.txt` in JavaMail
for details).
-# Mail Message Content
+## Mail Message Content
Camel uses the message exchange’s IN body as the
[MimeMessage](http://java.sun.com/javaee/5/docs/api/javax/mail/internet/MimeMessage.html)
@@ -211,7 +211,7 @@ able to get the message id of the
[MimeMessage](http://java.sun.com/javaee/5/docs/api/javax/mail/internet/MimeMessage.html)
with the key `CamelMailMessageId` from the Camel message header.
-# Headers take precedence over pre-configured recipients
+## Headers take precedence over pre-configured recipients
The recipients specified in the message headers always take precedence
over recipients pre-configured in the endpoint URI. The idea is that if
@@ -234,7 +234,7 @@ pre-configured settings.
template.sendBodyAndHeaders("smtp://admin@localhost?to=info@mycompany.com", "Hello World", headers);
-# Multiple recipients for easier configuration
+## Multiple recipients for easier configuration
It is possible to set multiple recipients using a comma-separated or a
semicolon-separated list. This applies both to header settings and to
@@ -245,7 +245,7 @@ settings in an endpoint URI. For example:
The preceding example uses a semicolon, `;`, as the separator character.
-# Setting sender name and email
+## Setting sender name and email
You can specify recipients in the format `name <email>` to include
both the name and the email address of the recipient.
@@ -260,13 +260,13 @@ For example, you define the following headers on the message:
map.put("Bcc", "An Other ");
map.put("Reply-To", "An Other ");
-# JavaMail API (ex SUN JavaMail)
+## JavaMail API (ex SUN JavaMail)
[JavaMail API](https://java.net/projects/javamail/pages/Home) is used
-under the hood for consuming and producing mails.
-We encourage end-users to consult these references when using either
-POP3 or IMAP protocol. Note particularly that POP3 has a much more
-limited set of features than IMAP.
+under the hood for consuming and producing mails. We encourage end-users
+to consult these references when using either POP3 or IMAP protocol.
+Note particularly that POP3 has a much more limited set of features than
+IMAP.
- [JavaMail POP3
API](https://javamail.java.net/nonav/docs/api/com/sun/mail/pop3/package-summary.html)
@@ -277,7 +277,17 @@ limited set of features than IMAP.
- And generally about the [MAIL
Flags](https://javamail.java.net/nonav/docs/api/javax/mail/Flags.html)
-# Samples
+## Polling Optimization
+
+The parameters `maxMessagesPerPoll` and `fetchSize` allow you to
+restrict the number of messages that are processed for each poll. These
+parameters help prevent poor performance when working with folders that
+contain a lot of messages. In previous versions, these parameters were
+evaluated too late, so big mailboxes could still cause performance
+problems. Since Camel 3.1, these parameters are evaluated earlier
+during the poll to avoid these problems.
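+
+For example, a polling consumer can cap each poll like this (a sketch
+assuming the camel-mail option names `maxMessagesPerPoll` and
+`fetchSize`; the endpoint and values are illustrative):
+
+    from("imap://admin@mymailserver.com?password=secret&maxMessagesPerPoll=25&fetchSize=25")
+        .to("seda://mails");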
+
+# Examples
We start with a simple route that sends the messages received from a JMS
queue as emails. The email account is the `admin` account on
@@ -290,7 +300,7 @@ In the next sample, we poll a mailbox for new emails once every minute.
from("imap://admin@mymailserver.com?password=secret&unseen=true&delay=60000")
.to("seda://mails");
-# Sending mail with attachment sample
+## Sending mail with attachment
**Attachments are not supported by all Camel components**
@@ -323,7 +333,7 @@ attachment.
// and let it go (processes the exchange by sending the email)
producer.process(exchange);
-# SSL sample
+## SSL example
In this sample, we want to poll our Google Mail inbox for mails. To
download mail onto a local mail client, Google Mail requires you to
@@ -346,7 +356,7 @@ progress in the logs:
2008-05-08 06:32:12,171 DEBUG MailConsumer - Processing message: messageNumber=[332], from=[James Bond <007@mi5.co.uk>], to=YOUR_USERNAME@gmail.com], subject=[...
2008-05-08 06:32:12,187 INFO newmail - Exchange[MailMessage: messageNumber=[332], from=[James Bond <007@mi5.co.uk>], to=YOUR_USERNAME@gmail.com], subject=[...
-# Consuming mails with attachment sample
+## Consuming mails with attachment
In this sample, we poll a mailbox and store all attachments from the
mails as files. First, we define a route to poll the mailbox. As this
@@ -386,7 +396,7 @@ As you can see the API to handle attachments is a bit clunky, but it’s
there, so you can get the `javax.activation.DataHandler` so you can
handle the attachments using standard API.
-# How to split a mail message with attachments
+## How to split a mail message with attachments
In this example, we consume mail messages which may have a number of
attachments. What we want to do is to use the Splitter EIP per
@@ -401,7 +411,7 @@ The code is provided out of the box in Camel 2.10 onwards in the
`camel-mail` component. The code is in the class:
`org.apache.camel.component.mail.SplitAttachmentsExpression`, which you
can find the source code
-[here](https://svn.apache.org/repos/asf/camel/trunk/components/camel-mail/src/main/java/org/apache/camel/component/mail/SplitAttachmentsExpression.java)
+[here](https://github.com/apache/camel/blob/main/components/camel-mail/src/main/java/org/apache/camel/component/mail/SplitAttachmentsExpression.java)
In the Camel route, you then need to use this Expression in the route as
shown below:
@@ -421,7 +431,7 @@ message body. This is done by creating the expression with boolean true
And then use the expression with the splitter EIP.
-# Using custom SearchTerm
+## Using custom SearchTerm
You can configure a `searchTerm` on the `MailEndpoint` which allows you
to filter out unwanted mails.
@@ -489,17 +499,7 @@ allows you to build complex terms such as:
SearchTerm term = builder.build();
-# Polling Optimization
-
-The parameter maxMessagePerPoll and fetchSize allow you to restrict the
-number of messages that should be processed for each poll. These
-parameters should help to prevent bad performance when working with
-folders that contain a lot of messages. In previous versions, these
-parameters have been evaluated too late, so that big mailboxes could
-still cause performance problems. With Camel 3.1, these parameters are
-evaluated earlier during the poll to avoid these problems.
-
-# Using headers with additional Java Mail Sender properties
+## Using headers with additional Java Mail Sender properties
When sending mails, then you can provide dynamic java mail properties
for the `JavaMailSender` from the Exchange as message headers with keys
diff --git a/camel-main.md b/camel-main.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5f4f6c35fc83255bb7c65c1ca53d13e0e3b97f1
--- /dev/null
+++ b/camel-main.md
@@ -0,0 +1,4016 @@
+# Main.md
+
+**Since Camel 3.0**
+
+This module is used for running Camel standalone via a main class
+extended from `camel-main`.
+
+# Configuration options
+
+When running Camel via `camel-main` you can configure Camel in the
+`application.properties` file.
+
+The following tables list all the options:
+
+## Camel Main configurations
+
+The `camel.main` prefix supports 122 options, which are listed below.
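+
+For example, a minimal `application.properties` using a few of the
+options below might look like this (the values are illustrative):
+
+    camel.main.name = MyCamelApp
+    camel.main.durationMaxSeconds = 60
+    camel.main.jmxEnabled = false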
+
+Each option below is listed with its name and description, followed by
+its default value (if any) and Java type.
+
+camel.main.allowUseOriginalMessage
+Sets whether to allow access to the
+original message from Camel’s error handler, or from
+org.apache.camel.spi.UnitOfWork.getOriginalInMessage(). Turning this off
+can optimize performance, as defensive copy of the original message is
+not needed. Default is false.
+false
+boolean
+
+
+camel.main.autoConfigurationEnabled
+Whether auto configuration of
+components, dataformats, languages is enabled or not. When enabled the
+configuration parameters are loaded from the properties component. You
+can prefix the parameters in the properties file with: -
+camel.component.name.option1=value1 -
+camel.component.name.option2=value2 -
+camel.dataformat.name.option1=value1 -
+camel.dataformat.name.option2=value2 -
+camel.language.name.option1=value1 - camel.language.name.option2=value2
+Where name is the name of the component, dataformat or language such as
+seda,direct,jaxb. The auto configuration also works for any options on
+components that is a complex type (not standard Java type) and there has
+been an explicit single bean instance registered to the Camel registry
+via the org.apache.camel.spi.Registry#bind(String,Object) method or by
+using the org.apache.camel.BindToRegistry annotation style. This option
+is default enabled.
+true
+boolean
+
+
+camel.main.autoConfigurationEnvironmentVariablesEnabled
+Whether auto configuration should
+include OS environment variables as well. When enabled this allows to
+overrule any configuration using an OS environment variable. For example
+to set a shutdown timeout of 5 seconds: CAMEL_MAIN_SHUTDOWNTIMEOUT=5.
+This option is default enabled.
+true
+boolean
+
+
+camel.main.autoConfigurationFailFast
+Whether auto configuration should fail
+fast when configuring one or more properties fails for whatever reason,
+such as an invalid property name, etc. This option is default
+enabled.
+true
+boolean
+
+
+camel.main.autoConfigurationLogSummary
+Whether auto configuration should log a
+summary with the configured properties. This option is default
+enabled.
+true
+boolean
+
+
+camel.main.autoConfigurationSystemPropertiesEnabled
+Whether auto configuration should
+include JVM system properties as well. When enabled this allows to
+overrule any configuration using a JVM system property. For example to
+set a shutdown timeout of 5 seconds: -D camel.main.shutdown-timeout=5.
+Note that JVM system properties take precedence over OS environment
+variables. This option is default enabled.
+true
+boolean
+
+
+camel.main.autoStartup
+Sets whether the object should
+automatically start when Camel starts. Important: Currently only routes
+can be disabled, as CamelContexts are always started. Note: When
+setting auto startup false on CamelContext then that takes precedence
+and no routes are started. You would need to start CamelContext explicitly
+using the org.apache.camel.CamelContext.start() method, to start the
+context, and then you would need to start the routes manually using
+CamelContext.getRouteController().startRoute(String). Default is true to
+always start up.
+true
+boolean
+
+
+camel.main.autowiredEnabled
+Whether autowiring is enabled. This is
+used for automatic autowiring options (the option must be marked as
+autowired) by looking up in the registry to find if there is a single
+instance of matching type, which then gets configured on the component.
+This can be used for automatic configuring JDBC data sources, JMS
+connection factories, AWS Clients, etc. Default is true.
+true
+boolean
+
+
+camel.main.basePackageScan
+Package name to use as base (offset)
+for classpath scanning of RouteBuilder , org.apache.camel.TypeConverter
+, CamelConfiguration classes, and also classes annotated with
+org.apache.camel.Converter , or org.apache.camel.BindToRegistry . If you
+are using Spring Boot then it is instead recommended to use Spring Boot's
+component scanning and annotate your route builder classes with
+Component. In other words only use this for Camel Main in standalone
+mode.
+
+String
+
+
+camel.main.basePackageScanEnabled
+Whether base package scan is
+enabled.
+true
+boolean
+
+
+camel.main.beanIntrospectionExtendedStatistics
+Sets whether bean introspection uses
+extended statistics. The default is false.
+false
+boolean
+
+
+camel.main.beanIntrospectionLoggingLevel
+Sets the logging level used by bean
+introspection, logging activity of its usage. The default is
+TRACE.
+
+LoggingLevel
+
+
+camel.main.beanPostProcessorEnabled
+Can be used to turn off bean post
+processing. Be careful when turning this off, as this means that beans that
+use Camel annotations such as org.apache.camel.EndpointInject ,
+org.apache.camel.ProducerTemplate , org.apache.camel.Produce ,
+org.apache.camel.Consume etc will not be injected and in use. Turning
+this off should only be done if you are sure you do not use any of these
+Camel features. Not all runtimes allow turning this off. The default
+value is true (enabled).
+true
+boolean
+
+
+camel.main.camelEventsTimestampEnabled
+Whether to include timestamps for all
+emitted Camel Events. Enabling this allows knowing, at a fine-grained
+level, at what time each event was emitted, which can be used to report
+exactly the time of the events. This is by default false to avoid the
+overhead of including this information.
+false
+boolean
+
+
+camel.main.caseInsensitiveHeaders
+Whether to use case sensitive or
+insensitive headers. Important: When using case sensitive (this is set
+to false), the map is case sensitive, which means headers such as
+content-type and Content-Type are two different keys, which can be a
+problem for some protocols such as HTTP based, which rely on case
+insensitive headers. However case sensitive implementations can yield
+faster performance. Therefore use case sensitive implementation with
+care. Default is true.
+true
+boolean
+
+
+camel.main.cloudPropertiesLocation
+Sets the locations (comma separated
+values) where to find properties configuration as defined for cloud
+native environments such as Kubernetes. You should only scan text based
+mounted configuration.
+
+String
+
+
+camel.main.compileWorkDir
+Work directory for compiler. Can be
+used to write compiled classes or other resources.
+
+String
+
+
+camel.main.configurationClasses
+Sets classes names that will be used to
+configure the camel context as example by providing custom beans through
+org.apache.camel.BindToRegistry annotation.
+
+String
+
+
+camel.main.configurations
+Sets the configuration objects used to
+configure the camel context.
+
+List
+
+
+camel.main.consumerTemplateCacheSize
+Consumer template endpoints cache
+size.
+1000
+int
+
+
+camel.main.contextReloadEnabled
+Used for enabling context reloading. If
+enabled then Camel allow external systems such as security vaults (AWS
+secrets manager, etc.) to trigger refreshing Camel by updating property
+placeholders and reload all existing routes to take changes into
+effect.
+false
+boolean
+
+
+camel.main.description
+Sets the description (intended for
+humans) of the Camel application.
+
+String
+
+
+camel.main.devConsoleEnabled
+Whether to enable developer console
+(requires camel-console on classpath). The developer console is only for
+assisting during development. This is NOT for production usage.
+false
+boolean
+
+
+camel.main.dumpRoutes
+If dumping is enabled then Camel will
+during startup dump all loaded routes (including rests and route
+templates) represented as XML/YAML DSL into the log. This is intended
+for troubleshooting or to assist during development. Sensitive information that may
+be configured in the route endpoints could potentially be included in
+the dump output and is therefore not recommended being used for
+production usage. This requires to have camel-xml-io/camel-yaml-io on
+the classpath to be able to dump the routes as XML/YAML.
+
+String
+
+
+camel.main.dumpRoutesGeneratedIds
+Whether to include auto generated IDs
+in the dumped output. Default is false.
+false
+boolean
+
+
+camel.main.dumpRoutesInclude
+Controls what to include in output for
+route dumping. Possible values: all, routes, rests, routeConfigurations,
+routeTemplates, beans. Multiple values can be separated by comma.
+Default is routes.
+routes
+String
+
+
+camel.main.dumpRoutesLog
+Whether to log route dumps to
+Logger
+true
+boolean
+
+
+camel.main.dumpRoutesOutput
+Whether to save route dumps to an
+output file. If the output is a filename, then all content is saved to
+this file. If the output is a directory name, then one or more files are
+saved to the directory, where the names are based on the original source
+file names, or auto generated names.
+
+String
+
+
+camel.main.dumpRoutesResolvePlaceholders
+Whether to resolve property
+placeholders in the dumped output. Default is true.
+true
+boolean
+
+
+camel.main.dumpRoutesUriAsParameters
+When dumping routes to YAML format,
+then this option controls whether endpoint URIs should be expanded into
+key/value parameters.
+false
+boolean
+
+
+camel.main.durationHitExitCode
+Sets the exit code for the application
+if duration was hit
+
+int
+
+
+camel.main.durationMaxAction
+Controls whether the Camel application
+should shutdown the JVM, or stop all routes, when duration max is
+triggered.
+shutdown
+String
+
+
+camel.main.durationMaxIdleSeconds
+To specify how long, in seconds,
+Camel can be idle before automatically terminating the JVM. You can use this
+to run Camel for a short while.
+
+int
+
+
+camel.main.durationMaxMessages
+To specify how many messages to process
+by Camel before automatically terminating the JVM. You can use this to run
+Camel for a short while.
+
+int
+
+
+camel.main.durationMaxSeconds
+To specify how long, in seconds,
+to keep the JVM running before automatically terminating it. You can
+use this to run Camel for a short while.
+
+int
+
+
+camel.main.endpointBridgeErrorHandler
+Allows for bridging the consumer to the
+Camel routing Error Handler, which mean any exceptions occurred while
+the consumer is trying to pickup incoming messages, or the likes, will
+now be processed as a message and handled by the routing Error Handler.
+By default the consumer will use the
+org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will
+be logged at WARN/ERROR level and ignored. The default value is
+false.
+false
+boolean
+
+
+camel.main.endpointLazyStartProducer
+Whether the producer should be started
+lazy (on the first message). By starting lazy you can use this to allow
+CamelContext and routes to startup in situations where a producer may
+otherwise fail during starting and cause the route to fail being
+started. By deferring this startup to be lazy then the startup failure
+can be handled during routing messages via Camel’s routing error
+handlers. Beware that when the first message is processed then creating
+and starting the producer may take a little time and prolong the total
+processing time. The default value is false.
+false
+boolean
+
+
+camel.main.endpointRuntimeStatisticsEnabled
+Sets whether endpoint runtime
+statistics is enabled (gathers runtime usage of each incoming and
+outgoing endpoints). The default value is false.
+false
+boolean
+
+
+camel.main.exchangeFactory
+Controls whether to pool (reuse)
+exchanges or create new exchanges (prototype). Using pooled will reduce
+JVM garbage collection overhead by avoiding to re-create Exchange
+instances per message each consumer receives. The default is prototype
+mode.
+default
+String
+
+
+camel.main.exchangeFactoryCapacity
+The capacity the pool (for each
+consumer) uses for storing exchanges. The default capacity is
+100.
+100
+int
+
+
+camel.main.exchangeFactoryStatisticsEnabled
+Configures whether statistics is
+enabled on exchange factory.
+false
+boolean
+
+
+camel.main.extraShutdownTimeout
+Extra timeout in seconds to gracefully
+shut down Camel. When Camel is shutting down, Camel first shuts down
+all the routes (shutdownTimeout). Then additional services are shut down
+(extraShutdownTimeout).
+15
+int
+
+
+camel.main.fileConfigurations
+Directory to load additional
+configuration files that contain configuration values that take
+precedence over any other configuration. This can be used to refer to
+files that may have secret configuration that has been mounted on the
+file system for containers. You can specify a pattern to load from sub
+directories and a name pattern such as /var/app/secret/*.properties,
+multiple directories can be separated by comma.
+
+String
+
+
+camel.main.globalOptions
+Sets global options that can be
+referenced in the camel context Important: This has nothing to do with
+property placeholders, and is just a plain set of key/value pairs which
+are used to configure global options on CamelContext, such as a maximum
+debug logging length etc.
+
+Map
+
+
+camel.main.inflightRepositoryBrowseEnabled
+Sets whether the inflight repository
+should allow browsing each inflight exchange. This is by default
+disabled as there is a very slight performance overhead when
+enabled.
+false
+boolean
+
+
+camel.main.javaRoutesExcludePattern
+Used for exclusive filtering
+RouteBuilder classes which are collected from the registry or via
+classpath scanning. The exclusive filtering takes precedence over
+inclusive filtering. The pattern is using Ant-path style pattern.
+Multiple patterns can be specified separated by comma. For example to
+exclude all classes starting with Bar use: **/Bar* To exclude all routes
+from a specific package use: com/mycompany/bar/* To exclude all routes
+from a specific package and its sub-packages use double wildcards:
+com/mycompany/bar/** And to exclude all routes from two specific
+packages use: com/mycompany/bar/*,com/mycompany/stuff/*
+
+String
+
+
+camel.main.javaRoutesIncludePattern
+Used for inclusive filtering
+RouteBuilder classes which are collected from the registry or via
+classpath scanning. The exclusive filtering takes precedence over
+inclusive filtering. The pattern is using Ant-path style pattern.
+Multiple patterns can be specified separated by comma. For example, to
+include all classes starting with Foo use: **/Foo To include all routes
+from a specific package use: com/mycompany/foo/* To include all routes
+from a specific package and its sub-packages use double wildcards:
+com/mycompany/foo/**
+And to include all routes from two specific packages use:
+com/mycompany/foo/*,com/mycompany/stuff/*
+
+String
+
+
+camel.main.jmxEnabled
+Enable JMX in your Camel
+application.
+true
+boolean
+
+
+camel.main.jmxManagementMBeansLevel
+Sets the mbeans registration level. The
+default value is Default.
+Default
+ManagementMBeansLevel
+
+
+camel.main.jmxManagementNamePattern
+The naming pattern for creating the
+CamelContext JMX management name. The default pattern is name
+name
+String
+
+
+camel.main.jmxManagementRegisterRoutesCreateByKamelet
+Whether routes created by Kamelets
+should be registered for JMX management. Enabling this allows to have
+fine-grained monitoring and management of every route created via
+Kamelets. This is default disabled as a Kamelet is intended as a
+component (black-box) and its implementation details as Camel route
+makes the overall management and monitoring of Camel applications more
+verbose. During development of Kamelets then enabling this will make it
+possible for developers to do fine-grained performance inspection and
+identify potential bottlenecks in the Kamelet routes. However, for
+production usage then keeping this disabled is recommended.
+false
+boolean
+
+
+camel.main.jmxManagementRegisterRoutesCreateByTemplate
+Whether routes created by route
+templates (not Kamelets) should be registered for JMX management.
+Enabling this allows to have fine-grained monitoring and management of
+every route created via route templates. This is default enabled (unlike
+Kamelets) as routes created via templates are regarded as standard
+routes, and should be available for management and monitoring.
+true
+boolean
+
+
+camel.main.jmxManagementStatisticsLevel
+Sets the JMX statistics level, the
+level can be set to Extended to gather additional information The
+default value is Default.
+Default
+ManagementStatisticsLevel
+
+
+camel.main.jmxUpdateRouteEnabled
+Whether to allow updating routes at
+runtime via JMX using the ManagedRouteMBean. This is disabled by
+default, but can be enabled for development and troubleshooting
+purposes, such as updating routes in an existing running Camel via JMX
+and other tools.
+false
+boolean
+
+
+camel.main.lightweight
+Configure the context to be
+lightweight. This will trigger some optimizations and memory reduction
+options. Lightweight contexts have some limitations. At this moment,
+dynamic endpoint destinations are not supported.
+false
+boolean
+
+
+camel.main.loadHealthChecks
+Whether to load custom health checks by
+scanning classpath.
+false
+boolean
+
+
+camel.main.loadStatisticsEnabled
+Sets whether context load statistics is
+enabled (something like the unix load average). Gathering the statistics
+requires camel-management on the classpath, as JMX is required. The
+default value is false.
+false
+boolean
+
+
+camel.main.loadTypeConverters
+Whether to load custom type converters
+by scanning classpath. This is used for backwards compatibility with
+Camel 2.x. It's recommended to migrate to use fast type converter loading
+by setting Converter(loader = true) on your custom type converter
+classes.
+false
+boolean
+
+
+camel.main.logDebugMaxChars
+Is used to limit the maximum length of
+the logging Camel message bodies. If the message body is longer than the
+limit, the log message is clipped. Use -1 to have unlimited length. Use
+for example 1000 to log at most 1000 characters.
+
+int
+
+
+camel.main.logExhaustedMessageBody
+Sets whether to log exhausted message
+body with message history. Default is false.
+false
+boolean
+
+
+camel.main.logLanguage
+To configure the language to use for
+Log EIP. By default, the simple language is used. However, Camel also
+supports other languages such as groovy.
+
+String
+
+
+camel.main.logMask
+Sets whether log mask is enabled or
+not. Default is false.
+false
+boolean
+
+
+camel.main.logName
+The global name to use for Log EIP The
+name is default the routeId or the source:line if source location is
+enabled. You can also specify the name using tokens: ${class} - the
+logger class name (org.apache.camel.processor.LogProcessor) ${contextId}
+- the camel context id ${routeId} - the route id ${groupId} - the route
+group id ${nodeId} - the node id ${nodePrefixId} - the node prefix id
+${source} - the source:line (source location must be enabled)
+${source.name} - the source filename (source location must be enabled)
+${source.line} - the source line number (source location must be
+enabled) For example to use the route and node id you can specify the
+name as: ${routeId}/${nodeId}
+
+String
+
+
+camel.main.mainListenerClasses
+Sets classes names that will be used
+for MainListener that makes it possible to do custom logic during
+starting and stopping camel-main.
+
+String
+
+
+camel.main.mainListeners
+Sets main listener objects that will be
+used for MainListener that makes it possible to do custom logic during
+starting and stopping camel-main.
+
+List
+
+
+camel.main.mdcLoggingKeysPattern
+Sets the pattern used for determine
+which custom MDC keys to propagate during message routing when the
+routing engine continues routing asynchronously for the given message.
+Setting this pattern to * will propagate all custom keys. Or setting the
+pattern to foo*,bar* will propagate any keys starting with either foo or
+bar. Notice that a set of standard Camel MDC keys are always propagated
+which starts with camel. as key name. The match rules are applied in
+this order (case insensitive): 1. exact match, returns true 2. wildcard
+match (pattern ends with a * and the name starts with the pattern),
+returns true 3. regular expression match, returns true 4. otherwise
+returns false
+
+String
+
+
+camel.main.messageHistory
+Sets whether message history is enabled
+or not. Default is false.
+false
+boolean
+
+
+camel.main.modeline
+Whether camel-k style modeline is also
+enabled when not using camel-k. Enabling this allows to use a camel-k
+like experience by being able to configure various settings using
+modeline directly in your route source code.
+false
+boolean
+
+
+camel.main.name
+Sets the name of the
+CamelContext.
+
+String
+
+
+camel.main.producerTemplateCacheSize
+Producer template endpoints cache
+size.
+1000
+int
+
+
+camel.main.profile
+Camel profile to use when running. The
+dev profile is for development, which enables a set of additional
+developer focus functionality, tracing, debugging, and gathering
+additional runtime statistics that are useful during development.
+However, those additional features has a slight overhead cost, and are
+not enabled for production profile. The default profile is
+prod.
+
+String
+
+
+camel.main.routeFilterExcludePattern
+Used for filtering routes
+matching the given pattern, which follows the following rules: - Match
+by route id - Match by route input endpoint uri The matching is using
+exact match, by wildcard and regular expression as documented by
+PatternHelper#matchPattern(String,String) . For example to only include
+routes which starts with foo in their route id’s, use: include=foo* And
+to exclude routes which starts from JMS endpoints, use: exclude=jms:*
+Multiple patterns can be separated by comma, for example to exclude both
+foo and bar routes, use: exclude=foo*,bar* Exclude takes precedence over
+include.
+
+String
+
+
+camel.main.routeFilterIncludePattern
+Used for filtering routes matching the
+given pattern, which follows these rules: - Match by route id -
+Match by route input endpoint uri The matching uses exact match,
+wildcard, or regular expression as documented by
+PatternHelper#matchPattern(String,String) . For example, to only include
+routes whose route id starts with foo, use: include=foo* And
+to exclude routes which start from JMS endpoints, use: exclude=jms:*
+Multiple patterns can be separated by comma, for example to exclude both
+foo and bar routes, use: exclude=foo*,bar* Exclude takes precedence over
+include.
+
+String
+
+
+camel.main.routesBuilderClasses
+Sets classes names that implement
+RoutesBuilder .
+
+String
+
+
+camel.main.routesBuilders
+Sets the RoutesBuilder
+instances.
+
+List
+
+
+camel.main.routesCollectorEnabled
+Whether the routes collector is enabled
+or not. When enabled Camel will auto-discover routes (RouteBuilder
+instances from the registry and also load additional routes from the
+file system). The routes collector is default enabled.
+true
+boolean
+
+
+camel.main.routesCollectorIgnoreLoadingError
+Whether the routes collector should
+ignore any errors during loading and compiling routes. This is only
+intended for development or tooling.
+false
+boolean
+
+
+camel.main.routesExcludePattern
+Used for exclusive filtering of routes
+from directories. The exclusive filtering takes precedence over
+inclusive filtering. The pattern is using Ant-path style pattern.
+Multiple patterns can be specified separated by comma; for example, to
+exclude all the routes from a directory whose name contains foo use:
+**/foo.
+
+String
+
+
+camel.main.routesIncludePattern
+Used for inclusive filtering of routes
+from directories. The exclusive filtering takes precedence over
+inclusive filtering. The pattern is using Ant-path style pattern.
+Multiple patterns can be specified separated by comma; for example, to
+include all the routes from a directory whose name contains foo use:
+**/foo.
+classpath:camel/*,classpath:camel-template/*,classpath:camel-rest/*
+String
+
+
+camel.main.routesReloadDirectory
+Directory to scan for route changes.
+Camel cannot scan the classpath, so this must be configured to a file
+directory. When developing with Maven as the build tool, you can
+configure the directory to be src/main/resources to scan for Camel
+routes in XML or YAML files.
+src/main/resources/camel
+String
+
+
+camel.main.routesReloadDirectoryRecursive
+Whether the directory to scan should
+include sub directories. Depending on the number of sub directories,
+this can cause the JVM to start up slower, as Camel uses the JDK
+file-watch service to scan for file changes.
+false
+boolean
+
+
+camel.main.routesReloadEnabled
+Used for enabling automatic routes
+reloading. If enabled then Camel will watch for file changes in the
+given reload directory, and trigger reloading routes if files are
+changed.
+false
+boolean
+
+
+camel.main.routesReloadPattern
+Used for inclusive filtering of routes
+from directories. Typical used for specifying to accept routes in XML or
+YAML files, such as .yaml,.xml. Multiple patterns can be specified
+separated by comma.
+
+String
+
+
+camel.main.routesReloadRemoveAllRoutes
+When reloading routes should all
+existing routes be stopped and removed. By default, Camel will stop and
+remove all existing routes before reloading routes. This ensures that
+only the reloaded routes will be active. If disabled, then only routes
+with the same route id are updated, and any other existing routes
+continue to run.
+true
+boolean
+
+
+camel.main.routesReloadRestartDuration
+Whether to restart max duration when
+routes are reloaded. For example if max duration is 60 seconds, and a
+route is reloaded after 25 seconds, then this will restart the count and
+wait 60 seconds again.
+false
+boolean
+
+
+camel.main.shutdownLogInflightExchangesOnTimeout
+Sets whether to log information about
+the inflight Exchanges which are still running during a shutdown that
+did not complete within the given timeout. This requires enabling the
+option inflightRepositoryBrowseEnabled.
+true
+boolean
+
+
+camel.main.shutdownNowOnTimeout
+Sets whether to force shutdown of all
+consumers when a timeout occurred and thus not all consumers were
+shut down within that period. You should have good reasons to set this
+option to false, as it means that the routes keep running and are halted
+abruptly when the CamelContext has been shut down.
+true
+boolean
+
+
+camel.main.shutdownRoutesInReverseOrder
+Sets whether routes should be shutdown
+in reverse or the same order as they were started.
+true
+boolean
+
+
+camel.main.shutdownSuppressLoggingOnTimeout
+Whether Camel should try to suppress
+logging during shutdown when the timeout was triggered, meaning a forced
+shutdown is happening. During forced shutdown, we want to avoid
+logging errors/warnings and the like as a side-effect of the
+forced timeout. Notice the suppression is best effort, as there may
+still be some logs coming from 3rd party libraries, which Camel
+cannot control. This option is default false.
+false
+boolean
+
+
+camel.main.shutdownTimeout
+Timeout in seconds to graceful shutdown
+all the Camel routes.
+45
+int
+
+
+camel.main.sourceLocationEnabled
+Whether to capture precise source
+location:line-number for all EIPs in Camel routes. Enabling this will
+impact parsing Java based routes (also Groovy, etc.) on startup, as this
+uses JDK StackTraceElement to calculate the location from the Camel
+route, which comes with a performance cost. This only impacts startup,
+not the performance of the routes at runtime.
+false
+boolean
+
+
+camel.main.startupRecorder
+To use startup recorder for capturing
+execution time during starting Camel. The recorder can be one of: false
+(or off), logging, backlog, java-flight-recorder (or jfr).
+
+String
+
+
+camel.main.startupRecorderDir
+Directory to store the recording. By
+default the current directory will be used. Use false to turn off saving
+recording to disk.
+
+String
+
+
+camel.main.startupRecorderDuration
+How long to run the startup
+recorder. Use 0 (default) to keep the recorder running until the JVM is
+exited. Use -1 to stop the recorder right after Camel has been started
+(to focus only on potential Camel startup performance bottlenecks). Use
+a positive value to keep recording for N seconds. When the recorder is
+stopped, the recording is auto-saved to disk (note: saving to disk can
+be disabled by setting startupRecorderDir to false).
+
+long
+
+
+camel.main.startupRecorderMaxDepth
+To filter out sub steps at a maximum
+depth. Use -1 for no maximum. Use 0 for no sub steps. Use 1 for max 1
+sub step, and so forth. The default is -1.
+-1
+int
+
+
+camel.main.startupRecorderProfile
+To use a specific Java Flight Recorder
+profile configuration, such as default or profile. The default is
+default.
+default
+String
+
+
+camel.main.startupRecorderRecording
+To enable Java Flight Recorder to start
+a recording and automatically dump the recording to disk after startup
+is complete. This requires that camel-jfr is on the classpath, and that
+this option is enabled.
+false
+boolean
+
+
+camel.main.startupSummaryLevel
+Controls the level of information
+logged during startup (and shutdown) of CamelContext.
+Default
+StartupSummaryLevel
+
+
+camel.main.streamCachingAllowClasses
+To filter stream caching of a given set
+of allowed/denied classes. By default, all classes that are
+java.io.InputStream are allowed. Multiple class names can be separated by
+comma.
+
+String
+
+
+camel.main.streamCachingAnySpoolRules
+Sets whether it is enough for just any of the
+org.apache.camel.spi.StreamCachingStrategy.SpoolRule rules to return
+true for shouldSpoolCache(long) to return true, allowing spooling to
+disk. If this option is false, then all the
+org.apache.camel.spi.StreamCachingStrategy.SpoolRule rules must return
+true. The default value is false, which means that all the rules must
+return true.
+false
+boolean
+
+
+camel.main.streamCachingBufferSize
+Sets the stream caching buffer size to
+use when allocating in-memory buffers used for in-memory stream caches.
+The default size is 4096.
+
+int
+
+
+camel.main.streamCachingDenyClasses
+To filter stream caching of a given set
+of allowed/denied classes. By default, all classes that are
+java.io.InputStream are allowed. Multiple class names can be separated by
+comma.
+
+String
+
+
+camel.main.streamCachingEnabled
+Sets whether stream caching is enabled
+or not. While stream types (like StreamSource, InputStream and Reader)
+are commonly used in messaging for performance reasons, they also have
+an important drawback: they can only be read once. In order to be able
+to work with message content multiple times, the stream needs to be
+cached. Streams are cached in memory only (by default). If
+streamCachingSpoolEnabled=true, then large stream messages (over
+128 KB by default) will be cached in a temporary file instead, and Camel
+will handle deleting the temporary file once the cached stream is no
+longer necessary. Default is true.
+true
+boolean
+
+
+camel.main.streamCachingRemoveSpoolDirectoryWhenStopping
+Whether to remove stream caching
+temporary directory when stopping. This option is default true.
+true
+boolean
+
+
+camel.main.streamCachingSpoolCipher
+Sets a stream caching cipher name to
+use when spooling to disk to write with encryption. By default the data
+is not encrypted.
+
+String
+
+
+camel.main.streamCachingSpoolDirectory
+Sets the stream caching spool
+(temporary) directory to use for overflow and spooling to disk. If no
+spool directory has been explicit configured, then a temporary directory
+is created in the java.io.tmpdir directory.
+
+String
+
+
+camel.main.streamCachingSpoolEnabled
+To enable stream caching spooling to
+disk. This means large stream messages (over 128 KB by default)
+will be cached in a temporary file instead, and Camel will handle
+deleting the temporary file once the cached stream is no longer
+necessary. Default is false.
+false
+boolean
+
+
+camel.main.streamCachingSpoolThreshold
+Stream caching threshold in bytes when
+overflow to disk is activated. The default threshold is 128 KB. Use -1 to
+disable overflow to disk.
+
+long
+
+
+camel.main.streamCachingSpoolUsedHeapMemoryLimit
+Sets what the upper bounds should be
+when streamCachingSpoolUsedHeapMemoryThreshold is in use.
+
+String
+
+
+camel.main.streamCachingSpoolUsedHeapMemoryThreshold
+Sets a percentage (1-99) of used heap
+memory threshold to activate stream caching spooling to disk.
+
+int
+
+
+camel.main.streamCachingStatisticsEnabled
+Sets whether stream caching statistics
+is enabled.
+false
+boolean
+
+
+camel.main.threadNamePattern
+Sets the thread name pattern used for
+creating the full thread name. The default pattern is: Camel (camelId)
+thread #counter - name, where camelId is the name of the CamelContext,
+counter is a unique incrementing counter, and name is the regular
+thread name. You can also use longName, which is the long thread name
+that can include endpoint parameters, etc.
+
+String
+
+
+camel.main.tracing
+Sets whether tracing is enabled or not.
+Default is false.
+false
+boolean
+
+
+camel.main.tracingLoggingFormat
+To use a custom tracing logging format.
+The default format (arrow, routeId, label) is: %-4.4s [%-12.12s]
+[%-33.33s]
+%-4.4s [%-12.12s] [%-33.33s]
+String
+
+
+camel.main.tracingPattern
+Tracing pattern to match which node
+EIPs to trace. For example, to match all To EIP nodes, use to*. The
+pattern matches by node and route ids. Multiple patterns can be
+separated by comma.
+
+String
+
+
+camel.main.tracingStandby
+Whether to set tracing on standby. If
+on standby then the tracer is installed and made available. Then the
+tracer can be enabled later at runtime via JMX or via
+Tracer#setEnabled(boolean) .
+false
+boolean
+
+
+camel.main.tracingTemplates
+Whether tracing should trace inner
+details from route templates (or kamelets). Turning this on increases
+the verbosity of tracing by including events from internal routes in the
+templates or kamelets. Default is false.
+false
+boolean
+
+
+camel.main.typeConverterStatisticsEnabled
+Sets whether type converter statistics
+are enabled. By default, the type converter utilization statistics are
+disabled. Notice: if enabled, there is a slight performance impact
+under very heavy load.
+false
+boolean
+
+
+camel.main.useBreadcrumb
+Set whether breadcrumb is enabled. The
+default value is false.
+false
+boolean
+
+
+camel.main.useDataType
+Whether to enable using data types on
+Camel messages. Data types are automatically turned on if one or more
+routes have been explicitly configured with input and output types.
+Otherwise data type is default off.
+false
+boolean
+
+
+camel.main.useMdcLogging
+To turn on MDC logging
+false
+boolean
+
+
+camel.main.uuidGenerator
+UUID generator to use: default (32
+bytes), short (16 bytes), classic (32 bytes or longer), simple (long
+incrementing counter), off (turned off for exchanges; only intended for
+performance profiling).
+default
+String
+
+
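+As an illustrative sketch (option values here are example assumptions,
+not recommendations), several of the camel.main options above can be
+combined in application.properties to enable live routes reloading and
+route filtering:
+
+```properties
+# reload XML/YAML routes from disk during development
+camel.main.routesReloadEnabled = true
+camel.main.routesReloadDirectory = src/main/resources/camel
+camel.main.routesReloadPattern = *.yaml,*.xml
+# only run routes whose id starts with foo, but never JMS routes
+camel.main.routeFilterIncludePattern = foo*
+camel.main.routeFilterExcludePattern = jms:*
+```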
+
+
+## Camel Route Controller configurations
+
+The camel.routecontroller supports 12 options, which are listed below.
+
+camel.routecontroller.backOffDelay
+Backoff delay in millis when restarting
+a route that failed to startup.
+2000
+long
+
+
+camel.routecontroller.backOffMaxAttempts
+Backoff maximum number of attempts to
+restart a route that failed to startup. When this threshold has been
+exceeded, the controller gives up attempting to restart the
+route, and the route will remain stopped.
+
+long
+
+
+camel.routecontroller.backOffMaxDelay
+Backoff maximum delay in millis when
+restarting a route that failed to startup.
+
+long
+
+
+camel.routecontroller.backOffMaxElapsedTime
+Backoff maximum elapsed time in millis,
+after which the backoff should be considered exhausted and no more
+attempts should be made.
+
+long
+
+
+camel.routecontroller.backOffMultiplier
+Backoff multiplier to use for
+exponential backoff. This is used to extend the delay between restart
+attempts.
+
+double
+
+
+camel.routecontroller.enabled
+To enable using the supervising route
+controller, which allows Camel to start up and then lets the controller
+take care of starting the routes in a safe manner. This can be used when
+you want Camel to start up even though a route may otherwise fail fast
+during startup and cause Camel to fail to start up as well. By
+delegating the route startup to the supervising route controller, the
+startup is managed using a background thread. The controller can be
+configured with various settings to attempt to restart failing
+routes.
+false
+boolean
+
+
+camel.routecontroller.excludeRoutes
+Pattern for filtering routes to be
+excluded as supervised. The pattern matches on the route id and the
+endpoint uri of the route. Multiple patterns can be separated by comma.
+For example, to exclude all JMS routes, you can say jms:*. And to
+exclude routes with specific route ids:
+mySpecialRoute,myOtherSpecialRoute. The pattern supports wildcards and
+uses the matcher from org.apache.camel.support.PatternHelper#matchPattern.
+
+String
+
+
+camel.routecontroller.includeRoutes
+Pattern for filtering routes to be
+included as supervised. The pattern matches on the route id and the
+endpoint uri of the route. Multiple patterns can be separated by comma.
+For example, to include all Kafka routes, you can say kafka:*. And to
+include routes with specific route ids: myRoute,myOtherRoute. The
+pattern supports wildcards and uses the matcher from
+org.apache.camel.support.PatternHelper#matchPattern.
+
+String
+
+
+camel.routecontroller.initialDelay
+Initial delay in milliseconds before
+the route controller starts, after CamelContext has been
+started.
+
+long
+
+
+camel.routecontroller.threadPoolSize
+The number of threads used by the route
+controller scheduled thread pool that are used for restarting routes.
+The pool uses 1 thread by default, but you can increase this to allow
+the controller to concurrently attempt to restart multiple routes in
+case more than one route has problems starting.
+1
+int
+
+
+camel.routecontroller.unhealthyOnExhausted
+Whether to mark the route as unhealthy
+(down) when all restart attempts (backoff) have failed, the route
+was not successfully started, and the route manager is giving up.
+Setting this to false makes health checks ignore this problem and
+allows reporting the Camel application as UP.
+true
+boolean
+
+
+camel.routecontroller.unhealthyOnRestarting
+Whether to mark the route as unhealthy
+(down) when the route failed to start initially, and is being
+controlled for restarting (backoff). Setting this to false makes health
+checks ignore this problem and allows reporting the Camel application
+as UP.
+true
+boolean
+
+
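+A minimal sketch of enabling the supervising route controller in
+application.properties (the backoff values and route patterns are
+illustrative assumptions):
+
+```properties
+# let a background thread start routes and retry failures
+camel.routecontroller.enabled = true
+camel.routecontroller.backOffDelay = 5000
+camel.routecontroller.backOffMaxAttempts = 10
+# only supervise Kafka and JMS routes
+camel.routecontroller.includeRoutes = kafka:*,jms:*
+```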
+
+
+## Camel Embedded HTTP Server (only for standalone; not Spring Boot or Quarkus) configurations
+
+The camel.server supports 23 options, which are listed below.
+
+camel.server.authenticationEnabled
+Whether to enable HTTP authentication
+for embedded server (for standalone applications; not Spring Boot or
+Quarkus).
+false
+boolean
+
+
+camel.server.authenticationPath
+Set HTTP url path of embedded server
+that is protected by authentication configuration.
+
+String
+
+
+camel.server.basicPropertiesFile
+Name of the file that contains basic
+authentication info for Vert.x file auth provider.
+
+String
+
+
+camel.server.devConsoleEnabled
+Whether to enable developer console
+(not intended for production use). Dev console must also be enabled on
+CamelContext. For example by setting camel.context.dev-console=true in
+application.properties, or via code camelContext.setDevConsole(true); If
+enabled then you can access a basic developer console on context-path:
+/q/dev.
+false
+boolean
+
+
+camel.server.downloadEnabled
+Whether to enable file download via
+HTTP. This makes it possible to browse and download resource source
+files such as Camel XML or YAML routes. Only enable this for
+development, troubleshooting or special situations for management and
+monitoring.
+false
+boolean
+
+
+camel.server.enabled
+Whether embedded HTTP server is
+enabled. By default, the server is not enabled.
+false
+boolean
+
+
+camel.server.healthCheckEnabled
+Whether to enable health-check console.
+If enabled then you can access health-check status on context-path:
+/q/health
+false
+boolean
+
+
+camel.server.host
+Hostname to use for binding embedded
+HTTP server
+0.0.0.0
+String
+
+
+camel.server.infoEnabled
+Whether to enable info console. If
+enabled then you can see some basic Camel information at
+/q/info
+false
+boolean
+
+
+camel.server.jolokiaEnabled
+Whether to enable jolokia. If enabled
+then you can access jolokia api on context-path: /q/jolokia
+false
+boolean
+
+
+camel.server.jwtKeystorePassword
+Password from the keystore used for JWT
+tokens validation.
+
+String
+
+
+camel.server.jwtKeystorePath
+Path to the keystore file used for JWT
+tokens validation.
+
+String
+
+
+camel.server.jwtKeystoreType
+Type of the keystore used for JWT
+tokens validation (jks, pkcs12, etc.).
+
+String
+
+
+camel.server.maxBodySize
+Maximum HTTP body size the embedded
+HTTP server can accept.
+
+Long
+
+
+camel.server.metricsEnabled
+Whether to enable metrics. If enabled
+then you can access metrics on context-path: /q/metrics
+false
+boolean
+
+
+camel.server.path
+Context-path to use for embedded HTTP
+server
+/
+String
+
+
+camel.server.port
+Port to use for binding embedded HTTP
+server
+8080
+int
+
+
+camel.server.sendEnabled
+Whether to enable sending messages to
+Camel via HTTP. This makes it possible to use Camel to send messages to
+Camel endpoint URIs via HTTP.
+false
+boolean
+
+
+camel.server.staticContextPath
+The context-path to use for serving
+static content. By default, the root path is used. If there is an
+index.html page, it is automatically loaded.
+/
+String
+
+
+camel.server.staticEnabled
+Whether serving static files is
+enabled. If enabled then Camel can host html/js and other web files that
+makes it possible to include small web applications.
+false
+boolean
+
+
+camel.server.uploadEnabled
+Whether to enable file upload via HTTP
+(not intended for production use). This functionality is for development
+to be able to reload Camel routes and code with source changes (if
+reload is enabled). If enabled then you can upload/delete files via HTTP
+PUT/DELETE on context-path: /q/upload/{name}. You must also configure
+the uploadSourceDir option.
+false
+boolean
+
+
+camel.server.uploadSourceDir
+Source directory when upload is
+enabled.
+
+String
+
+
+camel.server.useGlobalSslContextParameters
+Whether to use global SSL configuration
+for securing the embedded HTTP server.
+false
+boolean
+
+
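+As an illustration, a sketch of enabling the embedded HTTP server with
+the health and metrics consoles (the values are assumptions, not
+recommendations):
+
+```properties
+camel.server.enabled = true
+camel.server.port = 8080
+# expose /q/health and /q/metrics
+camel.server.healthCheckEnabled = true
+camel.server.metricsEnabled = true
+```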
+
+
+## Camel Debugger configurations
+
+The camel.debug supports 15 options, which are listed below.
+
+camel.debug.bodyIncludeFiles
+Whether to include the message body of
+file based messages. The overhead is that the file content has to be
+read from the file.
+true
+boolean
+
+
+camel.debug.bodyIncludeStreams
+Whether to include the message body of
+stream based messages. If enabled then beware the stream may not be
+re-readable later. See more about Stream Caching.
+false
+boolean
+
+
+camel.debug.bodyMaxChars
+To limit the message body to a maximum
+size in the traced message. Use 0 or negative value to use unlimited
+size.
+32768
+int
+
+
+camel.debug.breakpoints
+Allows pre-configuring breakpoints
+(node ids) to use with the debugger on startup. Multiple ids can be
+separated by comma. Use the special value all_routes to add a
+breakpoint at the first node of every route; in other words, this makes
+it easy to debug from the beginning of every route without knowing the
+exact node ids.
+
+String
+
+
+camel.debug.enabled
+Enables Debugger in your Camel
+application.
+false
+boolean
+
+
+camel.debug.fallbackTimeout
+Fallback timeout in seconds (300
+seconds by default) used when blocking the message processing in Camel;
+a timeout used for waiting for a message to arrive at a given
+breakpoint.
+300
+long
+
+
+camel.debug.includeException
+Trace messages to include exception if
+the message failed
+true
+boolean
+
+
+camel.debug.includeExchangeProperties
+Whether to include the exchange
+properties in the traced message
+true
+boolean
+
+
+camel.debug.includeExchangeVariables
+Whether to include the exchange
+variables in the traced message
+true
+boolean
+
+
+camel.debug.jmxConnectorEnabled
+Whether to create a JMX connector that
+allows tooling to control the Camel debugger. This is what the IDEA and
+VSCode tooling uses.
+true
+boolean
+
+
+camel.debug.jmxConnectorPort
+Port number to expose a JMX RMI
+connector for tooling that needs to control the debugger.
+1099
+int
+
+
+camel.debug.loggingLevel
+The debugger logging level to use when
+logging activity.
+INFO
+LoggingLevel
+
+
+camel.debug.singleStepIncludeStartEnd
+In single step mode, when the
+exchange is created and completed, simulate a breakpoint at start
+and end. This allows suspending and watching the incoming/completed
+exchange at the route (you can see the message body as response, failed
+exception, etc.).
+false
+boolean
+
+
+camel.debug.standby
+To set the debugger in standby mode,
+where the debugger will be installed but not automatically enabled. The
+debugger can then later be enabled explicitly from Java, JMX or
+tooling.
+false
+boolean
+
+
+camel.debug.waitForAttach
+Whether the debugger should suspend on
+startup, and wait for a remote debugger to attach. This is what the IDEA
+and VSCode tooling uses.
+false
+boolean
+
+
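+An illustrative application.properties sketch for the debugger options
+above (the values are assumptions):
+
+```properties
+camel.debug.enabled = true
+# break at the first node of every route
+camel.debug.breakpoints = all_routes
+# let IDE tooling attach via JMX RMI
+camel.debug.jmxConnectorEnabled = true
+camel.debug.jmxConnectorPort = 1099
+```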
+
+
+## Camel Tracer configurations
+
+The camel.trace supports 14 options, which are listed below.
+
+camel.trace.backlogSize
+Defines how many of the last messages
+to keep in the tracer (should be between 1 and 1000).
+100
+int
+
+
+camel.trace.bodyIncludeFiles
+Whether to include the message body of
+file based messages. The overhead is that the file content has to be
+read from the file.
+true
+boolean
+
+
+camel.trace.bodyIncludeStreams
+Whether to include the message body of
+stream based messages. If enabled then beware the stream may not be
+re-readable later. See more about Stream Caching.
+false
+boolean
+
+
+camel.trace.bodyMaxChars
+To limit the message body to a maximum
+size in the traced message. Use 0 or negative value to use unlimited
+size.
+32768
+int
+
+
+camel.trace.enabled
+Enables tracer in your Camel
+application.
+false
+boolean
+
+
+camel.trace.includeException
+Trace messages to include exception if
+the message failed
+true
+boolean
+
+
+camel.trace.includeExchangeProperties
+Whether to include the exchange
+properties in the traced message
+true
+boolean
+
+
+camel.trace.includeExchangeVariables
+Whether to include the exchange
+variables in the traced message
+true
+boolean
+
+
+camel.trace.removeOnDump
+Whether all traced messages should be
+removed when the tracer is dumping. By default, the messages are
+removed, which means that a dump will not contain previously dumped
+messages.
+true
+boolean
+
+
+camel.trace.standby
+To set the tracer in standby mode,
+where the tracer will be installed but not automatically enabled. The
+tracer can then later be enabled explicitly from Java, JMX or tooling.
+false
+boolean
+
+
+camel.trace.traceFilter
+Filter for tracing messages
+
+String
+
+
+camel.trace.tracePattern
+Filter for tracing by route or node
+id
+
+String
+
+
+camel.trace.traceRests
+Whether to trace routes that are created
+from Rest DSL.
+false
+boolean
+
+
+camel.trace.traceTemplates
+Whether to trace routes that are created
+from route templates or kamelets.
+false
+boolean
+
+
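+A short illustrative sketch of the tracer options above (the values are
+assumptions):
+
+```properties
+camel.trace.enabled = true
+# only trace To EIP nodes, and keep the last 50 messages
+camel.trace.tracePattern = to*
+camel.trace.backlogSize = 50
+```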
+
+
+## Camel SSL configurations
+
+The camel.ssl supports 19 options, which are listed below.
+
+camel.ssl.certAlias
+An optional certificate alias to use.
+This is useful when the keystore has multiple certificates.
+
+String
+
+
+camel.ssl.cipherSuites
+List of TLS/SSL cipher suite algorithm
+names. Multiple names can be separated by comma.
+
+String
+
+
+camel.ssl.cipherSuitesExclude
+Filters TLS/SSL cipher suites
+algorithms names. This filter is used for excluding algorithms that
+matches the naming pattern. Multiple names can be separated by comma.
+Notice that if the cipherSuites option has been configured then the
+include/exclude filters are not in use.
+
+String
+
+
+camel.ssl.cipherSuitesInclude
+Filters TLS/SSL cipher suites
+algorithms names. This filter is used for including algorithms that
+matches the naming pattern. Multiple names can be separated by comma.
+Notice that if the cipherSuites option has been configured then the
+include/exclude filters are not in use.
+
+String
+
+
+camel.ssl.clientAuthentication
+Sets the configuration for server-side
+client-authentication requirements
+NONE
+String
+
+
+camel.ssl.enabled
+Enables SSL in your Camel
+application.
+false
+boolean
+
+
+camel.ssl.keyManagerAlgorithm
+Algorithm name used for creating the
+KeyManagerFactory. See
+https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html
+
+String
+
+
+camel.ssl.keyManagerProvider
+To use a specific provider for creating
+KeyManagerFactory. The list of available providers returned by
+java.security.Security.getProviders() or null to use the highest
+priority provider implementing the secure socket protocol.
+
+String
+
+
+camel.ssl.keyStore
+The key store to load. The key store is
+by default loaded from classpath. If you must load from file system,
+then use file: as prefix. file:nameOfFile (to refer to the file system)
+classpath:nameOfFile (to refer to the classpath; default) http:uri (to
+load the resource using HTTP) ref:nameOfBean (to lookup an existing
+KeyStore instance from the registry, for example for testing and
+development).
+
+String
+
+
+camel.ssl.keystorePassword
+Sets the SSL Keystore
+password.
+
+String
+
+
+camel.ssl.keyStoreProvider
+To use a specific provider for creating
+KeyStore. The list of available providers returned by
+java.security.Security.getProviders() or null to use the highest
+priority provider implementing the secure socket protocol.
+
+String
+
+
+camel.ssl.keyStoreType
+The type of the key store to load. See
+https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html
+
+String
+
+
+camel.ssl.provider
+To use a specific provider for creating
+SSLContext. The list of available providers returned by
+java.security.Security.getProviders() or null to use the highest
+priority provider implementing the secure socket protocol.
+
+String
+
+
+camel.ssl.secureRandomAlgorithm
+Algorithm name used for creating the
+SecureRandom. See
+https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html
+
+String
+
+
+camel.ssl.secureRandomProvider
+To use a specific provider for creating
+SecureRandom. The list of available providers returned by
+java.security.Security.getProviders() or null to use the highest
+priority provider implementing the secure socket protocol.
+
+String
+
+
+camel.ssl.secureSocketProtocol
+The protocol for the secure sockets
+created by the SSLContext. See
+https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html
+TLSv1.3
+String
+
+
+camel.ssl.sessionTimeout
+Timeout in seconds to use for
+SSLContext. The default is 24 hours.
+86400
+int
+
+
+camel.ssl.trustStore
+The trust store to load. The trust
+store is by default loaded from classpath. If you must load from file
+system, then use file: as prefix. file:nameOfFile (to refer to the file
+system) classpath:nameOfFile (to refer to the classpath; default)
+http:uri (to load the resource using HTTP) ref:nameOfBean (to lookup an
+existing KeyStore instance from the registry, for example for testing
+and development).
+
+String
+
+
+camel.ssl.trustStorePassword
+Sets the SSL Truststore
+password.
+
+String
+
+
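+An illustrative sketch of a global SSL configuration (the keystore path
+and password below are hypothetical placeholders):
+
+```properties
+camel.ssl.enabled = true
+# hypothetical keystore location and credentials
+camel.ssl.keyStore = file:/etc/camel/keystore.p12
+camel.ssl.keystorePassword = changeit
+camel.ssl.keyStoreType = pkcs12
+camel.ssl.secureSocketProtocol = TLSv1.3
+```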
+
+
+## Camel Thread Pool configurations
+
+The camel.threadpool supports 8 options, which are listed below.
+
+camel.threadpool.allowCoreThreadTimeOut
+Sets the default for whether to allow
+core threads to time out.
+false
+Boolean
+
+
+camel.threadpool.config
+Adds a configuration for a specific
+thread pool profile (inherits default values)
+
+Map
+
+
+camel.threadpool.keepAliveTime
+Sets the default keep alive time for
+inactive threads
+
+Long
+
+
+camel.threadpool.maxPoolSize
+Sets the default maximum pool
+size
+
+Integer
+
+
+camel.threadpool.maxQueueSize
+Sets the default maximum number of
+tasks in the work queue. Use -1 for an unbounded queue.
+
+Integer
+
+
+camel.threadpool.poolSize
+Sets the default core pool size
+(threads to keep minimum in pool)
+
+Integer
+
+
+camel.threadpool.rejectedPolicy
+Sets the default handler for tasks
+which cannot be executed by the thread pool.
+
+ThreadPoolRejectedPolicy
+
+
+camel.threadpool.timeUnit
+Sets the default time unit used for
+keep alive time
+
+TimeUnit
+
+
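+For illustration (the values are assumptions), the default thread pool
+profile could be tuned like this:
+
+```properties
+camel.threadpool.poolSize = 5
+camel.threadpool.maxPoolSize = 20
+camel.threadpool.maxQueueSize = 1000
+# reject tasks when the queue is full
+camel.threadpool.rejectedPolicy = Abort
+```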
+
+
+## Camel Health Check configurations
+
+The camel.health supports 8 options, which are listed below.
+
+camel.health.consumersEnabled
+Whether consumers health check is
+enabled
+true
+Boolean
+
+
+camel.health.enabled
+Whether health check is enabled
+globally
+true
+Boolean
+
+
+camel.health.excludePattern
+Pattern to exclude health checks from
+being invoked by Camel when checking health. Multiple patterns can be
+separated by comma.
+
+String
+
+
+camel.health.exposureLevel
+Sets the level of detail to expose
+as the result of invoking health checks. There are the following levels:
+full, default, oneline. The full level includes all details and
+status from all the invoked health checks. The default level reports
+UP if everything is okay, and only includes detailed information for
+health checks that were DOWN. The oneline level only reports either
+UP or DOWN.
+default
+String
+
+
+camel.health.initialState
+The initial state of health-checks
+(readiness). There are the following states: UP, DOWN, UNKNOWN. By
+default, the state is DOWN, which is regarded as pessimistic/careful.
+This means that the overall health checks may report as DOWN during
+startup, and only flip to UP once everything is up and running.
+Setting the initial state to UP is regarded as optimistic. This
+means that the overall health checks may report as UP during startup
+and then, if a consumer or other service is in fact unhealthy, the
+health-checks can flip to DOWN. Setting the state to UNKNOWN means that
+some health-checks would be reported in an unknown state, especially
+during early bootstrap where a consumer may not yet be fully
+initialized or have validated a connection to a remote system. This
+option allows pre-configuring the state for different modes.
+down
+String
+
+
+camel.health.producersEnabled
+Whether producers health check is
+enabled
+false
+Boolean
+
+
+camel.health.registryEnabled
+Whether registry health check is
+enabled
+true
+Boolean
+
+
+camel.health.routesEnabled
+Whether routes health check is
+enabled
+true
+Boolean
+
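+For example, the health options can be combined in `application.properties` to expose full details and start optimistically (the exclude pattern `my-check*` is a hypothetical name):
+
+    camel.health.enabled = true
+    camel.health.exposureLevel = full
+    camel.health.initialState = up
+    camel.health.excludePattern = my-check*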
+
+
+
+## Camel Rest-DSL configurations
+
+The `camel.rest` supports 29 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.rest.apiComponent | Sets the name of the Camel component to use as the REST API (such as swagger or openapi) |  | String |
+| camel.rest.apiContextPath | Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. |  | String |
+| camel.rest.apiContextRouteId | Sets the route id to use for the route that services the REST API. The route will by default use an auto-assigned route id. |  | String |
+| camel.rest.apiHost | To use a specific hostname for the API documentation (such as swagger or openapi). This can be used to override the generated host with this configured hostname. |  | String |
+| camel.rest.apiProperties | Sets additional options on API level |  | Map |
+| camel.rest.apiVendorExtension | Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (e.g. keys starting with x-) such as route ids, class names, etc. Not all 3rd-party API gateways and tools support vendor extensions when importing your API docs. | false | boolean |
+| camel.rest.bindingMode | Sets the binding mode to be used by the REST consumer | RestBindingMode.off | RestBindingMode |
+| camel.rest.bindingPackageScan | Package name to use as base (offset) for classpath scanning of POJO classes when binding mode is enabled for JSon or XML. Multiple package names can be separated by comma. |  | String |
+| camel.rest.clientRequestValidation | Whether to enable validation of the client request to check: 1) Content-Type header matches what the Rest DSL consumes; returns HTTP Status 415 on validation error. 2) Accept header matches what the Rest DSL produces; returns HTTP Status 406 on validation error. 3) Missing required data (query parameters, HTTP headers, body); returns HTTP Status 400 on validation error. 4) Parsing error of the message body (JSon, XML or Auto binding mode must be enabled); returns HTTP Status 400 on validation error. | false | boolean |
+| camel.rest.component | Sets the name of the Camel component to use as the REST consumer |  | String |
+| camel.rest.componentProperties | Sets additional options on component level |  | Map |
+| camel.rest.consumerProperties | Sets additional options on consumer level |  | Map |
+| camel.rest.contextPath | Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path, or for components such as camel-jetty or camel-netty-http that include an HTTP server. |  | String |
+| camel.rest.corsHeaders | Sets the CORS headers to use if CORS has been enabled |  | Map |
+| camel.rest.dataFormatProperties | Sets additional options on data format level |  | Map |
+| camel.rest.enableCORS | Whether to enable CORS, which means Camel will automatically include CORS in the HTTP headers in the response. This option is default false. | false | boolean |
+| camel.rest.enableNoContentResponse | Whether to return HTTP 204 with an empty body when a response contains an empty JSON object or XML root object. The default value is false. | false | boolean |
+| camel.rest.endpointProperties | Sets additional options on endpoint level |  | Map |
+| camel.rest.host | Sets the hostname to use by the REST consumer |  | String |
+| camel.rest.hostNameResolver | Sets the resolver to use for resolving hostname | RestHostNameResolver.allLocalIp | RestHostNameResolver |
+| camel.rest.inlineRoutes | Inline routes in rest-dsl which are linked using direct endpoints. Each service in Rest DSL is an individual route, meaning that you would have at least two routes per service (the rest-dsl, and the route linked from the rest-dsl). Inlining (the default) allows Camel to optimize and inline this as a single route; however, this requires using direct endpoints, which must be unique per service. If a route is not using a direct endpoint then the rest-dsl is not inlined, and will become an individual route. This option is default true. | true | boolean |
+| camel.rest.jsonDataFormat | Sets a custom JSON data format to be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. |  | String |
+| camel.rest.port | Sets the port to use by the REST consumer |  | int |
+| camel.rest.producerApiDoc | Sets the location of the API document (swagger api) the REST producer will use to validate that the REST uri and query parameters are valid according to the API document. This requires adding camel-openapi-java to the classpath, and any misconfiguration will let Camel fail on startup and report the error(s). The location of the API document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. |  | String |
+| camel.rest.producerComponent | Sets the name of the Camel component to use as the REST producer |  | String |
+| camel.rest.scheme | Sets the scheme to use by the REST consumer |  | String |
+| camel.rest.skipBindingOnErrorCode | Whether to skip binding output if there is a custom HTTP error code, and instead use the response body as-is. This option is default true. | true | boolean |
+| camel.rest.useXForwardHeaders | Whether to use X-Forward headers to set host etc. for OpenApi. This may be needed in special cases involving reverse-proxy and networking going from HTTP to HTTPS etc. Then the proxy can send X-Forward headers (X-Forwarded-Proto) that influence the host names in the OpenAPI schema that camel-openapi-java generates from Rest DSL routes. | false | boolean |
+| camel.rest.xmlDataFormat | Sets a custom XML data format to be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. |  | String |
+
+
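+A minimal Rest DSL setup in `application.properties` might look like the following; the chosen component, host, and port are illustrative:
+
+    camel.rest.component = netty-http
+    camel.rest.host = 0.0.0.0
+    camel.rest.port = 8080
+    camel.rest.bindingMode = json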
+
+
+## Camel AWS Vault configurations
+
+The `camel.vault.aws` supports 11 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.vault.aws.accessKey | The AWS access key |  | String |
+| camel.vault.aws.defaultCredentialsProvider | Define if we want to use the AWS Default Credentials Provider or not | false | boolean |
+| camel.vault.aws.profileCredentialsProvider | Define if we want to use the AWS Profile Credentials Provider or not | false | boolean |
+| camel.vault.aws.profileName | Define the profile name to use if the Profile Credentials Provider is selected |  | String |
+| camel.vault.aws.refreshEnabled | Whether to automatically reload Camel upon secrets being updated in AWS | false | boolean |
+| camel.vault.aws.refreshPeriod | The period (millis) between checking AWS for updated secrets | 30000 | long |
+| camel.vault.aws.region | The AWS region |  | String |
+| camel.vault.aws.secretKey | The AWS secret key |  | String |
+| camel.vault.aws.secrets | Specify the secret names (or pattern) to check for updates. Multiple secrets can be separated by comma. |  | String |
+| camel.vault.aws.sqsQueueUrl | In case SQS notification is used, this field specifies the Queue URL to use |  | String |
+| camel.vault.aws.useSqsNotification | Whether to use AWS SQS for secrets update notifications; this requires setting up Eventbridge/Cloudtrail/SQS communication | false | boolean |
+
+
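+For example, to check AWS Secrets Manager every 30 seconds and reload Camel when matching secrets change (the credentials and secret name are placeholders):
+
+    camel.vault.aws.accessKey = myAccessKey
+    camel.vault.aws.secretKey = mySecretKey
+    camel.vault.aws.region = eu-west-1
+    camel.vault.aws.refreshEnabled = true
+    camel.vault.aws.refreshPeriod = 30000
+    camel.vault.aws.secrets = myapp-secret*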
+
+
+## Camel GCP Vault configurations
+
+The `camel.vault.gcp` supports 7 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.vault.gcp.projectId | The GCP Project ID |  | String |
+| camel.vault.gcp.refreshEnabled | Whether to automatically reload Camel upon secrets being updated in Google Cloud | false | boolean |
+| camel.vault.gcp.refreshPeriod | The period (millis) between checking Google for updated secrets | 30000 | long |
+| camel.vault.gcp.secrets | Specify the secret names (or pattern) to check for updates. Multiple secrets can be separated by comma. |  | String |
+| camel.vault.gcp.serviceAccountKey | The Service Account Key location |  | String |
+| camel.vault.gcp.subscriptionName | Define the Google Pubsub subscription name to be used when checking for updates |  | String |
+| camel.vault.gcp.useDefaultInstance | Define if we want to use the GCP Client Default Instance or not | false | boolean |
+
+
+
+
+## Camel Azure Key Vault configurations
+
+The `camel.vault.azure` supports 12 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.vault.azure.azureIdentityEnabled | Whether the Azure Identity Authentication should be used or not | false | boolean |
+| camel.vault.azure.blobAccessKey | The Eventhubs Blob Access Key for CheckpointStore purposes |  | String |
+| camel.vault.azure.blobAccountName | The Eventhubs Blob Account Name for CheckpointStore purposes |  | String |
+| camel.vault.azure.blobContainerName | The Eventhubs Blob Container Name for CheckpointStore purposes |  | String |
+| camel.vault.azure.clientId | The client Id for accessing Azure Key Vault |  | String |
+| camel.vault.azure.clientSecret | The client Secret for accessing Azure Key Vault |  | String |
+| camel.vault.azure.eventhubConnectionString | The Eventhubs connection String for Key Vault Secret events notifications |  | String |
+| camel.vault.azure.refreshEnabled | Whether to automatically reload Camel upon secrets being updated in Azure | false | boolean |
+| camel.vault.azure.refreshPeriod | The period (millis) between checking Azure for updated secrets | 30000 | long |
+| camel.vault.azure.secrets | Specify the secret names (or pattern) to check for updates. Multiple secrets can be separated by comma. |  | String |
+| camel.vault.azure.tenantId | The Tenant Id for accessing Azure Key Vault |  | String |
+| camel.vault.azure.vaultName | The vault Name in Azure Key Vault |  | String |
+
+
+
+
+## Camel Kubernetes Vault configurations
+
+The `camel.vault.kubernetes` supports 2 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.vault.kubernetes.refreshEnabled | Whether to automatically reload Camel upon secrets being updated in the Kubernetes cluster | true | boolean |
+| camel.vault.kubernetes.secrets | Specify the secret names (or pattern) to check for updates. Multiple secrets can be separated by comma. |  | String |
+
+
+
+
+## Camel Hashicorp Vault configurations
+
+The `camel.vault.hashicorp` supports 4 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.vault.hashicorp.host | Host to access the Hashicorp vault |  | String |
+| camel.vault.hashicorp.port | Port to access the Hashicorp vault |  | String |
+| camel.vault.hashicorp.scheme | Scheme to access the Hashicorp vault |  | String |
+| camel.vault.hashicorp.token | Token to access the Hashicorp vault |  | String |
+
+
+
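+A sketch of a Hashicorp vault configuration in `application.properties` (host and token are placeholders):
+
+    camel.vault.hashicorp.scheme = https
+    camel.vault.hashicorp.host = vault.local
+    camel.vault.hashicorp.port = 8200
+    camel.vault.hashicorp.token = myToken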
+
+## Camel OpenTelemetry configurations
+
+The `camel.opentelemetry` supports 5 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.opentelemetry.enabled | To enable OpenTelemetry | false | boolean |
+| camel.opentelemetry.encoding | Sets whether the header keys need to be encoded (connector specific) or not. The value is a boolean. Dashes, for instance, need to be encoded for JMS property keys. | false | boolean |
+| camel.opentelemetry.excludePatterns | Adds an exclude pattern that will disable tracing for Camel messages that match the pattern. Multiple patterns can be separated by comma. |  | String |
+| camel.opentelemetry.instrumentationName | A name uniquely identifying the instrumentation scope, such as the instrumentation library, package, or fully qualified class name. Must not be null. | camel | String |
+| camel.opentelemetry.traceProcessors | Setting this to true will create new OpenTelemetry Spans for each Camel Processor. Use the excludePatterns property to filter out Processors. | false | boolean |
+
+
+
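+For instance, tracing with per-processor spans can be enabled in `application.properties` (the exclude pattern is a hypothetical example):
+
+    camel.opentelemetry.enabled = true
+    camel.opentelemetry.traceProcessors = true
+    camel.opentelemetry.excludePatterns = log*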
+
+## Camel Micrometer Metrics configurations
+
+The `camel.metrics` supports 10 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.metrics.binders | Additional Micrometer binders to include, such as jvm-memory, processor, jvm-thread, and so forth. Multiple binders can be separated by comma. The following binders are currently available from Micrometer: class-loader, commons-object-pool2, file-descriptor, hystrix-metrics-binder, jvm-compilation, jvm-gc, jvm-heap-pressure, jvm-info, jvm-memory, jvm-thread, log4j2, logback, processor, uptime |  | String |
+| camel.metrics.clearOnReload | Clear the captured metrics data when Camel is reloading routes, such as when using Camel JBang | true | boolean |
+| camel.metrics.enabled | To enable Micrometer metrics | false | boolean |
+| camel.metrics.enableExchangeEventNotifier | Set whether to enable the MicrometerExchangeEventNotifier for capturing metrics on exchange processing times | true | boolean |
+| camel.metrics.enableMessageHistory | Set whether to enable the MicrometerMessageHistoryFactory for capturing metrics on individual route node processing times. Depending on the number of configured route nodes, there is the potential to create a large volume of metrics. Therefore, this option is disabled by default. | false | boolean |
+| camel.metrics.enableRouteEventNotifier | Set whether to enable the MicrometerRouteEventNotifier for capturing metrics on the total number of routes and the total number of routes running | true | boolean |
+| camel.metrics.enableRoutePolicy | Set whether to enable the MicrometerRoutePolicyFactory for capturing metrics on route processing times | true | boolean |
+| camel.metrics.namingStrategy | Controls the name style to use for metrics. default = uses the Micrometer naming convention. legacy = uses the classic naming style (camelCase). | default | String |
+| camel.metrics.routePolicyLevel | Sets the level of information to capture. all = both context and routes. | all | String |
+| camel.metrics.textFormatVersion | The text-format version to use with Prometheus scraping. 0.0.4 = text/plain; version=0.0.4; charset=utf-8. 1.0.0 = application/openmetrics-text; version=1.0.0; charset=utf-8. | 0.0.4 | String |
+
+
+
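+For example, Micrometer metrics with additional JVM binders can be enabled in `application.properties` (illustrative values):
+
+    camel.metrics.enabled = true
+    camel.metrics.enableMessageHistory = true
+    camel.metrics.binders = jvm-memory,processor,jvm-thread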
+
+## Fault Tolerance EIP Circuit Breaker configurations
+
+The `camel.faulttolerance` supports 13 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.faulttolerance.bulkheadEnabled | Whether bulkhead is enabled or not on the circuit breaker. Default is false. | false | Boolean |
+| camel.faulttolerance.bulkheadExecutorService | Refers to a custom thread pool to use when bulkhead is enabled |  | String |
+| camel.faulttolerance.bulkheadMaxConcurrentCalls | Configures the max amount of concurrent calls the bulkhead will support. Default value is 10. | 10 | Integer |
+| camel.faulttolerance.bulkheadWaitingTaskQueue | Configures the task queue size for holding waiting tasks to be processed by the bulkhead. Default value is 10. | 10 | Integer |
+| camel.faulttolerance.circuitBreaker | Refers to an existing io.smallrye.faulttolerance.core.circuit.breaker.CircuitBreaker instance to look up and use from the registry. When using this, any other circuit breaker options are not in use. |  | String |
+| camel.faulttolerance.delay | Controls how long the circuit breaker stays open. The value is in seconds and the default is 5 seconds. | 5 | Long |
+| camel.faulttolerance.failureRatio | Configures the failure rate threshold in percentage. If the failure rate is equal to or greater than the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percent. | 50 | Integer |
+| camel.faulttolerance.requestVolumeThreshold | Controls the size of the rolling window used when the circuit breaker is closed. Default value is 20. | 20 | Integer |
+| camel.faulttolerance.successThreshold | Controls the number of trial calls which are allowed when the circuit breaker is half-open. Default value is 1. | 1 | Integer |
+| camel.faulttolerance.timeoutDuration | Configures the thread execution timeout. Default value is 1000 milliseconds. | 1000 | Long |
+| camel.faulttolerance.timeoutEnabled | Whether timeout is enabled or not on the circuit breaker. Default is false. | false | Boolean |
+| camel.faulttolerance.timeoutPoolSize | Configures the pool size of the thread pool when timeout is enabled. Default value is 10. | 10 | Integer |
+| camel.faulttolerance.timeoutScheduledExecutorService | Refers to a custom thread pool to use when timeout is enabled |  | String |
+
+
+
+
+## Resilience4j EIP Circuit Breaker configurations
+
+The `camel.resilience4j` supports 20 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.resilience4j.automaticTransitionFromOpenToHalfOpenEnabled | Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed | false | Boolean |
+| camel.resilience4j.bulkheadEnabled | Whether bulkhead is enabled or not on the circuit breaker | false | Boolean |
+| camel.resilience4j.bulkheadMaxConcurrentCalls | Configures the max amount of concurrent calls the bulkhead will support |  | Integer |
+| camel.resilience4j.bulkheadMaxWaitDuration | Configures a maximum amount of time which the calling thread will wait to enter the bulkhead. If the bulkhead has space available, entry is guaranteed and immediate. If the bulkhead is full, calling threads will contest for space, if it becomes available. maxWaitDuration can be set to 0. Note: for threads running on an event-loop or equivalent (rx computation pool, etc.), setting maxWaitDuration to 0 is highly recommended. Blocking an event-loop thread will most likely have a negative effect on application throughput. |  | Integer |
+| camel.resilience4j.circuitBreaker | Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to look up and use from the registry. When using this, any other circuit breaker options are not in use. |  | String |
+| camel.resilience4j.config | Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to look up and use from the registry |  | String |
+| camel.resilience4j.failureRateThreshold | Configures the failure rate threshold in percentage. If the failure rate is equal to or greater than the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percent. | 50 | Float |
+| camel.resilience4j.minimumNumberOfCalls | Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded before the failure rate can be calculated. If only 9 calls have been recorded, the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. | 100 | Integer |
+| camel.resilience4j.permittedNumberOfCallsInHalfOpenState | Configures the number of permitted calls when the CircuitBreaker is half-open. The size must be greater than 0. Default size is 10. | 10 | Integer |
+| camel.resilience4j.slidingWindowSize | Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. The sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize. If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. | 100 | Integer |
+| camel.resilience4j.slidingWindowType | Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. The sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. | COUNT_BASED | String |
+| camel.resilience4j.slowCallDurationThreshold | Configures the duration threshold (seconds) above which calls are considered slow and increase the slow calls percentage. Default value is 60 seconds. | 60 | Integer |
+| camel.resilience4j.slowCallRateThreshold | Configures a threshold in percentage. The CircuitBreaker considers a call slow when the call duration is greater than slowCallDurationThreshold. When the percentage of slow calls is equal to or greater than the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percent, which means that all recorded calls must be slower than slowCallDurationThreshold. | 100 | Float |
+| camel.resilience4j.throwExceptionWhenHalfOpenOrOpenState | Whether to throw io.github.resilience4j.circuitbreaker.CallNotPermittedException when the call is rejected because the circuit breaker is half-open or open | false | Boolean |
+| camel.resilience4j.timeoutCancelRunningFuture | Configures whether cancel is called on the running future. Defaults to true. | true | Boolean |
+| camel.resilience4j.timeoutDuration | Configures the thread execution timeout (millis). Default value is 1000 millis (1 second). | 1000 | Integer |
+| camel.resilience4j.timeoutEnabled | Whether timeout is enabled or not on the circuit breaker. Default is false. | false | Boolean |
+| camel.resilience4j.timeoutExecutorService | Refers to a custom thread pool to use when timeout is enabled (uses ForkJoinPool#commonPool() by default) |  | String |
+| camel.resilience4j.waitDurationInOpenState | Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open before it switches to half-open. Default value is 60 seconds. | 60 | Integer |
+| camel.resilience4j.writableStackTraceEnabled | Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero-length array. This may be used to reduce log spam when the circuit breaker is open, as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). | false | Boolean |
+
+
+
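+As an illustration, a circuit breaker that opens after fewer calls and recovers sooner than the defaults could be configured as follows (the values are examples only):
+
+    camel.resilience4j.slidingWindowSize = 50
+    camel.resilience4j.minimumNumberOfCalls = 20
+    camel.resilience4j.waitDurationInOpenState = 30
+    camel.resilience4j.timeoutEnabled = true
+    camel.resilience4j.timeoutDuration = 2000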
+
+## Camel Saga EIP (Long Running Actions) configurations
+
+The `camel.lra` supports 5 options, which are listed below.
+
+| Option | Description | Default | Type |
+|---|---|---|---|
+| camel.lra.coordinatorContextPath | The context-path for the LRA coordinator. Default is /lra-coordinator | /lra-coordinator | String |
+| camel.lra.coordinatorUrl | The URL for the LRA coordinator service that orchestrates the transactions |  | String |
+| camel.lra.enabled | To enable Saga LRA | false | boolean |
+| camel.lra.localParticipantContextPath | The context-path for the local participant. Default is /lra-participant | /lra-participant | String |
+| camel.lra.localParticipantUrl | The URL for the local participant |  | String |
+
+
+
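+For example, Saga LRA can be enabled by pointing Camel at a coordinator service (both URLs are placeholders):
+
+    camel.lra.enabled = true
+    camel.lra.coordinatorUrl = http://lra-coordinator:8080
+    camel.lra.localParticipantUrl = http://my-app:8080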
+
+# Package Scanning
+
+**Available since Camel 3.16**
+
+When running Camel standalone via `camel-main` JAR, then Camel will use
+package scanning to discover:
+
+- Camel routes by discovering `RouteBuilder` classes
+
+- Camel configuration classes by discovering `CamelConfiguration`
+ classes or classes annotated with `@Configuration`.
+
+- Camel type converters by discovering classes annotated with
+ `@Converter`
+
+To use package scanning, Camel needs to know the base package to use
+as *offset*. This can be specified either with the
+`camel.main.basePackage` option or via the `Main` class as shown below:
+
+ package com.foo.acme;
+
+ public class MyCoolApplication {
+
+ public static void main(String[] args) {
+ Main main = new Main(MyCoolApplication.class);
+ main.run();
+ }
+
+ }
+
+In the example above, we use `com.foo.acme` as the base package,
+which is done by passing the class to the `Main` constructor. This is
+similar to how Spring Boot does it.
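+Alternatively, the base package can be configured with the `camel.main.basePackage` option mentioned above, for example in `application.properties`:
+
+    camel.main.basePackage = com.foo.acme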
+
+Camel will then scan the base package and its sub-packages.
+
+## Disabling Package Scanning
+
+Package scanning can be turned off by setting
+`camel.main.basePackageScanEnabled=false`.
+
+There is a little overhead when using package scanning, as Camel
+performs this scan during startup.
+
+# Configuring Camel Main applications
+
+You can use *configuration* classes to configure Camel Main applications
+from Java.
+
+From **Camel 3.16** onwards, the configuration classes must either
+implement the interface `org.apache.camel.CamelConfiguration`, or be
+annotated with `@Configuration` (or both). In previous versions this was
+not required.
+
+For example, to configure a Camel application by creating custom beans,
+you can do:
+
+ public class MyConfiguration implements CamelConfiguration {
+
+ @BindToRegistry
+ public MyBean myAwesomeBean() {
+ MyBean bean = new MyBean();
+ // do something on bean
+ return bean;
+ }
+
+ public void configure(CamelContext camelContext) throws Exception {
+ // this method is optional and can be omitted
+ // do any kind of configuration here if needed
+ }
+
+ }
+
+In the configuration class, you can also have custom methods that create
+beans, such as the `myAwesomeBean` method that creates the `MyBean` and
+registers it with the name `myAwesomeBean` (the name defaults to the
+method name).
+
+This is similar to Spring Boot where you can also do this with the
+Spring Boot `@Bean` annotations, or in Quarkus/CDI with the `@Produces`
+annotation.
+
+## Using annotation based configuration classes
+
+Instead of configuration classes that implement `CamelConfiguration`,
+you can annotate the class with `@Configuration` (from the
+`org.apache.camel` package) as shown:
+
+ @Configuration
+ public class MyConfiguration {
+
+ @BindToRegistry
+ public MyBean myAwesomeBean() {
+ MyBean bean = new MyBean();
+ // do something on bean
+ return bean;
+ }
+ }
+
+# Specifying custom beans
+
+Custom beans can be configured in `camel-main` via properties (such as
+in the `application.properties` file).
+
+For example, to create a `DataSource` for a PostgreSQL database, you can
+create a new bean instance via `#class:` with the fully qualified class
+name. Properties on the data source, such as the server and database
+name, can then additionally be configured.
+
+ camel.beans.myDS = #class:org.postgresql.jdbc3.Jdbc3PoolingDataSource
+ camel.beans.myDS.dataSourceName = myDS
+ camel.beans.myDS.serverName = mypostrgress
+ camel.beans.myDS.databaseName = test
+ camel.beans.myDS.user = testuser
+ camel.beans.myDS.password = testpassword
+ camel.beans.myDS.maxConnections = 10
+
+The bean is registered in the Camel Registry with the name `myDS`.
+
+If you use the SQL component then the datasource can be configured on
+the SQL component:
+
+ camel.component.sql.dataSource = #myDS
+
+To refer to a custom bean you may want to favour the `#bean:` style,
+as this states more clearly that it's referring to a bean,
+and not just a text value that happens to start with a `#` sign:
+
+ camel.component.sql.dataSource = #bean:myDS
+
+## Creating a custom map bean
+
+When creating a bean as a `java.util.Map` type, then you can use the
+`[]` syntax as shown below:
+
+ camel.beans.myApp[id] = 123
+ camel.beans.myApp[name] = Demo App
+ camel.beans.myApp[version] = 1.0.1
+ camel.beans.myApp[username] = goofy
+
+Camel will then create this as a `LinkedHashMap` type with the name
+`myApp`, which is bound to the Camel
+[Registry](#manual:ROOT:registry.adoc), with the data defined in the
+properties.
+
+If you desire a different `java.util.Map` implementation, then you can
+use `#class` style as shown:
+
+ camel.beans.myApp = #class:com.foo.MyMapImplementation
+ camel.beans.myApp[id] = 123
+ camel.beans.myApp[name] = Demo App
+ camel.beans.myApp[version] = 1.0.1
+ camel.beans.myApp[username] = goofy
+
+## Creating a custom bean with constructor parameters
+
+When creating a bean, parameters to the constructor can be provided.
+Suppose we have a class `MyFoo` with a constructor:
+
+ public class MyFoo {
+ private String name;
+ private boolean important;
+ private int id;
+
+ public MyFoo(String name, boolean important, int id) {
+ this.name = name;
+ this.important = important;
+ this.id = id;
+ }
+ }
+
+Then we can create a bean instance with name `foo` and provide
+parameters to the constructor as shown:
+
+    camel.beans.foo = #class:com.foo.MyFoo("Hello World", true, 123)
+
+## Creating custom beans with factory method
+
+When creating a bean, parameters to a factory method can be
+provided. Suppose we have a class `MyFoo` with a static factory method:
+
+ public class MyFoo {
+ private String name;
+ private boolean important;
+ private int id;
+
+ private MyFoo() {
+ // use factory method
+ }
+
+ public static MyFoo buildFoo(String name, boolean important, int id) {
+ MyFoo foo = new MyFoo();
+ foo.name = name;
+ foo.important = important;
+ foo.id = id;
+ return foo;
+ }
+ }
+
+Then we can create a bean instance with name `foo` and provide
+parameters to the static factory method as shown:
+
+    camel.beans.foo = #class:com.foo.MyFoo#buildFoo("Hello World", true, 123)
+
+The syntax must use `#factoryMethodName` to tell Camel that the bean
+should be created from a factory method.
+
+## Optional parameters on beans
+
+If a parameter on a bean is not mandatory then the parameter can be
+marked as optional using `?` syntax, as shown:
+
+ camel.beans.foo = #class:com.foo.MyBean("Hello World", true, 123)
+ camel.beans.foo.?company = Acme
+
+Then the company parameter is only set if `MyBean` has this option
+(it is silently ignored if the option is not present). Otherwise, if a
+parameter is set and the bean does not have such a parameter, then an
+exception is thrown by Camel.
+
+## Optional parameter values on beans
+
+If a parameter value on a bean is configured using [Property
+Placeholder](#manual:ROOT:using-propertyplaceholder.adoc) and the
+placeholder is optional, then the placeholder can be marked as optional
+using `?` syntax, as shown:
+
+ camel.beans.foo = #class:com.foo.MyBean("Hello World", true, 123)
+ camel.beans.foo.company = {{?companyName}}
+
+Then the company parameter is only set if there is a property
+placeholder with the key *companyName* (it is silently ignored if the
+placeholder is not present).
+
+### Default parameter values on beans
+
+It is possible to supply a default value (using `:defaultValue`) if the
+placeholder does not exist as shown:
+
+ camel.beans.foo = #class:com.foo.MyBean("Hello World", true, 123)
+ camel.beans.foo.company = {{?companyName:Acme}}
+
+Here the default value *Acme* will be used if there is no
+property placeholder with the key *companyName*.
+
+## Nested parameters on beans
+
+You can configure nested parameters separating them via `.` (dot).
+
+For example given this `Country` class:
+
+ public class Country {
+ private String iso;
+ private String name;
+
+ public void setIso(String iso) {
+ this.iso = iso;
+ }
+
+ public void setName(String name) {
+ this.name = name;
+ }
+ }
+
+Assuming `country` is an option on the `MyBean` class, we can configure
+its `iso` and `name` parameters as shown below:
+
+ camel.beans.foo = #class:com.foo.MyBean("Hello World", true, 123)
+ camel.beans.foo.country.iso = USA
+ camel.beans.foo.country.name = United States of America
+
+Camel will automatically create an instance of `Country`, provided that
+`MyBean` has a getter/setter for this option and that the `Country`
+class has a default no-arg constructor.
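
In plain Java, the nested binding above corresponds roughly to the sketch below (`Country` is the class from above; the `country` option on `MyBean` is an assumption matching the configuration):

```java
// The Country class from above, repeated so the sketch is self-contained.
class Country {
    private String iso;
    private String name;

    public void setIso(String iso) { this.iso = iso; }
    public void setName(String name) { this.name = name; }
    public String getIso() { return iso; }
    public String getName() { return name; }
}

// Sketch of MyBean with the assumed 'country' getter/setter pair.
class MyBean {
    private Country country;

    public Country getCountry() { return country; }
    public void setCountry(Country country) { this.country = country; }
}

class NestedBindingSketch {
    // Roughly what the two nested property keys make Camel perform:
    static MyBean bind() {
        MyBean bean = new MyBean();
        if (bean.getCountry() == null) {
            bean.setCountry(new Country()); // needs the no-arg constructor
        }
        bean.getCountry().setIso("USA");                       // camel.beans.foo.country.iso
        bean.getCountry().setName("United States of America"); // camel.beans.foo.country.name
        return bean;
    }
}
```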
+
+## Configuring singleton beans by their type
+
+In the example above the SQL component was configured with the name of
+the `DataSource`. There can be situations where you know there is only a
+single instance of a data source in the Camel registry. In such a
+situation you can instead refer to the class or interface type via the
+`#type:` prefix as shown below:
+
+ camel.component.sql.dataSource = #type:javax.sql.DataSource
+
+If there is no bean in the registry with the type `javax.sql.DataSource`
+then the option isn’t configured.
+
+## Autowiring beans
+
+The example above can be taken one step further by letting `camel-main`
+try to autowire the beans.
+
+ camel.component.sql.dataSource = #autowired
+
+Here, `#autowired` makes Camel detect the type of the `dataSource`
+option on the SQL component. Because the type is a
+`javax.sql.DataSource` instance, Camel will look up the registry for a
+single instance of the same type. If there is no such bean, then the
+option isn’t configured.
+
+# Defining a Map bean
+
+You can specify `java.util.Map` beans in `camel-main` via properties
+(such as in the `application.properties` file).
+
+Maps have a special syntax with brackets as shown below:
+
+ camel.beans.mymap[table] = 12
+ camel.beans.mymap[food] = Big Burger
+ camel.beans.mymap[cheese] = yes
+ camel.beans.mymap[quantity] = 1
+
+The Map is registered in the Camel Registry with the name `mymap`.
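
Once registered, the map can then be referenced by name elsewhere in the configuration, for example via the `#bean:` lookup prefix (the component and option below are illustrative, not a real configuration from the docs):

    camel.component.foo.configuration = #bean:mymap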
+
+## Using dots in Map keys
+
+If the Map should contain keys with dots then the key must be quoted, as
+shown below using single quoted keys:
+
+ camel.beans.myldapserver['java.naming.provider.url'] = ldaps://ldap.local:636
+ camel.beans.myldapserver['java.naming.security.principal'] = scott
+ camel.beans.myldapserver['java.naming.security.credentials'] = tiger
+
+# Defining a List bean
+
+This is similar to a Map bean, where the key is the index, e.g., 0, 1,
+2, etc.:
+
+ camel.beans.myprojects[0] = Camel
+ camel.beans.myprojects[1] = Kafka
+ camel.beans.myprojects[2] = Quarkus
+
+The List is registered in the Camel Registry with the name `myprojects`.
+
+# Examples
+
+You can find a set of examples using `camel-main` in [Camel
+Examples](https://github.com/apache/camel-examples) which demonstrate
+running Camel in standalone with `camel-main`.
diff --git a/camel-mapstruct.md b/camel-mapstruct.md
index 090bdebdb12a1c9afb23ebf8e047792942982d59..ad5836d77be3d04079799a338b5201519571d24f 100644
--- a/camel-mapstruct.md
+++ b/camel-mapstruct.md
@@ -14,12 +14,14 @@ The camel-mapstruct component is used for converting POJOs using
Where `className` is the fully qualified class name of the POJO to
convert to.
-# Setting up MapStruct
+# Usage
+
+## Setting up MapStruct
The camel-mapstruct component must be configured with one or more
package names for classpath scanning MapStruct *Mapper* classes. This is
-needed because the *Mapper* classes are to be used for converting POJOs
-with MapStruct.
+necessary because the *Mapper* classes are to be used for converting
+POJOs with MapStruct.
For example, to set up two packages, you can do the following:
diff --git a/camel-marshal-eip.md b/camel-marshal-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e6e4fcf10e8ba01bc5e9dbe3b113e9bee9d23d9
--- /dev/null
+++ b/camel-marshal-eip.md
@@ -0,0 +1,65 @@
+# Marshal-eip.md
+
+The [Marshal](#marshal-eip.adoc) and [Unmarshal](#unmarshal-eip.adoc)
+EIPs are used for [Message Transformation](#message-translator.adoc).
+
+
+
+
+
+Camel has support for message transformation using several techniques.
+One such technique is [Data
+Formats](#components:dataformats:index.adoc), where marshal and
+unmarshal come from.
+
+So in other words, the [Marshal](#marshal-eip.adoc) and
+[Unmarshal](#unmarshal-eip.adoc) EIPs are used with [Data
+Formats](#dataformats:index.adoc).
+
+- `marshal`: transforms the message body (such as a Java object) into a
+  binary or textual format, ready to be sent over the network.
+
+- `unmarshal`: transforms data in some binary or textual format (such
+ as received over the network) into a Java object; or some other
+ representation according to the data format being used.
+
+# Example
+
+The following example reads XML files from the inbox/xml directory. Each
+file is then transformed into Java Objects using
+[JAXB](#dataformats:jaxb-dataformat.adoc). Then a
+[Bean](#ROOT:bean-component.adoc) is invoked that takes in the Java
+object.
+
+Then the reverse operation happens to transform the Java objects back
+into XML also via JAXB, but using the `marshal` operation. And finally,
+the message is routed to a [JMS](#ROOT:jms-component.adoc) queue.
+
+Java
+
+    from("file:inbox/xml")
+        .unmarshal().jaxb()
+        .to("bean:validateOrder")
+        .marshal().jaxb()
+        .to("jms:queue:order");
+
+XML
+
+    <route>
+        <from uri="file:inbox/xml"/>
+        <unmarshal><jaxb/></unmarshal>
+        <to uri="bean:validateOrder"/>
+        <marshal><jaxb/></marshal>
+        <to uri="jms:queue:order"/>
+    </route>
+YAML
+
+    - from:
+        uri: file:inbox/xml
+        steps:
+          - unmarshal:
+              jaxb: {}
+          - to:
+              uri: bean:validateOrder
+          - marshal:
+              jaxb: {}
+          - to:
+              uri: jms:queue:order
diff --git a/camel-master.md b/camel-master.md
index 20e33b85e2a73359ab26e8b9fa3b8b5f88e92010..97abd3f5d943d8aef8eda5950a823e5ccd1473b2 100644
--- a/camel-master.md
+++ b/camel-master.md
@@ -13,11 +13,19 @@ either doesn’t support concurrent consumption or due to commercial or
stability reasons, you can only have a single connection at any point in
time.
-# Using the master endpoint
+# URI format
+
+ master:namespace:endpoint[?options]
+
+Where endpoint is any Camel endpoint that you want to run in
+master/slave mode.
+
+# Usage
+
+## Using the master endpoint
-Just prefix any camel endpoint with **master:someName:** where
-*someName* is a logical name and is used to acquire the master lock.
-e.g.
+Prefix any camel endpoint with **master:someName:** where *someName* is
+a logical name and is used to acquire the master lock. For instance:
from("master:cheese:jms:foo")
.to("activemq:wine");
@@ -34,13 +42,6 @@ become active and start consuming messages from `jms:foo`.
Apache ActiveMQ 5.x has such a feature out of the box called [Exclusive
Consumers](https://activemq.apache.org/exclusive-consumer.html).
-# URI format
-
- master:namespace:endpoint[?options]
-
-Where endpoint is any Camel endpoint, you want to run in master/slave
-mode.
-
# Example
You can protect a clustered Camel application to only consume files from
@@ -69,7 +70,7 @@ using
context.addService(service)
-- **Xml (Spring/Blueprint)**
+- **Xml (Spring)**
+
+
+
+Use a central Message Broker that can receive messages from multiple
+destinations, determine the correct destination and route the message to
+the correct channel.
+
+Camel supports integration with existing message broker systems such as
+[ActiveMQ](#ROOT:activemq-component.adoc),
+[Kafka](#ROOT:kafka-component.adoc),
+[RabbitMQ](#ROOT:spring-rabbitmq-component.adoc), and cloud queue
+systems such as [AWS SQS](#ROOT:aws2-sqs-component.adoc), and others.
+
+These Camel components allow both sending data to and receiving data
+from message brokers.
diff --git a/camel-message-bus.md b/camel-message-bus.md
new file mode 100644
index 0000000000000000000000000000000000000000..17985e6db1b906b3e8cf6897a29b31ca15166bbe
--- /dev/null
+++ b/camel-message-bus.md
@@ -0,0 +1,29 @@
+# Message-bus.md
+
+Camel supports the [Message
+Bus](https://www.enterpriseintegrationpatterns.com/MessageBus.html) from
+the [EIP patterns](#enterprise-integration-patterns.adoc). You could
+view Camel as a Message Bus itself as it allows producers and consumers
+to be decoupled.
+
+
+
+
+
+A messaging system such as Apache ActiveMQ can be used as a Message Bus.
+
+# Example
+
+The following demonstrates how the Camel message bus can be used to
+ingest a message into the bus with the [JMS](#ROOT:jms-component.adoc)
+component.
+
+Java
+
+    from("file:inbox")
+        .to("jms:inbox");
+
+XML
+
+    <route>
+        <from uri="file:inbox"/>
+        <to uri="jms:inbox"/>
+    </route>
diff --git a/camel-message-channel.md b/camel-message-channel.md
new file mode 100644
index 0000000000000000000000000000000000000000..be670945ddbc950601824b43f0c10ee05dadaf33
--- /dev/null
+++ b/camel-message-channel.md
@@ -0,0 +1,32 @@
+# Message-channel.md
+
+Camel supports the [Message
+Channel](http://www.enterpriseintegrationpatterns.com/MessageChannel.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+The Message Channel is an internal implementation detail of the
+`Endpoint` interface, where all interaction with the channel happens via
+the
+[Endpoint](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html).
+
+
+
+
+
+# Example
+
+In [JMS](#ROOT:jms-component.adoc), Message Channels are represented by
+topics and queues such as the following:
+
+ jms:queue:foo
+
+The following shows a little route snippet:
+
+Java
+
+    from("file:foo")
+        .to("jms:queue:foo");
+
+XML
+
+    <route>
+        <from uri="file:foo"/>
+        <to uri="jms:queue:foo"/>
+    </route>
diff --git a/camel-message-dispatcher.md b/camel-message-dispatcher.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4af2b3286d70ef99ca9ff3749d3ea66f9c584b8
--- /dev/null
+++ b/camel-message-dispatcher.md
@@ -0,0 +1,21 @@
+# Message-dispatcher.md
+
+Camel supports the [Message
+Dispatcher](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageDispatcher.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+
+
+
+
+In Apache Camel, the Message Dispatcher can be achieved in different
+ways, such as:
+
+- You can use a component like [JMS](#ROOT:jms-component.adoc) with
+ selectors to implement a [Selective
+ Consumer](#selective-consumer.adoc) as the Message Dispatcher
+ implementation.
+
+- Or you can use a [Message Endpoint](#message-endpoint.adoc) as the
+ Message Dispatcher itself, or combine this with the [Content-Based
+ Router](#choice-eip.adoc) as the Message Dispatcher.
diff --git a/camel-message-endpoint.md b/camel-message-endpoint.md
new file mode 100644
index 0000000000000000000000000000000000000000..e876503016ae8824c074a97318338695cefe7f10
--- /dev/null
+++ b/camel-message-endpoint.md
@@ -0,0 +1,44 @@
+# Message-endpoint.md
+
+Camel supports the [Message
+Endpoint](http://www.enterpriseintegrationpatterns.com/MessageEndpoint.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) using the
+[Endpoint](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html)
+interface.
+
+How does an application connect to a messaging channel to send and
+receive messages?
+
+
+
+
+
+Connect an application to a messaging channel using a Message Endpoint,
+a client of the messaging system that the application can then use to
+send or receive messages.
+
+When using the [DSL](#manual::dsl.adoc) to create
+[Routes](#manual::routes.adoc), you typically refer to Message Endpoints
+by their [URIs](#manual::uris.adoc) rather than directly using the
+[Endpoint](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html)
+interface. It is then the responsibility of the
+[CamelContext](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/CamelContext.html)
+to create and activate the necessary `Endpoint` instances using the
+available [Components](#ROOT:index.adoc).
+
+# Example
+
+The following example route demonstrates the use of a
+[File](#ROOT:file-component.adoc) consumer endpoint and a
+[JMS](#ROOT:jms-component.adoc) producer endpoint, by their
+[URIs](#manual::uris.adoc):
+
+Java
+
+    from("file:messages/foo")
+        .to("jms:queue:foo");
+
+XML
+
+    <route>
+        <from uri="file:messages/foo"/>
+        <to uri="jms:queue:foo"/>
+    </route>
diff --git a/camel-message-expiration.md b/camel-message-expiration.md
new file mode 100644
index 0000000000000000000000000000000000000000..6cf01dde5841a5e1ecf93bbb8c58b05f12908327
--- /dev/null
+++ b/camel-message-expiration.md
@@ -0,0 +1,36 @@
+# Message-expiration.md
+
+Camel supports the [Message
+Expiration](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageExpiration.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How can a sender indicate when a message should be considered stale and
+thus should not be processed?
+
+
+
+
+
+Set the Message Expiration to specify a time limit for how long the
+message is viable.
+
+Message expiration is supported by some Camel components such as
+[JMS](#ROOT:jms-component.adoc), which uses *time-to-live* to specify
+for how long the message is valid.
+
+When using message expiration, take care to keep the clocks of the
+involved systems synchronized.
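
The staleness check behind this pattern can be sketched in plain Java (names are illustrative, not Camel API; components such as JMS perform this for you):

```java
// Plain-Java sketch of a message-expiration check.
class MessageExpirationSketch {

    // A message is stale once more than ttlMillis has passed since it was
    // sent. Clock skew between sender and receiver directly distorts this
    // comparison, which is why synchronized clocks matter.
    static boolean isExpired(long sentAtMillis, long ttlMillis, long nowMillis) {
        return nowMillis - sentAtMillis > ttlMillis;
    }
}
```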
+
+# Example
+
+A message should expire after 5 seconds:
+
+Java
+
+    from("direct:cheese")
+        .to("jms:queue:cheese?timeToLive=5000");
+
+XML
+
+    <route>
+        <from uri="direct:cheese"/>
+        <to uri="jms:queue:cheese?timeToLive=5000"/>
+    </route>
diff --git a/camel-message-history.md b/camel-message-history.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9f7e6ddce9737b89b13dbc99cc92d81890224b4
--- /dev/null
+++ b/camel-message-history.md
@@ -0,0 +1,235 @@
+# Message-history.md
+
+Camel supports the [Message
+History](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageHistory.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+The Message History from the EIP patterns allows for analyzing and
+debugging the flow of messages in a loosely coupled system.
+
+
+
+
+
+Attaching a Message History to the message will provide a list of all
+applications that the message has passed through since its origination.
+
+# Enabling Message History
+
+The message history is disabled by default (to optimize for a lower
+footprint out of the box). You should only enable message history if
+needed, such as during development, where Camel can report route
+stack-traces when a message fails with an exception. For production
+usage, message history should only be enabled if you have monitoring
+systems that rely on gathering these fine-grained details. When message
+history is enabled, there is a slight performance overhead, as the
+history data is stored in a `java.util.concurrent.CopyOnWriteArrayList`
+to be thread-safe.
+
+The Message History can be enabled or disabled per CamelContext or per
+route (disabled by default). For example, you can turn it on with:
+
+Java
+
+    camelContext.setMessageHistory(true);
+
+XML
+
+    <camelContext messageHistory="true">
+        <!-- routes here -->
+    </camelContext>
+Or when using Spring Boot or Quarkus, you can enable this in the
+configuration file:
+
+Quarkus
+
+    camel.quarkus.message-history = true
+
+Spring Boot
+
+    camel.springboot.message-history = true
+
+## Route level Message History
+
+You can also enable or disable message history per route. When doing
+this, Camel only gathers message history in the routes where it is
+enabled, which means you may not have full coverage. You may still want
+to do this, for example, to capture the history in a critical route to
+help pinpoint where the route is slow.
+
+A route level configuration overrides the global configuration.
+
+To enable in Java:
+
+ from("jms:cheese")
+ .messageHistory()
+ .to("bean:validate")
+ .to("bean:transform")
+ .to("jms:wine");
+
+You can also turn off message history per route:
+
+Java
+
+    from("jms:cheese")
+        .messageHistory(false)
+        .to("bean:validate")
+        .to("bean:transform")
+        .to("jms:wine");
+
+XML
+
+    <route messageHistory="false">
+        <from uri="jms:cheese"/>
+        <to uri="bean:validate"/>
+        <to uri="bean:transform"/>
+        <to uri="jms:wine"/>
+    </route>
+## Enabling source location information
+
+Camel can gather the precise source file and line number for each EIP in
+the routes. When enabled, the message history will output this
+information in the route stack-trace.
+
+To enable source location:
+
+Java
+
+    camelContext.setSourceLocationEnabled(true);
+
+XML
+
+    <camelContext sourceLocationEnabled="true">
+        <!-- routes here -->
+    </camelContext>
+Or when using Spring Boot or Quarkus, you can enable this in the
+configuration file:
+
+Quarkus
+
+    camel.quarkus.source-location-enabled = true
+
+Spring Boot
+
+    camel.springboot.source-location-enabled = true
+
+# Route stack-trace in exceptions logged by error handler
+
+If Message History is enabled, then Camel will include this information
+when the [Error Handler](#manual::error-handler.adoc) logs exhausted
+exceptions, where you can see the message history; you may think of this
+as a "route stack-trace".
+
+An example is provided below:
+
+ 2022-01-06 12:13:06.721 ERROR 67729 --- [ - timer://java] o.a.c.p.e.DefaultErrorHandler : Failed delivery for (MessageId: B4365D4CED3E5E1-0000000000000004 on ExchangeId: B4365D4CED3E5E1-0000000000000004). Exhausted after delivery attempt: 1 caught: java.lang.IllegalArgumentException: The number is too low
+
+    Message History (source location is disabled)
+
+    Source    ID                      Processor                                           Elapsed (ms)
+              route1/route1           from[timer://java?period=2s]                                   2
+              route1/setBody1         setBody[bean[MyJavaRouteBuilder method:randomNumbe             0
+              route1/log1             log                                                            1
+              route1/throwException1  throwException[java.lang.IllegalArgumentException]             0
+
+Stacktrace
+
+ java.lang.IllegalArgumentException: The number is too low
+ at sample.camel.MyJavaRouteBuilder.configure(MyJavaRouteBuilder.java:34) ~[classes/:na]
+ at org.apache.camel.builder.RouteBuilder.checkInitialized(RouteBuilder.java:607) ~[camel-core-model-3.20.0.jar:3.20.0]
+ at org.apache.camel.builder.RouteBuilder.configureRoutes(RouteBuilder.java:553) ~[camel-core-model-3.20.0.jar:3.20.0]
+
+When Message History is enabled, the full history is logged as shown
+above, where we can see the full path the message has been routed along.
+
+When Message History is disabled, as it is by default, the error handler
+logs a brief history with only the last node where the exception
+occurred, as shown below:
+
+ 2022-01-06 12:12:32.072 ERROR 67704 --- [ - timer://java] o.a.c.p.e.DefaultErrorHandler : Failed delivery for (MessageId: CD6D1B185A3706F-0000000000000004 on ExchangeId: CD6D1B185A3706F-0000000000000004). Exhausted after delivery attempt: 1 caught: java.lang.IllegalArgumentException: The number is too low
+
+    Message History (source location and message history is disabled)
+
+    Source    ID                      Processor                                           Elapsed (ms)
+              route1/route1           from[timer://java?period=2s]                                   2
+              ...
+              route1/throwException1  throwException[java.lang.IllegalArgumentException]             0
+
+Stacktrace
+
+ java.lang.IllegalArgumentException: The number is too low
+ at sample.camel.MyJavaRouteBuilder.configure(MyJavaRouteBuilder.java:34) ~[classes/:na]
+ at org.apache.camel.builder.RouteBuilder.checkInitialized(RouteBuilder.java:607) ~[camel-core-model-3.20.0.jar:3.20.0]
+ at org.apache.camel.builder.RouteBuilder.configureRoutes(RouteBuilder.java:553) ~[camel-core-model-3.20.0.jar:3.20.0]
+
+Here you can see that the Message History only outputs the input
+(route1) and the last step where the exception occurred
+(throwException1).
+
+Notice that the Source column is empty because source location is not
+enabled. When enabled, you can see exactly which source file and line
+number the message was routed through:
+
+ 2022-01-06 12:19:01.277 ERROR 67870 --- [ - timer://java] o.a.c.p.e.DefaultErrorHandler : Failed delivery for (MessageId: 37412D6F722F679-0000000000000003 on ExchangeId: 37412D6F722F679-0000000000000003). Exhausted after delivery attempt: 1 caught: java.lang.IllegalArgumentException: The number is too low
+
+    Message History
+
+    Source                 ID                      Processor                                            Elapsed (ms)
+    MyJavaRouteBuilder:29  route1/route1           from[timer://java?period=2s]                                   10
+    MyJavaRouteBuilder:32  route1/setBody1         setBody[bean[MyJavaRouteBuilder method:randomNumber             1
+    MyJavaRouteBuilder:33  route1/log1             log                                                             1
+    MyJavaRouteBuilder:35  route1/throwException1  throwException[java.lang.IllegalArgumentException]              0
+
+Stacktrace
+
+ java.lang.IllegalArgumentException: The number is too low
+ at sample.camel.MyJavaRouteBuilder.configure(MyJavaRouteBuilder.java:34) ~[classes/:na]
+ at org.apache.camel.builder.RouteBuilder.checkInitialized(RouteBuilder.java:607) ~[camel-core-model-3.20.0.jar:3.20.0]
+ at org.apache.camel.builder.RouteBuilder.configureRoutes(RouteBuilder.java:553) ~[camel-core-model-3.20.0.jar:3.20.0]
+
+In this case, we can see that it is the `MyJavaRouteBuilder` class at
+line 35 that is the problem.
+
+## Configuring route stack-trace from error handler
+
+You can turn off logging Message History with
+`logExhaustedMessageHistory` from the [Error
+Handler](#manual::error-handler.adoc) using:
+
+ errorHandler(defaultErrorHandler().logExhaustedMessageHistory(false));
+
+The [Error Handler](#manual::error-handler.adoc) does not log the
+message body/header details (to avoid logging sensitive message body
+details). You can enable this with `logExhaustedMessageBody` on the
+error handler as shown:
+
+Java
+
+    errorHandler(defaultErrorHandler().logExhaustedMessageBody(true));
+
+XML
+
+In XML, configuring this is a bit different, as you configure it on the
+`redeliveryPolicy` of the `<errorHandler>` as shown:
+
+    <errorHandler id="myErrorHandler" type="DefaultErrorHandler">
+        <redeliveryPolicy logExhaustedMessageBody="true"/>
+    </errorHandler>
+
+# MessageHistory API
+
+When message history is enabled, Camel captures during routing how the
+`Exchange` is routed, as `org.apache.camel.MessageHistory` entities that
+are stored on the `Exchange`.
+
+Each `org.apache.camel.MessageHistory` holds information about the route
+id, processor id, timestamp, and the elapsed time it took the processor
+to process the `Exchange`.
+
+You can access the message history from Java code:
+
+    List<MessageHistory> list = exchange.getProperty(Exchange.MESSAGE_HISTORY, List.class);
+    for (MessageHistory history : list) {
+        System.out.println("Routed at id: " + history.getNode().getId());
+    }
diff --git a/camel-message-router.md b/camel-message-router.md
new file mode 100644
index 0000000000000000000000000000000000000000..77a478fbe222149fd3f718528da5f13cd058afcd
--- /dev/null
+++ b/camel-message-router.md
@@ -0,0 +1,63 @@
+# Message-router.md
+
+The [Message
+Router](http://www.enterpriseintegrationpatterns.com/MessageRouter.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) allows
+you to consume from an input destination, evaluate some predicate, then
+choose the right output destination.
+
+
+
+
+
+In Camel, the Message Router can be achieved in different ways, such as:
+
+- You can use the [Content-Based Router](#choice-eip.adoc) to evaluate
+ and choose the output destination.
+
+- The [Routing Slip](#routingSlip-eip.adoc) and [Dynamic
+ Router](#dynamicRouter-eip.adoc) EIPs can also be used for choosing
+ which destination to route messages.
+
+The [Content-Based Router](#choice-eip.adoc) is recommended when you
+have multiple predicates to evaluate to decide where to send the
+message.
+
+The [Routing Slip](#routingSlip-eip.adoc) and [Dynamic
+Router](#dynamicRouter-eip.adoc) are arguably more advanced: instead of
+predicates, you use an expression to compute where the message should
+go.
+
+# Example
+
+The following example shows how to route a request from an input
+`direct:a` endpoint to either `direct:b`, `direct:c`, or `direct:d`
+depending on the evaluation of various
+[Predicates](#manual::predicate.adoc):
+
+Java
+
+    from("direct:a")
+        .choice()
+            .when(simple("${header.foo} == 'bar'"))
+                .to("direct:b")
+            .when(simple("${header.foo} == 'cheese'"))
+                .to("direct:c")
+            .otherwise()
+                .to("direct:d");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <choice>
+            <when>
+                <simple>${header.foo} == 'bar'</simple>
+                <to uri="direct:b"/>
+            </when>
+            <when>
+                <simple>${header.foo} == 'cheese'</simple>
+                <to uri="direct:c"/>
+            </when>
+            <otherwise>
+                <to uri="direct:d"/>
+            </otherwise>
+        </choice>
+    </route>
diff --git a/camel-message-translator.md b/camel-message-translator.md
new file mode 100644
index 0000000000000000000000000000000000000000..271ca58587ee4fc4b7e99e26aee5327cb2138634
--- /dev/null
+++ b/camel-message-translator.md
@@ -0,0 +1,108 @@
+# Message-translator.md
+
+Camel supports the [Message
+Translator](http://www.enterpriseintegrationpatterns.com/MessageTranslator.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+
+
+
+
+The Message Translator can be implemented in different ways in Camel:
+
+- Using [Transform](#transform-eip.adoc) or [Set
+ Body](#setBody-eip.adoc) in the DSL
+
+- Calling a [Processor](#manual::processor.adoc) or
+ [bean](#manual::bean-integration.adoc) to perform the transformation
+
+- Using template-based [Components](#ROOT:index.adoc), with the
+ template being the source for how the message is translated
+
+- Messages can also be transformed using [Data
+ Format](#manual::data-format.adoc) to marshal and unmarshal messages
+ in different encodings.
+
+# Example
+
+Each of the approaches above is documented in the following examples:
+
+## Message Translator with Transform EIP
+
+You can use a [Transform](#transform-eip.adoc), which uses an
+[Expression](#manual::expression.adoc) to do the transformation.
+
+In the example below, we prepend "Hello" to the message body using the
+[Simple](#components:languages:simple-language.adoc) language:
+
+Java
+
+    from("direct:cheese")
+        .setBody(simple("Hello ${body}"))
+        .to("log:hello");
+
+XML
+
+    <route>
+        <from uri="direct:cheese"/>
+        <setBody>
+            <simple>Hello ${body}</simple>
+        </setBody>
+        <to uri="log:hello"/>
+    </route>
+
+## Message Translator with Bean
+
+You can transform a message using Camel’s [Bean
+Integration](#manual::bean-integration.adoc) to call any method on a
+bean that performs the message translation:
+
+Java
+
+    from("activemq:cheese")
+        .bean("myTransformerBean", "doTransform")
+        .to("activemq:wine");
+
+XML
+
+    <route>
+        <from uri="activemq:cheese"/>
+        <bean ref="myTransformerBean" method="doTransform"/>
+        <to uri="activemq:wine"/>
+    </route>
+
+## Message Translator with Processor
+
+You can also use a [Processor](#manual::processor.adoc) to do the
+transformation:
+
+Java
+
+    from("activemq:cheese")
+        .process(new MyTransformerProcessor())
+        .to("activemq:wine");
+
+XML
+
+    <route>
+        <from uri="activemq:cheese"/>
+        <process ref="myTransformerProcessor"/>
+        <to uri="activemq:wine"/>
+    </route>
+
+## Message Translator using Templating Components
+
+You can also consume a message from one destination, transform it with
+something like [Velocity](#ROOT:velocity-component.adoc) or
+[XQuery](#ROOT:xquery-component.adoc), and then send it on to another
+destination.
+
+Java
+
+    from("activemq:cheese")
+        .to("velocity:com/acme/MyResponse.vm")
+        .to("activemq:wine");
+
+XML
+
+    <route>
+        <from uri="activemq:cheese"/>
+        <to uri="velocity:com/acme/MyResponse.vm"/>
+        <to uri="activemq:wine"/>
+    </route>
+
+## Message Translator using Data Format
+
+See [Marshal](#marshal-eip.adoc) EIP for more details and examples.
diff --git a/camel-message.md b/camel-message.md
new file mode 100644
index 0000000000000000000000000000000000000000..a7178bf52edabe5395fd1a842b35ff72b1d1981d
--- /dev/null
+++ b/camel-message.md
@@ -0,0 +1,32 @@
+# Message.md
+
+Camel supports the
+[Message](http://www.enterpriseintegrationpatterns.com/Message.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) using the
+[Message](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Message.html)
+interface.
+
+
+
+
+
+The `org.apache.camel.Message` is the *data record* that represents the
+message part of the [Exchange](#manual::exchange.adoc).
+
+The message contains the following information:
+
+- `body`: the message body (i.e., the payload)
+
+- `headers`: headers with additional information
+
+- `messageId`: the unique id of the message. By default, the message
+  uses the same id as `Exchange.getExchangeId`, as messages are
+  associated with the `Exchange` and using different IDs offers little
+  value. Another reason is to optimize performance by avoiding the
+  generation of new IDs. A few Camel components do provide their own
+  message IDs, such as the JMS components.
+
+- `timestamp`: the timestamp the message originates from. Some systems,
+  like JMS, Kafka, and AWS, have a timestamp on the event/message that
+  Camel receives. This method returns the timestamp if one exists.
diff --git a/camel-messaging-bridge.md b/camel-messaging-bridge.md
new file mode 100644
index 0000000000000000000000000000000000000000..884462c5ff5364925400326cb54141a0e3d9efb0
--- /dev/null
+++ b/camel-messaging-bridge.md
@@ -0,0 +1,38 @@
+# Messaging-bridge.md
+
+Camel supports the [Messaging
+Bridge](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessagingBridge.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How can multiple messaging systems be connected so that messages
+available on one are also available on the others?
+
+
+
+
+
+Use a Messaging Bridge, a connection between messaging systems, to
+replicate messages between systems.
+
+You can use Camel to bridge different systems using Camel
+[Components](#ROOT:index.adoc) and bridge the endpoints together in a
+[Route](#manual::routes.adoc).
+
+Another alternative is to bridge systems using [Change Data
+Capture](#change-data-capture.adoc).
+
+# Example
+
+A basic bridge between two messaging systems (such as WebSphere MQ and a
+[JMS](#ROOT:jms-component.adoc) broker) can be done with a single Camel
+route:
+
+Java
+
+    from("mq:queue:foo")
+        .to("jms:queue:foo");
+
+XML
+
+    <route>
+        <from uri="mq:queue:foo"/>
+        <to uri="jms:queue:foo"/>
+    </route>
diff --git a/camel-messaging-gateway.md b/camel-messaging-gateway.md
new file mode 100644
index 0000000000000000000000000000000000000000..e815bde31bbfcccee8e07ae3a7916e272858b79f
--- /dev/null
+++ b/camel-messaging-gateway.md
@@ -0,0 +1,24 @@
+# Messaging-gateway.md
+
+Camel supports the [Messaging
+Gateway](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessagingGateway.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+How do you encapsulate access to the messaging system from the rest of
+the application?
+
+
+
+
+
+Use a Messaging Gateway, a class that wraps messaging-specific method
+calls and exposes domain-specific methods to the application.
+
+Camel has several endpoint components that support the Messaging Gateway
+from the EIP patterns. Components like [Bean](#ROOT:bean-component.adoc)
+provide a way to bind a Java interface to the message exchange.
+
+Another approach is to use the `@Produce` annotation ([POJO
+Producing](#manual::pojo-producing.adoc)), which can also be used to
+hide Camel APIs and thereby encapsulate access, acting as a Messaging
+Gateway EIP solution.
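
The idea can be sketched in plain Java (the interface and implementation below are illustrative, not Camel API): the application codes against a domain-specific interface, while the implementation hides all messaging details.

```java
import java.util.ArrayList;
import java.util.List;

// Domain-specific interface the application uses; no messaging types
// leak out of it.
interface OrderGateway {
    void placeOrder(String orderId);
}

// The gateway implementation is the only place that knows about the
// messaging system (represented here by a simple in-memory queue).
class QueueOrderGateway implements OrderGateway {

    final List<String> queue = new ArrayList<>();

    @Override
    public void placeOrder(String orderId) {
        // messaging-specific concerns (destination, payload format) live here
        queue.add("orders:" + orderId);
    }
}
```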
diff --git a/camel-messaging-mapper.md b/camel-messaging-mapper.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e76379c0ec1dc0c43a266578fbb20c805f13249
--- /dev/null
+++ b/camel-messaging-mapper.md
@@ -0,0 +1,32 @@
+# Messaging-mapper.md
+
+Camel supports the [Messaging
+Mapper](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessagingMapper.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+How do you move data between domain objects and the messaging
+infrastructure while keeping the two independent of each other?
+
+
+
+
+
+Create a separate Messaging Mapper that contains the mapping logic
+between the messaging infrastructure and the domain objects. Neither the
+objects nor the infrastructure has knowledge of the Messaging Mapper’s
+existence.
+
+The Messaging Mapper accesses one or more domain objects and converts
+them into a message as required by the messaging channel. It also
+performs the opposite function, creating or updating domain objects
+based on incoming messages. Since the Messaging Mapper is implemented as
+a separate class that references the domain object(s) and the messaging
+layer, neither layer is aware of the other. The layers don’t even know
+about the Messaging Mapper.
+
+With Camel, this pattern is often implemented directly via Camel
+components that provide [Type Converters](#manual::type-converter.adoc)
+from the messaging infrastructure to common Java types or to Java
+objects representing the data model of the component in question.
+Combine this with the [Message Translator](#message-translator.adoc) to
+obtain the Messaging Mapper EIP pattern.
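
A minimal sketch of a hand-written Messaging Mapper in plain Java (the `Order` domain class and the `id;quantity` wire format are illustrative): the mapper alone knows both the domain object and the message format.

```java
// The domain object knows nothing about messaging.
class Order {
    final String id;
    final int quantity;

    Order(String id, int quantity) {
        this.id = id;
        this.quantity = quantity;
    }
}

// The mapper is the only class aware of both the domain object and the
// wire format.
class OrderMessageMapper {

    // domain object -> message payload
    static String toMessage(Order order) {
        return order.id + ";" + order.quantity;
    }

    // message payload -> domain object
    static Order toOrder(String payload) {
        String[] parts = payload.split(";");
        return new Order(parts[0], Integer.parseInt(parts[1]));
    }
}
```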
diff --git a/camel-metrics.md b/camel-metrics.md
index b130f6e92e87401efa957e8e5e4cb2364ad2ecd2..87a6f46869857dd82b185596b7d9eb6128c41350 100644
--- a/camel-metrics.md
+++ b/camel-metrics.md
@@ -14,7 +14,7 @@ Camel routes. Supported metric types are
the behaviour of applications. The configurable reporting backend
enables different integration options for collecting and visualizing
statistics. The component also provides a `MetricsRoutePolicyFactory`
-which allows to expose route statistics using Dropwizard Metrics, see
+which allows exposing route statistics using Dropwizard Metrics, see
bottom of page for details.
Maven users will need to add the following dependency to their `pom.xml`
@@ -111,11 +111,11 @@ endpoint finishes processing of exchange. While processing exchange
Metrics endpoint will catch all exceptions and write log entry using
level `warn`.
-# Metrics type counter
+## Metrics type counter
metrics:counter:metricname[?options]
-## Options
+### Options
@@ -124,20 +124,20 @@ level `warn`.
-
+
-
+
increment
-
Long value to add to the
counter
-
+
decrement
-
Long value to subtract from the
@@ -165,7 +165,7 @@ both defined only increment operation is called.
.to("metrics:counter:simple.counter?decrement=3")
.to("direct:out");
-## Headers
+### Headers
Message headers can be used to override `increment` and `decrement`
values specified in Metrics component URI.
@@ -177,20 +177,20 @@ values specified in Metrics component URI.
-
+
-
+
CamelMetricsCounterIncrement
Override increment value in
URI
Long
-
+
CamelMetricsCounterDecrement
Override decrement value in
URI
@@ -211,11 +211,11 @@ URI
.to("metrics:counter:body.length")
.to("mock:out");
-# Metric type histogram
+## Metric type histogram
metrics:histogram:metricname[?options]
-## Options
+### Options
@@ -224,14 +224,14 @@ URI
-
+
-
+
value
-
Value to use in histogram
@@ -252,7 +252,7 @@ logged.
.to("metrics:histogram:simple.histogram")
.to("direct:out");
-## Headers
+### Headers
Message header can be used to override value specified in Metrics
component URI.
@@ -264,14 +264,14 @@ component URI.
-
+
-
+
CamelMetricsHistogramValue
Override histogram value in
URI
@@ -286,11 +286,11 @@ URI
.to("metrics:histogram:simple.histogram?value=700")
.to("direct:out")
-# Metric type meter
+## Metric type meter
metrics:meter:metricname[?options]
-## Options
+### Options
@@ -299,14 +299,14 @@ URI
-
+
-
+
mark
-
Long value to use as mark
@@ -326,7 +326,7 @@ If `mark` is not set then `meter.mark()` is called without argument.
.to("metrics:meter:simple.meter?mark=81")
.to("direct:out");
-## Headers
+### Headers
Message header can be used to override `mark` value specified in Metrics
component URI.
@@ -338,14 +338,14 @@ component URI.
-
+
-
+
CamelMetricsMeterMark
Override mark value in URI
Long
@@ -359,11 +359,11 @@ component URI.
.to("metrics:meter:simple.meter?mark=123")
.to("direct:out");
-# Metrics type timer
+## Metrics type timer
metrics:timer:metricname[?options]
-## Options
+### Options
@@ -372,14 +372,14 @@ component URI.
-
+
-
+
action
-
start or stop
@@ -401,7 +401,7 @@ and warning is logged.
`TimerContext` objects are stored as Exchange properties between
different Metrics component calls.
-## Headers
+### Headers
Message header can be used to override action value specified in Metrics
component URI.
@@ -413,14 +413,14 @@ component URI.
-
+
-
+
CamelMetricsTimerAction
Override timer action in URI
org.apache.camel.component.metrics.MetricsTim
.to("metrics:timer:simple.timer")
.to("direct:out");
-# Metric type gauge
+## Metric type gauge
metrics:gauge:metricname[?options]
-## Options
+### Options
@@ -448,14 +448,14 @@ style="text-align: left;">org.apache.camel.component.metrics.MetricsTim
-
+
-
+
subject
-
Any object to be observed by the
@@ -472,7 +472,7 @@ registered.
.to("metrics:gauge:simple.gauge?subject=#mySubjectBean")
.to("direct:out");
-## Headers
+### Headers
Message headers can be used to override `subject` values specified in
Metrics component URI. Note: if `CamelMetricsName` header is specified,
@@ -486,14 +486,14 @@ URI.
-
+
-
+
CamelMetricsGaugeSubject
Override subject value in URI
Object
@@ -507,7 +507,7 @@ URI.
.to("metrics:counter:simple.gauge?subject=#mySubjectBean")
.to("direct:out");
-# MetricsRoutePolicyFactory
+## MetricsRoutePolicyFactory
This factory allows adding a `RoutePolicy` for each route that exposes
route utilization statistics using Dropwizard metrics. This factory can
@@ -536,14 +536,14 @@ following options:
-
+
-
+
useJmx
false
Whether to report fine-grained
@@ -555,18 +555,18 @@ type in the JMX tree. That mbean has a single operation to output the
statistics using json. Setting useJmx to true is only
needed if you want fine-grained mbeans per statistics type.
-
+
jmxDomain
org.apache.camel.metrics
The JMX domain name
-
+
prettyPrint
false
Whether to use pretty print when
outputting statistics in json format
-
+
metricsRegistry
Allow using a shared
@@ -574,19 +574,19 @@ outputting statistics in json format
then Camel will create a shared instance used by the
CamelContext.
-
+
rateUnit
TimeUnit.SECONDS
The unit to use for rate in the metrics
reporter or when dumping the statistics as json.
-
+
durationUnit
TimeUnit.MILLISECONDS
The unit to use for duration in the
metrics reporter or when dumping the statistics as json.
-
+
namePattern
##name##.##routeId##.##type##
@@ -612,7 +612,7 @@ as shown below:
...
}
-# MetricsMessageHistoryFactory
+## MetricsMessageHistoryFactory
This factory allows using metrics to capture Message History performance
statistics while routing messages. It works by using a metrics Timer for
@@ -637,14 +637,14 @@ The following options are supported on the factory:
-
+
-
+
useJmx
false
Whether to report fine-grained
@@ -656,18 +656,18 @@ type in the JMX tree. That mbean has a single operation to output the
statistics using json. Setting useJmx to true is only
needed if you want fine-grained mbeans per statistics type.
-
+
jmxDomain
org.apache.camel.metrics
The JMX domain name
-
+
prettyPrint
false
Whether to use pretty print when
outputting statistics in json format
-
+
metricsRegistry
Allow using a shared
@@ -675,19 +675,19 @@ outputting statistics in json format
then Camel will create a shared instance used by the
CamelContext.
-
+
rateUnit
TimeUnit.SECONDS
The unit to use for rate in the metrics
reporter or when dumping the statistics as json.
-
+
durationUnit
TimeUnit.MILLISECONDS
The unit to use for duration in the
metrics reporter or when dumping the statistics as json.
-
+
namePattern
##name##.##routeId##.###id###.##type##
@@ -714,7 +714,7 @@ From Java code, you can get the service from the CamelContext as shown:
And the JMX API the MBean is registered in the `type=services` tree with
`name=MetricsMessageHistoryService`.
-# InstrumentedThreadPoolFactory
+## InstrumentedThreadPoolFactory
This factory allows you to gather performance information about Camel
Thread Pools by injecting a `InstrumentedThreadPoolFactory` which
diff --git a/camel-micrometer-prometheus.md b/camel-micrometer-prometheus.md
new file mode 100644
index 0000000000000000000000000000000000000000..af9fc8defcd4e7f066e1d9af888cba245c3df724
--- /dev/null
+++ b/camel-micrometer-prometheus.md
@@ -0,0 +1,107 @@
+# Micrometer-prometheus.md
+
+**Since Camel 4.3**
+
+The camel-micrometer-prometheus component is used when running Camel
+standalone (Camel Main) to integrate with the Micrometer Prometheus
+Registry.
+
+# Usage
+
+## Auto-detection from classpath
+
+To use this implementation, all you need to do is add the
+`camel-micrometer-prometheus` dependency to the classpath and turn on
+metrics in `application.properties`, such as:
+
+ # enable HTTP server with metrics
+ camel.server.enabled=true
+ camel.server.metricsEnabled=true
+
+ # turn on micrometer metrics
+ camel.metrics.enabled=true
+ # include more camel details
+ camel.metrics.enableMessageHistory=true
+ # include additional out-of-the-box micrometer metrics for cpu, jvm and used file descriptors
+ camel.metrics.binders=processor,jvm-info,file-descriptor
+
+## List of known binders from Micrometer
+
+The following binders, which come out of the box from Micrometer, can
+be configured with `camel.metrics.binders`:
+
+| Binder Name | Description |
+|---|---|
+| class-loader | JVM class loading metrics |
+| commons-object-pool2 | Apache Commons Pool 2.x metrics |
+| file-descriptor | File descriptor metrics gathered by the JVM |
+| hystrix-metrics-binder | Hystrix Circuit Breaker metrics |
+| jvm-compilation | JVM compilation metrics |
+| jvm-gc | Garbage collection and GC pauses |
+| jvm-heap-pressure | Provides methods to access measurements of low pool memory and heavy GC overhead |
+| jvm-info | JVM information |
+| jvm-memory | Utilization of various memory and buffer pools |
+| jvm-thread | JVM threads statistics |
+| log4j2 | Apache Log4j 2 statistics |
+| logback | Logback logger statistics |
+| processor | CPU processing statistics |
+| uptime | Uptime statistics |
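For example, to enable a handful of the JVM binders listed above (any subset can be combined, comma-separated; this particular selection is just an illustration):

```properties
# application.properties
camel.metrics.binders=jvm-gc,jvm-memory,jvm-thread,uptime
```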
diff --git a/camel-micrometer.md b/camel-micrometer.md
index 714e3557427d15e63207399111aa67c43551ac1a..6f6c27373f2dd23248c821339d692f70c423c632 100644
--- a/camel-micrometer.md
+++ b/camel-micrometer.md
@@ -35,7 +35,9 @@ this component:
# Options
-# Meter Registry
+# Usage
+
+## Meter Registry
By default the Camel Micrometer component creates a
`SimpleMeterRegistry` instance, suitable mainly for testing. You should
@@ -44,7 +46,7 @@ Micrometer registries primarily determine the backend monitoring system
to be used. A `CompositeMeterRegistry` can be used to address more than
one monitoring target.
-# Default Camel Metrics
+## Default Camel Metrics
Some Camel specific metrics are available out of the box.
@@ -55,83 +57,83 @@ Some Camel specific metrics are available out of the box.
-
+
-
+
camel.message.history
timer
Sample of performance of each node in
the route when message history is enabled
-
+
camel.routes.added
gauge
Number of routes in total
-
+
camel.routes.reloaded
gauge
Number of routes that has been
reloaded
-
+
camel.routes.running
gauge
Number of routes currently
running
-
+
camel.exchanges.inflight
gauge
Route inflight messages
-
+
camel.exchanges.total
counter
Total number of processed
exchanges
-
+
camel.exchanges.succeeded
counter
Number of successfully completed
exchanges
-
+
camel.exchanges.failed
counter
Number of failed exchanges
-
+
camel.exchanges.failures.handled
counter
Number of failures handled
-
+
camel.exchanges.external.redeliveries
counter
Number of external initiated
redeliveries (such as from JMS broker)
-
+
camel.exchange.event.notifier
gauge + summary
Metrics for messages created, sent,
completed, and failed events
-
+
camel.route.policy
gauge + summary
Route performance metrics
-
+
camel.route.policy.long.task
gauge + summary
Route long task metric
@@ -139,7 +141,7 @@ completed, and failed events
-## Using legacy metrics naming
+### Using legacy metrics naming
In Camel 3.20 or older, then the naming of metrics is using *camelCase*
style. However, since Camel 3.21 onwards, the naming is using the
@@ -163,7 +165,7 @@ The naming style can be configured on:
- `MicrometerMessageHistoryFactory`
-# Usage of producers
+## Usage of producers
Each meter has type and name. Supported types are
[counter](##MicrometerComponent-counter), [distribution
@@ -180,7 +182,7 @@ strings that are also evaluated as `Simple` expression. E.g., the URI
parameter `tags=X=${header.Y}` would assign the current value of header
`Y` to the key `X`.
-## Headers
+### Headers
The meter name defined in URI can be overridden by populating a header
with name `CamelMetricsName`. The meter tags defined as URI parameters
@@ -203,11 +205,11 @@ Micrometer endpoint finishes processing of exchange. While processing
exchange Micrometer endpoint will catch all exceptions and write log
entry using level `warn`.
-# Counter
+## Counter
micrometer:counter:name[?options]
-## Options
+### Options
@@ -216,20 +218,20 @@ entry using level `warn`.
-
+
-
+
increment
-
Double value to add to the
counter
-
+
decrement
-
Double value to subtract from the
@@ -262,7 +264,7 @@ that evaluates to 3.0, the `simple.counter` counter is decremented by
.to("micrometer:counter:simple.counter?decrement=${header.X}")
.to("direct:out");
-## Headers
+### Headers
Like in `camel-metrics`, specific Message headers can be used to
override `increment` and `decrement` values specified in the Micrometer
@@ -275,20 +277,20 @@ endpoint URI.
-
+
-
+
CamelMetricsCounterIncrement
Override increment value in
URI
Double
-
+
CamelMetricsCounterDecrement
Override decrement value in
URI
@@ -309,11 +311,11 @@ URI
.to("micrometer:counter:body.length")
.to("direct:out");
-# Distribution Summary
+## Distribution Summary
micrometer:summary:metricname[?options]
-## Options
+### Options
@@ -322,14 +324,14 @@ URI
-
+
-
+
value
-
Value to use in histogram
@@ -358,7 +360,7 @@ registered with the `simple.histogram`:
.to("micrometer:summary:simple.histogram?value=${header.X}")
.to("direct:out");
-## Headers
+### Headers
Like in `camel-metrics`, a specific Message header can be used to
override the value specified in the Micrometer endpoint URI.
@@ -370,14 +372,14 @@ override the value specified in the Micrometer endpoint URI.
-
+
-
+
CamelMetricsHistogramValue
Override histogram value in
URI
@@ -392,11 +394,11 @@ URI
.to("micrometer:summary:simple.histogram?value=700")
.to("direct:out")
-# Timer
+## Timer
micrometer:timer:metricname[?options]
-## Options
+### Options
@@ -405,14 +407,14 @@ URI
-
+
-
+
action
-
start or stop
@@ -437,7 +439,7 @@ different Metrics component calls.
`action` is evaluated as a `Simple` expression returning a result of
type `MicrometerTimerAction`.
-## Headers
+### Headers
Like in `camel-metrics`, a specific Message header can be used to
override action value specified in the Micrometer endpoint URI.
@@ -449,14 +451,14 @@ override action value specified in the Micrometer endpoint URI.
-
+
-
+
CamelMetricsTimerAction
Override timer action in URI
org.apache.camel.component.micrometer.Microme
.to("micrometer:timer:simple.timer")
.to("direct:out");
-# Using Micrometer route policy factory
+## Using Micrometer route policy factory
`MicrometerRoutePolicyFactory` allows to add a RoutePolicy for each
route to expose route utilization statistics using Micrometer. This
@@ -500,33 +502,33 @@ the following options:
-
+
-
+
prettyPrint
false
Whether to use pretty print when
outputting statistics in json format
-
+
meterRegistry
Allow using a shared
MeterRegistry. If none is provided, then Camel will create
a shared instance used by the CamelContext.
-
+
durationUnit
TimeUnit.MILLISECONDS
The unit to use for duration in when
dumping the statistics as json.
-
+
configuration
see below
-
+
-
+
contextEnabled
true
whether to include counter for context
level metrics
-
+
routeEnabled
true
whether to include counter for route
level metrics
-
+
additionalCounters
true
activates all additional
counters
-
+
exchangesSucceeded
true
activates counter for succeeded
exchanges
-
+
exchangesFailed
true
activates counter for failed
exchanges
-
+
exchangesTotal
true
activates counter for total count of
exchanges
-
+
externalRedeliveries
true
activates counter for redeliveries of
exchanges
-
+
failuresHandled
true
activates counter for handled
failures
-
+
longTask
false
activates long task timer (current
processing time for micrometer)
-
+
timerInitiator
null
Consumer<Timer.Builder> for
custom initialize Timer
-
+
longTaskInitiator
null
Consumer<LongTaskTimer.Builder>
@@ -623,7 +625,7 @@ for custom initialize LongTaskTimer
If JMX is enabled in the CamelContext, the MBean is registered in the
`type=services` tree with `name=MicrometerRoutePolicy`.
-# Using Micrometer message history factory
+## Using Micrometer message history factory
`MicrometerMessageHistoryFactory` allows to use metrics to capture
Message History performance statistics while routing messages. It works
@@ -648,27 +650,27 @@ The following options are supported on the factory:
-
+
-
+
prettyPrint
false
Whether to use pretty print when
outputting statistics in json format
-
+
meterRegistry
Allow using a shared
MeterRegistry. If none is provided, then Camel will create
a shared instance used by the CamelContext.
-
+
durationUnit
TimeUnit.MILLISECONDS
The unit to use for duration when
@@ -688,7 +690,7 @@ From Java code, you can get the service from the CamelContext as shown:
If JMX is enabled in the CamelContext, the MBean is registered in the
`type=services` tree with `name=MicrometerMessageHistory`.
-# Micrometer event notification
+## Micrometer event notification
There is a `MicrometerRouteEventNotifier` (counting added and running
routes) and a `MicrometerExchangeEventNotifier` (timing exchanges from
@@ -709,7 +711,7 @@ From Java code, you can get the service from the CamelContext as shown:
If JMX is enabled in the CamelContext, the MBean is registered in the
`type=services` tree with `name=MicrometerEventNotifier`.
-# Instrumenting Camel thread pools
+## Instrumenting Camel thread pools
`InstrumentedThreadPoolFactory` allows you to gather performance
information about Camel Thread Pools by injecting a
@@ -717,7 +719,7 @@ information about Camel Thread Pools by injecting a
inside of Camel. See more details at [Threading
Model](#manual::threading-model.adoc).
-# Exposing Micrometer statistics in JMX
+## Exposing Micrometer statistics in JMX
Micrometer uses `MeterRegistry` implementations to publish statistics.
While in production scenarios it is advisable to select a dedicated
@@ -762,7 +764,7 @@ return meterRegistry;
The `HierarchicalNameMapper` strategy determines how meter name and tags
are assembled into an MBean name.
-# Using Camel Micrometer with Camel Main
+## Using Camel Micrometer with Camel Main
When you use Camel standalone (`camel-main`), then if you need to expose
metrics for Prometheus, then you can use `camel-micrometer-prometheus`
@@ -780,7 +782,7 @@ as shown:
# include additional out-of-the-box micrometer metrics for cpu, jvm and used file descriptors
camel.metrics.binders=processor,jvm-info,file-descriptor
-# Using Camel Micrometer with Spring Boot
+## Using Camel Micrometer with Spring Boot
When you use `camel-micrometer-starter` with Spring Boot, then Spring
Boot autoconfiguration will automatically enable metrics capture if a
diff --git a/camel-microprofile-config.md b/camel-microprofile-config.md
new file mode 100644
index 0000000000000000000000000000000000000000..ce1c4d76babcba47512ea0493f51964132c99864
--- /dev/null
+++ b/camel-microprofile-config.md
@@ -0,0 +1,20 @@
+# Microprofile-config.md
+
+**Since Camel 3.0**
+
+The microprofile-config component is used for bridging the Eclipse
+MicroProfile Config with the Properties Component. This allows using
+configuration management from Eclipse MicroProfile with Camel.
+
+To enable this, add this component to the classpath and Camel should
+auto-detect this when starting up.
+
+# Usage
+
+## Register manually
+
+You can also register the microprofile-config component manually with
+the Apache Camel Properties Component as shown below:
+
+ PropertiesComponent pc = (PropertiesComponent) camelContext.getPropertiesComponent();
+ pc.addPropertiesSource(new CamelMicroProfilePropertiesSource());
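Once the source is registered (or auto-detected), MicroProfile Config values resolve through the regular Camel property placeholder syntax, e.g. `{{hi}}`. A hypothetical property file for illustration:

```properties
# META-INF/microprofile-config.properties (hypothetical example)
hi = Hello World
```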
diff --git a/camel-microprofile-fault-tolerance.md b/camel-microprofile-fault-tolerance.md
new file mode 100644
index 0000000000000000000000000000000000000000..537edf6b8b53ee161d99a77dd5ea6b9c03ef942d
--- /dev/null
+++ b/camel-microprofile-fault-tolerance.md
@@ -0,0 +1,19 @@
+# Microprofile-fault-tolerance.md
+
+**Since Camel 3.3**
+
+This component supports the Circuit Breaker EIP with the MicroProfile
+Fault Tolerance library.
+
+For more details, see the [Circuit Breaker
+EIP](#eips:circuitBreaker-eip.adoc) documentation.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-microprofile-fault-tolerance</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-microprofile-health.md b/camel-microprofile-health.md
new file mode 100644
index 0000000000000000000000000000000000000000..54ad00b7d1535582e410871ab8692a61b4d0673d
--- /dev/null
+++ b/camel-microprofile-health.md
@@ -0,0 +1,60 @@
+# Microprofile-health.md
+
+**Since Camel 3.0**
+
+The microprofile-health component is used for bridging [Eclipse
+MicroProfile
+Health](https://microprofile.io/project/eclipse/microprofile-health)
+checks with Camel’s own Health Check API.
+
+This enables you to write checks using the Camel health APIs and have
+them exposed via MicroProfile Health readiness and liveness checks.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-microprofile-health</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.x.x</version>
+    </dependency>
+
+# Usage
+
+This component provides a custom `HealthCheckRegistry` implementation
+that needs to be registered on the `CamelContext`.
+
+ HealthCheckRegistry registry = new CamelMicroProfileHealthCheckRegistry();
+ camelContext.setExtension(HealthCheckRegistry.class, registry);
+
+By default, Camel health checks are registered as both MicroProfile
+Health liveness and readiness checks. To have finer control over whether
+a Camel health check should be considered either a readiness or liveness
+check, you can extend `AbstractHealthCheck` and override the
+`isLiveness()` and `isReadiness()` methods.
+
+For example, to have a check registered exclusively as a liveness check:
+
+    public class MyHealthCheck extends AbstractHealthCheck {
+
+        public MyHealthCheck() {
+            super("my-liveness-check-id");
+            getConfiguration().setEnabled(true);
+        }
+
+        @Override
+        protected void doCall(HealthCheckResultBuilder builder, Map<String, Object> options) {
+            builder.detail("some-detail-key", "some-value");
+
+            if (someSuccessCondition) {
+                builder.up();
+            } else {
+                builder.down();
+            }
+        }
+
+        @Override
+        public boolean isReadiness() {
+            return false;
+        }
+    }
diff --git a/camel-milvus.md b/camel-milvus.md
index c68aca77978ab15b5662c91612f24004765d5a90..3e08d681aa2a0495b7872a515da6c1c6f31df85c 100644
--- a/camel-milvus.md
+++ b/camel-milvus.md
@@ -14,7 +14,9 @@ Vector Database](https://https://milvus.io/).
Where **collection** represents a named set of points (vectors with a
payload) defined in your database.
-# Collection Samples
+# Examples
+
+## Collection Examples
In the route below, we use the milvus component to create a collection
named *test* with the given parameters:
@@ -57,9 +59,9 @@ FieldType fieldType1 = FieldType.newBuilder()
.build())
.to("milvus:test");
-# Points Samples
+## Points Examples
-## Upsert
+### Upsert
In the route below we use the milvus component to perform insert on
points in the collection named *test*:
@@ -98,7 +100,7 @@ vectors.add(vector);
.withCollectionName("test")
.withFields(fields)
.build())
- .to("qdrant:test");
+ .to("milvus:test");
## Search
@@ -128,7 +130,53 @@ return vector;
.withOutputFields(Lists.newArrayList("userAge"))
.withConsistencyLevel(ConsistencyLevelEnum.STRONG)
.build())
- .to("qdrant:myCollection");
+ .to("milvus:myCollection");
+
+## Relation with Langchain4j-Embeddings component
+
+The Milvus component provides a data type transformer from
+langchain4j-embeddings output to an insert/upsert object compatible
+with Milvus.
+
+As an example, you could think about these routes:
+
+Java
+
+
+
+ protected RoutesBuilder createRouteBuilder() {
+ return new RouteBuilder() {
+ public void configure() {
+ from("direct:in")
+ .to("langchain4j-embeddings:test")
+ .setHeader(Milvus.Headers.ACTION).constant(MilvusAction.INSERT)
+ .setHeader(Milvus.Headers.KEY_NAME).constant("userID")
+ .setHeader(Milvus.Headers.KEY_VALUE).constant(Long.valueOf("3"))
+ .transform(new org.apache.camel.spi.DataType("milvus:embeddings"))
+ .to(MILVUS_URI);
+
+ from("direct:up")
+ .to("langchain4j-embeddings:test")
+ .setHeader(Milvus.Headers.ACTION).constant(MilvusAction.UPSERT)
+ .setHeader(Milvus.Headers.KEY_NAME).constant("userID")
+ .setHeader(Milvus.Headers.KEY_VALUE).constant(Long.valueOf("3"))
+ .transform(new org.apache.camel.spi.DataType("milvus:embeddings"))
+ .to(MILVUS_URI);
+ }
+ };
+ }
+
+It’s important to note that the Milvus SDK doesn’t support upsert for
+autoID fields: if you mark a field as key and set autoID to true, the
+upsert won’t be possible.
+
+That’s why, in the example, we set the userID as keyName with a
+keyValue of 3. This is particularly important when you design your
+Milvus database.
+
+The transformer only supports insert/upsert objects, so the only
+operations you can set via the header are INSERT and UPSERT; otherwise
+the transformer will fail with an error log.
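A minimal plain-Java illustration of that rule (a sketch only, with a made-up class name, not the actual Camel transformer code):

```java
public class MilvusActionCheck {

    // Mirrors the rule described above: the embeddings data type
    // transformer only accepts insert/upsert actions.
    public static boolean isSupported(String action) {
        return "INSERT".equals(action) || "UPSERT".equals(action);
    }

    public static void main(String[] args) {
        System.out.println(isSupported("INSERT"));  // true
        System.out.println(isSupported("SEARCH"));  // false
    }
}
```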
## Component Configurations
diff --git a/camel-mimeMultipart-dataformat.md b/camel-mimeMultipart-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..8c2faac4e2d7a7c0ffb025730cba6d9ed761a67e
--- /dev/null
+++ b/camel-mimeMultipart-dataformat.md
@@ -0,0 +1,209 @@
+# MimeMultipart-dataformat.md
+
+**Since Camel 2.17**
+
+This data format can convert a Camel message with attachments into a
+Camel message that has a MIME-Multipart message as its message body
+(and no attachments).
+
+The use case for this is to enable the user to send attachments over
+endpoints that do not directly support attachments, either as a special
+protocol implementation (e.g., sending a MIME-Multipart over an HTTP
+endpoint) or as a kind of tunneling solution (e.g., because camel-jms
+does not support attachments, you can marshal a message with
+attachments into a MIME-Multipart, send that to a JMS queue, receive
+the message from the JMS queue, and unmarshal it again into a message
+body with attachments).
+
+The marshal option of the mimeMultipart data format will convert a
+message with attachments into a MIME-Multipart message. If the parameter
+`multipartWithoutAttachment` is set to true, it will also marshal
+messages without attachments into a multipart message with a single
+part; if the parameter is set to false, it will leave such messages
+alone.
+
+MIME headers of the multipart, such as "MIME-Version" and
+"Content-Type", are set as Camel headers on the message. If the
+parameter "headersInline" is set to true, a MIME multipart message is
+created in any case, and the MIME headers of the multipart are written
+as part of the message body instead of as Camel headers.
+
+The unmarshal option of the mimeMultipart data format will convert a
+MIME-Multipart message into a camel message with attachments and leave
+other messages alone. MIME-Headers of the MIME-Multipart message have to
+be set as Camel headers. The unmarshalling will only take place if the
+"Content-Type" header is set to a "multipart" type. If the option
+"headersInline" is set to true, the body is always parsed as a MIME
+message. As a consequence, if the message body is a stream and stream
+caching is not enabled, a message body that is actually not a MIME
+message with MIME headers in the message body will be replaced by an
+empty message.
+
+# Options
+
+# Message Headers (marshal)
+
+| Header | Type | Description |
+|---|---|---|
+| Message-Id | String | The marshal operation will set this parameter to the generated MIME message id if the "headersInline" parameter is set to false. |
+| MIME-Version | String | The marshal operation will set this parameter to the applied MIME version (1.0) if the "headersInline" parameter is set to false. |
+| Content-Type | String | The content of this header will be used as a content type for the message body part. If no content type is set, "application/octet-stream" is assumed. After the marshal operation, the content type is set to "multipart/related" or empty if the "headersInline" parameter is set to true. |
+| Content-Encoding | String | If the incoming content type is "text/*", the content encoding will be set to the encoding parameter of the Content-Type MIME header of the body part. Furthermore, the given charset is applied for text to binary conversions. |
+
+# Message Headers (unmarshal)
+
+| Header | Type | Description |
+|---|---|---|
+| Content-Type | String | If this header is not set to multipart/*, the unmarshal operation will not do anything. In other cases, the multipart will be parsed into a Camel message with attachments and the header is set to the Content-Type header of the body part, except if this is application/octet-stream. In the latter case, the header is removed. |
+| Content-Encoding | String | If the content-type of the body part contains an encoding parameter, this header will be set to the value of this encoding parameter (converted from MIME encoding descriptor to Java encoding descriptor). |
+| MIME-Version | String | The unmarshal operation will read this header and use it for parsing the MIME multipart. The header is removed afterward. |
+
+# Examples
+
+ from(...).marshal().mimeMultipart()
+
+With a message where no Content-Type header is set, this will create a
+message with the following Camel headers:
+
+**Camel Message Headers**
+
+ Content-Type=multipart/mixed; \n boundary="----=_Part_0_14180567.1447658227051"
+ Message-Id=<...>
+ MIME-Version=1.0
+
+The message body will be:
+
+**Camel Message Body**
+
+ ------=_Part_0_14180567.1447658227051
+ Content-Type: application/octet-stream
+ Content-Transfer-Encoding: base64
+ Qm9keSB0ZXh0
+ ------=_Part_0_14180567.1447658227051
+ Content-Type: application/binary
+ Content-Transfer-Encoding: base64
+ Content-Disposition: attachment; filename="Attachment File Name"
+ AAECAwQFBgc=
+ ------=_Part_0_14180567.1447658227051--
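To make the boundary structure above concrete, here is a minimal plain-JDK sketch (not Camel or JavaMail code; the helper name is made up) that splits such a multipart body on its boundary and keeps the non-empty parts:

```java
import java.util.ArrayList;
import java.util.List;

public class MultipartSplitter {

    // Splits a MIME multipart body into its parts, given the boundary
    // from the Content-Type header (without the leading "--").
    public static List<String> split(String body, String boundary) {
        List<String> parts = new ArrayList<>();
        String delimiter = "--" + boundary;
        for (String chunk : body.split(java.util.regex.Pattern.quote(delimiter))) {
            String trimmed = chunk.trim();
            // Skip the preamble and the final "--" terminator.
            if (!trimmed.isEmpty() && !trimmed.equals("--")) {
                parts.add(trimmed);
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        String boundary = "----=_Part_0_14180567.1447658227051";
        String body = "--" + boundary + "\r\n"
                + "Content-Type: application/octet-stream\r\n\r\nQm9keSB0ZXh0\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Type: application/binary\r\n\r\nAAECAwQFBgc=\r\n"
                + "--" + boundary + "--";
        System.out.println(split(body, boundary).size()); // prints 2
    }
}
```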
+
+A message with the header Content-Type set to "text/plain" sent to the
+route
+
+ from("...").marshal().mimeMultipart("related", true, true, "(included|x-.*)", true);
+
+will create a message without any specific MIME headers set as Camel
+headers (the Content-Type header is removed from the Camel message) and
+the following message body, which also includes all headers of the
+original message starting with "x-" and the header named "included":
+
+**Camel Message Body**
+
+ Message-ID: <...>
+ MIME-Version: 1.0
+ Content-Type: multipart/related;
+ boundary="----=_Part_0_1134128170.1447659361365"
+ x-bar: also there
+ included: must be included
+ x-foo: any value
+
+ ------=_Part_0_1134128170.1447659361365
+ Content-Type: text/plain
+ Content-Transfer-Encoding: 8bit
+
+ Body text
+ ------=_Part_0_1134128170.1447659361365
+ Content-Type: application/binary
+ Content-Transfer-Encoding: binary
+ Content-Disposition: attachment; filename="Attachment File Name"
+
+ [binary content]
+ ------=_Part_0_1134128170.1447659361365
+
+# Dependencies
+
+To use MIME-Multipart in your Camel routes, you need to add a dependency
+on **camel-mail**, which implements this data format.
+
+If you use Maven, you can add the following to your pom.xml:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-mail</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-mina.md b/camel-mina.md
index bb97a39ae0d20b62fa5f5b25ca49b5a5d056b0ff..7e47647807fbe5ff6ff408411bf9c5fde464c447 100644
--- a/camel-mina.md
+++ b/camel-mina.md
@@ -48,7 +48,9 @@ content—message headers and exchange properties are not sent.
However, the option, **transferExchange**, does allow you to transfer
the exchange itself over the wire. See options below.
-# Using a custom codec
+# Usage
+
+## Using a custom codec
See the Apache MINA documentation on how to write your own codec. To use your custom codec with
`camel-mina`, you should register your codec in the Registry; for
@@ -56,7 +58,24 @@ example, by creating a bean in the Spring XML file. Then use the `codec`
option to specify the bean ID of your codec. See
[HL7](#dataformats:hl7-dataformat.adoc) that has a custom codec.
-## Sample with sync=false
+## Get the IoSession for message
+
+You can get the IoSession from the message header with this key
+`MinaConstants.MINA_IOSESSION`, and also get the local host address with
+the key `MinaConstants.MINA_LOCAL_ADDRESS` and remote host address with
+the key `MinaConstants.MINA_REMOTE_ADDRESS`.
+
+## Configuring Mina filters
+
+Filters permit you to use some Mina Filters, such as `SslFilter`. You
+can also implement some customized filters. Please note that `codec` and
+`logger` are also implemented as Mina filters of the type, `IoFilter`.
+Any filters you may define are appended to the end of the filter chain;
+that is, after `codec` and `logger`.
+
+# Examples
+
+## Example with sync=false
In this sample, Camel exposes a service that listens for TCP connections
on port 6200. We use the **textline** codec. In our route, we create a
@@ -74,14 +93,14 @@ it on port 6200.
MockEndpoint.assertIsSatisfied(context);
-## Sample with sync=true
+## Example with sync=true
In the next sample, we have a more common use case where we expose a TCP
-service on port 6201 also use the textline codec. However, this time we
-want to return a response, so we set the `sync` option to `true` on the
-consumer.
+service on port 6201, also using the `textline` codec. However, this
+time we want to return a response, so we set the `sync` option to
+`true` on the consumer.
- from("mina:tcp://localhost:" + port2 + "?textline=true&sync=true").process(new Processor() {
+ fromF("mina:tcp://localhost:%d?textline=true&sync=true", port2).process(new Processor() {
public void process(Exchange exchange) throws Exception {
String body = exchange.getIn().getBody(String.class);
exchange.getOut().setBody("Bye " + body);
@@ -96,7 +115,7 @@ fact, something we have dynamically set in our processor code logic.
String response = (String)template.requestBody("mina:tcp://localhost:" + port2 + "?textline=true&sync=true", "World");
assertEquals("Bye World", response);
-# Sample with Spring DSL
+## Example with Spring DSL
Spring DSL can also be used for [MINA](#mina-component.adoc). In the
sample below, we expose a TCP server on port 5555:
@@ -134,21 +153,6 @@ written the `bye` message back to the client:
}
});
-# Get the IoSession for message
-
-You can get the IoSession from the message header with this key
-`MinaConstants.MINA_IOSESSION`, and also get the local host address with
-the key `MinaConstants.MINA_LOCAL_ADDRESS` and remote host address with
-the key `MinaConstants.MINA_REMOTE_ADDRESS`.
-
-# Configuring Mina filters
-
-Filters permit you to use some Mina Filters, such as `SslFilter`. You
-can also implement some customized filters. Please note that `codec` and
-`logger` are also implemented as Mina filters of the type, `IoFilter`.
-Any filters you may define are appended to the end of the filter chain;
-that is, after `codec` and `logger`.
-
## Component Configurations
diff --git a/camel-minio.md b/camel-minio.md
index 8299ba26047f4a3a7fccef2a5d9ef18e0953255c..ea4923395a775cd2f1ab850476aff75935fa4b31 100644
--- a/camel-minio.md
+++ b/camel-minio.md
@@ -27,10 +27,12 @@ the following snippet:
from("minio://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&objectName=hello.txt")
.to("file:/var/downloaded");
-You have to provide the minioClient in the Registry or your accessKey
-and secretKey to access the [Minio](https://min.io/).
+You have to provide the minioClient in the Registry or your `accessKey`
+and `secretKey` to access the [Minio](https://min.io/).
-# Batch Consumer
+# Usage
+
+## Batch Consumer
This component implements the Batch Consumer.
@@ -43,25 +45,25 @@ messages.
The Camel Minio component provides the following operations on the
producer side:
-- copyObject
+- `copyObject`
-- deleteObject
+- `deleteObject`
-- deleteObjects
+- `deleteObjects`
-- listBuckets
+- `listBuckets`
-- deleteBucket
+- `deleteBucket`
-- listObjects
+- `listObjects`
-- getObject (this will return a MinioObject instance)
+- `getObject` (this will return a `MinioObject` instance)
-- getObjectRange (this will return a MinioObject instance)
+- `getObjectRange` (this will return a `MinioObject` instance)
-- createDownloadLink (this will return a Presigned download Url)
+- `createDownloadLink` (this will return a presigned download URL)
-- createUploadLink (this will return a Presigned upload url)
+- `createUploadLink` (this will return a presigned upload URL)
## Advanced Minio configuration
@@ -75,7 +77,7 @@ configuration:
## Minio Producer Operation examples
-- CopyObject: this operation copies an object from one bucket to a
+- `CopyObject`: this operation copies an object from one bucket to a
different one
@@ -96,7 +98,7 @@ This operation will copy the object with the name expressed in the
header camelDestinationKey to the camelDestinationBucket bucket, from
the bucket mycamelbucket.
-- DeleteObject: this operation deletes an object from a bucket
+- `DeleteObject`: this operation deletes an object from a bucket
@@ -113,7 +115,7 @@ the bucket mycamelbucket.
This operation will delete the object camelKey from the bucket
mycamelbucket.
-- ListBuckets: this operation lists the buckets for this account in
+- `ListBuckets`: this operation lists the buckets for this account in
this region
@@ -124,7 +126,7 @@ mycamelbucket.
This operation will list the buckets for this account
-- DeleteBucket: this operation deletes the bucket specified as URI
+- `DeleteBucket`: this operation deletes the bucket specified as URI
parameter or header
@@ -135,7 +137,7 @@ This operation will list the buckets for this account
This operation will delete the bucket mycamelbucket
-- ListObjects: this operation list object in a specific bucket
+- `ListObjects`: this operation lists objects in a specific bucket
@@ -145,7 +147,8 @@ This operation will delete the bucket mycamelbucket
This operation will list the objects in the mycamelbucket bucket
-- GetObject: this operation gets a single object in a specific bucket
+- `GetObject`: this operation gets a single object in a specific
+ bucket
@@ -162,7 +165,7 @@ This operation will list the objects in the mycamelbucket bucket
This operation will return a MinioObject instance related to the
camelKey object in `mycamelbucket` bucket.
-- GetObjectRange: this operation gets a single object range in a
+- `GetObjectRange`: this operation gets a single object range in a
specific bucket
@@ -182,7 +185,7 @@ camelKey object in `mycamelbucket` bucket.
This operation will return a MinioObject instance related to the
camelKey object in `mycamelbucket` bucket, containing bytes from 0 to 9.
-- createDownloadLink: this operation will return a presigned url
+- `createDownloadLink`: this operation will return a presigned url
through which a file can be downloaded using GET method
@@ -198,8 +201,8 @@ camelKey object in `mycamelbucket` bucket, containing bytes from 0 to 9.
.to("minio://mycamelbucket?minioClient=#minioClient&operation=createDownloadLink")
.to("mock:result");
-- createUploadLink: this operation will return a presigned url through
- which a file can be uploaded using PUT method
+- `createUploadLink`: this operation will return a presigned url
+ through which a file can be uploaded using PUT method
@@ -214,18 +217,19 @@ camelKey object in `mycamelbucket` bucket, containing bytes from 0 to 9.
.to("minio://mycamelbucket?minioClient=#minioClient&operation=createUploadLink")
.to("mock:result");
-createDownLink and createUploadLink have a default expiry of 3600s which
-can be overridden by setting the header
-MinioConstants.PRESIGNED\_URL\_EXPIRATION\_TIME (value in seconds)
+`createDownloadLink` and `createUploadLink` have a default expiry of
+3600 seconds, which can be overridden by setting the header
+`MinioConstants.PRESIGNED_URL_EXPIRATION_TIME` (value in seconds)
-# Bucket Auto-creation
+## Bucket Auto-creation
With the option `autoCreateBucket` users are able to avoid the
-autocreation of a Minio Bucket in case it doesn’t exist. The default for
-this option is `true`. If set to false, any operation on a not-existent
-bucket in Minio won’t be successful, and an error will be returned.
+auto-creation of a Minio bucket in case it doesn’t exist. The default
+for this option is `true`. If set to `false`, any operation on a
+non-existent bucket in Minio won’t be successful, and an error will be
+returned.
-# Automatic detection of a Minio client in registry
+## Automatic detection of a Minio client in registry
The component is capable of detecting the presence of a Minio bean in
the registry. If it’s the only instance of that type, it will be used as
@@ -233,7 +237,7 @@ the client, and you won’t have to define it as uri parameter, like the
example above. This may be really useful for smarter configuration of
the endpoint.
-# Moving stuff between a bucket and another bucket
+## Moving objects between buckets
Some users like to consume objects from a bucket and move them to a
different bucket without using the `copyObject` feature of this component.
@@ -241,7 +245,7 @@ If this is the case for you, remember to remove the `bucketName` header
from the incoming exchange of the consumer. Otherwise, the file will
always be overwritten on the same original bucket.
-# MoveAfterRead consumer option
+## MoveAfterRead consumer option
In addition to `deleteAfterRead`, another option has been added:
`moveAfterRead`. With this option enabled, the consumed object will be
@@ -255,7 +259,7 @@ In this case, the objects consumed will be moved to `myothercamelbucket`
bucket and deleted from the original one (because of `deleteAfterRead`
set to true as default).
-# Using a POJO as body
+## Using a POJO as body
Sometimes building a Minio request can be complex because of the many
options. It is therefore possible to use a POJO as the body. In
@@ -265,15 +269,16 @@ List brokers request, you can do something like:
from("direct:minio")
.setBody(ListObjectsArgs.builder()
.bucket(bucketName)
- .recursive(getConfiguration().isRecursive())))
- .to("minio://test?minioClient=#minioClient&operation=listObjects&pojoRequest=true")
+ .recursive(getConfiguration().isRecursive()))
+ .to("minio://test?minioClient=#minioClient&operation=listObjects&pojoRequest=true");
In this way, you’ll pass the request directly without needing to pass
headers and options specific to this operation.
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
diff --git a/camel-mllp.md b/camel-mllp.md
index 8a4f38c5be4f34cfdd1482c9c1acecc262ee4daa..44474fa10139f86cd21d8b7228cf52e37ec33187 100644
--- a/camel-mllp.md
+++ b/camel-mllp.md
@@ -36,7 +36,9 @@ for this component:
-# MLLP Consumer
+# Usage
+
+## MLLP Consumer
The MLLP Consumer supports receiving MLLP-framed messages and sending
HL7 Acknowledgements. The MLLP Consumer can automatically generate the
@@ -48,10 +50,11 @@ acknowledgement that will be generated can be controlled by setting the
read messages without sending any HL7 Acknowledgement if the automatic
acknowledgement is disabled and the exchange pattern is `InOnly`.
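The MLLP envelope itself is simple: the payload is preceded by a start-of-block byte (`0x0B`) and terminated by an end-of-block byte plus a carriage return (`0x1C 0x0D`). A minimal plain-Java sketch of that framing (an illustration of the wire format, not the camel-mllp API):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class MllpFrame {
    static final byte START_OF_BLOCK = 0x0b;  // <VT>
    static final byte END_OF_BLOCK = 0x1c;    // <FS>
    static final byte CARRIAGE_RETURN = 0x0d; // <CR>

    // Wrap an HL7 payload in the MLLP envelope: <VT> payload <FS><CR>
    static byte[] frame(String hl7Payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(START_OF_BLOCK);
        byte[] body = hl7Payload.getBytes(StandardCharsets.ISO_8859_1);
        out.write(body, 0, body.length);
        out.write(END_OF_BLOCK);
        out.write(CARRIAGE_RETURN);
        return out.toByteArray();
    }
}
```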
-## Exchange Properties
+### Exchange Properties
-The type of acknowledgment the MLLP Consumer generates and state of the
-TCP Socket can be controlled by these properties on the Camel exchange:
+The type of acknowledgment the MLLP Consumer generates and the state of
+the TCP Socket can be controlled by these properties on the Camel
+exchange:
@@ -60,85 +63,85 @@ TCP Socket can be controlled by these properties on the Camel exchange:
-
+
Key
Type
Description
-
+
CamelMllpAcknowledgement
-byte[]
+byte[]
If present, this property will be sent
to the client as the MLLP Acknowledgement
-
+
CamelMllpAcknowledgementString
-String
+String
If present and
CamelMllpAcknowledgement is not present, this property will
we sent to the client as the MLLP Acknowledgement
-
+
CamelMllpAcknowledgementMsaText
-String
+String
If neither
CamelMllpAcknowledgement or
CamelMllpAcknowledgementString are present and autoAck is
true, this property can be used to specify the contents of MSA-3 in the
generated HL7 acknowledgement
-
+
CamelMllpAcknowledgementType
-String
+String
If neither
CamelMllpAcknowledgement or
CamelMllpAcknowledgementString are present and autoAck is
true, this property can be used to specify the HL7 acknowledgement type
(i.e. AA, AE, AR)
-
+
CamelMllpAutoAcknowledge
-Boolean
+Boolean
Overrides the autoAck query
parameter
-
+
CamelMllpCloseConnectionBeforeSend
-Boolean
+Boolean
If true, the Socket will be closed
before sending data
-
+
CamelMllpResetConnectionBeforeSend
-Boolean
+Boolean
If true, the Socket will be reset
before sending data
-
+
CamelMllpCloseConnectionAfterSend
-Boolean
+Boolean
If true, the Socket will be closed
immediately after sending data
-
+
CamelMllpResetConnectionAfterSend
-Boolean
+Boolean
If true, the Socket will be reset
immediately after sending any data
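For reference, the acknowledgement code above (AA, AE, AR) ends up in field MSA-1 of the generated HL7 acknowledgement, and `CamelMllpAcknowledgementMsaText` populates MSA-3. A hypothetical plain-Java sketch of building such an MSA segment (not the component's internal code):

```java
public class MsaSegment {
    // Build an HL7 MSA segment: MSA-1 = ack code (AA, AE, AR),
    // MSA-2 = control id of the message being acknowledged,
    // MSA-3 = optional text.
    static String build(String ackCode, String controlId, String text) {
        StringBuilder msa = new StringBuilder("MSA|")
                .append(ackCode).append('|').append(controlId);
        if (text != null && !text.isEmpty()) {
            msa.append('|').append(text);
        }
        return msa.toString();
    }
}
```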
-# MLLP Producer
+## MLLP Producer
The MLLP Producer supports sending MLLP-framed messages and receiving
HL7 Acknowledgements. The MLLP Producer interrogates the HL7
@@ -148,7 +151,7 @@ is raised in the event of a negative acknowledgement. The MLLP Producer
can ignore acknowledgements when configured with InOnly exchange
pattern.
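The positive/negative decision described above comes down to reading MSA-1 from the returned acknowledgement. A plain-Java sketch of that kind of check (an illustration, not the component's internal logic; it treats `AA` and the enhanced-mode `CA` as positive):

```java
public class AckCheck {
    // Extract the acknowledgement code (MSA-1) from an HL7 acknowledgement
    // and report whether it is a positive (accept) acknowledgement.
    static boolean isPositive(String hl7Ack) {
        for (String segment : hl7Ack.split("\r")) {
            if (segment.startsWith("MSA|")) {
                String code = segment.split("\\|")[1];
                return code.equals("AA") || code.equals("CA");
            }
        }
        return false; // no MSA segment found
    }
}
```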
-## Exchange Properties
+### Exchange Properties
The state of the TCP Socket can be controlled by these properties on the
Camel exchange:
@@ -160,36 +163,36 @@ Camel exchange:
-
+
Key
Type
Description
-
+
CamelMllpCloseConnectionBeforeSend
-Boolean
+Boolean
If true, the Socket will be closed
before sending data
-
+
CamelMllpResetConnectionBeforeSend
-Boolean
+Boolean
If true, the Socket will be reset
before sending data
-
+
CamelMllpCloseConnectionAfterSend
-Boolean
+Boolean
If true, the Socket will be closed
immediately after sending data
-
+
CamelMllpResetConnectionAfterSend
-Boolean
+Boolean
If true, the Socket will be reset
immediately after sending any data
diff --git a/camel-mock.md b/camel-mock.md
index 487c2c6587217af5f96ecd9eecaf9eafc4001fe5..39f599154aab563d473fdc79ce2869887422563c 100644
--- a/camel-mock.md
+++ b/camel-mock.md
@@ -12,6 +12,15 @@ Framework to simplify your unit and integration testing using
Patterns](#eips:enterprise-integration-patterns.adoc) and Camel’s large
range of Components together with the powerful Bean Integration.
+# URI format
+
+ mock:someName[?options]
+
+Where `someName` can be any string that uniquely identifies the
+endpoint.
+
+# Usage
+
The Mock component provides a powerful declarative testing mechanism,
which is similar to [jMock](http://www.jmock.org) in that it allows
declarative expectations to be created on any Mock endpoint before a
@@ -53,14 +62,9 @@ instead of adding Mock endpoints to routes directly. There are two new
options, `retainFirst` and `retainLast`, that can be used to limit the
number of messages the Mock endpoints keep in memory.
-# URI format
+# Examples
- mock:someName[?options]
-
-Where `someName` can be any string that uniquely identifies the
-endpoint.
-
-# Simple Example
+## Simple Example
Here’s a simple example of Mock endpoint in use. First, the endpoint is
resolved on the context. Then we set an expectation, and then, after the
@@ -84,7 +88,7 @@ Camel will by default wait 10 seconds when the `assertIsSatisfied()` is
invoked. This can be configured by setting the
`setResultWaitTime(millis)` method.
-# Using assertPeriod
+## Using assertPeriod
When the assertion is satisfied then Camel will stop waiting and
continue from the `assertIsSatisfied` method. That means if a new
@@ -102,7 +106,7 @@ can do that by setting the `setAssertPeriod` method, for example:
// now let's assert that the mock:foo endpoint received 2 messages
resultEndpoint.assertIsSatisfied();
-# Setting expectations
+## Setting expectations
You can see from the Javadoc of
[MockEndpoint](https://www.javadoc.io/doc/org.apache.camel/camel-mock/current/org/apache/camel/component/mock/MockEndpoint.html)
@@ -115,51 +119,51 @@ methods are as follows:
-
+
-
+
expectedMessageCount(int)
To define the expected count of
messages on the endpoint.
-
+
expectedMinimumMessageCount(int)
To define the minimum number of
expected messages on the endpoint.
-
+
expectedBodiesReceived(…)
To define the expected bodies that
should be received (in order).
-
+
expectedHeaderReceived(…)
To define the expected header that
should be received
-
+
expectsAscending(Expression)
To add an expectation that messages are
received in order, using the given Expression to compare
messages.
-
+
expectsDescending(Expression)
To add an expectation that messages are
received in order, using the given Expression to compare
messages.
-
+
expectsNoDuplicates(Expression)
To add an expectation that no duplicate
@@ -175,7 +179,7 @@ Here’s another example:
resultEndpoint.expectedBodiesReceived("firstMessageBody", "secondMessageBody", "thirdMessageBody");
-# Adding expectations to specific messages
+## Adding expectations to specific messages
In addition, you can use the
[`message(int messageIndex)`](https://javadoc.io/doc/org.apache.camel/camel-mock/latest/org/apache/camel/component/mock/MockEndpoint.html)
@@ -191,7 +195,7 @@ There are some examples of the Mock endpoint in use in the [`camel-core`
processor
tests](https://github.com/apache/camel/tree/main/core/camel-core/src/test/java/org/apache/camel/processor).
-# Mocking existing endpoints
+## Mocking existing endpoints
Camel now allows you to automatically mock existing endpoints in your
Camel routes.
@@ -233,11 +237,10 @@ more details about this at Intercept as it is the same matching function
used by Camel.
Mind that mocking endpoints causes the messages to be copied when they
-arrive at the mock.
-That means Camel will use more memory. This may not be suitable when you
-send in a lot of messages.
+arrive at the mock. That means Camel will use more memory. This may not
+be suitable when you send in a lot of messages.
-# Mocking existing endpoints using the `camel-test` component
+## Mocking existing endpoints using the `camel-test` component
Instead of using the `adviceWith` to instruct Camel to mock endpoints,
you can easily enable this behavior when using the `camel-test` Test
@@ -252,11 +255,11 @@ instead.
**`isMockEndpoints` using camel-test kit**
-# Mocking existing endpoints with XML DSL
+## Mocking existing endpoints with XML DSL
If you do not use the `camel-test` component for unit testing (as shown
-above) you can use a different approach when using XML files for
-routes.
+above) you can use a different approach when using XML files for routes.
+
The solution is to create a new XML file used by the unit test and then
include the intended XML file which has the route you want to test.
@@ -281,7 +284,7 @@ the pattern in the constructor for the bean:
-# Mocking endpoints and skip sending to original endpoint
+## Mocking endpoints and skip sending to original endpoint
Sometimes you want to easily mock and skip sending to certain endpoints.
So the message is detoured and sent to the mock endpoint only. You can
@@ -295,7 +298,7 @@ The same example using the Test Kit
**`isMockEndpointsAndSkip` using camel-test kit**
-# Limiting the number of messages to keep
+## Limiting the number of messages to keep
The [Mock](#mock-component.adoc) endpoints will by default keep a copy
of every Exchange that it received. So if you test with a lot of
@@ -323,7 +326,7 @@ methods that work on message bodies, headers, etc. will only operate on
the retained messages. In the example above, they can test only the
expectations on the 10 retained messages.
-# Testing with arrival times
+## Testing with arrival times
The [Mock](#mock-component.adoc) endpoint stores the arrival time of the
message as a property on the Exchange.
diff --git a/camel-mongodb-gridfs.md b/camel-mongodb-gridfs.md
index 07f061bdeeee77e4f99f7c81684b12cc515a2cca..162b164a7cb03eb37911f8032a6444ab8fa40c5a 100644
--- a/camel-mongodb-gridfs.md
+++ b/camel-mongodb-gridfs.md
@@ -18,37 +18,11 @@ for this component:
mongodb-gridfs:connectionBean?database=databaseName&bucket=bucketName[&moreOptions...]
-# Configuration of a database in Spring XML
+# Usage
-The following Spring XML creates a bean defining the connection to a
-MongoDB instance.
-
-
-
-
-
-
-
-
-# Sample route
-
-The following route defined in Spring XML executes the operation
-[**findOne**](#mongodb-gridfs-component.adoc) on a collection.
+## GridFS operations - producer endpoint
-**Get a file from GridFS**
-
-
-
-
-
-
-
-
-# GridFS operations - producer endpoint
-
-## count
+### count
Returns the total number of files in the collection as an Integer in
the OUT message body.
@@ -64,7 +38,7 @@ that filename.
headers.put(Exchange.FILE_NAME, "filename.txt");
Integer count = template.requestBodyAndHeaders("direct:count", query, headers);
-## listAll
+### listAll
Returns a Reader that lists all the filenames and their IDs in a
tab-separated stream.
@@ -75,7 +49,7 @@ separated stream.
filename1.txt 1252314321
filename2.txt 2897651254
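A sketch of consuming that Reader in plain Java, splitting each line on the tab separator (the filenames and IDs are the sample values above):

```java
import java.io.BufferedReader;
import java.io.Reader;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;

public class ListAllParser {
    // Parse the tab-separated "filename<TAB>id" stream returned by listAll.
    static Map<String, String> parse(Reader reader) throws Exception {
        Map<String, String> files = new LinkedHashMap<>();
        try (BufferedReader br = new BufferedReader(reader)) {
            String line;
            while ((line = br.readLine()) != null) {
                if (line.isEmpty()) continue;
                String[] parts = line.split("\t", 2);
                files.put(parts[0], parts[1]);
            }
        }
        return files;
    }
}
```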
-## findOne
+### findOne
Finds a file in the GridFS system and sets the body to an InputStream of
the content. It also provides the metadata as headers. It uses
@@ -87,7 +61,7 @@ find.
headers.put(Exchange.FILE_NAME, "filename.txt");
InputStream result = template.requestBodyAndHeaders("direct:findOne", "irrelevantBody", headers);
-## create
+### create
Creates a new file in the GridFS database. It uses the
`Exchange.FILE_NAME` from the incoming headers for the name and the body
@@ -99,7 +73,7 @@ contents (as an InputStream) as the content.
InputStream stream = ... the data for the file ...
template.requestBodyAndHeaders("direct:create", stream, headers);
-## remove
+### remove
Removes a file from the GridFS database.
@@ -108,6 +82,36 @@ Removes a file from the GridFS database.
headers.put(Exchange.FILE_NAME, "filename.txt");
template.requestBodyAndHeaders("direct:remove", "", headers);
+# Examples
+
+## Example route
+
+The following route defined in Spring XML executes the operation
+[**findOne**](#mongodb-gridfs-component.adoc) on a collection.
+
+**Get a file from GridFS**
+
+
+
+
+
+
+
+
+## Configuration of a database in Spring XML
+
+The following Spring XML creates a bean defining the connection to a
+MongoDB instance.
+
+
+
+
+
+
+
+
## Component Configurations
diff --git a/camel-multicast-eip.md b/camel-multicast-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..5fd97673d8798927afd8dc7fae150ce499255ee6
--- /dev/null
+++ b/camel-multicast-eip.md
@@ -0,0 +1,227 @@
+# Multicast-eip.md
+
+The Multicast EIP allows routing **the same** message to a number of
+[endpoints](#manual::endpoint.adoc) and processing each of them in a
+different way.
+
+
+
+
+
+The Multicast EIP has many features and is also used as a baseline for
+the [Recipient List](#recipientList-eip.adoc) and
+[Split](#split-eip.adoc) EIPs. For example, the Multicast EIP is capable
+of aggregating each multicasted message into a single *response* message
+as the result after the Multicast EIP.
+
+# Options
+
+# Exchange properties
+
+# Using Multicast
+
+The following example shows how to take a request from the `direct:a`
+endpoint, then multicast these requests to `direct:x`, `direct:y`, and
+`direct:z`.
+
+Java
+from("direct:a")
+.multicast()
+.to("direct:x")
+.to("direct:y")
+.to("direct:z");
+
+XML
+
+
+
+
+
+
+
+
+
+By default, the Multicast EIP runs in single-threaded mode, which means
+that the next multicasted message is processed only when the previous is
+finished. This means that `direct:x` must be done before Camel will call
+`direct:y`, and so on.
+
+## Multicasting with parallel processing
+
+You can enable parallel processing with Multicast EIP so each
+multicasted message is processed by its own thread in parallel.
+
+The example below enables parallel mode:
+
+Java
+from("direct:a")
+.multicast().parallelProcessing()
+.to("direct:x")
+.to("direct:y")
+.to("direct:z");
+
+XML
+
+
+
+
+
+
+
+
+
+When parallel processing is enabled, the Camel routing engine will
+continue processing using the last used thread from the parallel thread
+pool. However, if you want to use the original thread that called the
+multicast, then make sure to enable the synchronous option as well.
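Conceptually, parallel multicast hands each copy of the message to its own task from a thread pool and waits for all of them to finish. A plain-Java analogy of that behavior (not the Camel API; endpoint names are placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelMulticastSketch {
    // Send the same message to several "endpoints" in parallel and
    // collect each result, in endpoint order.
    static List<String> multicast(String message, List<String> endpoints) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(endpoints.size());
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (String endpoint : endpoints) {
                tasks.add(() -> endpoint + " got " + message);
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                results.add(f.get()); // waits until every copy is processed
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```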
+
+## Ending a Multicast block
+
+You may want to continue routing the exchange after the Multicast EIP.
+
+In the example below, sending to `mock:result` happens after the
+Multicast EIP has finished. In other words, `direct:x`, `direct:y`, and
+`direct:z` should be completed first, before the message continues.
+
+Java
+from("direct:a")
+.multicast().parallelProcessing()
+.to("direct:x")
+.to("direct:y")
+.to("direct:z")
+.end()
+.to("mock:result");
+
+Note that you need to use `end()` to mark where multicast ends, and
+where other EIPs can be added to continue the route.
+
+XML
+
+
+
+
+
+
+
+
+
+
+## Aggregating
+
+The `AggregationStrategy` is used for aggregating all the multicasted
+exchanges together into a single response exchange, which becomes the
+outgoing exchange after the Multicast EIP block.
+
+The example now aggregates with the `MyAggregationStrategy` class:
+
+Java
+from("direct:start")
+.multicast(new MyAggregationStrategy()).parallelProcessing().timeout(500)
+.to("direct:x")
+.to("direct:y")
+.to("direct:z")
+.end()
+.to("mock:result");
+
+XML
+We can refer to the FQN class name with `#class:` syntax as shown below:
+
+
+
+
+
+
+
+
+
+
+
+The Multicast, Recipient List, and Splitter EIPs have special support
+for using `AggregationStrategy` with access to the original input
+exchange. You may want to use this when you aggregate messages and one
+of them has failed, and you want to enrich the original input message
+and return it as the response; this is the `aggregate` method with three
+exchange parameters.
+
+## Stop processing in case of exception
+
+The Multicast EIP will by default continue to process the entire
+exchange even if one of the multicasted messages throws an exception
+during routing.
+
+For example, suppose you multicast to three destinations and the second
+destination fails with an exception. What Camel does by default is to
+process the remaining destinations. You have the chance to deal with
+the exception when aggregating using an `AggregationStrategy`.
+
+But sometimes you want Camel to stop and let the exception be propagated
+back, and let the Camel [Error Handler](#manual::error-handler.adoc)
+handle it. You can do this by specifying that routing should stop when
+an exception occurs. This is done with the `stopOnException` option, as
+shown below:
+
+Java
+from("direct:start")
+.multicast()
+.stopOnException().to("direct:foo", "direct:bar", "direct:baz")
+.end()
+.to("mock:result");
+
+ from("direct:foo").to("mock:foo");
+
+ from("direct:bar").process(new MyProcessor()).to("mock:bar");
+
+ from("direct:baz").to("mock:baz");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+In the example above, `MyProcessor` causes a failure and throws an
+exception. This means the Multicast EIP will stop after this, and will
+not call the last route (`direct:baz`).
+
+## Preparing the message by deep copying before multicasting
+
+The multicast EIP will copy the source exchange and multicast each copy.
+However, the copy is a shallow copy, so if you have mutable message
+bodies, any changes will be visible to the other copied messages.
+If you want to use a deep clone copy, then you need to use a custom
+`onPrepare` which allows you to create a deep copy of the message body
+in the `Processor`.
+
+Notice the `onPrepare` can be used for any kind of custom logic that you
+would like to execute before the [Exchange](#manual::exchange.adoc) is
+being multicasted.
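One common way to implement such a deep copy inside the `onPrepare` processor is to round-trip the body through serialization. A minimal plain-Java sketch of just the copying step, assuming a `Serializable` body (the Camel `Processor`/`Exchange` wiring is omitted):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class DeepCopy {
    // Deep-copy a Serializable body by serializing and deserializing it,
    // so mutations on one multicast copy cannot leak into the others.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T copy(T body) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(body);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();
        }
    }
}
```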
+
+# See Also
+
+Because the Multicast EIP is a baseline for the [Recipient
+List](#recipientList-eip.adoc) and [Split](#split-eip.adoc) EIPs, you
+can find more information in those EIPs about features that are also
+available with the Multicast EIP.
diff --git a/camel-mustache.md b/camel-mustache.md
index 0c2c4c180c0e1552e36421a24162c4fa11194479..e30e7069d8f9f0a152122a13aa3776860df194a9 100644
--- a/camel-mustache.md
+++ b/camel-mustache.md
@@ -25,7 +25,9 @@ Where **templateName** is the classpath-local URI of the template to
invoke; or the complete URL of the remote template (e.g.:
`\file://folder/myfile.mustache`).
-# Mustache Context
+# Usage
+
+## Mustache Context
Camel will provide exchange information in the Mustache context (just a
`Map`). The `Exchange` is transferred as:
@@ -36,44 +38,44 @@ Camel will provide exchange information in the Mustache context (just a
-
+
-
+
exchange
The Exchange
itself.
-
+
exchange.properties
The Exchange
properties.
-
+
variables
The variables
-
+
headers
The headers of the In message.
-
+
camelContext
The Camel Context.
-
+
request
The In message.
-
+
body
The In message body.
-
+
response
The Out message (only for InOut message
exchange pattern).
@@ -81,14 +83,14 @@ exchange pattern).
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
headers is set, then Camel uses this over the endpoint configured
resource. This allows you to provide a dynamic template at runtime.
-# Samples
+# Examples
For example, you could use something like:
@@ -112,7 +114,7 @@ dynamically via a header, so for example:
setHeader(MustacheConstants.MUSTACHE_RESOURCE_URI).constant("path/to/my/template.mustache").
to("mustache:dummy?allowTemplateFromHeader=true");
-# The Email Sample
+## The Email Example
In this sample, we want to use Mustache templating for an order
confirmation email. The email template is laid out in Mustache as:
diff --git a/camel-mvel-language.md b/camel-mvel-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..91d4370a944bb780d7f01bcf244ae296d195ed75
--- /dev/null
+++ b/camel-mvel-language.md
@@ -0,0 +1,159 @@
+# Mvel-language.md
+
+**Since Camel 2.0**
+
+Camel supports [MVEL](http://mvel.documentnode.com/) to do message
+transformations using MVEL templates.
+
+MVEL is powerful for templates, but can also be used for
+[Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc).
+
+For example, you can use MVEL in a [Predicate](#manual::predicate.adoc)
+with the [Content-Based Router](#eips:choice-eip.adoc) EIP.
+
+You can use MVEL dot notation to invoke operations. If, for instance,
+the body contains a POJO with a `getFamilyName` method, you can
+construct the expression as follows:
+
+ request.body.familyName
+
+Or use similar syntax as in Java:
+
+ getRequest().getBody().getFamilyName()
+
+# MVEL Options
+
+# Variables
+
+The following Camel-related variables are made available:
+
+| Variable | Type | Description |
+|----------|------|-------------|
+| `this` | `Exchange` | the Exchange is the root object |
+| `context` | `CamelContext` | the CamelContext |
+| `exchange` | `Exchange` | the Exchange |
+| `exchangeId` | `String` | the exchange id |
+| `exception` | `Throwable` | the Exchange exception (if any) |
+| `request` | `Message` | the message |
+| `message` | `Message` | the message |
+| `headers` | `Map` | the message headers |
+| `header(name)` | `Object` | the message header by the given name |
+| `header(name, type)` | `Type` | the message header by the given name as the given type |
+| `properties` | `Map` | the exchange properties |
+| `property(name)` | `Object` | the exchange property by the given name |
+| `property(name, type)` | `Type` | the exchange property by the given name as the given type |
+
+# Example
+
+For example, you could use MVEL inside a [Message
+Filter](#eips:filter-eip.adoc)
+
+ from("seda:foo")
+ .filter().mvel("headers.foo == 'bar'")
+ .to("seda:bar");
+
+And in XML:
+
+
+
+
+ headers.foo == 'bar'
+
+
+
+
+# Loading script from external resource
+
+You can externalize the script and have Apache Camel load it from a
+resource such as `"classpath:"`, `"file:"`, or `"http:"`. This is done
+using the following syntax: `"resource:scheme:location"`, e.g., to refer
+to a file on the classpath you can do:
+
+ .setHeader("myHeader").mvel("resource:classpath:script.mvel")
+
+# Dependencies
+
+To use MVEL in your Camel routes, you need to add the dependency on
+**camel-mvel** which implements the MVEL language.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-mvel</artifactId>
+      <version>x.x.x</version>
+    </dependency>
diff --git a/camel-mvel.md b/camel-mvel.md
index 260b65edeed342535dbf04a1b9cb989d26a2b31a..7c3b705acc3702768b2e8643ee5c6a36ec29ebe9 100644
--- a/camel-mvel.md
+++ b/camel-mvel.md
@@ -26,7 +26,9 @@ Where **templateName** is the classpath-local URI of the template to
invoke; or the complete URL of the remote template (e.g.:
`\file://folder/myfile.mvel`).
-# MVEL Context
+# Usage
+
+## MVEL Context
Camel will provide exchange information in the MVEL context (just a
`Map`). The `Exchange` is transferred as:
@@ -37,53 +39,53 @@ Camel will provide exchange information in the MVEL context (just a
-
+
-
+
exchange
The Exchange
itself
-
+
exchange.properties
The Exchange
properties
-
+
variables
The variables
-
+
headers
The headers of the message
-
+
camelContext
The CamelContext
-
+
request
The message
-
+
in
The message
-
+
body
The message body
-
+
out
The Out message (only for InOut message
exchange pattern).
-
+
response
The Out message (only for InOut message
exchange pattern).
@@ -91,7 +93,7 @@ exchange pattern).
-# Hot reloading
+## Hot reloading
The mvel template resource is, by default, hot reloadable for both file
and classpath resources (expanded jar). If you set `contentCache=true`,
@@ -99,14 +101,14 @@ Camel will only load the resource once, and thus hot reloading is not
possible. This scenario can be used in production when the resource
never changes.
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template, or the template content itself. If any of these
headers is set, then Camel uses this over the endpoint configured
resource. This allows you to provide a dynamic template at runtime.
-# Example
+# Examples
For example, you could use something like
diff --git a/camel-mybatis-bean.md b/camel-mybatis-bean.md
index d7428642e51d14f7b4872e2c109e2f464a0a5e9b..6130908947ca52d464a7a50eda1b374cf917fcd7 100644
--- a/camel-mybatis-bean.md
+++ b/camel-mybatis-bean.md
@@ -22,12 +22,13 @@ for this component:
This component will by default load the MyBatis SqlMapConfig file from
-the root of the classpath with the expected name of
-`SqlMapConfig.xml`.
+the root of the classpath with the expected name of `SqlMapConfig.xml`.
If the file is located in another location, you will need to configure
the `configurationUri` option on the `MyBatisComponent` component.
-# Message Body
+# Usage
+
+## Message Body
The response from MyBatis will only be set as the body if it’s a
`SELECT` statement. That means, for example, for `INSERT` statements
@@ -35,7 +36,7 @@ Camel will not replace the body. This allows you to continue routing and
keep the original body. The response from MyBatis is always stored in
the header with the key `CamelMyBatisResult`.
-# Samples
+# Examples
For example, if you wish to consume beans from a JMS queue and insert
them into a database, you could do the following:
diff --git a/camel-mybatis.md b/camel-mybatis.md
index 85e0a3d235b19906fa8bea70346f2fcdecb9638c..01a3668b06e455596ee4f44b179c3417354f73af 100644
--- a/camel-mybatis.md
+++ b/camel-mybatis.md
@@ -43,7 +43,7 @@ Camel will not replace the body. This allows you to continue routing and
keep the original body. The response from MyBatis is always stored in
the header with the key `CamelMyBatisResult`.
-# Samples
+# Examples
For example, if you wish to consume beans from a JMS queue and insert
them into a database, you could do the following:
diff --git a/camel-nats.md b/camel-nats.md
index 16cf928ca9d3395f0a33758e6083980650b32dfb..6f878d0f723294b705d0ae449a99d3dbc7ddf11c 100644
--- a/camel-nats.md
+++ b/camel-nats.md
@@ -22,7 +22,9 @@ for this component.
Where **topic** is the topic name
-# Configuring servers
+# Usage
+
+## Configuring servers
You configure the NATS servers on either the component or the endpoint.
@@ -53,7 +55,7 @@ urls in the `application.properties` file
camel.component.nats.servers=scott:tiger@someserver:4222,superman:123@someotherserver:42222
-# Request/Reply support
+## Request/Reply support
The producer supports request/reply where it can wait for an expected
reply message.
@@ -63,7 +65,7 @@ message as reply-message if required.
# Examples
-**Producer example:**
+## Producer example
from("direct:send")
.to("nats:mytopic");
@@ -79,7 +81,7 @@ or your token
from("direct:send")
.to("nats:mytopic?servers=token@localhost:4222");
-**Consumer example:**
+## Consumer example
from("nats:mytopic?maxMessages=5&queueName=myqueue")
.to("mock:result");
diff --git a/camel-nav.md b/camel-nav.md
new file mode 100644
index 0000000000000000000000000000000000000000..e3dd8bc41bea14cb13b11e367eb2a88c36dbb0ce
--- /dev/null
+++ b/camel-nav.md
@@ -0,0 +1,220 @@
+# Nav.md
+
+- [Enterprise Integration
+ Patterns](#eips:enterprise-integration-patterns.adoc)
+
+ - [Aggregate](#aggregate-eip.adoc)
+
+ - [BatchConfig](#batchConfig-eip.adoc)
+
+ - [Bean](#bean-eip.adoc)
+
+ - [Change Data Capture](#change-data-capture.adoc)
+
+ - [Channel Adapter](#channel-adapter.adoc)
+
+ - [Choice](#choice-eip.adoc)
+
+ - [Circuit Breaker](#circuitBreaker-eip.adoc)
+
+ - [Claim Check](#claimCheck-eip.adoc)
+
+ - [Competing Consumers](#competing-consumers.adoc)
+
+ - [Composed Message Processor](#composed-message-processor.adoc)
+
+ - [Content Enricher](#content-enricher.adoc)
+
+ - [Content Filter](#content-filter-eip.adoc)
+
+ - [Convert Body To](#convertBodyTo-eip.adoc)
+
+ - [Convert Header To](#convertHeaderTo-eip.adoc)
+
+ - [Convert Variable To](#convertVariableTo-eip.adoc)
+
+ - [Correlation Identifier](#correlation-identifier.adoc)
+
+ - [Custom Load Balancer](#customLoadBalancer-eip.adoc)
+
+ - [Dead Letter Channel](#dead-letter-channel.adoc)
+
+ - [Delay](#delay-eip.adoc)
+
+ - [Durable Subscriber](#durable-subscriber.adoc)
+
+ - [Dynamic Router](#dynamicRouter-eip.adoc)
+
+ - [Enrich](#enrich-eip.adoc)
+
+ - [Event Driven Consumer](#eventDrivenConsumer-eip.adoc)
+
+ - [Event Message](#event-message.adoc)
+
+ - [Failover Load Balancer](#failoverLoadBalancer-eip.adoc)
+
+ - [Fault Tolerance
+ Configuration](#faultToleranceConfiguration-eip.adoc)
+
+ - [Fault Tolerance EIP](#fault-tolerance-eip.adoc)
+
+ - [Filter](#filter-eip.adoc)
+
+ - [From](#from-eip.adoc)
+
+ - [Guaranteed Delivery](#guaranteed-delivery.adoc)
+
+ - [Idempotent Consumer](#idempotentConsumer-eip.adoc)
+
+ - [Intercept](#intercept.adoc)
+
+ - [Kamelet](#kamelet-eip.adoc)
+
+ - [Load Balance](#loadBalance-eip.adoc)
+
+ - [Logger](#log-eip.adoc)
+
+ - [Loop](#loop-eip.adoc)
+
+ - [Marshal](#marshal-eip.adoc)
+
+ - [Message](#message.adoc)
+
+ - [Message Broker](#message-broker.adoc)
+
+ - [Message Bus](#message-bus.adoc)
+
+ - [Message Channel](#message-channel.adoc)
+
+ - [Message Dispatcher](#message-dispatcher.adoc)
+
+ - [Message Endpoint](#message-endpoint.adoc)
+
+ - [Message Expiration](#message-expiration.adoc)
+
+ - [Message History](#message-history.adoc)
+
+ - [Message Router](#message-router.adoc)
+
+ - [Message Translator](#message-translator.adoc)
+
+ - [Messaging Bridge](#messaging-bridge.adoc)
+
+ - [Messaging Gateway](#messaging-gateway.adoc)
+
+ - [Messaging Mapper](#messaging-mapper.adoc)
+
+ - [Multicast](#multicast-eip.adoc)
+
+ - [Normalizer](#normalizer.adoc)
+
+ - [On Fallback](#onFallback-eip.adoc)
+
+ - [Pipeline](#pipeline-eip.adoc)
+
+ - [Point to Point Channel](#point-to-point-channel.adoc)
+
+ - [Poll](#poll-eip.adoc)
+
+ - [Poll Enrich](#pollEnrich-eip.adoc)
+
+ - [Polling Consumer](#polling-consumer.adoc)
+
+ - [Process](#process-eip.adoc)
+
+ - [Process Manager](#process-manager.adoc)
+
+ - [Publish Subscribe Channel](#publish-subscribe-channel.adoc)
+
+ - [Random Load Balancer](#randomLoadBalancer-eip.adoc)
+
+ - [Recipient List](#recipientList-eip.adoc)
+
+ - [Remove Header](#removeHeader-eip.adoc)
+
+ - [Remove Headers](#removeHeaders-eip.adoc)
+
+ - [Remove Properties](#removeProperties-eip.adoc)
+
+ - [Remove Property](#removeProperty-eip.adoc)
+
+ - [Remove Variable](#removeVariable-eip.adoc)
+
+ - [Request Reply](#requestReply-eip.adoc)
+
+ - [Resequence](#resequence-eip.adoc)
+
+ - [Resilience4j
+ Configuration](#resilience4jConfiguration-eip.adoc)
+
+ - [Resilience4j EIP](#resilience4j-eip.adoc)
+
+ - [Resume Strategies](#resume-strategies.adoc)
+
+ - [Return Address](#return-address.adoc)
+
+ - [Rollback](#rollback-eip.adoc)
+
+ - [Round Robin Load Balancer](#roundRobinLoadBalancer-eip.adoc)
+
+ - [Routing Slip](#routingSlip-eip.adoc)
+
+ - [Saga](#saga-eip.adoc)
+
+ - [Sample](#sample-eip.adoc)
+
+ - [Scatter-Gather](#scatter-gather.adoc)
+
+ - [Script](#script-eip.adoc)
+
+ - [Selective Consumer](#selective-consumer.adoc)
+
+ - [Service Activator](#service-activator.adoc)
+
+ - [Service Call](#serviceCall-eip.adoc)
+
+ - [Set Body](#setBody-eip.adoc)
+
+ - [Set Header](#setHeader-eip.adoc)
+
+ - [Set Headers](#setHeaders-eip.adoc)
+
+ - [Set Property](#setProperty-eip.adoc)
+
+ - [Set Variable](#setVariable-eip.adoc)
+
+ - [Set Variables](#setVariables-eip.adoc)
+
+ - [Sort](#sort-eip.adoc)
+
+ - [Split](#split-eip.adoc)
+
+ - [Step](#step-eip.adoc)
+
+ - [Sticky Load Balancer](#stickyLoadBalancer-eip.adoc)
+
+ - [Stop](#stop-eip.adoc)
+
+ - [StreamConfig](#streamConfig-eip.adoc)
+
+ - [Threads](#threads-eip.adoc)
+
+ - [Throttle](#throttle-eip.adoc)
+
+ - [To](#to-eip.adoc)
+
+ - [To D](#toD-eip.adoc)
+
+ - [Topic Load Balancer](#topicLoadBalancer-eip.adoc)
+
+ - [Transactional Client](#transactional-client.adoc)
+
+ - [Transform](#transform-eip.adoc)
+
+ - [Unmarshal](#unmarshal-eip.adoc)
+
+ - [Validate](#validate-eip.adoc)
+
+ - [Weighted Load Balancer](#weightedLoadBalancer-eip.adoc)
+
+ - [Wire Tap](#wireTap-eip.adoc)
diff --git a/camel-netty-http.md b/camel-netty-http.md
index 0d3c8873a1ab7486788703d4ce4535e4aecdf5c6..45d1d1ab3062101d8e6a58728e79855588f1fe08 100644
--- a/camel-netty-http.md
+++ b/camel-netty-http.md
@@ -65,7 +65,9 @@ options from [Netty](#netty-component.adoc) are not applicable when
using this Netty HTTP component, such as options related to UDP
transport.
-# Access to Netty types
+# Usage
+
+## Access to Netty types
This component uses the
`org.apache.camel.component.netty.http.NettyHttpMessage` as the message
@@ -75,26 +77,22 @@ Mind that the original response may not be accessible at all times.
io.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest();
-# Using HTTP Basic Authentication
+## Using HTTP Basic Authentication
The Netty HTTP consumer supports HTTP basic authentication by specifying
the security realm name to use, as shown below
-
+
...
The realm name is mandatory to enable basic authentication. By default,
the JAAS based authenticator is used, which will use the realm name
-specified (karaf in the example above) and use the JAAS realm and the
-`JAAS \{{LoginModule}}s` of this realm for authentication.
-
-End user of Apache Karaf / ServiceMix has a karaf realm out of the box,
-and hence why the example above would work out of the box in these
-containers.
+specified (`someRealm` in the example above) and use the JAAS realm
+and the JAAS `LoginModule`s of this realm for authentication.
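On the wire, HTTP basic authentication is simply a Base64-encoded `user:password` pair carried in the `Authorization` header. A plain-Java sketch of the header value a client would send (the credentials are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeaderDemo {
    // Builds the Authorization header value for HTTP basic authentication
    static String basicAuthHeader(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("scott", "tiger"));
    }
}
```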
-## Specifying ACL on web resources
+### Specifying ACL on web resources
The `org.apache.camel.component.netty.http.SecurityConstraint` allows
you to define constraints on web resources. And the
@@ -133,13 +131,13 @@ The constraint above is defined so that
- access to /guest/\* requires the admin or guest role
- access to /public/\* is an exclusion that means no authentication is
- needed, and is therefore public for everyone without logging in
+ necessary, and is therefore public for everyone without logging in
To use this constraint, we just need to refer to the bean id as shown
below:
-
+
...
@@ -258,11 +256,6 @@ And in the routes you refer to this option as shown below
...
-## Reusing the same server bootstrap configuration with multiple routes across multiple bundles in OSGi container
-
-See the Netty HTTP Server Example for more details and example how to do
-that.
-
## Implementing a reverse proxy
Netty HTTP component can act as a reverse proxy, in that case
diff --git a/camel-netty.md b/camel-netty.md
index ca9a2b49c62a18369f6acdf64c5f5c1caa52b050..17ae4ce224adc5fd36ef197a1dc3404226097412 100644
--- a/camel-netty.md
+++ b/camel-netty.md
@@ -43,7 +43,9 @@ The URI scheme for a netty component is as follows
This component supports producer and consumer endpoints for both TCP and
UDP.
-# Registry-based Options
+# Usage
+
+## Registry-based Options
Codec Handlers and SSL Keystores can be enlisted in the Registry, such
as in the Spring XML file. The values that could be passed in are the
@@ -55,39 +57,39 @@ following:
-
+
-
+
passphrase
password setting to use to
encrypt/decrypt payloads sent using SSH
-
+
keyStoreFormat
keystore format to be used for payload
encryption. Defaults to JKS if not set
-
+
securityProvider
Security provider to be used for
payload encryption. Defaults to SunX509 if not
set.
-
+
keyStoreFile
deprecated: Client
side certificate keystore to be used for encryption
-
+
trustStoreFile
deprecated: Server
side certificate keystore to be used for encryption
-
+
keyStoreResource
Client side certificate keystore to be
used for encryption. It is loaded by default from classpath, but you can
@@ -95,7 +97,7 @@ prefix with "classpath:", "file:", or
"http:" to load the resource from different
systems.
-
+
trustStoreResource
Server side certificate keystore to be
@@ -104,33 +106,33 @@ prefix with "classpath:", "file:", or
"http:" to load the resource from different
systems.
-
+
sslHandler
Reference to a class that could be used
to return an SSL Handler
-
+
encoder
A custom ChannelHandler
class that can be used to perform special marshalling of outbound
payloads. Must override
io.netty.channel.ChannelInboundHandlerAdapter.
-
+
encoders
A list of encoders to be used. You can
use a string that has values separated by comma, and have the values be
looked up in the Registry. Remember to prefix the value with
# so Camel knows it should look up.
-
+
decoder
A custom ChannelHandler
class that can be used to perform special marshalling of inbound
payloads. Must override
io.netty.channel.ChannelOutboundHandlerAdapter.
-
+
decoders
A list of decoders to be used. You can
use a string that has values separated by comma, and have the values be
@@ -142,7 +144,7 @@ looked up in the Registry. Remember to prefix the value with
Read below about using non-shareable encoders/decoders.
-## Using non-shareable encoders or decoders
+### Using non-shareable encoders or decoders
If your encoders or decoders are not shareable (e.g., they don’t have
the @Shareable class annotation), then your encoder/decoder must
@@ -156,9 +158,9 @@ The Netty component offers a
`org.apache.camel.component.netty.ChannelHandlerFactories` factory
class, that has a number of commonly used methods.
-# Sending Messages to/from a Netty endpoint
+## Sending Messages to/from a Netty endpoint
-## Netty Producer
+### Netty Producer
In Producer mode, the component provides the ability to send payloads to
a socket endpoint using either TCP or UDP protocols (with optional SSL
@@ -167,7 +169,7 @@ support).
The producer mode supports both one-way and request-response based
operations.
-## Netty Consumer
+### Netty Consumer
In Consumer mode, the component provides the ability to:
@@ -182,7 +184,7 @@ In Consumer mode, the component provides the ability to:
The consumer mode supports both one-way and request-response based
operations.
-## Using Multiple Codecs
+### Using Multiple Codecs
In certain cases, it may be necessary to add chains of encoders and
decoders to the netty pipeline. To add multiple codecs to a Camel netty
@@ -277,7 +279,7 @@ XML
-# Closing Channel When Complete
+## Closing Channel When Complete
When acting as a server, you sometimes want to close the channel when,
for example, a client conversation is finished. You can do this by simply
@@ -303,7 +305,7 @@ written the bye message back to the client:
Adding custom channel pipeline factories to gain complete control over a
created pipeline
-# Custom pipeline
+## Custom pipeline
Custom channel pipelines provide complete control to the user over the
handler/interceptor chain by inserting custom handler(s), encoder(s) &
@@ -330,7 +332,7 @@ A custom pipeline factory must be constructed as follows
The example below shows how `ServerInitializerFactory` factory may be
created
-## Using custom pipeline factory
+### Using custom pipeline factory
public class SampleServerInitializerFactory extends ServerInitializerFactory {
private int maxLineSize = 1024;
@@ -369,7 +371,7 @@ and instantiated/utilized on a Camel route in the following way
}
});
-# Reusing Netty boss and worker thread pools
+## Reusing Netty boss and worker thread pools
Netty has two kinds of thread pools: boss and worker. By default, each
Netty consumer and producer has its own private thread pools. If you want
@@ -415,7 +417,7 @@ And if we have another route, we can refer to the shared worker pool:
And so forth.
-# Multiplexing concurrent messages over a single connection with request/reply
+## Multiplexing concurrent messages over a single connection with request/reply
When using Netty for request/reply messaging via the netty producer,
then by default, each message is sent via a non-shared connection
@@ -446,7 +448,7 @@ You can find an example with the Apache Camel source code in the
examples directory under the `camel-example-netty-custom-correlation`
directory.
-# Native transport
+## Native transport
To enable native transport, you need to add additional dependency for
epoll or kqueue depending on your OS and CPU arch. To make it easier add
diff --git a/camel-nitrite.md b/camel-nitrite.md
index 8f22dd34ea65a00568a89f941d4e5727d4c609f9..c80b4c961fa69d155f58b1336996c6781a80f70b 100644
--- a/camel-nitrite.md
+++ b/camel-nitrite.md
@@ -17,7 +17,9 @@ for this component.
-# Producer operations
+# Usage
+
+## Producer operations
The following Operations are available to specify as
`NitriteConstants.OPERATION` when producing to Nitrite.
@@ -30,7 +32,7 @@ The following Operations are available to specify as
-
+
-
+
FindCollectionOperation
collection
@@ -47,7 +49,7 @@ style="text-align: left;">Filter(optional), FindOptions(optional)
Find Documents in collection by Filter.
If not specified, returns all documents
-
+
RemoveCollectionOperation
collection
@@ -56,7 +58,7 @@ style="text-align: left;">Filter(required), RemoveOptions(optional)Remove documents matching
Filter
-
+
UpdateCollectionOperation
collection
@@ -65,7 +67,7 @@ style="text-align: left;">Filter(required), UpdateOptions(optional), Do
Update documents matching Filter. If
Document not specified, the message body is used
-
+
CreateIndexOperation
common
@@ -74,7 +76,7 @@ style="text-align: left;">field:String(required), IndexOptions(required
Create index with IndexOptions on
field
-
+
DropIndexOperation
common
@@ -82,7 +84,7 @@ style="text-align: left;">DropIndexOperation
style="text-align: left;">field:String(required)
Drop index on field
-
+
ExportDatabaseOperation
common
@@ -91,20 +93,20 @@ style="text-align: left;">ExportOptions(optional)
Export full database to JSON and stores
result in body - see Nitrite docs for details about format
-
+
GetAttributesOperation
common
Get attributes of a collection
-
+
GetByIdOperation
common
NitriteId
Get Document by _id
-
+
ImportDatabaseOperation
common
@@ -112,7 +114,7 @@ style="text-align: left;">ImportDatabaseOperation
Import the full database from JSON in
body
-
+
InsertOperation
common
payload(optional)
@@ -120,7 +122,7 @@ body
to ObjectRepository. If parameter is not specified, inserts message
body
-
+
ListIndicesOperation
common
@@ -128,7 +130,7 @@ style="text-align: left;">ListIndicesOperation
List indexes in collection and stores
List<Index> in message body
-
+
RebuildIndexOperation
common
@@ -137,7 +139,7 @@ style="text-align: left;">field (required), async (optional)
Rebuild existing index on
field
-
+
UpdateOperation
common
payload(optional)
@@ -145,7 +147,7 @@ field
in ObjectRepository. If parameter is not specified, updates document
from message body
-
+
UpsertOperation
common
payload(optional)
@@ -153,7 +155,7 @@ from message body
collection or object in ObjectRepository. If parameter is not specified,
updates document from message body
-
+
FindRepositoryOperation
repository
@@ -163,7 +165,7 @@ style="text-align: left;">ObjectFilter(optional), FindOptions(optional)
ObjectFilter. If not specified, returns all objects in
repository
-
+
RemoveRepositoryOperation
repository
@@ -172,7 +174,7 @@ style="text-align: left;">ObjectFilter(required), RemoveOptions(optiona
Remove objects in ObjectRepository
matched by ObjectFilter
-
+
UpdateRepositoryOperation
repository
diff --git a/camel-normalizer.md b/camel-normalizer.md
new file mode 100644
index 0000000000000000000000000000000000000000..e74f5a0169b1aa2ff0ec3eddfd8f306c3254e118
--- /dev/null
+++ b/camel-normalizer.md
@@ -0,0 +1,81 @@
+# Normalizer.md
+
+Camel supports the
+[Normalizer](https://www.enterpriseintegrationpatterns.com/patterns/messaging/Normalizer.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+The normalizer pattern is used to process messages that are semantically
+equivalent, but arrive in different formats. The normalizer transforms
+the incoming messages into a common format.
+
+
+
+
+
+In Apache Camel, you can implement the normalizer pattern by combining a
+[Content-Based Router](#choice-eip.adoc), which detects the incoming
+message’s format, with a collection of different [Message
+Translators](#message-translator.adoc), which transform the different
+incoming formats into a common format.
+
+# Example
+
+This example shows a Message Normalizer that converts two types of XML
+messages into a common format. Messages in this common format are then
+routed.
+
+Java
+// we need to normalize two types of incoming messages
+from("direct:start")
+.choice()
+.when().xpath("/employee").to("bean:normalizer?method=employeeToPerson")
+.when().xpath("/customer").to("bean:normalizer?method=customerToPerson")
+.end()
+.to("mock:result");
+
+XML
+
+    <route>
+      <from uri="direct:start"/>
+      <choice>
+        <when>
+          <xpath>/employee</xpath>
+          <to uri="bean:normalizer?method=employeeToPerson"/>
+        </when>
+        <when>
+          <xpath>/customer</xpath>
+          <to uri="bean:normalizer?method=customerToPerson"/>
+        </when>
+      </choice>
+      <to uri="mock:result"/>
+    </route>
+
+In this case, we’re using a Java [Bean](#ROOT:bean-component.adoc) as
+the normalizer.
+
+The class looks like this:
+
+ // Java
+ public class MyNormalizer {
+
+ public void employeeToPerson(Exchange exchange, @XPath("/employee/name/text()") String name) {
+ exchange.getMessage().setBody(createPerson(name));
+ }
+
+ public void customerToPerson(Exchange exchange, @XPath("/customer/@name") String name) {
+ exchange.getMessage().setBody(createPerson(name));
+ }
+
+    private String createPerson(String name) {
+        return "<person name=\"" + name + "\"/>";
+    }
+ }
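A runnable plain-Java sketch of the normalization step: both converters funnel the extracted name into one common `<person>` representation (the exact format is illustrative):

```java
public class NormalizerDemo {
    // Both converters produce the same common format
    static String createPerson(String name) {
        return "<person name=\"" + name + "\"/>";
    }

    public static void main(String[] args) {
        // name extracted from /employee/name/text() or /customer/@name
        System.out.println(createPerson("Claus"));
    }
}
```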
+
+In case there are many incoming formats, the [Content-Based
+Router](#choice-eip.adoc) may end up with too many choices. In that
+situation, an alternative is to use [Dynamic To](#toD-eip.adoc) to
+compute a [Bean](#ROOT:bean-component.adoc) endpoint to call, which then
+acts as the [Message Translator](#message-translator.adoc).
diff --git a/camel-oaipmh.md b/camel-oaipmh.md
index 4bc5c249036c37eeeaf684ed13bf834109576f2e..0b0c6ebf983839b24c1748a8355b11daca9c0a08 100644
--- a/camel-oaipmh.md
+++ b/camel-oaipmh.md
@@ -26,7 +26,7 @@ for this component:
The OAI-PMH component supports both consumer and producer endpoints.
-# Producer Example
+## Producer Example
The following is a basic example of how to send a request to an OAI-PMH
Server.
@@ -38,7 +38,7 @@ in Java DSL
The result is a set of pages in XML format with all the records of the
consulted repository.
-# Consumer Example
+## Consumer Example
The following is a basic example of how to receive all messages from an
OAI-PMH Server. In Java DSL
diff --git a/camel-observation.md b/camel-observation.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8e0a2928cc9d47ef8a3ea714e2bf7275fbd0561
--- /dev/null
+++ b/camel-observation.md
@@ -0,0 +1,101 @@
+# Observation.md
+
+**Since Camel 3.21**
+
+The Micrometer Observation component is used for performing
+observability of incoming and outgoing Camel messages using [Micrometer
+Observation](https://micrometer.io/docs/observation).
+
+By configuring the `ObservationRegistry` you can add behaviour to your
+observations such as metrics (e.g., via `Micrometer`) or tracing (e.g.,
+via `OpenTelemetry` or `Brave`) or any custom behaviour.
+
+Events are captured for incoming and outgoing messages being sent
+to/from Camel.
+
+# Configuration
+
+The configuration properties for Micrometer Observation are:
+
+| Option | Default | Description |
+|---|---|---|
+| excludePatterns |  | Sets exclude pattern(s) that will disable tracing for Camel messages that match the pattern. The content is a `Set<String>` where the key is a pattern. The pattern uses the rules from Intercept. |
+| encoding | false | Sets whether the header keys need to be encoded (connector specific) or not. The value is a boolean. Dashes, for instance, need to be encoded for JMS property keys. |
+
+## Explicit configuration
+
+Include the `camel-observation` component in your POM, along with any
+specific dependencies associated with the chosen Micrometer Observation
+backend.
+
+To explicitly configure the tracer, instantiate the
+`MicrometerObservationTracer` and initialize the Camel context. You can
+optionally specify a `Tracer`, or alternatively it can be implicitly
+discovered using the `Registry`:
+
+ ObservationRegistry observationRegistry = ObservationRegistry.create();
+ MicrometerObservationTracer micrometerObservationTracer = new MicrometerObservationTracer();
+
+ // This component comes from Micrometer Core - it's used for creation of metrics
+ MeterRegistry meterRegistry = new SimpleMeterRegistry();
+
+ // This component comes from Micrometer Tracing - it's an abstraction over tracers
+ io.micrometer.tracing.Tracer otelTracer = otelTracer();
+ // This component comes from Micrometer Tracing - an example of B3 header propagation via OpenTelemetry
+ OtelPropagator otelPropagator = new OtelPropagator(ContextPropagators.create(B3Propagator.injectingSingleHeader()), tracer);
+
+ // Configuration ObservationRegistry for metrics
+ observationRegistry.observationConfig().observationHandler(new DefaultMeterObservationHandler(meterRegistry));
+
+ // Configuration ObservationRegistry for tracing
+ observationRegistry.observationConfig().observationHandler(new ObservationHandler.FirstMatchingCompositeObservationHandler(new CamelPropagatingSenderTracingObservationHandler<>(otelTracer, otelPropagator), new CamelPropagatingReceiverTracingObservationHandler<>(otelTracer, otelPropagator), new CamelDefaultTracingObservationHandler(otelTracer)));
+
+ // Both components ObservationRegistry and MeterRegistry should be set manually, or they will be resolved from CamelContext if present
+ micrometerObservationTracer.setObservationRegistry(observationRegistry);
+ micrometerObservationTracer.setTracer(otelTracer);
+
+ // Initialize the MicrometerObservationTracer
+ micrometerObservationTracer.init(context);
+
+# Spring Boot
+
+If you are using Spring Boot, then you can add the
+`camel-observation-starter` dependency, and turn on Micrometer
+Observation by annotating the main class with `@CamelObservation`.
+
+The `MicrometerObservationTracer` will be implicitly obtained from the
+Camel context’s `Registry`, unless a `MicrometerObservationTracer` bean
+has been defined by the application.
+
+# MDC Logging
+
+When MDC Logging is enabled for the active Camel context, the Trace ID
+and Span ID will be added to and removed from the MDC for each route;
+the keys are `trace_id` and `span_id`, respectively.
diff --git a/camel-ognl-language.md b/camel-ognl-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..854c60422be9e2b8bd8d1fdde8a2d3d8b0ad72da
--- /dev/null
+++ b/camel-ognl-language.md
@@ -0,0 +1,155 @@
+# Ognl-language.md
+
+**Since Camel 1.1**
+
+Camel allows [OGNL](https://en.wikipedia.org/wiki/OGNL), backed by
+[Apache Commons OGNL](http://commons.apache.org/proper/commons-ognl/),
+to be used as an [Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) in Camel routes.
+
+For example, you can use OGNL in a [Predicate](#manual::predicate.adoc)
+with the [Content-Based Router](#eips:choice-eip.adoc) EIP.
+
+You can use OGNL dot notation to invoke operations. If, for instance,
+you have a body that contains a POJO with a `getFamilyName` method, then
+you can construct the syntax as follows:
+
+ request.body.familyName
+
+Or use similar syntax as in Java:
+
+ getRequest().getBody().getFamilyName()
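The OGNL dot notation above is shorthand for the same getter chain you would write in Java. A plain-Java sketch of the equivalence (the `Person` POJO is hypothetical):

```java
public class OgnlEquivalenceDemo {
    static class Person {
        private final String familyName;
        Person(String familyName) { this.familyName = familyName; }
        public String getFamilyName() { return familyName; }
    }

    public static void main(String[] args) {
        Person body = new Person("Smith");
        // OGNL "request.body.familyName" resolves to getBody().getFamilyName()
        System.out.println(body.getFamilyName());
    }
}
```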
+
+# OGNL Options
+
+# Variables
+
+| Variable | Type | Description |
+|---|---|---|
+| this | Exchange | the Exchange is the root object |
+| context | CamelContext | the CamelContext |
+| exchange | Exchange | the Exchange |
+| exchangeId | String | the exchange id |
+| exception | Throwable | the Exchange exception (if any) |
+| request | Message | the message |
+| message | Message | the message |
+| headers | Map | the message headers |
+| header(name) | Object | the message header by the given name |
+| header(name, type) | Type | the message header by the given name as the given type |
+| properties | Map | the exchange properties |
+| property(name) | Object | the exchange property by the given name |
+| property(name, type) | Type | the exchange property by the given name as the given type |
+
+# Example
+
+For example, you could use OGNL inside a [Message
+Filter](#eips:filter-eip.adoc)
+
+ from("seda:foo")
+ .filter().ognl("request.headers.foo == 'bar'")
+ .to("seda:bar");
+
+And in XML:
+
+    <route>
+      <from uri="seda:foo"/>
+      <filter>
+        <ognl>request.headers.foo == 'bar'</ognl>
+        <to uri="seda:bar"/>
+      </filter>
+    </route>
+
+# Loading script from external resource
+
+You can externalize the script and have Apache Camel load it from a
+resource such as `"classpath:"`, `"file:"`, or `"http:"`. This is done
+using the following syntax: `"resource:scheme:location"`, e.g., to refer
+to a file on the classpath you can do:
+
+ .setHeader("myHeader").ognl("resource:classpath:myognl.txt")
+
+# Dependencies
+
+To use OGNL in your Camel routes, you need to add the dependency on
+**camel-ognl**, which implements the OGNL language.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-ognl</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-olingo2.md b/camel-olingo2.md
index 08748084c23c69fbeb5bbdda93e8864fa4f90402..c4fdfd7dfff56be95df5d85d9037d7db8bd8518f 100644
--- a/camel-olingo2.md
+++ b/camel-olingo2.md
@@ -37,7 +37,9 @@ for this component:
olingo2://endpoint/?[options]
-# Endpoint HTTP Headers
+# Usage
+
+## Endpoint HTTP Headers
The component level configuration property **httpHeaders** supplies
static HTTP header information. However, some systems require dynamic
@@ -50,7 +52,7 @@ and the response headers will be returned in a
**`CamelOlingo2.responseHttpHeaders`** property. Both properties are of
the type `java.util.Map`.
-# OData Resource Type Mapping
+## OData Resource Type Mapping
The result of **read** endpoint and data type of **data** option depends
on the OData resource being queried, created or modified.
@@ -62,7 +64,7 @@ on the OData resource being queried, created or modified.
-
+
-
+
Entity data model
$metadata
org.apache.olingo.odata2.api.edm.Edm
-
+
Service document
/
org.apache.olingo.odata2.api.servicedocument.ServiceDocument
-
+
OData feed
<entity-set>
org.apache.olingo.odata2.api.ep.feed.ODataFeed
-
+
OData entry
<entity-set>(<key-predicate>)
@@ -97,28 +99,28 @@ style="text-align: left;">org.apache.olingo.odata2.api.ep.entry.ODataEn
for Out body (response) java.util.Map<String, Object>
for In body (request)
-
+
Simple property
<entity-set>(<key-predicate>)/<simple-property>
The appropriate Java data type as
described by Olingo EdmProperty
-
+
Simple property value
<entity-set>(<key-predicate>)/<simple-property>/$value
The appropriate Java data type as
described by Olingo EdmProperty
-
+
Complex property
<entity-set>(<key-predicate>)/<complex-property>
java.util.Map<String,
Object>
-
+
Zero or one association link
<entity-set>(<key-predicate>/$link/<one-to-one-entity-set-property>
@@ -126,7 +128,7 @@ style="text-align: left;"><entity-set>(<key-predicate>/$link/<
java.util.Map<String, Object> with key property names
and values for request
-
+
Zero or many association links
<entity-set>(<key-predicate>/$link/<one-to-many-entity-set-property>
@@ -136,7 +138,7 @@ for response
java.util.List<java.util.Map<String, Object>>
containing a list of key property names and values for request
-
+
Count
<resource-uri>/$count
java.lang.Long
@@ -144,7 +146,7 @@ containing a list of key property names and values for request
-# Samples
+# Examples
The following route reads top 5 entries from the Manufacturer feed
ordered by ascending Name property.
diff --git a/camel-olingo4.md b/camel-olingo4.md
index f164dc24e1dbc321c8075d12bc9810b11719e515..f1cfba53391948487c47c7b9b8b8703e43d94f74 100644
--- a/camel-olingo4.md
+++ b/camel-olingo4.md
@@ -35,7 +35,9 @@ for this component:
olingo4://endpoint/?[options]
-# Endpoint HTTP Headers
+# Usage
+
+## Endpoint HTTP Headers
The component level configuration property **httpHeaders** supplies
static HTTP header information. However, some systems require dynamic
@@ -48,7 +50,7 @@ and the response headers will be returned in a
**`CamelOlingo4.responseHttpHeaders`** property. Both properties are of
the type **`java.util.Map`**.
-# OData Resource Type Mapping
+## OData Resource Type Mapping
The result of **read** endpoint and data type of **data** option depends
on the OData resource being queried, created or modified.
@@ -60,7 +62,7 @@ on the OData resource being queried, created or modified.
-
+
-
+
Entity data model
$metadata
org.apache.olingo.commons.api.edm.Edm
-
+
Service document
/
org.apache.olingo.client.api.domain.ClientServiceDocument
-
+
OData entity set
<entity-set>
org.apache.olingo.client.api.domain.ClientEntitySet
-
+
OData entity
<entity-set>(<key-predicate>)
@@ -95,28 +97,28 @@ style="text-align: left;">org.apache.olingo.client.api.domain.ClientEnt
for Out body (response) java.util.Map<String, Object>
for In body (request)
-
+
Simple property
<entity-set>(<key-predicate>)/<simple-property>
org.apache.olingo.client.api.domain.ClientPrimitiveValue
-
+
Simple property value
<entity-set>(<key-predicate>)/<simple-property>/$value
org.apache.olingo.client.api.domain.ClientPrimitiveValue
-
+
Complex property
<entity-set>(<key-predicate>)/<complex-property>
org.apache.olingo.client.api.domain.ClientComplexValue
-
+
Count
<resource-uri>/$count
java.lang.Long
@@ -124,7 +126,7 @@ style="text-align: left;">org.apache.olingo.client.api.domain.ClientCom
-# Samples
+# Examples
The following route reads top 5 entries from the People entity ordered
by ascending FirstName property.
diff --git a/camel-onFallback-eip.md b/camel-onFallback-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..6a2fd3bb7f1c1d18ca989b148c187a610dcd002d
--- /dev/null
+++ b/camel-onFallback-eip.md
@@ -0,0 +1,27 @@
+# OnFallback-eip.md
+
+If you are using **onFallback**, then that is intended for local
+processing only, where you can do a message transformation or call a
+bean as the fallback.
+
+If you need to call an external service over the network, then you
+should use **onFallbackViaNetwork** that runs in another independent
+**HystrixCommand** that uses its own thread pool to not exhaust the
+first command.
+
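The contrast between **onFallback** and **onFallbackViaNetwork** can be sketched in plain Java (illustrative only — the names `withLocalFallback` and `withNetworkFallback` are hypothetical, not Camel API): a local fallback runs inline on the caller's thread, while a networked fallback runs on its own thread pool so it cannot exhaust the primary command's pool.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class FallbackSketch {

    // Dedicated pool for network fallbacks, separate from the primary command's pool
    static final ExecutorService FALLBACK_POOL = Executors.newFixedThreadPool(2);

    // onFallback-style: cheap, in-process fallback (message transformation, bean call)
    static <T> T withLocalFallback(Supplier<T> primary, Supplier<T> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            return fallback.get();
        }
    }

    // onFallbackViaNetwork-style: fallback submitted to an independent thread pool
    static <T> T withNetworkFallback(Supplier<T> primary, Supplier<T> fallback) throws Exception {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            return FALLBACK_POOL.submit((Callable<T>) fallback::get).get(5, TimeUnit.SECONDS);
        }
    }
}
```
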
+# Options
+
+# Exchange properties
+
+# Using fallback
+
+The **onFallback** is used by [Circuit
+Breaker](#circuitBreaker-eip.adoc) EIPs to execute a fallback route. For
+examples of how to use this, see the various Circuit Breaker
+implementations:
+
+- [FaultTolerance EIP](#fault-tolerance-eip.adoc) - MicroProfile Fault
+ Tolerance Circuit Breaker
+
+- [Resilience4j EIP](#resilience4j-eip.adoc) - Resilience4j Circuit
+ Breaker
diff --git a/camel-openapi-java.md b/camel-openapi-java.md
new file mode 100644
index 0000000000000000000000000000000000000000..2218f847c16cf25f80e3c397d72b78c893b71951
--- /dev/null
+++ b/camel-openapi-java.md
@@ -0,0 +1,269 @@
+# Openapi-java.md
+
+**Since Camel 3.1**
+
+The Rest DSL can be integrated with the `camel-openapi-java` module
+which is used for exposing the REST services and their APIs using
+[OpenApi](https://www.openapis.org/).
+
+Only OpenAPI spec version 3.x is supported. You cannot use the old
+Swagger 2.0 spec.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-openapi-java</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+The camel-openapi-java module can be used from the REST components
+(without the need for servlet).
+
+# Using OpenApi in rest-dsl
+
+You can enable the OpenApi API from the rest-dsl by configuring the
+`apiContextPath` dsl as shown below:
+
+ public class UserRouteBuilder extends RouteBuilder {
+ @Override
+ public void configure() throws Exception {
+        // configure we want to use netty-http as the component for the rest DSL,
+ // and we enable json binding mode
+ restConfiguration().component("netty-http").bindingMode(RestBindingMode.json)
+ // and output using pretty print
+ .dataFormatProperty("prettyPrint", "true")
+ // setup context path and port number that netty will use
+ .contextPath("/").port(8080)
+ // add OpenApi api-doc out of the box
+ .apiContextPath("/api-doc")
+ .apiProperty("api.title", "User API").apiProperty("api.version", "1.2.3")
+ // and enable CORS
+ .apiProperty("cors", "true");
+
+ // this user REST service is json only
+ rest("/user").description("User rest service")
+ .consumes("application/json").produces("application/json")
+ .get("/{id}").description("Find user by id").outType(User.class)
+            .param().name("id").type(RestParamType.path).description("The id of the user to get").dataType("int").endParam()
+ .to("bean:userService?method=getUser(${header.id})")
+ .put().description("Updates or create a user").type(User.class)
+            .param().name("body").type(RestParamType.body).description("The user to update or create").endParam()
+ .to("bean:userService?method=updateUser")
+ .get("/findAll").description("Find all users").outType(User[].class)
+ .to("bean:userService?method=listUsers");
+ }
+ }
+
+# Options
+
+The OpenApi module can be configured using the following options. To
+configure using a servlet, you use the init-param as shown above. When
+configuring directly in the rest-dsl, you use the appropriate method,
+such as `enableCORS`, `host`, or `contextPath`, in the dsl. The options
+with `api.xxx` are configured using the `apiProperty` dsl.
+
+|Option|Type|Description|
+|---|---|---|
+|cors|Boolean|Whether to enable CORS. Notice this only enables CORS for the api browser, and not the actual access to the REST services. The default is false.|
+|openapi.version|String|OpenApi spec version. Only spec version 3.x is supported. The default is 3.0.|
+|host|String|To set up the hostname. If not configured, camel-openapi-java will calculate the name as localhost based.|
+|schemes|String|The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http".|
+|base.path|String|**Required**: To set up the base path where the REST services are available. The path is relative (e.g., do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/base.path|
+|api.path|String|To set up the path where the API is available (e.g., /api-docs). The path is relative (e.g., do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/api.path. So using relative paths is much easier. See above for an example.|
+|api.version|String|The version of the API. The default is 0.0.0.|
+|api.title|String|The title of the application.|
+|api.description|String|A short description of the application.|
+|api.termsOfService|String|A URL to the Terms of Service of the API.|
+|api.contact.name|String|Name of person or organization to contact.|
+|api.contact.email|String|An email to be used for API-related correspondence.|
+|api.contact.url|String|A URL to a website for more contact information.|
+|api.license.name|String|The license name used for the API.|
+|api.license.url|String|A URL to the license used for the API.|
+|api.default.consumes|String|Comma-separated list of default media types when RestParamType.body is used without providing any .consumes() configuration. The default value is application/json. Note that this applies only to the generated OpenAPI document and not to the actual REST services.|
+|api.default.produces|String|Comma-separated list of default media types when outType is used without providing any .produces() configuration. The default value is application/json. Note that this applies only to the generated OpenAPI document and not to the actual REST services.|
+
+
+
+
+# Adding Security Definitions in API doc
+
+The Rest DSL supports declaring OpenApi `securityDefinitions` in the
+generated API document, as shown below:
+
+ rest("/user").tag("dude").description("User rest service")
+ // setup security definitions
+ .securityDefinitions()
+ .oauth2("petstore_auth").authorizationUrl("http://petstore.swagger.io/oauth/dialog").end()
+ .apiKey("api_key").withHeader("myHeader").end()
+ .end()
+ .consumes("application/json").produces("application/json")
+
+Here we have set up two security definitions:
+
+- OAuth2: with implicit authorization with the provided url
+
+- Api Key: using an api key that comes from HTTP header named
+ *myHeader*
+
+Then you need to specify on the rest operations which security to use by
+referring to their key (petstore\_auth or api\_key).
+
+ .get("/{id}/{date}").description("Find user by id and date").outType(User.class)
+ .security("api_key")
+
+ ...
+
+ .put().description("Updates or create a user").type(User.class)
+ .security("petstore_auth", "write:pets,read:pets")
+
+Here the get operation is using the Api Key security, and the put
+operation is using OAuth security with permitted scopes of read and
+write pets.
+
+# JSon or Yaml
+
+The camel-openapi-java module supports both JSon and Yaml out of the
+box. You can specify in the request url what you want by using `.json`
+or `.yaml` as suffix in the context-path. If none is specified, then the
+HTTP Accept header is used to detect if json or yaml can be accepted. If
+either both are accepted or none was set as accepted, then json is
+returned as the default format.
+
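The selection rules above can be sketched in plain Java (illustrative only, not the actual camel-openapi-java implementation):

```java
public class OpenApiDocFormat {

    // Resolve the output format from the request path suffix and Accept header.
    static String resolve(String path, String acceptHeader) {
        // 1. An explicit .json or .yaml suffix on the context-path wins
        if (path.endsWith(".json")) {
            return "json";
        }
        if (path.endsWith(".yaml")) {
            return "yaml";
        }
        // 2. Otherwise the HTTP Accept header decides
        if (acceptHeader != null) {
            boolean acceptsJson = acceptHeader.contains("application/json");
            boolean acceptsYaml = acceptHeader.contains("yaml");
            if (acceptsYaml && !acceptsJson) {
                return "yaml";
            }
        }
        // 3. Both accepted, or nothing specified: json is the default format
        return "json";
    }
}
```

For example, `GET /api-doc.yaml` would return YAML regardless of the Accept header, while a plain `GET /api-doc` falls back to content negotiation.
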
+# useXForwardHeaders and API URL resolution
+
+The OpenApi specification allows you to specify the host, port \& path
+that is serving the API. In OpenApi V2 this is done via the `host` field
+and in OpenAPI V3 it is part of the `servers` field.
+
+By default, the value for these fields is determined by `X-Forwarded`
+headers, `X-Forwarded-Host` \& `X-Forwarded-Proto`.
+
+This can be overridden by disabling the lookup of `X-Forwarded` headers
+and by specifying your own host, port \& scheme on the REST
+configuration.
+
+ restConfiguration().component("netty-http")
+ .useXForwardHeaders(false)
+        .apiProperty("schemes", "https")
+ .host("localhost")
+ .port(8080);
+
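The resolution order can be sketched in plain Java (the helper name `resolve` is hypothetical, not the camel-openapi-java API): X-Forwarded headers win when the lookup is enabled, otherwise the configured scheme, host, and port are used.

```java
import java.util.Map;

public class ServerUrlResolver {

    // Build the advertised server URL, preferring X-Forwarded headers when enabled.
    static String resolve(Map<String, String> headers, boolean useXForwardHeaders,
                          String scheme, String host, int port) {
        if (useXForwardHeaders) {
            String fwdHost = headers.get("X-Forwarded-Host");
            String fwdProto = headers.get("X-Forwarded-Proto");
            if (fwdHost != null) {
                return (fwdProto != null ? fwdProto : scheme) + "://" + fwdHost;
            }
        }
        // Fall back to the values from the REST configuration
        return scheme + "://" + host + ":" + port;
    }
}
```
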
+# Examples
+
+In the Apache Camel distribution, we ship the
+`camel-example-openapi-cdi` and
+`camel-example-spring-boot-rest-openapi-simple` examples, which
+demonstrate using this OpenApi component.
diff --git a/camel-openapi-validator.md b/camel-openapi-validator.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac67e5dd879aec43864418f4a8242e4d26afb993
--- /dev/null
+++ b/camel-openapi-validator.md
@@ -0,0 +1,19 @@
+# Openapi-validator.md
+
+**Since Camel 4.7**
+
+Camel comes with a default client request validator for the Camel Rest
+DSL.
+
+The `camel-openapi-validator` uses the third party [Atlassian Swagger
+Request
+Validator](https://bitbucket.org/atlassian/swagger-request-validator/src/master/)
+library instead as the client request validator. This library is a more
+extensive validator than the default validator from `camel-core`.
+
+This library does not work when running in `camel-jbang`.
+
+# Auto-detection from classpath
+
+To use this implementation, all you need to do is add the
+`camel-openapi-validator` dependency to the classpath.
diff --git a/camel-opensearch.md b/camel-opensearch.md
index 850a911d30c6f93a8e6da137ee38bc5c60a3a5c7..91667066dc7ca3bd3edb832a3071c0e16343f190 100644
--- a/camel-opensearch.md
+++ b/camel-opensearch.md
@@ -22,12 +22,14 @@ for this component:
opensearch://clusterName[?options]
-# Message Operations
+# Usage
-The following [https://opensearch.org/](https://opensearch.org/) operations are currently
-supported. Set an endpoint URI option or exchange header with a key of
-"operation" and a value set to one of the following. Some operations
-also require other parameters or the message body to be set.
+## Message Operations
+
+The following [OpenSearch](https://opensearch.org/) operations are
+currently supported. Set an endpoint URI option or exchange header with
+a key of "operation" and a value set to one of the following. Some
+operations also require other parameters or the message body to be set.
@@ -36,124 +38,114 @@ also require other parameters or the message body to be set.
-
+
-
-Index
-Map ,
-String , byte[] ,
-Reader , InputStream or
-IndexRequest.Builder content to index
+
+Index
+Map, String,
+byte[], Reader, InputStream or
+IndexRequest.Builder content to index
Adds content to an index and returns
the content’s indexId in the body. You can set the name of the target
index by setting the message header with the key "indexName". You can
set the indexId by setting the message header with the key
"indexId".
-
-GetById
-String or
-GetRequest.Builder index id of content to
-retrieve
+
+GetById
+String or
+GetRequest.Builder index id of content to retrieve
Retrieves the document corresponding to
the given index id and returns a GetResponse object in the body. You can
set the name of the target index by setting the message header with the
key "indexName". You can set the type of document by setting the message
header with the key "documentClass".
-
-Delete
-String or
-DeleteRequest.Builder index id of content to
+
+Delete
+String or
+DeleteRequest.Builder index id of content to
delete
Deletes the specified indexName and
returns a Result object in the body. You can set the name of the target
index by setting the message header with the key "indexName".
-
-DeleteIndex
-String or
-DeleteIndexRequest.Builder index name of the index to
+
+DeleteIndex
+String or
+DeleteIndexRequest.Builder index name of the index to
delete
Deletes the specified indexName and
returns a status code in the body. You can set the name of the target
index by setting the message header with the key "indexName".
-
-Bulk
-Iterable or
-BulkRequest.Builder of any type that is already
-accepted (DeleteOperation.Builder for delete operation,
-UpdateOperation.Builder for update operation, CreateOperation.Builder
-for create operation, byte[], InputStream, String, Reader, Map or any
-document type for index operation)
+
+Bulk Iterable
+or BulkRequest.Builder of any type that is already accepted
+(DeleteOperation.Builder for delete operation, UpdateOperation.Builder
+for update operation, CreateOperation.Builder for create operation,
+byte[], InputStream, String, Reader, Map or any document type for index
+operation)
Adds/Updates/Deletes content from/to an
index and returns a List<BulkResponseItem> object in the body. You
can set the name of the target index by setting the message header with
the key "indexName".
+Search
-
-Search
-Map ,
-String or
-SearchRequest.Builder
+
+Map, String
+or SearchRequest.Builder
Search the content with the map of
query string. You can set the name of the target index by setting the
message header with the key "indexName". You can set the number of hits
to return by setting the message header with the key "size". You can set
the starting document offset by setting the message header with the key
"from".
+MultiSearch
-
-MultiSearch
+
MsearchRequest.Builder
+style="text-align: left;">MsearchRequest.Builder
Multiple search in one
+MultiGet
-
-MultiGet
-Iterable<String>
-or MgetRequest.Builder the id of the document to
+
+Iterable<String> or
+MgetRequest.Builder the id of the document to
retrieve
Multiple get in one
You can set the name of the target index by setting the message
header with the key "indexName".
+Exists
-
-Exists
+
None
Checks whether the index exists or not
and returns a Boolean flag in the body.
You must set the name of the target index by setting the message
header with the key "indexName".
+Update
-
-Update
-byte[] ,
-InputStream , String ,
-Reader , Map or any document type
-content to update
+
+byte[],
+InputStream, String, Reader,
+Map or any document type content to update
Updates content to an index and returns
the content’s indexId in the body. You can set the name of the target
index by setting the message header with the key "indexName". You can
set the indexId by setting the message header with the key
"indexId".
-
-
Ping
-None
-Pings the OpenSearch cluster and
-returns true if the ping succeeded, false otherwise
-# Configure the component and enable basic authentication
+## Configure the component and enable basic authentication
To use the OpenSearch component, it has to be configured with a minimum
configuration.
@@ -173,7 +165,18 @@ SSL on the component like the example below
camelContext.addComponent("opensearch", opensearchComponent);
-# Index Example
+## Document type
+
+For all the search operations, it is possible to indicate the type of
+document to retrieve to get the result already unmarshalled with the
+expected type.
+
+The document type can be set using the header "documentClass" or via the
+uri parameter of the same name.
+
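Conceptually, the component turns that "documentClass" value into a Java class to unmarshal the result into; a minimal sketch with a hypothetical helper name (not the camel-opensearch API):

```java
public class DocumentClassResolver {

    // Load the target class from its fully qualified name, falling back
    // to a default when no "documentClass" header or uri parameter is set.
    static Class<?> resolve(String documentClassName, Class<?> defaultClass) {
        if (documentClassName == null || documentClassName.isEmpty()) {
            return defaultClass;
        }
        try {
            return Class.forName(documentClassName);
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("Unknown documentClass: " + documentClassName, e);
        }
    }
}
```
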
+# Examples
+
+## Index Example
Below is a simple INDEX example
@@ -185,7 +188,7 @@ Below is a simple INDEX example
-**For this operation, you’ll need to specify an indexId header.**
+For this operation, you’ll need to specify an indexId header.
A client would simply need to pass a body message containing a Map to
the route. The result body contains the indexId created.
@@ -194,7 +197,7 @@ the route. The result body contains the indexId created.
map.put("content", "test");
String indexId = template.requestBody("direct:index", map, String.class);
-# Search Example
+## Search Example
Searching on specific field(s) and value use the Operation ´Search´.
Pass in the query JSON String or the Map
@@ -247,7 +250,7 @@ Search using OpenSearch scroll api to fetch all results.
.to("mock:output")
.end();
-# MultiSearch Example
+## MultiSearch Example
MultiSearching on specific field(s) and value uses the Operation
`MultiSearch`. Pass in the MultiSearchRequest instance
@@ -269,15 +272,6 @@ MultiSearch on specific field(s)
.body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build());
List> response = template.requestBody("direct:multiSearch", builder, List.class);
-# Document type
-
-For all the search operations, it is possible to indicate the type of
-document to retrieve to get the result already unmarshalled with the
-expected type.
-
-The document type can be set using the header "documentClass" or via the
-uri parameter of the same name.
-
## Component Configurations
@@ -289,12 +283,12 @@ uri parameter of the same name.
|maxRetryTimeout|The time in ms before retry|30000|integer|
|socketTimeout|The timeout in ms to wait before the socket will time out.|30000|integer|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
-|client|To use an existing configured OpenSearch client, instead of creating a client per endpoint. This allows to customize the client with specific settings.||object|
-|enableSniffer|Enable automatically discover nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
+|client|To use an existing configured OpenSearch client, instead of creating a client per endpoint. This allows customizing the client with specific settings.||object|
+|enableSniffer|Enable automatic discovery of nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot, then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer|
|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer|
|enableSSL|Enable SSL|false|boolean|
-|password|Password for authenticate||string|
+|password|Password for authenticating||string|
|user|Basic authentication user||string|
## Endpoint Configurations
@@ -303,7 +297,7 @@ uri parameter of the same name.
|Name|Description|Default|Type|
|---|---|---|---|
|clusterName|Name of the cluster||string|
-|connectionTimeout|The time in ms to wait before connection will timeout.|30000|integer|
+|connectionTimeout|The time in ms to wait before connection will time out.|30000|integer|
|disconnect|Disconnect after it finishes calling the producer|false|boolean|
|from|Starting index of the response.||integer|
|hostAddresses|Comma separated list with ip:port formatted remote transport addresses to use.||string|
@@ -312,12 +306,12 @@ uri parameter of the same name.
|operation|What operation to perform||object|
|scrollKeepAliveMs|Time in ms during which OpenSearch will keep search context alive|60000|integer|
|size|Size of the response.||integer|
-|socketTimeout|The timeout in ms to wait before the socket will timeout.|30000|integer|
+|socketTimeout|The timeout in ms to wait before the socket will time out.|30000|integer|
|useScroll|Enable scroll usage|false|boolean|
|waitForActiveShards|Index creation waits for the write consistency number of shards to be available|1|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|documentClass|The class to use when deserializing the documents.|ObjectNode|string|
-|enableSniffer|Enable automatically discover nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
+|enableSniffer|Enable automatic discovery of nodes from a running OpenSearch cluster. If this option is used in conjunction with Spring Boot, then it's managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot).|false|boolean|
|sniffAfterFailureDelay|The delay of a sniff execution scheduled after a failure (in milliseconds)|60000|integer|
|snifferInterval|The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions|300000|integer|
|certificatePath|The certificate that can be used to access the ES Cluster. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems.||string|
diff --git a/camel-openshift-build-configs.md b/camel-openshift-build-configs.md
index 2cd4d8dc1013b20a177bcdb0f1dbf7e796e51e13..077bef413d822e34654ca1f83f8058bbeab0ce35 100644
--- a/camel-openshift-build-configs.md
+++ b/camel-openshift-build-configs.md
@@ -8,17 +8,21 @@ The OpenShift Build Config component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
execute Openshift Build Configs operations.
-# Supported producer operation
+# Usage
-- listBuildConfigs
+## Supported producer operation
-- listBuildConfigsByLabels
+- `listBuildConfigs`
-- getBuildConfig
+- `listBuildConfigsByLabels`
-# Openshift Build Configs Producer Examples
+- `getBuildConfig`
-- listBuilds: this operation lists the Build Configs on an Openshift
+# Examples
+
+## Openshift Build Configs Producer Examples
+
+- `listBuildConfigs`: this operation lists the build configs on an Openshift
cluster
@@ -27,10 +31,10 @@ execute Openshift Build Configs operations.
toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigs").
to("mock:result");
-This operation returns a List of Builds from your Openshift cluster
+This operation returns a list of build configs from your Openshift cluster
-- listBuildsByLabels: this operation lists the build configs by labels
- on an Openshift cluster
+- `listBuildConfigsByLabels`: this operation lists the build configs by
+ labels on an Openshift cluster
@@ -46,8 +50,8 @@ This operation returns a List of Builds from your Openshift cluster
toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigsByLabels").
to("mock:result");
-This operation returns a List of Build configs from your cluster, using
-a label selector (with key1 and key2, with value value1 and value2)
+This operation returns a list of build configs from your cluster using a
+label selector (with key1 and key2, with value value1 and value2)
## Component Configurations
diff --git a/camel-openshift-builds.md b/camel-openshift-builds.md
index 5a59f26943e3ef80be5256e7bc1577fc492a9266..390d99738eb5fea2a20f180d27ff9bba8e4f3cc8 100644
--- a/camel-openshift-builds.md
+++ b/camel-openshift-builds.md
@@ -8,17 +8,22 @@ The Openshift Builds component is one of [Kubernetes
Components](#kubernetes-summary.adoc) which provides a producer to
execute Openshift builds operations.
-# Supported producer operation
+# Usage
-- listBuilds
+## Supported producer operation
-- listBuildsByLabels
+- `listBuilds`
-- getBuild
+- `listBuildsByLabels`
-# Openshift Builds Producer Examples
+- `getBuild`
-- listBuilds: this operation lists the Builds on an Openshift cluster
+# Examples
+
+## Openshift Builds Producer Examples
+
+- `listBuilds`: this operation lists the builds on an Openshift
+ cluster
@@ -28,8 +33,8 @@ execute Openshift builds operations.
This operation returns a List of Builds from your Openshift cluster
-- listBuildsByLabels: this operation lists the builds by labels on an
- Openshift cluster
+- `listBuildsByLabels`: this operation lists the builds by labels on
+ an Openshift cluster
@@ -45,7 +50,7 @@ This operation returns a List of Builds from your Openshift cluster
toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels").
to("mock:result");
-This operation returns a List of Builds from your cluster, using a label
+This operation returns a list of builds from your cluster using a label
selector (with key1 and key2, with value value1 and value2)
## Component Configurations
diff --git a/camel-openshift-deploymentconfigs.md b/camel-openshift-deploymentconfigs.md
index 4474f8d31ad9525137723efa01e9989de7a8715e..a4c512a0b1be69e25e4676123b06ee175ebe44fc 100644
--- a/camel-openshift-deploymentconfigs.md
+++ b/camel-openshift-deploymentconfigs.md
@@ -9,25 +9,29 @@ Components](#kubernetes-summary.adoc) which provides a producer to
execute Openshift Deployment Configs operations and a consumer to
consume events related to Deployment Configs objects.
-# Supported producer operation
+# Usage
-- listDeploymentConfigs
+## Supported producer operation
-- listDeploymentsConfigsByLabels
+- `listDeploymentConfigs`
-- getDeploymentConfig
+- `listDeploymentsConfigsByLabels`
-- createDeploymentConfig
+- `getDeploymentConfig`
-- updateDeploymentConfig
+- `createDeploymentConfig`
-- deleteDeploymentConfig
+- `updateDeploymentConfig`
-- scaleDeploymentConfig
+- `deleteDeploymentConfig`
-# Openshift Deployment Configs Producer Examples
+- `scaleDeploymentConfig`
-- listDeploymentConfigs: this operation lists the deployments on an
+# Examples
+
+## Openshift Deployment Configs Producer Examples
+
+- `listDeploymentConfigs`: this operation lists the deployments on an
Openshift cluster
@@ -36,9 +40,9 @@ consume events related to Deployment Configs objects.
toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigs").
to("mock:result");
-This operation returns a List of Deployment Configs from your cluster
+This operation returns a list of deployment configs from your cluster
-- listDeploymentConfigsByLabels: this operation lists the deployment
+- `listDeploymentConfigsByLabels`: this operation lists the deployment
configs by labels on an Openshift cluster
@@ -55,11 +59,11 @@ This operation returns a List of Deployment Configs from your cluster
toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigsByLabels").
to("mock:result");
-This operation returns a List of Deployment Configs from your cluster,
+This operation returns a list of deployment configs from your cluster
using a label selector (with key1 and key2, with value value1 and
value2)
-# Openshift Deployment Configs Consumer Example
+## Openshift Deployment Configs Consumer Example
fromF("openshift-deploymentconfigs://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new OpenshiftProcessor()).to("mock:result");
public class OpenshiftProcessor implements Processor {
diff --git a/camel-openstack-cinder.md b/camel-openstack-cinder.md
index 67e9d7296b520a91ce242da32836e37f377ec51e..6aa72fc4b21a2f9d42769230460e6ed69b576725 100644
--- a/camel-openstack-cinder.md
+++ b/camel-openstack-cinder.md
@@ -9,7 +9,8 @@ OpenStack block storage services.
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
@@ -30,9 +31,9 @@ Camel.
You can use the following settings for each subsystem:
-# volumes
+## Volumes
-## Operations you can perform with the Volume producer
+### Operations you can perform with the Volume producer
@@ -40,33 +41,33 @@ You can use the following settings for each subsystem:
-
+
-
+
create
Create new volume.
-
+
get
Get the volume.
-
+
getAll
Get all volumes.
-
+
getAllTypes
Get volume types.
-
+
update
Update the volume.
-
+
delete
Delete the volume.
@@ -74,12 +75,12 @@ You can use the following settings for each subsystem:
If you need more precise volume settings, you can create a new object of
-the type **org.openstack4j.model.storage.block.Volume** and send in the
+the type `org.openstack4j.model.storage.block.Volume` and send in the
message body.
-# snapshots
+## Snapshots
-## Operations you can perform with the Snapshot producer
+### Operations you can perform with the Snapshot producer
@@ -87,29 +88,29 @@ message body.
-
+
-
+
create
Create a new snapshot.
-
+
get
Get the snapshot.
-
+
getAll
Get all snapshots.
-
+
update
Update the snapshot.
-
+
delete
Delete the snapshot.
@@ -117,7 +118,7 @@ message body.
If you need more precise server settings, you can create a new object of
-the type **org.openstack4j.model.storage.block.VolumeSnapshot** and send
+the type `org.openstack4j.model.storage.block.VolumeSnapshot` and send
in the message body.
## Component Configurations
diff --git a/camel-openstack-glance.md b/camel-openstack-glance.md
index 6122a0e954623cd70d5651750423da5c50d39737..908b8370ffe3bcd83ba1c8067a9e9f2edbb60e45 100644
--- a/camel-openstack-glance.md
+++ b/camel-openstack-glance.md
@@ -9,7 +9,8 @@ OpenStack image services.
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
@@ -34,37 +35,37 @@ Camel.
-
+
-
+
reserve
Reserve image.
-
+
create
Create a new image.
-
+
update
Update image.
-
+
upload
Upload image.
-
+
get
Get the image.
-
+
getAll
Get all images.
-
+
delete
Delete the image.
diff --git a/camel-openstack-keystone.md b/camel-openstack-keystone.md
index f8094f706572abc34d626eb9a78f3e2714bb2dc4..ce2ccdc031a6e2fbe610b8f0623524d7b8866ea6 100644
--- a/camel-openstack-keystone.md
+++ b/camel-openstack-keystone.md
@@ -11,7 +11,8 @@ The openstack-keystone component supports only Identity API v3
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
@@ -32,9 +33,9 @@ Camel.
You can use the following settings for each subsystem:
-# domains
+## Domains
-## Operations you can perform with the Domain producer
+### Operations you can perform with the Domain producer
@@ -42,29 +43,29 @@ You can use the following settings for each subsystem:
-
+
-
+
create
Create a new domain.
-
+
get
Get the domain.
-
+
getAll
Get all domains.
-
+
update
Update the domain.
-
+
delete
Delete the domain.
@@ -72,12 +73,12 @@ You can use the following settings for each subsystem:
If you need more precise domain settings, you can create a new object of
-the type **org.openstack4j.model.identity.v3.Domain** and send in the
+the type `org.openstack4j.model.identity.v3.Domain` and send in the
message body.
-# groups
+## Groups
-## Operations you can perform with the Group producer
+### Operations you can perform with the Group producer
@@ -85,42 +86,42 @@ message body.
-
+
-
+
create
Create a new group.
-
+
get
Get the group.
-
+
getAll
Get all groups.
-
+
update
Update the group.
-
+
delete
Delete the group.
-
+
addUserToGroup
Add the user to the group.
-
+
checkUserGroup
Check whether the user is in the
group.
-
+
removeUserFromGroup
Remove the user from the
@@ -130,12 +131,12 @@ group.
If you need more precise group settings, you can create a new object of
-the type **org.openstack4j.model.identity.v3.Group** and send in the
+the type `org.openstack4j.model.identity.v3.Group` and send in the
message body.
-# projects
+## Projects
-## Operations you can perform with the Project producer
+### Operations you can perform with the Project producer
@@ -143,29 +144,29 @@ message body.
-
+
-
+
create
Create a new project.
-
+
get
Get the project.
-
+
getAll
Get all projects.
-
+
update
Update the project.
-
+
delete
Delete the project.
@@ -173,12 +174,12 @@ message body.
If you need more precise project settings, you can create a new object
-of the type **org.openstack4j.model.identity.v3.Project** and send in
-the message body.
+of the type `org.openstack4j.model.identity.v3.Project` and send in the
+message body.
-# regions
+## Regions
-## Operations you can perform with the Region producer
+### Operations you can perform with the Region producer
@@ -186,29 +187,29 @@ the message body.
-
+
-
+
create
Create new region.
-
+
get
Get the region.
-
+
getAll
Get all regions.
-
+
update
Update the region.
-
+
delete
Delete the region.
@@ -216,12 +217,12 @@ the message body.
If you need more precise region settings, you can create a new object of
-the type **org.openstack4j.model.identity.v3.Region** and send in the
+the type `org.openstack4j.model.identity.v3.Region` and send in the
message body.
-# users
+## Users
-## Operations you can perform with the User producer
+### Operations you can perform with the User producer
@@ -229,29 +230,29 @@ message body.
-
+
-
+
create
Create new user.
-
+
get
Get the user.
-
+
getAll
Get all users.
-
+
update
Update the user.
-
+
delete
Delete the user.
@@ -259,7 +260,7 @@ message body.
If you need more precise user settings, you can create a new object of
-the type **org.openstack4j.model.identity.v3.User** and send in the
+the type `org.openstack4j.model.identity.v3.User` and send in the
message body.
## Component Configurations
diff --git a/camel-openstack-neutron.md b/camel-openstack-neutron.md
index 53a1ed487d9f07509e1838e66cfa026a7cd39a75..9bfa4897d68b5f3139b153ebaa39a0e15dee6fb0 100644
--- a/camel-openstack-neutron.md
+++ b/camel-openstack-neutron.md
@@ -9,7 +9,8 @@ OpenStack network services.
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
@@ -30,9 +31,9 @@ Camel.
You can use the following settings for each subsystem:
-# networks
+## Networks
-## Operations you can perform with the Network producer
+### Operations you can perform with the Network producer
@@ -40,25 +41,25 @@ You can use the following settings for each subsystem:
-
+
-
+
create
Create a new network.
-
+
get
Get the network.
-
+
getAll
Get all networks.
-
+
delete
Delete the network.
@@ -66,12 +67,12 @@ You can use the following settings for each subsystem:
If you need more precise network settings, you can create a new object
-of the type **org.openstack4j.model.network.Network** and send in the
+of the type `org.openstack4j.model.network.Network` and send in the
message body.
-# subnets
+## Subnets
-## Operations you can perform with the Subnet producer
+### Operations you can perform with the Subnet producer
@@ -79,29 +80,29 @@ message body.
-
+
-
+
create
Create new subnet.
-
+
get
Get the subnet.
-
+
getAll
Get all subnets.
-
+
delete
Delete the subnet.
-
+
action
Perform an action on the
subnet.
@@ -110,12 +111,12 @@ subnet.
If you need more precise subnet settings, you can create a new object of
-the type **org.openstack4j.model.network.Subnet** and send in the
-message body.
+the type `org.openstack4j.model.network.Subnet` and send in the message
+body.
-# ports
+## Ports
-## Operations you can perform with the Port producer
+### Operations you can perform with the Port producer
@@ -123,38 +124,38 @@ message body.
-
+
-
+
create
Create a new port.
-
+
get
Get the port.
-
+
getAll
Get all ports.
-
+
update
Update the port.
-
+
delete
Delete the port.
-# routers
+## Routers
-## Operations you can perform with the Router producer
+### Operations you can perform with the Router producer
@@ -162,37 +163,37 @@ message body.
-
+
-
+
create
Create a new router.
-
+
get
Get the router.
-
+
getAll
Get all routers.
-
+
update
Update the router.
-
+
delete
Delete the router.
-
+
attachInterface
Attach an interface.
-
+
detachInterface
Detach an interface.
diff --git a/camel-openstack-nova.md b/camel-openstack-nova.md
index 3da90d59ddfaa77f1d75eb2075cb55b888d91065..e354c326c132b245ba332ab2faf06aa76f54504f 100644
--- a/camel-openstack-nova.md
+++ b/camel-openstack-nova.md
@@ -9,7 +9,8 @@ compute services.
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
@@ -30,9 +31,9 @@ Camel.
You can use the following settings for each subsystem:
-# flavors
+## Flavors
-## Operations you can perform with the Flavor producer
+### Operations you can perform with the Flavor producer
@@ -40,25 +41,25 @@ You can use the following settings for each subsystem:
-
+
-
+
create
Create new flavor.
-
+
get
Get the flavor.
-
+
getAll
Get all flavors.
-
+
delete
Delete the flavor.
@@ -66,12 +67,12 @@ You can use the following settings for each subsystem:
If you need more precise flavor settings, you can create a new object of
-the type **org.openstack4j.model.compute.Flavor** and send in the
-message body.
+the type `org.openstack4j.model.compute.Flavor` and send in the message
+body.
-# servers
+## Servers
-## Operations you can perform with the Server producer
+### Operations you can perform with the Server producer
@@ -79,33 +80,33 @@ message body.
-
+
-
+
create
Create a new server.
-
+
createSnapshot
Create snapshot of the server.
-
+
get
Get the server.
-
+
getAll
Get all servers.
-
+
delete
Delete the server.
-
+
action
Perform an action on the
server.
@@ -114,12 +115,12 @@ server.
If you need more precise server settings, you can create a new object of
-the type **org.openstack4j.model.compute.ServerCreate** and send in the
+the type `org.openstack4j.model.compute.ServerCreate` and send in the
message body.
-# keypairs
+## Keypairs
-## Operations you can perform with the Keypair producer
+### Operations you can perform with the Keypair producer
@@ -127,25 +128,25 @@ message body.
-
+
-
+
create
Create new keypair.
-
+
get
Get the keypair.
-
+
getAll
Get all keypairs.
-
+
delete
Delete the keypair.
diff --git a/camel-openstack-summary.md b/camel-openstack-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..73db8eb11d26d9144517c5aeb4d53d45b0694200
--- /dev/null
+++ b/camel-openstack-summary.md
@@ -0,0 +1,24 @@
+# Openstack-summary.md
+
+**Since Camel 2.19**
+
+The OpenStack component is used for managing your
+[OpenStack](https://www.openstack.org/) applications.
+
+# OpenStack components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=OpenStack*,descriptionformat=description\]
+
+# Installation
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-openstack</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-openstack-swift.md b/camel-openstack-swift.md
index 67f2a0b012a0da735091fd81aac4ab52493858d0..7e4ce82eda33be6e49dca199ce907d2bef3d8310 100644
--- a/camel-openstack-swift.md
+++ b/camel-openstack-swift.md
@@ -9,7 +9,8 @@ object storage services.
# Dependencies
-Maven users will need to add the following dependency to their pom.xml.
+Maven users will need to add the following dependency to their
+`pom.xml`.
**pom.xml**
@@ -30,9 +31,9 @@ Camel.
You can use the following settings for each subsystem:
-# containers
+## Containers
-## Operations you can perform with the Container producer
+### Operations you can perform with the Container producer
@@ -40,42 +41,42 @@ You can use the following settings for each subsystem:
-
+
-
+
create
Create a new container.
-
+
get
Get the container.
-
+
getAll
Get all containers.
-
+
update
Update the container.
-
+
delete
Delete the container.
-
+
getMetadata
Get metadata.
-
+
createUpdateMetadata
Create/update metadata.
-
+
deleteMetadata
Delete metadata.
@@ -84,14 +85,14 @@ style="text-align: left;">createUpdateMetadata
If you need more precise container settings, you can create a new object
of the type
-**org.openstack4j.model.storage.object.options.CreateUpdateContainerOptions**
+`org.openstack4j.model.storage.object.options.CreateUpdateContainerOptions`
(in case of create or update operation) or
-**org.openstack4j.model.storage.object.options.ContainerListOptions**
-for listing containers and send in the message body.
+`org.openstack4j.model.storage.object.options.ContainerListOptions` for
+listing containers and send in the message body.
-# objects
+## Objects
-## Operations you can perform with the Object producer
+### Operations you can perform with the Object producer
@@ -99,37 +100,37 @@ for listing containers and send in the message body.
-
+
-
+
create
Create a new object.
-
+
get
Get the object.
-
+
getAll
Get all objects.
-
+
update
Update the object.
-
+
delete
Delete the object.
-
+
getMetadata
Get metadata.
-
+
createUpdateMetadata
Create/update metadata.
diff --git a/camel-opentelemetry.md b/camel-opentelemetry.md
new file mode 100644
index 0000000000000000000000000000000000000000..168dca5e868636bff82e7d17210e099f41f3333a
--- /dev/null
+++ b/camel-opentelemetry.md
@@ -0,0 +1,143 @@
+# Opentelemetry.md
+
+**Since Camel 3.5**
+
+The OpenTelemetry component is used for tracing and timing incoming and
+outgoing Camel messages using
+[OpenTelemetry](https://opentelemetry.io/).
+
+Events (spans) are captured for incoming and outgoing messages being
+sent to/from Camel.
+
+# Configuration
+
+The configuration properties for the OpenTelemetry tracer are:
+
+| Option | Default | Description |
+|---|---|---|
+| `instrumentationName` | `camel` | A name uniquely identifying the instrumentation scope, such as the instrumentation library, package, or fully qualified class name. Must not be null. |
+| `excludePatterns` | | Sets exclude pattern(s) that disable tracing for Camel messages that match the pattern. The content is a `Set<String>` where the key is a pattern. The pattern uses the rules from Intercept. |
+| `encoding` | `false` | Sets whether the header keys need to be encoded (connector specific) or not. The value is a boolean. Dashes, for instance, need to be encoded for JMS property keys. |
+| `traceProcessors` | `false` | Setting this to `true` will create new OpenTelemetry spans for each Camel processor. Use the `excludePatterns` property to filter out processors. |
+
+# Using Camel OpenTelemetry
+
+Include the `camel-opentelemetry` component in your POM, along with any
+specific dependencies associated with the chosen OpenTelemetry compliant
+Tracer.
+
+To explicitly configure OpenTelemetry support, instantiate the
+`OpenTelemetryTracer` and initialize the Camel context. You can
+optionally specify a `Tracer`, or alternatively it can be implicitly
+discovered using the `Registry`:
+
+ OpenTelemetryTracer otelTracer = new OpenTelemetryTracer();
+ // By default, it uses the DefaultTracer, but you can override it with a specific OpenTelemetry Tracer implementation.
+ otelTracer.setTracer(...);
+ // And then initialize the context
+ otelTracer.init(camelContext);
+
+You would still need OpenTelemetry to instrument your code, which can be
+done via a [Java agent](#OpenTelemetry-JavaAgent).
+
+## Using with standalone Camel
+
+If you use `camel-main` as standalone Camel, then you can enable and use
+OpenTelemetry without Java code.
+
+Add the `camel-opentelemetry` component to your POM, and configure it
+in `application.properties`:
+
+ camel.opentelemetry.enabled = true
+ # you can configure the other options
+ # camel.opentelemetry.instrumentationName = myApp
+
+You would still need OpenTelemetry to instrument your code, which can be
+done via a [Java agent](#OpenTelemetry-JavaAgent).
+
+# Spring Boot
+
+If you are using Spring Boot, then you can add the
+`camel-opentelemetry-starter` dependency, and turn on OpenTelemetry by
+annotating the main class with `@CamelOpenTelemetry`.
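+
+For example, as a sketch (the class and application names here are
+illustrative, assuming a standard Spring Boot application):
+
+    // MyApplication is a hypothetical Spring Boot main class
+    @CamelOpenTelemetry
+    @SpringBootApplication
+    public class MyApplication {
+        public static void main(String[] args) {
+            SpringApplication.run(MyApplication.class, args);
+        }
+    }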
+
+The `OpenTelemetryTracer` will be implicitly obtained from the Camel
+context’s `Registry`, unless an `OpenTelemetryTracer` bean has been
+defined by the application.
+
+# Java Agent
+
+Download the [latest
+version](https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/).
+
+This package includes the instrumentation agent as well as
+instrumentation for all supported libraries and all available data
+exporters. The package provides a completely automatic, out-of-the-box
+experience.
+
+Enable the instrumentation agent using the `-javaagent` flag to the JVM.
+
+ java -javaagent:path/to/opentelemetry-javaagent.jar \
+ -jar myapp.jar
+
+By default, the OpenTelemetry Java agent uses [OTLP
+exporter](https://github.com/open-telemetry/opentelemetry-java/tree/main/exporters/otlp)
+configured to send data to [OpenTelemetry
+collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md)
+at `http://localhost:4318`.
+
+Configuration parameters are passed as Java system properties (`-D`
+flags) or as environment variables. See [the configuration
+documentation](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/agent-config.md)
+for the full list of configuration items. For example:
+
+ java -javaagent:path/to/opentelemetry-javaagent.jar \
+ -Dotel.service.name=your-service-name \
+ -Dotel.traces.exporter=otlp \
+ -jar myapp.jar
+
+# MDC Logging
+
+When MDC logging is enabled for the active Camel context, the Trace ID
+and Span ID are added to and removed from the MDC for each route; the
+keys are `trace_id` and `span_id`, respectively.
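+
+For example, assuming Logback with a pattern layout, these MDC keys can
+be referenced in the log pattern (the pattern below is illustrative):
+
+    <pattern>%d{HH:mm:ss} [%X{trace_id}/%X{span_id}] %-5level %logger - %msg%n</pattern>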
diff --git a/camel-optaplanner.md b/camel-optaplanner.md
index 5dd4dd12b5978f0b48fab3ec9628eefb2b7513e1..319aaf40d3f599b55a6252bcda597fad9273d580 100644
--- a/camel-optaplanner.md
+++ b/camel-optaplanner.md
@@ -4,13 +4,13 @@
**Both producer and consumer are supported**
-The Optaplanner component solves the planning problem contained in a
-message with [OptaPlanner](http://www.optaplanner.org/).
-For example, feed it an unsolved Vehicle Routing problem and it solves
-it.
+The [OptaPlanner](http://www.optaplanner.org/) component solves the
+planning problem contained in a message. For example, feed it an
+unsolved Vehicle Routing problem and it solves it.
-The component supports consumer listening for SloverManager results and
-producer for processing Solution and ProblemChange.
+The component supports a consumer listening for `SolverManager` results
+and a producer for processing `Solution` and `ProblemChange`.
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -30,8 +30,8 @@ You can append query options to the URI in the following format,
# Message Body
-Camel takes the planning problem for the *IN* body, solves it and
-returns it on the *OUT* body. The *IN* body object supports the
+Camel takes the planning problem from the `IN` body, solves it, and
+returns it on the `OUT` body. The `IN` body object supports the
following use cases:
- If the body contains the `PlanningSolution` annotation, then it will
@@ -44,10 +44,10 @@ following use cases:
- If the body is none of the above types, then the producer will
return the best result from the solver identified by `solverId`.
-## Samples
+## Examples
Solve a planning problem on the ActiveMQ queue with OptaPlanner, passing
-the SolverManager:
+the `SolverManager`:
    from("activemq:My.Queue")
        .to("optaplanner:problemName?solverManager=#solverManager");
diff --git a/camel-paho-mqtt5.md b/camel-paho-mqtt5.md
index b7e197aa6914cead2f511eb165bd94b17ee0cd1d..9c2ad1bf12fd07ecc55cf0f19a79d25cf7889b4f 100644
--- a/camel-paho-mqtt5.md
+++ b/camel-paho-mqtt5.md
@@ -26,7 +26,9 @@ for this component:
Where `topic` is the name of the topic.
-# Default payload type
+# Usage
+
+## Default payload type
By default, the Camel Paho component operates on the binary payloads
extracted out of (or put into) the MQTT message:
@@ -50,7 +52,7 @@ converts binary payload into `String` (and conversely):
String payload = "message";
producerTemplate.sendBody("paho-mqtt5:topic", payload);
-# Samples
+# Examples
For example, the following snippet reads messages from the MQTT broker
installed on the same host as the Camel router:
diff --git a/camel-paho.md b/camel-paho.md
index 3c185dca7447bf53de393612daf3a1e1f7fb56a9..cc42b1a74f4bde98df6e55e3b876115658fe450c 100644
--- a/camel-paho.md
+++ b/camel-paho.md
@@ -26,7 +26,9 @@ for this component:
Where `topic` is the name of the topic.
-# Default payload type
+# Usage
+
+## Default payload type
By default, the Camel Paho component operates on the binary payloads
extracted out of (or put into) the MQTT message:
@@ -50,7 +52,7 @@ converts binary payload into `String` (and conversely):
String payload = "message";
producerTemplate.sendBody("paho:topic", payload);
-# Samples
+# Examples
For example, the following snippet reads messages from the MQTT broker
installed on the same host as the Camel router:
diff --git a/camel-parquetAvro-dataformat.md b/camel-parquetAvro-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..b770b1f142e003d7703739048b00a3725ca56c91
--- /dev/null
+++ b/camel-parquetAvro-dataformat.md
@@ -0,0 +1,47 @@
+# ParquetAvro-dataformat.md
+
+**Since Camel 4.0**
+
+The ParquetAvro data format is a Camel data format implementation based
+on the parquet-avro library for serialization and deserialization.
+Messages can be unmarshalled to Avro `GenericRecord`s or plain Java
+objects (POJOs). With the help of Camel’s routing engine and data
+transformations, you can then apply customised formatting and call
+other Camel components to convert and send messages to upstream
+systems.
+
+# Parquet Data Format Options
+
+# Unmarshal
+
+You can unmarshal Parquet structures, usually binary Parquet files,
+wherever the Camel DSL allows.
+
+In this first example, we unmarshal the file payload to an
+`OutputStream` and send it to a mock endpoint; we can then read it back
+as a `GenericRecord` or POJO (or a list, if several records come
+through):
+
+ from("direct:unmarshal").unmarshal(parquet).to("mock:unmarshal");
+
+# Marshal
+
+Marshalling is the reverse process of unmarshalling, so when you have
+your `GenericRecord` or POJO and marshal it, you will get the
+parquet-formatted output stream on your producer endpoint.
+
+ from("direct:marshal").marshal(parquet).to("mock:marshal");
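+
+The `parquet` variable in the routes above is the data format instance;
+a minimal sketch, assuming the `ParquetAvroDataFormat` class from
+camel-parquet-avro and an illustrative `MyPojo` type:
+
+    ParquetAvroDataFormat parquet = new ParquetAvroDataFormat();
+    // Optionally set the target type for unmarshalling; otherwise GenericRecord is used
+    parquet.setUnmarshalType(MyPojo.class);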
+
+# Dependencies
+
+To use the parquet-avro data format in your Camel routes, you need to
+add a dependency on **camel-parquet-avro**, which implements this data
+format.
+
+If you use Maven, you can add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-parquet-avro</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
diff --git a/camel-pdf.md b/camel-pdf.md
index f4859e10d645139ba798a9ca0068ced7cf2728d5..77305ad6b9866109fbe0e3aca1d5afdcd48179c8 100644
--- a/camel-pdf.md
+++ b/camel-pdf.md
@@ -27,7 +27,9 @@ The PDF component only supports producer endpoints.
pdf:operation[?options]
-# Type converter
+# Usage
+
+## Type converter
Since Camel 4.8, the component is capable of doing simple document
conversions. For instance, suppose you are receiving a PDF byte as a
diff --git a/camel-pg-replication-slot.md b/camel-pg-replication-slot.md
index 31effa3271758f123942b3c95d2ebe5d10880c8c..cf561cb67fc43a833ca3d140a8d116e051237029 100644
--- a/camel-pg-replication-slot.md
+++ b/camel-pg-replication-slot.md
@@ -18,19 +18,16 @@ for this component:
-URI format
+# URI format
The pg-replication-slot component uses the following two styles of
endpoint URI notation:
pg-replication-slot://host:port/database/slot:plugin[?parameters]
-# Examples
+# Usage
- from("pg-replication-slot://localhost:5432/finance/sync_slot:test_decoding?user={{username}}&password={{password}}&slotOptions.skip-empty-xacts=true&slotOptions.include-xids=false")
- .to("mock:result");
-
-# Tips
+## Tips
PostgreSQL can generate a huge number of empty transactions on certain
operations (e.g. `VACUUM`). These transactions can congest your route.
@@ -44,6 +41,13 @@ data from PostgreSQL to another database, make sure your operations are
idempotent (e.g., use `UPSERT` instead of `INSERT`, etc). This will make
sure repeated messages won’t affect your system negatively.
+# Examples
+
+**Example route**
+
+ from("pg-replication-slot://localhost:5432/finance/sync_slot:test_decoding?user={{username}}&password={{password}}&slotOptions.skip-empty-xacts=true&slotOptions.include-xids=false")
+ .to("mock:result");
+
## Component Configurations
diff --git a/camel-pgevent.md b/camel-pgevent.md
index dbed71c48de404520affa9b8e05f4acdc2c3745f..604b1243f30ebc5ea8437f2d14157d5684820a67 100644
--- a/camel-pgevent.md
+++ b/camel-pgevent.md
@@ -18,7 +18,7 @@ for this component:
-URI format
+# URI format
The pgevent component uses the following two styles of endpoint URI
notation:
@@ -26,9 +26,11 @@ notation:
pgevent:datasource[?parameters]
pgevent://host:port/database/channel[?parameters]
-# Common problems
+# Usage
-## Unable to connect to PostgreSQL database using DataSource
+## Common problems
+
+### Unable to connect to PostgreSQL database using DataSource
Using the driver provided by PostgreSQL itself (`jdbc:postgresql:/...`)
when using a DataSource to connect to a PostgreSQL database does not
diff --git a/camel-pgp-dataformat.md b/camel-pgp-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..14d2ce187bbd7e1f292fcb2a93201284299668fb
--- /dev/null
+++ b/camel-pgp-dataformat.md
@@ -0,0 +1,376 @@
+# Pgp-dataformat.md
+
+**Since Camel 2.9**
+
+The PGP Data Format integrates the Java Cryptographic Extension into
+Camel, allowing simple and flexible encryption and decryption of
+messages using Camel’s familiar marshal and unmarshal formatting
+mechanism. It assumes marshalling to mean encryption to ciphertext and
+unmarshalling to mean decryption back to the original plaintext. This
+data format uses PGP public and private keyrings for encryption,
+decryption, signing, and signature verification.
+
+# PGPDataFormat Options
+
+# PGPDataFormat Message Headers
+
+You can override the `PGPDataFormat` options by setting the following
+headers on messages dynamically.
+
+| Header | Type | Description |
+|---|---|---|
+| `CamelPGPDataFormatKeyFileName` | `String` | filename of the keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatEncryptionKeyRing` | `byte[]` | the encryption keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatKeyUserid` | `String` | the User ID of the key in the PGP keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatKeyUserids` | `List<String>` | the User IDs of the key in the PGP keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatKeyPassword` | `String` | password used when opening the private key; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatSignatureKeyFileName` | `String` | filename of the signature keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatSignatureKeyRing` | `byte[]` | the signature keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatSignatureKeyUserid` | `String` | the User ID of the signature key in the PGP keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatSignatureKeyUserids` | `List<String>` | the User IDs of the signature keys in the PGP keyring; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatSignatureKeyPassword` | `String` | password used when opening the signature private key; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatEncryptionAlgorithm` | `int` | symmetric key encryption algorithm; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatSignatureHashAlgorithm` | `int` | signature hash algorithm; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatCompressionAlgorithm` | `int` | compression algorithm; will override existing setting directly on the PGPDataFormat. |
+| `CamelPGPDataFormatNumberOfEncryptionKeys` | `Integer` | number of public keys used for encrypting the symmetric key; set by PGPDataFormat during the encryption process. |
+| `CamelPGPDataFormatNumberOfSigningKeys` | `Integer` | number of private keys used for creating signatures; set by PGPDataFormat during the signing process. |
+
+# Encrypting with PGPDataFormat
+
+The following sample uses the popular PGP format for
+encrypting/decrypting files using the [Bouncy Castle Java
+libraries](http://www.bouncycastle.org/java.html). It performs signing
+plus encryption, and then signature verification plus decryption. It
+uses the same keyring for both signing and encryption, but you can of
+course use different keys:
+
+Java
+
+    from("direct:pgp-encrypt")
+        .marshal().pgp("file:pubring.gpg", "alice@example.com")
+        .unmarshal().pgp("file:secring.gpg", "alice@example.com", "letmein");
+
+Spring XML
+
+    <marshal>
+        <pgp keyFileName="file:pubring.gpg" keyUserid="alice@example.com"/>
+    </marshal>
+    <unmarshal>
+        <pgp keyFileName="file:secring.gpg" keyUserid="alice@example.com" password="letmein"/>
+    </unmarshal>
+
+## Working with the previous example
+
+To work with the previous example, you need the following:
+- A public keyring file which contains the public keys used to encrypt
+ the data
+
+- A private keyring file which contains the keys used to decrypt the
+ data
+
+- The keyring password
+
+## Managing your keyring
+
+To manage the keyring, I use the command-line tools, which I find to be
+the simplest approach to managing the keys. There are also Java
+libraries available from [http://www.bouncycastle.org/java.html](http://www.bouncycastle.org/java.html) if you
+would prefer to do it that way.
+
+Install the command-line utilities on Linux:
+
+ apt-get install gnupg
+
+Create your keyring, entering a secure password:
+
+ gpg --gen-key
+
+Import someone else’s public key if you need to encrypt a file for
+them, and export your secret keys for use in decryption:
+
+ gpg --import pubring.gpg
+ gpg --export-secret-keys > secring.gpg
+
+# PGP Decrypting/Verifying of Messages Encrypted/Signed by Different Private/Public Keys
+
+A PGP Data Formatter can decrypt/verify messages which have been
+encrypted by different public keys or signed by different private keys.
+Provide the corresponding private keys in the secret keyring, the
+corresponding public keys in the public keyring, and the passphrases in
+the passphrase accessor.
+
+ Map<String, String> userId2Passphrase = new HashMap<>(2);
+ // add passphrases of several private keys whose corresponding public keys have been used to encrypt the messages
+ userId2Passphrase.put("UserIdOfKey1","passphrase1"); // you must specify the exact User ID!
+ userId2Passphrase.put("UserIdOfKey2","passphrase2");
+ PGPPassphraseAccessor passphraseAccessor = new PGPPassphraseAccessorDefault(userId2Passphrase);
+
+ PGPDataFormat pgpVerifyAndDecrypt = new PGPDataFormat();
+ pgpVerifyAndDecrypt.setPassphraseAccessor(passphraseAccessor);
+ // the method getSecKeyRing() provides the secret keyring as a byte array containing the private keys
+ pgpVerifyAndDecrypt.setEncryptionKeyRing(getSecKeyRing()); // alternatively, you can use setKeyFileName(keyfileName)
+ // the method getPublicKeyRing() provides the public keyring as a byte array containing the public keys
+ pgpVerifyAndDecrypt.setSignatureKeyRing(getPublicKeyRing()); // alternatively, you can use setSignatureKeyFileName(signatureKeyfileName)
+ // it is not necessary to specify the encryption or signer User Id
+
+ from("direct:start")
+ ...
+ .unmarshal(pgpVerifyAndDecrypt) // can decrypt/verify messages encrypted/signed by different private/public keys
+ ...
+
+- The functionality is especially useful to support the key exchange.
+ If you want to exchange the private key for decrypting, you can
+ accept for a period of time messages which are either encrypted with
+ the old or new corresponding public key. Or if the sender wants to
+ exchange his signer private key, you can accept for a period of
+ time, the old or new signer key.
+
+- Technical background: The PGP encrypted data contains a Key ID of
+ the public key which was used to encrypt the data. This Key ID can
+ be used to locate the private key in the secret keyring to decrypt
+ the data. The same mechanism is also used to locate the public key
+ for verifying a signature. Therefore, you no longer must specify
+ User IDs for the unmarshalling.
+
+# Restricting the Signer Identities during PGP Signature Verification
+
+If you verify a signature, you not only want to verify the correctness
+of the signature, but you also want to check that the signature comes
+from a certain identity or a specific set of identities. Therefore, it
+is possible to restrict the number of public keys from the public
+keyring which can be used for the verification of a signature.
+
+**Signature User IDs**
+
+ // specify the User IDs of the expected signer identities
+ List expectedSigUserIds = new ArrayList();
+ expectedSigUserIds.add("Trusted company1");
+ expectedSigUserIds.add("Trusted company2");
+
+ PGPDataFormat pgpVerifyWithSpecificKeysAndDecrypt = new PGPDataFormat();
+ pgpVerifyWithSpecificKeysAndDecrypt.setPassword("my password"); // for decrypting with private key
+ pgpVerifyWithSpecificKeysAndDecrypt.setKeyFileName(keyfileName);
+ pgpVerifyWithSpecificKeysAndDecrypt.setSignatureKeyFileName(signatureKeyfileName);
+ pgpVerifyWithSpecificKeysAndDecrypt.setSignatureKeyUserids(expectedSigUserIds); // if you have only one signer identity, then you can also use setSignatureKeyUserid("expected Signer")
+
+ from("direct:start")
+ ...
+ .unmarshal(pgpVerifyWithSpecificKeysAndDecrypt)
+ ...
+
+- If the PGP content has several signatures, the verification is
+ successful as soon as one signature can be verified.
+
+- If you do not want to restrict the signer identities for
+ verification, then do not specify the signature key User IDs. In
+ this case, all public keys in the public keyring are taken into
+ account.
+
+# Several Signatures in One PGP Data Format
+
+The PGP specification allows one PGP data format to contain several
+signatures from different keys. Since Camel 2.13.3, it has been
+possible to create such PGP content by specifying signature User IDs
+that relate to several private keys in the secret keyring.
+
+**Several Signatures**
+
+    PGPDataFormat pgpSignAndEncryptSeveralSignerKeys = new PGPDataFormat();
+    pgpSignAndEncryptSeveralSignerKeys.setKeyUserid(keyUserid); // for encrypting; you can also use setKeyUserids if you want to encrypt with several keys
+    pgpSignAndEncryptSeveralSignerKeys.setKeyFileName(keyfileName);
+    pgpSignAndEncryptSeveralSignerKeys.setSignatureKeyFileName(signatureKeyfileName);
+    pgpSignAndEncryptSeveralSignerKeys.setSignaturePassword("sdude"); // here we assume all private keys have the same password; if not, use setPassphraseAccessor
+
+    List<String> signerUserIds = new ArrayList<>();
+    signerUserIds.add("company old key");
+    signerUserIds.add("company new key");
+    pgpSignAndEncryptSeveralSignerKeys.setSignatureKeyUserids(signerUserIds);
+
+    from("direct:start")
+        ...
+        .marshal(pgpSignAndEncryptSeveralSignerKeys)
+        ...
+
+# Support for Sub-Keys and Key Flags in PGP Data Format Marshaller
+
+An [OpenPGP V4 key](https://tools.ietf.org/html/rfc4880#section-12.1)
+can have a primary key and sub-keys. The usage of the keys is indicated
+by the so-called [Key
+Flags](https://tools.ietf.org/html/rfc4880#section-5.2.3.21). For
+example, you can have a primary key with two sub-keys; the primary key
+shall only be used for certifying other keys (Key Flag 0x01), the first
+sub-key shall only be used for signing (Key Flag 0x02), and the second
+sub-key shall only be used for encryption (Key Flag 0x04 or 0x08). The
+PGP Data Format marshaller takes into account these Key Flags of the
+primary key and sub-keys in order to determine the right key for signing
+and encryption. This is necessary because the primary key and its
+sub-keys have the same User IDs.
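The Key Flag values mentioned above are bit flags. As a rough illustration of the RFC 4880 flag arithmetic (illustrative only, not part of the Camel API), checking whether a key may sign or encrypt looks like:

```java
// Minimal sketch of RFC 4880 Key Flag checks (section 5.2.3.21).
// These constants and helpers are illustrative, not the Camel implementation.
public class KeyFlagDemo {
    static final int CERTIFY         = 0x01; // certify other keys
    static final int SIGN            = 0x02; // sign data
    static final int ENCRYPT_COMMS   = 0x04; // encrypt communications
    static final int ENCRYPT_STORAGE = 0x08; // encrypt storage

    static boolean canSign(int keyFlags) {
        return (keyFlags & SIGN) != 0;
    }

    static boolean canEncrypt(int keyFlags) {
        return (keyFlags & (ENCRYPT_COMMS | ENCRYPT_STORAGE)) != 0;
    }

    public static void main(String[] args) {
        // a primary key used only for certification can neither sign nor encrypt data
        System.out.println(canSign(CERTIFY));          // false
        System.out.println(canEncrypt(ENCRYPT_COMMS)); // true
    }
}
```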
+
+# Support for Custom Key Accessors
+
+You can implement custom key accessors for encryption/signing. The
+PGPDataFormat class above selects the keys used for signing/encryption
+or verifying/decryption in a predefined way. If you have special
+requirements for how your keys should be selected, use the
+[PGPKeyAccessDataFormat](https://github.com/apache/camel/blob/main/components/camel-crypto/src/main/java/org/apache/camel/converter/crypto/PGPKeyAccessDataFormat.java)
+class instead and implement the interfaces
+[PGPPublicKeyAccessor](https://github.com/apache/camel/blob/main/components/camel-crypto/src/main/java/org/apache/camel/converter/crypto/PGPPublicKeyAccessor.java)
+and
+[PGPSecretKeyAccessor](https://github.com/apache/camel/blob/main/components/camel-crypto/src/main/java/org/apache/camel/converter/crypto/PGPSecretKeyAccessor.java)
+as beans. There are default implementations
+[DefaultPGPPublicKeyAccessor](https://github.com/apache/camel/blob/main/components/camel-crypto/src/main/java/org/apache/camel/converter/crypto/DefaultPGPPublicKeyAccessor.java)
+and
+[DefaultPGPSecretKeyAccessor](https://github.com/apache/camel/blob/main/components/camel-crypto/src/main/java/org/apache/camel/converter/crypto/DefaultPGPSecretKeyAccessor.java)
+which cache the keys, so that the keyring is not parsed every time the
+processor is called.
+
+PGPKeyAccessDataFormat has the same options as PGPDataFormat except
+password, keyFileName, encryptionKeyRing, signaturePassword,
+signatureKeyFileName, and signatureKeyRing.
+
+# Dependencies
+
+To use the PGP data format in your Camel routes, you need to add the
+following dependency to your POM:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-crypto</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-pinecone.md b/camel-pinecone.md
index 636784fb67fd255b949becafb0eacab3844a99a4..7cc1c32cbe6d599ebe0695293082bd00d3126563 100644
--- a/camel-pinecone.md
+++ b/camel-pinecone.md
@@ -4,8 +4,8 @@
**Only producer is supported**
-The Pionecone Component provides support for interacting with the
-[Milvus Vector Database](https://pinecone.io/).
+The Pinecone Component provides support for interacting with the
+[Pinecone Vector Database](https://pinecone.io/).
# URI format
diff --git a/camel-pipeline-eip.md b/camel-pipeline-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..de44efbd362eec24347b019c34f467616580e730
--- /dev/null
+++ b/camel-pipeline-eip.md
@@ -0,0 +1,111 @@
+# Pipeline-eip.md
+
+Camel supports the [Pipes and
+Filters](http://www.enterpriseintegrationpatterns.com/PipesAndFilters.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) in
+various ways.
+
+
+
+
+
+With Camel, you can separate your processing across multiple independent
+[Endpoints](#manual::endpoint.adoc) which can then be chained together.
+
+# Options
+
+# Exchange properties
+
+# Using pipeline
+
+You can create pipelines of logic using multiple
+[Endpoint](#manual::endpoint.adoc) or [Message
+Translator](#message-translator.adoc) instances as follows:
+
+Java
+
+    from("activemq:cheese")
+        .pipeline()
+            .to("bean:foo")
+            .to("bean:bar")
+            .to("activemq:wine");
+
+XML
+
+    <route>
+        <from uri="activemq:cheese"/>
+        <pipeline>
+            <to uri="bean:foo"/>
+            <to uri="bean:bar"/>
+            <to uri="activemq:wine"/>
+        </pipeline>
+    </route>
+
+A pipeline is, however, the default mode of operation when you specify
+multiple outputs in Camel. Therefore, it’s much more common to see this
+with Camel:
+
+Java
+
+    from("activemq:SomeQueue")
+        .to("bean:foo")
+        .to("bean:bar")
+        .to("activemq:OutputQueue");
+
+XML
+
+    <route>
+        <from uri="activemq:SomeQueue"/>
+        <to uri="bean:foo"/>
+        <to uri="bean:bar"/>
+        <to uri="activemq:OutputQueue"/>
+    </route>
+
+## Pipeline vs Multicast
+
+The opposite to `pipeline` is [`multicast`](#multicast-eip.adoc). A
+[Multicast](#multicast-eip.adoc) EIP routes a copy of the same message
+into each of its outputs, where these messages are processed
+independently. The Pipeline EIP, however, routes the same message
+sequentially through the pipeline, where the output of the previous
+step is the input to the next. This is the same principle as chaining
+commands together with the pipe (`|`) in a Linux shell.
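The contrast can be sketched in plain Java (illustrative only, no Camel APIs): a pipeline composes steps so each output feeds the next, while a multicast applies each step to its own copy of the same input.

```java
import java.util.List;
import java.util.function.Function;

public class PipelineVsMulticast {
    static final Function<String, String> FOO = s -> s + "->foo";
    static final Function<String, String> BAR = s -> s + "->bar";

    // Pipeline: the output of the previous step is the input to the next,
    // like chaining shell commands with |.
    static String pipeline(String msg) {
        return FOO.andThen(BAR).apply(msg);
    }

    // Multicast: each output processes its own copy of the same message.
    static List<String> multicast(String msg) {
        return List.of(FOO.apply(msg), BAR.apply(msg));
    }

    public static void main(String[] args) {
        System.out.println(pipeline("msg"));  // msg->foo->bar
        System.out.println(multicast("msg")); // [msg->foo, msg->bar]
    }
}
```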
+
+## When using a pipeline is necessary
+
+Using a pipeline becomes necessary when you need to group a series of
+steps into a single logical step. For example, the example below uses
+the [Multicast](#multicast-eip.adoc) EIP to process the same message in
+two different pipelines. The first pipeline calls the something bean,
+and the second pipeline calls the foo and bar beans and then routes the
+message to another queue.
+
+Java
+
+    from("activemq:SomeQueue")
+        .multicast()
+            .pipeline()
+                .to("bean:something")
+                .to("log:something")
+            .end()
+            .pipeline()
+                .to("bean:foo")
+                .to("bean:bar")
+                .to("activemq:OutputQueue")
+            .end()
+        .end() // ends multicast
+        .to("log:result");
+
+Notice how we have to use `end()` to mark the end of the blocks.
+
+XML
+
+    <route>
+        <from uri="activemq:SomeQueue"/>
+        <multicast>
+            <pipeline>
+                <to uri="bean:something"/>
+                <to uri="log:something"/>
+            </pipeline>
+            <pipeline>
+                <to uri="bean:foo"/>
+                <to uri="bean:bar"/>
+                <to uri="activemq:OutputQueue"/>
+            </pipeline>
+        </multicast>
+        <to uri="log:result"/>
+    </route>
+
diff --git a/camel-platform-http-jolokia.md b/camel-platform-http-jolokia.md
new file mode 100644
index 0000000000000000000000000000000000000000..779946133a7df28babcfd6d12af725adfb5ad717
--- /dev/null
+++ b/camel-platform-http-jolokia.md
@@ -0,0 +1,11 @@
+# Platform-http-jolokia.md
+
+**Since Camel 4.5**
+
+The Platform HTTP Jolokia component is used for Camel standalone to
+expose Jolokia over HTTP using the embedded HTTP server.
+
+Jolokia can be enabled as follows in `application.properties`:
+
+ camel.server.enabled = true
+ camel.server.jolokiaEnabled = true
diff --git a/camel-platform-http-main.md b/camel-platform-http-main.md
new file mode 100644
index 0000000000000000000000000000000000000000..170c9e4d40ce8dd3e9e4bf56ee3b356fd30b3508
--- /dev/null
+++ b/camel-platform-http-main.md
@@ -0,0 +1,55 @@
+# Platform-http-main.md
+
+**Since Camel 4.0**
+
+The camel-platform-http-main is an embedded HTTP server for `camel-main`
+standalone applications.
+
+The embedded HTTP server uses Vert.x from the
+`camel-platform-http-vertx` dependency.
+
+# Enabling
+
+The HTTP server for `camel-main` is disabled by default, and you need to
+explicitly enable this by setting `camel.server.enabled=true` in
+application.properties.
+
+# Auto-detection from classpath
+
+To use this implementation, all you need to do is add the
+`camel-platform-http-main` dependency to the classpath. The platform
+HTTP component should then auto-detect it.
+
+# Features
+
+The embedded HTTP server comes with a set of out-of-the-box features
+that can be enabled.
+
+These features are as follows:
+
+- `/q/info` - Report basic information about Camel
+
+- `/dev/console` - Developer console that provides a lot of statistics
+ and information
+
+- `/q/health` - Health checks
+
+- `/q/jolokia` - To use Jolokia to expose JMX over HTTP REST
+
+- `/q/metrics` - To provide OpenTelemetry metrics in Prometheus format
+
+- `/q/upload` - Uploading source files, to allow hot reloading.
+
+- `/q/download` - Downloading source files, to allow inspecting
+
+- `/q/send` - Sending messages to the Camel application via HTTP
+
+- `/` - Serving static content such as html, javascript, css, and
+ images to make it easy to embed very small web applications.
+
+You configure these features in the `application.properties` file using
+the `camel.server.xxx` options.
+
+# See More
+
+- [Platform HTTP Vert.x](#platform-http-vertx.adoc)
diff --git a/camel-platform-http-vertx.md b/camel-platform-http-vertx.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a78531cb94b3f7969cf22f1f979d2c5ed0de94c
--- /dev/null
+++ b/camel-platform-http-vertx.md
@@ -0,0 +1,187 @@
+# Platform-http-vertx.md
+
+**Since Camel 3.2**
+
+The camel-platform-http-vertx is a Vert.x based implementation of the
+`PlatformHttp` SPI.
+
+# Vert.x Route
+
+This implementation will by default look up the instance of
+`VertxPlatformHttpRouter` in the registry; however, you can configure
+an existing instance using the getter/setter on the
+`VertxPlatformHttpEngine` class.
+
+# Auto-detection from classpath
+
+To use this implementation, all you need to do is add the
+`camel-platform-http-vertx` dependency to the classpath, and the
+platform HTTP component should auto-detect it.
+
+# Message Headers
+
+| Name | Type | Description |
+|------|------|-------------|
+| CamelVertxPlatformHttpAuthenticatedUser | io.vertx.ext.auth.User | If an authenticated user is present on the Vert.x Web RoutingContext, this header is populated with a User object containing the Principal. |
+| CamelVertxPlatformHttpLocalAddress | io.vertx.core.net.SocketAddress | The local address for the connection if present on the Vert.x Web RoutingContext. |
+| CamelVertxPlatformHttpRemoteAddress | io.vertx.core.net.SocketAddress | The remote address for the connection if present on the Vert.x Web RoutingContext. |
+
+Camel also populates **all** `request.parameter` and **all**
+`request.headers`. For example, given a client request with the URL
+`http://myserver/myserver?orderid=123`, the exchange will contain a
+header named `orderid` with value `123`.
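As a rough sketch of that mapping (illustrative only; the real work is done by Vert.x and the platform-http component, and this version skips URL decoding), turning a query string into header entries looks like:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryToHeaders {
    // Illustrative only: parse a query string such as "orderid=123" into
    // name/value pairs, the way they end up as Camel message headers.
    static Map<String, String> toHeaders(String query) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            headers.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return headers;
    }

    public static void main(String[] args) {
        // a request to http://myserver/myserver?orderid=123 yields a header
        // named "orderid" with value "123"
        System.out.println(toHeaders("orderid=123")); // {orderid=123}
    }
}
```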
+
+# VertxPlatformHttpServer
+
+In addition to the implementation of the `PlatformHttp` SPI based on
+Vert.x, this module provides a Vert.x based HTTP server compatible with
+the `VertxPlatformHttpEngine`:
+
+    final int port = AvailablePortFinder.getNextAvailable();
+    final CamelContext context = new DefaultCamelContext();
+
+    VertxPlatformHttpServerConfiguration conf = new VertxPlatformHttpServerConfiguration();
+    conf.setBindPort(port);
+
+    context.addService(new VertxPlatformHttpServer(conf));
+    context.addRoutes(new RouteBuilder() {
+        @Override
+        public void configure() throws Exception {
+            from("platform-http:/test")
+                .routeId("get")
+                .setBody().constant("Hello from Camel's PlatformHttp service");
+        }
+    });
+
+    context.start();
+
+# Implementing a reverse proxy
+
+Platform HTTP component can act as a reverse proxy. In that case, the
+`Exchange.HTTP_URI` and `Exchange.HTTP_HOST` headers are populated from
+the absolute URL received on the request line of the HTTP request.
+
+Here’s an example of an HTTP proxy that simply redirects the Exchange to
+the origin server.
+
+    from("platform-http:proxy")
+        .toD("http://"
+            + "${headers." + Exchange.HTTP_HOST + "}");
+
+# Access to Request and Response
+
+The Vert.x HTTP server has its own API abstraction for HTTP
+request/response objects, which you can access via the Camel
+`HttpMessage` as shown in the custom `Processor` below:
+
+    .process(exchange -> {
+        // grab the message as HttpMessage
+        HttpMessage message = exchange.getMessage(HttpMessage.class);
+        // use getRequest() / getResponse() to access Vert.x directly
+        // you can add custom headers
+        message.getResponse().putHeader("beer", "Heineken");
+        // also access request details and use them in the code
+        String p = message.getRequest().path();
+        message.setBody("request path: " + p);
+    });
+
+# Handling large request / response payloads
+
+When large request / response payloads are expected, there is a
+`useStreaming` option, which can be enabled to improve performance. When
+`useStreaming` is `true`, it will take advantage of [stream
+caching](#manual::stream-caching.adoc). In conjunction with enabling
+disk spooling, you can avoid having to store the entire request body
+payload in memory.
+
+    // Handle a large request body and stream it to a file
+    from("platform-http:/upload?httpMethodRestrict=POST&useStreaming=true")
+        .log("Processing large request body...")
+        .to("file:/uploads?fileName=uploaded.txt");
+
+# Setting up HTTP authentication
+
+HTTP authentication is disabled by default. It can be enabled by
+calling `setEnabled(true)` on the `AuthenticationConfig`. By default,
+HTTP authentication takes HTTP basic credentials and compares them
+with those provided in the
+`camel-platform-http-vertx-auth.properties` file.
+
+To set up authentication, you need to create
+`AuthenticationConfigEntries`, as shown in the example below. This
+example uses Vert.x
+[BasicAuthHandler](https://vertx.io/docs/apidocs/io/vertx/ext/web/handler/BasicAuthHandler.html)
+and
+[PropertyFileAuthentication](https://vertx.io/docs/vertx-auth-properties/java/)
+to configure basic HTTP authentication with user info stored in the
+`myPropFile.properties` file. Note that in Vert.x the order of adding
+`AuthenticationHandler`s matters, so `AuthenticationConfigEntries` with
+a more specific URL path are applied first.
+
+ final int port = AvailablePortFinder.getNextAvailable();
+ final CamelContext context = new DefaultCamelContext();
+
+ VertxPlatformHttpServerConfiguration conf = new VertxPlatformHttpServerConfiguration();
+ conf.setBindPort(port);
+
+ //creating custom auth settings
+ AuthenticationConfigEntry customEntry = new AuthenticationConfigEntry();
+ AuthenticationProviderFactory provider = vertx -> PropertyFileAuthentication.create(vertx, "myPropFile.properties");
+ AuthenticationHandlerFactory handler = BasicAuthHandler::create;
+ customEntry.setPath("/path/that/will/be/protected");
+ customEntry.setAuthenticationProviderFactory(provider);
+ customEntry.setAuthenticationHandlerFactory(handler);
+
+ AuthenticationConfig authenticationConfig = new AuthenticationConfig(List.of(customEntry));
+ authenticationConfig.setEnabled(true);
+
+ conf.setAuthenticationConfig(authenticationConfig);
+
+ context.addService(new VertxPlatformHttpServer(conf));
+ context.addRoutes(new RouteBuilder() {
+ @Override
+ public void configure() throws Exception {
+ from("platform-http:/test")
+ .routeId("get")
+ .setBody().constant("Hello from Camel's PlatformHttp service");
+ }
+ });
+
+ context.start();
diff --git a/camel-platform-http.md b/camel-platform-http.md
index eaf8e083d2cd21a8a67ae544ab8d71e7bc4726bc..f3d31ff81d9af9dfd837e77fe9a14c4f67bd9e32 100644
--- a/camel-platform-http.md
+++ b/camel-platform-http.md
@@ -18,7 +18,9 @@ for this component:
-# Platform HTTP Provider
+# Usage
+
+## Platform HTTP Provider
To use Platform HTTP, a provider (engine) is required to be available on
the classpath. The purpose is to have drivers for different runtimes
@@ -42,7 +44,7 @@ Spring Boot
-# Implementing a reverse proxy
+## Implementing a reverse proxy
Platform HTTP component can act as a reverse proxy. In that case, some
headers are populated from the absolute URL received on the request line
@@ -79,6 +81,7 @@ in `camel-platform-http-vertx` component.
|matchOnUriPrefix|Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found.|false|boolean|
|muteException|If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace.|true|boolean|
|produces|The content type this endpoint produces, such as application/xml or application/json.||string|
+|returnHttpRequestHeaders|Whether to include HTTP request headers (Accept, User-Agent, etc.) into HTTP response produced by this endpoint.|false|boolean|
|useCookieHandler|Whether to enable the Cookie Handler that allows Cookie addition, expiry, and retrieval (currently only supported by camel-platform-http-vertx)|false|boolean|
|useStreaming|Whether to use streaming for large requests and responses (currently only supported by camel-platform-http-vertx)|false|boolean|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
diff --git a/camel-plc4x.md b/camel-plc4x.md
index e1c21c5d72a6a350dc3397fc2925b7bef7154e93..77d0f26f90add3f2e17217404756faadf9313e48 100644
--- a/camel-plc4x.md
+++ b/camel-plc4x.md
@@ -8,31 +8,6 @@ The Camel Component for PLC4X allows you to create routes using the
PLC4X API to read from a Programmable Logic Controllers (PLC) device or
write to it.
-It supports various protocols by adding the driver dependencies:
-
-- Allen Bradley ETH
-
-- Automation Device Specification (ADS)
-
-- CANopen
-
-- EtherNet/IP
-
-- Firmata
-
-- KNXnet/IP
-
-- Modbus (TCP/UDP/Serial)
-
-- Open Platform Communications Unified Architecture (OPC UA)
-
-- Step7 (S7)
-
-The list of supported protocols is growing in
-[PLC4X](https://plc4x.apache.org). There are good chance that they will
-work out of the box just by adding the driver dependency. You can check
-[here](https://plc4x.apache.org/users/protocols/index.html).
-
# URI Format
plc4x://driver[?options]
@@ -58,7 +33,35 @@ Maven users will need to add the following dependency to their
where `${camel-version}` must be replaced by the actual version of
Camel.
-# Consumer
+# Usage
+
+The Camel PLC4X component supports various protocols by adding the
+driver dependencies:
+
+- Allen Bradley ETH
+
+- Automation Device Specification (ADS)
+
+- CANopen
+
+- EtherNet/IP
+
+- Firmata
+
+- KNXnet/IP
+
+- Modbus (TCP/UDP/Serial)
+
+- Open Platform Communications Unified Architecture (OPC UA)
+
+- Step7 (S7)
+
+The list of supported protocols is growing in
+[PLC4X](https://plc4x.apache.org). There are good chances that they will
+work out of the box just by adding the driver dependency. You can check
+[here](https://plc4x.apache.org/users/protocols/index.html).
+
+## Consumer
The consumer supports one-time reading or Triggered Reading. To read
from the PLC, use a `Map` containing the Alias and
@@ -70,12 +73,12 @@ can repeat this for multiple tags.
The Body created by the Consumer will be a `Map`
containing the Aliases and their associated value read from the PLC.
-# Polling Consumer
+## Polling Consumer
The polling consumer supports consecutive reading. The input and output
are the same as for the regular consumer.
-# Producer
+## Producer
To write data to the PLC, we also use a `Map`. The difference with the
Producer is that the `Value` of the Map has also to be a `Map`. Also,
diff --git a/camel-point-to-point-channel.md b/camel-point-to-point-channel.md
new file mode 100644
index 0000000000000000000000000000000000000000..416621156439e6bde0eca0d74080d4204acf2281
--- /dev/null
+++ b/camel-point-to-point-channel.md
@@ -0,0 +1,62 @@
+# Point-to-point-channel.md
+
+Camel supports the [Point to Point
+Channel](http://www.enterpriseintegrationpatterns.com/PointToPointChannel.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+An application is using Messaging to make remote procedure calls (RPC)
+or transfer documents.
+
+How can the caller be sure that exactly one receiver will receive the
+document or perform the call?
+
+
+
+
+
+Send the message on a Point-to-Point Channel, which ensures that only
+one receiver will receive a particular message.
+
+The Point to Point Channel is supported in Camel by messaging-based
+[Components](#ROOT:index.adoc), such as:
+
+- [AMQP](#ROOT:amqp-component.adoc) for working with AMQP Queues
+
+- [ActiveMQ](#ROOT:jms-component.adoc), or
+ [JMS](#ROOT:jms-component.adoc) for working with JMS Queues
+
+- [SEDA](#ROOT:seda-component.adoc) for internal Camel SEDA
+  queue-based messaging
+
+- [Spring RabbitMQ](#ROOT:spring-rabbitmq-component.adoc) for working
+ with AMQP Queues (RabbitMQ)
+
+There are also cloud-based messaging services from cloud providers
+such as Amazon, Google, and Azure.
+
+See also the related [Publish Subscribe
+Channel](#publish-subscribe-channel.adoc) EIP.
+
+# Example
+
+The following example demonstrates point-to-point messaging using the
+[JMS](#ROOT:jms-component.adoc) component:
+
+Java
+
+    from("direct:start")
+        .to("jms:queue:foo");
+
+    from("jms:queue:foo")
+        .to("bean:foo");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <to uri="jms:queue:foo"/>
+    </route>
+
+    <route>
+        <from uri="jms:queue:foo"/>
+        <to uri="bean:foo"/>
+    </route>
+
diff --git a/camel-poll-eip.md b/camel-poll-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5737f9efb5d91684c32ef32e45060357312c424
--- /dev/null
+++ b/camel-poll-eip.md
@@ -0,0 +1,118 @@
+# Poll-eip.md
+
+Camel supports the [Content
+Enricher](http://www.enterpriseintegrationpatterns.com/DataEnricher.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+
+
+
+
+In Camel the Content Enricher can be done in several ways:
+
+- Using [Enrich](#enrich-eip.adoc) EIP, [Poll
+ Enrich](#pollEnrich-eip.adoc), or [Poll](#poll-eip.adoc) EIP
+
+- Using a [Message Translator](#message-translator.adoc)
+
+- Using a [Processor](#manual::processor.adoc) with the enrichment
+ programmed in Java
+
+- Using a [Bean](#bean-eip.adoc) EIP with the enrichment programmed in
+ Java
+
+The Poll EIP is a simplified [Poll Enrich](#pollEnrich-eip.adoc) that:
+
+- only supports static endpoints
+
+- has no custom aggregation or other advanced features
+
+- uses a 20-second timeout by default
+
+# Options
+
+# Exchange properties
+
+# Polling a message using Poll EIP
+
+`poll` uses a [Polling Consumer](#polling-consumer.adoc) to obtain the
+data. It is usually used for [Event Message](#event-message.adoc)
+messaging, for instance, to read a file or download a file using FTP.
+
+We have three methods when polling:
+
+- `receive`: Waits until a message is available and then returns it.
+  **Warning**: this method can block indefinitely if no messages are
+  available.
+
+- `receiveNoWait`: Attempts to receive a message exchange immediately
+  without waiting, returning `null` if a message exchange is not
+  available yet.
+
+- `receive(timeout)`: Attempts to receive a message exchange, waiting
+  up to the given timeout. Returns the message, or `null` if the
+  timeout expired.
+
+## Timeout
+
+By default, Camel will use `receive(timeout)` with a 20-second
+timeout.
+
+You can pass in a timeout value that determines which method to use:
+
+- if timeout is `-1` or other negative number then `receive` is
+ selected (**Important:** the `receive` method may block if there is
+ no message)
+
+- if timeout is `0` then `receiveNoWait` is selected
+
+- otherwise, `receive(timeout)` is selected
+
+The timeout values are in milliseconds.
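The selection rules above can be sketched as plain Java (an illustrative sketch, not the actual Camel implementation):

```java
public class PollMethodSelector {
    // Map a timeout value (milliseconds) to the polling method Camel selects,
    // following the rules above. Illustrative sketch only.
    static String select(long timeoutMillis) {
        if (timeoutMillis < 0) {
            return "receive";           // may block until a message arrives
        } else if (timeoutMillis == 0) {
            return "receiveNoWait";     // returns null immediately if nothing is available
        } else {
            return "receive(timeout)";  // waits up to timeoutMillis
        }
    }

    public static void main(String[] args) {
        System.out.println(select(-1));    // receive
        System.out.println(select(0));     // receiveNoWait
        System.out.println(select(20000)); // receive(timeout) - the 20s default
    }
}
```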
+
+## Using Poll
+
+For example to download an FTP file:
+
+
+ Report REST API
+
+
+
+
+
+
+
+You can use dynamic values with the simple language in the URI, as
+shown below:
+
+
+ Report REST API
+
+
+
+
+
+
+
+## Using Poll with Rest DSL
+
+You can also use `poll` with [Rest DSL](#manual::rest-dsl.adoc) to, for
+example, download a file from [AWS S3](#ROOT:aws2-s3-component.adoc) as
+the response of an API call.
+
+
+ Report REST API
+
+
+
+
+
+
+
+# See More
+
+- [Poll EIP](#poll-eip.adoc)
+
+- [Enrich EIP](#enrich-eip.adoc)
diff --git a/camel-pollEnrich-eip.md b/camel-pollEnrich-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..9cfa1d3b24b315a454c44601b38672fd53398cb7
--- /dev/null
+++ b/camel-pollEnrich-eip.md
@@ -0,0 +1,195 @@
+# PollEnrich-eip.md
+
+Camel supports the [Content
+Enricher](http://www.enterpriseintegrationpatterns.com/DataEnricher.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+
+
+
+
+In Camel the Content Enricher can be done in several ways:
+
+- Using [Enrich](#enrich-eip.adoc) EIP, [Poll
+ Enrich](#pollEnrich-eip.adoc), or [Poll](#poll-eip.adoc) EIP
+
+- Using a [Message Translator](#message-translator.adoc)
+
+- Using a [Processor](#manual::processor.adoc) with the enrichment
+ programmed in Java
+
+- Using a [Bean](#bean-eip.adoc) EIP with the enrichment programmed in
+ Java
+
+The most natural Camel approach is using [Enrich](#enrich-eip.adoc) EIP,
+which comes as two kinds:
+
+- [Enrich](#enrich-eip.adoc) EIP: This is the most common content
+ enricher that uses a `Producer` to obtain the data. It is usually
+ used for [Request Reply](#requestReply-eip.adoc) messaging, for
+ instance, to invoke an external web service.
+
+- [Poll Enrich](#pollEnrich-eip.adoc) EIP: Uses a [Polling
+ Consumer](#polling-consumer.adoc) to obtain the additional data. It
+ is usually used for [Event Message](#event-message.adoc) messaging,
+ for instance, to read a file or download a file using
+ [FTP](#ROOT:ftp-component.adoc).
+
+This page documents the Poll Enrich EIP.
+
+# Options
+
+# Exchange properties
+
+# Content enrichment using Poll Enrich EIP
+
+`pollEnrich` uses a [Polling Consumer](#polling-consumer.adoc) to obtain
+the additional data. It is usually used for [Event
+Message](#event-message.adoc) messaging, for instance, to read a file or
+download a file using FTP.
+
+`pollEnrich` works just the same as `enrich`; however, as it uses a
+[Polling Consumer](#polling-consumer.adoc), we have three methods when
+polling:
+
+- `receive`: Waits until a message is available and then returns it.
+  **Warning**: this method can block indefinitely if no messages are
+  available.
+
+- `receiveNoWait`: Attempts to receive a message exchange immediately
+  without waiting, returning `null` if a message exchange is not
+  available yet.
+
+- `receive(timeout)`: Attempts to receive a message exchange, waiting
+  up to the given timeout. Returns the message, or `null` if the
+  timeout expired.
+
+## Poll Enrich with timeout
+
+It is good practice to use a timeout value.
+
+By default, Camel will use `receive`, which may block until a message
+is available. It is therefore recommended to always provide a timeout
+value, to make it clear that we may wait for a message until the
+timeout is hit.
+
+You can pass in a timeout value that determines which method to use:
+
+- if timeout is `-1` or other negative number then `receive` is
+ selected (**Important:** the `receive` method may block if there is
+ no message)
+
+- if timeout is `0` then `receiveNoWait` is selected
+
+- otherwise, `receive(timeout)` is selected
+
+The timeout values are in milliseconds.
+
+## Using Poll Enrich
+
+The content enricher (`pollEnrich`) retrieves additional data from a
+*resource endpoint* in order to enrich an incoming message (contained in
+the *original exchange*).
+
+An `AggregationStrategy` is used to combine the original exchange and
+the *resource exchange*. The first parameter of the
+`AggregationStrategy.aggregate(Exchange, Exchange)` method corresponds
+to the original exchange, the second parameter the resource exchange.
+
+Here’s an example of implementing an `AggregationStrategy`, which
+merges the two bodies into a `String` with a colon separator:
+
+    public class ExampleAggregationStrategy implements AggregationStrategy {
+
+        public Exchange aggregate(Exchange original, Exchange resource) {
+            // this is just an example; for real-world use-cases the
+            // aggregation strategy would be specific to the use-case
+
+            if (resource == null) {
+                return original;
+            }
+            Object oldBody = original.getIn().getBody();
+            Object newBody = resource.getIn().getBody();
+            original.getIn().setBody(oldBody + ":" + newBody);
+            return original;
+        }
+    }
+
+You then use the `AggregationStrategy` with the `pollEnrich` in the Java
+DSL as shown:
+
+    AggregationStrategy aggregationStrategy = ...
+
+    from("direct:start")
+        .pollEnrich("file:inbox?fileName=data.txt", 10000, aggregationStrategy)
+        .to("mock:result");
+
+In the example, Camel will poll a file (timeout 10 seconds). The
+`AggregationStrategy` is then used to merge the file with the existing
+`Exchange`.
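Stripped of the Camel types, the merge performed by the example strategy above boils down to the following (illustrative plain Java):

```java
public class MergeDemo {
    // Same colon-join semantics as the ExampleAggregationStrategy above:
    // if the resource body is missing, keep the original body unchanged.
    static String aggregate(String originalBody, String resourceBody) {
        if (resourceBody == null) {
            return originalBody;
        }
        return originalBody + ":" + resourceBody;
    }

    public static void main(String[] args) {
        System.out.println(aggregate("order", "invoice")); // order:invoice
        System.out.println(aggregate("order", null));      // order
    }
}
```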
+
+In XML DSL you use `pollEnrich` as follows:
+
+
+    <route>
+        <from uri="direct:start"/>
+        <pollEnrich timeout="10000" aggregationStrategy="aggregationStrategy">
+            <constant>file:inbox?fileName=data.txt</constant>
+        </pollEnrich>
+        <to uri="mock:result"/>
+    </route>
+
+## Using Poll Enrich with Rest DSL
+
+You can also use `pollEnrich` with [Rest DSL](#manual::rest-dsl.adoc)
+to, for example, download a file from [AWS
+S3](#ROOT:aws2-s3-component.adoc) as the response of an API call.
+
+
+    <rest path="/report">
+        <description>Report REST API</description>
+        <get path="/{id}/payload">
+            <route>
+                <pollEnrich>
+                    <constant>aws-s3:xavier-dev?amazonS3Client=#s3client&amp;deleteAfterRead=false&amp;fileName=report-file.pdf</constant>
+                </pollEnrich>
+            </route>
+        </get>
+    </rest>
+
+Notice that the enriched endpoint is a constant. However, Camel also
+supports dynamic endpoints, which is covered next.
+
+## Poll Enrich with Dynamic Endpoints
+
+Both `enrich` and `pollEnrich` support using dynamic URIs computed
+based on information from the current `Exchange`.
+
+For example to `pollEnrich` from an endpoint that uses a header to
+indicate a SEDA queue name:
+
+Java
+
+    from("direct:start")
+        .pollEnrich().simple("seda:${header.queueName}")
+        .to("direct:result");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <pollEnrich>
+            <simple>seda:${header.queueName}</simple>
+        </pollEnrich>
+        <to uri="direct:result"/>
+    </route>
+
+See the `cacheSize` option for more details on *how much cache* to use
+depending on how many or few unique endpoints are used.
+
+# See More
+
+- [Poll EIP](#poll-eip.adoc)
+
+- [Enrich EIP](#enrich-eip.adoc)
diff --git a/camel-polling-consumer.md b/camel-polling-consumer.md
new file mode 100644
index 0000000000000000000000000000000000000000..33b4ac21914ecc517d4a305adc0b4fe69580eb2e
--- /dev/null
+++ b/camel-polling-consumer.md
@@ -0,0 +1,155 @@
+# Polling-consumer.md
+
+Camel supports implementing the [Polling
+Consumer](http://www.enterpriseintegrationpatterns.com/PollingConsumer.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+An application needs to consume Messages, but it wants to control when
+it consumes each message.
+
+How can an application consume a message when the application is ready?
+
+
+
+
+
+The application should use a Polling Consumer, one that explicitly makes
+a call when it wants to receive a message.
+
+In Camel the `PollingConsumer` is represented by the
+[PollingConsumer](https://github.com/apache/camel/blob/main/core/camel-api/src/main/java/org/apache/camel/PollingConsumer.java)
+interface.
+
+You can get hold of a `PollingConsumer` in several ways in Camel:
+
+- Use [Poll Enrich](#pollEnrich-eip.adoc) EIP
+
+- Create a `PollingConsumer` instance via the
+ [Endpoint.createPollingConsumer()](https://github.com/apache/camel/blob/main/core/camel-api/src/main/java/org/apache/camel/Endpoint.java)
+ method.
+
+- Use the [ConsumerTemplate](#manual::consumertemplate.adoc) to poll
+ on demand.
+
+# Using Polling Consumer
+
+If you need to use Polling Consumer from within a route, then the [Poll
+Enrich](#pollEnrich-eip.adoc) EIP can be used.
+
+On the other hand, if you need to use Polling Consumer programmatically,
+then using [ConsumerTemplate](#manual::consumertemplate.adoc) is a good
+choice.
+
+And if you want to use the lower-level Camel APIs, you can create the
+`PollingConsumer` instance to be used directly.
+
+## Using Polling Consumer from Java
+
+You can programmatically create an instance of `PollingConsumer` from
+any endpoint as shown below:
+
+ Endpoint endpoint = context.getEndpoint("activemq:my.queue");
+ PollingConsumer consumer = endpoint.createPollingConsumer();
+ Exchange exchange = consumer.receive();
+
+## PollingConsumer API
+
+There are three main polling methods on
+[PollingConsumer](https://github.com/apache/camel/blob/main/core/camel-api/src/main/java/org/apache/camel/PollingConsumer.java):
+
+| Method | Description |
+|--------|-------------|
+| `PollingConsumer.receive()` | Waits until a message is available and then returns it; potentially blocking forever. |
+| `PollingConsumer.receive(long)` | Attempts to receive a message exchange, waiting up to the given timeout and returning `null` if no message exchange could be received within the time available. |
+| `PollingConsumer.receiveNoWait()` | Attempts to receive a message exchange immediately without waiting, returning `null` if a message exchange is not available yet. |
+
+
+
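As an illustrative sketch (not Camel's implementation), the semantics of the three variants can be modeled with a standard `BlockingQueue`; the `ReceiveSemantics` class and its `send` method below are hypothetical stand-ins:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical model of the three PollingConsumer receive semantics.
public class ReceiveSemantics {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void send(String message) {
        queue.add(message);
    }

    // receive(): waits until a message is available, potentially forever
    public String receive() throws InterruptedException {
        return queue.take();
    }

    // receive(long): waits up to the timeout, returns null on timeout
    public String receive(long timeoutMillis) throws InterruptedException {
        return queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // receiveNoWait(): returns immediately, null if nothing is available
    public String receiveNoWait() {
        return queue.poll();
    }
}
```

The real `PollingConsumer` returns an `Exchange` rather than a plain payload, but the blocking behavior follows the same pattern.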
+## Two kinds of Polling Consumer implementations
+
+In Camel there are two kinds of `PollingConsumer` implementations:
+
+- *Custom*: Some components have their own custom implementation of
+ `PollingConsumer` which is optimized for the given component.
+
+- *Default*: `EventDrivenPollingConsumer` is the default
+ implementation otherwise.
+
+The `EventDrivenPollingConsumer` supports the following options:
+
+| Option | Default | Description |
+|--------|---------|-------------|
+| `pollingConsumerQueueSize` | `1000` | The queue size for the internal hand-off queue between the polling consumer and producers sending data into the queue. |
+| `pollingConsumerBlockWhenFull` | `true` | Whether to block any producer if the internal queue is full. |
+| `pollingConsumerBlockTimeout` | `0` | To use a timeout (in milliseconds) when the producer is blocked if the internal queue is full. If the value is `0` or negative, then no timeout is in use. If a timeout is triggered, then an `ExchangeTimedOutException` is thrown. |
+
+
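The interplay of these options can be sketched in plain Java (an illustrative model of the hand-off behavior described above, not the `EventDrivenPollingConsumer` source): a bounded queue where the producer blocks when full, optionally only up to a timeout:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical model of the hand-off queue options described above.
public class HandOffQueue {
    private final BlockingQueue<String> queue;
    private final boolean blockWhenFull;
    private final long blockTimeoutMillis; // <= 0 means no timeout

    public HandOffQueue(int queueSize, boolean blockWhenFull, long blockTimeoutMillis) {
        this.queue = new ArrayBlockingQueue<>(queueSize);
        this.blockWhenFull = blockWhenFull;
        this.blockTimeoutMillis = blockTimeoutMillis;
    }

    public void produce(String message) throws InterruptedException, TimeoutException {
        if (!blockWhenFull) {
            queue.add(message); // fails fast if the queue is full
        } else if (blockTimeoutMillis <= 0) {
            queue.put(message); // block until space is available
        } else if (!queue.offer(message, blockTimeoutMillis, TimeUnit.MILLISECONDS)) {
            // corresponds to Camel throwing ExchangeTimedOutException
            throw new TimeoutException("Producer timed out waiting for queue space");
        }
    }

    public String poll() {
        return queue.poll();
    }
}
```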
+
+You can configure these options in endpoints [URIs](#manual::uris.adoc),
+such as shown below:
+
+    Endpoint endpoint = context.getEndpoint("file:inbox?pollingConsumerQueueSize=50");
+    PollingConsumer consumer = endpoint.createPollingConsumer();
+    Exchange exchange = consumer.receive(5000);
diff --git a/camel-process-eip.md b/camel-process-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..c7fa98d598245e4e78c0c84ab0a087e6eb7927d9
--- /dev/null
+++ b/camel-process-eip.md
@@ -0,0 +1,117 @@
+# Process-eip.md
+
+The
+[Processor](http://javadoc.io/doc/org.apache.camel/camel-api/latest/org/apache/camel/Processor.html)
+is used for processing message [Exchanges](#manual::exchange.adoc).
+
+The processor is a core Camel concept that represents a node capable of
+using, creating, or modifying an incoming exchange. During routing,
+exchanges flow from one processor to another; as such, you can think of
+a route as a graph having specialized processors as the nodes, and lines
+that connect the output of one processor to the input of another.
+Processors could be implementations of EIPs, producers for specific
+components, or your own custom creation. The figure below shows the flow
+between processors.
+
+
+
+
+
+A route first starts with a consumer (think `from` in the DSL) that
+populates the initial exchange. At each processor step, the out message
+from the previous step is the in message of the next. In many cases,
+processors don’t set an out message, so in this case the in message is
+reused. At the end of a route, the [Message Exchange
+Pattern](#manual::exchange-pattern.adoc) (MEP) of the exchange
+determines whether a reply needs to be sent back to the caller of the
+route. If the MEP is `InOnly`, no reply will be sent back. If it’s
+`InOut`, Camel will take the out message from the last step and return
+it.
+
+# Processor API
+
+The `Processor` interface is a central API in Camel. Its API is
+purposely designed to be both straightforward and flexible in the form
+of a single functional method:
+
+ @FunctionalInterface
+ public interface Processor {
+
+ /**
+ * Processes the message exchange
+ *
+ * @param exchange the message exchange
+ * @throws Exception if an internal processing error has occurred.
+ */
+ void process(Exchange exchange) throws Exception;
+ }
+
+The `Processor` is used heavily internally in Camel, such as the base
+for all implementations of the [EIP
+patterns](#enterprise-integration-patterns.adoc).
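Because `Processor` is a functional interface, it can be implemented with a lambda. The following self-contained sketch mimics the shape of the API with simplified stand-in types (the real Camel `Exchange` and `Processor` are much richer; everything here is illustrative):

```java
// Simplified stand-ins for the Camel API, for illustration only.
@FunctionalInterface
interface Processor {
    void process(Exchange exchange) throws Exception;
}

class Exchange {
    private Object body;
    public Object getBody() { return body; }
    public void setBody(Object body) { this.body = body; }
}

public class ProcessorDemo {
    // Apply a processor to a fresh exchange, as a route step would.
    public static Exchange run(Processor processor, Object initialBody) {
        Exchange exchange = new Exchange();
        exchange.setBody(initialBody);
        try {
            processor.process(exchange); // the single functional method
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return exchange;
    }
}
```

A lambda such as `e -> e.setBody("changed")` then satisfies the interface, which is why routes can pass processors inline.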
+
+## Using a processor in a route
+
+Once you have written a class which implements `Processor` like this:
+
+ public class MyProcessor implements Processor {
+ public void process(Exchange exchange) throws Exception {
+ // do something...
+ }
+ }
+
+Then in Camel you can call this processor:
+
+ from("activemq:myQueue")
+ .process(new MyProcessor());
+
+You can also call a processor by its bean id, if the processor has been
+enlisted in the [Registry](#manual::registry.adoc), such as with the id
+`myProcessor`:
+
+Java
+
+    from("activemq:myQueue")
+        .process("myProcessor");
+
+XML
+
+    <route>
+      <from uri="activemq:myQueue"/>
+      <process ref="myProcessor"/>
+    </route>
+
+And in XML you can refer to the fully qualified class name via the
+`#class:` syntax:
+
+    <route>
+      <from uri="activemq:myQueue"/>
+      <process ref="#class:com.mycompany.MyProcessor"/>
+    </route>
+
+Spring XML
+
+Or if you use Spring XML, you can create the processor via a `<bean>`:
+
+    <bean id="myProcessor" class="com.mycompany.MyProcessor"/>
+
+    <route>
+      <from uri="activemq:myQueue"/>
+      <process ref="myProcessor"/>
+    </route>
+
+## Why use `process` when you can use `to` instead?
+
+A processor can be used in routes as an anonymous inner class, such as:
+
+ from("activemq:myQueue").process(new Processor() {
+ public void process(Exchange exchange) throws Exception {
+ String payload = exchange.getMessage().getBody(String.class);
+ // do something with the payload and/or exchange here
+ exchange.getMessage().setBody("Changed body");
+ }
+ }).to("activemq:myOtherQueue");
+
+This is useful for quickly trying out some code. If the code in the
+inner class gets more complicated, it is advised to refactor it into a
+separate class.
diff --git a/camel-process-manager.md b/camel-process-manager.md
new file mode 100644
index 0000000000000000000000000000000000000000..a53954f9881b44915c0a10d7ef97ed4b232b1355
--- /dev/null
+++ b/camel-process-manager.md
@@ -0,0 +1,38 @@
+# Process-manager.md
+
+Camel supports the [Process
+Manager](https://www.enterpriseintegrationpatterns.com/patterns/messaging/ProcessManager.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+How do we route a message through multiple processing steps when the
+required steps may not be known at design-time and may not be
+sequential?
+
+
+
+
+
+Use a central processing unit, a Process Manager, to maintain the state
+of the sequence and determine the next processing step based on
+intermediate results.
+
+With Camel, this pattern is implemented by using the [Dynamic
+Router](#dynamicRouter-eip.adoc) EIP. Camel’s implementation of the
+dynamic router maintains the state of the sequence and allows the next
+processing step to be determined dynamically.
+
+# Routing Slip vs. Dynamic Router
+
+On the other hand, the [Routing Slip](#routingSlip-eip.adoc) EIP
+demonstrates how a message can be routed through a dynamic series of
+processing steps. The solution of the Routing Slip is based on two key
+assumptions:
+
+- the sequence of processing steps has to be determined up-front
+
+- and the sequence is linear.
+
+In many cases, these assumptions may not be fulfilled. For example,
+routing decisions might have to be made based on intermediate results.
+Or, the processing steps may not be sequential, but multiple steps might
+be executed in parallel.
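The core idea, state kept centrally while each next step is chosen from intermediate results, can be sketched in plain Java (a toy model of the pattern, not Camel's Dynamic Router implementation; all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy process manager: a routing function inspects the running state
// and returns the next step, or null when processing is complete.
public class ProcessManagerSketch {
    public static List<String> run(String message,
            Function<List<String>, String> nextStep) {
        List<String> state = new ArrayList<>(); // central state of the sequence
        String step;
        while ((step = nextStep.apply(state)) != null) {
            state.add(step + "(" + message + ")"); // record the intermediate result
        }
        return state;
    }
}
```

Unlike a routing slip, the sequence is not fixed up-front: each call to the routing function sees everything produced so far.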
diff --git a/camel-properties.md b/camel-properties.md
index 6ae38d42d0ba5d4b132bb1474a5dfcb680f181a3..3454521332385e5367f3018079129a70e3aa08f0 100644
--- a/camel-properties.md
+++ b/camel-properties.md
@@ -2,7 +2,7 @@
**Since Camel 2.3**
-The properties component is used for property placeholders in your Camel
+The Properties component is used for property placeholders in your Camel
application, such as endpoint URIs. It is **not** a regular Camel
component with producer and consumer for routing messages. However, for
historical reasons it was named `PropertiesComponent` and this name is
@@ -12,16 +12,16 @@ See the [Property
Placeholder](#manual:ROOT:using-propertyplaceholder.adoc) documentation
for general information on using property placeholders in Camel.
-The properties component requires to load the properties (key=value
+The Properties component requires to load the properties (key=value
pairs) from an external source such as `.properties` files. The
component is pluggable, and you can configure to use other sources or
write a custom implementation (for example to load from a database).
# Defining location of properties files
-The properties component needs to know a location(s) where to resolve
-the properties. You can define one to many locations. Multiple locations
-can be separated by comma such as:
+The properties component needs to know the location(s) where to resolve
+the properties. You can define one or more locations, separated
+by comma, such as:
pc.setLocation("com/mycompany/myprop.properties,com/mycompany/other.properties");
@@ -39,13 +39,13 @@ and OS environments variables.
For example:
- location=file:{{sys:karaf.home}}/etc/foo.properties
+ location=file:{{sys:app.home}}/etc/foo.properties
In the location above we defined a location using the file scheme using
-the JVM system property with key `karaf.home`.
+the JVM system property with key `app.home`.
-To use an OS environment variable instead you would have to prefix with
-`env:`. You can also prefix with `env.`, however this style is not
+To use an OS environment variable instead, you would have to prefix with
+`env:`. You can also prefix with `env.`; however, this style is not
+recommended because all the other functions use a colon.
location=file:{{env:APP_HOME}}/etc/foo.properties
@@ -78,7 +78,7 @@ Using the `` allows to configure this within the
-For fine grained configuration of the location, then this can be done as
+For fine-grained configuration of the location, then this can be done as
follows:
@@ -91,11 +91,6 @@ follows:
resolver = "classpath"
path = "com/my/company/something/my-properties-2.properties"
optional = "false"/>
-
-
# Options
@@ -110,7 +105,7 @@ The component supports the following options, which are listed below.
-
+
-
+
camel.component.properties.auto-discover-properties-sources
Whether to automatically discovery
@@ -127,7 +122,7 @@ factory.
true
Boolean
-
+
camel.component.properties.default-fallback-enabled
If false, the component does not
@@ -136,7 +131,7 @@ separator.
true
Boolean
-
+
camel.component.properties.encoding
Encoding to use when loading properties
@@ -146,7 +141,7 @@ as documented by java.util.Properties#load(java.io.InputStream)
String
-
+
camel.component.properties.environment-variable-mode
Sets the OS environment variables mode
@@ -157,7 +152,7 @@ property mode
2
Integer
-
+
camel.component.properties.ignore-missing-location
Whether to silently ignore if a
@@ -166,7 +161,7 @@ found.
false
Boolean
-
+
camel.component.properties.initial-properties
Sets initial properties which will be
@@ -175,7 +170,7 @@ java.util.Properties type.
String
-
+
camel.component.properties.location
A list of locations to load properties.
@@ -185,7 +180,7 @@ option.
String
-
+
camel.component.properties.nested-placeholder
Whether to support nested property
@@ -194,7 +189,7 @@ placeholder, that should be resolved (recursively).
true
Boolean
-
+
camel.component.properties.override-properties
Sets a special list of override
@@ -203,7 +198,7 @@ The option is a java.util.Properties type.
String
-
+
camel.component.properties.properties-parser
To use a custom PropertiesParser. The
@@ -212,7 +207,7 @@ type.
String
-
+
camel.component.properties.system-properties-mode
Sets the JVM system property mode (0 =
diff --git a/camel-protobuf-dataformat.md b/camel-protobuf-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..d7a31ede183e0d8d818d039b85a28ee26881ef67
--- /dev/null
+++ b/camel-protobuf-dataformat.md
@@ -0,0 +1,207 @@
+# Protobuf-dataformat.md
+
+**Since Camel 2.2**
+
+# Protobuf - Protocol Buffers
+
+"Protocol Buffers - Google’s data interchange format"
+
+Camel provides a Data Format to serialize between Java and the Protocol
+Buffer protocol. The project’s site details why you may wish to [choose
+this format over
+xml](https://developers.google.com/protocol-buffers/docs/overview).
+Protocol Buffer is language-neutral and platform-neutral, so messages
+produced by your Camel routes may be consumed by other language
+implementations.
+
+[API
+Site](https://developers.google.com/protocol-buffers/docs/reference/java/)
+[Protobuf Implementation](https://github.com/google/protobuf)
+
+[Protobuf Java
+Tutorial](https://developers.google.com/protocol-buffers/docs/javatutorial)
+
+# Protobuf Options
+
+# Content type format
+
+It’s possible to parse a JSON message into the protobuf format and back
+again using the native utility converter. To use this option, set the
+`contentTypeFormat` value to `json` or call `protobuf` with a second
+parameter. If the default instance is not specified, the native protobuf
+format is always used. The sample code is shown below:
+
+ from("direct:marshal")
+ .unmarshal()
+ .protobuf("org.apache.camel.dataformat.protobuf.generated.AddressBookProtos$Person", "json")
+ .to("mock:reverse");
+
+# Input data type
+
+This data format supports marshaling input data either as a protobuf
+`Message` type or a `Map` data type. If the input data is a `Map`,
+Camel first tries to retrieve the data as a `Map` using the built-in
+type converters; if that fails, it falls back to retrieving it as a
+proto `Message`.
+
+# Output data type
+
+As mentioned above, you can define the content type format to choose
+between JSON and native serialization. In addition, you can obtain the
+data as a `Map` and let this component do the heavy lifting of parsing
+the data from a proto `Message` to a `Map`. To do so, set the
+`contentTypeFormat` to `native` and explicitly request the `Map` data
+type when you get the body of the exchange. For instance:
+`exchange.getMessage().getBody(Map.class)`.
+
+# Protobuf overview
+
+This is a quick overview of how to use Protobuf. For more detail, see
+the [complete
+tutorial](https://developers.google.com/protocol-buffers/docs/javatutorial).
+
+# Defining the proto format
+
+The first step is to define the format for the body of your exchange.
+This is defined in a .proto file as so:
+
+**addressbook.proto**
+
+ syntax = "proto2";
+
+ package org.apache.camel.component.protobuf;
+
+ option java_package = "org.apache.camel.component.protobuf";
+ option java_outer_classname = "AddressBookProtos";
+
+ message Person {
+ required string name = 1;
+ required int32 id = 2;
+ optional string email = 3;
+
+ enum PhoneType {
+ MOBILE = 0;
+ HOME = 1;
+ WORK = 2;
+ }
+
+ message PhoneNumber {
+ required string number = 1;
+ optional PhoneType type = 2 [default = HOME];
+ }
+
+ repeated PhoneNumber phone = 4;
+ }
+
+ message AddressBook {
+ repeated Person person = 1;
+ }
+
+# Generating Java classes
+
+The Protobuf SDK provides a compiler which will generate the Java
+classes for the format we defined in our `.proto` file. If your
+operating system is supported by [Protobuf Java code generator maven
+plugin](https://www.xolstice.org/protobuf-maven-plugin), you can
+automate protobuf Java code generating by adding the following
+configurations to your `pom.xml`:
+
+Insert the operating system and CPU architecture detection extension
+inside the `<build>` tag of the project `pom.xml`, or set the
+`${os.detected.classifier}` parameter manually:
+
+    <extensions>
+      <extension>
+        <groupId>kr.motd.maven</groupId>
+        <artifactId>os-maven-plugin</artifactId>
+        <version>1.4.1.Final</version>
+      </extension>
+    </extensions>
+Insert the gRPC and protobuf Java code generator plugin inside the
+`<plugins>` tag of the project `pom.xml`:
+
+    <plugin>
+      <groupId>org.xolstice.maven.plugins</groupId>
+      <artifactId>protobuf-maven-plugin</artifactId>
+      <version>0.5.0</version>
+      <extensions>true</extensions>
+      <executions>
+        <execution>
+          <goals>
+            <goal>test-compile</goal>
+            <goal>compile</goal>
+          </goals>
+        </execution>
+      </executions>
+      <configuration>
+        <protocArtifact>com.google.protobuf:protoc:${protobuf-version}:exe:${os.detected.classifier}</protocArtifact>
+      </configuration>
+    </plugin>
+
+You can also run the compiler for any additional supported languages you
+require manually.
+
+`protoc --java_out=. ./proto/addressbook.proto`
+
+This will generate a single Java class named `AddressBookProtos`, which
+contains inner classes for `Person` and `AddressBook`. Builders are also
+implemented for you. The generated classes implement
+`com.google.protobuf.Message`, which is required by the serialization
+mechanism. For this reason, it is important that only these classes are
+used in the body of your exchanges. Camel will throw an exception on
+route creation if you attempt to tell the Data Format to use a class
+that does not implement `com.google.protobuf.Message`. Use the generated
+builders to translate the data from any of your existing domain classes.
+
+# Java DSL
+
+You can create a `ProtobufDataFormat` instance and pass it to the Camel
+DataFormat `marshal` and `unmarshal` API like this:
+
+ ProtobufDataFormat format = new ProtobufDataFormat(Person.getDefaultInstance());
+
+ from("direct:in").marshal(format);
+ from("direct:back").unmarshal(format).to("mock:reverse");
+
+Or use the DSL `protobuf()` passing the unmarshal default instance or
+default instance class name, like this. However, if your input data is
+of `Map` type, you will need to **specify** the `ProtobufDataFormat`,
+otherwise it will throw an error.
+
+ // You don't need to specify the default instance for protobuf marshaling, but you will need in case your input data is a Map type
+ from("direct:marshal").marshal().protobuf();
+ from("direct:unmarshalA").unmarshal()
+ .protobuf("org.apache.camel.dataformat.protobuf.generated.AddressBookProtos$Person")
+ .to("mock:reverse");
+
+ from("direct:unmarshalB").unmarshal().protobuf(Person.getDefaultInstance()).to("mock:reverse");
+
+# Spring DSL
+
+The following example shows how to use Protobuf to unmarshal using
+Spring, configuring the protobuf data type:
+
+    <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
+      <route>
+        <from uri="direct:start"/>
+        <unmarshal>
+          <protobuf instanceClass="org.apache.camel.dataformat.protobuf.generated.AddressBookProtos$Person"/>
+        </unmarshal>
+        <to uri="mock:result"/>
+      </route>
+    </camelContext>
+
+# Dependencies
+
+To use Protobuf in your Camel routes, you need to add the dependency on
+**camel-protobuf**, which implements this data format.
+
+**Example pom.xml for `camel-protobuf`**
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-protobuf</artifactId>
+      <version>x.x.x</version>
+    </dependency>
diff --git a/camel-protobufJackson-dataformat.md b/camel-protobufJackson-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..36fba21b89676abb5bcdc9f00785755fd6e03475
--- /dev/null
+++ b/camel-protobufJackson-dataformat.md
@@ -0,0 +1,56 @@
+# ProtobufJackson-dataformat.md
+
+**Since Camel 3.10**
+
+Jackson Protobuf is a Data Format which uses the [Jackson
+library](https://github.com/FasterXML/jackson/) with the [Protobuf
+extension](https://github.com/FasterXML/jackson-dataformats-binary) to
+unmarshal a Protobuf payload into Java objects or to marshal Java
+objects into a Protobuf payload.
+
+If you are familiar with Jackson, this Protobuf data format behaves in
+the same way as its JSON counterpart, and thus can be used with classes
+annotated for JSON serialization/deserialization.
+
+    from("kafka:topic")
+        .unmarshal().protobuf(ProtobufLibrary.Jackson, JsonNode.class)
+        .to("log:info");
+
+# Protobuf Jackson Options
+
+# Usage
+
+## Configuring the `SchemaResolver`
+
+Since Protobuf serialization is schema-based, this data format requires
+that you provide a `SchemaResolver` object that is able to look up the
+schema for each exchange that is going to be marshalled/unmarshalled.
+
+You can add a single `SchemaResolver` to the registry, and it will be
+looked up automatically. Or you can explicitly specify the reference to
+a custom `SchemaResolver`.
+
+## Using custom ProtobufMapper
+
+You can configure `JacksonProtobufDataFormat` to use a custom
+`ProtobufMapper` in case you need more control of the mapping
+configuration.
+
+If you set up a single `ProtobufMapper` in the registry, then Camel will
+automatically look it up and use this `ProtobufMapper`.
+
+# Dependencies
+
+To use Protobuf Jackson in your Camel routes, you need to add the
+dependency on **camel-jackson-protobuf**, which implements this data
+format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest & greatest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-jackson-protobuf</artifactId>
+      <version>x.x.x</version>
+    </dependency>
diff --git a/camel-publish-subscribe-channel.md b/camel-publish-subscribe-channel.md
new file mode 100644
index 0000000000000000000000000000000000000000..c47e2acdac25dcdc4b1d7a38cc984310caa1392f
--- /dev/null
+++ b/camel-publish-subscribe-channel.md
@@ -0,0 +1,61 @@
+# Publish-subscribe-channel.md
+
+Camel supports the [Publish-Subscribe
+Channel](http://www.enterpriseintegrationpatterns.com/PublishSubscribeChannel.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How can the sender broadcast an event to all interested receivers?
+
+
+
+
+
+Send the event on a Publish-Subscribe Channel, which delivers a copy of
+a particular event to each receiver.
+
+The Publish-Subscribe Channel is supported in Camel by messaging-based
+[Components](#ROOT:index.adoc), such as:
+
+- [AMQP](#ROOT:amqp-component.adoc) for working with AMQP Queues
+
+- [ActiveMQ](#ROOT:jms-component.adoc), or
+ [JMS](#ROOT:jms-component.adoc) for working with JMS Queues
+
+- [SEDA](#ROOT:seda-component.adoc) for internal Camel seda queue
+ based messaging
+
+- [Spring RabbitMQ](#ROOT:spring-rabbitmq-component.adoc) for working
+ with AMQP Queues (RabbitMQ)
+
+There are also cloud-based messaging services from providers such as
+Amazon, Google, and Azure.
+
+See also the related [Point to Point
+Channel](#point-to-point-channel.adoc) EIP.
+
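Conceptually, a publish-subscribe channel delivers a copy of each event to every subscriber. A minimal in-memory sketch of that idea (illustrative only, not how JMS topics are implemented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy publish-subscribe channel: every subscriber receives each event.
public class PubSubChannel<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    public void publish(T event) {
        for (Consumer<T> subscriber : subscribers) {
            subscriber.accept(event); // each subscriber gets its own delivery
        }
    }
}
```

This contrasts with a point-to-point channel, where each message is consumed by exactly one receiver.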
+# Example
+
+The following example demonstrates publish-subscribe messaging using
+the [JMS](#ROOT:jms-component.adoc) component with JMS topics:
+
+Java
+
+    from("direct:start")
+        .to("jms:topic:cheese");
+
+ from("jms:topic:cheese")
+ .to("bean:foo");
+
+ from("jms:topic:cheese")
+ .to("bean:bar");
+
+XML
+
+    <routes>
+      <route>
+        <from uri="direct:start"/>
+        <to uri="jms:topic:cheese"/>
+      </route>
+      <route>
+        <from uri="jms:topic:cheese"/>
+        <to uri="bean:foo"/>
+      </route>
+      <route>
+        <from uri="jms:topic:cheese"/>
+        <to uri="bean:bar"/>
+      </route>
+    </routes>
diff --git a/camel-pubnub.md b/camel-pubnub.md
index 69503b8672f60e532ec2671639f7aaa37026cf20..8728c3c5987e61066f596e6846e1d1e61e139ca9 100644
--- a/camel-pubnub.md
+++ b/camel-pubnub.md
@@ -39,7 +39,9 @@ for this component:
Where **channel** is the PubNub channel to publish or subscribe to.
-# Message body
+# Usage
+
+## Message body
The message body can contain any JSON serializable data, including
Objects, Arrays, Integers, and Strings. Message data should not contain
@@ -127,9 +129,9 @@ asf events.
There are a couple of examples in the test directory that show some of
the PubNub features. They require a PubNub account, from where you can
-obtain a publish- and subscribe key.
+obtain a publish/subscribe key.
-The example PubNubSensorExample already contains a subscribe key
+The example PubNubSensorExample already contains a subscription key
provided by PubNub, so this is ready to run without an account. The
example illustrates the PubNub component subscribing to an infinite
stream of sensor data.
diff --git a/camel-python-language.md b/camel-python-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..0d792f2830f49e21ff0993162ff42f146e6f60bd
--- /dev/null
+++ b/camel-python-language.md
@@ -0,0 +1,87 @@
+# Python-language.md
+
+**Since Camel 3.19**
+
+Camel allows [Python](https://www.jython.org/) to be used as an
+[Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) in Camel routes.
+
+For example, you can use Python in a
+[Predicate](#manual::predicate.adoc) with the [Content-Based
+Router](#eips:choice-eip.adoc) EIP.
+
+# Python Options
+
+# Variables
+
+| Variable | Type | Description |
+|----------|------|-------------|
+| `this` | `Exchange` | the `Exchange` is the root object |
+| `context` | `CamelContext` | the `CamelContext` |
+| `exchange` | `Exchange` | the `Exchange` |
+| `exchangeId` | `String` | the exchange id |
+| `message` | `Message` | the message |
+| `body` | `Message` | the message body |
+| `headers` | `Map` | the message headers |
+| `properties` | `Map` | the exchange properties |
+
+
+
+
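As a sketch of how a predicate sees these variables, the plain-Python snippet below evaluates an expression string against a dictionary standing in for the exchange. The variable names mirror the table above, but the evaluation mechanics are purely illustrative, not how Camel binds them:

```python
# Illustrative only: evaluate a Python predicate string against
# variables mimicking those Camel exposes to the Python language.
def evaluate_predicate(expression, body, headers=None, properties=None):
    variables = {
        "body": body,
        "headers": headers or {},
        "properties": properties or {},
    }
    # Disable builtins so only the bound variables are visible.
    return bool(eval(expression, {"__builtins__": {}}, variables))
```

For example, a Content-Based Router predicate like `body == 'Hello'` would evaluate to true for a message whose body is `Hello`.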
+# Dependencies
+
+To use Python in your Camel routes, you need to add the dependency on
+**camel-python** which implements the Python language.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-python</artifactId>
+      <version>x.x.x</version>
+    </dependency>
diff --git a/camel-qdrant.md b/camel-qdrant.md
index 2958fb9ee2f40a0da227c0395e2f8cdf0c59b915..59c626620934c9e59bb8e7ea0d656eedc2d54c19 100644
--- a/camel-qdrant.md
+++ b/camel-qdrant.md
@@ -14,12 +14,14 @@ Vector Database](https://qdrant.tech).
Where **collection** represents a named set of points (vectors with a
payload) defined in your database.
-# Collection Samples
+# Examples
+
+## Collection Examples
In the route below, we use the qdrant component to create a collection
named *myCollection* with the given parameters:
-## Create Collection
+### Create Collection
Java
from("direct:in")
@@ -32,7 +34,7 @@ Collections.VectorParams.newBuilder()
.setDistance(Collections.Distance.Cosine).build())
.to("qdrant:myCollection");
-## Delete Collection
+### Delete Collection
In the route below, we use the qdrant component to delete a collection
named *myCollection*:
@@ -43,7 +45,7 @@ from("direct:in")
.constant(QdrantAction.DELETE\_COLLECTION)
.to("qdrant:myCollection");
-## Collection Info
+### Collection Info
In the route below, we use the qdrant component to get information about
the collection named `myCollection`:
@@ -61,9 +63,9 @@ an exception of type `QdrantActionException` with a cause of type
`StatusRuntimeException statusRuntimeException` and status
`Status.NOT_FOUND`.
-# Points Samples
+## Points Examples
-## Upsert
+### Upsert
In the route below we use the qdrant component to perform insert +
updates (upsert) on points in the collection named *myCollection*:
@@ -83,7 +85,7 @@ Points.PointStruct.newBuilder()
.build())
.to("qdrant:myCollection");
-## Retrieve
+### Retrieve
In the route below, we use the qdrant component to retrieve information
of a single point by id from the collection named *myCollection*:
@@ -96,7 +98,7 @@ from("direct:in")
.constant(PointIdFactory.id(8))
.to("qdrant:myCollection");
-## Delete
+### Delete
In the route below, we use the qdrant component to delete points from
the collection named `myCollection` according to a criteria:
diff --git a/camel-quartz.md b/camel-quartz.md
index 0263a990409b77f4fb2355d488d8f2ae1b651e8e..c18b5b8de9554fe990f3d60f625af8aca6ecd723 100644
--- a/camel-quartz.md
+++ b/camel-quartz.md
@@ -5,9 +5,7 @@
**Only consumer is supported**
The Quartz component provides a scheduled delivery of messages using the
-[Quartz Scheduler 2.x](http://www.quartz-scheduler.org/).
-Each endpoint represents a different timer (in Quartz terms, a Trigger
-and JobDetail).
+[Quartz Scheduler 2.x](http://www.quartz-scheduler.org/).
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -31,7 +29,12 @@ cron expression is provided, the component uses a simple trigger. If no
`groupName` is provided, the quartz component uses the `Camel` group
name.
-# Configuring quartz.properties file
+# Usage
+
+Each endpoint represents a different timer (in Quartz terms, a `Trigger`
+and `JobDetail`).
+
+## Configuring quartz.properties file
By default, Quartz will look for a `quartz.properties` file in the
`org/quartz` directory of the classpath. If you are using WAR
@@ -49,7 +52,7 @@ allows you to configure properties:
-
+
-
+
properties
null
Properties
You can configure a
java.util.Properties instance.
-
+
propertiesFile
null
String
@@ -80,15 +83,15 @@ To do this, you can configure this in Spring XML as follows
-# Enabling Quartz scheduler in JMX
+## Enabling Quartz scheduler in JMX
-You need to configure the quartz scheduler properties to enable JMX.
-That is typically setting the option `"org.quartz.scheduler.jmx.export"`
+You need to configure the quartz scheduler properties to enable JMX.
+That is typically setting the option `"org.quartz.scheduler.jmx.export"`
to a `true` value in the configuration file.
-This option is set to true by default, unless explicitly disabled.
+This option is set to `true` by default, unless explicitly disabled.
-# Clustering
+## Clustering
If you use Quartz in clustered mode, e.g., the `JobStore` is clustered.
Then the [Quartz](#quartz-component.adoc) component will **not**
@@ -98,7 +101,7 @@ the trigger to keep running on the other nodes in the cluster.
When running in clustered node, no checking is done to ensure unique job
name/group for endpoints.
-# Message Headers
+## Message Headers
Camel adds the getters from the Quartz Execution Context as header
values. The following headers are added:
@@ -110,7 +113,7 @@ values. The following headers are added:
The `fireTime` header contains the `java.util.Date` of when the exchange
was fired.
-# Using Cron Triggers
+## Using Cron Triggers
Quartz supports [Cron-like
expressions](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)
@@ -137,20 +140,20 @@ valid URI syntax:
-
+
-
+
+
Space
-# Specifying time zone
+## Specifying time zone
The Quartz Scheduler allows you to configure time zone per trigger. For
example, to use a time zone of your country, then you can do as follows:
@@ -159,17 +162,17 @@ example, to use a time zone of your country, then you can do as follows:
The timeZone value is the values accepted by `java.util.TimeZone`.
-# Specifying start date
+## Specifying start date
The Quartz Scheduler allows you to configure start date per trigger. You
+can provide the start date in the date format `yyyy-MM-dd'T'HH:mm:ssz`.
quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.startAt=2023-11-22T14:32:36UTC
-# Specifying end date
+## Specifying end date
The Quartz Scheduler allows you to configure end date per trigger. You
-can provide the end date in the date format yyyy-MM-dd’T'HH:mm:ssz.
+can provide the end date in the date format `yyyy-MM-dd'T'HH:mm:ssz`.
quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.endAt=2023-11-22T14:32:36UTC
@@ -177,7 +180,7 @@ Note: Start and end dates may be affected by time drifts and
unpredictable behavior during daylight-saving time changes. Exercise
caution, especially in environments where precise timing is critical.
-# Configuring misfire instructions
+## Configuring misfire instructions
The quartz scheduler can be configured with a misfire instruction to
handle misfire situations for the trigger. The concrete trigger type
@@ -198,17 +201,17 @@ instructions as well:
The simple and cron triggers have the following misfire instructions
representative:
-## SimpleTrigger.MISFIRE\_INSTRUCTION\_FIRE\_NOW = 1 (default)
+### SimpleTrigger.MISFIRE\_INSTRUCTION\_FIRE\_NOW = 1 (default)
Instructs the Scheduler that upon a mis-fire situation, the
SimpleTrigger wants to be fired now by Scheduler.
This instruction should typically only be used for *one-shot*
(non-repeating) Triggers. If it is used on a trigger with a repeat count
-\> 0, then it is equivalent to the instruction
+greater than 0, then it is equivalent to the instruction
`MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT`.
-## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NOW\_WITH\_EXISTING\_REPEAT\_COUNT = 2
+### SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NOW\_WITH\_EXISTING\_REPEAT\_COUNT = 2
Instructs the Scheduler that upon a mis-fire situation, the
SimpleTrigger wants to be re-scheduled to `now` (even if the associated
@@ -221,7 +224,7 @@ and repeat-count that it was originally setup with. This is only an
issue if you for some reason wanted to be able to tell what the original
values were at some later time.
-## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NOW\_WITH\_REMAINING\_REPEAT\_COUNT = 3
+### SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NOW\_WITH\_REMAINING\_REPEAT\_COUNT = 3
Instructs the Scheduler that upon a mis-fire situation, the
SimpleTrigger wants to be re-scheduled to `now` (even if the associated
@@ -239,7 +242,7 @@ to tell what the original values were at some later time.
This instruction could cause the Trigger to go to the *COMPLETE* state
after firing `now`, if all the repeat-fire-times were missed.
-## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NEXT\_WITH\_REMAINING\_COUNT = 4
+### SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NEXT\_WITH\_REMAINING\_COUNT = 4
Instructs the Scheduler that upon a mis-fire situation, the
SimpleTrigger wants to be re-scheduled to the next scheduled time after
@@ -249,7 +252,7 @@ count set to what it would be, if it had not missed any firings.
This instruction could cause the Trigger to go directly to the
*COMPLETE* state if all fire-times were missed.
-## SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NEXT\_WITH\_EXISTING\_COUNT = 5
+### SimpleTrigger.MISFIRE\_INSTRUCTION\_RESCHEDULE\_NEXT\_WITH\_EXISTING\_COUNT = 5
Instructs the Scheduler that upon a mis-fire situation, the
SimpleTrigger wants to be re-scheduled to the next scheduled time after
@@ -259,27 +262,27 @@ count left unchanged.
This instruction could cause the Trigger to go directly to the
*COMPLETE* state if the end-time of the trigger has arrived.
-## CronTrigger.MISFIRE\_INSTRUCTION\_FIRE\_ONCE\_NOW = 1 (default)
+### CronTrigger.MISFIRE\_INSTRUCTION\_FIRE\_ONCE\_NOW = 1 (default)
Instructs the Scheduler that upon a mis-fire situation, the CronTrigger
wants to be fired now by Scheduler.
-## CronTrigger.MISFIRE\_INSTRUCTION\_DO\_NOTHING = 2
+### CronTrigger.MISFIRE\_INSTRUCTION\_DO\_NOTHING = 2
Instructs the Scheduler that upon a mis-fire situation, the CronTrigger
wants to have its next-fire-time updated to the next time in the
schedule after the current time (taking into account any associated
Calendar). However, it does not want to be fired now.
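Building on the `trigger.`-prefixed options shown earlier, a misfire instruction can be set directly in the endpoint URI (here `2` selects `MISFIRE_INSTRUCTION_DO_NOTHING` for a cron trigger; the group and timer names are illustrative):

```
quartz://myGroup/myTimer?cron=0+0/5+12-18+?+*+MON-FRI&trigger.misfireInstruction=2
```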
-# Using QuartzScheduledPollConsumerScheduler
+## Using QuartzScheduledPollConsumerScheduler
The [Quartz](#quartz-component.adoc) component provides a Polling
-Consumer scheduler which allows to use cron based scheduling for
-[Polling Consumers](#eips:polling-consumer.adoc) such as the File and
-FTP consumers.
+Consumer scheduler which allows using cron based scheduling for [Polling
+Consumers](#eips:polling-consumer.adoc) such as the File and FTP
+consumers.
For example, to use a cron based expression to poll for files every
-second second, then a Camel route can be defined simply as:
+two seconds, a Camel route can be defined simply as:
from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?")
.to("bean:process");
@@ -300,7 +303,7 @@ The following options are supported:
-
+
-
+
quartzScheduler
null
org.quartz.Scheduler
If none is configured, then the shared scheduler from the Quartz component is used.
-
+
cron
null
String
Mandatory: to define
the cron expression for triggering the polls.
-
+
triggerId
null
String
To specify the trigger id. If none is
provided, then a UUID is generated and used.
-
+
triggerGroup
QuartzScheduledPollConsumerScheduler
String
To specify the trigger group.
-
+
timeZone
Default
TimeZone
@@ -348,9 +351,9 @@ trigger.
-**Important:** Remember configuring these options from the endpoint URIs
-must be prefixed with `scheduler.`. For example, to configure the
-trigger id and group:
+Remember that configuring these options from the endpoint URIs must be
+prefixed with `scheduler.`. For example, to configure the trigger id and
+group:
from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup")
.to("bean:process");
@@ -361,24 +364,18 @@ as well:
from("file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?")
.to("bean:process");
-# Cron Component Support
+## Cron Component Support
The Quartz component can be used as the implementation of the Camel
Cron component.
-Maven users will need to add the following additional dependency to
-their `pom.xml`:
-
-
- org.apache.camel
- camel-cron
- x.x.x
-
-
+# Example
Users can then use the cron component instead of the quartz component,
as in the following route:
+**Example route for the cron component**
+
from("cron://name?schedule=0+0/5+12-18+?+*+MON-FRI")
.to("activemq:Totally.Rocks");
@@ -388,7 +385,7 @@ as in the following route:
|Name|Description|Default|Type|
|---|---|---|---|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations, we may improve the Camel component to hook into the 3rd party component and make this possible in future releases. By default, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
-|enableJmx|Whether to enable Quartz JMX which allows to manage the Quartz scheduler from JMX. This options is default true|true|boolean|
+|enableJmx|Whether to enable Quartz JMX, which allows managing the Quartz scheduler from JMX. The default value for this option is true.|true|boolean|
|prefixInstanceName|Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext's.|true|boolean|
|prefixJobNameWithEndpointId|Whether to prefix the quartz job with the endpoint id. This option is default false.|false|boolean|
|properties|Properties to configure the Quartz scheduler.||object|
@@ -398,7 +395,7 @@ as in the following route:
|scheduler|To use the custom configured Quartz scheduler, instead of creating a new Scheduler.||object|
|schedulerFactory|To use the custom SchedulerFactory which is used to create the Scheduler.||object|
|autoStartScheduler|Whether the scheduler should be auto-started. The default value for this option is true.|true|boolean|
-|interruptJobsOnShutdown|Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully.|false|boolean|
+|interruptJobsOnShutdown|Whether to interrupt jobs on shutdown, which forces the scheduler to shut down quicker and attempt to interrupt any running jobs. If this is enabled, then any running jobs can fail due to being interrupted. When a job is interrupted, Camel will mark the exchange to stop further routing and set java.util.concurrent.RejectedExecutionException as the caused exception. Therefore, use this with care, as it's often better to allow Camel jobs to complete and shut down gracefully.|false|boolean|
## Endpoint Configurations
diff --git a/camel-quickfix.md b/camel-quickfix.md
index 610b0dd39cd1fd7fbd70e9c0452b336ee3d55599..0e0d71a6361a0540a1fe6a2d828584630ec9bc16 100644
--- a/camel-quickfix.md
+++ b/camel-quickfix.md
@@ -23,21 +23,26 @@ for this component:
quickfix:configFile[?sessionID=sessionID&lazyCreateEngine=true|false]
-The **configFile** is the name of the QuickFIX/J configuration to use
-for the FIX engine (located as a resource found in your classpath). The
-optional **sessionID** identifies a specific FIX session. The format of
+The `configFile` is the name of the QuickFIX/J configuration to use for
+the FIX engine (located as a resource found in your classpath). The
+optional `sessionID` identifies a specific FIX session. The format of
the sessionID is:
(BeginString):(SenderCompID)[/(SenderSubID)[/(SenderLocationID)]]->(TargetCompID)[/(TargetSubID)[/(TargetLocationID)]]
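For illustration only (the CompID values below are hypothetical), the minimal form of that layout, with the optional Sub/Location qualifiers omitted, can be built as a plain string:

```java
public class SessionIdExample {
    // Builds the minimal sessionID form: BeginString:SenderCompID->TargetCompID.
    // The optional SubID/LocationID qualifiers would be appended with '/'.
    static String sessionId(String beginString, String senderCompId, String targetCompId) {
        return beginString + ":" + senderCompId + "->" + targetCompId;
    }

    public static void main(String[] args) {
        // Hypothetical CompIDs for a FIX 4.2 session:
        System.out.println(sessionId("FIX.4.2", "BANZAI", "EXEC"));
        // FIX.4.2:BANZAI->EXEC
    }
}
```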
-The optional **lazyCreateEngine** parameter allows creating QuickFIX/J
-engine on demand. Value **true** means the engine is started when the
-first message is sent or there’s consumer configured in route
-definition. When **false** value is used, the engine is started at the
-endpoint creation. When this parameter is missing, the value of
-component’s property **lazyCreateEngines** is being used.
+The optional `lazyCreateEngine` parameter allows creating the
+QuickFIX/J engine on demand:
-Example URIs:
+- The value `true` means the engine is started when the first message
+  is sent or there’s a consumer configured in the route definition.
+
+- When the value `false` is used, the engine is started at endpoint
+  creation.
+
+When this parameter is missing, the value of the component’s property
+`lazyCreateEngines` is used.
+
+**Example URIs:**
quickfix:config.cfg
@@ -47,11 +52,11 @@ Example URIs:
# Endpoints
-FIX sessions are endpoints for the **quickfix** component. An endpoint
-URI may specify a single session or all sessions managed by a specific
+FIX sessions are endpoints for the quickfix component. An endpoint URI
+may specify a single session or all sessions managed by a specific
QuickFIX/J engine. Typical applications will use only one FIX engine,
but advanced users may create multiple FIX engines by referencing
-different configuration files in **quickfix** component endpoint URIs.
+different configuration files in quickfix component endpoint URIs.
When a consumer does not include a session ID in the endpoint URI, it
will receive exchanges for all sessions managed by the FIX engine
@@ -61,28 +66,29 @@ include the session-related fields in the FIX message being sent. If a
session is specified in the URI, then the component will automatically
inject the session-related fields into the FIX message.
+# Usage
+
The DataDictionary header is useful if string messages are being
received and need to be parsed in a route. QuickFIX/J requires a data
dictionary to parse certain types of messages (with repeating groups,
for example). By injecting a DataDictionary header in the route after
receiving a message string, the FIX engine can properly parse the data.
-# QuickFIX/J Configuration Extensions
+## QuickFIX/J Configuration Extensions
When using QuickFIX/J directly, one typically writes code to create
instances of logging adapters, message stores, and communication
-connectors. The **quickfix** component will automatically create
-instances of these classes based on information in the configuration
-file. It also provides defaults for many of the commonly required
-settings and adds additional capabilities (like the ability to activate
-JMX support).
-
-The following sections describe how the **quickfix** component processes
-the QuickFIX/J configuration. For comprehensive information about
-QuickFIX/J configuration, see the [QFJ user
+connectors. The quickfix component will automatically create instances
+of these classes based on information in the configuration file. It also
+provides defaults for many of the commonly required settings and adds
+additional capabilities (like the ability to activate JMX support).
+
+The following sections describe how the quickfix component processes the
+QuickFIX/J configuration. For comprehensive information about QuickFIX/J
+configuration, see the [user
manual](http://www.quickfixj.org/quickfixj/usermanual/usage/configuration.html).
-## Communication Connectors
+### Communication Connectors
When the component detects an initiator or acceptor session setting in
the QuickFIX/J configuration file, it will automatically create the
@@ -96,18 +102,18 @@ file.
-
+
-
+
ConnectionType=initiator
Create an initiator connector
-
+
ConnectionType=acceptor
Create an acceptor connector
@@ -125,19 +131,19 @@ and must be placed in the settings default section.
-
+
-
+
ThreadModel=ThreadPerConnector
Use SocketInitiator or
SocketAcceptor (default)
-
+
ThreadModel=ThreadPerSession
Use
@@ -147,7 +153,7 @@ style="text-align: left;">
ThreadModel=ThreadPerSession
-## Logging
+### Logging
The QuickFIX/J logger implementation can be specified by including the
following settings in the default section of the configuration file. The
@@ -163,44 +169,44 @@ values in the QuickFIX/J settings file.
-
+
-
+
ScreenLogShowEvents
Use a ScreenLog
-
+
ScreenLogShowIncoming
Use a ScreenLog
-
+
ScreenLogShowOutgoing
Use a ScreenLog
-
+
SLF4J*
Use a SLF4JLog. Any of the
SLF4J settings will cause this log to be used.
-
+
FileLogPath
Use a FileLog
-
+
JdbcDriver
Use a JdbcLog
-## Message Store
+### Message Store
The QuickFIX/J message store implementation can be specified by
including the following settings in the default section of the
@@ -217,21 +223,21 @@ QuickFIX/J settings file.
-
+
-
+
JdbcDriver
Use a JdbcStore
-
+
FileStorePath
Use a FileStore
-
+
SleepycatDatabaseDir
Use a
@@ -240,14 +246,14 @@ style="text-align: left;">
SleepycatDatabaseDir
-## Message Factory
+### Message Factory
A message factory is used to construct domain objects from raw FIX
messages. The default message factory is `DefaultMessageFactory`.
However, advanced applications may require a custom message factory.
This can be set on the QuickFIX/J component.
-## JMX
+### JMX
@@ -255,13 +261,13 @@ This can be set on the QuickFIX/J component.
-
+
-
+
UseJmx
if Y, then enable
QuickFIX/J JMX
@@ -269,7 +275,7 @@ QuickFIX/J JMX
-## Other Defaults
+### Other Defaults
The component provides some default settings for what are normally
required settings in QuickFIX/J configuration files. `SessionStartTime`
@@ -277,7 +283,7 @@ and `SessionEndTime` default to "00:00:00", meaning the session will not
be automatically started and stopped. The `HeartBtInt` (heartbeat
interval) defaults to 30 seconds.
-## Minimal Initiator Configuration Example
+### Minimal Initiator Configuration Example
[SESSION]
ConnectionType=initiator
@@ -285,7 +291,7 @@ interval) defaults to 30 seconds.
SenderCompID=YOUR_SENDER
TargetCompID=YOUR_TARGET
-# Using the InOut Message Exchange Pattern
+## Using the InOut Message Exchange Pattern
Although the FIX protocol is event-driven and asynchronous, there are
specific pairs of messages that represent a request-reply message
@@ -293,9 +299,9 @@ exchange. To use an InOut exchange pattern, there should be a single
request message and a single reply message to the request. Examples
include an OrderStatusRequest message and UserRequest.
-## Implementing InOut Exchanges for Consumers
+### Implementing InOut Exchanges for Consumers
-Add "exchangePattern=InOut" to the QuickFIX/J enpoint URI. The
+Add `exchangePattern=InOut` to the QuickFIX/J endpoint URI. The
`MessageOrderStatusService` in the example below is a bean with a
synchronous service method. The method returns the response to the
request (an ExecutionReport in this case) which is then sent back to the
@@ -305,7 +311,7 @@ requestor session.
.filter(header(QuickfixjEndpoint.MESSAGE_TYPE_KEY).isEqualTo(MsgType.ORDER_STATUS_REQUEST))
.bean(new MarketOrderStatusService());
-## Implementing InOut Exchanges for Producers
+### Implementing InOut Exchanges for Producers
For producers, sending a message will block until a reply is received or
a timeout occurs. There is no standard way to correlate reply messages
@@ -321,7 +327,7 @@ using `Exchange` properties.
-
+
-
+
Correlation Criteria
-"CorrelationCriteria"
QuickfixjProducer.CORRELATION_CRITERIA_KEY
+style="text-align: left;">CorrelationCriteria
+QuickfixjProducer.CORRELATION_CRITERIA_KEY
None
-
+
Correlation Timeout in
Milliseconds
-"CorrelationTimeout"
QuickfixjProducer.CORRELATION_TIMEOUT_KEY
+style="text-align: left;">CorrelationTimeout
+QuickfixjProducer.CORRELATION_TIMEOUT_KEY
1000
The correlation criteria is defined with a `MessagePredicate` object.
-The following example will treat a FIX ExecutionReport from the
+The following example will match a FIX `ExecutionReport` from the
specified session where the transaction type is STATUS and the Order ID
matches our request. The session ID should be for the *requestor*; the
sender and target CompID fields will be reversed when looking for the
@@ -359,18 +367,7 @@ reply.
.withField(ExecTransType.FIELD, Integer.toString(ExecTransType.STATUS))
.withField(OrderID.FIELD, request.getString(OrderID.FIELD)));
-## Example
-
-The source code contains an example called `RequestReplyExample` that
-demonstrates the InOut exchanges for a consumer and producer. This
-example creates a simple HTTP server endpoint that accepts order status
-requests. The HTTP request is converted to a FIX
-OrderStatusRequestMessage, is augmented with a correlation criteria, and
-is then routed to a quickfix endpoint. The response is then converted to
-a JSON-formatted string and sent back to the HTTP server endpoint to be
-provided as the web response.
-
-# Spring Configuration
+## Spring Configuration
The QuickFIX/J component includes a Spring `FactoryBean` for configuring
the session settings within a Spring context. A type converter for
@@ -433,7 +430,7 @@ session ID strings is also included. The following example shows a
simple configuration of an acceptor and initiator session with default
settings for both sessions.
-# Exception handling
+## Exception handling
QuickFIX/J behavior can be modified if certain exceptions are thrown
during processing of a message. If a `RejectLogon` exception is thrown
@@ -444,13 +441,13 @@ Normally, QuickFIX/J handles the logon process automatically. However,
sometimes an outgoing logon message must be modified to include
credentials required by a FIX counterparty. If the FIX logon message
body is modified when sending a logon message
-(EventCategory=`AdminMessageSent` the modified message will be sent to
+(`EventCategory=AdminMessageSent`), the modified message will be sent to
the counterparty. It is important that the outgoing logon message is
being processed *synchronously*. If it is processed asynchronously (on
another thread), the FIX engine will immediately send the unmodified
outgoing message when its callback method returns.
-# FIX Sequence Number Management
+## FIX Sequence Number Management
If an application exception is thrown during *synchronous* exchange
processing, this will cause QuickFIX/J to not increment incoming FIX
@@ -475,7 +472,9 @@ sending messages.
See the FIX protocol specifications and the QuickFIX/J documentation for
more details about FIX sequence number management.
-# Route Examples
+# Examples
+
+## Route Examples
Several examples are included in the QuickFIX/J component source code
(test subdirectories). One of these examples implements a trivial trade
@@ -502,6 +501,17 @@ and processes them.
filter(header(QuickfixjEndpoint.MESSAGE_TYPE_KEY).isEqualTo(MsgType.EXECUTION_REPORT)).
bean(new MyTradeExecutionProcessor());
+## Additional Examples
+
+The source code contains an example called `RequestReplyExample` that
+demonstrates the InOut exchanges for a consumer and producer. This
+example creates a simple HTTP server endpoint that accepts order status
+requests. The HTTP request is converted to a FIX
+`OrderStatusRequestMessage`, is augmented with correlation criteria,
+and is then routed to a quickfix endpoint. The response is then
+converted to a JSON-formatted string and sent back to the HTTP server
+endpoint to be provided as the web response.
+
## Component Configurations
diff --git a/camel-randomLoadBalancer-eip.md b/camel-randomLoadBalancer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..2281c9b5a7f6be3fbbb01d2b7950ee7319dc47b4
--- /dev/null
+++ b/camel-randomLoadBalancer-eip.md
@@ -0,0 +1,33 @@
+# RandomLoadBalancer-eip.md
+
+Random mode for the [Load Balancer](#loadBalance-eip.adoc) EIP.
+
+The destination endpoints are selected randomly. This is a well-known
+and classic policy, which spreads the load randomly.
+
+# Exchange properties
+
+# Example
+
+We want to load balance between three endpoints in random mode.
+
+This is done as follows:
+
+Java
+from("direct:start")
+.loadBalance().random()
+.to("seda:x")
+.to("seda:y")
+.to("seda:z")
+.end();
+
+XML
+
+
+
+
+
+
+
+
+
diff --git a/camel-reactive-executor-tomcat.md b/camel-reactive-executor-tomcat.md
new file mode 100644
index 0000000000000000000000000000000000000000..4141d7d332b48d4abb54da59a00eff994f919f5d
--- /dev/null
+++ b/camel-reactive-executor-tomcat.md
@@ -0,0 +1,15 @@
+# Reactive-executor-tomcat.md
+
+**Since Camel 3.17**
+
+The `camel-reactive-executor-tomcat` component is intended for users of
+Apache Tomcat, to let Camel applications shut down cleanly when being
+undeployed from Apache Tomcat.
+
+# Auto-detection from classpath
+
+To use this implementation, add the `camel-reactive-executor-tomcat`
+dependency to the classpath. Camel should auto-detect it on startup and
+log the following:
+
+ Using ReactiveExecutor: camel-reactive-executor-tomcat
diff --git a/camel-reactive-executor-vertx.md b/camel-reactive-executor-vertx.md
new file mode 100644
index 0000000000000000000000000000000000000000..c6c1649cab55e392f27aacf1fa67998ab7bbb2d2
--- /dev/null
+++ b/camel-reactive-executor-vertx.md
@@ -0,0 +1,27 @@
+# Reactive-executor-vertx.md
+
+**Since Camel 3.0**
+
+The `camel-reactive-executor-vertx` component is a VertX based
+implementation of the `ReactiveExecutor` SPI.
+
+By default, Camel uses its own reactive engine for routing messages, but
+you can plug in different engines via an SPI interface. This is a VertX
+based plugin that uses the VertX event loop for processing messages
+during routing.
+
+At this time, this component is experimental, so use it with care.
+
+# VertX instance
+
+This implementation will first look up an existing `io.vertx.core.Vertx`
+instance in the registry. Alternatively, you can configure an instance
+explicitly using the getter/setter on the `VertXReactiveExecutor` class.
+
+# Auto-detection from classpath
+
+To use this implementation, add the `camel-reactive-executor-vertx`
+dependency to the classpath. Camel should auto-detect it on startup and
+log the following:
+
+ Using ReactiveExecutor: camel-reactive-executor-vertx
diff --git a/camel-reactive-streams.md b/camel-reactive-streams.md
index a0975957b5f26fb54adc70077e5696079f61a96e..abd1faff1422948c74165a0b857128b04842fe9f 100644
--- a/camel-reactive-streams.md
+++ b/camel-reactive-streams.md
@@ -55,7 +55,7 @@ application to interact with Camel data:
- **Process** data flowing from a Camel route using a reactive
processing step (In-Out from Camel)
-# Getting data from Camel
+## Getting data from Camel
To subscribe to data flowing from a Camel route, exchanges should be
redirected to a named stream, like in the following snippet:
@@ -89,7 +89,7 @@ can be used to process events).
The example prints all numbers generated by Camel into `System.out`.
-## Getting data from Camel using the direct API
+### Getting data from Camel using the direct API
For short Camel routes and for users that prefer defining the whole
processing flow using functional constructs of the reactive framework
@@ -106,7 +106,7 @@ Camel URIs.
.doOnNext(System.out::println)
.subscribe();
-# Sending data to Camel
+## Sending data to Camel
When an external library needs to push events into a Camel route, the
Reactive Streams endpoint must be set as consumer.
@@ -135,7 +135,7 @@ can be used to publish events).
String items are generated every second by RxJava in the example, and
they are pushed into the Camel route defined above.
-## Sending data to Camel using the direct API
+### Sending data to Camel using the direct API
Also in this case, the direct API can be used to obtain a Camel
subscriber from an endpoint URI.
@@ -146,7 +146,7 @@ subscriber from an endpoint URI.
Flowable.just("hello", "world")
.subscribe(camel.subscriber("seda:queue", String.class));
-# Request a transformation to Camel
+## Request a transformation to Camel
Routes defined in some Camel DSL can be used within a reactive stream
framework to perform a specific transformation. The same mechanism can
@@ -170,7 +170,7 @@ the Camel context:
from("reactive-streams:readAndMarshal")
.marshal() // ... other details
-## Request a transformation to Camel using the direct API
+### Request a transformation to Camel using the direct API
An alternative approach consists of using the URI endpoints directly in
the reactive flow:
@@ -193,7 +193,7 @@ In this case, the Camel transformation can be just:
from("direct:process")
.marshal() // ... other details
-# Process Camel data into the reactive framework
+## Process Camel data into the reactive framework
While a reactive streams *Publisher* allows exchanging data in a
unidirectional way, Camel routes often use an in-out exchange pattern
@@ -235,9 +235,9 @@ completely reactive way.
See Camel examples (**camel-example-reactive-streams**) for details.
-# Advanced Topics
+## Advanced Topics
-## Controlling Backpressure (producer side)
+### Controlling Backpressure (producer side)
When routing Camel exchanges to an external subscriber, backpressure is
handled by an internal buffer that caches exchanges before delivering
@@ -251,9 +251,9 @@ Considering the following route:
If the JMS queue contains a high number of messages and the Subscriber
associated with the `flow` stream is too slow, messages are dequeued
-from JMS and appended to the buffer, possibly causing a "out of memory"
-error. To avoid such problems, a `ThrottlingInflightRoutePolicy` can be
-set in the route.
+from JMS and appended to the buffer, possibly causing an *"out of
+memory"* error. To avoid such problems, a
+`ThrottlingInflightRoutePolicy` can be set in the route.
ThrottlingInflightRoutePolicy policy = new ThrottlingInflightRoutePolicy();
policy.setMaxInflightExchanges(10);
@@ -289,10 +289,10 @@ When the `LATEST` backpressure strategy is used, the publisher keeps
only the last exchange received from the route, while older data is
discarded (other options are available).
-## Controlling Backpressure (consumer side)
+### Controlling Backpressure (consumer side)
When Camel consumes items from a reactive-streams publisher, the maximum
-number of inflight exchanges can be set as endpoint option.
+number of in-flight exchanges can be set as an endpoint option.
The subscriber associated with the consumer interacts with the publisher
to keep the number of messages in the route lower than the threshold.
diff --git a/camel-reactor.md b/camel-reactor.md
new file mode 100644
index 0000000000000000000000000000000000000000..66b4e5d1df584f1eb1cc602f95df61b0410b243c
--- /dev/null
+++ b/camel-reactor.md
@@ -0,0 +1,13 @@
+# Reactor.md
+
+**Since Camel 2.20**
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+ org.apache.camel
+ camel-reactor
+ x.x.x
+
+
diff --git a/camel-recipientList-eip.md b/camel-recipientList-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..8547e312ca41ab8c68e89f990532daf35a6b4fd7
--- /dev/null
+++ b/camel-recipientList-eip.md
@@ -0,0 +1,335 @@
+# RecipientList-eip.md
+
+Camel supports the [Recipient
+List](https://www.enterpriseintegrationpatterns.com/RecipientList.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How do we route a message to a list of dynamically specified recipients?
+
+
+
+
+
+Define a channel for each recipient. Then use a Recipient List to
+inspect an incoming message, determine the list of desired recipients,
+and forward the message to all channels associated with the recipients
+in the list.
+
+# Options
+
+See the `cacheSize` option for more details on *how much cache* to use
+depending on how many or few unique endpoints are used.
+
+# Exchange properties
+
+# Using Recipient List
+
+The Recipient List EIP allows routing **the same** message to a number
+of [endpoints](#manual::endpoint.adoc) and processing them in different
+ways.
+
+There can be one or more destinations, and Camel will execute them
+sequentially (by default). However, a parallel mode exists which allows
+processing messages concurrently.
+
+The Recipient List EIP has many features and is based on the
+[Multicast](#multicast-eip.adoc) EIP. For example, the Recipient List
+EIP is capable of aggregating each message into a single *response*
+message as the result of the Recipient List EIP.
+
+## Using Static Recipient List
+
+The following example shows how to route a request from an input
+`queue:a` endpoint to a static list of destinations, using `constant`:
+
+Java
+from("jms:queue:a")
+.recipientList(constant("seda:x,seda:y,seda:z"));
+
+XML
+
+
+
+seda:x,seda:y,seda:z
+
+
+
+## Using Dynamic Recipient List
+
+Usually one of the main reasons for using the Recipient List pattern is
+that the list of recipients is dynamic and calculated at runtime.
+
+The following example demonstrates how to create a dynamic recipient
+list using an [Expression](#manual::expression.adoc) (which in this case
+extracts a named header value dynamically) to calculate the list of
+endpoints, which are either of type `Endpoint` or are converted to a
+`String` and then resolved using the endpoint URIs (separated by comma).
+
+Java
+from("jms:queue:a")
+.recipientList(header("foo"));
+
+XML
+
+
+
+
+
+
+### How dynamic destinations are evaluated
+
+The dynamic list of recipients defined in the header must be an
+iterable type, such as:
+
+- `java.util.Collection`
+
+- `java.util.Iterator`
+
+- arrays
+
+- `org.w3c.dom.NodeList`
+
+- a single `String` with values separated by comma (the delimiter
+ configured)
+
+- any other type will be regarded as a single value
+
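The rules above can be sketched in plain Java. This is an illustrative approximation, not Camel's actual implementation: collections, arrays, and delimited `String`s yield several endpoint URIs, and anything else is treated as a single value.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class RecipientResolverSketch {
    // Sketch of the evaluation rules: normalize a header value
    // into a list of endpoint URI strings.
    static List<String> resolve(Object header, String delimiter) {
        List<String> uris = new ArrayList<>();
        if (header instanceof Collection) {
            for (Object o : (Collection<?>) header) uris.add(o.toString().trim());
        } else if (header instanceof Object[]) {
            for (Object o : (Object[]) header) uris.add(o.toString().trim());
        } else if (header instanceof String) {
            // A single String is split on the configured delimiter and trimmed.
            for (String s : ((String) header).split(delimiter)) uris.add(s.trim());
        } else {
            uris.add(String.valueOf(header)); // any other type: a single value
        }
        return uris;
    }

    public static void main(String[] args) {
        System.out.println(resolve("seda:x, seda:y ,seda:z", ","));
        System.out.println(resolve(new String[] {"seda:a", "seda:b"}, ","));
    }
}
```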
+## Configuring delimiter for dynamic destinations
+
+In the XML DSL, you can set the `delimiter` attribute to specify the
+delimiter used when the header value is a single `String` containing
+multiple endpoints. By default, Camel uses a comma as the delimiter, but
+this option lets you specify a custom delimiter instead.
+
+
+
+
+
+
+
+
+
+So if **myHeader** contains a `String` with the value
+`"activemq:queue:foo;activemq:topic:hello ; log:bar"` then Camel will
+split the `String` using the delimiter given in the XML (a semicolon in
+this example), resulting in three endpoints to send to. You can use
+spaces between the endpoints, as Camel will trim the value when it looks
+up the endpoint to send to.
+
+In the Java DSL, you specify the delimiter as the second parameter, as
+shown below:
+
+ from("direct:a")
+ .recipientList(header("myHeader"), ";");
+
+## Using parallel processing
+
+The Recipient List supports `parallelProcessing`, similar to the
+[Multicast](#multicast-eip.adoc) and [Split](#split-eip.adoc) EIPs. When
+parallel processing is used, a thread pool runs concurrent tasks that
+send the `Exchange` to multiple recipients at the same time.
+
+You can enable parallel mode using `parallelProcessing` as shown:
+
+Java
+from("direct:a")
+.recipientList(header("myHeader")).parallelProcessing();
+
+XML
+
+
+
+
+
+
+
+When parallel processing is enabled, the Camel routing engine will
+continue processing using the last used thread from the parallel thread
+pool. However, if you want to use the original thread that called the
+recipient list, then make sure to enable the synchronous option as well.
+
+### Using custom thread pool
+
+A thread pool is only used for `parallelProcessing`. You can supply
+your own custom thread pool via the `ExecutorServiceStrategy` (see
+Camel’s Threading Model), the same way you would do it for the
+`aggregationStrategy`. By default, Camel uses a thread pool with 10
+threads (subject to change in future versions).
+
+The Recipient List EIP will by default continue to process the entire
+exchange even if one of the sub messages throws an exception during
+routing.
+
+For example, suppose you route to three destinations and the second
+destination fails with an exception. By default, Camel processes the
+remaining destinations. You then have the chance to deal with the
+exception when aggregating using an `AggregationStrategy`.
+
+But sometimes you want Camel to stop and let the exception be
+propagated back, and let the Camel [Error
+Handler](#manual::error-handler.adoc) handle it. You can do this by
+specifying that it should stop in case an exception occurs. This is
+done by the `stopOnException` option as shown below:
+
+Java
+from("direct:start")
+.recipientList(header("whereTo")).stopOnException()
+.to("mock:result");
+
+ from("direct:foo").to("mock:foo");
+
+ from("direct:bar").process(new MyProcessor()).to("mock:bar");
+
+ from("direct:baz").to("mock:baz");
+
+XML
+    <routes>
+      <route>
+        <from uri="direct:start"/>
+        <recipientList stopOnException="true">
+          <header>whereTo</header>
+        </recipientList>
+        <to uri="mock:result"/>
+      </route>
+      <route>
+        <from uri="direct:foo"/>
+        <to uri="mock:foo"/>
+      </route>
+      <route>
+        <from uri="direct:bar"/>
+        <process ref="myProcessor"/>
+        <to uri="mock:bar"/>
+      </route>
+      <route>
+        <from uri="direct:baz"/>
+        <to uri="mock:baz"/>
+      </route>
+    </routes>
+
+In this example suppose a message is sent with the header
+`whereTo=direct:foo,direct:bar,direct:baz` that means the recipient list
+sends messages to those three endpoints.
+
+Now suppose that `MyProcessor` causes a failure and throws an
+exception. The Recipient List EIP will then stop, and the message will
+not be sent to the last endpoint (`direct:baz`).
+
+## Ignore invalid endpoints
+
+The Recipient List supports `ignoreInvalidEndpoints` (like [Routing
+Slip](#routingSlip-eip.adoc) EIP). You can use it to skip endpoints
+which are invalid.
+
+Java
+from("direct:a")
+.recipientList(header("myHeader")).ignoreInvalidEndpoints();
+
+XML
+    <route>
+      <from uri="direct:a"/>
+      <recipientList ignoreInvalidEndpoints="true">
+        <header>myHeader</header>
+      </recipientList>
+    </route>
+
+Let us say `myHeader` contains the two endpoints
+`direct:foo,xxx:bar`. The first endpoint is valid and works. However,
+the second one is invalid and will just be ignored. Camel logs this at
+DEBUG level, so you can see why the endpoint was invalid.
+
+## Using timeout
+
+If you use `parallelProcessing` then you can configure a total `timeout`
+value in millis.
+
+Camel will then process the messages in parallel until the timeout is
+hit. This allows you to continue processing if one message consumer is
+slow. For example, you can set a timeout value of 20 sec.
+
+If the timeout is reached with running tasks still remaining, certain
+tasks for which it is challenging for Camel to shut down in a graceful
+manner may continue to run. So use this option with a bit of care.
+
+For example, in the unit test below, you can see that we multicast the
+message to three destinations. We have a timeout of 250 milliseconds,
+which means only the last two messages can be completed within the
+timeframe. This means we will only aggregate the last two, which yields
+a result aggregation which outputs "BC".
+
+ from("direct:start")
+ .multicast(new AggregationStrategy() {
+ public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
+ if (oldExchange == null) {
+ return newExchange;
+ }
+
+ String body = oldExchange.getIn().getBody(String.class);
+ oldExchange.getIn().setBody(body + newExchange.getIn().getBody(String.class));
+ return oldExchange;
+ }
+ })
+ .parallelProcessing().timeout(250).to("direct:a", "direct:b", "direct:c")
+ // use end to indicate end of multicast route
+ .end()
+ .to("mock:result");
+
+ from("direct:a").delay(1000).to("mock:A").setBody(constant("A"));
+
+ from("direct:b").to("mock:B").setBody(constant("B"));
+
+ from("direct:c").to("mock:C").setBody(constant("C"));
+
+By default, if a timeout occurs the `AggregationStrategy` is not
+invoked. However, you can implement the `timeout` method of
+`TimeoutAwareAggregationStrategy`. This allows you to deal with the
+timeout in the `AggregationStrategy` if you really need to.
+
+Timeout is total
+
+The timeout is total, which means that after X time, Camel will
+aggregate the messages which have completed within the timeframe. The
+remainder will be canceled. Camel will also only invoke the `timeout`
+method in the `TimeoutAwareAggregationStrategy` once, for the first
+index which caused the timeout.
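
The total-timeout behaviour can be illustrated with plain Java executors (a sketch of the concept only; Camel's routing engine works differently, and the class name is made up):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class TotalTimeoutAggregation {

    // Run all tasks in parallel under one total timeout; aggregate only
    // the results of tasks that completed before the timeout was hit
    public static String aggregate(List<Callable<String>> tasks, long timeoutMillis) {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, tasks.size()));
        try {
            // invokeAll cancels any task still running when the timeout expires
            List<Future<String>> futures = pool.invokeAll(tasks, timeoutMillis, TimeUnit.MILLISECONDS);
            StringBuilder result = new StringBuilder();
            for (Future<String> future : futures) {
                if (!future.isCancelled()) {
                    try {
                        result.append(future.get());
                    } catch (ExecutionException | CancellationException e) {
                        // a failed or cancelled task contributes nothing
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
            return result.toString();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        List<Callable<String>> tasks = List.of(
                () -> { Thread.sleep(2000); return "A"; },  // too slow: cancelled
                () -> "B",
                () -> "C");
        System.out.println(aggregate(tasks, 500));
    }
}
```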
+
+## Using ExchangePattern in recipients
+
+The recipient list will by default use the current Exchange Pattern.
+Though one can imagine use-cases where one wants to send a message to a
+recipient using a different exchange pattern. For example, you may have
+a route that initiates as an `InOnly` route, but want to use `InOut`
+exchange pattern with a recipient list. You can configure the exchange
+pattern directly in the recipient endpoints.
+
+For example, in the route below we pick up new files (which will be
+started as `InOnly`) and then route to a recipient list. As we want to
+use `InOut` with the ActiveMQ (JMS) endpoint, we can specify this
+using the `exchangePattern=InOut` option. The response from the JMS
+request/reply will then continue to be routed, and thus the response is
+what will be stored as a file in the outbox directory.
+
+ from("file:inbox")
+ // the exchange pattern is InOnly initially when using a file route
+ .recipientList().constant("activemq:queue:inbox?exchangePattern=InOut")
+ .to("file:outbox");
+
+The recipient list will not alter the original exchange pattern. So in
+the example above the exchange pattern will still be `InOnly` when the
+message is routed to the `file:outbox` endpoint. If you want to alter
+the exchange pattern permanently, then use `.setExchangePattern` in the
+route.
+
+See more details at [Event Message](#event-message.adoc) and [Request
+Reply](#requestReply-eip.adoc) EIPs.
+
+# See Also
+
+Because Recipient List EIP is based on the
+[Multicast](#multicast-eip.adoc), then you can find more information in
+[Multicast](#multicast-eip.adoc) EIP about features that are also
+available with Recipient List EIP.
diff --git a/camel-redis.md b/camel-redis.md
new file mode 100644
index 0000000000000000000000000000000000000000..5950ca8fce046ee444a7c4266392350272a52295
--- /dev/null
+++ b/camel-redis.md
@@ -0,0 +1,6 @@
+# Redis.md
+
+**Since Camel 3.5**
+
+The Redis component provides an `AggregationStrategy` to use Redis as
+the backend datastore.
diff --git a/camel-ref-language.md b/camel-ref-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..6ba6875bd59661cb5eb4fa253a718ac8a0e8f49c
--- /dev/null
+++ b/camel-ref-language.md
@@ -0,0 +1,38 @@
+# Ref-language.md
+
+**Since Camel 2.8**
+
+The Ref Expression Language is really just a way to look up a custom
+`Expression` or `Predicate` from the
+[Registry](#manual:ROOT:registry.adoc).
+
+This is particularly useful in the XML DSL.
+
+# Ref Language options
+
+# Example usage
+
+The Splitter EIP in XML DSL can utilize a custom expression using
+`<ref>` like:
+
+    <route>
+      <from uri="seda:a"/>
+      <split>
+        <ref>myExpression</ref>
+        <to uri="seda:b"/>
+      </split>
+    </route>
+
+In this case, the message coming from the seda:a endpoint will be split
+using a custom `Expression` which has the id `myExpression` in the
+[Registry](#manual:ROOT:registry.adoc).
+
+And the same example using Java DSL:
+
+ from("seda:a").split().ref("myExpression").to("seda:b");
+
+# Dependencies
+
+The Ref language is part of **camel-core**.
diff --git a/camel-ref.md b/camel-ref.md
index 475f01471f4aceb8164f846396018696480209d3..0c6113b85c56939938d3fbd46f292558901cf7b9 100644
--- a/camel-ref.md
+++ b/camel-ref.md
@@ -16,7 +16,9 @@ but not always, the Spring registry). If you are using the Spring
registry, `someName` would be the bean ID of an endpoint in the Spring
registry.
-# Runtime lookup
+# Usage
+
+## Runtime lookup
This component can be used when you need dynamic discovery of endpoints
in the Registry where you can compute the URI at runtime. Then you can
@@ -40,7 +42,7 @@ Registry such as:
-# Sample
+# Example
Bind endpoints to the Camel registry:
diff --git a/camel-removeHeader-eip.md b/camel-removeHeader-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..1459fd007f361444e0f53c43f5f80896f525bbdf
--- /dev/null
+++ b/camel-removeHeader-eip.md
@@ -0,0 +1,49 @@
+# RemoveHeader-eip.md
+
+The Remove Header EIP allows you to remove a single header from the
+[Message](#message.adoc).
+
+# Options
+
+# Exchange properties
+
+# Example
+
+We want to remove a header with key "myHeader" from the message:
+
+Java
+from("seda:b")
+.removeHeader("myHeader")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeHeader name="myHeader"/>
+      <to uri="mock:result"/>
+    </route>
+
+YAML
+    - from:
+        uri: seda:b
+        steps:
+          - removeHeader:
+              name: myHeader
+          - to:
+              uri: mock:result
+
+# See Also
+
+Camel provides the following EIPs for removing headers or exchange
+properties:
+
+- [Remove Header](#removeHeader-eip.adoc): To remove a single header
+
+- [Remove Headers](#removeHeaders-eip.adoc): To remove one or more
+ message headers
+
+- [Remove Property](#removeProperty-eip.adoc): To remove a single
+ exchange property
+
+- [Remove Properties](#removeProperties-eip.adoc): To remove one or
+ more exchange properties
diff --git a/camel-removeHeaders-eip.md b/camel-removeHeaders-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..828a27fe0a863eda7a8afbc2a0fd9747a6c1e28d
--- /dev/null
+++ b/camel-removeHeaders-eip.md
@@ -0,0 +1,84 @@
+# RemoveHeaders-eip.md
+
+The Remove Headers EIP allows you to remove one or more headers from the
+[Message](#message.adoc), based on pattern syntax.
+
+# Options
+
+# Exchange properties
+
+# Remove Headers by pattern
+
+The Remove Headers EIP supports pattern matching by the following rules
+in the given order:
+
+- match by exact name
+
+- match by wildcard
+
+- match by regular expression
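
The matching order can be sketched in plain Java (an illustrative approximation with a made-up class name; it ignores details such as Camel's case-insensitive header names):

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class HeaderPatternMatcher {

    // Try the rules in order: exact name, wildcard (trailing '*'), regex
    public static boolean matches(String headerName, String pattern) {
        if (headerName.equals(pattern)) {
            return true;  // 1. exact name
        }
        if (pattern.endsWith("*")
                && headerName.startsWith(pattern.substring(0, pattern.length() - 1))) {
            return true;  // 2. wildcard
        }
        try {
            return Pattern.matches(pattern, headerName);  // 3. regular expression
        } catch (PatternSyntaxException e) {
            return false;  // not a valid regex, so no match
        }
    }

    public static void main(String[] args) {
        System.out.println(matches("CamelFileName", "Camel*"));
    }
}
```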
+
+# Remove all headers
+
+To remove all headers you can use `*` as the pattern:
+
+Java
+from("seda:b")
+.removeHeaders("*")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeHeaders pattern="*"/>
+      <to uri="mock:result"/>
+    </route>
+
+YAML
+    - from:
+        uri: seda:b
+        steps:
+          - removeHeaders: "*"
+          - to:
+              uri: mock:result
+
+# Remove all Camel headers
+
+To remove all headers that start with `Camel` then use `Camel*` as
+shown:
+
+Java
+from("seda:b")
+.removeHeaders("Camel*")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeHeaders pattern="Camel*"/>
+      <to uri="mock:result"/>
+    </route>
+
+YAML
+    - from:
+        uri: seda:b
+        steps:
+          - removeHeaders: "Camel*"
+          - to:
+              uri: mock:result
+
+# See Also
+
+Camel provides the following EIPs for removing headers or exchange
+properties:
+
+- [Remove Header](#removeHeader-eip.adoc): To remove a single header
+
+- [Remove Headers](#removeHeaders-eip.adoc): To remove one or more
+ message headers
+
+- [Remove Property](#removeProperty-eip.adoc): To remove a single
+ exchange property
+
+- [Remove Properties](#removeProperties-eip.adoc): To remove one or
+ more exchange properties
diff --git a/camel-removeProperties-eip.md b/camel-removeProperties-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..e11d4a5cbc3bce7a26105a6ca54677262a41bbbb
--- /dev/null
+++ b/camel-removeProperties-eip.md
@@ -0,0 +1,71 @@
+# RemoveProperties-eip.md
+
+The Remove Properties EIP allows you to remove one or more `Exchange`
+properties, based on pattern syntax.
+
+# Options
+
+# Exchange properties
+
+# Remove Exchange Properties by pattern
+
+The Remove Properties EIP supports pattern matching by the following
+rules in the given order:
+
+- match by exact name
+
+- match by wildcard
+
+- match by regular expression
+
+# Remove all properties
+
+Java
+from("seda:b")
+.removeProperties("*")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeProperties pattern="*"/>
+      <to uri="mock:result"/>
+    </route>
+
+Be careful when removing all exchange properties, as Camel internally
+uses exchange properties to keep state on the `Exchange` during routing.
+So use this with care. You should generally only remove custom exchange
+properties that are under your own control.
+
+# Remove properties by pattern
+
+To remove all exchange properties that start with `Foo` then use `Foo*`
+as shown:
+
+Java
+from("seda:b")
+.removeProperties("Foo*")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeProperties pattern="Foo*"/>
+      <to uri="mock:result"/>
+    </route>
+
+# See Also
+
+Camel provides the following EIPs for removing headers or exchange
+properties:
+
+- [Remove Header](#removeHeader-eip.adoc): To remove a single header
+
+- [Remove Headers](#removeHeaders-eip.adoc): To remove one or more
+ message headers
+
+- [Remove Property](#removeProperty-eip.adoc): To remove a single
+ exchange property
+
+- [Remove Properties](#removeProperties-eip.adoc): To remove one or
+ more exchange properties
diff --git a/camel-removeProperty-eip.md b/camel-removeProperty-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..599f63d2243593bf6a13587c8f239724fc105984
--- /dev/null
+++ b/camel-removeProperty-eip.md
@@ -0,0 +1,41 @@
+# RemoveProperty-eip.md
+
+The Remove Property EIP allows you to remove a single property from the
+`Exchange`.
+
+# Options
+
+# Exchange properties
+
+# Example
+
+We want to remove an exchange property with key "myProperty" from the
+exchange:
+
+Java
+from("seda:b")
+.removeProperty("myProperty")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeProperty name="myProperty"/>
+      <to uri="mock:result"/>
+    </route>
+
+# See Also
+
+Camel provides the following EIPs for removing headers or exchange
+properties:
+
+- [Remove Header](#removeHeader-eip.adoc): To remove a single header
+
+- [Remove Headers](#removeHeaders-eip.adoc): To remove one or more
+ message headers
+
+- [Remove Property](#removeProperty-eip.adoc): To remove a single
+ exchange property
+
+- [Remove Properties](#removeProperties-eip.adoc): To remove one or
+ more exchange properties
diff --git a/camel-removeVariable-eip.md b/camel-removeVariable-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..b61524b06825d4b9498f6b9aff836827f1eab572
--- /dev/null
+++ b/camel-removeVariable-eip.md
@@ -0,0 +1,26 @@
+# RemoveVariable-eip.md
+
+The Remove Variable EIP allows you to remove a single variable.
+
+# Options
+
+# Exchange properties
+
+# Example
+
+We want to remove a variable with key "myVar" from the exchange:
+
+Java
+from("seda:b")
+.removeVariable("myVar")
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="seda:b"/>
+      <removeVariable name="myVar"/>
+      <to uri="mock:result"/>
+    </route>
+
+If you want to remove all variables from the `Exchange` then use `*` as
+the name.
diff --git a/camel-requestReply-eip.md b/camel-requestReply-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..921daa0e54e68df2faaed68947299d362c84f9c6
--- /dev/null
+++ b/camel-requestReply-eip.md
@@ -0,0 +1,113 @@
+# RequestReply-eip.md
+
+Camel supports the [Request
+Reply](http://www.enterpriseintegrationpatterns.com/RequestReply.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+When an application sends a message, how can it get a response from the
+receiver?
+
+
+
+
+
+Send a pair of Request-Reply messages, each on its own channel.
+
+Camel supports Request Reply by the [Exchange
+Pattern](#manual::exchange-pattern.adoc) on a [Message](#message.adoc)
+which can be set to `InOut` to indicate a request/reply message. Camel
+[Components](#ROOT:index.adoc) then implement this pattern using the
+underlying transport or protocols.
+
+For example, when using [JMS](#ROOT:jms-component.adoc) with `InOut` the
+component will by default perform these actions:
+
+- create by default a temporary inbound queue
+
+- set the `JMSReplyTo` destination on the request message
+
+- set the `JMSCorrelationID` on the request message
+
+- send the request message
+
+- consume the response and associate the inbound message to the
+ belonging request using the `JMSCorrelationID` (as you may be
+ performing many concurrent request/responses).
+
+- continue routing when the reply is received and populated on the
+ [Exchange](#manual::exchange.adoc)
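
The correlation step in the list above can be sketched in plain Java (a simplified illustration of the idea, not the JMS component's actual code; names are made up):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class ReplyCorrelator {

    // Pending requests keyed by correlation id, completed when the reply arrives
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Producer side: register a pending request under its correlation id
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Reply-consumer side: hand the reply body to the waiting request
    public void onReply(String correlationId, String body) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future != null) {
            future.complete(body);
        }
    }

    public static void main(String[] args) {
        ReplyCorrelator correlator = new ReplyCorrelator();
        CompletableFuture<String> reply = correlator.register("id-1");
        correlator.onReply("id-1", "pong");
        System.out.println(reply.join());
    }
}
```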
+
+See the related [Event Message](#eips:event-message.adoc).
+
+# Using endpoint URI
+
+If you are using a component which defaults to `InOnly` you can override
+the [Exchange Pattern](#manual::exchange-pattern.adoc) for a
+**consumer** endpoint using the pattern property.
+
+ foo:bar?exchangePattern=InOut
+
+This is only possible on endpoints used by consumers (i.e., in
+`<from>`).
+
+In the example below the message will be forced as a request reply
+message as the consumer is in `InOut` mode.
+
+Java
+from("jms:someQueue?exchangePattern=InOut")
+.to("bean:processMessage");
+
+XML
+    <route>
+      <from uri="jms:someQueue?exchangePattern=InOut"/>
+      <to uri="bean:processMessage"/>
+    </route>
+
+# Using setExchangePattern EIP
+
+You can specify the [Exchange Pattern](#manual::exchange-pattern.adoc)
+using `setExchangePattern` in the DSL.
+
+Java
+from("direct:foo")
+.setExchangePattern(ExchangePattern.InOut)
+.to("jms:queue:cheese");
+
+XML
+    <route>
+      <from uri="direct:foo"/>
+      <setExchangePattern pattern="InOut"/>
+      <to uri="jms:queue:cheese"/>
+    </route>
+
+When using `setExchangePattern` then the [Exchange
+Pattern](#manual::exchange-pattern.adoc) on the
+[Exchange](#manual::exchange.adoc) is changed from this point onwards in
+the route.
+
+This means you can change the pattern back again at a later point:
+
+    from("direct:foo")
+    .setExchangePattern(ExchangePattern.InOnly)
+    .to("jms:queue:one-way")
+    .setExchangePattern(ExchangePattern.InOut)
+    .to("jms:queue:in-and-out")
+    .log("InOut MEP received ${body}");
+
+Using `setExchangePattern` to change the [Exchange
+Pattern](#manual::exchange-pattern.adoc) is often only used in special
+use-cases where you must force either `InOnly` or `InOut`
+mode when using components that support both modes (such as messaging
+components like ActiveMQ, JMS, RabbitMQ, etc.)
+
+# JMS component and InOnly vs. InOut
+
+When consuming messages from [JMS](#ROOT:jms-component.adoc), a Request
+Reply is indicated by the presence of the `JMSReplyTo` header. This
+means the JMS component automatically detects whether to use `InOnly` or
+`InOut` in the consumer.
+
+Likewise, the JMS producer will check the current [Exchange
+Pattern](#manual::exchange-pattern.adoc) on the
+[Exchange](#manual::exchange.adoc) to know whether to use `InOnly` or
+`InOut` mode (i.e., one-way vs. request/reply messaging)
diff --git a/camel-resequence-eip.md b/camel-resequence-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..3c7e10d511b9371267e84a703973ccddd3a831dc
--- /dev/null
+++ b/camel-resequence-eip.md
@@ -0,0 +1,348 @@
+# Resequence-eip.md
+
+Camel supports the
+[Resequencer](http://www.enterpriseintegrationpatterns.com/Resequencer.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How can we get a stream of related but out-of-sequence messages back
+into the correct order?
+
+
+
+
+
+Use a stateful filter, a Resequencer, to collect and re-order messages
+so that they can be published to the output channel in a specified
+order.
+
+The Resequencer implementation in Camel uses an
+[Expression](#manual::expression.adoc) as the `Comparator` to re-order
+the messages. By using the expression, then the messages can easily be
+re-ordered by a message header or another piece of the message.
+
+Camel supports two re-sequencing algorithms:
+
+- [Batch Resequencing](#batchConfig-eip.adoc) - **Default mode**:
+ collects messages into a batch, sorts the messages and sends them to
+ their output.
+
+- [Stream Resequencing](#streamConfig-eip.adoc) - re-orders
+ (continuous) message streams based on the detection of gaps between
+ messages.
+
+By default, the Resequencer does not support duplicate messages: if a
+message arrives with the same value for the sequence expression, only
+the last message is kept. However, in batch mode you can enable support
+for duplicates.
+
+# Options
+
+# Exchange properties
+
+# Batch Resequencing
+
+The following example shows how to use the Resequencer in [batch
+mode](#batchConfig-eip.adoc) (default), so that messages are sorted in
+order of the message body.
+
+That is, messages are collected into a batch (either by a maximum number
+of messages per batch or using a timeout), then they are sorted and sent
+out to continue being routed.
+
+In the example below, we re-order the messages based on the content of
+the message body. The default batch mode will collect up to 100
+messages per batch, or time out every second.
+
+Java
+from("direct:start")
+.resequence().body()
+.to("mock:result");
+
+This is equivalent to:
+
+ from("direct:start")
+ .resequence(body()).batch()
+ .to("mock:result");
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <simple>${body}</simple>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+The batch resequencer can be further configured via the `size()` and
+`timeout()` methods:
+
+Java
+from("direct:start")
+.resequence(body()).batch().size(300).timeout(4000L)
+.to("mock:result")
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <batchConfig batchSize="300" batchTimeout="4000"/>
+        <simple>${body}</simple>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+This sets the batch size to 300 and the batch timeout to 4000 ms (by
+default, the batch size is 100, and the timeout is 1000 ms).
+
+So the above example will reorder messages in order of their bodies.
+Typically, you’d use a header rather than the body to order things; or
+maybe a part of the body. So you could replace this expression with:
+
+Java
+from("direct:start")
+.resequence(header("mySeqNo"))
+.to("mock:result")
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <header>mySeqNo</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+This reorders messages using a custom sequence number in the header
+named `mySeqNo`.
+
+## Allow Duplicates
+
+When allowing duplicates, then the resequencer retains the duplicate
+message instead of keeping only the last duplicated message.
+
+In batch mode, you can turn on duplicates as follows:
+
+Java
+from("direct:start")
+.resequence(header("mySeqNo")).allowDuplicates()
+.to("mock:result")
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <batchConfig allowDuplicates="true"/>
+        <header>mySeqNo</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+## Reverse Ordering
+
+You can reverse the expression ordering. By default, the order is based
+on `0..9,A..Z`, which would let messages with low numbers be ordered
+first, and thus also outgoing first. In some cases, you want to reverse
+the ordering.
+
+In batch mode, you can turn on reverse as follows:
+
+Java
+from("direct:start")
+.resequence(header("mySeqNo")).reverse()
+.to("mock:result")
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <batchConfig reverse="true"/>
+        <header>mySeqNo</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+## Ignoring invalid messages
+
+The Resequencer throws a `CamelExchangeException` if the incoming
+Exchange is not valid for the resequencer, such as when the expression
+cannot be evaluated due to a missing header.
+
+You can ignore these kinds of errors, and let the Resequencer skip the
+invalid Exchange.
+
+To do this, you do as follows:
+
+Java
+from("direct:start")
+.resequence(header("seqno")).batch()
+// ignore invalid exchanges (they are discarded)
+.ignoreInvalidExchanges()
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <batchConfig ignoreInvalidExchanges="true"/>
+        <header>seqno</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+This option is available for both batch and stream mode.
+
+## Resequence JMS messages based on JMSPriority
+
+You can use the Resequencer to resequence messages from JMS queues
+based on `JMSPriority`. For that to work, you need to use the two
+options `allowDuplicates` and `reverse`.
+
+ from("jms:queue:foo")
+ // sort by JMSPriority by allowing duplicates (the message can have the same JMSPriority)
+ // and use reverse ordering so 9 is the first output (most important), and 0 is the last
+ // use batch mode and fire every 3rd second
+ .resequence(header("JMSPriority")).batch().timeout(3000).allowDuplicates().reverse()
+ .to("mock:result");
+
+Notice this is **only** possible in the `batch` mode of the Resequencer.
+
+# Stream Resequencing
+
+In streaming mode, the Resequencer will send out messages as soon
+as possible when a message with the next expected sequence number
+has arrived.
+
+The streaming mode requires the messages to be re-ordered based on
+integer numeric values that are ordered 1,2,3…N.
+
+The following example uses the header seqnum for the ordering:
+
+Java
+from("direct:start")
+.resequence(header("seqnum")).stream()
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <streamConfig/>
+        <header>seqnum</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+The Resequencer keeps pending messages in a backlog. The
+default capacity is 1000 elements, which can be configured:
+
+Java
+from("direct:start")
+.resequence(header("seqnum")).stream().capacity(5000).timeout(4000)
+.to("mock:result")
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <streamConfig capacity="5000" timeout="4000"/>
+        <header>seqnum</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+This uses a capacity of 5000 elements. And the timeout has been set to 4
+seconds. In case of a timeout, then the resequencer disregards the
+current expected sequence number, and moves to the next expected number.
+
+## How streaming mode works
+
+The stream-processing resequencer algorithm is based on the detection of
+gaps in a message stream rather than on a fixed batch size. Gap
+detection in combination with timeouts removes the constraint of having
+to know the number of messages of a sequence (i.e., the batch size) in
+advance. Messages must contain a unique sequence number for which a
+predecessor and a successor are known.
+
+For example, a message with the sequence number 3 has a predecessor
+message with the sequence number 2 and a successor message with the
+sequence number 4. The message sequence 2,3,5 has a gap because the
+successor of 3 is missing. The resequencer therefore has to retain
+message 5 until message 4 arrives (or a timeout occurs).
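
The gap-detection idea can be sketched with a small buffer (an illustration only; Camel's actual algorithm also handles timeouts, capacity, and pluggable comparators, and the class name is made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class GapDetectingResequencer {

    // Out-of-order messages are buffered until their predecessors arrive
    private final TreeMap<Long, String> backlog = new TreeMap<>();
    private long nextExpected = 1;

    // Offer a message with its sequence number; returns every message that
    // can now be delivered in order (empty while a gap remains)
    public List<String> offer(long seqNo, String body) {
        backlog.put(seqNo, body);
        List<String> deliverable = new ArrayList<>();
        while (backlog.containsKey(nextExpected)) {
            deliverable.add(backlog.remove(nextExpected));
            nextExpected++;
        }
        return deliverable;
    }

    public static void main(String[] args) {
        GapDetectingResequencer resequencer = new GapDetectingResequencer();
        System.out.println(resequencer.offer(2, "two"));   // buffered: gap at 1
        System.out.println(resequencer.offer(1, "one"));   // releases 1 and 2
    }
}
```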
+
+If the maximum time difference between messages (with
+successor/predecessor relationship with respect to the sequence number)
+in a message stream is known, then the Resequencer timeout parameter
+should be set to this value.
+
+In this case, it is guaranteed that all messages of a stream are
+delivered in correct order to the next processor. The lower the timeout
+value is compared to the out-of-sequence time difference, the higher is
+the probability for out-of-sequence messages delivered by this
+Resequencer. Large timeout values should be supported by sufficiently
+high capacity values. The capacity parameter is used to prevent the
+Resequencer from running out of memory.
+
+## Using custom streaming mode sequence expression
+
+By default, the stream Resequencer expects long sequence numbers, but
+other sequence numbers types can be supported as well by providing a
+custom expression.
+
+    public class MyFileNameExpression implements Expression {
+
+        public String getFileName(Exchange exchange) {
+            return exchange.getIn().getBody(String.class);
+        }
+
+        public Object evaluate(Exchange exchange) {
+            // parse the file name with the YYYYMMDD-DNNN pattern
+            String fileName = getFileName(exchange);
+            String[] files = fileName.split("-D");
+            Long answer = Long.parseLong(files[0]) * 1000 + Long.parseLong(files[1]);
+            return answer;
+        }
+
+        @Override
+        public <T> T evaluate(Exchange exchange, Class<T> type) {
+            Object result = evaluate(exchange);
+            return exchange.getContext().getTypeConverter().convertTo(type, result);
+        }
+    }
+
+And then you can use this expression in a Camel route:
+
+ from("direct:start")
+ .resequence(new MyFileNameExpression()).stream().timeout(2000)
+ .to("mock:result");
+
+## Rejecting old messages
+
+Rejecting old messages is used to prevent out-of-order messages from
+being sent downstream, regardless of the event that triggered delivery
+(capacity, timeout, etc.).
+
+If enabled, the Resequencer will throw a `MessageRejectedException` when
+an incoming Exchange is *older* (based on the `Comparator`) than the
+last message delivered.
+
+This provides an extra level of control in regard to delayed message
+ordering.
+
+In the example below, old messages are rejected:
+
+Java
+from("direct:start")
+.resequence(header("seqno")).stream().timeout(1000).rejectOld()
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <resequence>
+        <streamConfig timeout="1000" rejectOld="true"/>
+        <header>seqno</header>
+        <to uri="mock:result"/>
+      </resequence>
+    </route>
+
+If an old message is detected then Camel throws
+`MessageRejectedException`.
diff --git a/camel-resilience4j-eip.md b/camel-resilience4j-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..9171682842795aafcee4bfde04716d7faac35c5e
--- /dev/null
+++ b/camel-resilience4j-eip.md
@@ -0,0 +1,171 @@
+# Resilience4j-eip.md
+
+The Resilience4j EIP provides integration with
+[Resilience4j](https://resilience4j.readme.io/) to be used as [Circuit
+Breaker](#circuitBreaker-eip.adoc) in Camel routes.
+
+# Configuration options
+
+The Resilience4j EIP supports two options which are listed below:
+
+| Name | Description | Type |
+|------|-------------|------|
+| `resilienceConfiguration` | Configures the Resilience EIP. When the configuration is complete, use `end()` to return to the Resilience EIP. | `Resilience4jConfigurationDefinition` |
+| `resilienceConfigurationRef` | Refers to a Resilience configuration to use for configuring the Resilience EIP. | `String` |
+
+
+See [Resilience4j Configuration](#resilience4jConfiguration-eip.adoc)
+for all the configuration options on Resilience [Circuit
+Breaker](#circuitBreaker-eip.adoc).
+
+# Using Resilience4j EIP
+
+Below is an example route showing a Resilience4j circuit breaker that
+protects against a downstream HTTP operation with fallback.
+
+Java
+from("direct:start")
+.circuitBreaker()
+.to("http://fooservice.com/faulty")
+.onFallback()
+.transform().constant("Fallback message")
+.end()
+.to("mock:result");
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <circuitBreaker>
+        <to uri="http://fooservice.com/faulty"/>
+        <onFallback>
+          <transform>
+            <constant>Fallback message</constant>
+          </transform>
+        </onFallback>
+      </circuitBreaker>
+      <to uri="mock:result"/>
+    </route>
+
+If the call to the downstream HTTP service fails and an exception is
+thrown, then the circuit breaker will react and execute the fallback
+route instead.
+
+If there was no fallback, then the circuit breaker will throw an
+exception.
+
+For more information about fallback, see
+[onFallback](#onFallback-eip.adoc).
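
The basic open/closed behaviour behind any circuit breaker, including Resilience4j, can be sketched as a tiny state machine (an illustration only with made-up names; Resilience4j adds a half-open state, sliding windows, and much more):

```java
import java.util.function.Supplier;

public class MiniCircuitBreaker {

    public enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private int consecutiveFailures;
    private State state = State.CLOSED;

    public MiniCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Invoke the protected call; when the circuit is open, or when the
    // call throws, answer with the fallback instead
    public String call(Supplier<String> protectedCall, Supplier<String> fallback) {
        if (state == State.OPEN) {
            return fallback.get();  // short-circuit: do not even attempt the call
        }
        try {
            String result = protectedCall.get();
            consecutiveFailures = 0;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                state = State.OPEN;  // trip the breaker
            }
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        MiniCircuitBreaker breaker = new MiniCircuitBreaker(2);
        System.out.println(breaker.call(() -> "ok", () -> "Fallback message"));
    }
}
```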
+
+## Configuring Resilience4j
+
+You can fine-tune Resilience4j by the many [Resilience4j
+Configuration](#resilience4jConfiguration-eip.adoc) options.
+
+For example, to use a 2-second execution timeout, you can do as follows:
+
+Java
+from("direct:start")
+.circuitBreaker()
+// use a 2-second timeout
+.resilience4jConfiguration().timeoutEnabled(true).timeoutDuration(2000).end()
+.log("Resilience processing start: ${threadName}")
+.to("http://fooservice.com/faulty")
+.log("Resilience processing end: ${threadName}")
+.end()
+.log("After Resilience ${body}");
+
+XML
+    <route>
+      <from uri="direct:start"/>
+      <circuitBreaker>
+        <resilience4jConfiguration timeoutEnabled="true" timeoutDuration="2000"/>
+        <log message="Resilience processing start: ${threadName}"/>
+        <to uri="http://fooservice.com/faulty"/>
+        <log message="Resilience processing end: ${threadName}"/>
+      </circuitBreaker>
+      <log message="After Resilience ${body}"/>
+    </route>
+
+In this example if calling the downstream service does not return a
+response within 2 seconds, a timeout is triggered, and the exchange will
+fail with a `TimeoutException`.
+
+## Camel’s Error Handler and Circuit Breaker EIP
+
+By default, the [Circuit Breaker](#circuitBreaker-eip.adoc) EIP handles
+errors by itself. This means that if the circuit breaker is open and the
+message fails, then Camel’s error handler does not react either.
+
+However, you can enable Camel’s error handler with the circuit breaker
+by enabling the `inheritErrorHandler` option, as shown:
+
+ // Camel's error handler that will attempt to redeliver the message 3 times
+ errorHandler(deadLetterChannel("mock:dead").maximumRedeliveries(3).redeliveryDelay(0));
+
+ from("direct:start")
+ .to("log:start")
+ // turn on Camel's error handler on circuit breaker so Camel can do redeliveries
+ .circuitBreaker().inheritErrorHandler(true)
+ .to("mock:a")
+ .throwException(new IllegalArgumentException("Forced"))
+ .end()
+ .to("log:result")
+ .to("mock:result");
+
+This example is from a test, where you can see the Circuit Breaker EIP
+block has been hardcoded to always fail by throwing an exception.
+Because the `inheritErrorHandler` has been enabled, then Camel’s error
+handler will attempt to call the Circuit Breaker EIP block again.
+
+That means the `mock:a` endpoint will receive the message again, for a
+total of `1 + 3 = 4` messages (first time + 3 redeliveries).
+
+If we turn off the `inheritErrorHandler` option (default) then the
+Circuit Breaker EIP will only be executed once because it handled the
+error itself.
+
+# Dependencies
+
+Camel provides the [Circuit Breaker](#circuitBreaker-eip.adoc) EIP in
+the route model, which allows to plug in different implementations.
+Resilience4j is one such implementation.
+
+Maven users will need to add the following dependency to their `pom.xml`
+to use this EIP:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-resilience4j</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-resilience4j.md b/camel-resilience4j.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f54b72ccf09f8a31b9a8c8a3ab336826758d251
--- /dev/null
+++ b/camel-resilience4j.md
@@ -0,0 +1,19 @@
+# Resilience4j.md
+
+**Since Camel 3.0**
+
+This component supports the Circuit Breaker EIP with the
+[Resilience4j](https://resilience4j.readme.io/) library.
+
+For more details, see the [Circuit Breaker
+EIP](#eips:circuitBreaker-eip.adoc) documentation.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-resilience4j</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-resilience4jConfiguration-eip.md b/camel-resilience4jConfiguration-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..aff9eb75bbe920553872cae70fead2296573b5b1
--- /dev/null
+++ b/camel-resilience4jConfiguration-eip.md
@@ -0,0 +1,11 @@
+# Resilience4jConfiguration-eip.md
+
+This page documents all the specific options for the
+[Resilience4j](#resilience4j-eip.adoc) EIP.
+
+# Exchange properties
+
+# Example
+
+See [Resilience4j](#resilience4j-eip.adoc) EIP for details how to use
+this EIP.
diff --git a/camel-resourceresolver-github.md b/camel-resourceresolver-github.md
new file mode 100644
index 0000000000000000000000000000000000000000..cf765a5b0631a2f49114ae968d2593890ec704e9
--- /dev/null
+++ b/camel-resourceresolver-github.md
@@ -0,0 +1,40 @@
+# Resourceresolver-github.md
+
+**Since Camel 3.11**
+
+A pluggable resource resolver that allows loading files from GitHub
+over the internet via the `https` protocol.
+
+The syntax is
+
+ github:organization:repository:branch:filename
+
+The default branch is `main`, so if you want to load from this branch,
+you can use the shorter syntax
+
+    github:organization:repository:filename
+
+For example, to load:
+`https://github.com/apache/camel-kamelets/blob/main/kamelets/aws-ddb-streams-source.kamelet.yaml`
+
+ github:apache:camel-kamelets:main:kamelets/aws-ddb-streams-source.kamelet.yaml
+
+Because the file is in the main branch, we can omit the branch:
+
+ github:apache:camel-kamelets:kamelets/aws-ddb-streams-source.kamelet.yaml
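+A minimal sketch of how these coordinates can map to a raw download URL
+(an illustration with hypothetical names, not the resolver's actual
+code; it assumes the filename contains no `:` when the branch is
+omitted):

```java
public class GitHubCoordinates {

    // Maps github:organization:repository[:branch]:filename coordinates to a
    // raw.githubusercontent.com URL. Simplification: when the branch is
    // omitted, the filename itself must not contain ':'.
    static String toRawUrl(String resource) {
        String[] parts = resource.substring("github:".length()).split(":", 4);
        String organization = parts[0];
        String repository = parts[1];
        String branch = parts.length == 4 ? parts[2] : "main"; // default branch
        String filename = parts.length == 4 ? parts[3] : parts[2];
        return "https://raw.githubusercontent.com/" + organization + "/" + repository
                + "/" + branch + "/" + filename;
    }

    public static void main(String[] args) {
        System.out.println(toRawUrl("github:apache:camel-kamelets:kamelets/aws-ddb-streams-source.kamelet.yaml"));
    }
}
```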
+
+This resource resolver can potentially load any resources from GitHub
+that are in public repositories. It’s not recommended for production
+usage but is great for development and demo purposes.
+
+# Resolving from gist
+
+You can also load resources from a gist.
+
+The syntax is
+
+ gist:user:id:cid:fileName
+
+For example:
+
+ gist:davsclaus:477ddff5cdeb1ae03619aa544ce47e92:cd1be96034748e42e43879a4d27ed297752b6115:mybeer.xml
diff --git a/camel-rest-openapi.md b/camel-rest-openapi.md
index 4287d260caa278f8d55e52ebaa3bd212ea7d27f0..e154b5a0e8c9b9d516ca40cfa0ab203b0c3eac52 100644
--- a/camel-rest-openapi.md
+++ b/camel-rest-openapi.md
@@ -47,10 +47,10 @@ failing that OpenApi’s own resource loading support.
This component does not act as an HTTP client. It delegates that to
another component mentioned above. The lookup mechanism searches for a
-single component that implements the *RestProducerFactory* interface and
-uses that. If the CLASSPATH contains more than one, then the property
-`componentName` should be set to indicate which component to delegate
-to.
+single component that implements the `RestProducerFactory` interface and
+uses that. If the `_CLASSPATH_` contains more than one, then the
+property `componentName` should be set to indicate which component to
+delegate to.
Most of the configuration is taken from the OpenApi specification, but
the option exists to override those by specifying them on the component
@@ -66,9 +66,9 @@ and implement the required *RestProducerFactory* interface — as do the
components listed at the top.
If you do not specify the *componentName* at either component or
-endpoint level, CLASSPATH is searched for a suitable delegate. There
-should be only one component present on the CLASSPATH that implements
-the *RestProducerFactory* interface for this to work.
+endpoint level, `_CLASSPATH_` is searched for a suitable delegate. There
+should be only one component present on the `_CLASSPATH_` that
+implements the `RestProducerFactory` interface for this to work.
This component’s endpoint URI is lenient which means that in addition to
message headers you can specify REST operation’s parameters as endpoint
@@ -77,22 +77,62 @@ makes sense to use this feature only for parameters that are indeed
constant for all invocations — for example API version in path such as
`/api/{version}/users/{id}`.
-# Example: PetStore
+# Usage
+
+# Request validation
+
+API requests can be validated against the configured OpenAPI
+specification before they are sent by setting the
+`requestValidationEnabled` option to `true`. Validation is provided by
+the
+[swagger-request-validator](https://bitbucket.org/atlassian/swagger-request-validator/src/master/).
+
+The validator checks for the following conditions:
+
+- request body - Checks if the request body is required and whether
+ there is any body on the Camel Exchange.
+
+- valid json - Checks, if the content-type is `application/json`, that
+  the message body can be parsed as valid JSON.
+
+- content-type - Validates whether the `Content-Type` header for the
+ request is valid for the API operation. The value is taken from the
+ `Content-Type` Camel message exchange header.
+
+- request parameters - Validates whether an HTTP header required by
+ the API operation is present. The header is expected to be present
+ among the Camel message exchange headers.
+
+- query parameters - Validates whether an HTTP query parameter
+ required by the API operation is present. The query parameter is
+ expected to be present among the Camel message exchange headers.
+
+If any of the validation checks fail, then a
+`RestOpenApiValidationException` is thrown. The exception object has a
+`getValidationErrors` method that returns the error messages from the
+validator.
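+The shape of such a check can be sketched in plain Java (hypothetical
+names; the real validation is performed by swagger-request-validator,
+not by code like this):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of a required-parameter check performed against the Camel message
// exchange headers: collect an error message per missing required header.
public class RequiredHeaderCheckSketch {

    static List<String> missingHeaders(List<String> required, Map<String, Object> exchangeHeaders) {
        return required.stream()
                .filter(name -> !exchangeHeaders.containsKey(name))
                .map(name -> "Missing required header: " + name)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "petId" is required by the operation but absent from the exchange headers
        System.out.println(missingHeaders(List.of("petId"), Map.of("Content-Type", "application/json")));
    }
}
```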
+
+# Examples
+
+## PetStore
Checkout the `rest-openapi-simple` example project in the
-[https://github.com/apache/camel-spring-boot-examples](https://github.com/apache/camel-spring-boot-examples) repository.
+[camel-spring-boot-examples](https://github.com/apache/camel-spring-boot-examples)
+repository.
For example, if you wanted to use the
[*PetStore*](https://petstore3.swagger.io/api/v3/) provided REST API
simply reference the specification URI and desired operation id from the
OpenApi specification or download the specification and store it as
-`openapi.json` (in the root) of CLASSPATH that way it will be
+`openapi.json` (in the root of the `_CLASSPATH_`); that way it will be
automatically used. Let’s use the [HTTP](#http-component.adoc) component
to perform all the requests and Camel’s excellent support for Spring
Boot.
Here are our dependencies defined in Maven POM file:
+**Example pom.xml**
+
org.apache.camel.springboot
camel-http-starter
@@ -103,7 +143,7 @@ Here are our dependencies defined in Maven POM file:
camel-rest-openapi-starter
-Start by defining a *RestOpenApiComponent* bean:
+Start by defining a `RestOpenApiComponent` bean:
@Bean
public Component petstore(CamelContext camelContext) {
@@ -123,7 +163,7 @@ same manner (using `application.properties`).
In this example, there is no need to explicitly associate the `petstore`
component with the `HttpComponent` as Camel will use the first class on
-the CLASSPATH that implements `RestProducerFactory`. However, if a
+the `_CLASSPATH_` that implements `RestProducerFactory`. However, if a
different component is required, then calling
`petstore.setComponentName("http")` would use the named component from
the Camel registry.
@@ -138,39 +178,6 @@ invoke PetStore REST methods:
return template.requestBodyAndHeader("petstore:getPetById", null, "petId", petId);
}
-# Request validation
-
-API requests can be validated against the configured OpenAPI
-specification before they are sent by setting the
-`requestValidationEnabled` option to `true`. Validation is provided by
-the
-[swagger-request-validator](https://bitbucket.org/atlassian/swagger-request-validator/src/master/).
-
-The validator checks for the following conditions:
-
-- request body - Checks if the request body is required and whether
- there is any body on the Camel Exchange.
-
-- valid json - Checks if the content-type is `application/json` that
- the message body can be parsed as valid JSon.
-
-- content-type - Validates whether the `Content-Type` header for the
- request is valid for the API operation. The value is taken from the
- `Content-Type` Camel message exchange header.
-
-- request parameters - Validates whether an HTTP header required by
- the API operation is present. The header is expected to be present
- among the Camel message exchange headers.
-
-- query parameters - Validates whether an HTTP query parameter
- required by the API operation is present. The query parameter is
- expected to be present among the Camel message exchange headers.
-
-If any of the validation checks fail, then a
-`RestOpenApiValidationException` is thrown. The exception object has a
-`getValidationErrors` method that returns the error messages from the
-validator.
-
## Component Configurations
diff --git a/camel-rest.md b/camel-rest.md
index a9057afa6c7c55c9843af21eb5249da2017f885b..5efe3c4e68e33934ab57ad8d477145cb09b749a7 100644
--- a/camel-rest.md
+++ b/camel-rest.md
@@ -38,7 +38,9 @@ The following components support the REST producer:
- camel-vertx-http
-# Path and uriTemplate syntax
+# Usage
+
+## Path and uriTemplate syntax
The path and uriTemplate option is defined using a REST syntax where you
define the REST context path using support for parameters.
@@ -69,7 +71,9 @@ have two REST services configured using uriTemplates.
from("rest:get:hello:/french/{me}")
.transform().simple("Bonjour ${header.me}");
-# Rest producer examples
+# Examples
+
+## Rest producer examples
You can use the REST component to call REST services like any other
Camel component.
@@ -107,7 +111,7 @@ use as the HTTP client, for example to use http, you can do:
from("direct:start")
.to("rest:get:hello/{me}");
-# Rest producer binding
+## Rest producer binding
The REST producer supports binding using JSON or XML like the rest-dsl
does.
@@ -150,7 +154,7 @@ For example, if the REST service returns a JSON payload that binds to
You must configure `outType` option if you want POJO binding to happen
for the response messages received from calling the REST service.
-# More examples
+## More examples
See Rest DSL, which offers more examples and how you can use the Rest
DSL to define those in a nicer, restful way.
diff --git a/camel-resume-strategies.md b/camel-resume-strategies.md
new file mode 100644
index 0000000000000000000000000000000000000000..bc6acc9245c545276e30146379f9e635ece55343
--- /dev/null
+++ b/camel-resume-strategies.md
@@ -0,0 +1,245 @@
+# Resume-strategies.md
+
+The resume strategies allow users to implement logic that points the
+consumer part of the routes to the last point of consumption. This
+allows Camel to skip reading and processing data that has already been
+consumed.
+
+The resume strategies can be used to allow quicker stop and resume
+operations when consuming large data sources. For instance, imagine a
+scenario where the file consumer is reading a large file. Without a
+resume strategy, stopping and starting Camel would cause the consumer in
+the File component to read all the bytes of the given file at the
+initial offset (that is, offset 0). A resume strategy allows the
+integration to point the consumer to the exact offset at which to
+resume operations.
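+The core idea can be sketched with plain Java (an illustration of the
+concept only, not Camel's file consumer code): instead of re-reading a
+file from offset 0, seek to the last stored offset and continue from
+there.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: resume reading a file from a previously stored offset.
public class ResumeFromOffsetSketch {

    static String readFrom(Path file, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(offset); // skip everything that was already consumed
            byte[] rest = new byte[(int) (raf.length() - offset)];
            raf.readFully(rest);
            return new String(rest);
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("resume", ".txt");
        Files.writeString(file, "already-consumed|new-data");
        // resume right after the consumed prefix
        System.out.println(readFrom(file, "already-consumed|".length()));
    }
}
```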
+
+Support for resume varies according to the component. Initially, the
+support is available for the following components:
+
+- [camel-atom](#components::atom-component.adoc)
+
+- [camel-aws2-kinesis](#components::aws2-kinesis-component.adoc)
+
+- [camel-cassandracql](#components::cql-component.adoc)
+
+- [camel-couchbase](#components::couchbase-component.adoc)
+
+- [camel-couchdb](#components::couchdb-component.adoc)
+
+- [camel-file](#components::file-component.adoc)
+
+- [camel-kafka](#components::kafka-component.adoc)
+
+- [camel-rss](#components::rss-component.adoc)
+
+The resume strategies come in three parts:
+
+- A DSL method that marks the route as supporting resume operations
+ and points to an instance of a strategy implementation.
+
+- A set of core infrastructure that allows integrations to implement
+  different types of strategies.
+
+- Basic strategy implementations that can be extended to implement
+  the specific resume strategies required by the integrations.
+
+# The DSL method
+
+The route needs to use the `resumable()` method, followed by a strategy
+configuration passed via `configuration`. It is also possible to use
+`resumableStrategy` to point to an instance of the resume strategy
+in use, although this is much more complex. The vast majority of
+cases should use a `configuration`, in which case Camel will do the
+heavy lifting for you.
+
+Using the resume API with the configuration should look like this:
+
+ KafkaResumeStrategyConfigurationBuilder kafkaConfigurationBuilder = KafkaResumeStrategyConfigurationBuilder.newBuilder()
+ .withBootstrapServers("kafka-address:9092")
+ .withTopic("offset")
+ .withProducerProperty("max.block.ms", "10000")
+ .withMaxInitializationDuration(Duration.ofSeconds(5))
+ .withResumeCache(new MyChoiceOfResumeCache<>(100));
+
+ from("some:component")
+ .resumable().configuration(kafkaConfigurationBuilder)
+ .process(this::process);
+
+## Configuring via beans
+
+This instance can be bound in the Context registry as follows:
+
+ getCamelContext().getRegistry().bind("testResumeStrategy", new MyTestResumeStrategy());
+ getCamelContext().getRegistry().bind("resumeCache", new MyChoiceOfResumeCache<>(100));
+
+ from("some:component")
+ .resumable("testResumeStrategy")
+ .process(this::process);
+
+Or the instance can be constructed as follows:
+
+ getCamelContext().getRegistry().bind("resumeCache", new MyChoiceOfResumeCache<>(100));
+
+ from("some:component")
+ .resumable(new MyTestResumeStrategy())
+ .process(this::process)
+
+In some circumstances, such as when dealing with File I/O, it may be
+necessary to set the offset manually. There are **supporting classes**
+that can help work with resumables:
+
+- `org.apache.camel.support.Resumables` - resumables handling support
+
+- `org.apache.camel.support.Offsets` - offset handling support
+
+## Intermittent Mode
+
+In some cases, it may be necessary to avoid updating the offset for
+every exchange. You can enable the intermittent mode to modify the route
+behavior so that missing offsets will not cause an exception:
+
+ from("some:component")
+ .resumable(new MyTestResumeStrategy()).intermittent(true)
+ .process(this::process)
+
+# Builtin Resume Strategies
+
+Camel comes with a few builtin strategies that can be used to store,
+retrieve and update the offsets. The following strategies are available:
+
+- `SingleNodeKafkaResumeStrategy`: a resume strategy from the
+  `camel-kafka` component that uses Kafka as the store for the offsets
+  and is suitable for single-node integrations.
+
+- `MultiNodeKafkaResumeStrategy`: a resume strategy from the
+  `camel-kafka` component that uses Kafka as the store for the offsets
+  and is suitable for multi-node integrations (i.e., integrations
+  running on clusters using the
+  [camel-master](#components::master-component.adoc) component).
+
+## Configuring the Strategies
+
+Some of the builtin strategies may need additional configuration. This
+can be done using the configuration builders available for each
+strategy. For instance, to configure either one of the Kafka strategies
+mentioned earlier, the `KafkaResumeStrategyConfiguration` needs to be
+used. It can be created using a code similar to the following:
+
+ KafkaResumeStrategyConfiguration resumeStrategyConfiguration = KafkaResumeStrategyConfigurationBuilder.newBuilder()
+ .withBootstrapServers(bootStrapAddress)
+ .withTopic(kafkaTopic)
+ .build();
+
+## Implementing New Builtin Resume Strategies
+
+New builtin resume strategies can be created by implementing the
+`ResumeStrategy` interface. Check the code for
+`SingleNodeKafkaResumeStrategy` for implementation details.
+
+# Local Cache Support
+
+A sample local cache implemented using
+[Caffeine](https://github.com/ben-manes/caffeine).
+
+- `org.apache.camel.component.caffeine.resume.CaffeineCache`
+
+# Known Limitations
+
+When using the converters with the file component, beware of the
+differences in behavior between `Reader` and `InputStream`.
+
+For instance, the behavior of:
+
+ from("file:{{input.dir}}?noop=true&fileName={{input.file}}")
+ .resumable("testResumeStrategy")
+ .convertBodyTo(Reader.class)
+ .process(this::process);
+
+This is different from the behavior of:
+
+ from("file:{{input.dir}}?noop=true&fileName={{input.file}}")
+ .resumable("testResumeStrategy")
+ .convertBodyTo(InputStream.class)
+ .process(this::process);
+
+**Reason**: the `skip` method on a `Reader` skips characters, whereas
+the same method on an `InputStream` skips bytes.
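+The difference is easy to demonstrate with plain Java: skipping one
+unit past a multi-byte character gives different results for a `Reader`
+and an `InputStream`.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.StringReader;
import java.nio.charset.StandardCharsets;

// Reader.skip counts characters, InputStream.skip counts bytes;
// the two diverge as soon as a multi-byte character is involved.
public class SkipDemo {

    static int readAfterCharSkip(String data, int n) throws IOException {
        StringReader reader = new StringReader(data);
        reader.skip(n); // skips n characters
        return reader.read();
    }

    static int readAfterByteSkip(String data, int n) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
        in.skip(n); // skips n bytes
        return in.read();
    }

    public static void main(String[] args) throws IOException {
        String data = "\u00e91"; // 'é' is one char but two bytes in UTF-8
        System.out.println((char) readAfterCharSkip(data, 1)); // '1' - the whole char 'é' was skipped
        System.out.println(readAfterByteSkip(data, 1));        // 169 - only the first byte of 'é' was skipped
    }
}
```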
+
+# Pausable Consumers API
+
+The Pausable consumers API is a subset of the resume API that provides
+pause and resume features for supported components. With this API, it is
+possible to implement logic that controls the behavior of the consumer
+based on conditions that are external to the component. For instance, it
+makes it possible to pause the consumer if an external system becomes
+unavailable.
+
+Currently, support for pausable consumers is available for the following
+components:
+
+- [camel-kafka](#components::kafka-component.adoc)
+
+To use the API, you need an instance of a consumer listener along with
+a predicate that tests whether to continue:
+
+- `org.apache.camel.resume.ConsumerListener`: the consumer listener
+ interface. Camel already comes with pre-built consumer listeners,
+ but users in need of more complex behaviors can create their own
+ listeners.
+
+- a predicate that returns `true` if data consumption should resume or
+  `false` if consumption should be put on pause.
+
+Usage example:
+
+ from(from)
+ .pausable(new KafkaConsumerListener(), o -> canContinue())
+ .process(exchange -> LOG.info("Received an exchange: {}", exchange.getMessage().getBody()))
+ .to(destination);
+
+You can also integrate the pausable API and the consumer listener with
+the circuit breaker EIP. For instance, it’s possible to configure the
+circuit breaker so that it can manipulate the state of the listener
+based on success or on error conditions on the circuit.
+
+One example would be to create an event watcher that checks for
+downstream system availability. It watches for error events and, when
+they happen, it triggers a scheduled check. On success, it shuts down
+the scheduled check.
+
+An example implementation of this approach would be similar to this:
+
+ CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("pausable");
+
+ circuitBreaker.getEventPublisher()
+ .onSuccess(event -> {
+ LOG.info("Downstream call succeeded");
+ if (executorService != null) {
+ executorService.shutdownNow();
+ executorService = null;
+ }
+ })
+ .onError(event -> {
+ LOG.info(
+ "Downstream call error. Starting a thread to simulate checking for the downstream availability");
+
+ if (executorService == null) {
+ executorService = Executors.newSingleThreadScheduledExecutor();
+                // In a real-world scenario, instead of incrementing, it could be pinging a remote
+                // system or running a similar check to determine whether it's available.
+ executorService.scheduleAtFixedRate(() -> someCheckMethod(), 1, 1, TimeUnit.SECONDS);
+ }
+ });
+
+ // Binds the configuration to the registry
+ getCamelContext().getRegistry().bind("pausableCircuit", circuitBreaker);
+
+ from(from)
+ .pausable(new KafkaConsumerListener(), o -> canContinue())
+ .routeId("pausable-it")
+ .process(exchange -> LOG.info("Got record from Kafka: {}", exchange.getMessage().getBody()))
+ .circuitBreaker()
+ .resilience4jConfiguration().circuitBreaker("pausableCircuit").end()
+ .to(to)
+ .end();
diff --git a/camel-return-address.md b/camel-return-address.md
new file mode 100644
index 0000000000000000000000000000000000000000..c9b28c7167e2d5b5b2bc26ec9f2f9a4c5f8064a0
--- /dev/null
+++ b/camel-return-address.md
@@ -0,0 +1,53 @@
+# Return-address.md
+
+Camel supports the [Return
+Address](http://www.enterpriseintegrationpatterns.com/ReturnAddress.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How does a replier know where to send the reply?
+
+
+
+
+
+The request message should contain a Return Address that indicates where
+to send the reply message.
+
+Camel supports Return Address through messaging
+[Components](#ROOT:index.adoc) that provide this functionality, such as
+the [JMS](#ROOT:jms-component.adoc) component via the `JMSReplyTo`
+header.
+
+# Example
+
+In the example below, we send a message to the JMS cheese queue using
+`InOut` mode. This means that Camel will automatically configure the
+`JMSReplyTo` header with a temporary queue as the Return Address.
+
+Java
+
+    from("direct:foo")
+        .to(ExchangePattern.InOut, "jms:queue:cheese");
+
+XML
+
+    <route>
+      <from uri="direct:foo"/>
+      <to uri="jms:queue:cheese" pattern="InOut"/>
+    </route>
+
+You can also specify a named reply queue with the `replyTo` option
+(instead of a temporary queue). When doing so, `InOut` mode is
+implied:
+
+Java
+
+    from("direct:foo")
+        .to("jms:queue:cheese?replyTo=myReplyQueue");
+
+XML
+
+    <route>
+      <from uri="direct:foo"/>
+      <to uri="jms:queue:cheese?replyTo=myReplyQueue"/>
+    </route>
+
+# See Also
+
+See the related [Request Reply](#requestReply-eip.adoc) EIP.
diff --git a/camel-robotframework.md b/camel-robotframework.md
index f200965a1b67ed40b3e5d5a95cb0700a86492400..fb6778b96ec9fe99ebdae8b4cc88d6e02016ba72 100644
--- a/camel-robotframework.md
+++ b/camel-robotframework.md
@@ -31,7 +31,7 @@ Where **templateName** is the classpath-local URI of the template to
invoke; or the complete URL of the remote template (eg:
file://folder/myfile.robot).
-# Samples
+# Examples
For example, you could use something like:
diff --git a/camel-rocketmq.md b/camel-rocketmq.md
index fd4e2716a88d696b056bd55551b715b7c9a6e7fb..d8caad8b5c7dea58afed5378e2a428e9d1d1c41b 100644
--- a/camel-rocketmq.md
+++ b/camel-rocketmq.md
@@ -30,7 +30,9 @@ be sent to. In the case of consumers, the topic name determines the
topic will be subscribed. This component uses RocketMQ push consumer by
default.
-# InOut Pattern
+# Usage
+
+## InOut Pattern
InOut Pattern based on Message Key. When the producer sends the message,
a messageKey will be generated and append to the message’s key.
diff --git a/camel-rollback-eip.md b/camel-rollback-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd8968cfd640aea32dfdd637767b8619c55e0fac
--- /dev/null
+++ b/camel-rollback-eip.md
@@ -0,0 +1,78 @@
+# Rollback-eip.md
+
+The Rollback EIP is used for marking an
+[Exchange](#manual::exchange.adoc) for rollback and stopping further
+routing of the message.
+
+# Options
+
+# Exchange properties
+
+# Using Rollback
+
+We want to test a message for some conditions and force a rollback if
+the message may be faulty.
+
+In Java DSL we can do:
+
+Java
+
+    from("direct:start")
+        .choice().when(body().contains("error"))
+            .rollback("That do not work")
+        .otherwise()
+            .to("direct:continue");
+
+XML
+
+    <route>
+      <from uri="direct:start"/>
+      <choice>
+        <when>
+          <simple>${body} contains 'error'</simple>
+          <rollback message="That do not work"/>
+        </when>
+        <otherwise>
+          <to uri="direct:continue"/>
+        </otherwise>
+      </choice>
+    </route>
+
+When Camel is rolling back, then a `RollbackExchangeException` is
+thrown with the cause message `"That do not work"`.
+
+## Marking for Rollback only
+
+When a message is rolled back, then Camel will by default throw a
+`RollbackExchangeException` to cause the message to fail and rollback.
+
+This behavior can be modified to only mark for rollback, and not throw
+the exception.
+
+Java
+
+    from("direct:start")
+        .choice().when(body().contains("error"))
+            .markRollbackOnly()
+        .otherwise()
+            .to("direct:continue");
+
+XML
+
+    <route>
+      <from uri="direct:start"/>
+      <choice>
+        <when>
+          <simple>${body} contains 'error'</simple>
+          <rollback markRollbackOnly="true"/>
+        </when>
+        <otherwise>
+          <to uri="direct:continue"/>
+        </otherwise>
+      </choice>
+    </route>
+
+Then no exception is thrown, but the message is marked for rollback
+and routing is stopped.
+
+## Using Rollback with Transactions
+
+Rollback can be used together with
+[transactions](#transactional-client.adoc). For more details, see the
+[Transactional Client](#transactional-client.adoc) EIP.
diff --git a/camel-roundRobinLoadBalancer-eip.md b/camel-roundRobinLoadBalancer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..78d74aa02d659e2ce14a6161101ee0f96fcc8c99
--- /dev/null
+++ b/camel-roundRobinLoadBalancer-eip.md
@@ -0,0 +1,35 @@
+# RoundRobinLoadBalancer-eip.md
+
+Round Robin mode for the [Load Balancer](#loadBalance-eip.adoc) EIP.
+
+The exchanges are selected in a round-robin fashion. This is a
+well-known and classic policy, which spreads the load evenly.
+
+# Options
+
+# Exchange properties
+
+# Example
+
+We want to load balance between three endpoints in round-robin mode.
+
+This is done as follows in Java DSL:
+
+ from("direct:start")
+ .loadBalance().roundRobin()
+ .to("seda:x")
+ .to("seda:y")
+ .to("seda:z")
+ .end();
+
+In XML, you’ll have a route like this:
+
+    <route>
+      <from uri="direct:start"/>
+      <loadBalance>
+        <roundRobin/>
+        <to uri="seda:x"/>
+        <to uri="seda:y"/>
+        <to uri="seda:z"/>
+      </loadBalance>
+    </route>
+
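+The selection policy itself can be sketched in plain Java (an
+illustration of round-robin selection, not Camel's internal code): each
+exchange goes to the next endpoint in order, wrapping around at the
+end.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin selection: a counter advances on every message and the
// endpoint at counter modulo size is picked, spreading the load evenly.
public class RoundRobinSketch {

    private final List<String> endpoints;
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinSketch(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    String next() {
        // floorMod keeps the index valid even if the counter ever wraps around
        int index = Math.floorMod(counter.getAndIncrement(), endpoints.size());
        return endpoints.get(index);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(List.of("seda:x", "seda:y", "seda:z"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.next()); // seda:x, seda:y, seda:z, seda:x
        }
    }
}
```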
diff --git a/camel-routingSlip-eip.md b/camel-routingSlip-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..15a1dd3e1df27069b653038ccbf3995a73efbd5c
--- /dev/null
+++ b/camel-routingSlip-eip.md
@@ -0,0 +1,106 @@
+# RoutingSlip-eip.md
+
+Camel supports the [Routing
+Slip](https://www.enterpriseintegrationpatterns.com/patterns/messaging/RoutingTable.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How do we route a message consecutively through a series of processing
+steps when the sequence of steps is not known at design-time and may
+vary for each message?
+
+
+
+
+
+Attach a Routing Slip to each message, specifying the sequence of
+processing steps. Wrap each component with a special message router that
+reads the Routing Slip and routes the message to the next component in
+the list.
+
+# Options
+
+See the `cacheSize` option for more details on *how much cache* to use
+depending on how many or few unique endpoints are used.
+
+# Exchange properties
+
+# Using Routing Slip
+
+The Routing Slip EIP allows routing a message through a series of
+[endpoints](#manual::endpoint.adoc) (the slip).
+
+There can be one or more endpoint [uris](#manual::uris.adoc) in the slip.
+
+A slip can be empty, meaning that the message will not be routed
+anywhere.
+
+The following route takes any messages sent to the Apache ActiveMQ
+queue `cheese` and uses the header with key `whereTo` to compute the
+slip (the endpoint [uris](#manual::uris.adoc)).
+
+Java
+
+    from("activemq:cheese")
+        .routingSlip(header("whereTo"));
+
+XML
+
+    <route>
+      <from uri="activemq:cheese"/>
+      <routingSlip>
+        <header>whereTo</header>
+      </routingSlip>
+    </route>
+
+The value of the header ("whereTo") should be a comma-delimited string
+of endpoint URIs you wish the message to be routed to. The message will
+be routed in a [pipeline](#pipeline-eip.adoc) fashion, i.e., one after
+the other.
+
+The Routing Slip sets a property, `Exchange.SLIP_ENDPOINT`, on the
+`Exchange`, which contains the current endpoint as the message advances
+through the slip. This allows you to *know* how far the message has
+progressed in the slip.
+
+The Routing Slip will compute the slip **beforehand**, which means the
+slip is only computed once. If you need to compute the slip
+*on-the-fly*, then use the [Dynamic Router](#dynamicRouter-eip.adoc) EIP
+instead.
+
+## How is the slip computed
+
+The Routing Slip uses an [Expression](#manual::expression.adoc) to
+compute the value for the slip. The result of the expression can be one
+of:
+
+- `String`
+
+- `Collection`
+
+- `Iterator` or `Iterable`
+
+- Array
+
+If the value is a `String`, then the `uriDelimiter` is used to split
+the string into multiple URIs. The default delimiter is a comma, but it
+can be reconfigured.
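+A minimal sketch of that splitting step (illustration only, not Camel's
+implementation; note that `String.split` treats the delimiter as a
+regular expression in this sketch):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: how a String slip, such as a "whereTo" header value, can be
// split into endpoint URIs using the delimiter (a comma by default).
public class SlipSplitSketch {

    static List<String> computeSlip(String header, String uriDelimiter) {
        return Arrays.asList(header.split(uriDelimiter));
    }

    public static void main(String[] args) {
        System.out.println(computeSlip("direct:a,direct:b,direct:c", ","));
    }
}
```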
+
+## Ignore Invalid Endpoints
+
+The Routing Slip supports `ignoreInvalidEndpoints` (like [Recipient
+List](#recipientList-eip.adoc) EIP). You can use it to skip endpoints
+which are invalid.
+
+Java
+
+    from("direct:start")
+        .routingSlip("myHeader").ignoreInvalidEndpoints();
+
+XML
+
+    <route>
+      <from uri="direct:start"/>
+      <routingSlip ignoreInvalidEndpoints="true">
+        <header>myHeader</header>
+      </routingSlip>
+    </route>
+
+For example, suppose `myHeader` contains the two endpoints
+`direct:foo,xxx:bar`. The first endpoint is valid and works. The second
+one is invalid and will simply be ignored. Camel logs this at DEBUG
+level, so you can see why the endpoint was invalid.
diff --git a/camel-rss-dataformat.md b/camel-rss-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..88c6f28a46877f8e2f93c1c7e647afeb69e6cc6e
--- /dev/null
+++ b/camel-rss-dataformat.md
@@ -0,0 +1,51 @@
+# Rss-dataformat.md
+
+**Since Camel 2.1**
+
+The RSS component ships with an RSS dataformat that can be used to
+convert between String (as XML) and ROME RSS model objects.
+
+- marshal = from ROME `SyndFeed` to XML `String`
+
+- unmarshal = from XML `String` to ROME `SyndFeed`
+
+The purpose of this feature is to make it possible to use Camel’s lovely
+built-in expressions for manipulating RSS messages. As shown below, an
+XPath expression can be used to filter the RSS message:
+
+**Query parameters**
+
+If the URL for the RSS feed uses query parameters, this component will
+understand them as well. For example, if the feed uses `alt=rss`, then
+you can do
+`from("rss:http://someserver.com/feeds/posts/default?alt=rss&splitEntries=false&delay=1000").to("bean:rss");`
+
+# Options
+
+# Example
+
+A route using the RSS dataformat will look like this:
+
+ from("rss:file:src/test/data/rss20.xml?splitEntries=false&delay=1000")
+ .marshal().rss()
+ .to("mock:marshal");
+
+The purpose of this feature is to make it possible to use Camel’s
+built-in expressions for manipulating RSS messages. As shown below, an
+XPath expression can be used to filter the RSS message. In the following
+example, only entries with Camel in the title will get through the
+filter.
+
+ from("rss:file:src/test/data/rss20.xml?splitEntries=true&delay=100")
+ .marshal().rss()
+ .filter().xpath("//item/title[contains(.,'Camel')]")
+ .to("mock:result");
diff --git a/camel-rss.md b/camel-rss.md
index de9b6c7557bf535f8b61e8abf23a53b81a3767e7..c5d1291c3e561815aa158a008ca31180ff7c0ffe 100644
--- a/camel-rss.md
+++ b/camel-rss.md
@@ -25,7 +25,9 @@ The component currently only supports consuming feeds.
Where `rssUri` is the URI to the RSS feed to poll.
-# Exchange data types
+# Usage
+
+## Exchange data types
Camel initializes the In body on the Exchange with a ROME `SyndFeed`.
Depending on the value of the `splitEntries` flag, Camel returns either
@@ -38,20 +40,20 @@ a `SyndFeed` with one `SyndEntry` or a `java.util.List` of `SyndEntrys`.
-
+
-
+
splitEntries
true
A single entry from the current feed is
set in the exchange.
-
+
splitEntries
false
The entire list of entries from the
@@ -69,7 +71,7 @@ following example will be resolved:
from("rss:http://someserver.com/feeds/posts/default?alt=rss&splitEntries=false&delay=1000")
.to("bean:rss");
-# Filtering entries
+## Filtering entries
You can filter out entries using XPath, as shown in the data format
section above. You can also exploit Camel’s Bean Integration to
diff --git a/camel-rxjava.md b/camel-rxjava.md
new file mode 100644
index 0000000000000000000000000000000000000000..7744722f9c87b468dcd0b75cf1e50a7307bce2f9
--- /dev/null
+++ b/camel-rxjava.md
@@ -0,0 +1,17 @@
+# Rxjava.md
+
+**Since Camel 2.22**
+
+RxJava-based back-end for Camel’s reactive streams component.
+
+See more details in the camel-streams-component documentation.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-rxjava</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-saga-eip.md b/camel-saga-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..3798630f3c85a5f155cc32b56c1f6c3671fcc4d3
--- /dev/null
+++ b/camel-saga-eip.md
@@ -0,0 +1,562 @@
+# Saga-eip.md
+
+The Saga EIP provides a way to define a series of related actions in a
+Camel route that should be either completed successfully (**all of
+them**) or not executed/compensated. Saga implementations are able to
+coordinate **distributed services communicating using any transport**
+towards a globally **consistent outcome**.
+
+Although their main purpose is similar, Sagas are different from
+classical ACID distributed (XA) transactions. That is because the status
+of the different participating services is guaranteed to be consistent
+only at the end of the Saga and not in any intermediate step (lack of
+isolation).
+
+
+
+
+
+Conversely, Sagas are suitable for many use cases where usage of
+distributed transactions is discouraged. For example, services
+participating in a Saga are allowed to use any kind of datastore:
+classical databases or even NoSQL non-transactional databases. Sagas are
+also suitable for being used in stateless cloud services as they do not
+require a transaction log to be stored alongside the service.
+
+Unlike transactions, Sagas are also not required to be
+completed in a small amount of time, because they don’t use
+database-level locks. They can live for a longer time span: from a few
+seconds to several days. The Saga EIP implementation based on the
+MicroProfile sandbox spec is indeed called LRA, which stands for
+*"Long-Running Action"*. It also supports coordination of external
+**heterogeneous services**, written with any language/technology and
+also running outside a JVM.
+
+See the camel-lra component for more details.
+
+Sagas don’t use locks on data. Instead, they define the concept of
+"Compensating Action": an action that is executed when the standard
+flow encounters an error, with the purpose of restoring the status that
+was present before the flow execution. Compensating actions can be
+declared in Camel routes using the Java or XML DSL and will be invoked
+by Camel only when needed (if the saga is canceled due to an error).
+
+# Options
+
+# Exchange properties
+
+# Exchange headers
+
+The following exchange headers are set on each `Exchange` participating
+in a Saga (normal actions, compensating actions and completions):
+
+| Header | Type | Description |
+|---|---|---|
+| Long-Running-Action | String | A globally unique identifier for the Saga that can be propagated to remote systems using transport-level headers (e.g., HTTP). |
+
+# Saga Service Configuration
+
+The Saga EIP requires that a service implementing the interface
+`org.apache.camel.saga.CamelSagaService` is added to the `CamelContext`.
+
+Camel currently supports the following Saga Services:
+
+- `InMemorySagaService`: Is a **basic** implementation of the Saga EIP
+ that does not support advanced features (no remote context
+ propagation, no consistency guarantee in case of application
+ failure).
+
+- `LRASagaService`: Is a **fully-fledged** implementation of the Saga
+ EIP based on MicroProfile sandbox LRA specification that supports
+ remote context propagation and provides consistency guarantees in
+ case of application failure.
+
+## Using the In-Memory Saga Service
+
+The in-memory Saga service is not recommended for production
+environments. It does not support the persistence of the Saga status (it
+is kept only in-memory), so it cannot guarantee the consistency of Sagas
+in case of application failure (e.g., JVM crash).
+
+Also, when using an in-memory Saga service, Saga contexts cannot be
+propagated to remote services using transport-level headers (it can be
+done with other implementations).
+
+Users that want to use the in-memory saga service should add the
+following code to customize the Camel context.
+
+ context.addService(new org.apache.camel.saga.InMemorySagaService());
+
+This service belongs in the `camel-support` module.
+
+## Using the LRA Saga Service
+
+The LRA Saga Service is an implementation based on the MicroProfile
+sandbox LRA specification. It leverages an **external Saga coordinator**
+to control the execution of the various steps of the Saga. The proposed
+reference implementation for the LRA specification is the [Narayana LRA
+Coordinator](http://jbossts.blogspot.it/2017/12/narayana-lra-implementation-of-saga.html).
+Users can follow the instructions on the Narayana website to **start
+up a remote instance of the coordinator**.
+
+The URL of the LRA coordinator is a required parameter of the Camel LRA
+service. The Camel application and the LRA service communicate using the
+HTTP protocol.
+
+To use the LRA Saga service, maven users will need to add the following
+dependency to their `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-lra</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.y.z</version>
+    </dependency>
+
+A Camel REST context is also required to be present for the LRA
+implementation to work. You may add `camel-undertow` for example.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-undertow</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.y.z</version>
+    </dependency>
+
+The LRA implementation of the Saga EIP will add some web endpoints under
+the `/lra-participant` path. Those endpoints will be used by the LRA
+coordinator for calling back the application.
+
+ // Configure the LRA saga service
+ org.apache.camel.service.lra.LRASagaService sagaService = new org.apache.camel.service.lra.LRASagaService();
+ sagaService.setCoordinatorUrl("http://lra-service-host");
+ sagaService.setLocalParticipantUrl("http://my-host-as-seen-by-lra-service:8080/context-path");
+
+ // Add it to the Camel context
+ context.addService(sagaService);
+
+### Using the LRA Saga Service in Spring Boot
+
+Spring Boot users can use a simplified configuration model for the LRA
+Saga Service. Maven users can include the **camel-lra-starter** module
+in their project:
+
+    <dependency>
+        <groupId>org.apache.camel.springboot</groupId>
+        <artifactId>camel-lra-starter</artifactId>
+        <!-- use the same version as your Camel Spring Boot version -->
+        <version>x.y.z</version>
+    </dependency>
+
+    <dependency>
+        <groupId>org.apache.camel.springboot</groupId>
+        <artifactId>camel-undertow-starter</artifactId>
+        <!-- use the same version as your Camel Spring Boot version -->
+        <version>x.y.z</version>
+    </dependency>
+
+Configuration can be done in the Spring Boot `application.yaml` file:
+
+**application.yaml**
+
+ camel:
+ lra:
+ enabled: true
+ coordinator-url: http://lra-service-host
+ local-participant-url: http://my-host-as-seen-by-lra-service:8080/context-path
+
+Once done, the Saga EIP can be directly used inside Camel routes, and it
+will use the LRA Saga Service under the hood.
+
+# Examples
+
+Suppose you want to place a new order, and you have two distinct
+services in your system: one managing the orders and one managing the
+credit. Logically, you can place an order if you have enough credits for
+it.
+
+With the Saga EIP you can model the `direct:buy` route as a Saga
+composed of two distinct actions, one to create the order and one to
+take the credit.
+
+**Both actions must be executed, or none of them**: an order placed
+without enough credits would be an inconsistent outcome (and so would a
+payment without an order).
+
+ from("direct:buy")
+ .saga()
+ .to("direct:newOrder")
+ .to("direct:reserveCredit");
+
+**That’s it**. The buy action will not change for the rest of the
+examples. We’ll just see different options that can be used to model the
+"New Order" and "Reserve Credit" actions in the following.
+
+We have used a `direct` endpoint to model the two actions since this
+example can be used with both implementations of the Saga service. We
+could have used **http** or other kinds of endpoints with the LRA Saga
+service.
+
+Both services called by the `direct:buy` route can **participate in the
+Saga** and declare their compensating actions.
+
+ from("direct:newOrder")
+ .saga()
+ .propagation(SagaPropagation.MANDATORY)
+ .compensation("direct:cancelOrder")
+ .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION)
+ .bean(orderManagerService, "newOrder")
+ .log("Order ${body} created");
+
+Here the propagation mode is set to `MANDATORY`, meaning that any
+exchange flowing through this route must already be part of a saga. That
+is the case in this example, since the saga is created in the
+`direct:buy` route.
+
+The `direct:newOrder` route declares a compensating action called
+`direct:cancelOrder`, responsible for undoing the order in case the saga
+is canceled.
+
+Each exchange always contains an `Exchange.SAGA_LONG_RUNNING_ACTION`
+header that is used here as the id of the order. This identifies the
+order to delete in the corresponding compensating action, but it is not
+a requirement (options can be used as an alternative solution).
+
+The compensating action of `direct:newOrder` is `direct:cancelOrder`,
+and it’s shown below:
+
+ from("direct:cancelOrder")
+ .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION)
+ .bean(orderManagerService, "cancelOrder")
+ .log("Order ${body} cancelled");
+
+It is called automatically by the Saga EIP implementation when the order
+should be canceled.
+
+It should not terminate with an error. If an error is thrown in the
+`direct:cancelOrder` route, the EIP implementation should periodically
+retry the compensating action up to a certain limit. This means that
+**any compensating action must be idempotent**: it should take into
+account that it may be triggered multiple times and should not fail in
+any case.
+
+If compensation cannot be done after all retries, a manual intervention
+process should be triggered by the Saga implementation.
+
+It may happen that due to a delay in the execution of the
+`direct:newOrder` route the Saga is canceled by another party in the
+meantime. For instance, due to an error in a parallel route or a timeout
+at Saga level.
+
+So, when the compensating action `direct:cancelOrder` is called, it may
+not find the Order record that should be canceled. To guarantee full
+global consistency, **any main action and its corresponding compensating
+action must be commutative**: if the compensation occurs before the main
+action, it should have the same effect.
+
+Another possible approach, when using a commutative behavior is not
+possible, is to consistently fail in the compensating action until data
+produced by the main action is found (or the maximum number of retries
+is exhausted): this approach may work in many contexts, but it’s
+**heuristic**.
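The commutative approach above can be sketched in plain Java. This is a minimal, hypothetical in-memory store (the class and method names are illustrative, not Camel or Narayana API): cancelling an order that does not exist yet leaves a "tombstone", so that a late-arriving main action has no effect.

```java
import java.util.HashSet;
import java.util.Set;

class CommutativeOrderStore {
    private final Set<String> orders = new HashSet<>();
    private final Set<String> cancelled = new HashSet<>();

    // Main action: create the order unless its compensation already ran.
    boolean newOrder(String id) {
        if (cancelled.contains(id)) {
            return false; // cancellation arrived first: do nothing
        }
        return orders.add(id);
    }

    // Compensating action: idempotent, and safe to run before the main action.
    void cancelOrder(String id) {
        cancelled.add(id);
        orders.remove(id);
    }
}
```

With this design, running the compensation before the main action produces the same final state as running them in order, which is exactly the commutativity property discussed above.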
+
+The credit service may be implemented almost in the same way as the
+order service.
+
+ // action
+ from("direct:reserveCredit")
+ .saga()
+ .propagation(SagaPropagation.MANDATORY)
+ .compensation("direct:refundCredit")
+ .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION)
+ .bean(creditService, "reserveCredit")
+ .log("Credit ${header.amount} reserved in action ${body}");
+
+ // compensation
+ from("direct:refundCredit")
+ .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION)
+ .bean(creditService, "refundCredit")
+ .log("Credit for action ${body} refunded");
+
+Here the compensating action for a credit reservation is a refund.
+
+This completes the example. It can be run with both implementations of
+the Saga EIP, as it does not involve remote endpoints.
+
+Further options are shown next.
+
+## Handling Completion Events
+
+It is often required to do some processing when the Saga is completed.
+Compensation endpoints are invoked when something wrong happens and the
+Saga is canceled. Equivalently, **completion endpoints** can be invoked
+to do further processing when the Saga is completed successfully.
+
+For example, in the order service above, we may need to know when the
+order is completed (and the credit reserved) to actually start preparing
+the order. We do not want to start preparing the order if the payment is
+not done (unlike most modern CPUs that give you access to reserved
+memory before ensuring that you have rights to read it).
+
+This can be done easily with a modified version of the `direct:newOrder`
+endpoint:
+
+ from("direct:newOrder")
+ .saga()
+ .propagation(SagaPropagation.MANDATORY)
+ .compensation("direct:cancelOrder")
+ .completion("direct:completeOrder") // completion endpoint
+ .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION)
+ .bean(orderManagerService, "newOrder")
+ .log("Order ${body} created");
+
+ // direct:cancelOrder is the same as in the previous example
+
+ // called on successful completion
+ from("direct:completeOrder")
+ .transform().header(Exchange.SAGA_LONG_RUNNING_ACTION)
+ .bean(orderManagerService, "findExternalId")
+ .to("jms:prepareOrder")
+ .log("Order ${body} sent for preparation");
+
+When the Saga is completed, the order is sent to a JMS queue for
+preparation.
+
+Like compensating actions, completion actions may also be called
+multiple times by the Saga coordinator, especially in case of errors
+such as network failures. In this example, the service listening to the
+`prepareOrder` JMS queue should be prepared to handle possible
+duplicates.
+
+Check the Idempotent Consumer EIP for examples on how to handle
+duplicates.
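The deduplication the listener needs can be sketched in plain Java. This is a hypothetical listener, not the Camel Idempotent Consumer implementation: it skips messages whose saga id (taken from the Long-Running-Action header) has already been seen.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class OrderPreparationListener {
    private final Set<String> seenSagaIds = ConcurrentHashMap.newKeySet();
    int prepared; // orders actually sent to preparation

    // sagaId would come from the Long-Running-Action header of the JMS message
    void onMessage(String sagaId) {
        if (!seenSagaIds.add(sagaId)) {
            return; // duplicate delivery from the coordinator: ignore
        }
        prepared++;
    }
}
```

In a real deployment the set of seen ids would need to survive restarts (e.g., a persistent idempotent repository), which is what the Idempotent Consumer EIP provides.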
+
+## Using Custom Identifiers and Options
+
+The examples shown so far use the `Exchange.SAGA_LONG_RUNNING_ACTION`
+header as identifier for the resources `order` and `credit`. This is not
+desired approach, as it may pollute the business logic and the data
+model.
+
+An alternative approach is to use Saga options to "register" custom
+identifiers. For example, the credit service may be refactored as
+follows:
+
+ // action
+ from("direct:reserveCredit")
+ .bean(idService, "generateCustomId") // generate a custom ID and set it in the body
+ .to("direct:creditReservation")
+
+ // delegate action
+ from("direct:creditReservation")
+ .saga()
+ .propagation(SagaPropagation.SUPPORTS)
+ .option("CreditId", body()) // mark the current body as needed in the compensating action
+ .compensation("direct:creditRefund")
+ .bean(creditService, "reserveCredit")
+ .log("Credit ${header.amount} reserved. Custom Id used is ${body}");
+
+ // called only if the saga is canceled
+ from("direct:creditRefund")
+ .transform(header("CreditId")) // retrieve the CreditId option from headers
+ .bean(creditService, "refundCredit")
+ .log("Credit for Custom Id ${body} refunded");
+
+**Note how the previous listing is not using the
+`Exchange.SAGA_LONG_RUNNING_ACTION` header at all.**
+
+Since the `direct:creditReservation` endpoint can now also be called
+from outside a Saga, the propagation mode can be set to `SUPPORTS`.
+
+Multiple options can be declared in a Saga route.
+
+## Setting Timeouts
+
+Sagas are long-running actions, but this does not mean that they should
+not have a bounded timeframe to execute. **Setting timeouts on Sagas is
+always a good practice** as it guarantees that a Saga does not remain
+stuck forever in the case of machine failure.
+
+The Saga EIP implementation may have a default timeout set on all Sagas
+that don’t specify it explicitly.
+
+When the timeout expires, the Saga EIP will decide to **cancel the
+Saga** (and compensate all participants), unless a different decision
+has been taken before.
+
+Timeouts can be set on Saga participants as follows:
+
+ from("direct:newOrder")
+ .saga()
+ .timeout(1, TimeUnit.MINUTES) // newOrder requires that the saga is completed within 1 minute
+ .propagation(SagaPropagation.MANDATORY)
+ .compensation("direct:cancelOrder")
+ .completion("direct:completeOrder")
+ // ...
+ .log("Order ${body} created");
+
+All participants (e.g., the credit service, the order service) can set
+their own timeout. When a participant joins an existing saga, its
+timeout can shorten the saga's deadline: the moment the saga becomes
+eligible for cancellation is computed from the time the request enters
+the participant plus its timeout, and if this moment is earlier than the
+saga's current deadline, it becomes the new deadline. So when multiple
+participants define a timeout period, the earliest one triggers the
+cancellation of the saga.
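This earliest-deadline rule can be sketched in plain Java. The class and method names are illustrative, not the actual coordinator code:

```java
class SagaDeadline {
    private long deadlineMillis = Long.MAX_VALUE;

    // A participant joining at nowMillis with the given timeout may only
    // shorten the saga's deadline, never extend it.
    void join(long nowMillis, long timeoutMillis) {
        long candidate = nowMillis + timeoutMillis;
        if (candidate < deadlineMillis) {
            deadlineMillis = candidate;
        }
    }

    boolean eligibleForCancellation(long nowMillis) {
        return nowMillis >= deadlineMillis;
    }
}
```

For example, a saga-level timeout of 5 minutes combined with a participant that joins 10 seconds in with a 1 minute timeout makes the saga cancellable 70 seconds after it started.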
+
+A timeout can also be specified at saga level as follows:
+
+ from("direct:buy")
+ .saga()
+ .timeout(5, TimeUnit.MINUTES) // timeout at saga level
+ .to("direct:newOrder")
+ .to("direct:reserveCredit");
+
+## Choosing Propagation
+
+In the examples above, we have used the `MANDATORY` and `SUPPORTS`
+propagation modes. There is also the `REQUIRED` propagation mode, which
+is the default used when nothing else is specified.
+
+These propagation modes map 1:1 to the equivalent modes used in
+transactional contexts. Here’s a summary of their meaning:
+
+| Propagation | Description |
+|---|---|
+| REQUIRED | Join the existing saga or create a new one if it does not exist. |
+| REQUIRES_NEW | Always create a new saga. Suspend the old saga and resume it when the new one terminates. |
+| MANDATORY | A saga must be already present. The existing saga is joined. |
+| SUPPORTS | If a saga already exists, then join it. |
+| NOT_SUPPORTED | If a saga already exists, it is suspended and resumed when the current block completes. |
+| NEVER | The current block must never be invoked within a saga. |
+
+## Using Manual Completion (Advanced)
+
+When a Saga cannot be executed entirely in a synchronous way, e.g.,
+because it requires communication with external services over
+asynchronous channels, the completion mode cannot be set to `AUTO`
+(the default). That is because the saga is not completed when the
+exchange that creates it is done.
+
+This is often the case for Sagas that have long execution times (hours,
+days). In these cases, the `MANUAL` completion mode should be used.
+
+ from("direct:mysaga")
+ .saga()
+ .completionMode(SagaCompletionMode.MANUAL)
+ .completion("direct:finalize")
+ .timeout(2, TimeUnit.HOURS)
+ .to("seda:newOrder")
+ .to("seda:reserveCredit");
+
+ // Put here asynchronous processing for seda:newOrder and seda:reserveCredit
+ // They will send asynchronous callbacks to seda:operationCompleted
+
+ from("seda:operationCompleted") // an asynchronous callback
+ .saga()
+ .propagation(SagaPropagation.MANDATORY)
+ .bean(controlService, "actionExecuted")
+ .choice()
+ .when(body().isEqualTo("ok"))
+ .to("saga:complete") // complete the current saga manually (saga component)
+ .end()
+
+ // You can put here the direct:finalize endpoint to execute final actions
+
+Setting the completion mode to `MANUAL` means that the saga is not
+completed when the exchange is processed in the route `direct:mysaga`
+but it will last longer (max duration is set to 2 hours).
+
+When both asynchronous actions are completed, the saga is completed. The
+call to complete is done using the Camel Saga Component’s
+`saga:complete` endpoint. There is a similar endpoint for manually
+compensating the Saga (`saga:compensate`).
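A bean like the `controlService` used above could be sketched as follows. This is a hypothetical implementation (the class name and the "ok"/"pending" convention come from the example, everything else is an assumption): it reports "ok" only once both asynchronous actions have called back, at which point the route sends to `saga:complete`.

```java
import java.util.concurrent.atomic.AtomicInteger;

class ControlService {
    private final AtomicInteger completed = new AtomicInteger();
    private final int expected = 2; // seda:newOrder + seda:reserveCredit

    // Invoked from the seda:operationCompleted route for each callback.
    String actionExecuted() {
        return completed.incrementAndGet() >= expected ? "ok" : "pending";
    }
}
```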
+
+At first sight, the saga markers seem to add little value to the flow:
+it works also if you remove all Saga EIP configuration. But Sagas add a
+lot of value, since they guarantee that even in the presence of
+unexpected issues (servers crashing, messages lost, etc.) there will
+always be a consistent outcome: order placed and credit reserved, or
+none of them changed. In particular, if the Saga is not completed within
+2 hours, the compensation mechanism will take care of fixing the status.
+
+# Using Saga with XML DSL
+
+Saga features are also available for users that want to use the XML DSL.
+
+The following snippet shows an example:
+
+    <route>
+        <from uri="direct:start"/>
+        <saga>
+            <compensation uri="direct:compensation"/>
+            <completion uri="direct:completion"/>
+            <option key="myOptionKey">
+                <constant>myOptionValue</constant>
+            </option>
+            <option key="myOptionKey2">
+                <constant>myOptionValue2</constant>
+            </option>
+        </saga>
+        <to uri="direct:action1"/>
+        <to uri="direct:action2"/>
+    </route>
+
diff --git a/camel-saga.md b/camel-saga.md
index 13e11945567a1ab9eddd3710d868de7a1faab143..f2b2f4d9b858e47c66ec03607cbb2b297cbaac01 100644
--- a/camel-saga.md
+++ b/camel-saga.md
@@ -10,8 +10,8 @@ route using the Saga EIP.
The component should be used for advanced tasks, such as deciding to
complete or compensate a Saga with completionMode set to **MANUAL**.
-Refer to the Saga EIP documentation for help on using sagas in common
-scenarios.
+Refer to the [Saga EIP](#eips:saga-eip.adoc) documentation for help on
+using sagas in common scenarios.
# URI format
diff --git a/camel-salesforce.md b/camel-salesforce.md
index 2eb65db1fc91be10435903ba526d7313df60c082..a293ae34543f892fd5b9d57b947194ad67e37e4f 100644
--- a/camel-salesforce.md
+++ b/camel-salesforce.md
@@ -61,7 +61,9 @@ Spring Boot users should use the starter instead.
3. **Create routes**. Starting creating routes that interact with
salesforce!
-# Authenticating to Salesforce
+# Usage
+
+## Authenticating to Salesforce
The component supports three OAuth authentication flows:
@@ -84,40 +86,40 @@ For each of the flows, different sets of properties need to be set:
-
+
Property
Where to find it on Salesforce
Flow
-
-clientId
+
+clientId
Connected App, Consumer Key
All flows
-
-clientSecret
+
+clientSecret
Connected App, Consumer Secret
Username-Password, Refresh Token,
Client Credentials
-
-userName
+
+userName
Salesforce user username
Username-Password, JWT Bearer
Token
-
-password
+
+password
Salesforce user password
Username-Password
-
-refreshToken
+
+refreshToken
From OAuth flow callback
Refresh Token
-
-keystore
+
+keystore
Connected App, Digital
Certificate
JWT Bearer Token
@@ -136,9 +138,9 @@ The certificate used in JWT Bearer Token Flow can be a self-signed
certificate. The KeyStore holding the certificate and the private key
must contain only a single certificate-private key entry.
-# General Usage
+## General Usage
-## URI format
+### URI format
When used as a consumer, receiving streaming events, the URI scheme is:
@@ -186,7 +188,7 @@ For example, to fetch API limits, you can specify:
In addition, HTTP response status code and text are available as headers
`Exchange.HTTP_RESPONSE_CODE` and `Exchange.HTTP_RESPONSE_TEXT`.
-## Sending null values to salesforce
+### Sending null values to salesforce
By default, SObject fields with null values are not sent to salesforce.
In order to send null values to salesforce, use the `fieldsToNull`
@@ -194,7 +196,7 @@ property, as follows:
accountSObject.getFieldsToNull().add("Site");
-# Supported Salesforce APIs
+## Supported Salesforce APIs
Camel supports the following Salesforce APIs:
@@ -212,7 +214,7 @@ Camel supports the following Salesforce APIs:
- [Reports API](#ReportsAPI)
-## REST API
+### REST API
The following operations are supported:
@@ -308,7 +310,7 @@ Unless otherwise specified, DTO types for the following options are from
`org.apache.camel.component.salesforce.api.dto` or one if its
sub-packages.
-### Versions
+#### Versions
`getVersions`
@@ -320,7 +322,7 @@ root.
Type: `List`
-### Resources by Version
+#### Resources by Version
`getResources`
@@ -331,7 +333,7 @@ resource name and URI.
Type: `Map`
-### Limits
+#### Limits
`limits`
@@ -375,7 +377,7 @@ or `1` (no API limits consumed).
.setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes"))
.endChoice()
-### Recently Viewed Items
+#### Recently Viewed Items
`recent`
@@ -394,14 +396,14 @@ options in search.
-
+
Parameter
Type
Description
Default
Required
-
+
limit
int
An optional limit that specifies the
@@ -434,7 +436,7 @@ number of records to return. For example:
.split().body()
.log("${body.name} at ${body.attributes.url}");
-### Describe Global
+#### Describe Global
`getGlobalObjects`
@@ -446,7 +448,7 @@ maximum batch size permitted in queries.
Type: `GlobalObjects`
-### sObject Basic Information
+#### sObject Basic Information
`getBasicInfo`
@@ -461,14 +463,14 @@ Describes the individual metadata for the specified object.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectName
String
Name of SObject, e.g.
@@ -483,7 +485,7 @@ Describes the individual metadata for the specified object.
Type: `SObjectBasicInfo`
-### sObject Describe
+#### sObject Describe
`getDescription`
@@ -500,14 +502,14 @@ URLs, and child relationships for the Account object.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectName
String
Name of SObject, e.g.
@@ -522,7 +524,7 @@ URLs, and child relationships for the Account object.
Type: `SObjectDescription`
-### Retrieve SObject
+#### Retrieve SObject
`getSObject`
@@ -538,14 +540,14 @@ requires the `packages` option to be set.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectName
String
Name of SObject, e.g.
@@ -553,14 +555,14 @@ requires the `packages` option to be set.
x
-
+
sObjectId
String
Id of record to retrieve.
x
-
+
sObjectFields
String
Comma-separated list of fields to
@@ -568,7 +570,7 @@ retrieve
-
+
Body
AbstractSObjectBase
@@ -585,7 +587,7 @@ query salesforce. If supplied, overrides sObjectName and
Type: Subclass of `AbstractSObjectBase`
-### Retrieve SObject by External Id
+#### Retrieve SObject by External Id
`getSObjectWithId`
@@ -601,28 +603,28 @@ the `packages` option to be set.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectIdName
String
Name of External ID field
x
-
+
sObjectIdValue
String
External ID value
x
-
+
sObjectName
String
Name of SObject, e.g.
@@ -630,7 +632,7 @@ the `packages` option to be set.
x
-
+
Body
AbstractSObjectBase
@@ -647,7 +649,7 @@ query salesforce. If supplied, overrides sObjectName and
Type: Subclass of `AbstractSObjectBase`
-### sObject Blob Retrieve
+#### sObject Blob Retrieve
`getBlobField`
@@ -662,14 +664,14 @@ Retrieves the specified blob field from an individual record.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectBlobFieldName
String
@@ -677,7 +679,7 @@ style="text-align: left;">sObjectBlobFieldName
x
-
+
sObjectName
String
Name of SObject, e.g., Account
@@ -685,7 +687,7 @@ style="text-align: left;">sObjectBlobFieldName
Required if SObject not supplied in
body
-
+
sObjectId
String
Id of SObject
@@ -693,7 +695,7 @@ body
Required if SObject not supplied in
body
-
+
Body
AbstractSObjectBase
@@ -711,7 +713,7 @@ parameters will be used.
Type: `InputStream`
-### Create SObject
+#### Create SObject
`createSObject`
@@ -726,14 +728,14 @@ Creates a record in salesforce.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
AbstractSObjectBase or
String
@@ -741,7 +743,7 @@ Creates a record in salesforce.
x
-
+
sObjectName
String
Name of SObject, e.g.
@@ -758,7 +760,7 @@ Body.
Type: `CreateSObjectResult`
-### Update SObject
+#### Update SObject
`updateSObject`
@@ -773,14 +775,14 @@ Updates a record in salesforce.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
AbstractSObjectBase or
String
@@ -788,7 +790,7 @@ Updates a record in salesforce.
x
-
+
sObjectName
String
Name of SObject, e.g.
@@ -798,7 +800,7 @@ Body.
If Body is a
String
-
+
sObjectId
String
Id of record to update. Only used if
@@ -810,7 +812,7 @@ Camel cannot determine from Body.
-### Upsert SObject
+#### Upsert SObject
`upsertSObject`
@@ -825,14 +827,14 @@ Upserts a record by External ID.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
AbstractSObjectBase or
String
@@ -840,14 +842,14 @@ Upserts a record by External ID.
x
-
+
sObjectIdName
String
External ID field name.
x
-
+
sObjectIdValue
String
External ID value
@@ -855,7 +857,7 @@ Upserts a record by External ID.
If Body is a
String
-
+
sObjectName
String
Name of SObject, e.g.
@@ -872,7 +874,7 @@ Body.
Type: `UpsertSObjectResult`
-### Delete SObject
+#### Delete SObject
`deleteSObject`
@@ -887,14 +889,14 @@ Deletes a record in salesforce.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
AbstractSObjectBase
@@ -902,7 +904,7 @@ style="text-align: left;">AbstractSObjectBase
-
+
sObjectName
String
Name of SObject, e.g.
@@ -912,7 +914,7 @@ Body.
If Body is not an
AbstractSObjectBase instance
-
+
sObjectId
String
Id of record to delete.
@@ -923,7 +925,7 @@ Body.
-### Delete SObject by External Id
+#### Delete SObject by External Id
`deleteSObjectWithId`
@@ -938,14 +940,14 @@ Deletes a record in salesforce by External ID.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
AbstractSObjectBase
@@ -953,7 +955,7 @@ style="text-align: left;">AbstractSObjectBase
-
+
sObjectIdName
String
Name of External ID field
@@ -961,7 +963,7 @@ style="text-align: left;">AbstractSObjectBase
If Body is not an
AbstractSObjectBase instance
-
+
sObjectIdValue
String
External ID value
@@ -969,7 +971,7 @@ style="text-align: left;">AbstractSObjectBase
If Body is not an
AbstractSObjectBase instance
-
+
sObjectName
String
Name of SObject, e.g.
@@ -982,7 +984,7 @@ Body.
-### Query
+#### Query
`query`
@@ -999,14 +1001,14 @@ Runs a Salesforce SOQL query. If neither `sObjectClass` nor
-
+
Parameter
Type
Description
Default
Required
-
+
Body or
sObjectQuery
String
@@ -1014,7 +1016,7 @@ Runs a Salesforce SOQL query. If neither `sObjectClass` nor
x
-
+
streamQueryResult
Boolean
If true, returns a streaming
@@ -1024,7 +1026,7 @@ The sObjectClass option must reference an
false
-
+
sObjectClass
String
Fully qualified name of class to
@@ -1034,7 +1036,7 @@ deserialize response to. Usually a subclass of
-
+
sObjectName
String
Simple name of class to deserialize
@@ -1056,7 +1058,7 @@ Type: Instance of class supplied in `sObjectClass`, or
`CamelSalesforceQueryResultTotalSize` is set to the number of records
that matched the query.
-### Query More
+#### Query More
`queryMore`
@@ -1075,14 +1077,14 @@ response.
-
+
Parameter
Type
Description
Default
Required
-
+
Body or
sObjectQuery
String
@@ -1092,7 +1094,7 @@ found in a prior query result in the
X
-
+
sObjectClass
String
Fully qualified name of class to
@@ -1102,7 +1104,7 @@ deserialize response to. Usually a subclass of
-
+
sObjectName
String
Simple name of class to deserialize
@@ -1120,7 +1122,7 @@ option be set.
Type: Instance of class supplied in `sObjectClass`
-### Query All
+#### Query All
`queryAll`
@@ -1140,14 +1142,14 @@ based on the response.
-
+
Parameter
Type
Description
Default
Required
-
+
Body or
sObjectQuery
String
@@ -1155,7 +1157,7 @@ based on the response.
x
-
+
streamQueryResult
Boolean
If true, returns a streaming
@@ -1165,7 +1167,7 @@ The sObjectClass option must reference an
false
-
+
sObjectClass
String
Fully qualified name of class to
@@ -1175,7 +1177,7 @@ deserialize response to. Usually a subclass of
-
+
sObjectName
String
Simple name of class to deserialize
@@ -1194,7 +1196,7 @@ option be set.
Type: Instance of class supplied in `sObjectClass`, or
`Iterator` if `streamQueryResult` is true.
-### Search
+#### Search
`search`
@@ -1209,14 +1211,14 @@ Runs a Salesforce SOSL search
-
+
Parameter
Type
Description
Default
Required
-
+
Body or
sObjectSearch
String
@@ -1231,7 +1233,7 @@ Runs a Salesforce SOSL search
Type: `SearchResult2`
-### Submit Approval
+#### Submit Approval
`approval`
@@ -1246,14 +1248,14 @@ Submit a record or records (batch) for approval process.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
ApprovalRequest or
List<ApprovalRequest>
@@ -1262,7 +1264,7 @@ process
-
+
Approval.
Prefixed headers or endpoint options in
lieu of passing an ApprovalRequest in the body.
@@ -1325,7 +1327,7 @@ You could send a record for approval using:
final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class);
-### Get Approvals
+#### Get Approvals
`approvals`
@@ -1335,7 +1337,7 @@ Returns a list of all approval processes.
Type: `Approvals`
-### Composite
+#### Composite
`composite`
@@ -1356,14 +1358,14 @@ provided *reference*.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
SObjectComposite
Contains REST API sub-requests to be
@@ -1371,7 +1373,7 @@ executed.
x
-
+
rawPayload
Boolean
Any (un)marshaling of requests and
@@ -1379,7 +1381,7 @@ responses are assumed to be handled by the route
false
x
-
+
compositeMethod
String
HTTP method to use for rawPayload
@@ -1458,15 +1460,14 @@ For instance, you can have the following route:
The route directly creates the body as JSON and directly submit to
salesforce endpoint using `rawPayload=true` option.
-With this approach, you have the complete control on the Salesforce
-request.
+With this approach, you have complete control on the Salesforce request.
`POST` is the default HTTP method used to send raw Composite requests to
salesforce. Use the `compositeMethod` option to override to the other
supported value, `GET`, which returns a list of other available
composite resources.
-### Composite Tree
+#### Composite Tree
`composite-tree`
@@ -1482,14 +1483,14 @@ levels) in one go.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
SObjectTree
Contains REST API sub-requests to be
@@ -1547,7 +1548,7 @@ Let’s look at an example:
final String firstId = succeeded.get(0).getId();
-### Composite Batch
+#### Composite Batch
`composite-batch`
@@ -1563,14 +1564,14 @@ sub-requests in a single request.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
SObjectBatch
Contains sub-requests to be
@@ -1639,7 +1640,7 @@ Let’s look at an example:
final int updateStatus = deleteResult.getStatusCode(); // probably 204
final Object updateResultData = deleteResult.getResult(); // probably null
-### Retrieve Multiple Records with Fewer Round-Trips
+#### Retrieve Multiple Records with Fewer Round-Trips
`compositeRetrieveSObjectCollections`
@@ -1654,14 +1655,14 @@ Retrieve one or more records of the same object type.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectIds
List of String or comma-separated
string
@@ -1670,7 +1671,7 @@ objects to return. All IDs must belong to the same object type.
x
-
+
sObjectFields
List of String or comma-separated
string
@@ -1680,7 +1681,7 @@ read-level permissions to each field.
x
-
+
sObjectName
String
Type of SObject, e.g.
@@ -1688,7 +1689,7 @@ read-level permissions to each field.
x
-
+
sObjectClass
String
Fully qualified class name of DTO class
@@ -1706,7 +1707,7 @@ specified by the package option.
Type: `List` of class determined by `sObjectName` or `sObjectClass`
header
-### Create SObject Collections
+#### Create SObject Collections
`compositeCreateSObjectCollections`
@@ -1721,21 +1722,21 @@ Add up to 200 records. Mixed SObject types is supported.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
List of SObject
A list of SObjects to create
x
-
+
allOrNone
boolean
Indicates whether to roll back the
@@ -1752,7 +1753,7 @@ request.
Type: `List`
-### Update SObject Collections
+#### Update SObject Collections
`compositeUpdateSObjectCollections`
@@ -1767,21 +1768,21 @@ Update up to 200 records. Mixed SObject types is supported.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
List of SObject
A list of SObjects to update
x
-
+
allOrNone
boolean
Indicates whether to roll back the
@@ -1797,7 +1798,7 @@ with the independent update of other objects in the request.
Type: `List`
-### Upsert SObject Collections
+#### Upsert SObject Collections
`compositeUpsertSObjectCollections`
@@ -1813,21 +1814,21 @@ field. Mixed SObject types is not supported.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
List of SObject
A list of SObjects to upsert
x
-
+
allOrNone
boolean
Indicates whether to roll back the
@@ -1836,7 +1837,7 @@ with the independent upsert of other objects in the request.
false
-
+
sObjectName
String
Type of SObject, e.g.
@@ -1844,7 +1845,7 @@ with the independent upsert of other objects in the request.
x
-
+
sObjectIdName
String
Name of External ID field
@@ -1858,7 +1859,7 @@ with the independent upsert of other objects in the request.
Type: `List`
-### Delete SObject Collections
+#### Delete SObject Collections
`compositeDeleteSObjectCollections`
@@ -1873,14 +1874,14 @@ Delete up to 200 records. Mixed SObject types is supported.
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectIds or request
body
List of String or comma-separated
@@ -1890,7 +1891,7 @@ be deleted.
x
-
+
allOrNone
boolean
Indicates whether to roll back the
@@ -1907,7 +1908,7 @@ request.
Type: `List`
-### Get Event Schema
+#### Get Event Schema
`getEventSchema`
@@ -1925,14 +1926,14 @@ later.
-
+
Parameter
Type
Description
Default
Required
-
+
eventName
String
Name of event
@@ -1940,7 +1941,7 @@ later.
eventName or
eventSchemaId is required
-
+
eventSchemaId
String
ID of a schema
@@ -1948,7 +1949,7 @@ later.
eventName or
eventSchemaId is required
-
+
eventSchemaFormat
EventSchemaFormatEnum
EXPANDED: Apache Avro
@@ -1966,9 +1967,9 @@ later.
Type: `InputStream`
-## Apex REST API
+### Apex REST API
-### Invoke an Apex REST Web Service method
+#### Invoke an Apex REST Web Service method
`apexCall`
@@ -1999,14 +2000,14 @@ response.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
Map<String, Object>
if GET, otherwise String or
@@ -2017,7 +2018,7 @@ For other HTTP methods, the body is used for the HTTP body.
-
+
apexUrl
String
The portion of the endpoint URL after
@@ -2027,7 +2028,7 @@ For other HTTP methods, the body is used for the HTTP body.
Yes, unless supplied in
endpoint
-
+
apexMethod
String
The HTTP method (e.g. GET,
@@ -2035,7 +2036,7 @@ endpoint
GET
-
+
rawPayload
Boolean
If true, Camel will not serialize the
@@ -2043,7 +2044,7 @@ request or response bodies.
false
-
+
Header:
apexQueryParam.[paramName]
Object
@@ -2052,7 +2053,7 @@ passed in the endpoint.
-
+
sObjectName
String
Name of sObject (e.g.
@@ -2060,7 +2061,7 @@ passed in the endpoint.
-
+
sObjectClass
String
Fully qualified class name used to
@@ -2075,7 +2076,7 @@ deserialize the response
Type: Instance of class supplied in `sObjectClass` input header.
-## Bulk 2.0 API
+### Bulk 2.0 API
The Bulk 2.0 API has a simplified model over the original Bulk API. Use
it to quickly load a large amount of data into salesforce, or query a
@@ -2124,7 +2125,7 @@ following operations are supported:
- [bulk2GetAllQueryJobs](#bulk2GetAllQueryJobs) - Gets all query jobs.
-### Create a Job
+#### Create a Job
`bulk2CreateJob` Creates a bulk ingest job.
@@ -2137,14 +2138,14 @@ following operations are supported:
-
+
Parameter
Type
Description
Default
Required
-
+
Body
Job
Job to create
@@ -2158,7 +2159,7 @@ following operations are supported:
Type: `Job`
-### Upload a Batch of Job Data
+#### Upload a Batch of Job Data
`bulk2CreateBatch`
@@ -2173,14 +2174,14 @@ Adds a batch of data to an ingest job.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
InputStream or
String
@@ -2190,7 +2191,7 @@ headers.
Required if jobId not
supplied
-
+
jobId
String
Id of Job to create batch
@@ -2201,7 +2202,7 @@ under
-### Close a Job
+#### Close a Job
`bulk2CloseJob`
@@ -2217,14 +2218,14 @@ processed or aborted/deleted.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to close
@@ -2238,7 +2239,7 @@ processed or aborted/deleted.
Type: `Job`
-### Abort a Job
+#### Abort a Job
`bulk2AbortJob`
@@ -2253,14 +2254,14 @@ Aborts an ingest job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to abort
@@ -2274,7 +2275,7 @@ Aborts an ingest job.
Type: `Job`
-### Delete a Job
+#### Delete a Job
`bulk2DeleteJob`
@@ -2289,14 +2290,14 @@ Deletes an ingest job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to delete
@@ -2306,7 +2307,7 @@ Deletes an ingest job.
-### Get Job Successful Record Results
+#### Get Job Successful Record Results
`bulk2GetSuccessfulResults`
@@ -2321,14 +2322,14 @@ Gets successful results for an ingest job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to get results for
@@ -2343,7 +2344,7 @@ Gets successful results for an ingest job.
Type: `InputStream`
Contents: CSV data
-### Get Job Failed Record Results
+#### Get Job Failed Record Results
`bulk2GetFailedResults`
@@ -2358,14 +2359,14 @@ Gets failed results for an ingest job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to get results for
@@ -2380,7 +2381,7 @@ Gets failed results for an ingest job.
Type: `InputStream`
Contents: CSV data
-### Get Job Unprocessed Record Results
+#### Get Job Unprocessed Record Results
`bulk2GetUnprocessedRecords`
@@ -2395,14 +2396,14 @@ Gets unprocessed records for an ingest job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to get records for
@@ -2416,7 +2417,7 @@ Gets unprocessed records for an ingest job.
Type: `InputStream` Contents: CSV data
-### Get Job Info
+#### Get Job Info
`bulk2GetJob`
@@ -2431,14 +2432,14 @@ Gets an ingest Job.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
Job
Will use Id of supplied Job to retrieve
@@ -2447,7 +2448,7 @@ Job
Required if jobId not
supplied
-
+
jobId
String
Id of Job to retrieve
@@ -2462,7 +2463,7 @@ supplied in body
Type: `Job`
-### Get All Jobs
+#### Get All Jobs
`bulk2GetAllJobs`
@@ -2477,14 +2478,14 @@ Gets all ingest jobs.
-
+
Parameter
Type
Description
Default
Required
-
+
queryLocator
String
Used in subsequent calls if results
@@ -2503,7 +2504,7 @@ If the `done` property of the `Jobs` instance is false, there are
additional pages to fetch, and the `nextRecordsUrl` property contains
the value to be set in the `queryLocator` parameter on subsequent calls.
-### Create a Query Job
+#### Create a Query Job
`bulk2CreateQueryJob`
@@ -2518,14 +2519,14 @@ Gets a query job.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
QueryJob
QueryJob to create
@@ -2539,7 +2540,7 @@ Gets a query job.
Type: `QueryJob`
-### Get Results for a Query Job
+#### Get Results for a Query Job
`bulk2GetQueryJobResults`
@@ -2555,21 +2556,21 @@ Get bulk query job results. `jobId` parameter is required. Accepts
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to get results for
x
-
+
maxRecords
Integer
The maximum number of records to
@@ -2583,7 +2584,7 @@ size.
-
+
locator
locator
A string that identifies a specific set
@@ -2604,7 +2605,7 @@ Response message headers include `Sforce-NumberOfRecords` and
`Sforce-Locator` headers. The value of `Sforce-Locator` can be passed
into subsequent calls via the `locator` parameter.
-### Abort a Query Job
+#### Abort a Query Job
`bulk2AbortQueryJob`
@@ -2619,14 +2620,14 @@ Aborts a query job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to abort
@@ -2640,7 +2641,7 @@ Aborts a query job.
Type: `QueryJob`
-### Delete a Query Job
+#### Delete a Query Job
`bulk2DeleteQueryJob`
@@ -2655,14 +2656,14 @@ Deletes a query job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to delete
@@ -2672,7 +2673,7 @@ Deletes a query job.
-### Get Information About a Query Job
+#### Get Information About a Query Job
`bulk2GetQueryJob`
@@ -2687,14 +2688,14 @@ Gets a query job.
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to retrieve
@@ -2708,7 +2709,7 @@ Gets a query job.
Type: `QueryJob`
-### Get Information About All Query Jobs
+#### Get Information About All Query Jobs
`bulk2GetAllQueryJobs`
@@ -2723,14 +2724,14 @@ Gets all query jobs.
-
+
Parameter
Type
Description
Default
Required
-
+
queryLocator
String
Used in subsequent calls if results
@@ -2749,7 +2750,7 @@ If the `done` property of the `QueryJobs` instance is false, there are
additional pages to fetch, and the `nextRecordsUrl` property contains
the value to be set in the `queryLocator` parameter on subsequent calls.
-## Bulk (original) API
+### Bulk (original) API
Producer endpoints can use the following APIs. All Job data formats,
i.e. xml, csv, zip/xml, and zip/csv are supported.
@@ -2787,7 +2788,7 @@ The following operations are supported:
- [getQueryResult](#getQueryResult) - Gets results for a Result Id
-### Create a Job
+#### Create a Job
`createJob`
@@ -2804,28 +2805,28 @@ pkChunking\* options. See an explanation
-
+
Parameter
Type
Description
Default
Required
-
+
Body
JobInfo
Job to create
x
-
+
pkChunking
Boolean
Whether to use PK Chunking
false
-
+
pkChunkingChunkSize
Integer
@@ -2833,7 +2834,7 @@ style="text-align: left;">pkChunkingChunkSize
-
+
pkChunkingStartRow
Integer
@@ -2841,7 +2842,7 @@ style="text-align: left;">pkChunkingStartRow
-
+
pkChunkingParent
String
@@ -2855,7 +2856,7 @@ style="text-align: left;">pkChunkingStartRow
Type: `JobInfo`
-### Get Job Details
+#### Get Job Details
`getJob`
@@ -2870,21 +2871,21 @@ Gets a Job
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job to get
Required if body not supplied
-
+
Body
JobInfo
JobInfo instance from
@@ -2900,7 +2901,7 @@ supplied
Type: `JobInfo`
-### Close a Job
+#### Close a Job
`closeJob`
@@ -2915,21 +2916,21 @@ Closes a Job
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
Body
JobInfo
JobInfo instance from
@@ -2945,7 +2946,7 @@ supplied
Type: `JobInfo`
-### Abort a Job
+#### Abort a Job
`abortJob`
@@ -2960,21 +2961,21 @@ Aborts a Job
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
Body
JobInfo
JobInfo instance from
@@ -2990,7 +2991,7 @@ supplied
Type: `JobInfo`
-### Add a Batch to a Job
+#### Add a Batch to a Job
`createBatch`
@@ -3005,21 +3006,21 @@ Submits a Batch within a Bulk Job
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
x
-
+
contentType
String
Content type of body. Can be XML, CSV,
@@ -3027,7 +3028,7 @@ ZIP_XML or ZIP_CSV
x
-
+
Body
InputStream or
String
@@ -3042,7 +3043,7 @@ ZIP_XML or ZIP_CSV
Type: `BatchInfo`
-### Get Information for a Batch
+#### Get Information for a Batch
`getBatch`
@@ -3057,28 +3058,28 @@ Get a Batch
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
batchId
String
Id of Batch
Required if body not supplied
-
+
Body
BatchInfo
JobInfo instance from
@@ -3094,7 +3095,7 @@ which jobId and batchId will be used
Type: `BatchInfo`
-### Get Information for All Batches in a Job
+#### Get Information for All Batches in a Job
`getAllBatches`
@@ -3109,21 +3110,21 @@ Gets all Batches for a Bulk Job Id
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
Body
JobInfo
JobInfo instance from
@@ -3139,7 +3140,7 @@ supplied
Type: `List`
-### Get a Batch Request
+#### Get a Batch Request
`getRequest`
@@ -3154,28 +3155,28 @@ Gets Request data (XML/CSV) for a Batch
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
batchId
String
Id of Batch
Required if body not supplied
-
+
Body
BatchInfo
JobInfo instance from
@@ -3191,7 +3192,7 @@ which jobId and batchId will be used
Type: `InputStream`
-### Get Batch Results
+#### Get Batch Results
`getResults`
@@ -3206,28 +3207,28 @@ Gets the results of the Batch when it’s complete
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
batchId
String
Id of Batch
Required if body not supplied
-
+
Body
BatchInfo
JobInfo instance from
@@ -3243,7 +3244,7 @@ which jobId and batchId will be used
Type: `InputStream`
-### Create Bulk Query Batch
+#### Create Bulk Query Batch
`createBatchQuery`
@@ -3258,21 +3259,21 @@ Creates a Batch from an SOQL query
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
contentType
String
Content type of body. Can be XML, CSV,
@@ -3281,7 +3282,7 @@ ZIP_XML or ZIP_CSV
Required if JobInfo
instance not supplied in body
-
+
sObjectQuery
String
SOQL query to be used for this
@@ -3290,7 +3291,7 @@ batch
Required if not supplied in
body
-
+
Body
JobInfo or
String
@@ -3308,7 +3309,7 @@ or String to be used as the Batch query
Type: `BatchInfo`
-### Get Batch Results
+#### Get Batch Results
`getQueryResultIds`
@@ -3323,28 +3324,28 @@ Gets a list of Result Ids for a Batch Query
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
batchId
String
Id of Batch
Required if body not supplied
-
+
Body
BatchInfo
JobInfo instance from
@@ -3360,7 +3361,7 @@ which jobId and batchId will be used
Type: `List`
-### Get Bulk Query Results
+#### Get Bulk Query Results
`getQueryResult`
@@ -3375,35 +3376,35 @@ Gets results for a Result Id
-
+
Parameter
Type
Description
Default
Required
-
+
jobId
String
Id of Job
Required if body not supplied
-
+
batchId
String
Id of Batch
Required if body not supplied
-
+
resultId
String
Id of Result
If not passed in body
-
+
Body
BatchInfo or
String
@@ -3432,14 +3433,14 @@ put message body will contain `BatchInfo` on success, or throw a
...to("salesforce:createBatch")..
-## Pub/Sub API
+### Pub/Sub API
The Pub/Sub API allows you to publish and subscribe to platform events,
including real-time event monitoring events, and change data capture
events. This API is based on gRPC and HTTP/2, and event payloads are
delivered in Apache Avro format.
-### Publishing Events
+#### Publishing Events
The URI format for publishing events is:
@@ -3449,7 +3450,7 @@ For example:
.to("salesforce:pubsubPublish:/event/MyCustomPlatformEvent__e")
-### Publish an Event
+#### Publish an Event
`pubSubPublish`
@@ -3462,14 +3463,14 @@ For example:
-
+
Parameter
Type
Description
Default
Required
-
+
Body
List. The list can contain
mixed types (see description below).
@@ -3519,7 +3520,7 @@ Type:
The order of the items in the returned `List` correlates to the order of
the items in the input `List`.
-### Subscribing
+#### Subscribing
The URI format for subscribing to a Pub/Sub topic is:
@@ -3538,14 +3539,14 @@ For example:
-
+
Parameter
Type
Description
Default
Required
-
+
replayPreset
ReplayPreset
Values: LATEST,
@@ -3553,7 +3554,7 @@ For example:
LATEST
-
+
pubSubReplayId
String
When replayPreset is set
@@ -3562,7 +3563,7 @@ topic.
-
+
pubSubBatchSize
int
Max number of events to receive at a
@@ -3570,7 +3571,7 @@ time. Values >100 will be normalized to 100 by salesforce.
100
X
-
+
pubSubDeserializeType
PubSubDeserializeType
AVRO
X
-
+
pubSubPojoClass
Fully qualified class name to
deserialize Pub/Sub API event to.
@@ -3601,7 +3602,7 @@ Type: Determined by the `pubSubDeserializeType` option.
Headers: `CamelSalesforcePubSubReplayId`
-## Streaming API
+### Streaming API
The Streaming API enables streaming of events using push technology and
provides a subscription mechanism for receiving events in near real
@@ -3609,7 +3610,7 @@ time. The Streaming API subscription mechanism supports multiple types
of events, including PushTopic events, generic events, platform events,
and Change Data Capture events.
-### Push Topics
+#### Push Topics
The URI format for consuming Push Topics is:
@@ -3632,21 +3633,21 @@ To subscribe to an existing topic
-
+
Parameter
Type
Description
Default
Required
-
+
sObjectName
String
SObject to monitor
x
-
+
sObjectQuery
String
SOQL query used to create Push
@@ -3655,7 +3656,7 @@ Topic
Required for creating new
topics
-
+
updateTopic
Boolean
Whether to update an existing Push
@@ -3663,7 +3664,7 @@ Topic if exists
false
-
+
notifyForFields
NotifyForFieldsEnum
@@ -3672,7 +3673,7 @@ against the PushTopic query.
Referenced
-
+
notifyForOperationCreate
Boolean
@@ -3681,7 +3682,7 @@ generate a notification.
false
-
+
notifyForOperationDelete
Boolean
@@ -3690,7 +3691,7 @@ generate a notification.
false
-
+
notifyForOperationUndelete
Boolean
@@ -3699,7 +3700,7 @@ generate a notification.
false
-
+
notifyForOperationUpdate
Boolean
@@ -3708,7 +3709,7 @@ generate a notification.
false
-
+
notifyForOperations
All
-
+
replayId
int
The replayId value to use when
@@ -3726,7 +3727,7 @@ subscribing.
-
+
defaultReplayId
int
Default replayId setting if no value is
@@ -3734,7 +3735,7 @@ found in initialReplayIdMap.
-1
-
+
fallBackReplayId
int
ReplayId to fall back to after an
@@ -3749,7 +3750,7 @@ Invalid Replay Id response.
Type: Class passed via `sObjectName` parameter
-### Platform Events
+#### Platform Events
To emit a platform event use the [createSObject](#createSObject)
operation, passing an instance of a platform event, e.g.
@@ -3773,14 +3774,14 @@ For example, to receive platform events use for the event type
-
+
Parameter
Type
Description
Default
Required
-
+
rawPayload
Boolean
If false, operation returns a
@@ -3789,7 +3790,7 @@ Message
false
-
+
replayId
int
The replayId value to use when
@@ -3797,7 +3798,7 @@ subscribing.
-
+
defaultReplayId
int
Default replayId setting if no value is
@@ -3805,7 +3806,7 @@ found in initialReplayIdMap.
-1
-
+
fallBackReplayId
int
ReplayId to fall back to after an
@@ -3820,7 +3821,7 @@ Invalid Replay Id response.
Type: `PlatformEvent` or `org.cometd.bayeux.Message`
-### Change Data Capture Events
+#### Change Data Capture Events
Change Data Capture (CDC) allows you to receive near-real-time changes
of Salesforce records, and synchronize corresponding records in an
@@ -3869,14 +3870,14 @@ considerations could be of interest.
-
+
Parameter
Type
Description
Default
Required
-
+
rawPayload
Boolean
If false, operation returns a
@@ -3885,7 +3886,7 @@ considerations could be of interest.
false
-
+
replayId
int
The replayId value to use when
@@ -3893,7 +3894,7 @@ subscribing.
-
+
defaultReplayId
int
Default replayId setting if no value is
@@ -3901,7 +3902,7 @@ found in initialReplayIdMap.
-1
-
+
fallBackReplayId
int
ReplayId to fall back to after an
@@ -3924,11 +3925,11 @@ Headers
-
+
Name
Description
-
+
CamelSalesforceChangeType
CREATE,
@@ -3938,7 +3939,7 @@ style="text-align: left;">
CamelSalesforceChangeType
-## Reports API
+### Reports API
- [getRecentReports](#getRecentReports) - Gets up to 200 of the
reports you most recently viewed.
@@ -3958,7 +3959,7 @@ style="text-align: left;">CamelSalesforceChangeType
- [getReportResults](#getReportResults) - Retrieves results for an
instance of a report run asynchronously.
-### Report List
+#### Report List
`getRecentReports`
@@ -3968,7 +3969,7 @@ Gets up to 200 of the reports you most recently viewed.
Type: `List`
-### Describe Report
+#### Describe Report
`getReportDescription`
@@ -3984,14 +3985,14 @@ either in a tabular or summary or matrix format.
-
+
Parameter
Type
Description
Default
Required
-
+
reportId
String
Id of Report
@@ -3999,7 +4000,7 @@ either in a tabular or summary or matrix format.
Required if not supplied in
body
-
+
Body
String
Id of Report
@@ -4014,7 +4015,7 @@ body
Type: `ReportDescription`
-### Execute Sync
+#### Execute Sync
`executeSyncReport`
@@ -4030,14 +4031,14 @@ the latest summary data.
-
+
Parameter
Type
Description
Default
Required
-
+
reportId
String
Id of Report
@@ -4045,14 +4046,14 @@ the latest summary data.
Required if not supplied in
body
-
+
includeDetails
Boolean
Whether to include details
false
-
+
reportMetadata
ReportMetadata
Optionally, pass ReportMetadata here
@@ -4060,7 +4061,7 @@ instead of body
-
+
Body
ReportMetadata
If supplied, will use instead of
@@ -4076,7 +4077,7 @@ instead of body
Type: `AbstractReportResultsBase`
-### Execute Async
+#### Execute Async
`executeAsyncReport`
@@ -4092,14 +4093,14 @@ returns the summary data with or without details.
-
+
Parameter
Type
Description
Default
Required
-
+
reportId
String
Id of Report
@@ -4107,14 +4108,14 @@ returns the summary data with or without details.
Required if not supplied in
body
-
+
includeDetails
Boolean
Whether to include details
false
-
+
reportMetadata
ReportMetadata
Optionally, pass ReportMetadata here
@@ -4122,7 +4123,7 @@ instead of body
-
+
Body
ReportMetadata
If supplied, will use instead of
@@ -4138,7 +4139,7 @@ instead of body
Type: `ReportInstance`
-### Instances List
+#### Instances List
`getReportInstances`
@@ -4155,14 +4156,14 @@ of the report.
-
+
Parameter
Type
Description
Default
Required
-
+
reportId
String
Id of Report
@@ -4170,7 +4171,7 @@ of the report.
Required if not supplied in
body
-
+
Body
String
If supplied, will use instead of
@@ -4186,7 +4187,7 @@ body
Type: `List`
-### Instance Results
+#### Instance Results
`getReportResults`
@@ -4201,14 +4202,14 @@ Contains the results of running a report.
-
+
Parameter
Type
Description
Default
Required
-
+
reportId
String
Id of Report
@@ -4216,14 +4217,14 @@ Contains the results of running a report.
Required if not supplied in
body
-
+
instanceId
String
Id of Report instance
x
-
+
Body
String
If supplied, will use instead of
@@ -4239,12 +4240,12 @@ body
Type: `AbstractReportResultsBase`
-# Miscellaneous Operations
+## Miscellaneous Operations
- [raw](#raw) - Send requests to salesforce and have full, raw control
over endpoint, parameters, body, etc.
-## Raw
+### Raw
`raw`
@@ -4263,14 +4264,14 @@ can be overridden with the `rawHttpHeaders` option.
-
+
Parameter
Type
Description
Default
Required
-
+
Body
String or
InputStream
@@ -4278,7 +4279,7 @@ can be overridden with the `rawHttpHeaders` option.
-
+
rawPath
String
The portion of the endpoint URL after
@@ -4287,14 +4288,14 @@ the domain name, e.g.,
x
-
+
rawMethod
String
The HTTP method
x
-
+
rawQueryParameters
String
@@ -4304,7 +4305,7 @@ done automatically.
-
+
rawHttpHeaders
String
Comma separated list of message headers
@@ -4319,7 +4320,7 @@ to include as HTTP headers
Type: `InputStream`
-### Query example
+#### Query example
In this example we’ll send a query to the REST API. The query must be
passed in a URL parameter called "q", so we’ll create a message header
@@ -4331,7 +4332,7 @@ URL parameter:
.to("salesforce:raw?format=JSON&rawMethod=GET&rawQueryParameters=q&rawPath=/services/data/v51.0/query")
// deserialize JSON results or handle in some other way
-### SObject example
+#### SObject example
In this example, we’ll pass a Contact to the REST API in a `create`
operation. Since the `raw` operation does not perform any serialization,
@@ -4349,7 +4350,7 @@ The response is:
true
-# Uploading a document to a ContentWorkspace
+## Uploading a document to a ContentWorkspace
Create the ContentVersion in Java, using a Processor instance:
@@ -4383,16 +4384,16 @@ Give the output from the processor to the Salesforce component:
// for the salesforce component
.to("salesforce:createSObject");
-# Generating SOQL query strings
+## Generating SOQL query strings
`org.apache.camel.component.salesforce.api.utils.QueryHelper` contains
-helper methods to generate SOQL queries. For instance to fetch all
-custom fields from *Account* SObject you can simply generate the SOQL
-SELECT by invoking:
+helper methods to generate SOQL queries. For instance, to fetch all
+custom fields from *Account* SObject, you can generate the SOQL SELECT
+by invoking:
String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);
-# Camel Salesforce Maven Plugin
+## Camel Salesforce Maven Plugin
The Maven plugin generates Java DTOs to represent salesforce objects.
diff --git a/camel-sample-eip.md b/camel-sample-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..809d46ccb74328ea7d95b315e103be4e134fe7c3
--- /dev/null
+++ b/camel-sample-eip.md
@@ -0,0 +1,73 @@
+# Sample-eip.md
+
+A sampling throttler allows you to extract a sample of the exchanges
+from the traffic through a route.
+
+
+
+
+
+The Sample EIP works similarly to a wire tap, but instead of tapping every
+message, the sampling will select a single message in a given time
+period. This selected message is allowed to pass through, and all other
+messages are stopped.
+
+# Options
+
+# Exchange properties
+
+# Using Sample EIP
+
+In the example below, we sample one message per second (default time
+period):
+
+Java
+from("direct:sample")
+.sample()
+.to("direct:sampled");
+
+XML
+
+    <route>
+        <from uri="direct:sample"/>
+        <sample/>
+        <to uri="direct:sampled"/>
+    </route>
+
+## Sampling using time period
+
+The default time period is 1 second, but this can easily be configured.
+For example, to sample one message every 5 seconds, you can do:
+
+Java
+from("direct:sample")
+.sample(5, TimeUnit.SECONDS)
+.to("direct:sampled");
+
+XML
+
+    <route>
+        <from uri="direct:sample"/>
+        <sample samplePeriod="5s"/>
+        <to uri="direct:sampled"/>
+    </route>
+
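The period-based selection can be sketched in plain Java. This is an illustrative stand-in, not Camel's internal implementation; the `PeriodSampler` class name and structure are invented for the sketch:

```java
// Illustrative sketch of period-based sampling (not Camel's actual
// implementation): at most one message passes per configured period.
public class PeriodSampler {
    private final long periodMillis;
    private long lastPassed = Long.MIN_VALUE;

    public PeriodSampler(long periodMillis) {
        this.periodMillis = periodMillis;
    }

    // returns true if the message arriving at nowMillis may pass through;
    // all other messages inside the window are stopped
    public synchronized boolean sample(long nowMillis) {
        if (nowMillis >= lastPassed + periodMillis) {
            lastPassed = nowMillis;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        PeriodSampler sampler = new PeriodSampler(5000);
        System.out.println(sampler.sample(0));    // true  - first message passes
        System.out.println(sampler.sample(3000)); // false - inside the 5s window
        System.out.println(sampler.sample(5000)); // true  - new window started
    }
}
```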
+## Sampling using message frequency
+
+The Sample EIP can also be configured to sample based on frequency
+instead of a time period.
+
+For example, to sample every 10th message you can do:
+
+Java
+from("direct:sample")
+.sample(10)
+.to("direct:sampled");
+
+XML
+
+    <route>
+        <from uri="direct:sample"/>
+        <sample messageFrequency="10"/>
+        <to uri="direct:sampled"/>
+    </route>
+
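Frequency-based sampling is just a counter: every Nth message passes and the rest are stopped. A minimal standalone sketch (the `FrequencySampler` class is illustrative, not part of Camel):

```java
// Illustrative sketch of frequency-based sampling (not Camel's actual
// implementation): only every Nth message passes through.
import java.util.ArrayList;
import java.util.List;

public class FrequencySampler {
    private final long frequency;
    private long counter;

    public FrequencySampler(long frequency) {
        this.frequency = frequency;
    }

    // returns true for every Nth call
    public synchronized boolean sample() {
        return ++counter % frequency == 0;
    }

    public static void main(String[] args) {
        FrequencySampler sampler = new FrequencySampler(10);
        List<Integer> passed = new ArrayList<>();
        for (int i = 1; i <= 30; i++) {
            if (sampler.sample()) {
                passed.add(i);
            }
        }
        System.out.println(passed); // [10, 20, 30]
    }
}
```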
diff --git a/camel-scatter-gather.md b/camel-scatter-gather.md
new file mode 100644
index 0000000000000000000000000000000000000000..1e7ef4ab8567bfc7f20469127052f3b3ad56170c
--- /dev/null
+++ b/camel-scatter-gather.md
@@ -0,0 +1,208 @@
+# Scatter-gather.md
+
+Camel supports the
+[Scatter-Gather](https://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+The Scatter-Gather from the EIP patterns allows you to route messages to
+a number of dynamically specified recipients and re-aggregate the
+responses back into a single message.
+
+
+
+
+
+In Camel, the Scatter-Gather EIP is supported in two different
+modes:
+
+- request/reply mode, where the re-aggregated response message
+ continues being routed synchronously after the Scatter-Gather is
+ complete.
+
+- one-way mode, where the response message is routed
+  asynchronously, separately from the incoming message thread.
+
+# Request/Reply vs. One-Way messaging modes
+
+In Camel, the request/reply mode is done by using only the [Recipient
+List](#recipientList-eip.adoc) which comes with aggregation built-in
+(which is often the simplest solution).
+
+The request/reply mode refers to the fact that the response message is
+tied synchronously to the incoming message, which waits until the
+response message is ready and then continues being routed. This allows
+for the [Request Reply](#requestReply-eip.adoc) messaging style.
+
+The one-way mode refers to the fact that the response message is not
+tied to the incoming message (which will continue). And the response
+message (when it's ready) will continue being routed independently of the
+incoming message. This only allows for [Event
+Message](#event-message.adoc) messaging style.
+
+In the one-way mode, you combine the [Recipient
+List](#recipientList-eip.adoc) and [Aggregate](#aggregate-eip.adoc) EIPs
+together as the Scatter-Gather EIP solution.
+
+# Using Recipient List only
+
+In the following example, we want to call two HTTP services and gather
+their responses into a single message, as the response:
+
+
+    <route>
+        <from uri="direct:start"/>
+        <recipientList>
+            <constant>http:server1,http:server2</constant>
+        </recipientList>
+    </route>
+
+This is a basic example that only uses basic functionality of the
+[Recipient List](#recipientList-eip.adoc). For more details on how the
+aggregation works, see the [Recipient List](#recipientList-eip.adoc)
+documentation.
+
+# Using Recipient List and Aggregate EIP
+
+In this example, we want to get the best quote for beer from several
+vendors.
+
+We use [Recipient List](#recipientList-eip.adoc) to get the request for
+a quote to all vendors and an [Aggregate](#aggregate-eip.adoc) to pick
+the best quote out of all the responses.
+
+The routes for this are defined as:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+So in the first route, you see that the [Recipient
+List](#recipientList-eip.adoc) is looking at the listOfVendors header
+for the list of recipients. So, we need to send a message like:
+
+    Map<String, Object> headers = new HashMap<>();
+ headers.put("listOfVendors", "bean:vendor1,bean:vendor2,bean:vendor3");
+ headers.put("quoteRequestId", "quoteRequest-1");
+
+ template.sendBodyAndHeaders("direct:start", "", headers);
+
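The recipient list splits the `listOfVendors` header on its delimiter (comma by default, and configurable on `recipientList`); the parsing step can be sketched as:

```java
// Sketch of how a comma-delimited recipient header yields endpoint URIs;
// the helper below is illustrative, not a Camel API.
import java.util.Arrays;
import java.util.List;

public class RecipientHeaderDemo {
    static List<String> recipients(String header) {
        return Arrays.asList(header.split(","));
    }

    public static void main(String[] args) {
        List<String> endpoints = recipients("bean:vendor1,bean:vendor2,bean:vendor3");
        System.out.println(endpoints); // [bean:vendor1, bean:vendor2, bean:vendor3]
    }
}
```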
+This message will be distributed to the following Endpoints:
+bean:vendor1, bean:vendor2, and bean:vendor3. These are all Java beans
+(called via the Camel [Bean](#components::bean-component.adoc)
+endpoint), which look like:
+
+ public class MyVendor {
+ private int beerPrice;
+
+ @Produce("seda:quoteAggregator")
+ private ProducerTemplate quoteAggregator;
+
+ public MyVendor(int beerPrice) {
+ this.beerPrice = beerPrice;
+ }
+
+ public void quote(@XPath("/quote_request/@item") String item, Exchange exchange) {
+ if ("beer".equals(item)) {
+ exchange.getMessage().setBody(beerPrice);
+ quoteAggregator.send(exchange);
+ } else {
+ // ignore no quote
+ }
+ }
+ }
+
+And they are loaded up in XML like this:
+
+    <bean id="vendor1" class="MyVendor">
+        <constructor-arg>
+            <value>1</value>
+        </constructor-arg>
+    </bean>
+
+    <bean id="vendor2" class="MyVendor">
+        <constructor-arg>
+            <value>2</value>
+        </constructor-arg>
+    </bean>
+
+    <bean id="vendor3" class="MyVendor">
+        <constructor-arg>
+            <value>3</value>
+        </constructor-arg>
+    </bean>
+
+Each bean is loaded with a different price for beer. When the message is
+sent to each bean endpoint, it will arrive at the `MyVendor.quote`
+method. This method does a simple check whether this quote request is
+for beer and then sets the price of beer on the exchange for retrieval
+at a later step. The message is forwarded on to the next step using
+[POJO Producing](#manual::pojo-producing.adoc) (see the `@Produce`
+annotation).
+
+At the next step, we want to take the beer quotes from all vendors and
+find out which one was the best (i.e., the lowest!). To do this, we use
+the [Aggregate](#aggregate-eip.adoc) EIP with a custom
+`AggregationStrategy`.
+
+The [Aggregate](#aggregate-eip.adoc) needs to be able to compare only
+the messages from this particular quote; this is easily done by
+specifying a correlation expression equal to the value of the
+quoteRequestId header. As shown above in the message sending snippet, we
+set this header to quoteRequest-1. This correlation value must be
+unique, or you may include responses that are not part of this quote. To
+pick the lowest quote out of the set, we use a custom
+`AggregationStrategy` like:
+
+ public class LowestQuoteAggregationStrategy implements AggregationStrategy {
+
+ public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
+ // the first time we only have the new exchange
+ if (oldExchange == null) {
+ return newExchange;
+ }
+
+ if (oldExchange.getMessage().getBody(int.class) < newExchange.getMessage().getBody(int.class)) {
+ return oldExchange;
+ } else {
+ return newExchange;
+ }
+ }
+ }
+
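The strategy is applied pairwise as each response arrives; folding it over a list of prices shows the same compare-and-keep-lowest logic in a standalone, runnable form (the `LowestQuoteDemo` class is a hypothetical re-statement, not part of the route):

```java
// Standalone re-statement of LowestQuoteAggregationStrategy's logic,
// folded over a list of quotes the way the aggregator folds over exchanges.
import java.util.List;

public class LowestQuoteDemo {
    // mirrors aggregate(oldExchange, newExchange): keep the lower body
    static Integer aggregate(Integer oldQuote, Integer newQuote) {
        if (oldQuote == null) {
            return newQuote; // first response: nothing to compare against
        }
        return oldQuote < newQuote ? oldQuote : newQuote;
    }

    public static void main(String[] args) {
        Integer best = null;
        for (Integer quote : List.of(2, 1, 3)) { // vendor prices in arrival order
            best = aggregate(best, quote);
        }
        System.out.println(best); // 1
    }
}
```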
+And finally, the aggregator will assemble the response message with the
+best beer price (the lowest). Notice how the aggregator has timeout
+built-in, meaning that if one or more of the beer vendors does not
+respond, then the aggregator will discard those *late* responses, and
+send out a message with the *best price so far*.
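+
+To see how the pairwise reduction arrives at the lowest quote, here is a
+minimal plain-Java sketch (illustrative only: plain `Integer` quotes stand in
+for the `Exchange` message bodies, and the loop mimics how the aggregator
+repeatedly applies the strategy to each incoming response):

```java
import java.util.Arrays;

public class LowestQuoteDemo {

    // Mirrors LowestQuoteAggregationStrategy.aggregate, with Integer
    // quotes standing in for the Exchange message bodies.
    public static Integer aggregate(Integer oldQuote, Integer newQuote) {
        if (oldQuote == null) {
            // the first time we only have the new quote
            return newQuote;
        }
        return oldQuote < newQuote ? oldQuote : newQuote;
    }

    public static void main(String[] args) {
        Integer best = null;
        // quotes from the three vendors, arriving in any order
        for (Integer quote : Arrays.asList(2, 1, 3)) {
            best = aggregate(best, quote);
        }
        System.out.println("best price: " + best); // prints: best price: 1
    }
}
```

+The lowest quote survives each pairwise comparison, so the final result is
+independent of the order in which the vendor responses arrive.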
+
+The message is then continued to another route via the `direct:bestBeer`
+endpoint.
+
+# See Also
+
+The Scatter-Gather EIP is a composite pattern built from existing EIPs:
+
+- [Recipient List](#recipientList-eip.adoc)
+
+- [Aggregate](#aggregate-eip.adoc)
diff --git a/camel-scheduler.md b/camel-scheduler.md
index caade8a5caa150962d69977ace94a0638b998f6c..2c3a388cd45c22176c7b59c6c6934bea1190dfac 100644
--- a/camel-scheduler.md
+++ b/camel-scheduler.md
@@ -31,7 +31,9 @@ Consumer](http://camel.apache.org/polling-consumer.html) where you can
find more information about the options above, and examples at the
[Polling Consumer](http://camel.apache.org/polling-consumer.html) page.
-# Exchange Properties
+# Usage
+
+## Exchange Properties
When the timer is fired, it adds the following information as properties
to the `Exchange`:
@@ -43,21 +45,21 @@ to the `Exchange`:
-
+
-
+
Exchange.TIMER_NAME
String
The value of the name
option.
-
+
Exchange.TIMER_FIRED_TIME
Date
@@ -67,30 +69,13 @@ fired.
-# Sample
-
-To set up a route that generates an event every 60 seconds:
-
- from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName");
-
-The above route will generate an event and then invoke the
-`someMethodName` method on the bean called `myBean` in the Registry such
-as JNDI or Spring.
-
-And the route in Spring DSL:
-
-
-
-
-
-
-# Forcing the scheduler to trigger immediately when completed
+## Forcing the scheduler to trigger immediately when completed
To let the scheduler trigger as soon as the previous task is complete,
you can set the option `greedy=true`. But beware then the scheduler will
keep firing all the time. So use this with caution.
-# Forcing the scheduler to be idle
+## Forcing the scheduler to be idle
There can be use cases where you want the scheduler to trigger and be
greedy. But sometimes you want to "tell the scheduler" that there was no
@@ -104,6 +89,23 @@ The consumer will otherwise as by default return 1 message polled to the
scheduler, every time the consumer has completed processing the
exchange.
+# Example
+
+To set up a route that generates an event every 60 seconds:
+
+ from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName");
+
+The above route will generate an event and then invoke the
+`someMethodName` method on the bean called `myBean` in the Registry such
+as JNDI or Spring.
+
+And the route in Spring DSL:
+
+    <route>
+        <from uri="scheduler://foo?delay=60000"/>
+        <to uri="bean:myBean?method=someMethodName"/>
+    </route>
+
## Component Configurations
diff --git a/camel-schematron.md b/camel-schematron.md
index d4ccd20855281028e2004efa06152d3756296cd6..3aee8e0072a63586cbd2d4b00835eb7efde7e018 100644
--- a/camel-schematron.md
+++ b/camel-schematron.md
@@ -34,7 +34,7 @@ object representing the rules.
-
+
-
+
CamelSchematronValidationStatus
The schematron validation status:
@@ -50,7 +50,7 @@ SUCCESS / FAILED
String
IN
-
+
CamelSchematronValidationReport
The schematrion report body in XML
@@ -61,7 +61,9 @@ format. See an example below
-# URI and path syntax
+# Examples
+
+## URI and path syntax
The following example shows how to invoke the schematron processor in
Java DSL. The schematron rules file is sourced from the class path:
@@ -69,7 +71,7 @@ Java DSL. The schematron rules file is sourced from the class path:
from("direct:start").to("schematron://sch/schematron.sch").to("mock:result")
The following example shows how to invoke the schematron processor in
-XML DSL. The schematrion rules file is sourced from the file system:
+XML DSL. The schematron rules file is sourced from the file system:
@@ -105,7 +107,7 @@ update, all you need is to restart the route or the component. No harm
in storing these rules in the class path though, but you will have to
build and deploy the component to pick up the changes.
-# Schematron rules and report samples
+## Schematron rules and report examples
Here is an example of schematron rules
diff --git a/camel-script-eip.md b/camel-script-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..8af3ddbfdecf01c4f2fae9ee0b722bdf04bd7e39
--- /dev/null
+++ b/camel-script-eip.md
@@ -0,0 +1,68 @@
+# Script-eip.md
+
+The Script EIP is used for executing a coding script.
+
+
+
+
+
+This is useful when you need to invoke some logic that is not written in
+Java, such as JavaScript, Groovy, or any of the other supported languages.
+
+The returned value from the script is discarded and not used. If the
+returned value should be set as the new message body, then use the
+[Message Translator](#message-translator.adoc) EIP instead.
+
+# Options
+
+# Exchange properties
+
+# Using Script EIP
+
+The route below reads the file contents and calls a Groovy script:
+
+Java
+
+    from("file:inbox")
+        .script().groovy("some groovy code goes here")
+        .to("bean:myServiceBean.processLine");
+
+XML
+
+    <route>
+        <from uri="file:inbox"/>
+        <script>
+            <groovy>some groovy code goes here</groovy>
+        </script>
+        <to uri="bean:myServiceBean.processLine"/>
+    </route>
+Note that you can use *CDATA* if the script uses characters such as
+`< >`:
+
+    <script>
+        <groovy><![CDATA[ some groovy code goes here using < and > ]]></groovy>
+    </script>
+
+## Scripting Context
+
+The scripting context has access to the current `Exchange` and can
+essentially change the message or headers directly.
+
+## Using external script files
+
+You can refer to external script files instead of inlining the script.
+For example, to load a groovy script from the classpath, you need to
+prefix the value with `resource:` as shown:
+
+    <script>
+        <groovy>resource:classpath:myscript.groovy</groovy>
+    </script>
+
+You can also refer to the script from the file system with `file:`
+instead of `classpath:`, such as `file:/var/myscript.groovy`.
diff --git a/camel-seda.md b/camel-seda.md
index 0aa7868bf30b168ba6fb790e039e15b2965f21ed..e47d98f24b540cc999c84be8dbd7a75c25c10dee 100644
--- a/camel-seda.md
+++ b/camel-seda.md
@@ -29,7 +29,9 @@ invocation of any consumers when a producer sends a message exchange.
Where *someId* can be any string that uniquely identifies the endpoint
within the current CamelContext.
-# Choosing BlockingQueue implementation
+# Usage
+
+## Choosing BlockingQueue implementation
By default, the SEDA component always instantiates a
`LinkedBlockingQueue`, but you can use different implementation, you can
@@ -64,7 +66,7 @@ implementations are provided:
seda:priority?queueFactory=#priorityQueueFactory&size=100
-# Use of Request Reply
+## Use of Request Reply
The [SEDA](#seda-component.adoc) component supports using Request Reply,
where the caller will wait for the Async route to complete. For
@@ -80,7 +82,7 @@ it is a Request Reply message, we wait for the response. When the
consumer on the `seda:input` queue is complete, it copies the response
to the original message response.
-# Concurrent consumers
+## Concurrent consumers
By default, the SEDA endpoint uses a single consumer thread, but you can
configure it to use concurrent consumer threads. So instead of thread
@@ -92,7 +94,7 @@ As for the difference between the two, note a *thread pool* can
increase/shrink dynamically at runtime depending on load, whereas the
number of concurrent consumers is always fixed.
-# Thread pools
+## Thread pools
Be aware that adding a thread pool to a SEDA endpoint by doing something
like:
@@ -110,7 +112,7 @@ synchronously and asynchronously. For example:
You can also directly configure number of threads that process messages
on a SEDA endpoint using the `concurrentConsumers` option.
-# Sample
+# Examples
In the route below, we use the SEDA queue to send the request to this
async queue. As such, it is able to send a *fire-and-forget* message for
@@ -154,7 +156,7 @@ another thread for further processing. Since this is from a unit test,
it will be sent to a `mock` endpoint where we can do assertions in the
unit test.
-# Using multipleConsumers
+## Using multipleConsumers
In this example, we have defined two consumers.
@@ -186,7 +188,7 @@ message as a kind of *publish/subscribe* style messaging.
As the beans are part of a unit test, they simply send the message to a
mock endpoint.
-# Extracting queue information.
+## Extracting queue information
If needed, information such as queue size, etc. can be obtained without
using JMX in this fashion:
@@ -204,7 +206,7 @@ using JMX in this fashion:
|defaultPollTimeout|The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer|
|defaultBlockWhenFull|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted.|false|boolean|
|defaultDiscardWhenFull|Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue.|false|boolean|
-|defaultOfferTimeout|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue||integer|
+|defaultOfferTimeout|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, a configured timeout can be added to the block case, using the .offer(timeout) method of the underlying Java queue||integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|defaultQueueFactory|Sets the default queue factory.||object|
diff --git a/camel-selective-consumer.md b/camel-selective-consumer.md
new file mode 100644
index 0000000000000000000000000000000000000000..5f5ff6192e3b148daa48978308d993578d1d2627
--- /dev/null
+++ b/camel-selective-consumer.md
@@ -0,0 +1,62 @@
+# Selective-consumer.md
+
+Camel supports the [Selective
+Consumer](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageSelector.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+How can a message consumer select which messages it wishes to receive?
+
+
+
+
+
+Make the consumer a Selective Consumer, one that filters the messages
+delivered by its channel so that it only receives the ones that match
+its criteria.
+
+# Using Selective Consumer
+
+In Camel, the Selective Consumer EIP is implemented in two ways:
+
+- Using [Components](#components::index.adoc) which support message
+  selection.
+
+- Using the [Filter](#filter-eip.adoc) EIP as a message selector.
+
+## Selective Consumer using Components
+
+The first solution is to provide a Message Selector to the underlying
+URIs when creating your consumer. For example, when using
+[JMS](#components::jms-component.adoc), you can specify a JMS selector
+parameter so that the message broker will only deliver messages matching
+your criteria.
+
+Java
+
+    from("jms:queue:hello?selector=color='red'")
+        .to("bean:red");
+
+XML
+
+    <route>
+        <from uri="jms:queue:hello?selector=color='red'"/>
+        <to uri="bean:red"/>
+    </route>
+
+## Selective Consumer using Filter EIP
+
+The other approach is to use a [Message Filter](#filter-eip.adoc): if
+the filter matches the message, your "consumer" is invoked, as shown in
+the following example:
+
+Java
+
+    from("seda:colors")
+        .filter(header("color").isEqualTo("red"))
+        .to("bean:red");
+
+XML
+
+    <route>
+        <from uri="seda:colors"/>
+        <filter>
+            <simple>${header.color} == 'red'</simple>
+            <to uri="bean:red"/>
+        </filter>
+    </route>
+
diff --git a/camel-service-activator.md b/camel-service-activator.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf11136473725348780301edaf260455f2b999c6
--- /dev/null
+++ b/camel-service-activator.md
@@ -0,0 +1,43 @@
+# Service-activator.md
+
+Camel supports the [Service
+Activator](https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessagingAdapter.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) book.
+
+How can an application design a service to be invoked both via various
+messaging technologies and via non-messaging techniques?
+
+
+
+
+
+Design a Service Activator that connects the messages on the channel to
+the service being accessed.
+
+Camel has several [Components](#ROOT:index.adoc) that support the
+Service Activator EIP.
+
+Components like [Bean](#ROOT:bean-component.adoc) and
+[CXF](#ROOT:cxf-component.adoc) provide a way to bind the message
+[Exchange](#manual::exchange.adoc) to a Java interface/service where the
+route defines the endpoints and wires it up to the bean.
+
+In addition, you can use the [Bean
+Integration](#manual::bean-integration.adoc) to wire messages to a bean
+using Java annotations.
+
+# Example
+
+Here is a simple example of using a
+[Direct](#ROOT:direct-component.adoc) endpoint to create a messaging
+interface to a POJO [Bean](#ROOT:bean-component.adoc) service.
+
+Java
+
+    from("direct:invokeMyService")
+        .to("bean:myService");
+
+XML
+
+    <route>
+        <from uri="direct:invokeMyService"/>
+        <to uri="bean:myService"/>
+    </route>
diff --git a/camel-serviceCall-eip.md b/camel-serviceCall-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..7cb74b8fecc2ebc21c2b095c71e0fc758e553ef2
--- /dev/null
+++ b/camel-serviceCall-eip.md
@@ -0,0 +1,691 @@
+# ServiceCall-eip.md
+
+The Service Call EIP is deprecated in Camel 3.x and will be removed in a
+future Camel release. There is no direct replacement. If you use
+Kubernetes, then services can be called directly by name.
+
+How can I call a remote service in a distributed system where the
+service is looked up from a service registry of some sort?
+
+
+
+
+
+Use a Service Call acting as a [Messaging
+Gateway](#messaging-gateway.adoc) for distributed systems that handles
+the complexity of calling the service in a reliable manner.
+
+The pattern has the following noteworthy features:
+
+- *Location transparency*: Decouples Camel and the physical location
+ of the services using logical names representing the services.
+
+- *URI templating*: Allows you to template the Camel endpoint URI as
+ the physical endpoint to use when calling the service.
+
+- *Service discovery*: it looks up the service from a service registry
+ of some sort to know the physical locations of the services.
+
+- *Service filter*: Allows you to filter unwanted services (for
+ example, blacklisted or unhealthy services).
+
+- *Service chooser*: Allows you to choose the most appropriate service
+ based on factors such as geographical zone, affinity, plans, canary
+ deployments, and SLAs.
+
+- *Load balancer*: A preconfigured Service Discovery, Filter, and
+ Chooser intended for a specific runtime (these three features
+ combined as one).
+
+In a nutshell, the EIP pattern sits between your Camel application and
+the services running in a distributed system (cluster). The pattern
+hides all the complexity of keeping track of all the physical locations
+where the services are running and allows you to call the service by a
+name.
+
+# Options
+
+# Exchange properties
+
+# Using Service Call
+
+The service to call is looked up in a service registry of some sort,
+such as Kubernetes, Consul, Zookeeper, or DNS. The EIP separates the
+configuration of the service registry from the calling of the service.
+
+When calling a service, you may refer to the name of the service in the
+EIP as shown below:
+
+ from("direct:start")
+ .serviceCall("foo")
+ .to("mock:result");
+
+And in XML:
+
+    <route>
+        <from uri="direct:start"/>
+        <serviceCall name="foo"/>
+        <to uri="mock:result"/>
+    </route>
+
+Camel will then:
+
+- search for a service call configuration from the Camel context and
+ registry
+
+- look up a service with the name `foo` from an external service
+  registry
+
+- filter the servers
+
+- select the server to use
+
+- build a Camel URI using the chosen server info
+
+By default, the Service Call EIP uses `camel-http`, so assuming that the
+selected service instance runs on host `myhost.com` on port `80`, the
+computed Camel URI will be:
+
+ http:myhost.com:80
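+
+The whole resolution chain can be sketched in plain Java (a sketch under
+assumptions: the three functional parameters stand in for Camel's
+`ServiceDiscovery`, `ServiceFilter`, and `ServiceChooser` abstractions; this
+is not Camel API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class ServiceCallSketch {

    public static class Server {
        public final String host;
        public final int port;

        public Server(String host, int port) {
            this.host = host;
            this.port = port;
        }
    }

    // Illustrative pipeline only: discovery -> filter -> chooser -> URI.
    public static String resolve(String name,
                                 Function<String, List<Server>> discovery,
                                 UnaryOperator<List<Server>> filter,
                                 Function<List<Server>, Server> chooser) {
        List<Server> found = discovery.apply(name);         // look up by logical name
        Server chosen = chooser.apply(filter.apply(found)); // filter, then choose one
        return "http:" + chosen.host + ":" + chosen.port;   // build the default camel-http URI
    }

    public static void main(String[] args) {
        String uri = resolve("foo",
                n -> Arrays.asList(new Server("myhost.com", 80)), // discovery result
                servers -> servers,                               // no filtering
                servers -> servers.get(0));                       // pick the first server
        System.out.println(uri); // prints: http:myhost.com:80
    }
}
```

+Swapping in a different discovery, filter, or chooser function changes where
+the call goes without touching the route that uses the logical name.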
+
+## Mapping Service Name to Endpoint URI
+
+It is often necessary to build a more complex Camel URI that includes
+options or paths, which is possible through different options.
+
+The **service name** supports a limited URI-like syntax; here are some
+examples:
+
+| Service name | Resolved URI |
+|---|---|
+|foo|http://host:port|
+|foo/path|http://host:port/path|
+|foo/path?foo=bar|http://host:port/path?foo=bar|
+
+ from("direct:start")
+ .serviceCall("foo/hello")
+ .to("mock:result");
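+
+The path/query mapping shown in the table can be sketched as follows (a
+hypothetical helper, not Camel API; Camel performs the equivalent step
+internally when it builds the endpoint URI from the chosen server):

```java
public class ServiceNameUriDemo {

    // Illustrative helper (not Camel API): append the path and query carried
    // in the service-name syntax to the resolved host and port.
    public static String toUri(String serviceName, String host, int port) {
        int slash = serviceName.indexOf('/');
        String rest = slash >= 0 ? serviceName.substring(slash) : "";
        return "http://" + host + ":" + port + rest;
    }

    public static void main(String[] args) {
        System.out.println(toUri("foo", "myhost.com", 80));
        System.out.println(toUri("foo/path?foo=bar", "myhost.com", 80));
    }
}
```

+Everything after the first `/` in the service name is carried over verbatim
+into the resolved URI.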
+
+If you want to have more control over the URI construction, you can use
+the **uri** directive:
+
+| Service name | uri | Resolved URI |
+|---|---|---|
+|foo|undertow:http://foo/hello|undertow:http://host:port/hello|
+|foo|undertow:http://foo.host:foo.port/hello|undertow:http://host:port/hello|
+
+ from("direct:start")
+ .serviceCall("foo", "undertow:http://foo/hello")
+ .to("mock:result");
+
+Advanced users can have full control over the URI construction through
+expressions:
+
+ from("direct:start")
+ .serviceCall()
+ .name("foo")
+ .expression()
+ .simple("undertow:http://${header.CamelServiceCallServiceHost}:${header.CamelServiceCallServicePort}/hello");
+
+## Static Service Discovery
+
+This service discovery implementation does not query any external
+services to find the list of services associated with a named service,
+but keeps them in memory. Each service should be provided in the
+following form:
+
+ [service@]host:port
+
+The `service` part is used to discriminate among the services, but if
+not provided it acts like a wildcard, so each non-named service will be
+returned whatever the service name is. This is useful if you have a
+single service, in which case the service name is redundant.
+
+This implementation is provided by the `camel-core` artifact.
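+
+Parsing one such entry can be sketched as follows (an illustrative parser,
+not Camel's internal one):

```java
public class ServerEntryDemo {

    // Parse one "[service@]host:port" entry (illustrative, not Camel's
    // internal parser). Returns {service, host, port}; service is null
    // for the wildcard form "host:port".
    public static String[] parse(String entry) {
        String service = null;
        int at = entry.indexOf('@');
        if (at >= 0) {
            service = entry.substring(0, at);
            entry = entry.substring(at + 1);
        }
        int colon = entry.lastIndexOf(':');
        return new String[] { service, entry.substring(0, colon), entry.substring(colon + 1) };
    }

    public static void main(String[] args) {
        String[] parts = parse("service1@host1:80");
        System.out.println(parts[0] + " " + parts[1] + " " + parts[2]); // prints: service1 host1 80
    }
}
```

+A comma-separated list of such entries is simply split on commas and each
+entry parsed the same way.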
+
+Available options:
+
+| Name | Type | Description |
+|---|---|---|
+|servers|String|A comma separated list of servers in the form: [service@]host:port,[service@]host2:port,[service@]host3:port|
+
+ from("direct:start")
+ .serviceCall("foo")
+ .staticServiceDiscovery()
+ .servers("service1@host1:80,service1@host2:80")
+ .servers("service2@host1:8080,service2@host2:8080,service2@host3:8080")
+ .end()
+ .to("mock:result");
+
+And in XML:
+
+    <route>
+        <from uri="direct:start"/>
+        <serviceCall name="foo">
+            <staticServiceDiscovery>
+                <servers>service1@host1:80,service1@host2:80</servers>
+                <servers>service2@host1:8080,service2@host2:8080,service2@host3:8080</servers>
+            </staticServiceDiscovery>
+        </serviceCall>
+        <to uri="mock:result"/>
+    </route>
+
+## Consul Service Discovery
+
+To leverage Consul for Service Discovery, Maven users will need to add
+the following dependency to their `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-consul</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.y.z</version>
+    </dependency>
+
+Available options:
+
+| Name | Type | Description |
+|---|---|---|
+|url|String|The Consul agent URL|
+|datacenter|String|The data center|
+|aclToken|String|Sets the ACL token to be used with Consul|
+|userName|String|Sets the username to be used for basic authentication|
+|password|String|Sets the password to be used for basic authentication|
+|connectTimeoutMillis|Long|Connect timeout for OkHttpClient|
+|readTimeoutMillis|Long|Read timeout for OkHttpClient|
+|writeTimeoutMillis|Long|Write timeout for OkHttpClient|
+An example in Java:
+
+ from("direct:start")
+ .serviceCall("foo")
+ .consulServiceDiscovery()
+ .url("http://consul-cluster:8500")
+ .datacenter("neverland")
+ .end()
+ .to("mock:result");
+
+## DNS Service Discovery
+
+To leverage DNS for Service Discovery, Maven users will need to add the
+following dependency to their `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-dns</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.y.z</version>
+    </dependency>
+
+Available options:
+
+| Name | Type | Description |
+|---|---|---|
+|proto|String|The transport protocol of the desired service, default "_tcp"|
+|domain|String|The domain name|
+
+Example in Java:
+
+ from("direct:start")
+ .serviceCall("foo")
+ .dnsServiceDiscovery("my.domain.com")
+ .to("mock:result");
+
+And in XML:
+
+    <route>
+        <from uri="direct:start"/>
+        <serviceCall name="foo">
+            <dnsServiceDiscovery domain="my.domain.com"/>
+        </serviceCall>
+        <to uri="mock:result"/>
+    </route>
+
+## Kubernetes Service Discovery
+
+To leverage Kubernetes for Service Discovery, Maven users will need to
+add the following dependency to their `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-kubernetes</artifactId>
+        <!-- use the same version as your Camel core version -->
+        <version>x.y.z</version>
+    </dependency>
+
+Available options:
+
+| Name | Type | Description |
+|---|---|---|
+|lookup|String|How to perform service lookup. Possible values: client, dns, environment|
+|apiVersion|String|Kubernetes API version when using client lookup|
+|caCertData|String|Sets the Certificate Authority data when using client lookup|
+|caCertFile|String|Sets the Certificate Authority data that are loaded from the file when using client lookup|
+|clientCertData|String|Sets the Client Certificate data when using client lookup|
+|clientCertFile|String|Sets the Client Certificate data that are loaded from the file when using client lookup|
+|clientKeyAlgo|String|Sets the Client Keystore algorithm, such as RSA when using client lookup|
+|clientKeyData|String|Sets the Client Keystore data when using client lookup|
+|clientKeyFile|String|Sets the Client Keystore data that are loaded from the file when using client lookup|
+|clientKeyPassphrase|String|Sets the Client Keystore passphrase when using client lookup|
+|dnsDomain|String|Sets the DNS domain to use for dns lookup|
+|namespace|String|The Kubernetes namespace to use. By default, the namespace’s name is taken from the environment variable KUBERNETES_MASTER|
+|oauthToken|String|Sets the OAUTH token for authentication (instead of username/password) when using client lookup|
+|username|String|Sets the username for authentication when using client lookup|
+|password|String|Sets the password for authentication when using client lookup|
+|trustCerts|Boolean|Sets whether to turn on trust certificate check when using client lookup|
+
+Example in Java:
+
+ from("direct:start")
+ .serviceCall("foo")
+ .kubernetesServiceDiscovery()
+ .lookup("dns")
+ .namespace("myNamespace")
+ .dnsDomain("my.domain.com")
+ .end()
+ .to("mock:result");
+
+And in XML:
+
+    <route>
+        <from uri="direct:start"/>
+        <serviceCall name="foo">
+            <kubernetesServiceDiscovery lookup="dns" namespace="myNamespace" dnsDomain="my.domain.com"/>
+        </serviceCall>
+        <to uri="mock:result"/>
+    </route>
+
+## Using service filtering
+
+The Service Call EIP supports filtering the services using built-in
+filters, or a custom filter.
+
+### Blacklist Service Filter
+
+This service filter implementation removes the listed services from
+those found by the service discovery. Each service should be provided in
+the following form:
+
+ [service@]host:port
+
+The services are removed if they fully match.
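+
+Full matching can be sketched as simple set membership over the configured
+entries (illustrative only, not Camel's implementation): a discovered
+`[service@]host:port` entry is dropped only when it is exactly equal to a
+blacklisted entry.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class BlacklistDemo {

    // Keep only the discovered "[service@]host:port" entries that do not
    // exactly equal a blacklisted entry (full match = string equality).
    public static List<String> filter(List<String> discovered, Set<String> blacklist) {
        return discovered.stream()
                .filter(entry -> !blacklist.contains(entry))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> kept = filter(
                Arrays.asList("service2@host2:8080", "service2@host3:8080"),
                new HashSet<>(Arrays.asList("service2@host2:8080")));
        System.out.println(kept); // prints: [service2@host3:8080]
    }
}
```

+A partial match (same host but different port, or same host:port under a
+different service name) is therefore not removed.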
+
+Available options:
+
+| Name | Type | Description |
+|---|---|---|
+|servers|String|A comma separated list of servers to blacklist: [service@]host:port,[service@]host2:port,[service@]host3:port|
+
+Example in Java:
+
+ from("direct:start")
+ .serviceCall("foo")
+ .staticServiceDiscovery()
+ .servers("service1@host1:80,service1@host2:80")
+ .servers("service2@host1:8080,service2@host2:8080,service2@host3:8080")
+ .end()
+ .blacklistFilter()
+ .servers("service2@host2:8080")
+ .end()
+ .to("mock:result");
+
+And in XML:
+
+    <route>
+        <from uri="direct:start"/>
+        <serviceCall name="foo">
+            <staticServiceDiscovery>
+                <servers>service1@host1:80,service1@host2:80</servers>
+                <servers>service2@host1:8080,service2@host2:8080,service2@host3:8080</servers>
+            </staticServiceDiscovery>
+            <blacklistServiceFilter>
+                <servers>service2@host2:8080</servers>
+            </blacklistServiceFilter>
+        </serviceCall>
+        <to uri="mock:result"/>
+    </route>
+
+### Custom Service Filter
+
+Service Filters choose suitable candidates from the service definitions
+found in the service discovery.
+
+The service filter has access to the current exchange, which allows you
+to create service filters comparing service metadata with message
+content.
+
+Assuming you have labeled one of the services in your service discovery
+to support a certain type of requests:
+
+ serviceDiscovery.addServer(new DefaultServiceDefinition("service", "127.0.0.1", 1003,
+ Collections.singletonMap("supports", "foo")));
+
+The current exchange has a property which says that it needs a foo
+service:
+
+ exchange.setProperty("needs", "foo");
+
+You can then use a `ServiceFilter` to select the service instances which
+match the exchange:
+
+ from("direct:start")
+ .serviceCall()
+ .name("service")
+ .serviceFilter((exchange, services) -> services.stream()
+ .filter(serviceDefinition -> Optional.ofNullable(serviceDefinition.getMetadata()
+ .get("supports"))
+ .orElse("")
+ .equals(exchange.getProperty("needs", String.class)))
+ .collect(Collectors.toList()))
+ .end()
+ .to("mock:result");
+
+## Shared configurations
+
+The Service Call EIP can be configured straight on the route definition
+or through shared configurations, here an example with two
+configurations registered in the `CamelContext`:
+
+ ServiceCallConfigurationDefinition globalConf = new ServiceCallConfigurationDefinition();
+ globalConf.setServiceDiscovery(
+ name -> Arrays.asList(
+ new DefaultServiceDefinition(name, "my.host1.com", 8080),
+ new DefaultServiceDefinition(name, "my.host2.com", 443))
+ );
+ globalConf.setServiceChooser(
+ list -> list.get(ThreadLocalRandom.current().nextInt(list.size()))
+ );
+
+ ServiceCallConfigurationDefinition httpsConf = new ServiceCallConfigurationDefinition();
+ httpsConf.setServiceFilter(
+ list -> list.stream().filter(s -> s.getPort() == 443).collect(toList())
+ );
+
+ getContext().setServiceCallConfiguration(globalConf);
+ getContext().addServiceCallConfiguration("https", httpsConf);
+
+Each Service Call definition and configuration will inherit from the
+`globalConf` which can be seen as default configuration, then you can
+reference the `httpsConf` in your route:
+
+ from("direct:start")
+ .serviceCall()
+ .name("foo")
+ .serviceCallConfiguration("https")
+ .end()
+ .to("mock:result");
+
+This route will leverage the service discovery and service chooser from
+`globalConf` and the service filter from `httpsConf`, but you can
+override any of them if needed straight on the route:
+
+ from("direct:start")
+ .serviceCall()
+ .name("foo")
+ .serviceCallConfiguration("https")
+ .serviceChooser(list -> list.get(0))
+ .end()
+ .to("mock:result");
diff --git a/camel-servicenow.md b/camel-servicenow.md
index 61bce6ba4f3d411a14022fac8423362b3d95ffc7..5e5afef5d38d5ff2c09ec6f9d200429458428d3e 100644
--- a/camel-servicenow.md
+++ b/camel-servicenow.md
@@ -32,7 +32,7 @@ for this component:
-
+
-
+
TABLE
RETRIEVE
GET
/api/now/v1/table/{table_name}/{sys_id}
-
+
CREATE
POST
/api/now/v1/table/{table_name}
-
+
MODIFY
PUT
/api/now/v1/table/{table_name}/{sys_id}
-
+
DELETE
DELETE
/api/now/v1/table/{table_name}/{sys_id}
-
+
UPDATE
PATCH
/api/now/v1/table/{table_name}/{sys_id}
-
+
AGGREGATE
RETRIEVE
GET
/api/now/v1/stats/{table_name}
-
+
IMPORT
RETRIEVE
GET
/api/now/import/{table_name}/{sys_id}
-
+
CREATE
POST
/api/now/import/{table_name}
@@ -106,7 +106,7 @@ Documentation](http://wiki.servicenow.com/index.php?title=REST_API#Available_API
-
+
-
+
TABLE
RETRIEVE
GET
/api/now/v1/table/{table_name}/{sys_id}
-
+
CREATE
POST
/api/now/v1/table/{table_name}
-
+
MODIFY
PUT
/api/now/v1/table/{table_name}/{sys_id}
-
+
DELETE
DELETE
/api/now/v1/table/{table_name}/{sys_id}
-
+
UPDATE
PATCH
/api/now/v1/table/{table_name}/{sys_id}
-
+
AGGREGATE
RETRIEVE
GET
/api/now/v1/stats/{table_name}
-
+
IMPORT
RETRIEVE
GET
/api/now/import/{table_name}/{sys_id}
-
+
CREATE
POST
/api/now/import/{table_name}
-
+
ATTACHMENT
RETRIEVE
GET
/api/now/api/now/attachment/{sys_id}
-
+
CONTENT
GET
/api/now/attachment/{sys_id}/file
-
+
UPLOAD
POST
/api/now/api/now/attachment/file
-
+
DELETE
DELETE
/api/now/attachment/{sys_id}
-
+
SCORECARDS
RETRIEVE
PERFORMANCE_ANALYTICS
GET
/api/now/pa/scorecards
-
+
MISC
RETRIEVE
USER_ROLE_INHERITANCE
GET
/api/global/user_role_inheritance
-
+
CREATE
IDENTIFY_RECONCILE
POST
/api/now/identifyreconcile
-
+
SERVICE_CATALOG
RETRIEVE
GET
/sn_sc/servicecatalog/catalogs/{sys_id}
-
+
RETRIEVE
CATEGORIES
GET
/sn_sc/servicecatalog/catalogs/{sys_id}/categories
-
+
SERVICE_CATALOG_ITEMS
RETRIEVE
GET
/sn_sc/servicecatalog/items/{sys_id}
-
+
RETRIEVE
SUBMIT_GUIDE
POST
/sn_sc/servicecatalog/items/{sys_id}/submit_guide
-
+
RETRIEVE
CHECKOUT_GUIDE
POST
/sn_sc/servicecatalog/items/{sys_id}/checkout_guide
-
+
CREATE
SUBJECT_CART
POST
/sn_sc/servicecatalog/items/{sys_id}/add_to_cart
-
+
CREATE
SUBJECT_PRODUCER
POST
/sn_sc/servicecatalog/items/{sys_id}/submit_producer
-
+
SERVICE_CATALOG_CARTS
RETRIEVE
GET
/sn_sc/servicecatalog/cart
-
+
RETRIEVE
DELIVERY_ADDRESS
GET
/sn_sc/servicecatalog/cart/delivery_address/{user_id}
-
+
RETRIEVE
CHECKOUT
POST
/sn_sc/servicecatalog/cart/checkout
-
+
UPDATE
POST
/sn_sc/servicecatalog/cart/{cart_item_id}
-
+
UPDATE
CHECKOUT
POST
/sn_sc/servicecatalog/cart/submit_order
-
+
DELETE
DELETE
/sn_sc/servicecatalog/cart/{sys_id}/empty
-
+
SERVICE_CATALOG_CATEGORIES
RETRIEVE
@@ -326,7 +326,7 @@ API Mapping
[Helsinki REST API
Documentation](https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/integrate/inbound-rest/reference/r_RESTResources.html)
-# Usage examples:
+# Examples
**Retrieve 10 Incidents**
diff --git a/camel-servlet.md b/camel-servlet.md
index 6815bf72a865ffc0ab2a2303c609f629d73ed68a..e0058fe54fbcec60ef4c8d016d1ef8448e1f5948 100644
--- a/camel-servlet.md
+++ b/camel-servlet.md
@@ -42,23 +42,23 @@ Camel will also populate **all** `request.parameter` and
[http://myserver/myserver?orderid=123](http://myserver/myserver?orderid=123), the exchange will contain a
header named `orderid` with the value `123`.
-# Usage
+# Examples
You can consume only `from` endpoints generated by the Servlet
component. Therefore, it should be used only as input into your Camel
routes. To issue HTTP requests against other HTTP endpoints, use the
[HTTP Component](#http-component.adoc).
-# Example `CamelHttpTransportServlet` configuration
+## Example `CamelHttpTransportServlet` configuration
-## Camel Spring Boot / Camel Quarkus
+### Camel Spring Boot / Camel Quarkus
When running camel-servlet on the Spring Boot or Camel Quarkus runtimes,
`CamelHttpTransportServlet` is configured for you automatically and is
driven by configuration properties. Refer to the camel-servlet
configuration documentation for these runtimes.
-## Servlet container / application server
+### Servlet container / application server
If you’re running Camel standalone on a Servlet container or application
server, you can use `web.xml` to configure `CamelHttpTransportServlet`.
@@ -78,7 +78,7 @@ path `/services`.
-# Example route
+## Example route
from("servlet:hello").process(new Processor() {
public void process(Exchange exchange) throws Exception {
@@ -92,7 +92,7 @@ path `/services`.
}
});
-# Camel Servlet HTTP endpoint path
+## Camel Servlet HTTP endpoint path
The full path where the camel-servlet HTTP endpoint is published depends
on:
@@ -108,7 +108,7 @@ For example, if the application context path is `/camel` and
`/services/*`. Then a Camel route like `from("servlet:hello")` would be
published to a path like [http://localhost:8080/camel/services/hello](http://localhost:8080/camel/services/hello).
-# Servlet asynchronous support
+## Servlet asynchronous support
To enable Camel to benefit from Servlet asynchronous support, you must
enable the `async` boolean init parameter by setting it to `true`.
@@ -151,7 +151,7 @@ follows.
-# Camel JARs on an application server boot classpath
+## Camel JARs on an application server boot classpath
If deploying into an application server / servlet container and you
choose to have Camel JARs such as `camel-core`, `camel-servlet`, etc on
@@ -223,7 +223,7 @@ unforeseen side effects.
|Name|Description|Default|Type|
|---|---|---|---|
|contextPath|The context-path to use||string|
-|disableStreamCache|Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body.|false|boolean|
+|disableStreamCache|Determines whether the raw input stream is cached or not. The Camel consumer (camel-servlet, camel-jetty etc.) will by default cache the input stream to support reading it multiple times, to ensure that Camel can retrieve all data from the stream. However, you can set this option to true when you, for example, need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into the message body if this option is false, to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint, then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The producer (camel-http) will by default cache the response body stream. If this option is set to true, then producers will not cache the response body stream but use the response stream as-is (the stream can only be read once) as the message body.|false|boolean|
|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter header to and from Camel message.||object|
|httpBinding|To use a custom HttpBinding to control the mapping between Camel message and HttpClient.||object|
|chunked|If this option is false, the Servlet will disable HTTP streaming and set the content-length header on the response|true|boolean|
diff --git a/camel-setBody-eip.md b/camel-setBody-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d6089becc0b57febe82d93dc4fcae4a7e93248d
--- /dev/null
+++ b/camel-setBody-eip.md
@@ -0,0 +1,60 @@
+# SetBody-eip.md
+
+Camel supports the [Message
+Translator](http://www.enterpriseintegrationpatterns.com/MessageTranslator.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+The [Message Translator](#message-translator.adoc) can be done in
+different ways in Camel:
+
+- Using [Transform](#transform-eip.adoc) or [Set
+ Body](#setBody-eip.adoc) in the DSL
+
+- Calling a [Processor](#manual::processor.adoc) or
+ [bean](#manual::bean-integration.adoc) to perform the transformation
+
+- Using template-based [Components](#ROOT:index.adoc), with the
+ template being the source for how the message is translated
+
+- Messages can also be transformed using [Data
+ Format](#manual::data-format.adoc) to marshal and unmarshal messages
+ in different encodings.
+
+This page documents the first approach, using the Set Body EIP.
+
+# Options
+
+# Exchange properties
+
+# Examples
+
+You can use a [Set Body](#setBody-eip.adoc) which uses an
+[Expression](#manual::expression.adoc) to do the transformation:
+
+In the example below, we prepend Hello to the message body using the
+[Simple](#components:languages:simple-language.adoc) language:
+
+Java
+
+    from("direct:cheese")
+        .setBody(simple("Hello ${body}"))
+        .to("log:hello");
+
+XML
+
+    <route>
+        <from uri="direct:cheese"/>
+        <setBody>
+            <simple>Hello ${body}</simple>
+        </setBody>
+        <to uri="log:hello"/>
+    </route>
+
+# What is the difference between Transform and Set Body?
+
+The Transform EIP always sets the result on the OUT message body.
+
+Set Body sets the result according to the [Exchange
+Pattern](#manual::exchange-pattern.adoc) on the `Exchange`.
diff --git a/camel-setHeader-eip.md b/camel-setHeader-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ad1b8b3d03b9202a985303ab874bc29ad9a36d2
--- /dev/null
+++ b/camel-setHeader-eip.md
@@ -0,0 +1,79 @@
+# SetHeader-eip.md
+
+The SetHeader EIP is used for setting a [message](#message.adoc) header.
+
+# Options
+
+# Exchange properties
+
+# Using Set Header
+
+The following example shows how to set a header in a Camel route:
+
+Java
+
+    from("direct:a")
+        .setHeader("myHeader", constant("test"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setHeader name="myHeader">
+            <constant>test</constant>
+        </setHeader>
+        <to uri="direct:b"/>
+    </route>
+
+In the example, the header value is a
+[constant](#components:languages:constant-language.adoc).
+
+Any of the Camel languages can be used, such as
+[Simple](#components:languages:simple-language.adoc).
+
+Java
+
+    from("direct:a")
+        .setHeader("randomNumber", simple("${random(1,100)}"))
+        .to("direct:b");
+
+The header can also be set using the fluent syntax:
+
+ from("direct:a")
+ .setHeader("randomNumber").simple("${random(1,100)}")
+ .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setHeader name="randomNumber">
+            <simple>${random(1,100)}</simple>
+        </setHeader>
+        <to uri="direct:b"/>
+    </route>
+
+See
+[JSONPath](#components:languages:jsonpath-language.adoc#_using_header_as_input)
+for another example.
+
+## Setting a header from another header
+
+You can also set a header with the value from another header.
+
+In the example, we set the header `foo` with the value from an existing
+header named `bar`.
+
+Java
+
+    from("direct:a")
+        .setHeader("foo", header("bar"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setHeader name="foo">
+            <header>bar</header>
+        </setHeader>
+        <to uri="direct:b"/>
+    </route>
+
+If you need to set several headers on the message, see [Set
+Headers](#eips:setHeaders-eip.adoc).
diff --git a/camel-setHeaders-eip.md b/camel-setHeaders-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..df0241f6605c50a637625f592f917e6f31fe6953
--- /dev/null
+++ b/camel-setHeaders-eip.md
@@ -0,0 +1,149 @@
+# SetHeaders-eip.md
+
+The SetHeaders EIP is used for setting multiple [message](#message.adoc)
+headers at the same time.
+
+# Options
+
+# Exchange properties
+
+# Using Set Headers
+
+The following example shows how to set multiple headers in a Camel route
+using Java, XML or YAML. Note that the syntax is slightly different in
+each case.
+
+Java
+
+    from("direct:a")
+        .setHeaders("myHeader", constant("test"), "otherHeader", constant("other"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setHeaders>
+            <setHeader name="myHeader">
+                <constant>test</constant>
+            </setHeader>
+            <setHeader name="otherHeader">
+                <constant>other</constant>
+            </setHeader>
+        </setHeaders>
+        <to uri="direct:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:a
+        steps:
+          - setHeaders:
+              headers:
+                - name: myHeader
+                  constant: test
+                - name: otherHeader
+                  constant: other
+          - to:
+              uri: direct:b
+
+In this example, the header values are
+[constants](#components:languages:constant-language.adoc).
+
+Any of the Camel languages can be used, such as
+[Simple](#components:languages:simple-language.adoc).
+
+Java
+
+    from("direct:a")
+        .setHeaders("randomNumber", simple("${random(1,100)}"), "body", simple("${body}"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setHeaders>
+            <setHeader name="randomNumber">
+                <simple>${random(1,100)}</simple>
+            </setHeader>
+            <setHeader name="body">
+                <simple>${body}</simple>
+            </setHeader>
+        </setHeaders>
+        <to uri="direct:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:a
+        steps:
+          - setHeaders:
+              headers:
+                - name: randomNumber
+                  simple: "${random(1,100)}"
+                - name: body
+                  simple: "${body}"
+          - to:
+              uri: direct:b
+
+## Setting a header from another header
+
+You can also set several headers where later ones depend on earlier
+ones.
+
+In the example, we first set the header `foo` to the body, and then set
+`bar` based on comparing `foo` with a value.
+
+Java
+
+    from("direct:a")
+        .setHeaders("foo", simple("${body}"), "bar", simple("${header.foo} > 10", Boolean.class))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setHeaders>
+            <setHeader name="foo">
+                <simple>${body}</simple>
+            </setHeader>
+            <setHeader name="bar">
+                <simple resultType="java.lang.Boolean">${header.foo} > 10</simple>
+            </setHeader>
+        </setHeaders>
+        <to uri="direct:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:a
+        steps:
+          - setHeaders:
+              headers:
+                - name: foo
+                  simple: "${body}"
+                - name: bar
+                  simple:
+                    expression: "${header.foo} > 10"
+                    resultType: "boolean"
+          - to:
+              uri: direct:b
+
+## Using a Map with Java DSL
+
+It’s also possible to build a Map and pass it as the single argument to
+`setHeaders()`. If the order in which the headers should be set is
+important, use a `LinkedHashMap`.
+
+Java
+
+    private Map<String, Expression> headerMap = new java.util.LinkedHashMap<>();
+    headerMap.put("foo", ConstantLanguage.constant("ABC"));
+    headerMap.put("bar", ConstantLanguage.constant("XYZ"));
+
+    from("direct:startMap")
+        .setHeaders(headerMap)
+        .to("direct:b");
+
+If the ordering is not critical, then
+`Map.of(name1, expr1, name2, expr2...)` can be used.
+
+Java
+
+    from("direct:startMap")
+        .setHeaders(Map.of("foo", "ABC", "bar", "XYZ"))
+        .to("direct:b");
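As a plain-Java aside (independent of Camel), the ordering guarantee that motivates `LinkedHashMap` here can be observed directly: a `LinkedHashMap` iterates its entries in insertion order, whereas `Map.of` makes no ordering promise.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HeaderOrder {
    public static void main(String[] args) {
        // LinkedHashMap preserves put() order, so headers would be set
        // in exactly this sequence
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("foo", "ABC");
        headers.put("bar", "XYZ");
        headers.put("baz", "123");

        List<String> names = new ArrayList<>(headers.keySet());
        System.out.println(names); // prints [foo, bar, baz]
    }
}
```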
diff --git a/camel-setProperty-eip.md b/camel-setProperty-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9764640f78fe0ea39471e67068293ca7b185ebe
--- /dev/null
+++ b/camel-setProperty-eip.md
@@ -0,0 +1,75 @@
+# SetProperty-eip.md
+
+The SetProperty EIP is used for setting an
+[Exchange](#manual:ROOT:exchange.adoc) property.
+
+An `Exchange` property is a key/value set as a `Map` on the
+`org.apache.camel.Exchange` instance. This is **not** for setting
+[property placeholders](#manual:ROOT:using-propertyplaceholder.adoc).
+
+# Options
+
+# Exchange properties
+
+# Example
+
+The following example shows how to set a property on the exchange in a
+Camel route:
+
+Java
+
+    from("direct:a")
+        .setProperty("myProperty", constant("test"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setProperty name="myProperty">
+            <constant>test</constant>
+        </setProperty>
+        <to uri="direct:b"/>
+    </route>
+
+## Setting an exchange property from another exchange property
+
+You can also set an exchange property with the value from another
+exchange property.
+
+In the example, we set the exchange property `foo` with the value from an
+existing exchange property named `bar`.
+
+Java
+
+    from("direct:a")
+        .setProperty("foo", exchangeProperty("bar"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setProperty name="foo">
+            <exchangeProperty>bar</exchangeProperty>
+        </setProperty>
+        <to uri="direct:b"/>
+    </route>
+
+## Setting an exchange property with the current message body
+
+It is also possible to set an exchange property with a value from
+anything on the `Exchange` such as the message body:
+
+Java
+
+    from("direct:a")
+        .setProperty("myBody", body())
+        .to("direct:b");
+
+XML
+
+We use the [Simple](#components:languages:simple-language.adoc) language
+to refer to the message body:
+
+    <route>
+        <from uri="direct:a"/>
+        <setProperty name="myBody">
+            <simple>${body}</simple>
+        </setProperty>
+        <to uri="direct:b"/>
+    </route>
+
diff --git a/camel-setVariable-eip.md b/camel-setVariable-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..1535b950bb1e40c8954aa3021e027d3dd1e4ff0c
--- /dev/null
+++ b/camel-setVariable-eip.md
@@ -0,0 +1,67 @@
+# SetVariable-eip.md
+
+The SetVariable EIP is used for setting an
+[Exchange](#manual:ROOT:exchange.adoc) variable.
+
+# Options
+
+# Exchange properties
+
+# Example
+
+The following example shows how to set a variable on the exchange in a
+Camel route:
+
+Java
+
+    from("direct:a")
+        .setVariable("myVar", constant("test"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setVariable name="myVar">
+            <constant>test</constant>
+        </setVariable>
+        <to uri="direct:b"/>
+    </route>
+
+## Setting a variable from a message header
+
+You can also set a variable with the value from a message header.
+
+Java
+
+    from("direct:a")
+        .setVariable("foo", header("bar"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setVariable name="foo">
+            <header>bar</header>
+        </setVariable>
+        <to uri="direct:b"/>
+    </route>
+
+## Setting a variable with the current message body
+
+It is of course also possible to set a variable with a value from
+anything on the `Exchange` such as the message body:
+
+Java
+
+    from("direct:a")
+        .setVariable("myBody", body())
+        .to("direct:b");
+
+XML
+
+We use the [Simple](#components:languages:simple-language.adoc) language
+to refer to the message body:
+
+    <route>
+        <from uri="direct:a"/>
+        <setVariable name="myBody">
+            <simple>${body}</simple>
+        </setVariable>
+        <to uri="direct:b"/>
+    </route>
+
diff --git a/camel-setVariables-eip.md b/camel-setVariables-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..06bd58a6f5e91f84a8d639fa53ac5157e0d09c77
--- /dev/null
+++ b/camel-setVariables-eip.md
@@ -0,0 +1,149 @@
+# SetVariables-eip.md
+
+The SetVariables EIP is used for setting multiple
+[Exchange](#manual:ROOT:exchange.adoc) variables at the same time.
+
+# Options
+
+# Exchange properties
+
+# Using Set Variables
+
+The following example shows how to set multiple variables in a Camel
+route using Java, XML or YAML. Note that the syntax is slightly
+different in each case.
+
+Java
+
+    from("direct:a")
+        .setVariables("myVar", constant("test"), "otherVar", constant("other"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setVariables>
+            <setVariable name="myVar">
+                <constant>test</constant>
+            </setVariable>
+            <setVariable name="otherVar">
+                <constant>other</constant>
+            </setVariable>
+        </setVariables>
+        <to uri="direct:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:a
+        steps:
+          - setVariables:
+              variables:
+                - name: myVar
+                  constant: test
+                - name: otherVar
+                  constant: other
+          - to:
+              uri: direct:b
+
+In this example, the variable values are
+[constants](#components:languages:constant-language.adoc).
+
+Any of the Camel languages can be used, such as
+[Simple](#components:languages:simple-language.adoc).
+
+Java
+
+    from("direct:a")
+        .setVariables("randomNumber", simple("${random(1,100)}"), "body", simple("${body}"))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setVariables>
+            <setVariable name="randomNumber">
+                <simple>${random(1,100)}</simple>
+            </setVariable>
+            <setVariable name="body">
+                <simple>${body}</simple>
+            </setVariable>
+        </setVariables>
+        <to uri="direct:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:a
+        steps:
+          - setVariables:
+              variables:
+                - name: randomNumber
+                  simple: "${random(1,100)}"
+                - name: body
+                  simple: "${body}"
+          - to:
+              uri: direct:b
+
+## Setting a variable from another variable
+
+You can also set several variables where later ones depend on earlier
+ones.
+
+In the example, we first set the variable `foo` to the body, and then
+set `bar` based on comparing `foo` with a value.
+
+Java
+
+    from("direct:a")
+        .setVariables("foo", simple("${body}"), "bar", simple("${variable.foo} > 10", Boolean.class))
+        .to("direct:b");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <setVariables>
+            <setVariable name="foo">
+                <simple>${body}</simple>
+            </setVariable>
+            <setVariable name="bar">
+                <simple resultType="java.lang.Boolean">${variable.foo} > 10</simple>
+            </setVariable>
+        </setVariables>
+        <to uri="direct:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: direct:a
+        steps:
+          - setVariables:
+              variables:
+                - name: foo
+                  simple: "${body}"
+                - name: bar
+                  simple:
+                    expression: "${variable.foo} > 10"
+                    resultType: "boolean"
+          - to:
+              uri: direct:b
+
+## Using a Map with Java DSL
+
+It’s also possible to build a Map and pass it as the single argument to
+`setVariables()`. If the order in which the variables should be set is
+important, use a `LinkedHashMap`.
+
+Java
+
+    private Map<String, Expression> variableMap = new java.util.LinkedHashMap<>();
+    variableMap.put("foo", ConstantLanguage.constant("ABC"));
+    variableMap.put("bar", ConstantLanguage.constant("XYZ"));
+
+    from("direct:startMap")
+        .setVariables(variableMap)
+        .to("direct:b");
+
+If the ordering is not critical, then
+`Map.of(name1, expr1, name2, expr2...)` can be used.
+
+Java
+
+    from("direct:startMap")
+        .setVariables(Map.of("foo", "ABC", "bar", "XYZ"))
+        .to("direct:b");
diff --git a/camel-sftp.md b/camel-sftp.md
index 96855ad0fe16d20fc3c32c98ce89449308e8d020..a53fe4607adf3e6b8f58f3fb61f0acaa7b0b53f6 100644
--- a/camel-sftp.md
+++ b/camel-sftp.md
@@ -17,7 +17,9 @@ for this component:
-# Restoring Deprecated Key Types and Algorithms
+# Usage
+
+## Restoring Deprecated Key Types and Algorithms
As of Camel 3.17.0, key types and algorithms that use SHA1 have been
deprecated. These can be restored, if necessary, by setting JSch
diff --git a/camel-shiro.md b/camel-shiro.md
new file mode 100644
index 0000000000000000000000000000000000000000..b033e9c982b7f7a5c878f3534bf316644dd34300
--- /dev/null
+++ b/camel-shiro.md
@@ -0,0 +1,326 @@
+# Shiro.md
+
+**Since Camel 2.5**
+
+The Shiro Security component in Camel is a security-focused component,
+based on the Apache Shiro security project.
+
+Apache Shiro is a powerful and flexible open-source security framework
+that cleanly handles authentication, authorization, enterprise session
+management and cryptography. The objective of the Apache Shiro project
+is to provide the most robust and comprehensive application security
+framework available while also being straightforward to understand and
+extremely simple to use.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-shiro</artifactId>
+        <version>x.x.x</version>
+        <!-- use the same version as your Camel core version -->
+    </dependency>
+
+
+# Usage
+
+The camel-shiro security component allows authentication and
+authorization support to be applied to different segments of a Camel
+route.
+
+Shiro security is applied on a route using a Camel Policy. A Policy in
+Camel uses a strategy pattern for applying interceptors on Camel
+Processors. It offers the ability to apply cross-cutting concerns
+(for example, security, transactions, etc.) on sections/segments of a
+Camel route.
+
+## Shiro Security Basics
+
+To use Shiro security on a Camel route, a ShiroSecurityPolicy object
+must be instantiated with security configuration details (including
+users, passwords, roles, etc.). This object must then be applied to a
+Camel route. This ShiroSecurityPolicy object may also be registered in
+the Camel registry (JNDI or ApplicationContextRegistry) and then used on
+other routes in the Camel Context.
+
+Configuration details are provided to the ShiroSecurityPolicy using an
+Ini file (properties file) or an Ini object. The Ini file is a standard
+Shiro configuration file containing user/role details, as shown below:
+
+ [users]
+ # user 'ringo' with password 'starr' and the 'sec-level1' role
+ ringo = starr, sec-level1
+ george = harrison, sec-level2
+ john = lennon, sec-level3
+ paul = mccartney, sec-level3
+
+    [roles]
+    # the 'sec-level3' role has all permissions, indicated by the
+    # wildcard '*'
+    sec-level3 = *
+
+    # the 'sec-level2' role can perform any action in zone1
+    sec-level2 = zone1:*
+
+    # the 'sec-level1' role can only perform readonly actions in zone1
+    sec-level1 = zone1:readonly:*
+
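The wildcard permission strings above follow Shiro's colon-separated convention. As a rough illustration (a deliberately simplified sketch, not Shiro's actual `WildcardPermission` implementation, which also supports comma-separated sub-parts), matching works part by part:

```java
public class WildcardSketch {

    // Simplified Shiro-style matching: each colon-separated part of the
    // granted permission must equal the requested part or be "*"; if the
    // granted permission is shorter, the remaining parts are implied.
    static boolean implies(String granted, String requested) {
        String[] g = granted.split(":");
        String[] r = requested.split(":");
        for (int i = 0; i < r.length; i++) {
            if (i >= g.length) {
                return true; // granted ran out of parts: remainder implied
            }
            if (!g[i].equals("*") && !g[i].equals(r[i])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(implies("*", "zone1:readwrite"));                // sec-level3: true
        System.out.println(implies("zone1:*", "zone1:readwrite"));          // sec-level2: true
        System.out.println(implies("zone1:readonly:*", "zone1:readwrite")); // sec-level1: false
    }
}
```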
+## Instantiating a ShiroSecurityPolicy Object
+
+A ShiroSecurityPolicy object is instantiated as follows:
+
+ private final String iniResourcePath = "classpath:shiro.ini";
+ private final byte[] passPhrase = {
+ (byte) 0x08, (byte) 0x09, (byte) 0x0A, (byte) 0x0B,
+ (byte) 0x0C, (byte) 0x0D, (byte) 0x0E, (byte) 0x0F,
+ (byte) 0x10, (byte) 0x11, (byte) 0x12, (byte) 0x13,
+ (byte) 0x14, (byte) 0x15, (byte) 0x16, (byte) 0x17};
+    List<Permission> permissionsList = new ArrayList<>();
+ Permission permission = new WildcardPermission("zone1:readwrite:*");
+ permissionsList.add(permission);
+
+ final ShiroSecurityPolicy securityPolicy =
+ new ShiroSecurityPolicy(iniResourcePath, passPhrase, true, permissionsList);
+
+## ShiroSecurityPolicy Options
+
+|Name|Default Value|Type|Description|
+|---|---|---|---|
+|iniResourcePath or ini|none|Resource String or `Ini` object|A mandatory resource String for the iniResourcePath, or an instance of an `Ini` object, must be passed to the security policy. Resources can be acquired from the file system, classpath, or URLs when prefixed with "file:", "classpath:", or "url:" respectively. For example, classpath:shiro.ini|
+|passPhrase|An AES 128 based key|byte[]|A passPhrase to decrypt ShiroSecurityToken(s) sent along with Message Exchanges|
+|alwaysReauthenticate|true|boolean|Setting to ensure re-authentication on every request. If set to false, the user is authenticated and locked such that only requests from the same user going forward are authenticated.|
+|permissionsList|none|`List<Permission>`|A list of permissions required in order for an authenticated user to be authorized to perform further action, i.e., continue further on the route. If no permissions list or roles list (see below) is provided to the ShiroSecurityPolicy object, then authorization is deemed as not required. Note that the default is that authorization is granted if any of the Permission objects in the list are applicable.|
+|rolesList|none|`List<String>`|A list of roles required in order for an authenticated user to be authorized to perform further action, i.e., continue further on the route. If no roles list or permissions list (see above) is provided to the ShiroSecurityPolicy object, then authorization is deemed as not required. Note that the default is that authorization is granted if any of the roles in the list are applicable.|
+|cipherService|AES|org.apache.shiro.crypto.CipherService|Shiro ships with AES & Blowfish-based CipherServices. You may use one of these or pass in your own Cipher implementation.|
+|base64|false|boolean|To use base64 encoding for the security token header, which allows transferring the header over JMS etc. This option must also be set on the ShiroSecurityTokenInjector.|
+|allPermissionsRequired|false|boolean|The default is that authorization is granted if any of the Permission objects in the permissionsList parameter are applicable. Set this to true to require all the permissions to be met.|
+|allRolesRequired|false|boolean|The default is that authorization is granted if any of the roles in the rolesList parameter are applicable. Set this to true to require all the roles to be met.|
+
+## Applying Shiro Authentication on a Camel Route
+
+The ShiroSecurityPolicy tests and permits incoming message exchanges
+containing an encrypted SecurityToken in the Message Header to proceed
+further following proper authentication. The SecurityToken object
+contains Username/Password details that are used to determine whether
+the user is a valid user.
+
+ protected RouteBuilder createRouteBuilder() throws Exception {
+ final ShiroSecurityPolicy securityPolicy =
+ new ShiroSecurityPolicy("classpath:shiro.ini", passPhrase);
+
+ return new RouteBuilder() {
+ public void configure() {
+ onException(UnknownAccountException.class).
+ to("mock:authenticationException");
+ onException(IncorrectCredentialsException.class).
+ to("mock:authenticationException");
+ onException(LockedAccountException.class).
+ to("mock:authenticationException");
+ onException(AuthenticationException.class).
+ to("mock:authenticationException");
+
+ from("direct:secureEndpoint").
+ to("log:incoming payload").
+ policy(securityPolicy).
+ to("mock:success");
+ }
+ };
+ }
+
+## Applying Shiro Authorization on a Camel Route
+
+Authorization can be applied on a camel route by associating a
+Permissions List with the ShiroSecurityPolicy. The Permissions List
+specifies the permissions necessary for the user to proceed with the
+execution of the route segment. If the user does not have the proper
+permission set, the request is not authorized to continue any further.
+
+ protected RouteBuilder createRouteBuilder() throws Exception {
+ final ShiroSecurityPolicy securityPolicy =
+ new ShiroSecurityPolicy("./src/test/resources/securityconfig.ini", passPhrase);
+
+ return new RouteBuilder() {
+ public void configure() {
+ onException(UnknownAccountException.class).
+ to("mock:authenticationException");
+ onException(IncorrectCredentialsException.class).
+ to("mock:authenticationException");
+ onException(LockedAccountException.class).
+ to("mock:authenticationException");
+ onException(AuthenticationException.class).
+ to("mock:authenticationException");
+
+ from("direct:secureEndpoint").
+ to("log:incoming payload").
+ policy(securityPolicy).
+ to("mock:success");
+ }
+ };
+ }
+
+## Creating a ShiroSecurityToken and injecting it into a Message Exchange
+
+A ShiroSecurityToken object may be created and injected into a Message
+Exchange using a Shiro Processor called ShiroSecurityTokenInjector. An
+example of injecting a ShiroSecurityToken using a
+ShiroSecurityTokenInjector in the client is shown below
+
+ ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken("ringo", "starr");
+ ShiroSecurityTokenInjector shiroSecurityTokenInjector =
+ new ShiroSecurityTokenInjector(shiroSecurityToken, passPhrase);
+
+ from("direct:client").
+ process(shiroSecurityTokenInjector).
+ to("direct:secureEndpoint");
+
+## Sending Messages to routes secured by a ShiroSecurityPolicy
+
+Messages and Message Exchanges sent along the camel route where the
+security policy is applied need to be accompanied by a SecurityToken in
+the Exchange Header. The SecurityToken is an encrypted object that holds
+a Username and Password. The SecurityToken is encrypted using AES 128
+bit security by default and can be changed to any cipher of your choice.
+
+Given below is an example of how a request may be sent using a
+ProducerTemplate in Camel along with a SecurityToken
+
+ @Test
+ public void testSuccessfulShiroAuthenticationWithNoAuthorization() throws Exception {
+ //Incorrect password
+ ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken("ringo", "stirr");
+
+ // TestShiroSecurityTokenInjector extends ShiroSecurityTokenInjector
+ TestShiroSecurityTokenInjector shiroSecurityTokenInjector =
+ new TestShiroSecurityTokenInjector(shiroSecurityToken, passPhrase);
+
+ successEndpoint.expectedMessageCount(1);
+ failureEndpoint.expectedMessageCount(0);
+
+ template.send("direct:secureEndpoint", shiroSecurityTokenInjector);
+
+ successEndpoint.assertIsSatisfied();
+ failureEndpoint.assertIsSatisfied();
+ }
+
+## Using ShiroSecurityToken
+
+You can send a message to a Camel route with a header of key
+`ShiroSecurityConstants.SHIRO_SECURITY_TOKEN` of the type
+`org.apache.camel.component.shiro.security.ShiroSecurityToken` that
+contains the username and password. For example:
+
+ ShiroSecurityToken shiroSecurityToken = new ShiroSecurityToken("ringo", "starr");
+
+ template.sendBodyAndHeader("direct:secureEndpoint", "Beatle Mania", ShiroSecurityConstants.SHIRO_SECURITY_TOKEN, shiroSecurityToken);
+
+You can also provide the username and password in two different headers
+as shown below:
+
+    Map<String, Object> headers = new HashMap<>();
+ headers.put(ShiroSecurityConstants.SHIRO_SECURITY_USERNAME, "ringo");
+ headers.put(ShiroSecurityConstants.SHIRO_SECURITY_PASSWORD, "starr");
+ template.sendBodyAndHeaders("direct:secureEndpoint", "Beatle Mania", headers);
+
+When you use the username and password headers, the
+ShiroSecurityPolicy in the Camel route will automatically transform
+those into a single header with the key
+`ShiroSecurityConstants.SHIRO_SECURITY_TOKEN` holding the token. The
+token is either a `ShiroSecurityToken` instance, or a base64
+representation as a String (the latter when you have set base64=true).
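The "AES 128 bit" encryption mentioned above simply means the token is encrypted with a 16-byte symmetric key. The idea can be sketched with the JDK's own crypto API (a standalone illustration, not the Camel/Shiro classes themselves):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class AesTokenSketch {

    // 16 bytes = an AES-128 key, analogous to the passPhrase byte[]
    // passed to ShiroSecurityPolicy / ShiroSecurityTokenInjector
    static final byte[] PASS_PHRASE = {
        0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
        0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17};

    static byte[] crypt(int mode, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES"); // JDK default: AES/ECB/PKCS5Padding
        cipher.init(mode, new SecretKeySpec(PASS_PHRASE, "AES"));
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] token = "ringo:starr".getBytes(StandardCharsets.UTF_8);
        byte[] encrypted = crypt(Cipher.ENCRYPT_MODE, token);
        byte[] decrypted = crypt(Cipher.DECRYPT_MODE, encrypted);
        System.out.println(new String(decrypted, StandardCharsets.UTF_8)); // prints ringo:starr
    }
}
```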
diff --git a/camel-simple-language.md b/camel-simple-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..a65be04e42b0fd6f682ebc3f3e61c35861848b7d
--- /dev/null
+++ b/camel-simple-language.md
@@ -0,0 +1,1290 @@
+# Simple-language.md
+
+**Since Camel 1.1**
+
+The Simple Expression Language was a really simple language when it was
+created, but has since grown more powerful. It is primarily intended as
+a very small and simple language for evaluating `Expression` or
+`Predicate` instances without requiring any new dependencies or
+knowledge of other scripting languages such as Groovy.
+
+The simple language is designed to cover almost all the common use cases
+when there is little need for scripting in your Camel routes.
+
+However, for much more complex use cases, a more powerful language is
+recommended, such as:
+
+- [Groovy](#groovy-language.adoc)
+
+- [MVEL](#mvel-language.adoc)
+
+- [OGNL](#ognl-language.adoc)
+
+The simple language requires the `camel-bean` JAR as a classpath
+dependency if the simple language uses OGNL expressions, such as calling
+a method named `myMethod` on the message body: `${body.myMethod()}`. At
+runtime, the simple language will then use its built-in OGNL support,
+which requires the `camel-bean` component.
+
+The simple language uses `${body}` placeholders for complex expressions
+or functions.
+
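To make the `${...}` placeholder idea concrete, here is a toy interpolator (a hypothetical sketch using plain `java.util.regex`, not Camel's actual parser) that replaces `${name}` tokens from a map of values:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ToySimple {

    static final Pattern TOKEN = Pattern.compile("\\$\\{([^}]+)\\}");

    // Resolve each ${name} token against a map of known values;
    // unknown names resolve to the empty string
    static String interpolate(String template, Map<String, String> values) {
        Matcher m = TOKEN.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String replacement = values.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(interpolate("Hello ${body}", Map.of("body", "World"))); // prints Hello World
    }
}
```

Camel's real evaluation resolves far more than a static map (headers, OGNL, functions), but the token-substitution shape is the same.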
+See also the [CSimple](#csimple-language.adoc) language which is
+**compiled**.
+
+**Alternative syntax**
+
+You can also use the alternative syntax, which uses `$simple{ }` as
+placeholders. This can be used to avoid clashes when using, for example,
+Spring property placeholders together with Camel.
+
+# Simple Language options
+
+# Variables
+
+
+
+
+
+
+
+
+
+
+
+
+camelId
+String
+the CamelContext name
+
+
+camelContext.OGNL
+Object
+the CamelContext invoked using a Camel
+OGNL expression.
+
+
+exchange
+Exchange
+the Exchange
+
+
+exchange.OGNL
+Object
+the Exchange invoked using a Camel OGNL
+expression.
+
+
+exchangeId
+String
+the exchange id
+
+
+id
+String
+the message id
+
+
+messageTimestamp
+long
+the message timestamp (millis since
+epoc) that this message originates from. Some systems like JMS, Kafka,
+AWS have a timestamp on the event/message that Camel received. This
+method returns the timestamp if a timestamp exists. The message
+timestamp and exchange created are different. An exchange always has a
+created timestamp which is the local timestamp when Camel created the
+exchange. The message timestamp is only available in some Camel
+components when the consumer is able to extract the timestamp from the
+source event. If the message has no timestamp, then 0 is
+returned.
+
+
+body
+Object
+the body
+
+
+body.OGNL
+Object
+the body invoked using a Camel OGNL
+expression.
+
+
+bodyAs(type )
+Type
+Converts the body to the given type
+determined by its classname. The converted body can be null.
+
+
+bodyAs(type ).OGNL
+Object
+Converts the body to the given type
+determined by its classname and then invoke methods using a Camel OGNL
+expression. The converted body can be null.
+
+
+bodyOneLine
+String
+Converts the body to a String and
+removes all line-breaks, so the string is in one line.
+
+
+prettyBody
+String
+Converts the body to a String, and
+attempts to pretty print if JSon or XML; otherwise the body is returned
+as the String value.
+
+
+originalBody
+Object
+The original incoming body (only
+available if allowUseOriginalMessage=true).
+
+
+mandatoryBodyAs(type )
+Type
+Converts the body to the given type
+determined by its classname, and expects the body to be not
+null.
+
+
+mandatoryBodyAs(type ).OGNL
+Object
+Converts the body to the given type
+determined by its classname and then invoke methods using a Camel OGNL
+expression.
+
+
+header.foo
+Object
+refer to the foo header
+
+
+header[foo]
+Object
+refer to the foo header
+
+
+headers.foo
+Object
+refer to the foo header
+
+
+headers:foo
+Object
+refer to the foo header
+
+
+headers[foo]
+Object
+refer to the foo header
+
+
+header.foo[bar]
+Object
+regard foo header as a map and perform
+lookup on the map with bar as the key
+
+
+header.foo.OGNL
+Object
+refer to the foo header and invoke its
+value using a Camel OGNL expression.
+
+
+headerAs(key ,type )
+Type
+converts the header to the given type
+determined by its classname
+
+
+headers
+Map
+refer to the headers
+
+
+variable.foo
+Object
+refer to the foo variable
+
+
+| Expression | Type | Description |
+|---|---|---|
+| `variable[foo]` | Object | Refer to the foo variable. |
+| `variable.foo.OGNL` | Object | Refer to the foo variable and invoke its value using a Camel OGNL expression. |
+| `variableAs(key,type)` | Type | Converts the variable to the given type determined by its classname. |
+| `variables` | Map | Refer to the variables. |
+| `exchangeProperty.foo` | Object | Refer to the foo property on the exchange. |
+| `exchangeProperty[foo]` | Object | Refer to the foo property on the exchange. |
+| `exchangeProperty.foo.OGNL` | Object | Refer to the foo property on the exchange and invoke its value using a Camel OGNL expression. |
+| `messageAs(type)` | Type | Converts the message to the given type determined by its classname. The converted message can be null. |
+| `messageAs(type).OGNL` | Object | Converts the message to the given type determined by its classname and then invokes methods using a Camel OGNL expression. The converted message can be null. |
+| `sys.foo` | String | Refer to the JVM system property. |
+| `sysenv.foo` | String | Refer to the system environment variable. |
+| `env.foo` | String | Refer to the system environment variable. |
+| `exception` | Object | Refer to the exception object on the exchange; is null if no exception is set on the exchange. Will fall back and grab caught exceptions (`Exchange.EXCEPTION_CAUGHT`) if the Exchange has any. |
+| `exception.OGNL` | Object | Refer to the exchange exception invoked using a Camel OGNL expression object. |
+| `exception.message` | String | Refer to the exception.message on the exchange; is null if no exception is set on the exchange. Will fall back and grab caught exceptions (`Exchange.EXCEPTION_CAUGHT`) if the Exchange has any. |
+| `exception.stacktrace` | String | Refer to the exception.stacktrace on the exchange; is null if no exception is set on the exchange. Will fall back and grab caught exceptions (`Exchange.EXCEPTION_CAUGHT`) if the Exchange has any. |
+| `date:command` | Date | Evaluates to a Date object. Supported commands are: `now` for the current timestamp, `exchangeCreated` for the timestamp when the current exchange was created, `header.xxx` to use the Long/Date object in the header with the key xxx, `variable.xxx` to use the Long/Date in the variable with the key xxx, `exchangeProperty.xxx` to use the Long/Date object in the exchange property with the key xxx, and `file` for the last modified timestamp of the file (available with a File consumer). The command accepts offsets such as: `now-24h`, `header.xxx+1h`, or even `now+1h30m-100`. |
+| `date:command:pattern` | String | Date formatting using `java.text.SimpleDateFormat` patterns. |
+| `date-with-timezone:command:timezone:pattern` | String | Date formatting using `java.text.SimpleDateFormat` timezones and patterns. |
+| `bean:bean expression` | Object | Invoking a bean expression using the Bean language. When specifying a method name, you must use dot as the separator. The `?method=methodname` syntax used by the Bean component is also supported. By default, Camel looks up a bean by the given name. However, if you need to refer to a bean class (such as calling a static method), then you can prefix with the type, such as `bean:type:fqnClassName`. |
+| `properties:key:default` | String | Look up a property with the given key. If the key does not exist or has no value, then an optional default value can be specified. |
+| `propertiesExist:key` | boolean | Checks whether a property placeholder with the given key exists or not. The result can be negated by prefixing the key with `!`. |
+| `fromRouteId` | String | Returns the original route id where this exchange was created. |
+| `routeId` | String | Returns the route id of the current route the Exchange is being routed. |
+| `routeGroup` | String | Returns the route group of the current route the Exchange is being routed. Not all routes have a group assigned, so this may be null. |
+| `stepId` | String | Returns the id of the current step the Exchange is being routed. |
+| `threadId` | String | Returns the id of the current thread. Can be used for logging. |
+| `threadName` | String | Returns the name of the current thread. Can be used for logging. |
+| `hostname` | String | Returns the local hostname (may be empty if not possible to resolve). |
+| `ref:xxx` | Object | To look up a bean from the Registry with the given id. |
+| `type:name.field` | Object | To refer to a type or field by its FQN name. To refer to a field, you can append `.FIELD_NAME`. For example, you can refer to the constant field from Exchange as: `org.apache.camel.Exchange.FILE_NAME` |
+| `empty(type)` | depends on parameter | Creates a new empty object of the type given as parameter. The type parameter strings are case-insensitive: string → empty String, list → empty ArrayList, map → empty HashMap. |
+| `null` | null | Represents a null. |
+| `random(value)` | Integer | Returns a random Integer between 0 (included) and value (excluded). |
+| `random(min,max)` | Integer | Returns a random Integer between min (included) and max (excluded). |
+| `replace(from,to)` | String | Replace all the string values in the message body. To make it easier to replace single and double quotes, you can use the XML escaped values `&quot;` as double quote, `&apos;` as single quote, and `&empty;` as empty value. |
+| `replace(from,to,exp)` | String | Replace all the string values in the given expression. To make it easier to replace single and double quotes, you can use the XML escaped values `&quot;` as double quote, `&apos;` as single quote, and `&empty;` as empty value. |
+| `substring(num1)` | String | Returns a substring of the message body. If the number is positive, the returned string is clipped from the beginning; if negative, it is clipped from the ending. |
+| `substring(num1,num2)` | String | Returns a substring of the message body. If the number is positive, the returned string is clipped from the beginning; if negative, it is clipped from the ending. |
+| `substring(num1,num2,exp)` | String | Returns a substring of the given expression. If the number is positive, the returned string is clipped from the beginning; if negative, it is clipped from the ending. |
+| `collate(group)` | List | The collate function iterates the message body and groups the data into sub lists of a specified size. This can be used with the Splitter EIP to split a message body and group/batch the split sub messages into a group of N sub lists. This method works similar to the collate method in Groovy. |
+| `skip(number)` | Iterator | The skip function iterates the message body and skips the first number of items. This can be used with the Splitter EIP to split a message body and skip the first N number of items. |
+| `join(separator,prefix,exp)` | String | The join function iterates the message body (by default) and joins the data into a string. The separator is a comma by default; the prefix is optional. It is possible to refer to another source (simple language) such as a header via the exp parameter, for example `join('&','id=','${header.ids}')`. |
+| `messageHistory` | String | The message history of the current exchange: how it has been routed. This is similar to the route stack-trace message history the error handler logs in case of an unhandled exception. |
+| `messageHistory(false)` | String | As messageHistory but without the exchange details (only includes the route stack-trace). This can be used if you do not want to log sensitive data from the message itself. |
+| `uuid(type)` | String | Returns a UUID using the Camel `UuidGenerator`. You can choose between `default`, `classic`, `short` and `simple` as the type. If no type is given, the default is used. It is also possible to use a custom `UuidGenerator` bound to the Registry with an id, for example `${uuid(myGenerator)}` where the id is `myGenerator`. |
+| `hash(exp,algorithm)` | String | Returns a hashed value (string in hex decimal) using JDK MessageDigest. The algorithm can be SHA-256 (default) or SHA3-256. |
+| `jsonpath(exp)` | Object | When working with JSon data, this allows using the JsonPath language, for example, to extract data from the message body (in JSon format). This requires having the camel-jsonpath JAR on the classpath. |
+| `jsonpath(input,exp)` | Object | As `jsonpath(exp)`, but for input you can choose `header:key`, `exchangeProperty:key` or `variable:key` to use as input for the JSon payload instead of the message body. |
+| `jq(exp)` | Object | When working with JSon data, this allows using the JQ language, for example, to extract data from the message body (in JSon format). This requires having the camel-jq JAR on the classpath. |
+| `jq(input,exp)` | Object | As `jq(exp)`, but for input you can choose `header:key`, `exchangeProperty:key` or `variable:key` to use as input for the JSon payload instead of the message body. |
+| `xpath(exp)` | Object | When working with XML data, this allows using the XPath language, for example, to extract data from the message body (in XML format). This requires having the camel-xpath JAR on the classpath. |
+| `xpath(input,exp)` | Object | As `xpath(exp)`, but for input you can choose `header:key`, `exchangeProperty:key` or `variable:key` to use as input for the XML payload instead of the message body. |
+| `pretty(exp)` | String | Converts the inlined expression to a String, and attempts to pretty print if JSon or XML; otherwise the expression is returned as the String value. |
+| `iif(predicate,trueExp,falseExp)` | Object | Evaluates the predicate expression and returns the value of trueExp if the predicate is true, otherwise the value of falseExp is returned. This function is similar to the ternary operator in Java. |
+
+
+
+
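
The `hash` function listed above is described as delegating to JDK `MessageDigest` with a hex-encoded result. As a rough plain-JDK sketch of that computation (an illustration, not Camel's actual implementation), assuming UTF-8 bytes:

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

// Digest the bytes with MessageDigest and render each byte as two hex chars,
// the same shape of result that ${hash(exp)} is documented to produce.
String hashHex(String value, String algorithm) throws Exception {
    MessageDigest md = MessageDigest.getInstance(algorithm);
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest(value.getBytes(StandardCharsets.UTF_8))) {
        hex.append(String.format("%02x", b));
    }
    return hex.toString();
}
```

The `SHA3-256` variant works the same way with a different algorithm name.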
+# OGNL expression support
+
+When using **OGNL**, the `camel-bean` JAR is required to be on the
+classpath.
+
+Camel’s OGNL support is for invoking methods only. You cannot access
+fields, although Camel does support accessing the length field of Java arrays.
+
+The [Simple](#simple-language.adoc) and [Bean](#bean-language.adoc)
+languages now support a Camel OGNL notation for invoking beans in a
+chain like fashion. Suppose the Message IN body contains a POJO which
+has a `getAddress()` method.
+
+Then you can use Camel OGNL notation to access the address object:
+
+ simple("${body.address}")
+ simple("${body.address.street}")
+ simple("${body.address.zip}")
+
+Camel understands the shorthand names for getters, but you can invoke
+any method or use the real name such as:
+
+ simple("${body.address}")
+ simple("${body.getAddress.getStreet}")
+ simple("${body.address.getZip}")
+ simple("${body.doSomething}")
+
+You can also use the null safe operator (`?.`) to avoid an NPE if, for
+example, the body does NOT have an address:
+
+ simple("${body?.address?.street}")
+
+It is also possible to index in `Map` or `List` types, so you can do:
+
+ simple("${body[foo].name}")
+
+This assumes the body is `Map` based, looks up the value with `foo` as
+key, and invokes the `getName` method on that value.
+
+If the key has a space, then you **must** enclose the key with quotes,
+for example, *foo bar*:
+
+ simple("${body['foo bar'].name}")
+
+You can access the `Map` or `List` objects directly using their key name
+(with or without dots):
+
+ simple("${body[foo]}")
+ simple("${body[this.is.foo]}")
+
+Suppose there is no value with the key `foo`; then you can use the null
+safe operator to avoid the NPE as shown:
+
+ simple("${body[foo]?.name}")
+
+You can also access `List` types, for example, to get lines from the
+address you can do:
+
+ simple("${body.address.lines[0]}")
+ simple("${body.address.lines[1]}")
+ simple("${body.address.lines[2]}")
+
+There is a special `last` keyword which can be used to get the last
+value from a list.
+
+ simple("${body.address.lines[last]}")
+
+And to get the second last, you can subtract a number, so we can use
+`last-1` to indicate this:
+
+ simple("${body.address.lines[last-1]}")
+
+And the third last is, of course:
+
+ simple("${body.address.lines[last-2]}")
+
+And you can call the size method on the list with
+
+ simple("${body.address.lines.size}")
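
For illustration, what the `[last]`, `[last-1]` and `size` expressions above resolve to can be sketched with a plain `java.util.List` (the `line1`..`line3` values are made-up samples, not from the Camel docs):

```java
import java.util.List;

List<String> lines = List.of("line1", "line2", "line3");

// ${body.address.lines[last]} resolves to the final element...
String last = lines.get(lines.size() - 1);

// ...and [last-1] to the one before it.
String secondLast = lines.get(lines.size() - 2);

// ${body.address.lines.size} is plain List.size()
int size = lines.size();
```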
+
+Camel supports the length field for Java arrays as well, e.g.:
+
+ String[] lines = new String[]{"foo", "bar", "cat"};
+ exchange.getIn().setBody(lines);
+
+ simple("There are ${body.length} lines")
+
+And yes, you can combine this with the operator support as shown below:
+
+ simple("${body.address.zip} > 1000")
+
+# Operator support
+
+The parser is limited to only support a single operator.
+
+To enable it, the left value must be enclosed in `${ }`. The syntax is:
+
+ ${leftValue} OP rightValue
+
+Where the `rightValue` can be a String literal enclosed in `' '`,
+`null`, a constant value or another expression enclosed in `${ }`.
+
+There **must** be spaces around the operator.
+
+Camel will automatically type convert the rightValue to the
+leftValue type, so it can, for example, convert a string into a numeric
+value, allowing you to use `>` comparison for numeric values.
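
As a simplified sketch of that conversion step (not Camel's actual type-converter machinery):

```java
// The left-hand type wins: suppose ${header.bar} holds an Integer while the
// literal from the expression arrives as the String "100". The right value
// is converted to Integer before the comparison is made.
Object left = 100;          // e.g. value of ${header.bar}
String right = "100";       // the literal from the expression

boolean equal;
if (left instanceof Integer) {
    equal = left.equals(Integer.valueOf(right));   // compare as Integers
} else {
    equal = String.valueOf(left).equals(right);    // fall back to String compare
}
```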
+
+The following operators are supported:
+
+| Operator | Description |
+|---|---|
+| `==` | equals |
+| `=~` | equals ignore case (will ignore case when comparing String values) |
+| `>` | greater than |
+| `>=` | greater than or equals |
+| `<` | less than |
+| `<=` | less than or equals |
+| `!=` | not equals |
+| `!=~` | not equals ignore case (will ignore case when comparing String values) |
+| `contains` | For testing if a string-based value contains |
+| `!contains` | For testing if a string-based value does not contain |
+| `~~` | For testing if contains, ignoring case sensitivity, in a string-based value |
+| `!~~` | For testing if it does not contain, ignoring case sensitivity, in a string-based value |
+| `regex` | For matching against a given regular expression pattern defined as a String value |
+| `!regex` | For not matching against a given regular expression pattern defined as a String value |
+| `in` | For matching if in a set of values; each element must be separated by comma. If you want to include an empty value, then it must be defined using double comma, e.g. `',,bronze,silver,gold'`, which is a set of four values with an empty value and then the three medals. |
+| `!in` | For matching if not in a set of values; each element must be separated by comma. If you want to include an empty value, then it must be defined using double comma, e.g. `',,bronze,silver,gold'`, which is a set of four values with an empty value and then the three medals. |
+| `is` | For matching if the left-hand side type is an instance of the value. |
+| `!is` | For matching if the left-hand side type is not an instance of the value. |
+| `range` | For matching if the left-hand side is within a range of values defined as numbers: `from..to` |
+| `!range` | For matching if the left-hand side is not within a range of values defined as numbers: `from..to` |
+| `startsWith` | For testing if the left-hand side string starts with the right-hand string. |
+| `starts with` | Same as the `startsWith` operator. |
+| `endsWith` | For testing if the left-hand side string ends with the right-hand string. |
+| `ends with` | Same as the `endsWith` operator. |
+
+
+
+
+And the following unary operators can be used:
+
+| Operator | Description |
+|---|---|
+| `++` | To increment a number by one. The left-hand side must be a function, otherwise parsed as literal. |
+| `--` | To decrement a number by one. The left-hand side must be a function, otherwise parsed as literal. |
+| `\n` | To use newline character. |
+| `\t` | To use tab character. |
+| `\r` | To use carriage return character. |
+| `}` | To use the `}` character as text. This may be needed when building a JSon structure with the simple language. |
+
+
+
+
+And the following logical operators can be used to group expressions:
+
+| Operator | Description |
+|---|---|
+| `&&` | The logical and operator is used to group two expressions. |
+| `\|\|` | The logical or operator is used to group two expressions. |
+
+
+
+
+The syntax for AND is:
+
+ ${leftValue} OP rightValue && ${leftValue} OP rightValue
+
+And the syntax for OR is:
+
+ ${leftValue} OP rightValue || ${leftValue} OP rightValue
+
+Some examples:
+
+ // exact equals match
+ simple("${header.foo} == 'foo'")
+
+ // ignore case when comparing, so if the header has value FOO, this will match
+ simple("${header.foo} =~ 'foo'")
+
+    // here Camel will type convert '100' into the type of header.bar and if it is an Integer '100' will also be converted to an Integer
+ simple("${header.bar} == '100'")
+
+ simple("${header.bar} == 100")
+
+    // 100 will be converted to the type of header.bar, so we can do > comparison
+ simple("${header.bar} > 100")
+
+ // if the value of header.bar was 100, value returned will be 101. header.bar itself will not be changed.
+ simple("${header.bar}++")
+
+## Comparing with different types
+
+When you compare with different types such as String and int, then you
+have to take a bit of care. Camel will use the type from the left-hand
+side as first priority, and fall back to the right-hand side type if the
+values could not be compared based on that type.
+This means you can flip the values to enforce a specific type. Suppose
+the bar value above is a String. Then you can flip the equation:
+
+ simple("100 < ${header.bar}")
+
+which then ensures the int type is used as first priority.
+
+This may change in the future if the Camel team improves the binary
+comparison operations to prefer numeric types over String-based ones. It is
+most often the String type which causes problems when comparing with numbers.
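
To see why the flip matters, here is the difference between the two comparisons in plain Java:

```java
// Comparing as Strings: "99" sorts after "100" lexicographically
// ('9' > '1' on the first character), which is rarely intended for numbers.
boolean lexicographic = "99".compareTo("100") > 0;

// Comparing as ints after conversion gives the expected numeric ordering.
boolean numeric = Integer.parseInt("99") > Integer.parseInt("100");
```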
+
+ // testing for null
+ simple("${header.baz} == null")
+
+ // testing for not null
+ simple("${header.baz} != null")
+
+And a bit more advanced example where the right value is another
+expression
+
+ simple("${header.date} == ${date:now:yyyyMMdd}")
+
+ simple("${header.type} == ${bean:orderService?method=getOrderType}")
+
+And an example with `contains`, testing if the title contains the word
+Camel
+
+ simple("${header.title} contains 'Camel'")
+
+And an example with regex, testing if the number header is a 4-digit
+value:
+
+ simple("${header.number} regex '\\d{4}'")
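
The `regex` operator behaves like `java.util.regex` whole-string matching, which can be sketched with `String.matches`:

```java
// ${header.number} regex '\d{4}' matches the entire value against the pattern.
boolean fourDigits = "1234".matches("\\d{4}");

// matches() anchors the whole string, so a longer value does not match...
boolean tooLong = "12345".matches("\\d{4}");

// ...and neither does a value containing a non-digit.
boolean nonDigit = "12a4".matches("\\d{4}");
```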
+
+And finally, an example testing if the header equals any of the values in
+the list. Each element must be separated by a comma, with no spaces around.
+This also works for numbers etc., as Camel will convert each element
+into the type of the left-hand side.
+
+ simple("${header.type} in 'gold,silver'")
+
+And for the last three examples, we also support negating the test using `!`:
+
+ simple("${header.type} !in 'gold,silver'")
+
+And you can test if the type is a certain instance, e.g., for instance a
+String
+
+ simple("${header.type} is 'java.lang.String'")
+
+We have added a shorthand for all `java.lang` types, so you can write it
+as:
+
+ simple("${header.type} is 'String'")
+
+Ranges are also supported. The range interval requires numbers, and both
+from and to are inclusive. For instance, to test whether a value is
+between 100 and 199:
+
+ simple("${header.number} range 100..199")
+
+Notice we use `..` in the range without spaces. It is based on the same
+syntax as Groovy.
+
+ simple("${header.number} range '100..199'")
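
A sketch in plain Java of the inclusive check that the `range` operator performs (the parsing shown is illustrative, not Camel's parser):

```java
// A from..to range is inclusive at both ends.
String spec = "100..199";
int from = Integer.parseInt(spec.substring(0, spec.indexOf("..")));
int to = Integer.parseInt(spec.substring(spec.indexOf("..") + 2));

boolean in100 = 100 >= from && 100 <= to;  // lower boundary is included
boolean in199 = 199 >= from && 199 <= to;  // upper boundary is included
boolean in200 = 200 >= from && 200 <= to;  // just outside the range
```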
+
+As the XML DSL does not have all the power of the Java DSL with all its
+various builder methods, you have to resort to using other languages
+for testing with simple operators. Now you can do this with the simple
+language. In the sample below, we want to test if the header is a
+widget order:
+
+    <simple>${header.type} == 'widget'</simple>
+
+
+
+
+## Using and / or
+
+If you have two expressions you can combine them with the `&&` or `||`
+operator.
+
+For instance:
+
+    simple("${header.title} contains 'Camel' && ${header.type} == 'gold'")
+
+And of course the `||` is also supported. The sample would be:
+
+    simple("${header.title} contains 'Camel' || ${header.type} == 'gold'")
+
+# Examples
+
+In the XML DSL sample below, we filter based on a header value:
+
+
+    <route>
+      <from uri="seda:orders"/>
+      <filter>
+        <simple>${header.foo}</simple>
+        <to uri="mock:fooOrders"/>
+      </filter>
+    </route>
+
+
+
+
+The Simple language can be used for the predicate test above in the
+Message Filter pattern, where we test if the in message has a `foo`
+header (a header with the key `foo` exists). If the expression evaluates
+to `true`, then the message is routed to the `mock:fooOrders`
+endpoint, otherwise the message is dropped.
+
+The same example in Java DSL:
+
+ from("seda:orders")
+ .filter().simple("${header.foo}")
+ .to("seda:fooOrders");
+
+You can also use the simple language for simple text concatenations such
+as:
+
+ from("direct:hello")
+ .transform().simple("Hello ${header.user} how are you?")
+ .to("mock:reply");
+
+Notice that we must use `${ }` placeholders in the expression now to
+allow Camel to parse it correctly.
+
+And this sample uses the date command to output the current date.
+
+    from("direct:hello")
+    .transform().simple("Today is ${date:now:yyyyMMdd} and it is a great day.")
+ .to("mock:reply");
+
+And in the sample below, we invoke the bean language to invoke a method
+on a bean to be included in the returned string:
+
+ from("direct:order")
+ .transform().simple("OrderId: ${bean:orderIdGenerator}")
+ .to("mock:reply");
+
+Where `orderIdGenerator` is the id of the bean registered in the
+Registry. If using Spring, then it is the Spring bean id.
+
+If we want to declare which method to invoke on the order id generator
+bean we must prepend `.method name` such as below where we invoke the
+`generateId` method.
+
+ from("direct:order")
+ .transform().simple("OrderId: ${bean:orderIdGenerator.generateId}")
+ .to("mock:reply");
+
+We can use the `?method=methodname` option that we are familiar with from
+the [Bean](#components::bean-component.adoc) component itself:
+
+ from("direct:order")
+ .transform().simple("OrderId: ${bean:orderIdGenerator?method=generateId}")
+ .to("mock:reply");
+
+You can also convert the body to a given type, for example, to ensure
+that it is a String you can do:
+
+
+ Hello ${bodyAs(String)} how are you?
+
+
+There are a few types which have a shorthand notation, so we can use
+`String` instead of `java.lang.String`. These are:
+`byte[], String, Integer, Long`. All other types must use their FQN
+name, e.g. `org.w3c.dom.Document`.
+
+It is also possible to look up a value from a header `Map`:
+
+
+ The gold value is ${header.type[gold]}
+
+
+In the code above we look up the header with name `type` and regard it
+as a `java.util.Map` and we then look up with the key `gold` and return
+the value. If the header is not convertible to Map, an exception is
+thrown. If the header with name `type` does not exist `null` is
+returned.
+
+You can nest functions, such as shown below:
+
+
+ ${properties:${header.someKey}}
+
+
+## Substring
+
+You can use the `substring` function to more easily clip the message
+body. For example if the message body contains the following 10 letters
+`ABCDEFGHIJ` then:
+
+
+ ${substring(3)}
+
+
+Then the message body after the substring will be `DEFGHIJ`. If you want
+to clip from the end instead, then use negative values such as
+`substring(-3)`.
+
+You can also clip from both ends at the same time such as
+`substring(1,-1)` that will clip the first and last character in the
+String.
+
+If the number is higher than the length of the message body, then an
+empty string is returned, for example `substring(99)`.
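
The clipping rules above can be sketched with plain `String.substring` (an illustration of the described semantics, not Camel's implementation):

```java
// Positive numbers clip from the beginning, negative numbers clip from the
// end, and out-of-range numbers yield an empty string.
String clip(String body, int num) {
    if (num >= 0) {
        return body.substring(Math.min(num, body.length()));
    }
    return body.substring(0, Math.max(body.length() + num, 0));
}
```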
+
+Instead of the message body, a simple expression can be nested as
+input, for example, using a variable, as shown below:
+
+
+ ${substring(1,-1,${variable.foo})}
+
+
+## Replacing double and single quotes
+
+You can use the `replace` function to more easily replace all single or
+double quotes in the message body, using the XML escape syntax. This
+avoids fiddling with enclosing double or single quotes in outer quotes,
+which can be hard to get right as you may need to escape the quotes as
+well. Instead, you can use the XML escape syntax, where double quote is
+`&quot;` and single quote is `&apos;` (yes, that
+is the name).
+
+For example, to replace all double quotes with single quotes:
+
+ from("direct:order")
+    .transform().simple("${replace(&quot;,&apos;)}")
+ .to("mock:reply");
+
+And to replace all single quotes with double quotes:
+
+
+    ${replace(&apos;,&quot;)}
+
+
+Or to remove all double quotes:
+
+
+    ${replace(&quot;,&empty;)}
+
+
+# Setting the result type
+
+You can now provide a result type to the [Simple](#simple-language.adoc)
+expression, which means the result of the evaluation will be converted
+to the desired type. This is most useful for defining types such as
+booleans, integers, etc.
+
+For example, to set a header as a boolean type, you can do:
+
+ .setHeader("cool", simple("true", Boolean.class))
+
+And in XML DSL
+
+
+    <setHeader name="cool">
+      <simple resultType="java.lang.Boolean">true</simple>
+    </setHeader>
+
+
+# Using new lines or tabs in XML DSLs
+
+It is easier to specify new lines or tabs in XML DSLs, as you can now
+escape the value:
+
+
+    <transform>
+      <simple>The following text\nis on a new line</simple>
+    </transform>
+
+
+# Leading and trailing whitespace handling
+
+The trim attribute of the expression can be used to control whether the
+leading and trailing whitespace characters are removed or preserved. The
+default value is true, which removes the whitespace characters.
+
+
+    <transform>
+      <simple trim="false">You get some trailing whitespace characters.   </simple>
+    </transform>
+
+
+# Loading script from external resource
+
+You can externalize the script and have Camel load it from a resource
+such as `"classpath:"`, `"file:"`, or `"http:"`. This is done using the
+following syntax: `"resource:scheme:location"`, e.g., to refer to a file
+on the classpath you can do:
+
+ .setHeader("myHeader").simple("resource:classpath:mysimple.txt")
diff --git a/camel-sjms.md b/camel-sjms.md
index 81034af6f4527445c1491fe8ec03dc4ff2f361a4..993723c70bf91de3b4b8a0fa8c77ab83a75a44ad 100644
--- a/camel-sjms.md
+++ b/camel-sjms.md
@@ -7,23 +7,6 @@
The Simple JMS Component is a JMS component that only uses JMS APIs and
no third-party framework such as Spring JMS.
-The component was reworked from Camel 3.8 onwards to be similar to the
-existing Camel JMS component that is based on Spring JMS.
-
-The reason is to offer many of the same features and functionality from
-the JMS component, but for users that require lightweight without having
-to include the Spring Framework.
-
-There are some advanced features in the Spring JMS component that has
-been omitted, such as shared queues for request/reply. Spring JMS offers
-fine-grained tunings for concurrency settings, which can be tweaked for
-dynamic scaling up and down depending on load. This is a special feature
-in Spring JMS that would require substantial code to implement in SJMS.
-
-The SJMS component does not support for Spring or JTA Transaction,
-however, support for internal local transactions is supported using JMS
-or Transaction or Client Acknowledge Mode. See further details below.
-
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -53,7 +36,26 @@ example, to connect to the topic, `Stocks.Prices`, use:
sjms:topic:Stocks.Prices
-# Reuse endpoint and send to different destinations computed at runtime
+# Usage
+
+The component was reworked from Camel 3.8 onwards to be similar to the
+existing Camel JMS component that is based on Spring JMS.
+
+The reason is to offer many of the same features and functionality from
+the JMS component, but for users that require a lightweight solution
+without having to include the Spring Framework.
+
+There are some advanced features in the Spring JMS component that have
+been omitted, such as shared queues for request/reply. Spring JMS offers
+fine-grained tuning of concurrency settings, which can be tweaked for
+dynamic scaling up and down depending on load. This is a special feature
+in Spring JMS that would require substantial code to implement in SJMS.
+
+The SJMS component does not support Spring or JTA transactions; however,
+internal local transactions are supported using JMS Transacted or Client
+Acknowledge Mode. See further details below.
+
+## Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different JMS destinations, it
makes sense to reuse a SJMS endpoint and specify the real destination in
@@ -73,14 +75,14 @@ You can specify the destination in the following headers:
-
+
-
+
CamelJmsDestinationName
String
@@ -118,7 +120,7 @@ them to the created JMS message to avoid the accidental loops in the
routes (in scenarios when the message will be forwarded to another JMS
endpoint).
-# Using toD
+## Using toD
If you need to send messages to a lot of different JMS destinations, it
makes sense to reuse a SJMS endpoint and specify the dynamic
@@ -127,11 +129,11 @@ destinations with simple language using [toD](#eips:toD-eip.adoc).
For example, suppose you need to send messages to queues with order
types, then using toD could, for example, be done as follows:
+**Example SJMS route with `toD`**
+
from("direct:order")
.toD("sjms:order-${header.orderType}");
-# Additional Notes
-
## Local transactions
When using `transacted=true` then JMS Transacted Acknowledge Mode are in
@@ -145,6 +147,8 @@ rollback.
You can combine consumer and producer, such as:
+**Example transacted SJMS route with consumer and producer**
+
from("sjms:cheese?transacted=true")
.to("bean:foo")
.to("sjms:foo?transacted=true")
diff --git a/camel-sjms2.md b/camel-sjms2.md
index 8101bf9c3c016cf3e26f21bcd17acaa9eec47eda..0b5b76e09ea21ec758b7b7601c5158abc23ac4d5 100644
--- a/camel-sjms2.md
+++ b/camel-sjms2.md
@@ -7,23 +7,6 @@
The Simple JMS Component is a JMS component that only uses JMS APIs and
no third-party framework such as Spring JMS.
-The component was reworked from Camel 3.8 onwards to be similar to the
-existing Camel JMS component that is based on Spring JMS.
-
-The reason is to offer many of the same features and functionality from
-the JMS component, but for users that require lightweight without having
-to include the Spring Framework.
-
-There are some advanced features in the Spring JMS component that has
-been omitted, such as shared queues for request/reply. Spring JMS offers
-fine-grained tunings for concurrency settings, which can be tweaked for
-dynamic scaling up and down depending on load. This is a special feature
-in Spring JMS that would require substantial code to implement in SJMS2.
-
-The SJMS2 component does not support for Spring or JTA Transaction,
-however, support for internal local transactions is supported using JMS
-or Transaction or Client Acknowledge Mode. See further details below.
-
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -56,7 +39,26 @@ example, to connect to the topic, `Stocks.Prices`, use:
You append query options to the URI using the following format,
`?option=value&option=value&...`
-# Reuse endpoint and send to different destinations computed at runtime
+# Usage
+
+The component was reworked from Camel 3.8 onwards to be similar to the
+existing Camel JMS component that is based on Spring JMS.
+
+The reason is to offer many of the same features and functionality from
+the JMS component, but for users that require a lightweight solution
+without having to include the Spring Framework.
+
+There are some advanced features in the Spring JMS component that have
+been omitted, such as shared queues for request/reply. Spring JMS offers
+fine-grained tuning of concurrency settings, which can be tweaked for
+dynamic scaling up and down depending on load. This is a special feature
+in Spring JMS that would require substantial code to implement in SJMS2.
+
+The SJMS2 component does not support Spring or JTA transactions; however,
+internal local transactions are supported using JMS Transacted or Client
+Acknowledge Mode. See further details below.
+
+## Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different JMS destinations, it
makes sense to reuse a SJMS endpoint and specify the real destination in
@@ -76,14 +78,14 @@ You can specify the destination in the following headers:
-
+
-
+
CamelJmsDestinationName
String
@@ -121,7 +123,7 @@ them to the created JMS message to avoid the accidental loops in the
routes (in scenarios when the message will be forwarded to another JMS
endpoint).
-# Using toD
+## Using toD
If you need to send messages to a lot of different JMS destinations, it
makes sense to reuse a SJMS2 endpoint and specify the dynamic
@@ -130,11 +132,11 @@ destinations with simple language using [toD](#eips:toD-eip.adoc).
For example, suppose you need to send messages to queues with order
types, then using toD could, for example, be done as follows:
+**Example SJMS2 route with `toD`**
+
from("direct:order")
.toD("sjms2:order-${header.orderType}");
-# Additional Notes
-
## Local transactions
When using `transacted=true` then JMS Transacted Acknowledge Mode are in
@@ -148,6 +150,8 @@ rollback.
You can combine consumer and producer, such as:
+**Example transacted SJMS2 route with consumer and producer**
+
from("sjms2:cheese?transacted=true")
.to("bean:foo")
.to("sjms2:foo?transacted=true")
diff --git a/camel-slack.md b/camel-slack.md
index e19af969293d7585a68cf670f884280ba9802f8e..74588374fdf55c1aa3dd991acab12f534ce9dfce 100644
--- a/camel-slack.md
+++ b/camel-slack.md
@@ -7,10 +7,6 @@
The Slack component allows you to connect to an instance of
[Slack](http://www.slack.com/) and to send and receive the messages.
-To send a message contained in the message body, a pre-established
-[Slack incoming webhook](https://api.slack.com/incoming-webhooks) must
-be configured in Slack.
-
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -31,40 +27,26 @@ To send a direct message to a Slack user.
slack:@userID[?options]
-# Configuring in Spring XML
+# Usage
+
+To send a message contained in the message body, a pre-established
+[Slack incoming webhook](https://api.slack.com/incoming-webhooks) must
+be configured in Slack.
-The SlackComponent with XML must be configured as a Spring or Blueprint
-bean that contains the incoming webhook url or the app token for the
-integration as a parameter.
+## Configuring in Spring XML
+
+The SlackComponent with XML must be configured as a Spring bean that
+contains the incoming webhook url or the app token for the integration
+as a parameter.
-For Java, you can configure this using Java code.
-
-# Example
-
-A CamelContext with Blueprint could be as:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+For Java, you can configure this using Java code.
-# Producer
+## Producer
You can now use a token to send a message instead of the webhook URL.
@@ -108,7 +90,7 @@ For User tokens, you’ll need the following permissions:
- chat:write
-# Consumer
+## Consumer
You can also use a consumer for messages in a channel.
diff --git a/camel-smpp.md b/camel-smpp.md
index 541d9320b70b33156d0e296f57ae88941bbd4da0..c9c29ce901d6a145aa631707d383f1f2d8189f3d 100644
--- a/camel-smpp.md
+++ b/camel-smpp.md
@@ -186,7 +186,7 @@ Please refer to the [SMPP
specification](http://smsforum.net/SMPP_v3_4_Issue1_2.zip) for the
complete list of error codes and their meanings.
-# Samples
+# Examples
A route which sends an SMS using the Java DSL:
diff --git a/camel-snakeYaml-dataformat.md b/camel-snakeYaml-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..61337427d632d4349a6218b5fefa6cddb02c9be5
--- /dev/null
+++ b/camel-snakeYaml-dataformat.md
@@ -0,0 +1,108 @@
+# SnakeYaml-dataformat.md
+
+**Since Camel 2.17**
+
+YAML is a Data Format to marshal and unmarshal Java objects to and from
+[YAML](http://www.yaml.org/).
+
+For YAML to object marshalling, Camel provides integration with the
+following YAML library:
+
+- The [SnakeYAML](http://www.snakeyaml.org/) library
+
+Using this library requires adding the `camel-snakeyaml` component (see
+the "Dependencies for SnakeYAML" section further down). By default,
+Camel uses the SnakeYAML library.
+
+# YAML Options
+
+SnakeYAML can load any class from a YAML definition, which may lead to
+a security breach. By default, the SnakeYAML DataFormat therefore
+restricts the objects it can load to standard Java objects such as
+`List` or `Long`. If you want to load custom POJOs, you need to add
+their types to the SnakeYAML DataFormat type filter list. If your
+source is trusted, you can set the property `allowAnyType` to `true` so
+the SnakeYAML DataFormat won't perform any filtering of the types.
+
+# Using YAML data format with the SnakeYAML library
+
+- Turn Object messages into YAML and then send to MQSeries
+
+ from("activemq:My.Queue")
+ .marshal().yaml()
+ .to("mqseries:Another.Queue");
+
+ from("activemq:My.Queue")
+ .marshal().yaml(YAMLLibrary.SnakeYAML)
+ .to("mqseries:Another.Queue");
+
+- Restrict classes to be loaded from YAML
+
+    // Create a SnakeYAMLDataFormat instance
+ SnakeYAMLDataFormat yaml = new SnakeYAMLDataFormat();
+
+ // Restrict classes to be loaded from YAML
+ yaml.addTypeFilters(TypeFilters.types(MyPojo.class, MyOtherPojo.class));
+
+ from("activemq:My.Queue")
+ .unmarshal(yaml)
+ .to("mqseries:Another.Queue");
+
+# Using YAML in Spring DSL
+
+When using a Data Format in Spring DSL, you need to declare the data
+formats first. This is done in the `<dataFormats>` XML tag.
+
+    <dataFormats>
+        <yaml id="yaml" library="SnakeYAML"/>
+    </dataFormats>
+
+And then you can refer to those ids in the route:
+
+    <route>
+        <from uri="activemq:My.Queue"/>
+        <unmarshal><custom ref="yaml"/></unmarshal>
+        <to uri="mqseries:Another.Queue"/>
+    </route>
+
+# Dependencies for SnakeYAML
+
+To use YAML in your camel routes, you need to add a dependency on
+`camel-snakeyaml` which implements this data format.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest release (see the download
+page for the latest versions).
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-snakeyaml</artifactId>
+        <version>${camel-version}</version>
+    </dependency>
+
diff --git a/camel-snmp.md b/camel-snmp.md
index db3c0a6499fa650b34f45de0b46336cb503aba5f..ac6197778b685a5b7564dd2694ce90901fcece26 100644
--- a/camel-snmp.md
+++ b/camel-snmp.md
@@ -30,7 +30,9 @@ It can also be used to request information using GET method.
The response body type is `org.apache.camel.component.snmp.SnmpMessage`.
-# The result of a poll
+# Usage
+
+## The result of a poll
Given the situation, that I poll for the following OIDs:
diff --git a/camel-soap-dataformat.md b/camel-soap-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8a00bd4cef5ab0f01c920fcc789fe596ad43e83
--- /dev/null
+++ b/camel-soap-dataformat.md
@@ -0,0 +1,204 @@
+# Soap-dataformat.md
+
+**Since Camel 2.3**
+
+SOAP is a Data Format which uses JAXB2 and JAX-WS annotations to marshal
+and unmarshal SOAP payloads. It provides the basic features of Apache
+CXF without the need for the CXF Stack.
+
+**Namespace prefix mapping**
+
+See [JAXB](#jaxb-dataformat.adoc) for details how you can control
+namespace prefix mappings when marshalling using SOAP data format.
+
+# SOAP Options
+
+# ElementNameStrategy
+
+An element name strategy is used for two purposes. The first is to find
+an XML element name for a given object and soap action when marshaling
+the object into a SOAP message. The second is to find an Exception class
+for a given soap fault name.
+
+|Strategy|Description|
+|---|---|
+|QNameStrategy|Uses a fixed qName that is configured on instantiation. Exception lookup is not supported.|
+|TypeNameStrategy|Uses the name and namespace from the `@XMLType` annotation of the given type. If no namespace is set, then `package-info` is used. Exception lookup is not supported.|
+|ServiceInterfaceStrategy|Uses information from a webservice interface to determine the type name and to find the exception class for a SOAP fault.|
+
+
+
+
+If you have generated the web service stub code with cxf-codegen or a
+similar tool, then you will probably want to use the
+`ServiceInterfaceStrategy`. If you have no annotated service interface,
+you should use `QNameStrategy` or `TypeNameStrategy`.
+
+# Using the Java DSL
+
+The following example uses a named `DataFormat` of *soap* which is
+configured with the package `com.example.customerservice` to initialize
+the
+[JAXBContext](http://java.sun.com/javase/6/docs/api/javax/xml/bind/JAXBContext.html).
+The second parameter is the `ElementNameStrategy`. The route is able to
+marshal normal objects as well as exceptions.
+
+The example below just sends a SOAP envelope to a queue. A web service
+provider would need to be listening to the queue for a SOAP call to
+actually occur, in which case it would be a one-way SOAP request. If you
+need to request a reply, then you should look at the next example.
+
+ SoapDataFormat soap = new SoapDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class));
+ from("direct:start")
+ .marshal(soap)
+ .to("jms:myQueue");
+
+**See also**
+
+As the SOAP dataformat inherits from the [JAXB](#jaxb-dataformat.adoc)
+dataformat, most settings apply here as well
+
+## Using SOAP 1.2
+
+**Since Camel 2.11**
+
+ SoapDataFormat soap = new SoapDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class));
+ soap.setVersion("1.2");
+ from("direct:start")
+ .marshal(soap)
+ .to("jms:myQueue");
+
+When using XML DSL, there is a `version` attribute you can set on the
+`<soap>` element.
+
+    <dataFormats>
+        <soap id="soap" contextPath="com.example.customerservice" version="1.2"/>
+    </dataFormats>
+
+And in the Camel route:
+
+    <route>
+        <from uri="direct:start"/>
+        <marshal><custom ref="soap"/></marshal>
+        <to uri="jms:myQueue"/>
+    </route>
+
+# Multi-part Messages
+
+**Since Camel 2.8.1**
+
+Multipart SOAP messages are supported by the `ServiceInterfaceStrategy`.
+The `ServiceInterfaceStrategy` must be initialized with a service
+interface definition that is annotated in accordance with JAX-WS 2.2 and
+meets the requirements of the Document Bare style. As per the JAX-WS
+specification, the target method must meet the following criteria:
+
+1. It must have at most one `in` or `in/out` non-header parameter.
+
+2. If it has a return type other than `void`, it must have no `in/out`
+   or `out` non-header parameters.
+
+3. If it has a return type of `void`, it must have at most one `in/out`
+   or `out` non-header parameter.
+
+The `ServiceInterfaceStrategy` should be initialized with a boolean
+parameter that indicates whether the mapping strategy applies to the
+request parameters or response parameters.
+
+ ServiceInterfaceStrategy strat = new ServiceInterfaceStrategy(com.example.customerservice.multipart.MultiPartCustomerService.class, true);
+ SoapDataFormat soapDataFormat = new SoapDataFormat("com.example.customerservice.multipart", strat);
+
+## Holder Object mapping
+
+JAX-WS specifies the use of a type-parameterized `javax.xml.ws.Holder`
+object for `In/Out` and `Out` parameters. You may use an instance of the
+parameterized-type directly. The camel-soap DataFormat marshals Holder
+values in accordance with the JAXB mapping for the class of the
+`Holder`'s value. No mapping is provided for `Holder` objects in
+an unmarshalled response.
+
+# Examples
+
+## Webservice client
+
+The following route supports marshalling the request and unmarshalling a
+response or fault.
+
+ String WS_URI = "cxf://http://myserver/customerservice?serviceClass=com.example.customerservice&dataFormat=RAW";
+ SoapDataFormat soapDF = new SoapDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class));
+ from("direct:customerServiceClient")
+ .onException(Exception.class)
+ .handled(true)
+ .unmarshal(soapDF)
+ .end()
+ .marshal(soapDF)
+ .to(WS_URI)
+ .unmarshal(soapDF);
+
+The below snippet creates a proxy for the service interface and makes a
+SOAP call to the above route.
+
+ import org.apache.camel.Endpoint;
+ import org.apache.camel.component.bean.ProxyHelper;
+ ...
+
+ Endpoint startEndpoint = context.getEndpoint("direct:customerServiceClient");
+ ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
+ // CustomerService below is the service endpoint interface, *not* the javax.xml.ws.Service subclass
+ CustomerService proxy = ProxyHelper.createProxy(startEndpoint, classLoader, CustomerService.class);
+ GetCustomersByNameResponse response = proxy.getCustomersByName(new GetCustomersByName());
+
+## Webservice Server
+
+The following route sets up a webservice server that consumes from the
+JMS queue `customerServiceQueue` and processes requests using the
+class `CustomerServiceImpl`. The `CustomerServiceImpl` should implement
+the interface `CustomerService`. Instead of directly instantiating the
+server class, it could be defined in a Spring context as a regular bean.
+
+ SoapDataFormat soapDF = new SoapDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class));
+ CustomerService serverBean = new CustomerServiceImpl();
+ from("jms://queue:customerServiceQueue")
+ .onException(Exception.class)
+ .handled(true)
+ .marshal(soapDF)
+ .end()
+ .unmarshal(soapDF)
+ .bean(serverBean)
+ .marshal(soapDF);
+
+# Dependencies
+
+To use the SOAP dataformat in your Camel routes, you need to add the
+following dependency to your `pom.xml`.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-soap</artifactId>
+        <version>x.y.z</version>
+    </dependency>
+
diff --git a/camel-solr.md b/camel-solr.md
index 210d4843d9c2727a8973c1dd0de86245640e1846..118210f46244b7180d12ff39bc8f4f040b578b4f 100644
--- a/camel-solr.md
+++ b/camel-solr.md
@@ -23,15 +23,17 @@ for this component:
solrs://host[:port]/solr?[options]
solrCloud://host[:port]/solr?[options]
-# Message Operations
+# Usage
-The following Solr operations are currently supported. Simply set an
-exchange header with a key of "SolrOperation" and a value set to one of
-the following. Some operations also require the message body to be set.
+## Message Operations
-- INSERT
+The following Solr operations are currently supported. Set an exchange
+header with a key of `SolrOperation` and a value set to one of the
+following. Some operations also require the message body to be set.
-- INSERT\_STREAMING
+- `INSERT`
+
+- `INSERT_STREAMING`
@@ -40,83 +42,88 @@ the following. Some operations also require the message body to be set.
-
+
-
-INSERT/INSERT_STREAMING
+
+INSERT/INSERT_STREAMING
n/a
adds an index using message headers
(must be prefixed with "SolrField.")
-
-INSERT/INSERT_STREAMING
+
+INSERT/INSERT_STREAMING
File
adds an index using the given File
-(using ContentStreamUpdateRequest)
+(using ContentStreamUpdateRequest)
-
-INSERT/INSERT_STREAMING
-SolrInputDocument
+
+INSERT/INSERT_STREAMING
+SolrInputDocument
updates index based on the given
-SolrInputDocument
+SolrInputDocument
-
-INSERT/INSERT_STREAMING
+
+INSERT/INSERT_STREAMING
String XML
updates index based on the given XML
-(must follow SolrInputDocument format)
+(must follow SolrInputDocument format)
-
-ADD_BEAN
+
+ADD_BEAN
bean instance
adds an index based on values in an annotated
bean
-
-ADD_BEANS
-collection<bean>
+
+ADD_BEANS
+collection<bean>
adds index based on a collection of annotated
bean
-
-DELETE_BY_ID
+
+DELETE_BY_ID
index id to delete
delete a record by ID
-
-DELETE_BY_QUERY
+
+DELETE_BY_QUERY
query string
delete a record by a query
-
-COMMIT
+
+COMMIT
n/a
performs a commit on any pending index
changes
-
-SOFT_COMMIT
+
+SOFT_COMMIT
n/a
performs a soft commit
(without guarantee that Lucene index files are written to stable
storage; useful for Near Real Time operations) on any pending index
changes
-
-ROLLBACK
+
+ROLLBACK
n/a
performs a rollback on any pending
index changes
-
-OPTIMIZE
+
+OPTIMIZE
n/a
performs a commit on any pending index
changes and then runs the optimize command (This command reorganizes the
@@ -127,7 +134,7 @@ Solr index and might be a heavy task)
# Example
-Below is a simple INSERT, DELETE and COMMIT example
+Below is a simple `INSERT`, `DELETE` and `COMMIT` example
from("direct:insert")
.setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_INSERT))
@@ -175,7 +182,7 @@ delete routes and then call the commit route.
template.sendBody("direct:delete", "1234");
template.sendBody("direct:commit", null);
-# Querying Solr
+## Querying Solr
The components provide a producer operation to query Solr.
@@ -184,6 +191,36 @@ For more information:
[Solr Query
Syntax](https://solr.apache.org/guide/solr/latest/query-guide/standard-query-parser.html)
-## Component ConfigurationsThere are no configurations for this component
-
-## Endpoint ConfigurationsThere are no configurations for this component
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|url|Hostname and port for the Solr server(s). Multiple hosts can be specified, separated with a comma. See the solrClient parameter for more information on the SolrClient used to connect to Solr.||string|
+|autoCommit|If true, each producer operation will be automatically followed by a commit|false|boolean|
+|connectionTimeout|Sets the connection timeout on the SolrClient||integer|
+|defaultMaxConnectionsPerHost|maxConnectionsPerHost on the underlying HttpConnectionManager||integer|
+|httpClient|Sets the http client to be used by the solrClient. This is only applicable when solrClient is not set.||object|
+|maxRetries|Maximum number of retries to attempt in the event of transient errors||integer|
+|maxTotalConnections|maxTotalConnection on the underlying HttpConnectionManager||integer|
+|requestHandler|Set the request handler to be used||string|
+|solrClient|Uses the provided solr client to connect to solr. When this parameter is not specified, camel applies the following rules to determine the SolrClient: 1) when zkHost or zkChroot (=zookeeper root) parameter is set, then the CloudSolrClient is used. 2) when multiple hosts are specified in the uri (separated with a comma), then the CloudSolrClient (uri scheme is 'solrCloud') or the LBHttpSolrClient (uri scheme is not 'solrCloud') is used. 3) when the solr operation is INSERT\_STREAMING, then the ConcurrentUpdateSolrClient is used. 4) otherwise, the HttpSolrClient is used. Note: A CloudSolrClient should point to zookeeper endpoint(s); other clients point to Solr endpoint(s). The SolrClient can also be set via the exchange header 'CamelSolrClient'.||object|
+|soTimeout|Sets the socket timeout on the SolrClient||integer|
+|streamingQueueSize|Sets the queue size for the ConcurrentUpdateSolrClient|10|integer|
+|streamingThreadCount|Sets the number of threads for the ConcurrentUpdateSolrClient|2|integer|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|collection|Set the default collection for SolrCloud||string|
+|zkChroot|Set the chroot of the zookeeper connection (include the leading slash; e.g. '/mychroot')||string|
+|zkHost|Set the ZooKeeper host(s) urls which the CloudSolrClient uses, e.g. zkHost=localhost:2181,localhost:2182. Optionally add the chroot, e.g. zkHost=localhost:2181,localhost:2182/rootformysolr. In case the first part of the url path (='contextroot') is set to 'solr' (e.g. 'localhost:2181/solr' or 'localhost:2181/solr/..'), then that path is not considered as zookeeper chroot for backward compatibility reasons (this behaviour can be overridden via zkChroot parameter).||string|
+|allowCompression|Server side must support gzip or deflate for this to have any effect||boolean|
+|followRedirects|Indicates whether redirects are used to get to the Solr server||boolean|
+|password|Sets password for basic auth plugin enabled servers||string|
+|username|Sets username for basic auth plugin enabled servers||string|
diff --git a/camel-sort-eip.md b/camel-sort-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..8c157a335599fd2e3f47b203282a181d39747d77
--- /dev/null
+++ b/camel-sort-eip.md
@@ -0,0 +1,71 @@
+# Sort-eip.md
+
+How can you sort the content of the message?
+
+
+
+
+
+Use a special filter, a [Message Translator](#message-translator.adoc),
+between other filters to sort the content of the message.
+
+# Options
+
+# Exchange properties
+
+# How sorting works
+
+Sort will by default sort the message body using a default `Comparator`
+that handles numeric values or uses the `String` representation.
+
+You can also configure a custom `Comparator` to control the sorting.
+
+An [Expression](#manual::expression.adoc) can also be used, which
+performs the sorting, and returns the sorted message body. The value
+returned from the `Expression` must be convertible to `java.util.List`
+as this is required by the JDK sort operation.
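To make this concrete, here is a minimal plain-Java sketch of what Sort EIP conceptually does (an illustration only, not Camel's actual implementation): the tokenized body becomes a `List`, which is then sorted with either the natural ordering or a supplied `Comparator`.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SortSketch {

    // Convert the (tokenized) body to a List and sort it with the
    // default ordering or a custom Comparator
    static List<String> sort(String body, Comparator<String> comparator) {
        List<String> lines = new ArrayList<>(Arrays.asList(body.split("\n")));
        lines.sort(comparator);
        return lines;
    }

    public static void main(String[] args) {
        String body = "banana\napple\ncherry";
        // default: natural ordering via the String representation
        System.out.println(sort(body, Comparator.naturalOrder()));
        // custom Comparator, e.g. a reverse-order comparator
        System.out.println(sort(body, Comparator.reverseOrder()));
    }
}
```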
+
+## Using Sort EIP
+
+Imagine you consume text files and before processing each file, you want
+to be sure the content is sorted.
+
+In the route below, it will read the file content and tokenize by line
+breaks so each line can be sorted.
+
+ from("file:inbox")
+ .sort(body().tokenize("\n"))
+ .to("bean:MyServiceBean.processLine");
+
+You can pass in your own comparator as a second argument:
+
+ from("file:inbox")
+ .sort(body().tokenize("\n"), new MyReverseComparator())
+ .to("bean:MyServiceBean.processLine");
+
+In the route below, it will read the file content and tokenize by line
+breaks so each line can be sorted. In XML, you use the
+[Tokenize](#components:languages:tokenize-language.adoc) language as
+shown:
+
+
+    <route>
+        <from uri="file:inbox"/>
+        <sort>
+            <tokenize token="\n"/>
+        </sort>
+        <to uri="bean:MyServiceBean.processLine"/>
+    </route>
+
+And to use our own `Comparator` we do as follows:
+
+
+    <route>
+        <from uri="file:inbox"/>
+        <sort comparator="#class:com.mycompany.MyReverseComparator">
+            <simple>${body}</simple>
+        </sort>
+        <to uri="bean:MyServiceBean.processLine"/>
+    </route>
+
+Notice how we use `${body}` in the example above to
+tell Sort EIP that it should use the message body for sorting. This is
+needed when you use a custom `Comparator`.
diff --git a/camel-spel-language.md b/camel-spel-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..00f938eb293c8df34e776a150b0c0f35c34f3da5
--- /dev/null
+++ b/camel-spel-language.md
@@ -0,0 +1,178 @@
+# Spel-language.md
+
+**Since Camel 2.7**
+
+Camel allows [Spring Expression Language
+(SpEL)](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#expressions)
+to be used as an Expression or Predicate in the DSL or XML
+Configuration.
+
+It is recommended to use SpEL in Spring runtimes. Although you can use
+SpEL in other runtimes, there is some functionality that SpEL can only
+do in a Spring runtime.
+
+# SpEL Options
+
+# Variables
+
+The following Camel related variables are made available:
+
+
+|Variable|Type|Description|
+|---|---|---|
+|this|Exchange|the Exchange is the root object|
+|context|CamelContext|the CamelContext|
+|exchange|Exchange|the Exchange|
+|exchangeId|String|the exchange id|
+|exception|Throwable|the Exchange exception (if any)|
+|request|Message|the message|
+|message|Message|the message|
+|headers|Map|the message headers|
+|header(name)|Object|the message header by the given name|
+|header(name, type)|Type|the message header by the given name as the given type|
+|properties|Map|the exchange properties|
+|property(name)|Object|the exchange property by the given name|
+|property(name, type)|Type|the exchange property by the given name as the given type|
+
+# Example
+
+You can use SpEL as an expression for [Recipient
+List](#eips:recipientList-eip.adoc) or as a predicate inside a [Message
+Filter](#eips:filter-eip.adoc):
+
+
+    <route>
+        <from uri="direct:foo"/>
+        <filter>
+            <spel>#{request.headers.foo == 'bar'}</spel>
+            <to uri="direct:bar"/>
+        </filter>
+    </route>
+
+And the equivalent in Java DSL:
+
+ from("direct:foo")
+ .filter().spel("#{request.headers.foo == 'bar'}")
+ .to("direct:bar");
+
+## Expression templating
+
+SpEL expressions need to be surrounded by `#{` and `}` delimiters since
+expression templating is enabled. This allows you to combine SpEL
+expressions with regular text and use this as an extremely lightweight
+template language.
+
+For example, if you construct the following route:
+
+ from("direct:example")
+ .setBody(spel("Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}"))
+ .to("mock:result");
+
+In the route above, notice `spel` is a static method which we need to
+import from `org.apache.camel.language.spel.SpelExpression.spel`, as we
+use `spel` as an Expression passed in as a parameter to the `setBody`
+method. Though if we use the fluent API, we can do this instead:
+
+ from("direct:example")
+ .setBody().spel("Hello #{request.body}! What a beautiful #{request.headers['dayOrNight']}")
+ .to("mock:result");
+
+Notice we now use the `spel` method from the `setBody()` method. And
+this does not require us to statically import the `spel` method.
+
+Then we send a message with the string "World" in the body, and a header
+`dayOrNight` with value `day`:
+
+ template.sendBodyAndHeader("direct:example", "World", "dayOrNight", "day");
+
+The output on `mock:result` will be *"Hello World! What a beautiful
+day"*
+
+## Bean integration
+
+You can reference beans defined in the
+[Registry](#manual::registry.adoc) in your SpEL expressions. For
+example, if you have a bean named "foo" registered in the Spring
+`ApplicationContext`. You can then invoke the "bar" method on this bean
+like this:
+
+ #{@foo.bar == 'xyz'}
+
+# Loading script from external resource
+
+You can externalize the script and have Apache Camel load it from a
+resource such as `"classpath:"`, `"file:"`, or `"http:"`. This is done
+using the following syntax: `"resource:scheme:location"`, e.g., to refer
+to a file on the classpath you can do:
+
+ .setHeader("myHeader").spel("resource:classpath:myspel.txt")
diff --git a/camel-split-eip.md b/camel-split-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..c7d6626a3a73cc53d3c9d82d7606e8ee4249d044
--- /dev/null
+++ b/camel-split-eip.md
@@ -0,0 +1,820 @@
+# Split-eip.md
+
+How can we process a message if it contains multiple elements, each of
+which may have to be processed in a different way?
+
+
+
+
+
+Use a Splitter to break out the composite message into a series of
+individual messages, each containing data related to one item.
+
+The
+[Splitter](http://www.enterpriseintegrationpatterns.com/patterns/messaging/Sequencer.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) allows
+you to split a message into a number of pieces and process them
+individually.
+
+# Options
+
+# Exchange properties
+
+# Using Split
+
+The following example shows how to take a request from the `direct:a`
+endpoint, then split into sub messages, which each are sent to the
+`direct:b` endpoint.
+
+The example splits the message body using a tokenizer to split into
+lines using the new-line character as separator.
+
+ from("direct:a")
+ .split(body().tokenize("\n"))
+ .to("direct:b");
+
+And in XML:
+
+
+    <route>
+        <from uri="direct:a"/>
+        <split>
+            <tokenize token="\n"/>
+            <to uri="direct:b"/>
+        </split>
+    </route>
+
+The Split EIP has special support for splitting using a delimiter,
+instead of using
+[Tokenize](#components:languages:tokenize-language.adoc) language.
+
+The previous example can also be done as follows:
+
+ from("direct:a")
+ .split(body()).delimiter("\n")
+ .to("direct:b");
+
+And in XML:
+
+
+    <route>
+        <from uri="direct:a"/>
+        <split delimiter="\n">
+            <simple>${body}</simple>
+            <to uri="direct:b"/>
+        </split>
+    </route>
+
+The splitter can use any [Expression](#manual:ROOT:expression.adoc), so
+you could use any of the supported languages such as
+[Simple](#components:languages:simple-language.adoc),
+[XPath](#components:languages:xpath-language.adoc),
+[JSonPath](#components:languages:jsonpath-language.adoc),
+[Groovy](#components:languages:groovy-language.adoc) to perform the
+split.
+
+Java
+
+    from("activemq:my.queue")
+        .split(xpath("//foo/bar"))
+        .to("file:some/directory");
+
+XML
+
+    <route>
+        <from uri="activemq:my.queue"/>
+        <split>
+            <xpath>//foo/bar</xpath>
+            <to uri="file:some/directory"/>
+        </split>
+    </route>
+
+## Splitting the message body
+
+A common use case is to split a list/set/collection/map, array, or
+something that is iterable from the message body.
+
+The Split EIP will by default split the message body based on the value
+type:
+
+- `java.util.Collection`: splits by each element from the
+ collection/list/set.
+
+- `java.util.Map`: splits by each `Map.Entry` from the map.
+
+- `Object[]`: splits the array by each element
+
+- `Iterator`: splits by the iterator
+
+- `Iterable`: splits by the iterable
+
+- `org.w3c.dom.NodeList`: splits the XML document by each element from
+ the list
+
+- `String`: splits the string value by comma as separator
+
+For any other type, the message body is not split but used *as-is*,
+meaning that the Split EIP produces a single sub message (the same
+message).
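The type-based rules above can be sketched in plain Java. This is a hypothetical helper for illustration, not Camel's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Map;

public class SplitSketch {

    // Hypothetical helper mimicking the default splitting rules
    static List<Object> splitBody(Object body) {
        if (body instanceof Map) {
            // splits by each Map.Entry
            return new ArrayList<>(((Map<?, ?>) body).entrySet());
        }
        if (body instanceof Collection) {
            // splits by each element
            return new ArrayList<>((Collection<?>) body);
        }
        if (body instanceof Object[]) {
            // splits the array by each element
            return new ArrayList<>(Arrays.asList((Object[]) body));
        }
        if (body instanceof String) {
            // splits the string value by comma as separator
            return new ArrayList<>(Arrays.asList(((String) body).split(",")));
        }
        // any other type: not split, used as-is in a single sub message
        return List.of(body);
    }

    public static void main(String[] args) {
        System.out.println(splitBody("a,b,c"));       // String splits by comma
        System.out.println(splitBody(List.of(1, 2))); // Collection splits per element
        System.out.println(splitBody(42));            // anything else: single message
    }
}
```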
+
+To use this with the splitter, you should *just* use body as the
+expression:
+
+Java
+
+    from("direct:splitUsingBody")
+        .split(body())
+        .to("mock:result");
+
+XML
+
+In XML, you use [Simple](#components:languages:simple-language.adoc) to
+refer to the message body:
+
+    <route>
+        <from uri="direct:splitUsingBody"/>
+        <split>
+            <simple>${body}</simple>
+            <to uri="mock:result"/>
+        </split>
+    </route>
+
+## Splitting with parallel processing
+
+You can enable parallel processing with Split EIP so each split message
+is processed by its own thread in parallel.
+
+The example below enables parallel mode:
+
+Java
+
+    from("direct:a")
+        .split(body()).parallelProcessing()
+        .to("direct:x")
+        .to("direct:y")
+        .to("direct:z");
+
+XML
+
+    <route>
+        <from uri="direct:a"/>
+        <split parallelProcessing="true">
+            <simple>${body}</simple>
+            <to uri="direct:x"/>
+            <to uri="direct:y"/>
+            <to uri="direct:z"/>
+        </split>
+    </route>
+
+When parallel processing is enabled, the Camel routing engine will
+continue processing using the last used thread from the parallel thread
+pool. However, if you want to use the original thread that called the
+splitter, then make sure to enable the `synchronous` option as well.
+
+## Ending a Split block
+
+You may want to continue routing the exchange after the Split EIP. In
+Java DSL you need to use `end()` to mark where split ends, and where
+other EIPs can be added to continue the route.
+
+In the example below, sending to `mock:result` happens after the Split
+EIP has finished. In other words, the message must be completely split
+before it continues.
+
+Java
+
+    from("direct:a")
+        .split(body()).parallelProcessing()
+            .to("direct:x")
+            .to("direct:y")
+            .to("direct:z")
+        .end()
+        .to("mock:result");
+
+XML
+
+And in XML it is intuitive, as `</split>` marks the end of the block:
+
+    <route>
+        <from uri="direct:a"/>
+        <split parallelProcessing="true">
+            <simple>${body}</simple>
+            <to uri="direct:x"/>
+            <to uri="direct:y"/>
+            <to uri="direct:z"/>
+        </split>
+        <to uri="mock:result"/>
+    </route>
+
+## What is returned from Split EIP when it is complete
+
+The Splitter will by default return the original input message.
+
+You can control this by using a custom `AggregationStrategy`.
+
+## Aggregating
+
+The `AggregationStrategy` is used for aggregating all the split
+exchanges together as a single response exchange, that becomes the
+outgoing exchange after the Split EIP block.
+
+The example now aggregates with the `MyAggregationStrategy` class:
+
+Java
+
+    from("direct:start")
+        .split(body(), new MyAggregationStrategy())
+            .to("direct:x")
+            .to("direct:y")
+            .to("direct:z")
+        .end()
+        .to("mock:result");
+
+XML
+
+And in XML we can refer to the FQN class name with `#class:` syntax as
+shown below:
+
+    <route>
+        <from uri="direct:start"/>
+        <split aggregationStrategy="#class:com.mycompany.MyAggregationStrategy">
+            <simple>${body}</simple>
+            <to uri="direct:x"/>
+            <to uri="direct:y"/>
+            <to uri="direct:z"/>
+        </split>
+        <to uri="mock:result"/>
+    </route>
+The Multicast, Recipient List, and Splitter EIPs have special support
+for using `AggregationStrategy` with access to the original input
+exchange. You may want to use this when you aggregate messages and there
+has been a failure in one of the messages, which you then want to enrich
+on the original input message and return as response; it’s the aggregate
+method with 3 exchange parameters.
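How such a strategy is applied can be sketched in plain Java: Camel calls `aggregate(oldExchange, newExchange)` pairwise over the split results, with `null` as the old exchange on the first call. The sketch below is a simplified stand-in that uses strings in place of Exchanges:

```java
import java.util.List;
import java.util.function.BinaryOperator;

public class AggregateSketch {

    // Pairwise fold, mirroring how AggregationStrategy.aggregate is
    // invoked: the old value is null on the first call
    static String aggregateAll(List<String> parts, BinaryOperator<String> strategy) {
        String result = null;
        for (String part : parts) {
            result = strategy.apply(result, part);
        }
        return result;
    }

    public static void main(String[] args) {
        BinaryOperator<String> strategy = (oldBody, newBody) ->
                oldBody == null ? newBody : oldBody + "+" + newBody;
        System.out.println(aggregateAll(List.of("x", "y", "z"), strategy)); // x+y+z
    }
}
```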
+
+## Splitting modes
+
+The Split EIP operates in two modes when splitting:
+
+- *default mode*: The message is split into sub messages, which makes it
+  possible to know the total split size. However, this causes all sub
+  messages to be kept temporarily in-memory.
+
+- *streaming mode*: The message is split on-demand. This uses an
+ iterator to keep track of the splitting index, but avoids loading
+ all sub messages into memory. However, the total size cannot be
+ known ahead of time.
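The difference between the two modes can be illustrated in plain Java (assuming a comma-delimited body; an illustration only, not Camel's implementation):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.Scanner;

public class SplitModesSketch {

    // default mode: materialize everything up front; size known immediately
    static List<String> defaultMode(String body) {
        return Arrays.asList(body.split(","));
    }

    // streaming mode: consume on demand via an iterator;
    // the size is only known once the iterator is exhausted
    static int streamingModeCount(String body) {
        Iterator<String> it = new Scanner(body).useDelimiter(",");
        int count = 0;
        while (it.hasNext()) {
            it.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String body = "a,b,c";
        System.out.println(defaultMode(body).size()); // 3, known up front
        System.out.println(streamingModeCount(body)); // 3, known only at the end
    }
}
```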
+
+## Using streaming mode
+
+You can split in streaming mode as shown:
+
+Java
+
+    from("direct:streaming")
+        .split(body().tokenize(",")).streaming()
+        .to("activemq:my.parts");
+
+XML
+
+    <route>
+        <from uri="direct:streaming"/>
+        <split streaming="true">
+            <tokenize token=","/>
+            <to uri="activemq:my.parts"/>
+        </split>
+    </route>
+
+You can also supply a custom
+[Bean](#components:languages:bean-language.adoc) to perform the
+splitting in streaming mode like this:
+
+Java
+
+    from("direct:streaming")
+        .split(method(new MyCustomSplitter(), "splitMe")).streaming()
+        .to("activemq:my.parts");
+
+XML
+
+    <route>
+        <from uri="direct:streaming"/>
+        <split streaming="true">
+            <method beanType="MyCustomSplitter" method="splitMe"/>
+            <to uri="activemq:my.parts"/>
+        </split>
+    </route>
+
+Then the custom bean could, for example, be implemented as follows:
+
+    public class MyCustomSplitter {
+
+        public List<String> splitMe(Exchange exchange) {
+            Object body = exchange.getMessage().getBody();
+
+            List<String> answer = new ArrayList<>();
+            // split the message body how you like
+            return answer;
+        }
+    }
+
+The bean should just return something that the splitter can work with
+when splitting, such as a `List` or `Iterator` etc.
+
+The bean method `splitMe` uses `Exchange` as parameter, however, Camel
+supports [Bean Parameter Binding](#manual:ROOT:bean-binding.adoc), which
+allows using other parameters types instead.
+
+## Streaming big XML payloads
+
+**Splitting big XML payloads**
+
+The XPath engines in Java and Saxon load the entire XML content into
+memory, and thus they are not well suited for very big XML payloads.
+Instead, you can use a custom expression which iterates the XML payload
+in a streamed fashion. The Tokenizer language supports this when you
+supply the start and end tokens. Alternatively, the XMLTokenizer
+language is specifically provided for tokenizing XML documents.
+
+There are two tokenizers that can be used to tokenize an XML payload:
+
+- [Tokenize](#components:languages:tokenize-language.adoc) language
+
+- [XML Tokenize](#components:languages:xtokenize-language.adoc)
+ language
+
+## Streaming big XML payloads using Tokenize language
+
+The first tokenizer uses the same principle as in the text tokenizer to
+scan the XML payload and extract a sequence of tokens. If you have a big
+XML payload, from a file source, and want to split it in streaming mode,
+then you can use the
+[Tokenize](#components:languages:tokenize-language.adoc) language with
+start/end tokens to do this with low memory footprint.
+
+**StAX component**
+
+The Camel StAX component can also be used to split big XML files in a
+streaming mode. See more details at
+[StAX](#components::stax-component.adoc).
+
+For example, you may have an XML payload structured as follows:
+
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <orders>
+        <order id="1">
+            ...
+        </order>
+        <order id="2">
+            ...
+        </order>
+        ...
+    </orders>
+
+
+Now to split this big file using
+[XPath](#components:languages:xpath-language.adoc) would cause the
+entire content to be loaded into memory. So instead, we can use the
+[Tokenize](#components:languages:tokenize-language.adoc) language to do
+this as follows:
+
+Java
+
+    from("file:inbox")
+        .split().tokenizeXML("order").streaming()
+            .to("activemq:queue:order");
+
+XML
+
+    <route>
+        <from uri="file:inbox"/>
+        <split streaming="true">
+            <tokenize token="order" xml="true"/>
+            <to uri="activemq:queue:order"/>
+        </split>
+    </route>
+
+This will split the file using the tag name of the child nodes (more
+precisely, the local name of the element without any namespace prefix),
+which means it will grab the content between the `<order>` and
+`</order>` tags (including the tags).
+
+So, for example, a split message would be structured as follows:
+
+    <order id="1">
+        ...
+    </order>
+
+If you want to inherit namespaces from a root/parent tag, then you can
+do this as well by providing the name of the root/parent tag:
+
+Java
+
+    from("file:inbox")
+        .split().tokenizeXML("order", "orders").streaming()
+            .to("activemq:queue:order");
+
+XML
+
+    <route>
+        <from uri="file:inbox"/>
+        <split streaming="true">
+            <tokenize token="order" xml="true" inheritNamespaceTagName="orders"/>
+            <to uri="activemq:queue:order"/>
+        </split>
+    </route>
+
+You can set the `inheritNamespaceTagName` property to `*` to include
+the preceding context in each token (i.e., generating each token
+enclosed in its ancestor elements). Note that each token must share the
+same ancestor elements in this case. The above tokenizer works well on
+simple structures, but has some inherent limitations in handling more
+complex XML structures.
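What the tag-based tokenizer conceptually does can be sketched in plain Java with a regular expression (an illustrative sketch only; the real Tokenize language works in a streaming fashion and is far more robust than a regex):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Plain-Java sketch (not the Camel implementation) of what
// tokenizeXML("order") conceptually does: grab each <order>...</order>
// block, tags included.
public class XmlTokenSketch {

    static List<String> tokenize(String xml, String tag) {
        // match <tag> or <tag attrs...>, then reluctantly up to </tag>
        Pattern p = Pattern.compile(
                "<" + tag + "(\\s[^>]*)?>.*?</" + tag + ">", Pattern.DOTALL);
        Matcher m = p.matcher(xml);
        List<String> tokens = new ArrayList<>();
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        String xml = "<orders><order id=\"1\">beer</order>"
                + "<order id=\"2\">wine</order></orders>";
        for (String token : tokenize(xml, "order")) {
            System.out.println(token);
        }
    }
}
```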
+
+## Streaming big XML payloads using XML Tokenize language
+
+The second tokenizer ([XML
+Tokenize](#components:languages:xtokenize-language.adoc)) uses a StAX
+parser to overcome these limitations. This tokenizer recognizes XML
+namespaces and also handles simple and complex XML structures more
+naturally and efficiently.
+
+To split with XML namespaces on a tag with a local namespace such as
+`{urn:shop}order`, we can write:
+
+    Namespaces ns = new Namespaces("ns1", "urn:shop");
+
+    from("file:inbox")
+        .split().xtokenize("//ns1:order", 'i', ns).streaming()
+            .to("activemq:queue:order");
+
+Two arguments control the behavior of the tokenizer:
+
+1. The first argument specifies the element using a path notation.
+   This path notation uses a subset of XPath with wildcard support.
+
+2. The second argument represents the extraction mode.
+
+The available extraction modes are:
+
+|Mode|Description|
+|---|---|
+|`i`|injecting the contextual namespace bindings into the extracted token (default)|
+|`w`|wrapping the extracted token in its ancestor context|
+|`u`|unwrapping the extracted token to its child content|
+|`t`|extracting the text content of the specified element|
+
+Having an input XML:
+
+    <m:orders xmlns:m="urn:shop" xmlns:cat="urn:shop:catalog">
+        <m:order><id>123</id><date>2014-02-25</date>...</m:order>
+        ...
+    </m:orders>
+
+Each mode will result in the following tokens:
+
+|Mode|Token|
+|---|---|
+|`i`|`<m:order xmlns:m="urn:shop" xmlns:cat="urn:shop:catalog"><id>123</id><date>2014-02-25</date>…</m:order>`|
+|`w`|`<m:orders xmlns:m="urn:shop" xmlns:cat="urn:shop:catalog"><m:order><id>123</id><date>2014-02-25</date>…</m:order></m:orders>`|
+|`u`|`<id>123</id><date>2014-02-25</date>…`|
+|`t`|`1232014-02-25…`|
+
+In XML, the equivalent route would be written as follows:
+
+    <route>
+        <from uri="file:inbox"/>
+        <split streaming="true">
+            <xtokenize xmlns:ns1="urn:shop">//ns1:order</xtokenize>
+            <to uri="activemq:queue:order"/>
+        </split>
+    </route>
+
+or setting the extraction mode explicitly as
+
+    <xtokenize mode="i" xmlns:ns1="urn:shop">//ns1:order</xtokenize>
+
+Note that this StAX-based tokenizer uses the StAX Location API and
+requires a StAX Reader implementation (such as Woodstox) that correctly
+returns the offset position pointing to the beginning of each event
+triggering segment (the offset position of `<` at each start and end
+element event). If you use a StAX Reader which does not implement that
+API correctly, it results in invalid XML snippets after the split, for
+example a snippet that is wrongly terminated in the middle of a tag.
+
+## Splitting files by grouping N lines together
+
+The [Tokenize](#components:languages:tokenize-language.adoc) language
+can be used for grouping N parts together, for example, to split big
+files into chunks of 1000 lines.
+
+Doing this is easy as the following example shows:
+
+Java
+
+    from("file:inbox")
+        .split().tokenize("\n", 1000).streaming()
+            .to("activemq:queue:order");
+
+XML
+
+    <route>
+        <from uri="file:inbox"/>
+        <split streaming="true">
+            <tokenize token="\n" group="1000"/>
+            <to uri="activemq:queue:order"/>
+        </split>
+    </route>
+
+The `group` value must be a positive number that dictates how many
+elements to combine in a group. The parts in each group are combined
+using the token.
+
+In the example above, each message sent to the activemq order queue will
+contain 1000 lines, with each line separated by the token (a new line).
+
+The output when using the group option is always a `java.lang.String`
+type.
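The grouping behavior can be sketched in plain Java (an illustrative sketch only, not the Camel implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of tokenize("\n", 1000): every N parts are
// re-joined with the token into one String message.
public class GroupSketch {

    static List<String> group(String body, String token, int size) {
        String[] parts = body.split(token);
        List<String> groups = new ArrayList<>();
        for (int i = 0; i < parts.length; i += size) {
            StringBuilder sb = new StringBuilder();
            for (int j = i; j < Math.min(i + size, parts.length); j++) {
                if (sb.length() > 0) {
                    sb.append(token);
                }
                sb.append(parts[j]);
            }
            groups.add(sb.toString()); // the output is always a String
        }
        return groups;
    }

    public static void main(String[] args) {
        // 5 lines grouped in chunks of 2 -> 3 String messages
        List<String> messages = group("a\nb\nc\nd\ne", "\n", 2);
        System.out.println(messages.size());
    }
}
```

Each resulting element is one `String` message containing up to N parts re-joined by the token, matching the behavior described above.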
+
+## Split and aggregate example
+
+This sample shows how you can split an Exchange, process each split
+message, aggregate and return a combined response to the original
+caller.
+
+The route below illustrates this and how the split supports a custom
+`AggregationStrategy` to build up the combined response message.
+
+    // this route starts from the direct:start endpoint
+    // the body is then split based on the @ separator
+    // the splitter in Camel supports InOut as well, and for that we need
+    // to be able to aggregate what response we need to send back, so we provide our
+    // own strategy with the class MyOrderStrategy.
+    from("direct:start")
+        .split(body().tokenize("@"), new MyOrderStrategy())
+            // each split message is then sent to this bean where we can process it
+            .to("bean:MyOrderService?method=handleOrder")
+            // this is important to end the splitter route as we do not want to do more routing
+            // on each split message
+        .end()
+        // after we have split and handled each message, we want to send a single combined
+        // response back to the original caller, so we let this bean build it for us,
+        // this bean will receive the result of the aggregate strategy: MyOrderStrategy
+        .to("bean:MyOrderService?method=buildCombinedResponse");
+
+And the OrderService bean is as follows:
+
+ public static class MyOrderService {
+
+ private static int counter;
+
+ /**
+ * We just handle the order by returning an id line for the order
+ */
+ public String handleOrder(String line) {
+ LOG.debug("HandleOrder: {}", line);
+ return "(id=" + ++counter + ",item=" + line + ")";
+ }
+
+ /**
+ * We use the same bean for building the combined response to send
+ * back to the original caller
+ */
+ public String buildCombinedResponse(String line) {
+ LOG.debug("BuildCombinedResponse: {}", line);
+ return "Response[" + line + "]";
+ }
+ }
+
+And here is our custom `AggregationStrategy`. It holds the in-progress
+aggregated message which, after the splitter has ended, is sent to the
+`buildCombinedResponse` method for final processing before the combined
+response is returned to the waiting caller.
+
+    /**
+     * This is our own order aggregation strategy where we can control
+     * how each split message should be combined. As we do not want to
+     * lose any message, we copy from the new to the old to preserve the
+     * order lines as we process them
+     */
+ public static class MyOrderStrategy implements AggregationStrategy {
+
+ public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
+ // put order together in old exchange by adding the order from new exchange
+
+ if (oldExchange == null) {
+ // the first time we aggregate we only have the new exchange,
+ // so we just return it
+ return newExchange;
+ }
+
+ String orders = oldExchange.getIn().getBody(String.class);
+ String newLine = newExchange.getIn().getBody(String.class);
+
+ LOG.debug("Aggregate old orders: {}", orders);
+ LOG.debug("Aggregate new order: {}", newLine);
+
+ // put orders together separating by semicolon
+ orders = orders + ";" + newLine;
+ // put combined order back on old to preserve it
+ oldExchange.getIn().setBody(orders);
+
+ // return old as this is the one that has all the orders gathered until now
+ return oldExchange;
+ }
+ }
+
+So let’s run the sample and see how it works.
+
+We send an Exchange to the direct:start endpoint containing a message
+body with the String value: `A@B@C`. The flow is:
+
+ HandleOrder: A
+ HandleOrder: B
+ Aggregate old orders: (id=1,item=A)
+ Aggregate new order: (id=2,item=B)
+ HandleOrder: C
+ Aggregate old orders: (id=1,item=A);(id=2,item=B)
+ Aggregate new order: (id=3,item=C)
+ BuildCombinedResponse: (id=1,item=A);(id=2,item=B);(id=3,item=C)
+ Response to caller: Response[(id=1,item=A);(id=2,item=B);(id=3,item=C)]
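The flow above can be replayed in plain Java (an illustrative sketch; the id counter is made an explicit argument here for determinism, unlike the bean above):

```java
// Plain-Java replay (not Camel API) of the sample: split "A@B@C" on '@',
// handle each part, and fold the results like MyOrderStrategy does.
public class AggregateReplay {

    // mirrors MyOrderService.handleOrder
    static String handleOrder(int id, String line) {
        return "(id=" + id + ",item=" + line + ")";
    }

    // mirrors MyOrderStrategy.aggregate: the first result is kept as-is
    // (oldExchange == null), later results are appended with ';'
    static String aggregate(String oldBody, String newBody) {
        return oldBody == null ? newBody : oldBody + ";" + newBody;
    }

    public static void main(String[] args) {
        String[] parts = "A@B@C".split("@");
        String combined = null;
        for (int i = 0; i < parts.length; i++) {
            combined = aggregate(combined, handleOrder(i + 1, parts[i]));
        }
        // mirrors MyOrderService.buildCombinedResponse
        System.out.println("Response[" + combined + "]");
        // prints: Response[(id=1,item=A);(id=2,item=B);(id=3,item=C)]
    }
}
```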
+
+## Stop processing in case of exception
+
+The Splitter will by default continue to process the entire Exchange
+even if one of the split messages throws an exception during routing.
+
+For example, if you have an Exchange with 1000 rows that you split, and
+an exception is thrown while processing the 17th split message, then
+Camel will by default still process the remaining 983 messages. You
+have the chance to deal with the exception when aggregating, using an
+`AggregationStrategy`.
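The two behaviors can be contrasted in plain Java (an illustrative sketch only, not the Camel implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch contrasting the default behavior with
// stopOnException: continue past a failing part, or abort at the
// first failure.
public class StopOnExceptionSketch {

    static List<String> process(String[] parts, boolean stopOnException) {
        List<String> handled = new ArrayList<>();
        for (String part : parts) {
            try {
                if (part.isEmpty()) {
                    throw new IllegalArgumentException("empty part");
                }
                handled.add(part.toUpperCase());
            } catch (IllegalArgumentException e) {
                if (stopOnException) {
                    throw e; // propagate back, remaining parts are skipped
                }
                // default: keep going; the failure can be dealt with
                // later in an AggregationStrategy
            }
        }
        return handled;
    }

    public static void main(String[] args) {
        String[] parts = {"a", "", "c"};
        System.out.println(process(parts, false)); // continues past the failure
        try {
            process(parts, true);
        } catch (IllegalArgumentException e) {
            System.out.println("stopped at first failure");
        }
    }
}
```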
+
+But sometimes you want Apache Camel to stop and let the exception be
+propagated back, and let the Camel [Error
+Handler](#manual:ROOT:error-handler.adoc) handle it. You can do this by
+specifying that the Splitter should stop in case an exception occurs.
+This is done with the `stopOnException` option as shown below:
+
+Java
+
+    from("direct:start")
+        .split(body().tokenize(",")).stopOnException()
+            .process(new MyProcessor())
+            .to("mock:split")
+        .end()
+        .to("direct:cheese");
+
+XML
+
+    <route>
+        <from uri="direct:start"/>
+        <split stopOnException="true">
+            <tokenize token=","/>
+            <process ref="myProcessor"/>
+            <to uri="mock:split"/>
+        </split>
+        <to uri="direct:cheese"/>
+    </route>
+
+In the example above, `MyProcessor` causes a failure and throws an
+exception. This means the Split EIP will stop after this, and not split
+any further.
+
+## Sharing unit of work
+
+The Splitter will by default not share unit of work between the parent
+exchange and each split exchange. This means each sub exchange has its
+own individual unit of work.
+
+For example, suppose you need to split a big message and regard that
+process as an atomic, isolated operation that either succeeds or fails.
+In case of a failure, you want the big message to be moved into a dead
+letter queue.
+
+To support this use case, you would have to share the unit of work on
+the Splitter.
+
+Here is an example in Java DSL:
+
+ errorHandler(deadLetterChannel("mock:dead").useOriginalMessage()
+ .maximumRedeliveries(3).redeliveryDelay(0));
+
+ from("direct:start")
+ .to("mock:a")
+ // share unit of work in the splitter, which tells Camel to propagate failures from
+ // processing the split messages back to the result of the splitter, which allows
+ // it to act as a combined unit of work
+ .split(body().tokenize(",")).shareUnitOfWork()
+ .to("mock:b")
+ .to("direct:line")
+ .end()
+ .to("mock:result");
+
+ from("direct:line")
+ .to("log:line")
+ .process(new MyProcessor())
+ .to("mock:line");
+
+What happens is that if an exception is thrown during splitting, then
+the error handler will kick in (yes, error handling still applies for
+the sub messages).
+
+The error handler in this example is configured to retry up to three
+times. When a split message fails all redelivery attempts (it is
+exhausted), then this message is **not** moved into the dead letter
+queue.
+
+The reason is that we have shared the unit of work, so the split message
+will report the error on the shared unit of work. When the Splitter is
+done, it checks the state of the shared unit of work and checks if any
+errors occurred. If an error occurred it will set the exception on the
+`Exchange` and mark it for rollback.
+
+The error handler will yet again kick in, as the `Exchange` has been
+marked as rollback. No redelivery attempts are performed (as it was
+marked for rollback) and the `Exchange` will be moved into the dead
+letter queue.
+
+Using this from the XML DSL is just as easy, as all you have to do is
+set `shareUnitOfWork="true"`:
+
+    <camelContext errorHandlerRef="dlc" xmlns="http://camel.apache.org/schema/spring">
+
+        <errorHandler id="dlc" type="DeadLetterChannel" deadLetterUri="mock:dead" useOriginalMessage="true">
+            <redeliveryPolicy maximumRedeliveries="3" redeliveryDelay="0"/>
+        </errorHandler>
+
+        <route>
+            <from uri="direct:start"/>
+            <to uri="mock:a"/>
+            <split shareUnitOfWork="true">
+                <tokenize token=","/>
+                <to uri="mock:b"/>
+                <to uri="direct:line"/>
+            </split>
+            <to uri="mock:result"/>
+        </route>
+
+        <route>
+            <from uri="direct:line"/>
+            <to uri="log:line"/>
+            <process ref="myProcessor"/>
+            <to uri="mock:line"/>
+        </route>
+
+    </camelContext>
+
+# See Also
+
+Because the [Multicast](#multicast-eip.adoc) EIP is the baseline for the
+[Recipient List](#recipientList-eip.adoc) and Split EIPs, you can find
+more information in those EIPs about features that are also available
+with the Split EIP.
diff --git a/camel-splunk-hec.md b/camel-splunk-hec.md
index aeb8183d0f8fc814183a0f5c4ec9eba030e20050..8ed34a6e1a90a3d48b4cc803dc77d68bf01565cc 100644
--- a/camel-splunk-hec.md
+++ b/camel-splunk-hec.md
@@ -21,7 +21,9 @@ for this component:
splunk-hec:[splunkURL]?[options]
-# Message body
+# Usage
+
+## Message body
The body must be serializable to JSON, so it may be sent to Splunk.
@@ -34,7 +36,7 @@ ingestion.
It is meant for high-volume ingestion of machine data.
-# Configuring the index time
+## Configuring the index time
By default, the index time for an event is when the event makes it to
the Splunk server.
@@ -60,6 +62,8 @@ could be skewed. If you want to override the index time, you can do so.
|---|---|---|---|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|sslContextParameters|Sets the default SSL configuration to use for all the endpoints. You can also configure it directly at the endpoint level.||object|
+|useGlobalSslContextParameters|Enable usage of global SSL context parameters.|false|boolean|
## Endpoint Configurations
@@ -74,6 +78,7 @@ could be skewed. If you want to override the index time, you can do so.
|source|Splunk source argument|camel|string|
|sourceType|Splunk sourcetype argument|camel|string|
|splunkEndpoint|Splunk endpoint Defaults to /services/collector/event To write RAW data like JSON use /services/collector/raw For a list of all endpoints refer to splunk documentation (HTTP Event Collector REST API endpoints) Example for Splunk 8.2.x: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/HECRESTendpoints To extract timestamps in Splunk 8.0 /services/collector/eventauto\_extract\_timestamp=true Remember to utilize RAW{} for questionmarks or slashes in parameters.|/services/collector/event|string|
+|sslContextParameters|SSL configuration||object|
|time|Time this even occurred. By default, the time will be when this event hits the splunk server.||integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|https|Contact HEC over https.|true|boolean|
diff --git a/camel-splunk.md b/camel-splunk.md
index cec5c847ca5065aef93a74dad8fc89a97d739fb9..b8371982b29bba8619fd4a9c12512fa28379de40 100644
--- a/camel-splunk.md
+++ b/camel-splunk.md
@@ -22,7 +22,9 @@ for this component:
splunk://[endpoint]?[options]
-# Producer Endpoints:
+# Usage
+
+## Producer Endpoints
@@ -30,27 +32,27 @@ for this component:
-
+
-
-stream
+
+stream
Streams data to a named index or the
default if not specified. When using stream mode, be aware of that
Splunk has some internal buffer (about 1MB or so) before events get to
the index. If you need realtime, better use submit or tcp mode.
-
-submit
+
+submit
submit mode. Uses Splunk rest api to
publish events to a named index or the default if not
specified.
-
-tcp
+
+tcp
tcp mode. Streams data to a tcp port,
and requires an open receiver port in Splunk.
@@ -68,7 +70,7 @@ See comment under message body.
In this example, a converter is required to convert to a SplunkEvent
class.
-# Consumer Endpoints:
+## Consumer Endpoints
@@ -76,24 +78,24 @@ class.
-
+
-
-normal
+
+normal
Performs normal search and requires a
search query in the search option.
-
-realtime
+
+realtime
Performs realtime search on live data
and requires a search query in the search option.
-
-savedsearch
+
+savedsearch
Performs search based on a search query
saved in splunk and requires the name of the query in the savedSearch
option.
@@ -106,12 +108,12 @@ option.
from("splunk://normal?delay=5000&username=user&password=123&initEarliestTime=-10s&search=search index=myindex sourcetype=someSourcetype")
.to("direct:search-result");
-camel-splunk creates a route exchange per search result with a
+Camel Splunk component creates a route exchange per search result with a
SplunkEvent in the body.
-# Message body
+## Message body
-Splunk operates on data in key/value pairs. The SplunkEvent class is a
+Splunk operates on data in key/value pairs. The `SplunkEvent` class is a
placeholder for such data, and should be in the message body for the
producer. Likewise, it will be returned in the body per search result
for the consumer.
@@ -120,7 +122,7 @@ You can send raw data to Splunk by setting the raw option on the
producer endpoint. This is useful for e.g., json/xml and other payloads
where Splunk has built in support.
-# Use Cases
+## Use Cases
Search Twitter for tweets with music and publish events to Splunk
@@ -160,7 +162,7 @@ Search Splunk for tweets:
from("splunk://normal?username=foo&password=bar&initEarliestTime=-2m&search=search index=camel-tweets sourcetype=twitter")
.log("${body}");
-# Other comments
+## Other comments
Splunk comes with a variety of options for leveraging machine generated
data with prebuilt apps for analyzing and displaying this. For example,
@@ -230,3 +232,4 @@ metrics to Splunk, and display this on a dashboard.
|token|User's token for Splunk. This takes precedence over password when both are set||string|
|username|Username for Splunk||string|
|useSunHttpsHandler|Use sun.net.www.protocol.https.Handler Https handler to establish the Splunk Connection. Can be useful when running in application servers to avoid app. server https handling.|false|boolean|
+|validateCertificates|Sets client's certificate validation mode. Value false makes SSL vulnerable and is not recommended for the production environment.|true|boolean|
diff --git a/camel-spring-batch.md b/camel-spring-batch.md
index 6d4b7126f3346f0f2c377f934e8323c22de0c496..5fb18b72fad956fdaef69ddaa43caa927664c828 100644
--- a/camel-spring-batch.md
+++ b/camel-spring-batch.md
@@ -77,12 +77,12 @@ Batch API directly.
JobExecution jobExecution = mockEndpoint.getExchanges().get(0).getIn().getBody(JobExecution.class);
BatchStatus currentJobStatus = jobExecution.getStatus();
-# Support classes
+## Support classes
Apart from the Component, Camel Spring Batch provides also support
classes, which can be used to hook into Spring Batch infrastructure.
-## CamelItemReader
+### CamelItemReader
`CamelItemReader` can be used to read batch data directly from the Camel
infrastructure.
@@ -103,7 +103,7 @@ from JMS queue.
-## CamelItemWriter
+### CamelItemWriter
`CamelItemWriter` has similar purpose as `CamelItemReader`, but it is
dedicated to write chunk of the processed data.
@@ -124,7 +124,7 @@ from JMS queue.
-## CamelItemProcessor
+### CamelItemProcessor
`CamelItemProcessor` is the implementation of Spring Batch
`org.springframework.batch.item.ItemProcessor` interface. The latter
@@ -161,7 +161,7 @@ language](http://camel.apache.org/simple.html).
-## CamelJobExecutionListener
+### CamelJobExecutionListener
`CamelJobExecutionListener` is the implementation of the
`org.springframework.batch.core.JobExecutionListener` interface sending
diff --git a/camel-spring-main.md b/camel-spring-main.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c2bd2b7b005333b2eb83263788b881985934c21
--- /dev/null
+++ b/camel-spring-main.md
@@ -0,0 +1,6 @@
+# Spring-main.md
+
+**Since Camel 3.2**
+
+This module is used for running classic Spring XML (not Spring Boot) via
+a main class extended from `camel-main`.
diff --git a/camel-spring-rabbitmq.md b/camel-spring-rabbitmq.md
index d26eb7b3babc0cba5060371886236df2b33da39a..5749618043f8cd6ec0309ece0bccedcdd467b45e 100644
--- a/camel-spring-rabbitmq.md
+++ b/camel-spring-rabbitmq.md
@@ -26,7 +26,9 @@ The exchange name determines the exchange to which the produced messages
will be sent to. In the case of consumers, the exchange name determines
the exchange the queue will be bound to.
-# Using a connection factory
+# Usage
+
+## Using a connection factory
To connect to RabbitMQ, you need to set up a `ConnectionFactory` (same
as with JMS) with the login details such as:
@@ -47,7 +49,7 @@ The `ConnectionFactory` is auto-detected by default, so you can do:
-# Default Exchange Name
+## Default Exchange Name
To use default exchange name (which would be an empty exchange name in
RabbitMQ) then you should use `default` as name in the endpoint uri,
@@ -55,7 +57,7 @@ such as:
to("spring-rabbitmq:default?routingKey=foo")
-# Auto declare exchanges, queues and bindings
+## Auto declare exchanges, queues and bindings
Before you can send or receive messages from RabbitMQ, then exchanges,
queues and bindings must be setup first.
@@ -83,7 +85,7 @@ configure the endpoint uri with
-
+
-
+
autoDelete
boolean
True if the server should delete the
@@ -99,7 +101,7 @@ exchange when it is no longer in use (if all bindings are
deleted).
false
-
+
durable
boolean
A durable exchange will survive a
@@ -122,7 +124,7 @@ RabbitMQ documentation.
-
+
-
+
autoDelete
boolean
True if the server should delete the
@@ -138,20 +140,20 @@ exchange when it is no longer in use (if all bindings are
deleted).
false
-
+
durable
boolean
A durable queue will survive a server
restart.
false
-
+
exclusive
boolean
Whether the queue is exclusive
false
-
+
x-dead-letter-exchange
String
The name of the dead letter exchange.
@@ -159,7 +161,7 @@ If none configured, then the component configured value is
used.
-
+
x-dead-letter-routing-key
String
The routing key for the dead letter
@@ -174,7 +176,7 @@ You can also configure any additional `x-` arguments, such as the
message time to live, via `x-message-ttl`, and many others. See details
in the RabbitMQ documentation.
-# Mapping from Camel to RabbitMQ
+## Mapping from Camel to RabbitMQ
The message body is mapped from Camel Message body to a `byte[]` which
is the type that RabbitMQ uses for message body. Camel will use its type
@@ -189,7 +191,7 @@ Custom message headers are mapped from Camel Message headers to RabbitMQ
headers. This behaviour can be customized by configuring a new
implementation of `HeaderFilterStrategy` on the Camel component.
-# Request / Reply
+## Request / Reply
Request and reply messaging is supported using [RabbitMQ direct
reply-to](https://www.rabbitmq.com/direct-reply-to.html).
@@ -219,7 +221,7 @@ the message being logged
.to("log:input")
.transform(body().prepend("Hello "));
-# Reuse endpoint and send to different destinations computed at runtime
+## Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different RabbitMQ exchanges,
it makes sense to reuse an endpoint and specify the real destination in
@@ -239,20 +241,20 @@ You can specify using the following headers:
-
+
-
+
CamelSpringRabbitmqExchangeOverrideName
String
The exchange name.
-
+
CamelSpringRabbitmqRoutingOverrideKey
String
@@ -293,7 +295,7 @@ not propagate them to the created Rabbitmq message to avoid the
accidental loops in the routes (in scenarios when the message will be
forwarded to another RabbitMQ endpoint).
-# Using toD
+## Using toD
If you need to send messages to a lot of different exchanges, it makes
sense to reuse an endpoint and specify the dynamic destinations with
@@ -302,10 +304,12 @@ simple language using [toD](#eips:toD-eip.adoc).
For example, suppose you need to send messages to exchanges with order
types, then using toD could, for example, be done as follows:
+**Example Spring RabbitMQ route with `toD`**
+
from("direct:order")
.toD("spring-rabbit:order-${header.orderType}");
-# Manual Acknowledgement
+## Manual Acknowledgement
If we need to manually acknowledge a message for some use case, we can
do it by setting and acknowledgeMode to Manual and using the below
@@ -394,6 +398,9 @@ the message:
|confirm|Controls whether to wait for confirms. The connection factory must be configured for publisher confirms and this method. auto = Camel detects if the connection factory uses confirms or not. disabled = Confirms is disabled. enabled = Confirms is enabled.|auto|string|
|confirmTimeout|Specify the timeout in milliseconds to be used when waiting for a message sent to be confirmed by RabbitMQ when doing send only messaging (InOnly). The default value is 5 seconds. A negative value indicates an indefinite timeout.|5000|duration|
|replyTimeout|Specify the timeout in milliseconds to be used when waiting for a reply message when doing request/reply (InOut) messaging. The default value is 30 seconds. A negative value indicates an indefinite timeout (Beware that this will cause a memory leak if a reply is not received).|30000|duration|
+|skipBindQueue|If true the queue will not be bound to the exchange after declaring it.|false|boolean|
+|skipDeclareExchange|This can be used if we need to declare the queue but not the exchange.|false|boolean|
+|skipDeclareQueue|If true the producer will not declare and bind a queue. This can be used for directing messages via an existing routing key.|false|boolean|
|usePublisherConnection|Use a separate connection for publishers and consumers|false|boolean|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|args|Specify arguments for configuring the different RabbitMQ concepts, a different prefix is required for each element: arg.consumer. arg.exchange. arg.queue. arg.binding. arg.dlq.exchange. arg.dlq.queue. arg.dlq.binding. For example to declare a queue with message ttl argument: args=arg.queue.x-message-ttl=60000||object|
diff --git a/camel-spring-redis.md b/camel-spring-redis.md
index 92b084bc688b8e01a077156e770f442e0943fa18..9703da097808d51cfdcb992f3026c5d40f2ed205 100644
--- a/camel-spring-redis.md
+++ b/camel-spring-redis.md
@@ -39,7 +39,7 @@ command execution is returned in the message body.
-
+
-
+
HSET
Set the string value of a hash
field
@@ -57,7 +57,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
RedisConstants.VALUE/"CamelRedis.Value" (Object)
void
-
+
HGET
Get the value of a hash field
RedisConstants.KEY/"CamelRedis.Key"
(String)
String
-
+
HSETNX
Set the value of a hash field only if
the field does not exist
@@ -76,7 +76,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
RedisConstants.VALUE/"CamelRedis.Value" (Object)
void
-
+
HMSET
Set multiple hash fields to multiple
values
@@ -86,7 +86,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
(Map<String, Object>)
void
-
+
HMGET
Get the values of all the given hash
fields
@@ -96,7 +96,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
(Collection<String>)
Collection<Object>
-
+
HINCRBY
Increment the integer value of a hash
field by the given number
@@ -106,7 +106,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
RedisConstants.VALUE/"CamelRedis.Value" (Long)
Long
-
+
HEXISTS
Determine if a hash field
exists
@@ -116,7 +116,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
(String)
Boolean
-
+
HDEL
Delete one or more hash fields
RedisConstants.KEY/"CamelRedis.Key"
(String)
void
-
+
HLEN
Get the number of fields in a
hash
@@ -134,7 +134,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
(String)
Long
-
+
HKEYS
Get all the fields in a hash
RedisConstants.KEY/"CamelRedis.Key"
(String)
Set<String>
-
+
HVALS
Get all the values in a hash
RedisConstants.KEY/"CamelRedis.Key"
(String)
Collection<Object>
-
+
HGETALL
Get all the fields and values in a
hash
@@ -170,7 +170,7 @@ style="text-align: left;">RedisConstants.KEY/"CamelRedis.Key"
-
+
-
+
| List Command | Description | Parameters | Result |
|---|---|---|---|
| RPUSH | Append one or multiple values to a list | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Long |
| RPUSHX | Append a value to a list only if the list exists | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Long |
| LPUSH | Prepend one or multiple values to a list | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Long |
| LLEN | Get the length of a list | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| LRANGE | Get a range of elements from a list | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long) | `List<Object>` |
| LTRIM | Trim a list to the specified range | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long) | void |
| LINDEX | Get an element from a list by its index | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.INDEX/"CamelRedis.Index"` (Long) | String |
| LINSERT | Insert an element before or after another element in a list | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.PIVOT/"CamelRedis.Pivot"` (String), `RedisConstants.POSITION/"CamelRedis.Position"` (String) | Long |
| LSET | Set the value of an element in a list by its index | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.INDEX/"CamelRedis.Index"` (Long) | void |
| LREM | Remove elements from a list | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.COUNT/"CamelRedis.Count"` (Long) | Long |
| LPOP | Remove and get the first element in a list | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Object |
| RPOP | Remove and get the last element in a list | `RedisConstants.KEY/"CamelRedis.Key"` (String) | String |
| RPOPLPUSH | Remove the last element in a list, append it to another list and return it | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | Object |
| BRPOPLPUSH | Pop a value from a list, push it to another list and return it; or block until one is available | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String), `RedisConstants.TIMEOUT/"CamelRedis.Timeout"` (Long) | Object |
| BLPOP | Remove and get the first element in a list, or block until one is available | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.TIMEOUT/"CamelRedis.Timeout"` (Long) | Object |
| BRPOP | Remove and get the last element in a list, or block until one is available | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.TIMEOUT/"CamelRedis.Timeout"` (Long) | String |

| Set Command | Description | Parameters | Result |
|---|---|---|---|
| SADD | Add one or more members to a set | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Boolean |
| SMEMBERS | Get all the members in a set | `RedisConstants.KEY/"CamelRedis.Key"` (String) | `Set<Object>` |
| SREM | Remove one or more members from a set | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Boolean |
| SPOP | Remove and return a random member from a set | `RedisConstants.KEY/"CamelRedis.Key"` (String) | String |
| SMOVE | Move a member from one set to another | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | Boolean |
| SCARD | Get the number of members in a set | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| SISMEMBER | Determine if a given value is a member of a set | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Boolean |
| SINTER | Intersect multiple sets | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`) | `Set<Object>` |
| SINTERSTORE | Intersect multiple sets and store the resulting set in a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | void |
| SUNION | Add multiple sets | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`) | `Set<Object>` |
| SUNIONSTORE | Add multiple sets and store the resulting set in a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | void |
| SDIFF | Subtract multiple sets | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`) | `Set<Object>` |
| SDIFFSTORE | Subtract multiple sets and store the resulting set in a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | void |
| SRANDMEMBER | Get one or multiple random members from a set | `RedisConstants.KEY/"CamelRedis.Key"` (String) | String |

| Sorted Set Command | Description | Parameters | Result |
|---|---|---|---|
| ZADD | Add one or more members to a sorted set, or update its score if it already exists | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.SCORE/"CamelRedis.Score"` (Double) | Boolean |
| ZRANGE | Return a range of members in a sorted set, by index | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long), `RedisConstants.WITHSCORE/"CamelRedis.WithScore"` (Boolean) | Object |
| ZREM | Remove one or more members from a sorted set | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Boolean |
| ZINCRBY | Increment the score of a member in a sorted set | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.INCREMENT/"CamelRedis.Increment"` (Double) | Double |
| ZRANK | Determine the index of a member in a sorted set | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Long |
| ZREVRANK | Determine the index of a member in a sorted set, with scores ordered from high to low | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Long |
| ZREVRANGE | Return a range of members in a sorted set, by index, with scores ordered from high to low | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long), `RedisConstants.WITHSCORE/"CamelRedis.WithScore"` (Boolean) | Object |
| ZCARD | Get the number of members in a sorted set | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| ZCOUNT | Count the members in a sorted set with scores within the given values | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.MIN/"CamelRedis.Min"` (Double), `RedisConstants.MAX/"CamelRedis.Max"` (Double) | Long |
| ZRANGEBYSCORE | Return a range of members in a sorted set, by score | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.MIN/"CamelRedis.Min"` (Double), `RedisConstants.MAX/"CamelRedis.Max"` (Double) | `Set<Object>` |
| ZREVRANGEBYSCORE | Return a range of members in a sorted set, by score, with scores ordered from high to low | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.MIN/"CamelRedis.Min"` (Double), `RedisConstants.MAX/"CamelRedis.Max"` (Double) | `Set<Object>` |
| ZREMRANGEBYRANK | Remove all members in a sorted set within the given indexes | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long) | void |
| ZREMRANGEBYSCORE | Remove all members in a sorted set within the given scores | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long) | void |
| ZUNIONSTORE | Add multiple sorted sets and store the resulting sorted set in a new key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | void |
| ZINTERSTORE | Intersect multiple sorted sets and store the resulting sorted set in a new key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.KEYS/"CamelRedis.Keys"` (`Collection<String>`), `RedisConstants.DESTINATION/"CamelRedis.Destination"` (String) | void |

| String Command | Description | Parameters | Result |
|---|---|---|---|
| SET | Set the string value of a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | void |
| GET | Get the value of a key | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Object |
| STRLEN | Get the length of the value stored in a key | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| APPEND | Append a value to a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (String) | Integer |
| SETBIT | Sets or clears the bit at offset in the string value stored at key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.OFFSET/"CamelRedis.Offset"` (Long), `RedisConstants.VALUE/"CamelRedis.Value"` (Boolean) | void |
| GETBIT | Returns the bit value at offset in the string value stored at key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.OFFSET/"CamelRedis.Offset"` (Long) | Boolean |
| SETRANGE | Overwrite part of a string at key starting at the specified offset | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.OFFSET/"CamelRedis.Offset"` (Long) | void |
| GETRANGE | Get a substring of the string stored at a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.START/"CamelRedis.Start"` (Long), `RedisConstants.END/"CamelRedis.End"` (Long) | String |
| SETNX | Set the value of a key only if the key does not exist | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Boolean |
| SETEX | Set the value and expiration of a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.TIMEOUT/"CamelRedis.Timeout"` (Long), SECONDS | void |
| DECRBY | Decrement the integer value of a key by the given number | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Long) | Long |
| DECR | Decrement the integer value of a key by one | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| INCRBY | Increment the integer value of a key by the given amount | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Long) | Long |
| INCR | Increment the integer value of a key by one | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| MGET | Get the values of all the given keys | `RedisConstants.FIELDS/"CamelRedis.Fields"` (`Collection<String>`) | `List<Object>` |
| MSET | Set multiple keys to multiple values | `RedisConstants.VALUES/"CamelRedis.Values"` (`Map<String, Object>`) | void |
| MSETNX | Set multiple keys to multiple values only if none of the keys exist | `RedisConstants.VALUES/"CamelRedis.Values"` (`Map<String, Object>`) | void |
| GETSET | Set the string value of a key and return its old value | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Object |

| Key Command | Description | Parameters | Result |
|---|---|---|---|
| EXISTS | Determine if a key exists | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Boolean |
| DEL | Delete a key | `RedisConstants.KEYS/"CamelRedis.Keys"` (String) | void |
| TYPE | Determine the type stored at key | `RedisConstants.KEY/"CamelRedis.Key"` (String) | DataType |
| KEYS | Find all keys matching the given pattern | `RedisConstants.PATTERN/"CamelRedis.Pattern"` (String) | `Collection<String>` |
| RANDOMKEY | Return a random key from the keyspace | `RedisConstants.PATTERN/"CamelRedis.Pattern"` (String) | String |
| RENAME | Rename a key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (String) | void |
| RENAMENX | Rename a key, only if the new key does not exist | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (String) | Boolean |
| EXPIRE | Set a key's time to live in seconds | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.TIMEOUT/"CamelRedis.Timeout"` (Long) | Boolean |
| SORT | Sort the elements in a list, set or sorted set | `RedisConstants.KEY/"CamelRedis.Key"` (String) | `List<Object>` |
| PERSIST | Remove the expiration from a key | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Boolean |
| EXPIREAT | Set the expiration for a key as a UNIX timestamp | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.TIMESTAMP/"CamelRedis.Timestamp"` (Long) | Boolean |
| PEXPIRE | Set a key's time to live in milliseconds | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.TIMEOUT/"CamelRedis.Timeout"` (Long) | Boolean |
| PEXPIREAT | Set the expiration for a key as a UNIX timestamp specified in milliseconds | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.TIMESTAMP/"CamelRedis.Timestamp"` (Long) | Boolean |
| TTL | Get the time to live for a key | `RedisConstants.KEY/"CamelRedis.Key"` (String) | Long |
| MOVE | Move a key to another database | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.DB/"CamelRedis.Db"` (Integer) | Boolean |

| Geo Command | Description | Parameters | Result |
|---|---|---|---|
| GEOADD | Adds the specified geospatial items (latitude, longitude, name) to the specified key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.LATITUDE/"CamelRedis.Latitude"` (Double), `RedisConstants.LONGITUDE/"CamelRedis.Longitude"` (Double), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | Long |
| GEODIST | Return the distance between two members in the geospatial index for the specified key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUES/"CamelRedis.Values"` (Object[]) | Distance |
| GEOHASH | Return valid Geohash strings representing the position of an element in the geospatial index for the specified key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | `List<String>` |
| GEOPOS | Return the positions (longitude, latitude) of an element in the geospatial index for the specified key | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object) | `List<Point>` |
| GEORADIUS | Return the element in the geospatial index for the specified key which is within the borders of the area specified with the center location and the radius | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.LATITUDE/"CamelRedis.Latitude"` (Double), `RedisConstants.LONGITUDE/"CamelRedis.Longitude"` (Double), `RedisConstants.RADIUS/"CamelRedis.Radius"` (Double), `RedisConstants.COUNT/"CamelRedis.Count"` (Integer) | GeoResults |
| GEORADIUSBYMEMBER | This command is exactly like GEORADIUS with the sole difference that instead of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index | `RedisConstants.KEY/"CamelRedis.Key"` (String), `RedisConstants.VALUE/"CamelRedis.Value"` (Object), `RedisConstants.RADIUS/"CamelRedis.Radius"` (Double), `RedisConstants.COUNT/"CamelRedis.Count"` (Integer) | GeoResults |

| Other Command | Description | Parameters | Result |
|---|---|---|---|
| MULTI | Mark the start of a transaction block | none | void |
| DISCARD | Discard all commands issued after MULTI | none | void |
| EXEC | Execute all commands issued after MULTI | none | void |
| WATCH | Watch the given keys to determine execution of the MULTI/EXEC block | `RedisConstants.KEYS/"CamelRedis.Keys"` (String) | void |
| UNWATCH | Forget about all watched keys | none | void |
| ECHO | Echo the given string | `RedisConstants.VALUE/"CamelRedis.Value"` (String) | String |
| PING | Ping the server | none | String |
| QUIT | Close the connection | none | void |
| PUBLISH | Post a message to a channel | `RedisConstants.CHANNEL/"CamelRedis.Channel"` (String), `RedisConstants.MESSAGE/"CamelRedis.Message"` (Object) | void |

` element in Spring XML.
+
+The `<authorizationPolicy>` element may contain the following
+attributes:
+
+| Name | Default | Description |
+|---|---|---|
+| `id` | `null` | The unique Spring bean identifier which is used to reference the policy in routes (required) |
+| `authenticationManager` | `authenticationManager` | The name of the Spring Security `AuthenticationManager` object in the context |
+| `authorizationManager` | `authorizationManager` | The name of the Spring Security `AuthorizationManager` object in the context |
+| `authenticationAdapter` | `DefaultAuthenticationAdapter` | The name of a camel-spring-security `AuthenticationAdapter` object in the context that is used to convert a `javax.security.auth.Subject` into a Spring Security `Authentication` instance |
+| `useThreadSecurityContext` | `true` | If a `javax.security.auth.Subject` cannot be found in the In message header under `Exchange.AUTHENTICATION`, check the Spring Security `SecurityContextHolder` for an `Authentication` object |
+| `alwaysReauthenticate` | `false` | If set to `true`, the `SpringSecurityAuthorizationPolicy` will always call `AuthenticationManager.authenticate()` each time the policy is accessed |
+
+
+# Controlling access to Camel routes
+
+A Spring Security `AuthenticationManager` and `AuthorizationManager` are
+required to use this component. Here is an example of how to configure
+these objects in Spring XML using the Spring Security namespace:
+
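A minimal configuration might look like the following sketch, which assumes the Spring Security XML namespace and an in-memory user service (user names, passwords, and roles are illustrative, not taken from the original example):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:spring-security="http://www.springframework.org/schema/security"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/security
         http://www.springframework.org/schema/security/spring-security.xsd">

    <!-- authentication manager backed by a simple in-memory user service -->
    <spring-security:authentication-manager alias="authenticationManager">
        <spring-security:authentication-provider user-service-ref="userDetailsService"/>
    </spring-security:authentication-manager>

    <spring-security:user-service id="userDetailsService">
        <spring-security:user name="jim" password="jimspassword"
                              authorities="ROLE_USER, ROLE_ADMIN"/>
        <spring-security:user name="bob" password="bobspassword"
                              authorities="ROLE_USER"/>
    </spring-security:user-service>

</beans>
```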
+Now that the underlying security objects are set up, we can use them to
+configure an authorization policy and use that policy to control access
+to a route:
+
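A sketch of such a policy and route (the bean id and endpoint URIs are illustrative; the `access` attribute names the authority the policy requires):

```xml
<!-- requires the camel-spring-security namespace -->
<authorizationPolicy id="admin" access="ROLE_ADMIN"
    xmlns="http://camel.apache.org/schema/spring-security"/>

<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="direct:start"/>
        <!-- messages only reach mock:end when authorized by the admin policy -->
        <policy ref="admin">
            <to uri="mock:end"/>
        </policy>
    </route>
</camelContext>
```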
+In this example, the endpoint `mock:end` will not be executed unless a
+Spring Security `Authentication` object that has been or can be
+authenticated and contains the `ROLE_ADMIN` authority can be located by
+the *admin* `SpringSecurityAuthorizationPolicy`.
+
+# Authentication
+
+This component does not specify the process of obtaining security
+credentials that are used for authorization. You can write your own
+processors or components which get authentication information from the
+exchange depending on your needs. For example, you might create a
+processor that gets credentials from an HTTP request header originating
+in the [Jetty](#ROOT:jetty-component.adoc) component. No matter how the
+credentials are collected, they need to be placed in the In message or
+the `SecurityContextHolder` so the Camel [Spring
+Security](#spring-security.adoc) component can access them:
+
+ import javax.security.auth.Subject;
+ import org.apache.camel.*;
+ import org.apache.commons.codec.binary.Base64;
+ import org.springframework.security.authentication.*;
+
+
+ public class MyAuthService implements Processor {
+ public void process(Exchange exchange) throws Exception {
+ // get the username and password from the HTTP header
+ // https://en.wikipedia.org/wiki/Basic_access_authentication
+ String userpass = new String(Base64.decodeBase64(exchange.getIn().getHeader("Authorization", String.class)));
+ String[] tokens = userpass.split(":");
+
+ // create an Authentication object
+ UsernamePasswordAuthenticationToken authToken = new UsernamePasswordAuthenticationToken(tokens[0], tokens[1]);
+
+ // wrap it in a Subject
+ Subject subject = new Subject();
+ subject.getPrincipals().add(authToken);
+
+ // place the Subject in the In message
+ exchange.getIn().setHeader(Exchange.AUTHENTICATION, subject);
+
+ // you could also do this if useThreadSecurityContext is set to true
+ // SecurityContextHolder.getContext().setAuthentication(authToken);
+ }
+ }
+
+The `SpringSecurityAuthorizationPolicy` will automatically authenticate
+the `Authentication` object if necessary.
+
+There are two issues to be aware of when using the
+`SecurityContextHolder` instead of or in addition to the
+`Exchange.AUTHENTICATION` header. First, the context holder uses a
+thread-local variable to hold the `Authentication` object. Any routes
+that cross thread boundaries, like **seda** or **jms**, will lose the
+`Authentication` object. Second, the Spring Security system appears to
+expect that an `Authentication` object in the context is already
+authenticated and has roles (see the Technical Overview [section
+5\.3.1](http://static.springsource.org/spring-security/site/docs/3.0.x/reference/technical-overview.html#tech-intro-authentication)
+for more details).
+
+The default behavior of **camel-spring-security** is to look for a
+`Subject` in the `Exchange.AUTHENTICATION` header. This `Subject` must
+contain at least one principal, which must be a subclass of
+`org.springframework.security.core.Authentication`. You can customize
+the mapping of `Subject` to `Authentication` object by providing an
+implementation of the
+`org.apache.camel.component.spring.security.AuthenticationAdapter` to
+your `<authorizationPolicy>` bean. This can be useful if you are working
+with components that do not use Spring Security but do provide a
+`Subject`. At this time, only the [CXF](#ROOT:cxf-component.adoc)
+component populates the `Exchange.AUTHENTICATION` header.
+
+# Handling authentication and authorization errors
+
+If authentication or authorization fails in the
+`SpringSecurityAuthorizationPolicy`, a `CamelAuthorizationException`
+will be thrown. This can be handled using Camel’s standard exception
+handling methods, like the Exception Clause. The
+`CamelAuthorizationException` will have a reference to the ID of the
+policy which threw the exception, so you can handle errors based on the
+policy as well as the type of exception:
+
+
+    <onException>
+        <exception>org.springframework.security.authentication.AccessDeniedException</exception>
+        <handled>
+            <constant>true</constant>
+        </handled>
+        <choice>
+            <when>
+                <simple>${exception.policyId} == 'user'</simple>
+                <transform>
+                    <constant>You do not have ROLE_USER access!</constant>
+                </transform>
+            </when>
+            <when>
+                <simple>${exception.policyId} == 'admin'</simple>
+                <transform>
+                    <constant>You do not have ROLE_ADMIN access!</constant>
+                </transform>
+            </when>
+        </choice>
+    </onException>
+
diff --git a/camel-spring-summary.md b/camel-spring-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ea905bb90b40156c3f2e7b6761a1a9e5de74d36
--- /dev/null
+++ b/camel-spring-summary.md
@@ -0,0 +1,397 @@
+# Spring-summary.md
+
+# Spring components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Spring*,descriptionformat=description\]
+
+Apache Camel is designed to work nicely with the Spring Framework in a
+number of ways.
+
+- Camel supports Spring Boot using the `camel-spring-boot` component.
+
+- Allows Spring to dependency inject Component instances or the
+ CamelContext instance itself and auto-expose Spring beans as
+ components and endpoints.
+
+- Camel supports the XML DSL in Spring XML files via the
+  `camel-spring-xml` component
+
+- Camel provides powerful Bean Integration with any bean defined in a
+ Spring ApplicationContext
+
+- Camel uses Spring Transactions as the default transaction handling
+ in components like [JMS](#jms-component.adoc) and
+  [JPA](#jpa-component.adoc)
+
+- Camel integrates with various Spring helper classes; such as
+ providing Type Converter support for Spring Resources, etc.
+
+- Allows you to reuse the Spring Testing framework to simplify your
+ unit and integration testing using [Enterprise Integration
+ Patterns](#eips:enterprise-integration-patterns.adoc) and Camel’s
+ powerful [Mock](#mock-component.adoc) and
+ [Test](#others:test-junit5.adoc) endpoints
+
+# Using Spring to configure the CamelContext
+
+You can configure a CamelContext inside any spring.xml using the
+`CamelContextFactoryBean`. This will automatically start the
+CamelContext along with any referenced Routes and any referenced
+Component and Endpoint instances.
+
+- Adding Camel schema
+
+- Configure Routes in two ways:
+
+ - Using Java Code
+
+ - Using Spring XML
+
+# Adding Camel Schema
+
+You need to add Camel to the `schemaLocation` declaration
+
+ http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
+
+So the XML file looks like this:
+
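A sketch of the resulting file (only the Camel-related declarations are shown; the route contents are illustrative):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">

    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <!-- routes go here -->
    </camelContext>

</beans>
```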
+
+
+## Using camel: namespace
+
+Or you can refer to the Camel XSD in the XML declaration:
+
+ xmlns:camel="http://camel.apache.org/schema/spring"
+
+1. so the declaration is:
+
+
+
+
+
+2. and then use the camel: namespace prefix, and you can omit the
+ inline namespace declaration:
+
+
+
+
+    <camel:camelContext id="camel">
+        <camel:package>org.apache.camel.spring.example</camel:package>
+    </camel:camelContext>
+
+
+# Additional configuration of Spring XML
+
+See more details at [Camel Spring XML Auto
+Configuration](#manual::advanced-configuration-of-camelcontext-using-spring.adoc).
+
+## Using Java Code
+
+You can use Java Code to define your `RouteBuilder` implementations.
+These can be defined as beans in spring and then referenced in your
+camel context.
+
+## Using `<package>`
+
+Camel also provides a powerful feature that allows for the automatic
+discovery and initialization of routes in given packages. This is
+configured by adding tags to the camel context in your spring context
+definition, specifying the packages to be recursively searched for
+`RouteBuilder` implementations. To use this feature, add a
+`<package>` tag specifying a comma-separated list of packages
+that should be searched, e.g.:
+
+
+    <camelContext xmlns="http://camel.apache.org/schema/spring">
+        <package>org.apache.camel.spring.config.scan.route</package>
+    </camelContext>
+
+
+Use caution when specifying the package name as `org.apache.camel` or a
+subpackage of this. This causes Camel to search in its own packages for
+your routes, which could cause problems.
+
+**Will ignore already instantiated classes**
+
+The `<package>` and `<packageScan>` will skip all classes that Spring
+has already created. So if you define a route builder as a Spring
+bean tag, then that class will be skipped. You can include those beans
+using `<routeBuilder ref="..."/>` or the `<contextScan>` feature.
+
+## Using `<packageScan>`
+
+The component allows selective inclusion and exclusion of discovered
+route classes using Ant-like path matching. In Spring, this is specified
+by adding a `<packageScan>` tag. The tag must contain one or more
+*package* elements, and optionally one or more *includes* or *excludes*
+elements specifying patterns to be applied to the fully qualified names
+of the discovered classes. e.g.
+
+
+
+    <camelContext xmlns="http://camel.apache.org/schema/spring">
+        <packageScan>
+            <package>org.example.routes</package>
+            <excludes>**.*Excluded*</excludes>
+            <includes>**.*</includes>
+        </packageScan>
+    </camelContext>
+
+
+
+Exclude patterns are applied before the include patterns. If no include
+or exclude patterns are defined, then all the Route classes discovered
+in the packages will be returned.
+
+In the above example, Camel will scan the `org.example.routes`
+package and any subpackages for `RouteBuilder` classes. Say the scan
+finds two RouteBuilders, one in `org.example.routes` called `MyRoute`
+and another `MyExcludedRoute` in a subpackage *excluded*. The fully
+qualified names of each of the classes are extracted
+(`org.example.routes.MyRoute`,
+`org.example.routes.excluded.MyExcludedRoute`) and the include and
+exclude patterns are applied.
+
+The exclude pattern `**.*Excluded*` is going to match the fully
+qualified class name `org.example.routes.excluded.MyExcludedRoute` and
+veto Camel from initializing it.
+
+Under the covers, this is using *ant path* styles, which matches as
+follows
+
+ ? matches one character
+ * matches zero or more characters
+ ** matches zero or more segments of a fully qualified name
+
+For example:
+
+`**.*Excluded*` would match `org.simple.Excluded`,
+`org.apache.camel.SomeExcludedRoute` or
+`org.example.RouteWhichIsExcluded`
+
+`**.??cluded*` would match `org.simple.IncludedRoute`,
+`org.simple.Excluded` but not match `org.simple.PrecludedRoute`
+
+## Using contextScan
+
+You can allow Camel to scan the container context, e.g., the Spring
+`ApplicationContext` for route builder instances. This allows you to use
+the Spring `<context:component-scan>` feature and have Camel pick up any
+`RouteBuilder` instances which were created by Spring in its scan
+process.
+
+This allows you to just annotate your routes using the Spring
+`@Component` and have those routes included by Camel
+
+ @Component
+ public class MyRoute extends SpringRouteBuilder {
+
+ @Override
+ public void configure() throws Exception {
+ from("direct:start").to("mock:result");
+ }
+ }
+
+You can also use the ant style for inclusion and exclusion, as mentioned
+above in the `<packageScan>` documentation.
+
+# How do I import routes from other XML files?
+
+When defining routes in Camel using Spring XML, you may want to define
+some routes in other XML files. For example, you may have many routes,
+and it may help to maintain the application if some routes are in
+separate XML files. You may also want to store common and reusable
+routes in other XML files, which you can import when needed.
+
+It is possible to define routes outside `<camelContext/>`, which you do
+in a new `<routeContext/>` tag.
+
+When you use `<routeContext>`, then they are separated, and cannot reuse
+existing `<onException>`, `<intercept>`, `<dataFormats>` and similar
+cross-cutting functionality defined in the `<camelContext>`. In other
+words, the `<routeContext>` is currently isolated.
+
+For example, we could have a file named `myCoolRoutes.xml` which
+contains a couple of routes as shown:
+
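A sketch of what `myCoolRoutes.xml` might contain (route ids and endpoint URIs are illustrative):

```xml
<routeContext id="myCoolRoutes" xmlns="http://camel.apache.org/schema/spring">
    <route id="cool">
        <from uri="direct:start"/>
        <to uri="mock:result"/>
    </route>
    <route id="bar">
        <from uri="direct:bar"/>
        <to uri="mock:bar"/>
    </route>
</routeContext>
```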
+Then in your XML file which contains the CamelContext you can use Spring
+to import the `myCoolRoutes.xml` file. And then inside `<camelContext/>`
+you can refer to the `<routeContext/>` by its id, as shown below:
+
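A sketch of the importing file (the file name comes from the text above; ids are illustrative):

```xml
<beans xmlns="http://www.springframework.org/schema/beans">

    <!-- import the file that holds the routeContext -->
    <import resource="myCoolRoutes.xml"/>

    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <!-- refer to the routeContext by its id -->
        <routeContextRef ref="myCoolRoutes"/>
    </camelContext>

</beans>
```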
+Also notice that you can mix and match, having routes inside
+`CamelContext` and also externalized in `RouteContext`.
+
+You can have as many `<routeContext/>` tags as you like.
+
+**Reusable routes**
+
+The routes defined in `<routeContext/>` can be reused by multiple
+`<camelContext/>` instances. However, it is only the definition that is reused. At
+runtime, each CamelContext will create its own instance of the route
+based on the definition.
+
+## Test time exclusion
+
+At test time, it is often desirable to be able to selectively exclude
+matching routes from being initialized that are not applicable or useful
+to the test scenario. For instance, you might have a spring context file
+`routes-context.xml` and three Route builders `RouteA`, `RouteB` and
+`RouteC` in the *org.example.routes* package. The packageScan definition
+would discover all three of these routes and initialize them.
+
+Say `RouteC` is not applicable to our test scenario and generates a lot
+of noise during the test. It would be nice to be able to exclude this
+route from this specific test. The `SpringTestSupport` class has been
+modified to allow this. It provides two methods (`excludeRoute` and
+`excludeRoutes`) that may be overridden to exclude a single class or an
+array of classes.
+
+ public class RouteAandRouteBOnlyTest extends SpringTestSupport {
+ @Override
+    protected Class<?> excludeRoute() {
+ return RouteC.class;
+ }
+ }
+
+To hook into the camelContext initialization by Spring to exclude the
+class `MyExcludedRouteBuilder`, we need to intercept the Spring context
+creation. When overriding `createApplicationContext` to create the Spring
+context, we call the `getRouteExcludingApplicationContext()` method to
+provide a special parent spring context that takes care of the
+exclusion.
+
+ @Override
+ protected AbstractXmlApplicationContext createApplicationContext() {
+ return new ClassPathXmlApplicationContext(new String[] {"routes-context.xml"}, getRouteExcludingApplicationContext());
+ }
+
+`RouteC` will now be excluded from initialization. Similarly, in another
+test that is testing only `RouteC`, we could exclude RouteB and RouteA
+by overriding the method `excludeRoutes`.
+
+ @Override
+    protected Class<?>[] excludeRoutes() {
+ return new Class[]{RouteA.class, RouteB.class};
+ }
+
+# Using Spring XML
+
+You can use Spring XML configuration to specify your XML Configuration
+for Routes such as in the following
+
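A sketch of a Spring XML file with an inline route (the endpoint URIs are illustrative):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="direct:start"/>
        <to uri="mock:result"/>
    </route>
</camelContext>
```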
+# Configuring Components and Endpoints
+
+You can configure your Component or Endpoint instances in your Spring
+XML as follows:
+
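A sketch of configuring a JMS component under the bean id `activemq` (broker URL and queue name are illustrative):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="activemq:queue:inbox"/>
        <to uri="mock:result"/>
    </route>
</camelContext>

<!-- the bean id becomes the endpoint URI scheme used in the route above -->
<bean id="activemq" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory">
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="tcp://localhost:61616"/>
        </bean>
    </property>
</bean>
```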
+
+This allows you to configure a component under some name (`activemq` in
+the above example); you can then refer to the component using
+**activemq:\[queue:\|topic:\]destinationName**. This works by the
+SpringCamelContext lazily fetching components from the Spring context
+for the scheme name you use in Endpoint URIs.
+
+For more details, see [Configuring Endpoints and
+Components](#manual:faq:how-do-i-configure-endpoints.adoc).
+
+# CamelContextAware
+
+If you want the `CamelContext` to be injected in your POJO just
+implement the `CamelContextAware` interface; then when Spring creates
+your POJO, the `CamelContext` will be injected into your POJO. Also see
+the [Bean Integration](#manual::bean-integration.adoc) for further
+injections.
+
+# Integration Testing
+
+To avoid a hung route when testing using Spring Transactions, see the
+note about Spring Integration Testing under Transactional Client.
+
+# Cron Component Support
+
+The `camel-spring` module can be used as an implementation of the Camel
+Cron component.
+
+Maven users will need to add the following additional dependency to
+their `pom.xml`:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-cron</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+
+Users can then use the cron component inside routes of their Spring or
+Spring Boot application:
+
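A sketch of a route using the cron component (the endpoint name and schedule are illustrative; this schedule fires every second):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="cron:tab?schedule=0/1+*+*+*+*+?"/>
        <to uri="log:info"/>
    </route>
</camelContext>
```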
+
diff --git a/camel-spring-ws.md b/camel-spring-ws.md
index d258178395933728baa64b762b81142ec8219edc..80012bf987b8498333696511275e44b05cfae0e5 100644
--- a/camel-spring-ws.md
+++ b/camel-spring-ws.md
@@ -5,8 +5,8 @@
**Both producer and consumer are supported**
The Spring WS component allows you to integrate with [Spring Web
-Services](http://static.springsource.org/spring-ws/sites/1.5/). It
-offers both *client*-side support, for accessing web services, and
+Services](https://docs.spring.io/spring-ws/docs/4.0.x/reference/html/).
+It offers both *client*-side support, for accessing web services, and
*server*-side support for creating your own contract-first web services.
Maven users will need to add the following dependency to their `pom.xml`
@@ -37,41 +37,46 @@ following:

| Mapping type | Description |
|---|---|
| rootqname | Offers the option to map web service requests based on the qualified name of the root element contained in the message. |
| soapaction | Used to map web service requests based on the SOAP action specified in the header of the message. |
| uri | To map web service requests that target a specific URI. |
| uripath | To map web service requests that target a specific path in the URI. |
| xpathresult | Used to map web service requests based on the evaluation of an XPath expression against the incoming message. The result of the evaluation should match the XPath result specified in the endpoint URI. |
| beanname | Allows you to reference an `org.apache.camel.component.spring.ws.bean.CamelEndpointDispatcher` object to integrate with existing (legacy) [endpoint mappings](https://docs.spring.io/spring-ws/docs/4.0.x/reference/html/#server-endpoint-mapping) like `PayloadRootQNameEndpointMapping`, `SoapActionEndpointMapping`, etc. |

@@ -83,7 +88,9 @@ specified mapping-type (e.g. a SOAP action, XPath expression). As a
producer, the address should be set to the URI of the web service you
are calling upon.
-# Accessing web services
+# Usage
+
+## Accessing web services
To call a web service at `http://foo.com/bar`, simply define a route:
@@ -96,7 +103,7 @@ And sent a message:
Remember if it’s a SOAP service you’re calling, you don’t have to
include SOAP tags. Spring-WS will perform the XML-to-SOAP marshaling.
-# Sending SOAP and WS-Addressing action headers
+## Sending SOAP and WS-Addressing action headers
When a remote web service requires a SOAP action or use of the
WS-Addressing standard, you define your route as:
@@ -110,7 +117,7 @@ Optionally, you can override the endpoint options with header values:
"test message ",
SpringWebserviceConstants.SPRING_WS_SOAP_ACTION, "http://baz.com");
-# Using SOAP headers
+## Using SOAP headers
You can provide the SOAP header(s) as a Camel Message header when
sending a message to a spring-ws endpoint, for example, given the
@@ -131,15 +138,15 @@ Likewise, the spring-ws consumer will also enrich the Camel Message with
the SOAP header.
For example, see this [unit
-test](https://svn.apache.org/repos/asf/camel/trunk/components/camel-spring-ws/src/test/java/org/apache/camel/component/spring/ws/SoapHeaderTest.java).
+test](https://github.com/apache/camel/blob/main/components/camel-spring-ws/src/test/java/org/apache/camel/component/spring/ws/SoapHeaderTest.java).
-# The header and attachment propagation
+## The header and attachment propagation
Spring WS Camel supports propagation of the headers and attachments into
-Spring-WS WebServiceMessage response. The endpoint will use so-called
-"hook" the MessageFilter (default implementation is provided by
-BasicMessageFilter) to propagate the exchange headers and attachments
-into WebServiceMessage response. Now you can use
+Spring-WS `WebServiceMessage` response. The endpoint uses a so-called
+"hook", the `MessageFilter` (default implementation provided by
+`BasicMessageFilter`), to propagate the exchange headers and attachments
+into the `WebServiceMessage` response. Now you can use
exchange.getOut().getHeaders().put("myCustom","myHeaderValue")
exchange.getIn().addAttachment("myAttachment", new DataHandler(...))
@@ -148,12 +155,11 @@ If the exchange header in the pipeline contains text, it generates
a `QName(key)=value` attribute in the SOAP header. It is recommended to
create a `QName` directly and use it as the header key.
-# How to transform the soap header using a stylesheet
+## How to transform the soap header using a stylesheet
-The header transformation filter
-(HeaderTransformationMessageFilter.java) can be used to transform the
-soap header for a soap request. If you want to use the header
-transformation filter, see the below example:
+The header transformation filter (`HeaderTransformationMessageFilter`)
+can be used to transform the soap header for a soap request. If you want
+to use the header transformation filter, see the below example:
-
-
-
-
-
-
-
-# Exposing web services
+## Exposing web services
To expose a web service using this component, you first need to set up a
-[MessageDispatcher](http://static.springsource.org/spring-ws/sites/1.5/reference/html/server.html)
+[MessageDispatcher](https://docs.spring.io/spring-ws/docs/4.0.x/reference/html/#_the_messagedispatcher)
to look for endpoint mappings in a Spring XML file. If you plan on
running inside a servlet container, you probably want to use a
`MessageDispatcherServlet` configured in `web.xml`.
@@ -274,14 +273,14 @@ your routes.
More information on setting up Spring-WS can be found in [Writing
Contract-First Web
-Services](http://static.springsource.org/spring-ws/sites/1.5/reference/html/tutorial.html).
+Services](https://docs.spring.io/spring-ws/docs/4.0.x/reference/html/#tutorial).
Basically paragraph 3.6 "Implementing the Endpoint" is handled by this
component (specifically paragraph 3.6.2 "Routing the Message to the
Endpoint" is where `CamelEndpointMapping` comes in). Also remember to
check out the Spring Web Services Example included in the Camel
distribution.
-# Endpoint mapping in routes
+## Endpoint mapping in routes
With the XML configuration in place, you can now use Camel’s DSL to
define what web service requests are handled by your endpoint:
@@ -311,13 +310,13 @@ namespace).
from("spring-ws:xpathresult:abc?expression=//foobar&endpointMapping=#endpointMapping")
.convertBodyTo(String.class).to(mock:example)
-# Alternative configuration, using existing endpoint mappings
+## Alternative configuration, using existing endpoint mappings
For every endpoint with mapping-type `beanname` one bean of type
`CamelEndpointDispatcher` with a corresponding name is required in the
-Registry/ApplicationContext. This bean acts as a bridge between the
+`Registry`/`ApplicationContext`. This bean acts as a bridge between the
Camel endpoint and an existing [endpoint
-mapping](http://static.springsource.org/spring-ws/sites/1.5/reference/html/server.html#server-endpoint-mapping)
+mapping](https://docs.spring.io/spring-ws/docs/4.0.x/reference/html/#server-endpoint-mapping)
like `PayloadRootQNameEndpointMapping`.
The use of the `beanname` mapping-type is primarily meant for (legacy)
@@ -351,7 +350,7 @@ An example of a route using `beanname`:
-# POJO (un)marshalling
+## POJO (un)marshalling
Camel’s pluggable data formats offer support for pojo/xml marshalling
using libraries such as JAXB. You can use these data formats in your
diff --git a/camel-spring-xml.md b/camel-spring-xml.md
new file mode 100644
index 0000000000000000000000000000000000000000..b0301dd36a24cd7b77562d6c00c27353d2c593b3
--- /dev/null
+++ b/camel-spring-xml.md
@@ -0,0 +1,16 @@
+# Spring-xml.md
+
+**Since Camel 3.9**
+
+The Spring XML component provides the XML DSL when using Spring XML
+(e.g., ``).
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+ org.apache.camel
+ camel-spring-xml
+ x.x.x
+
+
diff --git a/camel-springdoc.md b/camel-springdoc.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a36fd0d29005e16af4b21ac4307fcb596b6a27b
--- /dev/null
+++ b/camel-springdoc.md
@@ -0,0 +1,6 @@
+# Springdoc.md
+
+**Since Camel 3.14**
+
+This is available only as a Camel spring boot starter. It supplies
+support for using the Springdoc Swagger UI with openapi-java.
diff --git a/camel-sql-stored.md b/camel-sql-stored.md
index 19ab892d2f9ab14404115156f49f50af9f15059a..99311c44f99fddbbc51128523d006c19eaaf9ff5 100644
--- a/camel-sql-stored.md
+++ b/camel-sql-stored.md
@@ -36,8 +36,8 @@ system or classpath such as:
sql-stored:classpath:sql/myprocedure.sql[?options]
-Where sql/myprocedure.sql is a plain text file in the classpath with the
-template, as show:
+Where `sql/myprocedure.sql` is a plain text file in the classpath with
+the template, as shown:
SUBNUMBERS(
INTEGER ${headers.num1},
@@ -46,7 +46,9 @@ template, as show:
OUT INTEGER out2
)
-# Declaring the stored procedure template
+# Usage
+
+## Declaring the stored procedure template
The template is declared using a syntax similar to a Java method
signature: the name of the stored procedure, followed by the arguments
enclosed in parentheses. An example explains this well:
The arguments are declared by a type and then a mapping to the Camel
message using a simple expression. So, in this example, the first two
-parameters are IN values of INTEGER type, mapped to the message headers.
-The third parameter is INOUT, meaning it accepts an INTEGER and then
-returns a different INTEGER result. The last parameter is the OUT value,
-also an INTEGER type.
+parameters are `IN` values of `INTEGER` type, mapped to the message
+headers. The third parameter is `INOUT`, meaning it accepts an `INTEGER`
+and then returns a different `INTEGER` result. The last parameter is the
+`OUT` value, also an `INTEGER` type.
In SQL terms, the stored procedure could be declared as:
CREATE PROCEDURE STOREDSAMPLE(VALUE1 INTEGER, VALUE2 INTEGER, INOUT RESULT1 INTEGER, OUT RESULT2 INTEGER)
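As a rough illustration of the template syntax above, the declaration can be split into a procedure name and its parameter declarations. This is only a sketch with hypothetical helper names (`TemplateSketch`, `name`, `params`), far simpler than the component's real parser:

```java
import java.util.Arrays;
import java.util.List;

public class TemplateSketch {
    // Everything before the first '(' is the stored procedure name.
    static String name(String template) {
        return template.substring(0, template.indexOf('(')).trim();
    }

    // The comma-separated declarations between the outer parentheses.
    static List<String> params(String template) {
        String inner = template.substring(template.indexOf('(') + 1, template.lastIndexOf(')'));
        return Arrays.stream(inner.split(",")).map(String::trim).toList();
    }

    public static void main(String[] args) {
        String t = "SUBNUMBERS(INTEGER ${headers.num1}, INTEGER ${headers.num2}, OUT INTEGER out2)";
        System.out.println(name(t));          // SUBNUMBERS
        System.out.println(params(t).size()); // 3
    }
}
```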
-## IN Parameters
+### IN Parameters
IN parameters take four parts separated by a space: parameter name, SQL
type (with scale), type name, and value source.
@@ -76,10 +78,10 @@ It must be given between quotes(').
SQL type is required and can be an integer (positive or negative) or
reference to integer field in some class. If SQL type contains a dot,
then the component tries to resolve that class and read the given field.
-For example, SQL type `com.Foo.INTEGER` is read from the field INTEGER
+For example, SQL type `com.Foo.INTEGER` is read from the field `INTEGER`
of class `com.Foo`. If the type doesn’t contain a dot, then the class to
resolve the integer value will be `java.sql.Types`. Type can be
-postfixed by scale for example DECIMAL(10) would mean
+postfixed by a scale, for example, `DECIMAL(10)` would mean
`java.sql.Types.DECIMAL` with scale 10.
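The field lookup described above can be sketched in plain Java. This is an illustration only, not the component's actual code, and the class and method names are hypothetical:

```java
import java.sql.Types;

public class SqlTypeResolver {
    // Resolve a token such as "java.sql.Types.DECIMAL" to its int constant.
    // When the token has no dot, fall back to java.sql.Types, as described above.
    static int resolve(String token) throws Exception {
        int lastDot = token.lastIndexOf('.');
        Class<?> cls = lastDot < 0 ? Types.class : Class.forName(token.substring(0, lastDot));
        String field = lastDot < 0 ? token : token.substring(lastDot + 1);
        return cls.getField(field).getInt(null);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve("DECIMAL"));                // 3, from java.sql.Types
        System.out.println(resolve("java.sql.Types.INTEGER")); // 4
    }
}
```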
Type name is optional and must be given between quotes(').
@@ -94,9 +96,9 @@ effect.
URI means that the stored procedure will be called with parameter name
-*param1*, it’s SQL type is read from field INTEGER of class
-`org.example.Types` and scale will be set to 10. Input value for the
-parameter is passed from the header *srcValue*.
+`param1`, its SQL type is read from field `INTEGER` of class
+`org.example.Types`, and the scale will be set to 10. The input value for the
+parameter is passed from the header `srcValue`.
@@ -106,7 +108,7 @@ URI is identical to previous on except SQL-type is 100 and type name is
The actual call will be done using
`org.springframework.jdbc.core.SqlParameter`.
-## OUT Parameters
+### OUT Parameters
OUT parameters work similarly to IN parameters and contain three parts:
SQL type (with scale), type name, and output parameter name.
@@ -130,7 +132,7 @@ This is identical to previous one but type name will be `mytype`.
The actual call will be done using
`org.springframework.jdbc.core.SqlOutParameter`.
-## INOUT Parameters
+### INOUT Parameters
INOUT parameters are a combination of all of the above. They receive a
value from the exchange, as well as store a result as a message header.
@@ -143,7 +145,7 @@ result header name.
The actual call will be done using
`org.springframework.jdbc.core.SqlInOutParameter`.
-## Query Timeout
+### Query Timeout
You can configure the query timeout (via `template.queryTimeout`) on
statements used for query processing as shown:
@@ -154,7 +156,7 @@ This will be overridden by the remaining transaction timeout when
executing within a transaction that has a timeout specified at the
transaction level.
-# Camel SQL Starter
+## Camel SQL Starter
A starter module is available to spring-boot users. When using the
starter, the `DataSource` can be directly configured using spring-boot
diff --git a/camel-sql.md b/camel-sql.md
index e31b22e98e4d2d10e046112edfb178d34a6232c7..0004beca0a449701a8e3af93e54752d78a8ff84d 100644
--- a/camel-sql.md
+++ b/camel-sql.md
@@ -95,7 +95,9 @@ And the `myquery.sql` file is in the classpath and is just a plain text
In the file, you can use multiple lines and format the SQL as you wish.
You can also use comments, such as the `--` dash line.
-# Treatment of the message body
+# Usage
+
+## Treatment of the message body
The SQL component tries to convert the message body to an object of
`java.util.Iterator` type and then uses this iterator to fill the query
@@ -124,7 +126,7 @@ from the message body. Use templating (such as
processing, e.g., to include or exclude `where` clauses depending on the
presence of query parameters.
-# Result of the query
+## Result of the query
For `select` operations, the result is an instance of
`List<Map<String, Object>>` type, as returned by the
@@ -143,7 +145,7 @@ and outputType together:
.to("sql:select order_seq.nextval from dual?outputHeader=OrderId&outputType=SelectOne")
.to("jms:order.booking");
-# Using StreamList
+## Using StreamList
The producer supports `outputType=StreamList` that uses an iterator to
stream the output of the query. This allows processing the data in a
@@ -159,7 +161,7 @@ needed.
.to("mock:result")
.end();
-# Generated keys
+## Generated keys
If you insert data using SQL `INSERT`, then the RDBMS may support
auto-generated keys. You can instruct the SQL producer to return the
@@ -176,13 +178,13 @@ if the driver cannot correctly determine the number of parameters.
You can see more details in this [unit
test](https://gitbox.apache.org/repos/asf?p=camel.git;a=blob_plain;f=components/camel-sql/src/test/java/org/apache/camel/component/sql/SqlGeneratedKeysTest.java;hb=HEAD).
-# DataSource
+## DataSource
You can set a reference to a `DataSource` in the URI directly:
sql:select * from table where id=# order by name?dataSource=#myDS
-# Using named parameters
+## Using named parameters
In the given route below, we want to get all the projects from the
`projects` table. Notice the SQL query has two named parameters, `:#lic`
@@ -203,7 +205,7 @@ parameters will be taken from the body.
from("direct:projects")
.to("sql:select * from projects where license = :#lic and id > :#min order by id")
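The header lookup for named parameters can be sketched in plain Java. Note this is only an illustration with hypothetical names: the real producer binds the values as JDBC prepared-statement parameters rather than substituting them into the query text:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedParamSketch {
    // Replace each :#name placeholder with the value looked up under that
    // name, mimicking the header lookup (not the JDBC binding) described above.
    static String bind(String query, Map<String, Object> headers) {
        Matcher m = Pattern.compile(":#(\\w+)").matcher(query);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            Object value = headers.get(m.group(1));
            if (value == null) {
                throw new IllegalArgumentException("Cannot find key: " + m.group(1));
            }
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(value)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(bind(
            "select * from projects where license = :#lic and id > :#min order by id",
            Map.of("lic", "'ASF'", "min", 123)));
        // select * from projects where license = 'ASF' and id > 123 order by id
    }
}
```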
-# Using expression parameters in producers
+## Using expression parameters in producers
In the given route below, we want to get all the projects from the
database. It uses the body of the exchange for defining the license and
@@ -214,7 +216,7 @@ uses the value of a property as the second parameter.
.setProperty("min", constant(123))
.to("sql:select * from projects where license = :#${body} and id > :#${exchangeProperty.min} order by id")
-## Using expression parameters in consumers
+### Using expression parameters in consumers
When using the SQL component as a consumer, you can also use
expression parameters (simple language) to build dynamic query
@@ -243,7 +245,7 @@ Notice that there is no existing `Exchange` with message body and
headers, so the simple expression you can use in the consumer is most
usable for calling bean methods as in this example.
-# Using IN queries with dynamic values
+## Using IN queries with dynamic values
The SQL producer allows using SQL queries with `IN` statements where the
`IN` values are dynamically computed. For example, from the message body
@@ -304,7 +306,7 @@ If the dynamic list of values is stored in the message body, you can use
where project in (:#in:${body})
order by id
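The expansion of the `:#in:` placeholder can be sketched as follows. This is an illustration only, with hypothetical names; the actual component binds each value as a separate JDBC parameter instead of building the SQL text:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class InClauseSketch {
    // Turn a comma-separated body such as "camel,ahc" into a quoted IN list.
    static String expandIn(String template, String body) {
        String values = Arrays.stream(body.split(","))
                .map(v -> "'" + v.trim() + "'")
                .collect(Collectors.joining(","));
        return template.replace(":#in:${body}", values);
    }

    public static void main(String[] args) {
        System.out.println(expandIn(
            "select * from projects where project in (:#in:${body}) order by id",
            "camel,ahc"));
        // select * from projects where project in ('camel','ahc') order by id
    }
}
```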
-# Using the JDBC-based idempotent repository
+## Using the JDBC-based idempotent repository
In this section, we will use the JDBC-based idempotent repository.
@@ -337,7 +339,7 @@ prefer to use a different constraint, or your SQL server uses a
different syntax for table creation, you can create the table yourself
using the above schema as a starting point.
-## Customize the JDBC idempotency repository
+### Customize the JDBC idempotency repository
You have a few options to tune the
`org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository` for
@@ -350,41 +352,42 @@ your needs:
-
+
Parameter
Default Value
Description
-
-createTableIfNotExists
+
+createTableIfNotExists
true
Defines whether Camel should try to
create the table if it doesn’t exist.
-
-tableName
+
+tableName
CAMEL_MESSAGEPROCESSED
To use a custom table name instead of
the default name: CAMEL_MESSAGEPROCESSED.
-
-tableExistsString
+
+tableExistsString
SELECT 1 FROM CAMEL_MESSAGEPROCESSED WHERE 1 = 0
This query is used to figure out
whether the table already exists or not. It must throw an exception to
indicate the table doesn’t exist.
-
-createString
+
+createString
CREATE TABLE CAMEL_MESSAGEPROCESSED (processorName VARCHAR(255),messageId VARCHAR(100), createdAt TIMESTAMP)
The statement which is used to create
the table.
-
-queryString
+
+queryString
SELECT COUNT(*) FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ?
The query which is used to figure out
@@ -393,8 +396,8 @@ equals to 0 ). It takes two parameters. This first one is the
processor name (String) and the second one is the message
id (String).
-
-insertString
+
+insertString
INSERT INTO CAMEL_MESSAGEPROCESSED (processorName, messageId, createdAt) VALUES (?, ?, ?)
The statement which is used to add the
@@ -404,8 +407,8 @@ processor name (String), the second one is the message id
(String), and the third one is the timestamp (java.sql.Timestamp) when this entry was added to the
repository.
-
-deleteString
+
+deleteString
DELETE FROM CAMEL_MESSAGEPROCESSED WHERE processorName = ? AND messageId = ?
The statement which is used to delete
@@ -420,7 +423,7 @@ The option `tableName` can be used to use the default SQL queries but
with a different table name. However, if you want to customize the SQL
queries, then you can configure each of them individually.
-## Orphan Lock aware Jdbc IdempotentRepository
+### Orphan Lock aware Jdbc IdempotentRepository
One of the limitations of
`org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository` is
@@ -459,25 +462,27 @@ This repository has two additional configuration parameters
-
+
Parameter
Description
-
-lockMaxAgeMillis
+
+lockMaxAgeMillis
This refers to the duration after which
-the lock is considered orphaned, i.e., if the currentTimestamp -
-createdAt >= lockMaxAgeMillis then lock is orphaned.
+the lock is considered orphaned, i.e., if the
+currentTimestamp - createdAt >= lockMaxAgeMillis then
+the lock is orphaned.
-
-lockKeepAliveIntervalMillis
+
+lockKeepAliveIntervalMillis
The frequency at which keep-alive
updates are done to the createdAt timestamp column.
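The orphan rule above is a plain timestamp comparison, which can be sketched directly (hypothetical class and method names):

```java
public class OrphanLockCheck {
    // Direct transcription of the rule above: a lock is orphaned once
    // currentTimestamp - createdAt >= lockMaxAgeMillis.
    static boolean isOrphaned(long createdAtMillis, long nowMillis, long lockMaxAgeMillis) {
        return nowMillis - createdAtMillis >= lockMaxAgeMillis;
    }

    public static void main(String[] args) {
        System.out.println(isOrphaned(0, 30_000, 60_000)); // false: lock still alive
        System.out.println(isOrphaned(0, 60_000, 60_000)); // true: lock is orphaned
    }
}
```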
-## Caching Jdbc IdempotentRepository
+### Caching Jdbc IdempotentRepository
Some SQL implementations are not fast on a per-query basis. The
`JdbcMessageIdRepository` implementation does its idempotent checks
@@ -489,7 +494,7 @@ before passing through to the original implementation.
As with all cache implementations, there are considerations that should
be made with regard to stale data and your specific usage.
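The caching idea can be sketched as a wrapper that answers repeated `contains` checks from an in-memory set before hitting the underlying repository. This is a simplification with hypothetical names, not the actual Camel implementation (which is configurable and eviction-aware):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

public class CachingRepoSketch {
    interface Repo { boolean contains(String key); }

    // Positive results are remembered so the slow delegate is asked only once per key.
    static Repo cached(Repo delegate, Set<String> cache) {
        return key -> {
            if (cache.contains(key)) return true;  // answered from cache, no SQL
            boolean hit = delegate.contains(key);
            if (hit) cache.add(key);
            return hit;
        };
    }

    public static void main(String[] args) {
        AtomicInteger queries = new AtomicInteger();
        Repo slow = key -> { queries.incrementAndGet(); return key.equals("seen"); };
        Repo fast = cached(slow, new HashSet<>());
        fast.contains("seen");
        fast.contains("seen");                     // second call served from cache
        System.out.println(queries.get());         // delegate queried once
    }
}
```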
-# Using the JDBC based aggregation repository
+## Using the JDBC based aggregation repository
`JdbcAggregationRepository` is an `AggregationRepository` which on the
fly persists the aggregated messages. This ensures that you will not
@@ -510,7 +515,7 @@ when the `maximumRedeliveries` was hit.
You can see some examples in the unit tests of camel-sql, for example
`JdbcAggregateRecoverDeadLetterChannelTest.java`
-## Database
+### Database
To be operational, each aggregator uses two tables: the aggregation and
completed one. By convention, the completed table has the same name as the
@@ -543,7 +548,7 @@ with your aggregator repository name.
constraint aggregation_completed_pk PRIMARY KEY (id)
);
-# Storing body and headers as text
+## Storing body and headers as text
You can configure the `JdbcAggregationRepository` to store message body
and select(ed) headers as String in separate columns. For example, to
@@ -587,7 +592,7 @@ below:
-## Codec (Serialization)
+### Codec (Serialization)
Since they can contain any type of payload, Exchanges are not
serializable by design. They are converted into byte arrays to be stored
@@ -609,18 +614,18 @@ classes will be blacklisted. So you’ll need to change the filter in case
of need. This can be accomplished by changing the
`deserializationFilter` field in the repository.
-## Transaction
+### Transaction
A Spring `PlatformTransactionManager` is required to orchestrate
transactions.
-### Service (Start/Stop)
+#### Service (Start/Stop)
The `start` method verifies the connection to the database and the
presence of the required tables. If anything is wrong, it will fail
during startup.
-## Aggregator configuration
+### Aggregator configuration
Depending on the targeted environment, the aggregator might need some
configuration. As you already know, each aggregator should have its own
@@ -644,7 +649,7 @@ Here is the declaration for Oracle:
-## Optimistic locking
+### Optimistic locking
You can turn on `optimisticLocking` and use this JDBC-based aggregation
repository in a clustered environment where multiple Camel applications
@@ -697,7 +702,7 @@ JDBC vendor.
-## Propagation behavior
+### Propagation behavior
`JdbcAggregationRepository` uses two distinct *transaction templates*
from Spring-TX. One is read-only and one is used for read-write
@@ -721,12 +726,12 @@ Propagation is specified by constants of
`propagationBehaviorName` is a convenient setter that allows using the
names of the constants.
-## Clustering
+### Clustering
-JdbcAggregationRepository does not provide recovery in a clustered
+`JdbcAggregationRepository` does not provide recovery in a clustered
environment.
-You may use ClusteredJdbcAggregationRepository that provides a limited
+You may use `ClusteredJdbcAggregationRepository`, which provides limited
support for recovery in a clustered environment: recovery is handled
separately by each member of the cluster, i.e., a member may only
recover exchanges that it completed itself.
@@ -743,7 +748,7 @@ will not be recovered until it is restarted, unless you update completed
table to assign them to another member (by changing `instance_id` for
those completed exchanges).
-## PostgreSQL case
+### PostgreSQL case
There’s a special database that may cause problems with optimistic
locking used by `JdbcAggregationRepository`: PostgreSQL marks connection
@@ -770,7 +775,7 @@ Further handling is exactly the same as with generic
`JdbcAggregationRepository`, but without marking PostgreSQL connection
as invalid.
-# Camel Sql Starter
+## Camel SQL Starter
A starter module is available to spring-boot users. When using the
starter, the `DataSource` can be directly configured using spring-boot
diff --git a/camel-ssh.md b/camel-ssh.md
index c8adcb6c34ee53adcd6e1f0c566b64281495eb24..7ee95103d2accb60ad6baa11515522c2695c767e 100644
--- a/camel-ssh.md
+++ b/camel-ssh.md
@@ -21,7 +21,9 @@ for this component:
ssh:[username[:password]@]host[:port][?options]
-# Usage as a Producer endpoint
+# Usage
+
+## Usage as a Producer endpoint
When the SSH Component is used as a Producer (`.to("ssh://...")`), it
will send the message body as the command to execute on the remote SSH
@@ -35,11 +37,11 @@ an XML encoded newline (`+
+`).
features:list
-
+
-# Authentication
+## Authentication
The SSH Component can authenticate against the remote SSH server using
one of two mechanisms: Public Key certificate or username/password.
@@ -80,7 +82,7 @@ In the Java DSL,
An example of using Public Key authentication is provided in
`examples/camel-example-ssh-security`.
-# Certificate Dependencies
+## Certificate Dependencies
You will need to add some additional runtime dependencies if you use
certificate-based authentication. You may need to use later versions
diff --git a/camel-stax.md b/camel-stax.md
index dc9ea59dd0b57216650140db833ac75b0db3cdd9..80c1a360300591bb3f1865a1105d0feb456f4afe 100644
--- a/camel-stax.md
+++ b/camel-stax.md
@@ -33,7 +33,9 @@ using the # syntax as shown:
stax:#myHandler
-# Usage of a content handler as StAX parser
+# Usage
+
+## Usage of a content handler as StAX parser
The message body after the handling is the handler itself.
@@ -50,7 +52,7 @@ Here is an example:
}
});
-# Iterate over a collection using JAXB and StAX
+## Iterate over a collection using JAXB and StAX
First, we suppose you have JAXB objects.
@@ -139,9 +141,8 @@ parameter to false, as shown below:
.split(stax(Record.class, false)).streaming()
.to("mock:records");
-## The previous example with XML DSL
-
-The example above could be implemented as follows in Spring XML
+Alternatively, the example above could be implemented as follows in
+Spring XML:
diff --git a/camel-step-eip.md b/camel-step-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..e26ca818f763aca135c0a3f1f10198c4ba4870eb
--- /dev/null
+++ b/camel-step-eip.md
@@ -0,0 +1,116 @@
+# Step-eip.md
+
+Camel supports the [Pipes and
+Filters](http://www.enterpriseintegrationpatterns.com/PipesAndFilters.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) in
+various ways.
+
+
+
+
+
+With Camel, you can group your processing across multiple independent
+EIPs which can then be chained together in a logical unit, called a
+*step*.
+
+A step groups together the child processors into a single composite
+unit. This allows capturing metrics at a group level which can make
+management and monitoring of Camel routes easier by using higher-level
+abstractions. You can also think of this as a middle level between the
+route and the individual processors in the route.
+
+You may want to do this when you have large routes and want to break up
+the routes into logical steps.
+
+This means you can monitor your Camel applications and gather statistics
+at 4-tiers:
+
+- context level
+
+- route(s) level
+
+- step(s) level
+
+- processor(s) level
+
+# Options
+
+# Exchange properties
+
+# Using Step EIP
+
+In Java, you use `step` to group together sub nodes as shown:
+
+ from("activemq:SomeQueue")
+ .step("foo")
+ .bean("foo")
+ .to("activemq:OutputQueue")
+ .end()
+ .to("direct:bar");
+
+As you can see this groups together `.bean("foo")` and
+`.to("activemq:OutputQueue")` into a logical unit with the name `foo`.
+
+In XML, you use the `` tag:
+
+
+
+
+
+
+
+
+
+
+You can have multiple steps:
+
+Java
+from("activemq:SomeQueue")
+.step("foo")
+.bean("foo")
+.to("activemq:OutputQueue")
+.end()
+.step("bar")
+.bean("something")
+.to("log:Something")
+.end()
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+YAML
+\- route:
+from:
+uri: activemq:SomeQueue
+steps:
+\- step:
+id: foo
+steps:
+\- bean:
+ref: foo
+\- to:
+uri: activemq:OutputQueue
+\- step:
+id: bar
+steps:
+\- bean:
+ref: something
+\- to:
+uri: log:Something
+
+## JMX Management of Step EIP
+
+Each Step EIP is registered in JMX under the `type=steps` tree, which
+allows monitoring all the steps in the CamelContext. It is also possible
+to dump statistics in XML format via the `dumpStepStatsAsXml` operation
+on the `CamelContext` or `Route` MBeans.
diff --git a/camel-stickyLoadBalancer-eip.md b/camel-stickyLoadBalancer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..77b8c2118d75cc7a4125cb223b574b6b4fcf01f9
--- /dev/null
+++ b/camel-stickyLoadBalancer-eip.md
@@ -0,0 +1,39 @@
+# StickyLoadBalancer-eip.md
+
+Sticky mode for the [Load Balancer](#loadBalance-eip.adoc) EIP.
+
+Sticky mode means that a correlation key (calculated as an
+[Expression](#manual::expression.adoc)) is used to determine the
+destination. This allows routing all messages with the same key to the
+same destination.
+
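The sticky behaviour described above can be sketched as a map from correlation key to destination, assigned round-robin the first time a key is seen. This is a simplification with hypothetical names; the actual Camel implementation differs in detail:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class StickySketch {
    // First time a key is seen it gets the next destination round-robin;
    // afterwards the same key always maps to the same destination.
    static String choose(Object key, List<String> destinations,
                         Map<Object, String> assignments, AtomicInteger counter) {
        return assignments.computeIfAbsent(key,
                k -> destinations.get(counter.getAndIncrement() % destinations.size()));
    }

    public static void main(String[] args) {
        List<String> dests = List.of("seda:x", "seda:y", "seda:z");
        Map<Object, String> seen = new HashMap<>();
        AtomicInteger counter = new AtomicInteger();
        System.out.println(choose("myKey-1", dests, seen, counter));
        System.out.println(choose("myKey-1", dests, seen, counter)); // same destination
    }
}
```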
+# Options
+
+# Exchange properties
+
+# Examples
+
+In this case, we are using the header `myKey` as the correlation expression:
+
+Java
+from("direct:start")
+.loadBalance().sticky(header("myKey"))
+.to("seda:x")
+.to("seda:y")
+.to("seda:z")
+.end();
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/camel-stitch.md b/camel-stitch.md
index b2777f8ac63cdc856df31789cf9090b6c60a52c2..d8f8b4c5c24c2a6a506574a5fd31d6053f2f1260 100644
--- a/camel-stitch.md
+++ b/camel-stitch.md
@@ -11,15 +11,8 @@ has integrations for many enterprise software data sources, and can
receive data via WebHooks and an API (Stitch Import API) which Camel
Stitch Component uses to produce the data to Stitch ETL.
-For more info, feel free to visit their website:
-[https://www.stitchdata.com/](https://www.stitchdata.com/)
-
-Prerequisites
-
-You must have a valid Stitch account, you will need to enable Stitch
-Import API and generate a token for the integration, for more info,
-please find more info
-[here](https://www.stitchdata.com/docs/developers/import-api/guides/quick-start).
+For more info, feel free to visit their
+[website](https://www.stitchdata.com/).
Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -35,16 +28,24 @@ for this component:
stitch:[tableName]//[?options]
-# Async Producer
+# Usage
+
+## Prerequisites
+
+You must have a valid Stitch account. You will need to enable the
+Stitch Import API and generate a token for the integration; for more
+info, see the [quick-start
+guide](https://www.stitchdata.com/docs/developers/import-api/guides/quick-start).
+
+## Async Producer
This component implements the async Consumer and producer.
This allows camel route to consume and produce events asynchronously
without blocking any threads.
-# Usage
-
-For example, to produce data to Stitch from a custom processor:
+**Example showing how to produce data to Stitch from a custom
+processor:**
from("direct:sendStitch")
.process(exchange -> {
@@ -101,7 +102,7 @@ component:
## Examples
-Here is list of examples of data that can be proceeded to Stitch:
+Here is a list of examples showing data that can be produced to Stitch:
### Input body type `org.apache.camel.component.stitch.client.models.StitchRequestBody`:
@@ -210,9 +211,9 @@ Here is list of examples of data that can be proceeded to Stitch:
})
.to("stitch:table_1?token=RAW({{token}})");
-## Development Notes (Important)
+# Development Notes (Important)
-When developing on this component, you will need to obtain your Stitch
+When developing on this component, you will need to obtain a Stitch
token to run the integration tests. In addition to the mocked unit
tests, you **will need to run the integration tests with every change
you make**. To run the integration tests, in this component directory,
diff --git a/camel-stomp.md b/camel-stomp.md
index c2eceb9e035f9df7c50ff3b2701beaab5bb1cb38..39d61eab00e0739a7c282072b48e159c3f832fb2 100644
--- a/camel-stomp.md
+++ b/camel-stomp.md
@@ -7,7 +7,7 @@
The Stomp component is used for communicating with
[Stomp](http://stomp.github.io/) compliant message brokers, like [Apache
ActiveMQ](http://activemq.apache.org) or [ActiveMQ
-Apollo](http://activemq.apache.org/apollo/)
+Artemis](https://activemq.apache.org/components/artemis/)
Since STOMP specification is not actively maintained, please note [STOMP
JMS
@@ -32,7 +32,7 @@ for this component:
Where **destination** is the name of the queue.
-# Samples
+# Examples
Sending messages:
@@ -51,22 +51,22 @@ are usually referred to in the DSL via their URIs.
From an Endpoint you can use the following methods
-- [createProducer()](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createProducer--)
+- [`createProducer()`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createProducer--)
will create a
[Producer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Producer.html)
for sending message exchanges to the endpoint
-- [createConsumer()](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createConsumer-org.apache.camel.Processor-)
+- [`createConsumer()`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createConsumer-org.apache.camel.Processor-)
implements the Event Driven Consumer pattern for consuming message
exchanges from the endpoint via a
- [Processor](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Processor.html)
+ [`Processor`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Processor.html)
when creating a
- [Consumer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Consumer.html)
+ [`Consumer`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Consumer.html)
-- [createPollingConsumer()](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createPollingConsumer--)
+- [`createPollingConsumer()`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html#createPollingConsumer--)
implements the Polling Consumer pattern for consuming message
exchanges from the endpoint via a
- [PollingConsumer](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/PollingConsumer.html)
+ [`PollingConsumer`](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/PollingConsumer.html)
## Component Configurations
diff --git a/camel-stop-eip.md b/camel-stop-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..a61063e3de22254c6e06d42712155e810c4ac0aa
--- /dev/null
+++ b/camel-stop-eip.md
@@ -0,0 +1,54 @@
+# Stop-eip.md
+
+How can I stop routing a message?
+
+
+
+
+
+Use a special filter to mark the message to be stopped.
+
+# Options
+
+# Exchange properties
+
+# Using Stop
+
+We want to stop routing a message if the message body contains the word
+`Bye`. In the [Content-Based Router](#choice-eip.adoc) below, we use `stop`
+in such a case.
+
+Java
+from("direct:start")
+.choice()
+.when(body().contains("Hello")).to("mock:hello")
+.when(body().contains("Bye")).to("mock:bye").stop()
+.otherwise().to("mock:other")
+.end()
+.to("mock:result");
+
+XML
+
+
+
+
+${body} contains 'Hello'
+
+
+
+${body} contains 'Bye'
+
+
+
+
+
+
+
+
+## Calling stop from Java
+
+You can also mark an `Exchange` to stop being routed from Java with the
+following code:
+
+ Exchange exchange = ...
+ exchange.setRouteStop(true);
diff --git a/camel-stream.md b/camel-stream.md
index ac05d6c76e28b18f3549eb8fc99ddc6eac165c1f..aa95c28fa9176f1e2c2ef12f260fad6c686f510f 100644
--- a/camel-stream.md
+++ b/camel-stream.md
@@ -30,7 +30,9 @@ If the `stream:header` URI is specified, the `stream` header is used to
find the stream to write to. This option is available only for stream
producers (that is, it cannot appear in `from()`).
-# Message content
+# Usage
+
+## Message content
The Stream component supports either `String` or `byte[]` for writing to
streams. Just add either `String` or `byte[]` content to the
@@ -43,7 +45,7 @@ add a `java.io.OutputStream` object to `message.in.header` in the key
`header`.
See samples for an example.
-# Samples
+# Examples
In the following sample we route messages from the `direct:in` endpoint
to the `System.out` stream:
@@ -75,7 +77,7 @@ should also turn on the `fileWatcher` and `retry` options.
from("stream:file?fileName=/server/logs/server.log&scanStream=true&scanStreamDelay=1000&retry=true&fileWatcher=true")
.to("bean:logService?method=parseLogLine");
-# Reading HTTP server side streaming
+## Reading HTTP server side streaming
The camel-stream component has basic support for connecting to a remote
HTTP server and read streaming data (chunk of data separated by
diff --git a/camel-streamConfig-eip.md b/camel-streamConfig-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..000149aa46de091875af1bcc9c330cb6e70b0afa
--- /dev/null
+++ b/camel-streamConfig-eip.md
@@ -0,0 +1,5 @@
+# StreamConfig-eip.md
+
+Configuring for [Resequence EIP](#resequence-eip.adoc) in stream mode.
+
+# Exchange properties
diff --git a/camel-string-template.md b/camel-string-template.md
index 116c9e6bd6ca13ad7f5fee0051dfba2b3c181aba..dd6c219fb708972cb265f4eeaf5b8fc330926e38 100644
--- a/camel-string-template.md
+++ b/camel-string-template.md
@@ -25,13 +25,15 @@ for this component:
Where **templateName** is the classpath-local URI of the template to
invoke; or the complete URL of the remote template.
-# Headers
+# Usage
+
+## Headers
Camel will store a reference to the resource in the message header with
key, `org.apache.camel.stringtemplate.resource`. The Resource is an
`org.springframework.core.io.Resource` object.
-# String Template Context
+## String Template Context
Camel will provide exchange information in the String Template context
(just a `Map`). The `Exchange` is transferred as:
@@ -42,44 +44,44 @@ Camel will provide exchange information in the String Template context
|key|value|
|---|---|
|exchange|The Exchange itself.|
|exchange.properties|The Exchange properties.|
|variables|The variables|
|headers|The headers of the In message.|
|camelContext|The Camel Context.|
|request|The In message.|
|body|The In message body.|
|response|The Out message (only for InOut message exchange pattern).|
@@ -87,21 +89,21 @@ exchange pattern).
-# Hot reloading
+## Hot reloading
The string template resource is by default hot-reloadable for both file
and classpath resources (expanded jar). If you set `contentCache=true`,
Camel loads the resource only once and hot-reloading is not possible.
This scenario can be used in production when the resource never changes.
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
headers is set, then Camel uses this over the endpoint configured
resource. This allows you to provide a dynamic template at runtime.
-# StringTemplate Attributes
+## StringTemplate Attributes
You can define the custom context map by setting the message header
"**CamelStringTemplateVariableMap**" just like the below code.
@@ -114,7 +116,7 @@ You can define the custom context map by setting the message header
variableMap.put("exchange", exchange);
exchange.getIn().setHeader("CamelStringTemplateVariableMap", variableMap);
-# Samples
+# Examples
For example, you could use a string template as follows in order to
formulate a response to a message:
@@ -122,7 +124,7 @@ formulate a response to a message:
from("activemq:My.Queue").
to("string-template:com/acme/MyResponse.tm");
-# The Email Sample
+## The Email Example
In this sample, we want to use a string template to send an order
confirmation email. The email template is laid out in `StringTemplate`
@@ -135,8 +137,6 @@ as:
Regards Camel Riders Bookstore
-And the java code is as follows:
-
## Component Configurations
diff --git a/camel-stub.md b/camel-stub.md
index cd2d0a6c067873ebda6a6630e875d43205769f90..d42922c0a89650901ba4720d141c7aef008769bc 100644
--- a/camel-stub.md
+++ b/camel-stub.md
@@ -10,6 +10,14 @@ run a route without needing to actually connect to a specific
[SMTP](#mail-component.adoc) or [Http](#http-component.adoc) endpoint.
Add **stub:** in front of any endpoint URI to stub out the endpoint.
+# URI format
+
+ stub:someUri
+
+Where **`someUri`** can be any URI with any query parameters.
+
+# Usage
+
Internally, the Stub component creates [Seda](#seda-component.adoc)
endpoints. The main difference between [Stub](#stub-component.adoc) and
[Seda](#seda-component.adoc) is that [Seda](#seda-component.adoc) will
@@ -18,12 +26,6 @@ of a typical URI with query arguments will usually fail. Stub won’t,
though, as it basically ignores all query parameters to let you quickly
stub out one or more endpoints in your route temporarily.
-# URI format
-
- stub:someUri
-
-Where **`someUri`** can be any URI with any query parameters.
-
# Examples
Here are a few samples of stubbing endpoint uris
@@ -43,7 +45,7 @@ Here are a few samples of stubbing endpoint uris
|defaultPollTimeout|The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.|1000|integer|
|defaultBlockWhenFull|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted.|false|boolean|
|defaultDiscardWhenFull|Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue.|false|boolean|
-|defaultOfferTimeout|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue||integer|
+|defaultOfferTimeout|Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, a configured timeout can be added to the blocking case, using the .offer(timeout) method of the underlying java queue.||integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|defaultQueueFactory|Sets the default queue factory.||object|
diff --git a/camel-swiftMt-dataformat.md b/camel-swiftMt-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..5b6ee1ad8c40ed659ace94271d9cf7da997acd6a
--- /dev/null
+++ b/camel-swiftMt-dataformat.md
@@ -0,0 +1,92 @@
+# SwiftMt-dataformat.md
+
+**Since Camel 3.20**
+
+The SWIFT MT data format is used to encode and decode SWIFT MT messages.
+The data format leverages the library [Prowide
+Core](https://github.com/prowide/prowide-core) to encode and decode
+SWIFT MT messages.
+
+# Options
+
+In Spring DSL, you configure the data format using this tag:
+
+
+
+
+
+ ...
+
+
+Then you can use it later by its reference:
+
+
+
+
+
+
+
+Most of the time, you won’t need to declare the data format if you use
+the default options. In that case, you can declare the data format
+inline as shown below:
+
+
+
+
+
+
+
+
+
+# Marshal
+
+In this example, we marshal the messages read from a JMS queue in SWIFT
+format before storing the result into a file.
+
+ from("jms://myqueue")
+ .marshal().swiftMt()
+ .to("file://data.bin");
+
+In Spring DSL:
+
+    <route>
+        <from uri="jms://myqueue"/>
+        <marshal><swiftMt/></marshal>
+        <to uri="file://data.bin"/>
+    </route>
+
+# Unmarshal
+
+The unmarshaller converts the input data into the concrete class of type
+`com.prowidesoftware.swift.model.mt.AbstractMT` that best matches the
+content of the message.
+
+In this example, we unmarshal the content of a file to get SWIFT MT
+objects before processing them with the `newOrder` processor.
+
+**SwiftMt example in Java**
+
+ from("file://data.bin")
+ .unmarshal().swiftMt()
+ .process("newOrder");
+
+**SwiftMt example in Spring DSL**
+
+    <route>
+        <from uri="file://data.bin"/>
+        <unmarshal><swiftMt/></unmarshal>
+        <process ref="newOrder"/>
+    </route>
+
+# Dependencies
+
+To use SWIFT MT in your Camel routes, you need to add a dependency on
+**camel-swift** which implements this data format.
+
+If you use Maven, you can add the following to your `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-swift</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-swiftMx-dataformat.md b/camel-swiftMx-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..434a6fd1bf9b93fbe0fafc2f09c285049354c638
--- /dev/null
+++ b/camel-swiftMx-dataformat.md
@@ -0,0 +1,90 @@
+# SwiftMx-dataformat.md
+
+**Since Camel 3.20**
+
+The SWIFT MX data format is used to encode and decode SWIFT MX messages.
+The data format leverages the library [Prowide ISO
+20022](https://github.com/prowide/prowide-iso20022) to encode and decode
+SWIFT MX messages.
+
+# Options
+
+In Spring DSL, you configure the data format using this tag:
+
+
+
+
+
+ ...
+
+
+Then you can use it later by its reference:
+
+
+
+
+
+
+
+Most of the time, you won’t need to declare the data format if you use
+the default options. In that case, you can declare the data format
+inline as shown below:
+
+
+
+
+
+
+
+
+
+# Marshal
+
+In this example, we marshal the messages read from a JMS queue in SWIFT
+format before storing the result into a file.
+
+ from("jms://myqueue")
+ .marshal().swiftMx()
+ .to("file://data.bin");
+
+In Spring DSL:
+
+    <route>
+        <from uri="jms://myqueue"/>
+        <marshal><swiftMx/></marshal>
+        <to uri="file://data.bin"/>
+    </route>
+
+# Unmarshal
+
+The unmarshaller converts the input data into the concrete class of type
+`com.prowidesoftware.swift.model.mx.AbstractMX` that best matches the
+content of the message.
+
+In this example, we unmarshal the content of a file to get SWIFT MX
+objects before processing them with the `newOrder` processor.
+
+ from("file://data.bin")
+ .unmarshal().swiftMx()
+ .process("newOrder");
+
+In Spring DSL:
+
+    <route>
+        <from uri="file://data.bin"/>
+        <unmarshal><swiftMx/></unmarshal>
+        <process ref="newOrder"/>
+    </route>
+
+# Dependencies
+
+To use SWIFT MX in your Camel routes, you need to add a dependency on
+**camel-swift** which implements this data format.
+
+If you use Maven, you can add the following to your `pom.xml`:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-swift</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-syslog-dataformat.md b/camel-syslog-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..f994847d55074b394f8f1a6dd725e05cdf92da32
--- /dev/null
+++ b/camel-syslog-dataformat.md
@@ -0,0 +1,113 @@
+# Syslog-dataformat.md
+
+**Since Camel 2.6**
+
+The Syslog dataformat is used for working with
+[RFC3164](http://www.ietf.org/rfc/rfc3164.txt) and RFC5424 messages.
+
+This component supports the following:
+
+- UDP consumption of syslog messages
+
+- Agnostic data format using either plain String objects or
+ SyslogMessage model objects.
+
+- Type Converter from/to SyslogMessage and String
+
+- Integration with the [camel-mina](#ROOT:mina-component.adoc)
+ component.
+
+- Integration with the [camel-netty](#ROOT:netty-component.adoc)
+ component.
+
+- Encoder and decoder for the [Netty
+ component](#ROOT:netty-component.adoc).
+
+- Support for RFC5424 also.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-syslog</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+
+# RFC3164 Syslog protocol
+
+Syslog uses the User Datagram Protocol (UDP) as its underlying transport
+layer mechanism. The UDP port assigned to syslog is 514.
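The numeric priority that prefixes each RFC3164 message encodes both a facility and a severity. As a standalone illustration (a hypothetical helper class, not part of camel-syslog), the priority can be decoded like this:

```java
// Decode the PRI part of an RFC3164 syslog message,
// e.g. "<165>Aug 24 05:34:00 host app: a message".
// Per RFC3164, facility = PRI / 8 and severity = PRI % 8.
public class SyslogPriDecoder {

    /** Returns {facility, severity} for a raw RFC3164 message. */
    public static int[] decodePri(String message) {
        int end = message.indexOf('>');
        int pri = Integer.parseInt(message.substring(1, end));
        return new int[] { pri / 8, pri % 8 };
    }
}
```

For example, a `<165>` prefix yields facility 20 (local4) and severity 5 (notice).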
+
+To expose a Syslog listener service, we reuse the existing [Mina
+Component](#ROOT:mina-component.adoc) or [Netty
+Component](#ROOT:netty-component.adoc) where we just use the
+`Rfc3164SyslogDataFormat` to marshal and unmarshal messages. Notice that
+from **Camel 2.14** onwards, the syslog dataformat was renamed to
+`SyslogDataFormat`.
+
+# Options
+
+# RFC5424 Syslog protocol
+
+**Since Camel 2.14**
+
+To expose a Syslog listener service, we reuse the existing [Mina
+Component](#ROOT:mina-component.adoc) or [Netty
+Component](#ROOT:netty-component.adoc) where we just use the
+`SyslogDataFormat` to marshal and unmarshal messages.
+
+## Exposing a Syslog listener
+
+In our Spring XML file, we configure an endpoint to listen for UDP
+messages on port 10514. Note that for Netty we disable the defaultCodec;
+this allows a fallback to a NettyTypeConverter and delivers the message
+as an InputStream:
+
+    <camelContext id="myCamel" xmlns="http://camel.apache.org/schema/spring">
+
+        <dataFormats>
+            <syslog id="mySyslog"/>
+        </dataFormats>
+
+        <route>
+            <from uri="netty:udp://localhost:10514?sync=false&amp;allowDefaultCodec=false"/>
+            <unmarshal><custom ref="mySyslog"/></unmarshal>
+            <to uri="mock:result"/>
+        </route>
+
+    </camelContext>
+
+The same route using [Mina Component](#ROOT:mina-component.adoc)
+
+    <camelContext id="myCamel" xmlns="http://camel.apache.org/schema/spring">
+
+        <dataFormats>
+            <syslog id="mySyslog"/>
+        </dataFormats>
+
+        <route>
+            <from uri="mina:udp://localhost:10514?sync=false"/>
+            <unmarshal><custom ref="mySyslog"/></unmarshal>
+            <to uri="mock:result"/>
+        </route>
+
+    </camelContext>
+
+## Sending syslog messages to a remote destination
+
+    <route>
+        <from uri="direct:syslog"/>
+        <marshal><custom ref="mySyslog"/></marshal>
+        <to uri="mina:udp://remotehost:10514?sync=false"/>
+    </route>
+
diff --git a/camel-tahu-edge.md b/camel-tahu-edge.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f16f29823c767da1e9a3b8ef9cb08bf79e28d3a
--- /dev/null
+++ b/camel-tahu-edge.md
@@ -0,0 +1,145 @@
+# Tahu-edge
+
+**Since Camel 4.8**
+
+**Only producer is supported**
+
+# URI format
+
+Tahu Edge Nodes and Devices use the same URI scheme and the same Tahu
+Edge Component and Endpoint.
+
+**Edge Node endpoints, where `groupId` and `edgeNodeId` are the
+Sparkplug Group and Edge Node IDs describing the Edge Node.**
+
+ tahu-edge://groupId/edgeNodeId?options
+
+**Edge Node Producer for Group *Basic* and Edge Node *EdgeNode* using
+MQTT Client ID *EdgeClient1* connecting to Host Application
+*BasicHostApp***
+
+ tahu-edge://Basic/EdgeNode?clientId=EdgeClient1&primaryHostId=BasicHostApp&deviceIds=D2,D3,D4
+
+**Device endpoints, where `groupId`, `edgeNodeId`, and `deviceId` are
+the Sparkplug Group, Edge Node, and Device IDs describing the Device.**
+
+ tahu-edge://groupId/edgeNodeId/deviceId
+
+**Device Producers for Devices *D2*, *D3*, and *D4* connected to Edge
+Node *EdgeNode* in Group *Basic*, i.e. the Devices of the Edge Node in
+the example above**
+
+ tahu-edge://Basic/EdgeNode/D2
+ tahu-edge://Basic/EdgeNode/D3
+ tahu-edge://Basic/EdgeNode/D4
+
+# Usage
+
+## Edge Node Endpoint Configuration
+
+Sparkplug Edge Nodes are identified by a unique combination of Group ID
+and Edge Node ID, the Edge Node Descriptor. These two elements form the
+path of an Edge Node Endpoint URI. All other Edge Node Endpoint
+configuration properties use query string variables or are set via
+Endpoint property setters.
+
+If an Edge Node is tied to a particular Host Application, the
+`primaryHostId` query string variable can be set to enable the required
+Sparkplug behavior.
+
+Metric aliasing is handled automatically by the Eclipse Tahu library and
+enabled with the `useAliases` query string variable.
+
+### Birth/Death Sequence Numbers
+
+The Sparkplug specification requires careful handling of NBIRTH/NDEATH
+sequence numbers for Host Applications to correlate Edge Nodes' session
+behavior with the metrics the Host Application receives.
+
+By default, each Edge Node Endpoint writes a local file to store the
+next sequence number that Edge Node should use when publishing its
+NBIRTH message and setting its NDEATH MQTT Will Message when
+establishing an MQTT Server connection. The local path for this file can
+be set using the `bdSeqNumPath` query string variable.
+
+Should another Sparkplug spec-compliant Eclipse Tahu `BdSeqManager`
+instance be required, use the `bdSeqManager` Endpoint property setter
+method.
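The behavior above can be sketched as follows (a hypothetical file-backed store for illustration only, not the actual Tahu `BdSeqManager` API; Sparkplug bdSeq numbers wrap in the range 0-255):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical file-backed birth/death sequence number store.
public class FileBdSeqStore {
    private final Path path;

    public FileBdSeqStore(Path path) {
        this.path = path;
    }

    // Returns the bdSeq number to use for the next NBIRTH/NDEATH pair
    // and persists the number to use after it, wrapping at 256.
    public int nextBdSeqNum() throws IOException {
        int current = 0;
        if (Files.exists(path)) {
            current = Integer.parseInt(Files.readString(path).trim());
        }
        Files.writeString(path, Integer.toString((current + 1) % 256));
        return current;
    }
}
```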
+
+## Device Endpoint Configuration
+
+Sparkplug Devices are identified by a unique combination of the Edge
+Node Descriptor to which the Device is connected and the Device’s Device
+ID. These three elements form the path of a Device Endpoint URI. Since
+any Sparkplug Device is associated with exactly one Edge Node, an MQTT
+Server connection and its associated Sparkplug behavior are managed per
+Edge Node, not per Device. This means all Device Endpoint configuration
+must be completed prior to starting the Edge Node Producer for a given
+Device Endpoint.
+
+Device Endpoints inherit all MQTT Server connection information from
+their associated Edge Node Endpoint. Setting Component- or
+Endpoint-level configuration values on Device Components or Endpoints is
+unnecessary and should be avoided.
+
+## Edge Node and Device Endpoint Interaction
+
+Sparkplug Edge Nodes are not required to have a Device hierarchy and
+physical devices may be represented directly as Edge Nodes—this decision
+is left to Sparkplug application developers.
+
+However, if an Edge Node will be reporting Device-level metrics in
+addition to Edge Node-level metrics, the Edge Node Endpoint is required
+to have a `deviceIds` list configured to publish the correct NBIRTH and
+DBIRTH payloads required by the Sparkplug specification.
+
+Additionally, a Tahu `SparkplugBPayloadMap` instance is required to be
+set on each Edge Node and Device Endpoint to populate the NBIRTH/DBIRTH
+message with the required Sparkplug Metric names and data types. This is
+accomplished using the `metricDataTypePayloadMap` Endpoint property
+setter method.
+
+These requirements allow Sparkplug 3.0.0 compliant behavior.
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|checkClientIdLength|MQTT client ID length check enabled|false|boolean|
+|clientId|MQTT client ID to use for all server definitions, rather than specifying the same one for each. Note that if neither the 'clientId' parameter nor an 'MqttClientId' are defined for an MQTT Server, a random MQTT Client ID will be generated automatically, prefaced with 'Camel'||string|
+|keepAliveTimeout|MQTT connection keep alive timeout, in seconds|30|integer|
+|rebirthDebounceDelay|Delay before recurring node rebirth messages will be sent|5000|integer|
+|servers|MQTT server definitions, given with the following syntax in a comma-separated list: MqttServerName:(MqttClientId:)(tcp/ssl)://hostname(:port),...||string|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|configuration|To use a shared Tahu configuration||object|
+|password|Password for MQTT server authentication||string|
+|sslContextParameters|SSL configuration for MQTT server connections||object|
+|useGlobalSslContextParameters|Enable/disable global SSL context parameters use|false|boolean|
+|username|Username for MQTT server authentication||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|groupId|ID of the group||string|
+|edgeNode|ID of the edge node||string|
+|deviceId|ID of this edge node device||string|
+|checkClientIdLength|MQTT client ID length check enabled|false|boolean|
+|clientId|MQTT client ID to use for all server definitions, rather than specifying the same one for each. Note that if neither the 'clientId' parameter nor an 'MqttClientId' are defined for an MQTT Server, a random MQTT Client ID will be generated automatically, prefaced with 'Camel'||string|
+|keepAliveTimeout|MQTT connection keep alive timeout, in seconds|30|integer|
+|rebirthDebounceDelay|Delay before recurring node rebirth messages will be sent|5000|integer|
+|servers|MQTT server definitions, given with the following syntax in a comma-separated list: MqttServerName:(MqttClientId:)(tcp/ssl)://hostname(:port),...||string|
+|metricDataTypePayloadMap|Tahu SparkplugBPayloadMap to configure metric data types for this edge node or device. Note that this payload is used exclusively as a Sparkplug B spec-compliant configuration for all possible edge node or device metric names, aliases, and data types. This configuration is required to publish proper Sparkplug B NBIRTH and DBIRTH payloads.||object|
+|headerFilterStrategy|To use a custom HeaderFilterStrategy to filter headers used as Sparkplug metrics. Default value notice: Defaults to sending all Camel Message headers with name prefixes of CamelTahuMetric., including those with null values||object|
+|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
+|bdSeqManager|To use a specific org.eclipse.tahu.message.BdSeqManager implementation to manage edge node birth-death sequence numbers|org.apache.camel.component.tahu.CamelBdSeqManager|object|
+|bdSeqNumPath|Path for Sparkplug B NBIRTH/NDEATH sequence number persistence files. This path will contain files named as -bdSeqNum and must be writable by the executing process' user|${sys:java.io.tmpdir}/CamelTahuTemp|string|
+|useAliases|Flag enabling support for metric aliases|false|boolean|
+|deviceIds|ID of each device connected to this edge node, as a comma-separated list||string|
+|primaryHostId|Host ID of the primary host application for this edge node||string|
+|password|Password for MQTT server authentication||string|
+|sslContextParameters|SSL configuration for MQTT server connections||object|
+|username|Username for MQTT server authentication||string|
diff --git a/camel-tahu-host.md b/camel-tahu-host.md
new file mode 100644
index 0000000000000000000000000000000000000000..5517ae9f31f043167ab1562265433453a50bbde7
--- /dev/null
+++ b/camel-tahu-host.md
@@ -0,0 +1,53 @@
+# Tahu-host
+
+**Since Camel 4.8**
+
+**Only consumer is supported**
+
+# URI format
+
+**Host Application endpoints, where `hostId` is the Sparkplug Host
+Application ID**
+
+ tahu-host://hostId?options
+
+**Host Application Consumer for Host App *BasicHostApp* using MQTT
+Client ID *HostClient1***
+
+ tahu-host:BasicHostApp?clientId=HostClient1
+
+## Component Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|checkClientIdLength|MQTT client ID length check enabled|false|boolean|
+|clientId|MQTT client ID to use for all server definitions, rather than specifying the same one for each. Note that if neither the 'clientId' parameter nor an 'MqttClientId' are defined for an MQTT Server, a random MQTT Client ID will be generated automatically, prefaced with 'Camel'||string|
+|keepAliveTimeout|MQTT connection keep alive timeout, in seconds|30|integer|
+|rebirthDebounceDelay|Delay before recurring node rebirth messages will be sent|5000|integer|
+|servers|MQTT server definitions, given with the following syntax in a comma-separated list: MqttServerName:(MqttClientId:)(tcp/ssl)://hostname(:port),...||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
+|configuration|To use a shared Tahu configuration||object|
+|password|Password for MQTT server authentication||string|
+|sslContextParameters|SSL configuration for MQTT server connections||object|
+|useGlobalSslContextParameters|Enable/disable global SSL context parameters use|false|boolean|
+|username|Username for MQTT server authentication||string|
+
+## Endpoint Configurations
+
+
+|Name|Description|Default|Type|
+|---|---|---|---|
+|hostId|ID for the host application||string|
+|checkClientIdLength|MQTT client ID length check enabled|false|boolean|
+|clientId|MQTT client ID to use for all server definitions, rather than specifying the same one for each. Note that if neither the 'clientId' parameter nor an 'MqttClientId' are defined for an MQTT Server, a random MQTT Client ID will be generated automatically, prefaced with 'Camel'||string|
+|keepAliveTimeout|MQTT connection keep alive timeout, in seconds|30|integer|
+|rebirthDebounceDelay|Delay before recurring node rebirth messages will be sent|5000|integer|
+|servers|MQTT server definitions, given with the following syntax in a comma-separated list: MqttServerName:(MqttClientId:)(tcp/ssl)://hostname(:port),...||string|
+|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
+|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
+|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
+|password|Password for MQTT server authentication||string|
+|sslContextParameters|SSL configuration for MQTT server connections||object|
+|username|Username for MQTT server authentication||string|
diff --git a/camel-tahu-summary.md b/camel-tahu-summary.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a8d0361472f8f09c8f0c9ddc81dc7b6db6c3efd
--- /dev/null
+++ b/camel-tahu-summary.md
@@ -0,0 +1,134 @@
+# Tahu-summary.md
+
+**Since Camel 4.8**
+
+**Both producer and consumer are supported**
+
+The Tahu components adapt the [Eclipse
+Tahu](https://projects.eclipse.org/projects/iot.tahu) library for Camel.
+These components support creating Sparkplug Edge Nodes, Devices, and
+Host Applications as described by [Eclipse
+Sparkplug](https://projects.eclipse.org/projects/iot.sparkplug) using
+Sparkplug B payload encoding.
+
+For more information regarding Sparkplug concepts and required behavior,
+consult the [Sparkplug 3.0.0
+Specification](https://www.eclipse.org/tahu/spec/sparkplug_spec.pdf).
+
+Neither the use of the Eclipse Tahu library nor the Camel Tahu
+Components implies Sparkplug 3.0.0 specification compliance. While it
+**should** be possible to create Sparkplug 3.0.0-compliant applications
+using the Camel Tahu Components, no claims or guarantees are expressed
+or implied.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>{artifactid}</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
+
+# Tahu components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=Tahu*,descriptionformat=description\]
+
+# URI format
+
+## Edge Nodes and Devices (Producers)
+
+**Edge Node and Device endpoints, where `groupId`, `edgeNodeId`, and
+`deviceId` are the Sparkplug Group, Edge Node, and Device IDs describing
+the Edge Node or Device.**
+
+ tahu-edge://groupId/edgeNodeId[/deviceId]?options
+
+**Edge Node Producer for Group *Basic* and Edge Node *EdgeNode* using
+MQTT Client ID *EdgeClient1* connecting to Host Application
+*BasicHostApp***
+
+ tahu-edge://Basic/EdgeNode?clientId=EdgeClient1&primaryHostId=BasicHostApp&deviceIds=D2,D3,D4
+
+**Device Producers for Devices *D2*, *D3*, and *D4* connected to Edge
+Node *EdgeNode* in Group *Basic*, i.e. the Devices of the Edge Node in
+the example above**
+
+ tahu-device://Basic/EdgeNode/D2
+ tahu-device://Basic/EdgeNode/D3
+ tahu-device://Basic/EdgeNode/D4
+
+## Host Applications (Consumers)
+
+**Host Application endpoints, where `hostId` is the Sparkplug Host
+Application ID**
+
+ tahu-host://hostId?options
+
+**Host Application Consumer for Host App *BasicHostApp* using MQTT
+Client ID *HostClient1***
+
+ tahu-host:BasicHostApp?clientId=HostClient1
+
+# Endpoints
+
+Tahu component endpoints describe a Sparkplug Edge Node, Device, or Host
+Application. All Sparkplug specification requirements must be observed
+when defining the endpoint URIs, including allowed characters in names,
+uniqueness in IDs, etc. Device IDs can include additional hierarchy with
+*/* characters as allowed by the specification.
+
+Tahu Edge Node and Device endpoints only allow Producers to be created.
+Tahu Host Application endpoints only allow Consumers to be created.
+
+# Usage
+
+The Sparkplug 3.0.0 specification requires Sparkplug B MQTT message
+payloads to follow a Google Protobuf format with a specific structure
+and message order. Many of these requirements necessitate careful Tahu
+Component and Endpoint configurations.
+
+## Component Configuration
+
+Tahu Component configuration is primarily composed of MQTT Server
+connection information. These properties may be configured on Endpoint
+URIs or the Tahu Component to cover all Endpoints created using that
+Component instance.
+
+The `servers` property is a comma-separated list with the following
+syntax:
+
+ MqttServerName:[MqttClientId:](tcp|ssl)://hostname[:port],...
+
+This gives a unique server name to each MQTT Server as well as its
+connection scheme (`tcp` or `ssl`), hostname, and optionally the port
+number. A connection-specific MQTT Client ID may also be assigned when
+connecting to this particular server. MQTT Client ID uniqueness
+requirements apply.
+
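For illustration, here is a minimal sketch of how an entry of that list breaks down. This is an assumed parsing approach, not the component's actual parser; `MqttServer` and `parse` are hypothetical names:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ServersSyntaxSketch {

    // One parsed entry of the `servers` list; clientId and port may be null
    // since both are optional in the syntax.
    record MqttServer(String name, String clientId, String scheme, String host, Integer port) {}

    // Matches MqttServerName:[MqttClientId:](tcp|ssl)://hostname[:port]
    private static final Pattern ENTRY = Pattern.compile(
            "([^:]+):(?:([^:]+):)?(tcp|ssl)://([^:,]+)(?::(\\d+))?");

    static MqttServer parse(String entry) {
        Matcher m = ENTRY.matcher(entry);
        if (!m.matches()) {
            throw new IllegalArgumentException("Bad servers entry: " + entry);
        }
        Integer port = m.group(5) != null ? Integer.valueOf(m.group(5)) : null;
        return new MqttServer(m.group(1), m.group(2), m.group(3), m.group(4), port);
    }

    public static void main(String[] args) {
        // Two servers, the first with a connection-specific MQTT Client ID.
        String servers = "Local:EdgeClient1:tcp://localhost:1883,Remote:ssl://broker.example.com";
        for (String entry : servers.split(",")) {
            System.out.println(parse(entry));
        }
    }
}
```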
+A common MQTT Client ID may also be configured through the `clientId`
+property; it applies to all MQTT Server connections that do NOT specify
+a connection-specific `MqttClientId` in the `servers` list. If neither
+`clientId` nor a connection-specific `MqttClientId` is set, a random
+MQTT Client ID prefixed with "Camel" is generated.
+
+MQTT Client IDs are limited to 23 characters in MQTT v3.1. However, MQTT
+v3.1.1 increased that limit to 256 characters. When connecting to MQTT
+Servers only supporting v3.1, setting the `checkClientIdLength` flag to
+`true` will add a 23-character length check to ensure proper Client ID
+lengths. This is a configuration-time check and is not required to
+connect to MQTT v3.1 Servers.
+
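The v3.1 constraint can be illustrated with a short sketch. This is assumed logic mirroring the description above, not the component's actual code; `checkClientIdLength` here is a hypothetical helper:

```java
public class ClientIdLengthSketch {

    // MQTT v3.1 limits Client IDs to 23 characters; v3.1.1 raised this to 256.
    static final int MQTT_V31_MAX = 23;

    // Mirrors the kind of configuration-time check enabled by `checkClientIdLength`.
    static void checkClientIdLength(String clientId) {
        if (clientId.length() > MQTT_V31_MAX) {
            throw new IllegalArgumentException("MQTT v3.1 Client ID longer than "
                    + MQTT_V31_MAX + " characters: " + clientId);
        }
    }

    public static void main(String[] args) {
        checkClientIdLength("EdgeClient1"); // 11 characters: valid for v3.1
        System.out.println("ok");
    }
}
```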
+MQTT Server authentication can be configured using the `username` and
+`password` properties. TLS configuration can also be configured by
+providing an `SSLContextParameters` instance or through the
+`useGlobalSslContextParameters` flag.
+
+An MQTT connection keep alive timeout can be configured using
+`keepAliveTimeout`.
+
+A delay between Edge Node Rebirth message publishes can be configured
+through the `rebirthDebounceDelay` property.
diff --git a/camel-tarFile-dataformat.md b/camel-tarFile-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..05bcaeafe28fbda170ee4ecfc43f969e6ad1a301
--- /dev/null
+++ b/camel-tarFile-dataformat.md
@@ -0,0 +1,121 @@
+# TarFile-dataformat.md
+
+**Since Camel 2.16**
+
+The Tar File Data Format is a message compression and decompression
+format. Messages can be marshalled (compressed) to Tar Files containing
+a single entry, and Tar Files containing a single entry can be
+unmarshalled (decompressed) to the original file contents.
+
+There is also an aggregation strategy that can aggregate multiple
+messages into a single Tar File.
+
+# TarFile Options
+
+# Marshal
+
+In this example, we marshal a regular text/XML payload to a compressed
+payload using Tar File compression, and send it to an ActiveMQ queue
+called MY\_QUEUE.
+
+ from("direct:start").marshal().tarFile().to("activemq:queue:MY_QUEUE");
+
+The name of the Tar entry inside the created Tar File is based on the
+incoming `CamelFileName` message header, which is the standard message
+header used by the file component. Additionally, the outgoing
+`CamelFileName` message header is automatically set to the value of the
+incoming `CamelFileName` message header, with the ".tar" suffix. So, for
+example, if the following route finds a file named "test.txt" in the
+input directory, the output will be a Tar File named "test.txt.tar"
+containing a single Tar entry named "test.txt":
+
+    from("file:input/directory?antInclude=*.txt").marshal().tarFile().to("file:output/directory");
+
+If there is no incoming `CamelFileName` message header (for example, if
+the file component is not the consumer), then the message ID is used by
+default, and since the message ID is normally a unique generated ID, you
+will end up with filenames like
+`ID-MACHINENAME-2443-1211718892437-1-0.tar`. If you want to override
+this behavior, then you can set the value of the `CamelFileName` header
+explicitly in your route:
+
+ from("direct:start").setHeader(Exchange.FILE_NAME, constant("report.txt")).marshal().tarFile().to("file:output/directory");
+
+This route would result in a Tar File named "report.txt.tar" in the
+output directory, containing a single Tar entry named "report.txt".
+
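The naming rule described above can be sketched in plain Java. This is only an illustration of the documented behavior; `outgoingTarName` is a hypothetical helper, not part of `camel-tarfile`:

```java
public class TarNameSketch {

    // Derives the outgoing file name: the incoming CamelFileName header when
    // present, otherwise the message ID, with the ".tar" suffix appended.
    static String outgoingTarName(String camelFileName, String messageId) {
        String base = camelFileName != null ? camelFileName : messageId;
        return base + ".tar";
    }

    public static void main(String[] args) {
        System.out.println(outgoingTarName("test.txt", null));
        System.out.println(outgoingTarName(null, "ID-MACHINENAME-2443-1211718892437-1-0"));
    }
}
```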
+# Unmarshal
+
+In this example we unmarshal a Tar File payload from an ActiveMQ queue
+called MY\_QUEUE to its original format, and forward it for processing
+to the `UnTarpedMessageProcessor`.
+
+ from("activemq:queue:MY_QUEUE").unmarshal().tarFile().process(new UnTarpedMessageProcessor());
+
+If the Tar File has more than one entry, set the `usingIterator` option
+of `TarFileDataFormat` to `true`; you can then use the splitter to do
+the further work:
+
+ TarFileDataFormat tarFile = new TarFileDataFormat();
+ tarFile.setUsingIterator(true);
+ from("file:src/test/resources/org/apache/camel/dataformat/tarfile/?delay=1000&noop=true")
+ .unmarshal(tarFile)
+ .split(bodyAs(Iterator.class))
+ .streaming()
+ .process(new UnTarpedMessageProcessor())
+ .end();
+
+Or you can use the `TarSplitter` as an expression for the splitter
+directly, like this:
+
+ from("file:src/test/resources/org/apache/camel/dataformat/tarfile?delay=1000&noop=true")
+ .split(new TarSplitter())
+ .streaming()
+ .process(new UnTarpedMessageProcessor())
+ .end();
+
+Note that the `TarSplitter` cannot be used with the splitter in
+*parallel* mode.
+
+# Aggregate
+
+Please note that this aggregation strategy requires eager completion
+check to work properly.
+
+In this example, we aggregate all text files found in the input
+directory into a single Tar File that is stored in the output directory.
+
+    from("file:input/directory?antInclude=*.txt")
+ .aggregate(new TarAggregationStrategy())
+ .constant(true)
+ .completionFromBatchConsumer()
+ .eagerCheckCompletion()
+ .to("file:output/directory");
+
+The outgoing `CamelFileName` message header is created using
+`java.io.File.createTempFile`, with the ".tar" suffix. If you want to
+override this behavior, then you can set the value of the
+`CamelFileName` header explicitly in your route:
+
+    from("file:input/directory?antInclude=*.txt")
+ .aggregate(new TarAggregationStrategy())
+ .constant(true)
+ .completionFromBatchConsumer()
+ .eagerCheckCompletion()
+ .setHeader(Exchange.FILE_NAME, constant("reports.tar"))
+ .to("file:output/directory");
+
+# Dependencies
+
+To use Tar Files in your camel routes, you need to add a dependency on
+**camel-tarfile**, which implements this data format.
+
+If you use Maven you can add the following to your `pom.xml`,
+substituting the version number for the latest \& greatest release (see
+the download page for the latest versions).
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-tarfile</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-telegram.md b/camel-telegram.md
index 96386b0d9ecdfa6e17ca8a002862e14861c181c6..f5cb2517624443a127da14e16bc4caaa0b92ffa5 100644
--- a/camel-telegram.md
+++ b/camel-telegram.md
@@ -43,16 +43,16 @@ The Telegram component supports both consumer and producer endpoints. It
can also be used in **reactive chatbot mode** (to consume, then produce
messages).
-# Producer Example
+## Producer
The following is a basic example of how to send a message to a Telegram
chat through the Telegram Bot API.
-in Java DSL
+**Telegram producer example in Java DSL**
from("direct:start").to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere");
-or in Spring XML
+**Telegram producer example in Spring XML**
@@ -74,112 +74,112 @@ belong to the package `org.apache.camel.component.telegram.model`)
-
+
-
+
OutgoingTextMessage
To send a text message to a
chat
-
+
OutgoingPhotoMessage
To send a photo (JPG, PNG) to a
chat
-
+
OutgoingAudioMessage
To send a mp3 audio to a chat
-
+
OutgoingVideoMessage
To send a mp4 video to a chat
-
+
OutgoingDocumentMessage
To send a file to a chat (any media
type)
-
+
OutgoingStickerMessage
To send a sticker to a chat
(WEBP)
-
+
OutgoingAnswerInlineQuery
To send answers to an inline
query
-
+
EditMessageTextMessage
To edit text and game messages
(editMessageText)
-
+
EditMessageCaptionMessage
To edit captions of messages
(editMessageCaption)
-
+
EditMessageMediaMessage
To edit animation, audio, document,
photo, or video messages. (editMessageMedia)
-
+
EditMessageReplyMarkupMessage
To edit only the reply markup of a
message. (editMessageReplyMarkup)
-
+
EditMessageDelete
To delete a message, including service
messages. (deleteMessage)
-
+
SendLocationMessage
To send a location
(setSendLocation)
-
+
EditMessageLiveLocationMessage
To send changes to a live location
(editMessageLiveLocation)
-
+
StopMessageLiveLocationMessage
To stop updating a live location
message sent by the bot or via the bot (for inline bots) before
live_period expires (stopMessageLiveLocation)
-
+
SendVenueMessage
To send information about a venue
(sendVenue)
-
+
byte[]
To send any media type supported. It
requires the CamelTelegramMediaType header to be set to the
appropriate media type
-
+
String
To send a text message to a chat. It
gets converted automatically into a
@@ -188,15 +188,17 @@ gets converted automatically into a
-# Consumer Example
+## Consumer
The following is a basic example of how to receive all messages that
telegram users are sending to the configured Bot. In Java DSL
+**Telegram consumer example in Java DSL**
+
from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
.bean(ProcessorBean.class)
-or in Spring XML
+**Telegram consumer example in Spring XML**
@@ -225,18 +227,18 @@ Supported types for incoming messages are
-
+
-
+
IncomingMessage
The full object representation of an
incoming message
-
+
String
The content of the message, for text
messages only
@@ -244,7 +246,7 @@ messages only
-# Reactive Chat-Bot Example
+## Reactive Chat-Bot Example
The reactive chatbot mode is a simple way of using the Camel component
to build a simple chatbot that replies directly to chat messages
@@ -252,11 +254,13 @@ received from the Telegram users.
The following is a basic configuration of the chatbot in Java DSL
+**Telegram reactive example in Java**
+
from("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere")
.bean(ChatBotLogic.class)
.to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere");
-or in Spring XML
+**Telegram reactive example in Spring XML**
@@ -285,7 +289,7 @@ Every non-null string returned by the `chatBotProcess` method is
automatically routed to the chat that originated the request (as the
`CamelTelegramChatId` header is used to route the message).
-# Getting the Chat ID
+## Getting the Chat ID
If you want to push messages to a specific Telegram chat when an event
occurs, you need to retrieve the corresponding chat ID. The chat ID is
@@ -310,7 +314,7 @@ a message to it.
Note that the corresponding URI parameter is simply `chatId`.
-# Customizing keyboard
+## Customizing keyboard
You can customize the user keyboard instead of asking him to write an
option. `OutgoingTextMessage` has the property `ReplyMarkup` which can
@@ -364,7 +368,7 @@ If you want to disable it, the next message must have the property
})
.to("telegram:bots?authorizationToken=123456789:insertYourAuthorizationTokenHere");
-# Webhook Mode
+## Webhook Mode
The Telegram component supports usage in the **webhook mode** using the
**camel-webhook** component.
diff --git a/camel-test-junit5.md b/camel-test-junit5.md
new file mode 100644
index 0000000000000000000000000000000000000000..953370a4ecd0c7f2a6a8067465a8619408a2e773
--- /dev/null
+++ b/camel-test-junit5.md
@@ -0,0 +1,127 @@
+# Test-junit5.md
+
+**Since Camel 3.0**
+
+The `camel-test-junit5` module is used for unit testing Camel.
+
+The class `org.apache.camel.test.junit5.CamelTestSupport` provides a
+base JUnit class which you would extend and implement your Camel unit
+test.
+
+# Simple unit test example
+
+Shown below is a basic JUnit test which uses `camel-test-junit5`. The
+`createRouteBuilder` method is used to build the routes to be tested.
+The methods annotated with `@Test` are the JUnit test methods that will
+be executed. The base class `CamelTestSupport` has a number of helper
+methods to configure testing; see the Javadoc of this class for more
+details.
+
+ import org.apache.camel.RoutesBuilder;
+ import org.apache.camel.builder.RouteBuilder;
+ import org.apache.camel.component.mock.MockEndpoint;
+ import org.apache.camel.test.junit5.CamelTestSupport;
+ import org.junit.jupiter.api.Test;
+
+ public class SimpleMockTest extends CamelTestSupport {
+
+ @Test
+ public void testMock() throws Exception {
+ getMockEndpoint("mock:result").expectedBodiesReceived("Hello World");
+
+ template.sendBody("direct:start", "Hello World");
+
+ MockEndpoint.assertIsSatisfied(context);
+ }
+
+ @Override
+ protected RoutesBuilder createRouteBuilder() {
+ return new RouteBuilder() {
+ @Override
+ public void configure() {
+ from("direct:start").to("mock:result");
+ }
+ };
+ }
+
+ }
+
+# Migrating Camel Tests from JUnit 4 to JUnit 5
+
+Find below some hints to help in migrating camel tests from JUnit 4 to
+JUnit 5.
+
+Projects using `camel-test` would need to use `camel-test-junit5`. For
+instance, maven users would update their `pom.xml` file as below:
+
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-test-junit5</artifactId>
+      <scope>test</scope>
+    </dependency>
+
+
+It’s possible to run JUnit 4 \& JUnit 5 based Camel tests side by side
+by including the following dependencies: `camel-test`,
+`camel-test-junit5` and `junit-vintage-engine`. This configuration
+allows migrating Camel tests one by one.
+
+## Migration Steps
+
+- Imports of `org.apache.camel.test.junit4.*` should be replaced with
+  `org.apache.camel.test.junit5.*`
+
+- `TestSupport` static methods should be imported where needed, for
+ instance
+ `import static org.apache.camel.test.junit5.TestSupport.assertIsInstanceOf`
+
+- Usage of the field `CamelTestSupport.log` should be replaced by
+ another logger, for instance
+ `org.slf4j.LoggerFactory.getLogger(MyCamelTest.class);`
+
+- Usage of the method `CamelTestSupport.createRegistry` should be
+ replaced by `CamelTestSupport.createCamelRegistry()`
+
+- Overrides of `isCreateCamelContextPerClass()` returning `false`
+ should be removed
+
+- Overrides of `isCreateCamelContextPerClass()` returning `true`
+ should be replaced by `@TestInstance(Lifecycle.PER_CLASS)`
+
+- Usage of the method `CamelTestSupport.assertMockEndpointsSatisfied`
+ should be replaced by `MockEndpoint.assertIsSatisfied(context)`
+
+Once Camel related steps have been performed, there are still typical
+JUnit 5 migration steps to remember:
+
+- New JUnit 5 assertions should be imported where needed, for instance
+ `import static org.junit.jupiter.api.Assertions.assertEquals`
+
+- Assertion messages should be moved to the last parameter where
+ needed, for instance `assertEquals("message", 2, 1)` becomes
+ `assertEquals(2, 1, "message")`
+
+- `org.junit.Test` should be changed in favor of
+ `org.junit.jupiter.api.Test`
+
+- `org.junit.Ignore` should be changed in favor of
+ `org.junit.jupiter.api.Disabled`
+
+- `org.junit.Before` should be changed in favor of
+ `org.junit.jupiter.api.BeforeEach`
+
+- `org.junit.After` should be changed in favor of
+ `org.junit.jupiter.api.AfterEach`
+
+- `org.junit.BeforeClass` should be changed in favor of
+  `org.junit.jupiter.api.BeforeAll`
+
+- `org.junit.AfterClass` should be changed in favor of
+  `org.junit.jupiter.api.AfterAll`
+
+- Built-in `assertThat` from third-party assertion libraries should be
+ used. For instance, use `org.hamcrest.MatcherAssert.assertThat` from
+ `java-hamcrest`
+
+Please check the [JUnit 5 User
+Guide](https://junit.org/junit5/docs/current/user-guide/) for additional
+insights about writing tests using JUnit 5.
diff --git a/camel-test-main-junit5.md b/camel-test-main-junit5.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a36d41c5d087981f0412e9c5da23cc3f30f8dac
--- /dev/null
+++ b/camel-test-main-junit5.md
@@ -0,0 +1,762 @@
+# Test-main-junit5.md
+
+**Since Camel 3.16**
+
+The `camel-test-main-junit5` module is used for unit testing Camel
+launched in Standalone mode with Camel Main.
+
+This module proposes two approaches to configure and launch Camel like
+a Camel Main application for testing purposes.
+
+- **Legacy**: This approach consists of extending the base class
+ `org.apache.camel.test.main.junit5.CamelMainTestSupport` and
+ overriding the appropriate methods to enable or disable a feature.
+
+- **Annotation**: This approach consists of annotating the test
+ classes with `org.apache.camel.test.main.junit5.CamelMainTest` with
+ the appropriate attributes to enable or disable a feature.
+
+In the next section, for each use case both approaches are proposed with
+the labels **legacy** and **annotation** to differentiate them.
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-test-main-junit5</artifactId>
+      <scope>test</scope>
+      <version>x.x.x</version>
+    </dependency>
+
+
+
+# Specify a main class
+
+Most of the time, a Camel Main application has a main class from which
+all the Camel related classes are found.
+
+In practice, this is done simply by providing the main class of the
+application in the constructor of Camel Main like for example
+`new Main(SomeApplication.class)` where `SomeApplication.class` is the
+main class of the application.
+
+## Legacy
+
+The same behavior can be simulated with `CamelMainTestSupport` by
+overriding the method `getMainClass()` to provide the main class of the
+application to test.
+
+## Annotation
+
+The same behavior can be simulated with `CamelMainTest` by setting the
+attribute `mainClass` to provide the main class of the application to
+test.
+
+## Examples
+
+In the next examples, the main class of the application to test is the
+class `SomeMainClass`.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+    protected Class<?> getMainClass() {
+ return SomeMainClass.class;
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(mainClass = SomeMainClass.class)
+class SomeTest {
+
+ // Rest of the test class
+ }
+
+# Configure Camel as a Camel Main application
+
+A Camel Main application has access to many specific configuration
+properties that are not available from the base class
+`CamelTestSupport`.
+
+## Legacy
+
+The base class `CamelMainTestSupport` provides the method
+`configure(MainConfigurationProperties configuration)` that can be
+overridden to configure Camel for the test like a Camel Main
+application.
+
+## Annotation
+
+The annotation `Configure` allows marking a method with an arbitrary
+name and a parameter of type `MainConfigurationProperties` to be called
+to configure Camel for the test like a Camel Main application. Several
+methods in the test class and/or its parent classes can be annotated.
+
+## Examples
+
+In the next examples, the test class `SomeTest` adds a configuration
+class and specifies the XML routes to include.
+
+Legacy
+import org.apache.camel.main.MainConfigurationProperties;
+
+ class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ protected void configure(MainConfigurationProperties configuration) {
+ // Add a configuration class
+ configuration.addConfiguration(SomeConfiguration.class);
+ // Add all the XML routes
+ configuration.withRoutesIncludePattern("routes/*.xml");
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+import org.apache.camel.main.MainConfigurationProperties;
+import org.apache.camel.test.main.junit5.Configure;
+
+ @CamelMainTest
+ class SomeTest {
+
+ @Configure
+ protected void configure(MainConfigurationProperties configuration) {
+ // Add a configuration class
+ configuration.addConfiguration(SomeConfiguration.class);
+ // Add all the XML routes
+ configuration.withRoutesIncludePattern("routes/*.xml");
+ }
+
+ // Rest of the test class
+ }
+
+# Configure a custom property placeholder location
+
+By default, the property placeholder used is `application.properties`
+from the default package. There are several ways to configure the
+property placeholder locations: you can either provide the file name of
+the property placeholder or a list of locations.
+
+## A list of property placeholder locations
+
+## Legacy
+
+The method `getPropertyPlaceholderLocations()` can be overridden to
+provide a comma-separated list of locations.
+
+## Annotation
+
+The attribute `propertyPlaceholderLocations` can be set to provide a
+list of locations.
+
+The order in the list matters: in case a property is defined at several
+locations, the value found in the first location where it is defined is
+used.
+
+## Examples
+
+In the next examples, the property placeholder locations configured are
+`extra-application.properties` and `application.properties` both
+available in the default package.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ protected String getPropertyPlaceholderLocations() {
+ return "classpath:extra-application.properties,classpath:application.properties";
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(propertyPlaceholderLocations = { "classpath:extra-application.properties", "classpath:application.properties" })
+class SomeTest {
+
+ // Rest of the test class
+ }
+
+## The file name of the property placeholder
+
+For the sake of simplicity, you can provide only a file name in case
+you need a single property placeholder location.
+
+## Legacy
+
+The method `getPropertyPlaceholderFileName()` can be overridden to
+provide the file name of the property placeholder.
+
+## Annotation
+
+The attribute `propertyPlaceholderFileName` can be set to provide the
+file name of the property placeholder.
+
+It can then infer the locations of the property placeholder as it
+assumes that it is located either in the same package as the test class
+or directly in the default package.
+
+## Examples
+
+In the next examples, since the test class is `com.somecompany.SomeTest`
+and the file name of the property placeholder is
+`custom-application.properties`, the actual possible locations of the
+property placeholder are
+`classpath:com/somecompany/custom-application.properties;optional=true,classpath:custom-application.properties;optional=true`.
+This means that each property is first looked up in the properties file
+of the same package, if it exists; if it cannot be found there, it is
+looked up in the properties file with the same name in the default
+package, if that exists.
+
+Since the properties files are declared as optional, no exception is
+raised if they are both absent.
+
+Legacy
+package com.somecompany;
+
+ class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ protected String getPropertyPlaceholderFileName() {
+ return "custom-application.properties";
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+package com.somecompany;
+
+ @CamelMainTest(propertyPlaceholderFileName = "custom-application.properties")
+ class SomeTest {
+
+ // Rest of the test class
+ }
+
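The location inference described in this section can be sketched in plain Java. This is a simplified illustration, not the module's actual code; `inferLocations` is a hypothetical helper:

```java
public class PlaceholderLocationSketch {

    // Builds the two candidate locations for a property placeholder file name:
    // first the test class's own package, then the default package. Both are
    // marked optional so no error is raised when a file is absent.
    static String inferLocations(Class<?> testClass, String fileName) {
        String pkgPath = testClass.getPackageName().replace('.', '/');
        String samePackage = "classpath:" + (pkgPath.isEmpty() ? "" : pkgPath + "/")
                + fileName + ";optional=true";
        String defaultPackage = "classpath:" + fileName + ";optional=true";
        return samePackage + "," + defaultPackage;
    }

    public static void main(String[] args) {
        System.out.println(inferLocations(PlaceholderLocationSketch.class,
                "custom-application.properties"));
    }
}
```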
+# Replace an existing bean
+
+In Camel Main, you have the opportunity to bind custom beans dynamically
+using the specific annotation `@BindToRegistry`, which is very helpful,
+but for testing purposes you may need to replace a bean with a mock or
+test implementation.
+
+## Legacy
+
+To bind additional beans, you can still override the well-known method
+`bindToRegistry(Registry registry)`, but this method cannot be used to
+replace a bean created and bound automatically by Camel, as it is called
+too early in the initialization process of Camel. To work around this
+problem, you can instead bind your beans by overriding the new method
+`bindToRegistryAfterInjections(Registry registry)`, which is called
+after existing injections and automatic binding have been done.
+
+## Annotation
+
+The annotation `ReplaceInRegistry` allows marking a method or a field
+to replace an existing bean in the registry.
+
+- In the case of a field, the name and its type are used to identify
+ the bean to replace, and the value of the field is the new value of
+ the bean. The field can be in the test class or in a parent class.
+
+- In the case of a method, the name and its return type are used to
+ identify the bean to replace, and the return value of the method is
+ the new value of the bean. The method can be in the test class or in
+ a parent class.
+
+## Examples
+
+In the next examples, an instance of a custom bean of type
+`CustomGreetings` is used to replace the bean of type `Greetings`
+automatically bound by Camel with the name `myGreetings`.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @PropertyInject("name")
+ String name;
+
+ @Override
+ protected void bindToRegistryAfterInjections(Registry registry) throws Exception {
+ registry.bind("myGreetings", Greetings.class, new CustomGreetings(name));
+ }
+
+ // Rest of the test class
+ }
+
+Annotation (field)
+**Using a field**
+
+ import org.apache.camel.test.main.junit5.ReplaceInRegistry;
+
+ @CamelMainTest
+ class SomeTest {
+
+ @ReplaceInRegistry
+ Greetings myGreetings = new CustomGreetings("Willy"); //
+
+ // Rest of the test class
+ }
+
+- We cannot rely on the value of a property injected with
+  `@PropertyInject` as in the previous code snippet, because the
+  injection occurs after the instantiation of the test class, so the
+  field would be `null`.
+
+Annotation (method)
+**Using a method**
+
+ import org.apache.camel.test.main.junit5.ReplaceInRegistry;
+
+ @CamelMainTest
+ class SomeTest {
+
+ @PropertyInject("name")
+ String name;
+
+ @ReplaceInRegistry
+ Greetings myGreetings() {
+ return new CustomGreetings(name);
+ }
+
+ // Rest of the test class
+ }
+
+# Override existing properties
+
+Some properties are inherited from properties file like the
+`application.properties` and need to be overridden within the context of
+the test.
+
+## Legacy
+
+The method `useOverridePropertiesWithPropertiesComponent()` can be
+overridden to provide an instance of type `java.util.Properties` that
+contains the properties to override.
+
+## Annotation
+
+The attribute `properties` can be set to provide an array of `String`
+representing the key/value pairs of properties to override in the
+following format
+`"property-key-1=property-value-1", "property-key-2=property-value-1", ...`.
+
+## Examples
+
+In the next examples, the value of the property whose name is `host` is
+replaced with `localhost`.
+
+Legacy
+import static org.apache.camel.util.PropertiesHelper.asProperties;
+
+ class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ protected Properties useOverridePropertiesWithPropertiesComponent() {
+ return asProperties("host", "localhost");
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(properties = { "host=localhost" })
+class SomeTest {
+
+ // Rest of the test class
+ }
+
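The `"key=value"` pair format used by the `properties` attribute can be converted into `java.util.Properties` with a sketch like the following. This only illustrates the format; `toOverrideProperties` is a hypothetical helper, not part of the module:

```java
import java.util.Properties;

public class OverridePropertiesSketch {

    // Converts "key=value" strings, as used by the annotation's `properties`
    // attribute, into a java.util.Properties instance.
    static Properties toOverrideProperties(String... pairs) {
        Properties props = new Properties();
        for (String pair : pairs) {
            int eq = pair.indexOf('=');
            if (eq < 0) {
                throw new IllegalArgumentException("Expected key=value but got: " + pair);
            }
            props.setProperty(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(toOverrideProperties("host=localhost"));
    }
}
```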
+# Replace from endpoints
+
+To be able to easily test the behavior of a route without being
+affected by the type of `from` endpoint used in the route, it can be
+very helpful to replace the `from` endpoint with a more test-friendly
+endpoint.
+
+## Legacy
+
+The method `replaceRouteFromWith()` can be called to provide the id of
+the route to modify and the URI of the new `from` endpoint.
+
+## Annotation
+
+The attribute `replaceRouteFromWith` can be set to provide an array of
+`String` representing a list of route IDs to modify and the URI of the
+new `from` endpoint in the following format
+`"route-id-1=new-uri-1", "route-id-2=new-uri-2", ...`.
+
+## Examples
+
+In the next examples, the route whose id is `main-route` is advised to
+replace its current from endpoint with a `direct:main` endpoint.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ @BeforeEach
+ public void setUp() throws Exception {
+ replaceRouteFromWith("main-route", "direct:main");
+ super.setUp();
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(replaceRouteFromWith = { "main-route=direct:main" })
+class SomeTest {
+
+ // Rest of the test class
+ }
+
+# Configure additional camel configuration classes
+
+In practice, additional camel configuration classes can be provided for
+the sake of simplicity directly from the constructor of the Camel Main
+like for example
+`new Main(SomeApplication.class, SomeCamelConfiguration.class)` where
+`SomeApplication.class` is the main class of the application and
+`SomeCamelConfiguration.class` is an additional camel configuration
+class.
+
+## Legacy
+
+There is no specific method for that, but it can be done by overriding
+the method `configure(MainConfigurationProperties configuration)` like
+described in a previous section.
+
+## Annotation
+
+The attribute `configurationClasses` can be set to provide an array of
+additional camel configuration classes.
+
+## Examples
+
+In the next examples, the camel configuration class
+`SomeCamelConfiguration` is added to the global configuration.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ protected void configure(MainConfigurationProperties configuration) {
+ // Add the configuration class
+ configuration.addConfiguration(SomeCamelConfiguration.class);
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(configurationClasses = SomeCamelConfiguration.class)
+class SomeTest {
+
+ // Rest of the test class
+ }
+
+# Advice a route
+
+It is possible to modify a route within the context of a test by using
+advices, generally represented by specific route builders of type
+`AdviceWithRouteBuilder`, which provides out-of-the-box utility methods
+that make it easy to advise a route.
+
+## Legacy
+
+A route needs to be advised directly in the test method using the
+utility method `AdviceWith.adviceWith`, and the Camel context has to be
+started explicitly once the route has been advised for the advice to
+take effect.
+
+## Annotation
+
+The attribute `advices` can be set to provide an array of annotations
+of type `AdviceRouteMapping` representing a mapping between a route to
+advise and the corresponding route builders to call to advise the
+route. As the route builders are instantiated using the default
+constructor, make sure that the default constructor exists.
+
+## Examples
+
+In the next examples, the route whose id is `main-route` is advised to
+replace its current from endpoint with a `direct:main` endpoint.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ public boolean isUseAdviceWith() { //
+ return true;
+ }
+
+ @Test
+ void someTest() throws Exception {
+ // Advice the route to replace the from endpoint
+ AdviceWith.adviceWith(context, "main-route", ad -> ad.replaceFromWith("direct:main")); //
+
+ // must start Camel after we are done using advice-with
+ context.start(); //
+
+ // Rest of the test method
+ }
+
+ // Rest of the test class
+ }
+
+- Override the method `isUseAdviceWith` to return `true` indicating
+ that the Camel context should not be started before calling the test
+ method as there is at least one route to advise.
+
+- Call a utility method `AdviceWith.adviceWith` to advice a route
+
+- Start the Camel context as it was not yet started
+
+Annotation
+@CamelMainTest(advices = @AdviceRouteMapping(route = "main-route", advice = SomeTest.SomeRouteBuilder.class))
+class SomeTest {
+
+ static class SomeRouteBuilder extends AdviceWithRouteBuilder {
+
+ @Override
+ public void configure() throws Exception {
+ replaceFromWith("direct:main");
+ }
+ }
+
+ // Rest of the test class
+ }
+
+# Mock and skip an endpoint
+
+For testing purposes, it can be helpful to mock only, or to mock and
+skip, all the endpoints matching a given pattern.
+
+## Legacy
+
+The method `isMockEndpoints()` can be overridden to provide the pattern
+that should match the endpoints to mock. The method
+`isMockEndpointsAndSkip()` can be overridden to provide the pattern
+that should match the endpoints to mock and skip.
+
+## Annotation
+
+The attribute `mockEndpoints` can be set to provide the pattern that
+should match the endpoints to mock. The attribute
+`mockEndpointsAndSkip` can be set to provide the pattern that should
+match the endpoints to mock and skip.
+
+## Examples
+
+In the next examples, the endpoints whose URI starts with `direct:` are
+mocked.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ public String isMockEndpoints() {
+ return "direct:*";
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(mockEndpoints = "direct:*")
+class SomeTest {
+
+ // Rest of the test class
+ }
+
+# Dump route coverage
+
+It is possible to dump the route coverage of a given test. This feature
+needs JMX to be enabled, which is done automatically when the feature
+itself is enabled. It also means that the `camel-management` module has
+to be part of the dependencies of the project. The feature can be
+enabled globally by setting the system property
+`CamelTestRouteCoverage` to `true`.
+
+The result is generated in
+`target/camel-route-coverage/_class-name_-_test-name_.xml`.
+
+## Legacy
+
+The method `isDumpRouteCoverage()` can be overridden to return `true`
+indicating that the feature is enabled.
+
+## Annotation
+
+The attribute `dumpRouteCoverage` can be set to `true` indicating that
+the feature is enabled.
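As a sketch following the Legacy/Annotation pattern used for the other features (the class names are illustrative):

```java
// Legacy style: override isDumpRouteCoverage() in the test class
class SomeTest extends CamelMainTestSupport {

    @Override
    public boolean isDumpRouteCoverage() {
        return true;
    }

    // Rest of the test class
}

// Annotation style: set the dumpRouteCoverage attribute
@CamelMainTest(dumpRouteCoverage = true)
class SomeOtherTest {

    // Rest of the test class
}
```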
+
+# Override the shutdown timeout
+
+The default shutdown timeout of Camel is not really suited for tests,
+as it can be very long. This feature overrides it to 10 seconds by
+default, but it can also be set to a custom value, expressed in
+seconds.
+
+## Legacy
+
+The method `getShutdownTimeout()` can be overridden to return the
+expected shutdown timeout.
+
+## Annotation
+
+The attribute `shutdownTimeout` can be set to the expected shutdown
+timeout.
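As a sketch following the same Legacy/Annotation pattern (the value of 5 seconds is purely illustrative, and the exact signature of `getShutdownTimeout()` should be checked against the version in use):

```java
// Legacy style: override getShutdownTimeout() in the test class
class SomeTest extends CamelMainTestSupport {

    @Override
    protected int getShutdownTimeout() {
        return 5; // expressed in seconds
    }

    // Rest of the test class
}

// Annotation style: set the shutdownTimeout attribute
@CamelMainTest(shutdownTimeout = 5)
class SomeOtherTest {

    // Rest of the test class
}
```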
+
+# Debug mode
+
+For debugging purposes, it is possible to have a callback invoked
+before and after each processor, allowing you to log specific messages
+or add breakpoints in your favorite IDE.
+
+## Legacy
+
+The method `isUseDebugger()` can be overridden to return `true`,
+indicating that the feature is enabled. The methods `debugBefore` and
+`debugAfter` can then be overridden to execute some specific code for
+debugging purposes.
+
+## Annotation
+
+The test class needs to implement the interface
+`org.apache.camel.test.main.junit5.DebuggerCallback` to enable the
+feature. The methods `debugBefore` and `debugAfter` can then be
+implemented to execute some specific code for debugging purposes.
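A sketch of the annotation-based approach; the callback parameter lists below are assumptions modeled on the `debugBefore`/`debugAfter` methods of `CamelTestSupport`:

```java
@CamelMainTest
class SomeTest implements DebuggerCallback {

    @Override
    public void debugBefore(Exchange exchange, Processor processor,
                            ProcessorDefinition<?> definition, String id, String label) {
        // log the incoming exchange, or set an IDE breakpoint here
    }

    @Override
    public void debugAfter(Exchange exchange, Processor processor,
                           ProcessorDefinition<?> definition, String id, String label,
                           long timeTaken) {
        // inspect the exchange after the processor has run
    }

    // Rest of the test class
}
```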
+
+# Enable JMX
+
+JMX is disabled by default when launching the tests, however, if needed,
+it is still possible to enable it.
+
+## Legacy
+
+The method `useJmx()` can be overridden to return `true`. It returns
+`false` by default.
+
+## Annotation
+
+The attribute `useJmx` can be set to `true`. It is set to `false` by
+default.
+
+## Examples
+
+In the next examples, JMX has been enabled for the test.
+
+Legacy
+class SomeTest extends CamelMainTestSupport {
+
+ @Override
+ protected boolean useJmx() {
+ return true;
+ }
+
+ // Rest of the test class
+ }
+
+Annotation
+@CamelMainTest(useJmx = true)
+class SomeTest {
+
+ // Rest of the test class
+ }
+
+# Nested tests
+
+The annotation-based approach supports natively [Nested
+tests](https://junit.org/junit5/docs/current/user-guide/#writing-tests-nested).
+It is even possible to annotate `@Nested` test class with
+`@CamelMainTest` to change the configuration inherited from the outer
+class. However, please note that not all attributes can be set at nested
+test class level. Indeed, for the sake of simplicity, the attributes
+`dumpRouteCoverage` and `shutdownTimeout` can only be set at outer class
+level.
+
+Depending on how many values an attribute accepts, setting it on a
+`@Nested` test class behaves differently:
+
+- In case of **multivalued** attributes like `properties`,
+ `replaceRouteFromWith`, `configurationClasses` and `advices`, the
+ values set on the `@Nested` test class are added to the values of
+ the outer classes, and the resulting values are ordered from
+ outermost to innermost.
+
+- In case of **mono-valued** attributes like `mainClass`,
+ `propertyPlaceholderFileName`, `mockEndpoints` and
+ `mockEndpointsAndSkip`, the value set on the innermost class is
+ used.
+
+The only exception is the attribute `propertyPlaceholderLocations`,
+which behaves like a mono-valued attribute: because it is tightly
+coupled with `propertyPlaceholderFileName`, it must have the same
+behavior for the sake of consistency.
+
+To have a better understanding of the behavior for each type of
+attribute, please check the following examples:
+
+## Multivalued
+
+In the next example, the property `some-property` is set to `foo` for
+all the tests in `SomeTest` including the tests in `SomeNestedTest`.
+Additionally, the property `some-other-property` is set to `bar` but
+only for all the tests in `SomeNestedTest`.
+
+ @CamelMainTest(properties = { "some-property=foo" })
+ class SomeTest {
+
+ @Nested
+ @CamelMainTest(properties = { "some-other-property=bar" })
+ class SomeNestedTest {
+
+ // Rest of the nested test class
+ }
+
+ // Rest of the test class
+ }
+
+## Mono-valued
+
+In the next example, `SomeMainClass` is used as the main class for all
+the tests directly inside `SomeTest`, but also the tests in the
+`@Nested` test class `SomeOtherNestedTest` as it is not redefined.
+`SomeOtherMainClass` is used as the main class for all the tests
+directly inside `SomeNestedTest`, but also the tests in the `@Nested`
+test class `SomeDeeplyNestedTest` as it is not redefined.
+
+ @CamelMainTest(mainClass = SomeMainClass.class)
+ class SomeTest {
+
+ @CamelMainTest(mainClass = SomeOtherMainClass.class)
+ @Nested
+ class SomeNestedTest {
+
+ @Nested
+ class SomeDeeplyNestedTest {
+
+ // Rest of the nested test class
+ }
+
+ // Rest of the nested test class
+ }
+
+ @Nested
+ class SomeOtherNestedTest {
+
+ // Rest of the nested test class
+ }
+
+ // Rest of the test class
+ }
+
+The annotations `@Configure` and `@ReplaceInRegistry` can also be used
+on methods or fields inside `@Nested` test classes knowing that the
+annotations of outer classes are processed before the annotations of
+inner classes.
diff --git a/camel-test-spring-junit5.md b/camel-test-spring-junit5.md
new file mode 100644
index 0000000000000000000000000000000000000000..a74cad0d56e07c3b720466867904b886e0391f0b
--- /dev/null
+++ b/camel-test-spring-junit5.md
@@ -0,0 +1,323 @@
+# Test-spring-junit5.md
+
+**Since Camel 3.0**
+
+The `camel-test-spring-junit5` module is used for testing Camel with
+Spring, both classic Spring XML files and Spring Boot.
+
+# Testing Spring Boot
+
+The recommended approach is to annotate the test class with
+`org.apache.camel.test.spring.junit5.CamelSpringBootTest`. This replaces
+the JUnit 4 `@RunWith` annotation using `SpringRunner.class` or
+`CamelSpringBootRunner.class`. To enable autoconfiguration of the Camel
+context and other Spring Boot auto-configurable components, use the
+annotation
+`org.springframework.boot.autoconfigure.EnableAutoConfiguration`. The
+Spring test context may be specified in one of three ways:
+
+- a nested class annotated with
+ `org.springframework.context.annotation.Configuration`. This may
+ define one or more Beans such as a `RouteBuilder`.
+
+- a `SpringBootTest` annotation with a classes parameter to specify
+ the configuration class or classes. The `@SpringBootTest` annotation
+ may also specify custom properties as shown in the example below.
+
+- a class annotated with `SpringBootConfiguration` accessible in the
+ package of the test class or a parent package.
+
+
+
+ package com.foo;
+
+ @CamelSpringBootTest
+ @EnableAutoConfiguration
+ @SpringBootTest(
+ properties = { "camel.springboot.name=customName" }
+ )
+ class CamelSpringBootSimpleTest {
+
+ @Autowired
+ ProducerTemplate producerTemplate;
+
+ @EndpointInject("mock:test")
+ MockEndpoint mockEndpoint;
+
+ //Spring context fixtures
+ @Configuration
+ static class TestConfig {
+
+ @Bean
+ RoutesBuilder route() {
+ return new RouteBuilder() {
+ @Override
+ public void configure() throws Exception {
+ from("direct:test").to("mock:test");
+ }
+ };
+ }
+ }
+
+ @Test
+ public void shouldAutowireProducerTemplate() {
+ assertNotNull(producerTemplate);
+ }
+
+ @Test
+ public void shouldSetCustomName() {
+ assertEquals("customName", producerTemplate.getCamelContext().getName());
+ }
+
+ @Test
+ public void shouldInjectEndpoint() throws InterruptedException {
+ mockEndpoint.setExpectedMessageCount(1);
+ producerTemplate.sendBody("direct:test", "msg");
+ mockEndpoint.assertIsSatisfied();
+ }
+ }
+
+# Testing classic Spring XML
+
+There are multiple approaches to test Camel Spring 5.x based routes with
+JUnit 5. One approach is to extend
+`org.apache.camel.test.spring.junit5.CamelSpringTestSupport`, for
+instance:
+
+ public class SimpleMockTest extends CamelSpringTestSupport {
+
+ @EndpointInject("mock:result")
+ protected MockEndpoint resultEndpoint;
+
+ @Produce("direct:start")
+ protected ProducerTemplate template;
+
+ @Override
+ protected AbstractApplicationContext createApplicationContext() {
+ // loads a Spring XML file
+ return new ClassPathXmlApplicationContext("org/apache/camel/test/patterns/SimpleMockTest.xml");
+ }
+
+ @Test
+ public void testMock() throws Exception {
+ String expectedBody = "Hello World";
+ resultEndpoint.expectedBodiesReceived(expectedBody);
+ template.sendBodyAndHeader(expectedBody, "foo", "bar");
+ resultEndpoint.assertIsSatisfied();
+ }
+ }
+
+The example above is loading a classic Spring XML file (with `<beans>`
+as the root tag).
+
+This approach provides feature parity with
+`org.apache.camel.test.junit5.CamelTestSupport` from
+[camel-test-junit5](#components:others:test-junit5.adoc) but does not
+support Spring annotations on the test class such as `@Autowired`,
+`@DirtiesContext`, and `@ContextConfiguration`.
+
+Instead of instantiating the `CamelContext` and routes programmatically,
+this class relies on a Spring context to wire the needed components
+together. If your test extends this class, you must provide the Spring
+context by implementing the following method:
+
+ protected abstract AbstractApplicationContext createApplicationContext();
+
+## Using the `@CamelSpringTest` annotation
+
+A better and recommended approach involves the usage of the
+`org.apache.camel.test.spring.junit5.CamelSpringTest` annotation, as
+shown:
+
+ package com.foo;
+
+ @CamelSpringTest
+ @ContextConfiguration
+ @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
+ public class CamelSpringPlainTest {
+
+ @Autowired
+ protected CamelContext camelContext;
+
+ @EndpointInject("mock:a")
+ protected MockEndpoint mockA;
+
+ @EndpointInject("mock:b")
+ protected MockEndpoint mockB;
+
+ @Produce("direct:start")
+ protected ProducerTemplate start;
+
+ @Test
+ public void testPositive() throws Exception {
+ assertEquals(ServiceStatus.Started, camelContext.getStatus());
+
+ mockA.expectedBodiesReceived("David");
+ mockB.expectedBodiesReceived("Hello David");
+
+ start.sendBody("David");
+
+ MockEndpoint.assertIsSatisfied(camelContext);
+ }
+ }
+
+The above test will by default load a Spring XML file using the naming
+pattern *className*-context.xml, which means the example above loads the
+file `com/foo/CamelSpringPlainTest-context.xml`.
+
+This XML file is a Spring XML file as shown:
+
+    <beans xmlns="http://www.springframework.org/schema/beans"
+           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+           xsi:schemaLocation="
+             http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
+             http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
+
+        <camelContext xmlns="http://camel.apache.org/schema/spring">
+            <route>
+                <from uri="direct:start"/>
+                <to uri="mock:a"/>
+                <transform>
+                    <simple>Hello ${body}</simple>
+                </transform>
+                <to uri="mock:b"/>
+            </route>
+        </camelContext>
+
+    </beans>
+
+This approach supports both Camel and Spring annotations, such as
+`@Autowired`, `@DirtiesContext`, and `@ContextConfiguration`. However,
+it does NOT have feature parity with
+`org.apache.camel.test.junit5.CamelTestSupport`.
+
+# Camel test annotations
+
+The following annotations can be used with `camel-spring-junit5` unit
+testing.
+
+
+| Annotation | Description |
+|---|---|
+| `@CamelSpringBootTest` | Used for testing Camel with Spring Boot |
+| `@CamelSpringTest` | Used for testing Camel with classic Spring XML (not Spring Boot) |
+| `@DisableJmx` | Used for disabling JMX |
+| `@EnableRouteCoverage` | Enables dumping route coverage statistics. The route coverage status is written as XML files in the `target/camel-route-coverage` directory after the test has finished. See more information at the Camel Maven Report Plugin. |
+| `@ExcludeRoutes` | Indicates if certain route builder classes should be excluded from package scan discovery |
+| `@MockEndpoints` | Auto-mocking of endpoints whose URIs match the provided filter. For more information, see Advice With. |
+| `@MockEndpointsAndSkip` | Auto-mocking of endpoints whose URIs match the provided filter, with the added provision that the endpoints are also skipped. For more information, see Advice With. |
+| `@ProvidesBreakpoint` | Indicates that the annotated method returns a `Breakpoint` for use in the test. Useful for intercepting traffic to all endpoints or simply for setting a break point in an IDE for debugging. The method must be `public static`, take no arguments, and return `Breakpoint`. |
+| `@ShutdownTimeout` | Timeout to use for shutdown. The default is 10 seconds. |
+| `@UseAdviceWith` | To enable testing with Advice With. |
+| `@UseOverridePropertiesWithPropertiesComponent` | To use custom `Properties` with the Properties component. The annotated method must be `public` and return `Properties`. |
+
+# Migrating Camel Spring Tests from JUnit 4 to JUnit 5
+
+Find below some hints to help in migrating Camel Spring tests from JUnit
+4 to JUnit 5.
+
+Projects using `camel-test-spring` would need to use
+`camel-test-spring-junit5`. For instance, Maven users would update their
+pom.xml file as below:
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-test-spring-junit5</artifactId>
+        <scope>test</scope>
+    </dependency>
+
+It’s possible to run JUnit 4 & JUnit 5 based Camel Spring tests side by
+side by including the following dependencies: `camel-test-spring`,
+`camel-test-spring-junit5` and `junit-vintage-engine`. This
+configuration allows migrating Camel tests one by one.
+
+## Migration steps
+
+- Migration steps from
+ [camel-test-junit5](#components:others:test-junit5.adoc) should have
+ been applied first
+
+- Imports of `org.apache.camel.test.spring.*` should be replaced with
+  `org.apache.camel.test.spring.junit5.*`
+
+- Usage of `@RunWith(CamelSpringRunner.class)` should be replaced with
+ `@CamelSpringTest`
+
+- Usage of `@BootstrapWith(CamelTestContextBootstrapper.class)` should
+ be replaced with `@CamelSpringTest`
+
+- Usage of `@RunWith(CamelSpringBootRunner.class)` should be replaced
+ with `@CamelSpringBootTest`
diff --git a/camel-threadpoolfactory-vertx.md b/camel-threadpoolfactory-vertx.md
new file mode 100644
index 0000000000000000000000000000000000000000..46603d2f35bc2738f610fb780cd1216eda27d892
--- /dev/null
+++ b/camel-threadpoolfactory-vertx.md
@@ -0,0 +1,56 @@
+# Threadpoolfactory-vertx.md
+
+**Since Camel 3.5**
+
+The Camel ThreadPoolFactory Vert.x component is a Vert.x based
+implementation of the `ThreadPoolFactory` SPI.
+
+By default, Camel will use its own thread pool for EIPs that can use
+parallel processing (such as splitter, aggregator). You can plug in
+different engines via an SPI interface. This is a Vert.x based plugin
+that uses the Vert.x worker thread pool (`executeBlocking`).
+
+# Restrictions
+
+This implementation has been designed to use Vert.x worker threads for
+EIPs where concurrency has been enabled (using default settings).
+However, this only applies when the EIP is not configured with a
+specific thread pool. For example, the first example below will use
+Vert.x worker threads, and the second will not:
+
+ from("direct:start")
+ .to("log:foo")
+ .split(body()).parallelProcessing()
+ .to("mock:split")
+ .end()
+ .to("mock:result");
+
+The following Split EIP will refer to a custom thread pool, and
+therefore Vert.x is not in use, and Camel will use the custom thread
+pool:
+
+ // register a custom thread pool profile with id myLowPool
+ context.getExecutorServiceManager().registerThreadPoolProfile(
+ new ThreadPoolProfileBuilder("myLowPool").poolSize(2).maxPoolSize(10).build()
+ );
+
+ from("direct:start")
+ .to("log:foo")
+ .split(body()).executorService("myLowPool")
+ .to("mock:split")
+ .end()
+ .to("mock:result");
+
+# VertX instance
+
+This implementation will first look up an existing `io.vertx.core.Vertx`
+instance in the registry to be used. However, you can configure an
+existing instance using the getter/setter on the `VertXThreadPoolFactory`
+class.
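A minimal sketch of wiring an existing instance, assuming a `setVertx` setter (per the getter/setter mentioned above) and registration through the `ExecutorServiceManager`; `existingVertx` and `camelContext` are placeholders:

```java
VertXThreadPoolFactory factory = new VertXThreadPoolFactory();
factory.setVertx(existingVertx); // reuse an existing io.vertx.core.Vertx instance
camelContext.getExecutorServiceManager().setThreadPoolFactory(factory);
```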
+
+# Auto-detection from classpath
+
+To use this implementation all you need to do is to add the
+`camel-threadpoolfactory-vertx` dependency to the classpath, and Camel
+should auto-detect this on startup and log as follows:
+
+ Using ThreadPoolFactory: camel-threadpoolfactory-vertx
diff --git a/camel-threads-eip.md b/camel-threads-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..0514e3375027ff745c264e26b1d2480bf058114f
--- /dev/null
+++ b/camel-threads-eip.md
@@ -0,0 +1,149 @@
+# Threads-eip.md
+
+How can I decouple the continued routing of a message from the current
+thread?
+
+
+
+
+
+Submit the message to a thread pool, which then is responsible for the
+continued routing of the message.
+
+In Camel, this is implemented as the Threads EIP.
+
+# Options
+
+# Exchange properties
+
+# Using Threads EIP
+
+The example below will add a Thread pool with a pool size of five
+threads before sending to `mock:result`.
+
+Java
+
+    from("seda:a")
+        .threads(5)
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <threads poolSize="5"/>
+        <to uri="mock:result"/>
+    </route>
+
+And to use a thread pool with a task queue of only 20 elements:
+
+Java
+
+    from("seda:a")
+        .threads(5).maxQueueSize(20)
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <threads poolSize="5" maxQueueSize="20"/>
+        <to uri="mock:result"/>
+    </route>
+
+And you can also use a thread pool with no queue (meaning that a task
+cannot be pending on a queue):
+
+Java
+
+    from("seda:a")
+        .threads(5).maxQueueSize(0)
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <threads poolSize="5" maxQueueSize="0"/>
+        <to uri="mock:result"/>
+    </route>
+
+## About rejected tasks
+
+The Threads EIP uses a thread pool which has a worker queue for tasks.
+When the worker queue gets full, the task is rejected.
+
+You can customize how to react to this using the `rejectedPolicy` and
+`callerRunsWhenRejected` options. The latter is used to easily switch
+between the two most common and recommended settings: let the current
+caller thread execute the task (i.e., it will become synchronous),
+which also gives the thread pool time to process its current tasks
+without adding more tasks (self-throttling). This is the default
+behavior.
+
+The `Abort` policy means the task is rejected, and a
+`RejectedExecutionException` is thrown.
+
+The reject policy options `Discard` and `DiscardOldest` are deprecated
+in Camel 3.x and removed from Camel 4 onwards.
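The four policies map onto the JDK's `java.util.concurrent` rejection handlers. A stdlib-only sketch (plain Java, not Camel API) contrasting `CallerRuns` and `Abort`:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectedTaskDemo {

    // One worker thread and a one-element work queue, so a third
    // submitted task is rejected while the first is still running.
    static ThreadPoolExecutor pool(RejectedExecutionHandler policy) {
        return new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1), policy);
    }

    // CallerRuns: the rejected task executes on the submitting thread.
    static String callerRunsThreadName() {
        ThreadPoolExecutor p = pool(new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch gate = new CountDownLatch(1);
        p.execute(() -> await(gate));   // occupies the single worker
        p.execute(() -> { });           // fills the queue
        String[] ranOn = new String[1];
        p.execute(() -> ranOn[0] = Thread.currentThread().getName()); // rejected -> caller runs it
        gate.countDown();
        p.shutdown();
        return ranOn[0];
    }

    // Abort: rejection throws a RejectedExecutionException instead.
    static boolean abortRejects() {
        ThreadPoolExecutor p = pool(new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch gate = new CountDownLatch(1);
        p.execute(() -> await(gate));
        p.execute(() -> { });
        boolean rejected = false;
        try {
            p.execute(() -> { });
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        gate.countDown();
        p.shutdown();
        return rejected;
    }

    static void await(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        System.out.println("CallerRuns ran the rejected task on: " + callerRunsThreadName());
        System.out.println("Abort rejected the task: " + abortRejects());
    }
}
```

The first pool runs the overflow task synchronously on the caller, which is exactly the self-throttling effect described above; the second surfaces the overload as an exception.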
+
+## Default values
+
+The Threads EIP uses the default values from the default [Thread Pool
+Profile](#manual:ROOT:threading-model.adoc). If the profile has not been
+altered, then the default profile is as follows:
+
+
+| Option | Default | Description |
+|---|---|---|
+| `poolSize` | 10 | Sets the default core pool size (minimum number of threads to keep in the pool) |
+| `keepAliveTime` | 60 | Sets the default keep-alive time (in seconds) for inactive threads |
+| `maxPoolSize` | 20 | Sets the default maximum pool size |
+| `maxQueueSize` | 1000 | Sets the default maximum number of tasks in the work queue. Use -1 for an unbounded queue. |
+| `allowCoreThreadTimeOut` | true | Sets the default for whether to allow core threads to timeout |
+| `rejectedPolicy` | CallerRuns | Sets the default handler for tasks which cannot be executed by the thread pool. Has four options: `Abort`, `CallerRuns`, `Discard`, `DiscardOldest`, which correspond to the same four options provided out of the box in the JDK. |
+
+## See Also
+
+See [Threading Model](#manual:ROOT:threading-model.adoc)
diff --git a/camel-thrift-dataformat.md b/camel-thrift-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..ba854d393ff804ff37832569f9c463623e4ed461
--- /dev/null
+++ b/camel-thrift-dataformat.md
@@ -0,0 +1,123 @@
+# Thrift-dataformat.md
+
+**Since Camel 2.20**
+
+Camel provides a Data Format to serialize between Java and [Apache
+Thrift](https://thrift.apache.org/). Apache Thrift is language-neutral
+and platform-neutral, so messages produced by your Camel routes may be
+consumed by other language implementations.
+
+Check the [Apache Thrift
+Implementation](https://github.com/apache/thrift) for additional
+details.
+
+# Thrift Options
+
+# Content type format
+
+It’s possible to parse a JSON message into the Thrift format and
+serialize it back using the native util converter. To use this option,
+set the `contentTypeFormat` value to *json*, or call thrift with a
+second parameter. If not specified, the native binary Thrift format is
+always used. The simple JSON format is write-only (marshal) and
+produces a simple output format suitable for parsing by scripting
+languages. The sample code is shown below:
+
+ from("direct:marshal")
+ .unmarshal()
+ .thrift("org.apache.camel.dataformat.thrift.generated.Work", "json")
+ .to("mock:reverse");
+
+# Thrift overview
+
+This is a quick overview of how to use Thrift. For more details, see
+the [complete tutorial](https://thrift.apache.org/tutorial/).
+
+# Defining the thrift format
+
+The first step is to define the format for the body of your exchange.
+This is defined in a .thrift file as so:
+
+**tutorial.thrift**
+
+ namespace java org.apache.camel.dataformat.thrift.generated
+
+ enum Operation {
+ ADD = 1,
+ SUBTRACT = 2,
+ MULTIPLY = 3,
+ DIVIDE = 4
+ }
+
+ struct Work {
+ 1: i32 num1 = 0,
+ 2: i32 num2,
+ 3: Operation op,
+ 4: optional string comment,
+ }
+
+# Generating Java classes
+
+Apache Thrift provides a compiler which will generate the Java
+classes for the format we defined in our .thrift file.
+
+You can also run the compiler for any additional supported languages you
+require manually.
+
+`thrift -r --gen java -out ../java/ ./tutorial-dataformat.thrift`
+
+This will generate a separate Java class for each type defined in the
+.thrift file, i.e., struct or enum. The generated classes implement
+`org.apache.thrift.TBase`, which is required by the serialization
+mechanism. For this reason, it is important that only these classes are
+used in the body of your exchanges. Camel will throw an exception on
+route creation if you attempt to tell the Data Format to use a class
+that does not implement `org.apache.thrift.TBase`.
+
+# Java DSL
+
+You can create a ThriftDataFormat instance and pass it to the Camel
+DataFormat marshal and unmarshal API like this:
+
+ ThriftDataFormat format = new ThriftDataFormat(new Work());
+
+ from("direct:in").marshal(format);
+ from("direct:back").unmarshal(format).to("mock:reverse");
+
+Or use the DSL `thrift()`, passing the default instance or default
+instance class name to unmarshal, like this:
+
+ // You don't need to specify the default instance for the thrift marshaling
+ from("direct:marshal").marshal().thrift();
+ from("direct:unmarshalA").unmarshal()
+ .thrift("org.apache.camel.dataformat.thrift.generated.Work")
+ .to("mock:reverse");
+
+ from("direct:unmarshalB").unmarshal().thrift(new Work()).to("mock:reverse");
+
+# Spring DSL
+
+The following example shows how to use Thrift to unmarshal using
+Spring, configuring the thrift data type:
+
+    <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
+        <route>
+            <from uri="direct:start"/>
+            <unmarshal>
+                <thrift instanceClass="org.apache.camel.dataformat.thrift.generated.Work"/>
+            </unmarshal>
+            <to uri="mock:result"/>
+        </route>
+    </camelContext>
+
+# Dependencies
+
+To use Thrift in your Camel routes, you need to add a dependency on
+**camel-thrift**, which implements this data format.
+
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-thrift</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-throttle-eip.md b/camel-throttle-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f158e3c0a24043ffd24d3dc124a9695d116186f
--- /dev/null
+++ b/camel-throttle-eip.md
@@ -0,0 +1,306 @@
+# Throttle-eip.md
+
+How can I throttle messages to ensure that a specific endpoint does not
+get overloaded, or we don’t exceed an agreed SLA with some external
+service?
+
+
+
+
+
+Use a Throttler that controls how many messages flow to the endpoint,
+and how fast.
+
+# Options
+
+# Exchange properties
+
+# Using Throttle
+
+The example below will throttle messages received on seda:a before
+being sent to mock:result, ensuring that a maximum of 3 messages are
+sent during a running 10-second window.
+
+Java
+
+    from("seda:a")
+        .throttle(3).timePeriodMillis(10000)
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle timePeriodMillis="10000">
+            <constant>3</constant>
+        </throttle>
+        <to uri="mock:result"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                constant: 3
+              timePeriodMillis: 10000
+          - to:
+              uri: mock:result
+
+To use a 10-second window, we set the `timePeriodMillis` to
+ten-thousand. The default value is 1000 (i.e., 1 second), meaning that
+setting just `throttle(3)` has the effect of setting the maximum number
+of requests per second.
+
+To throttle by 50 requests per second, it would look like this:
+
+Java
+
+    from("seda:a")
+        .throttle(50)
+        .to("seda:b");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle>
+            <constant>50</constant>
+        </throttle>
+        <to uri="seda:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                constant: 50
+          - to:
+              uri: mock:result
+
+## Dynamically changing maximum requests per period
+
+The Throttler uses an [Expression](#manual:ROOT:expression.adoc) to
+configure the number of requests. In all the examples from above, we
+used a [constant](#components:languages:constant-language.adoc).
+However, the expression can be dynamic, such as determined from a
+message header from the current `Exchange`.
+
+At runtime Camel evaluates the expression and converts the result to a
+`java.lang.Long` type. In the example below, we use a header from the
+message to determine the maximum requests per period. If the header is
+absent, then the Throttler uses the old value. This allows you to only
+provide a header if the value is to be changed:
+
+Java
+
+    from("seda:a")
+        .throttle(header("throttleValue")).timePeriodMillis(500)
+        .to("seda:b");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle timePeriodMillis="500">
+            <header>throttleValue</header>
+        </throttle>
+        <to uri="seda:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                # use a header to determine how many messages to throttle per 0.5 sec
+                header: throttleValue
+              timePeriodMillis: 500
+          - to:
+              uri: seda:b
+
+## Asynchronous delaying
+
+You can let the Throttler use non-blocking asynchronous delaying, which
+means Camel will use a scheduler to schedule a task to be executed in
+the future. The task will then continue routing. This allows the caller
+thread to not block and be able to service other messages, etc.
+
+You enable asynchronous delaying using `asyncDelayed` as shown:
+
+Java
+
+    from("seda:a")
+        .throttle(100).asyncDelayed()
+        .to("seda:b");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle asyncDelayed="true">
+            <constant>100</constant>
+        </throttle>
+        <to uri="seda:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                constant: 100
+              asyncDelayed: true
+          - to:
+              uri: seda:b
+
+## Rejecting processing if rate limit hit
+
+When a message is being *throttled* because the maximum request limit
+has been reached, the Throttler will by default wait until there is
+*free space* before continuing to route the message.
+
+Instead of waiting, you can also configure the Throttler to reject the
+message by throwing a `ThrottlerRejectedExecutionException`.
+
+Java
+
+    from("seda:a")
+        .throttle(100).rejectExecution(true)
+        .to("seda:b");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle rejectExecution="true">
+            <constant>100</constant>
+        </throttle>
+        <to uri="seda:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                constant: 100
+              timePeriodMillis: 100
+              rejectExecution: true
+          - to:
+              uri: seda:b
+
+## Throttling per group
+
+The Throttler will by default throttle all messages in the same group.
+However, it is possible to use a *correlation expression* to divide
+messages into multiple groups, where each group is throttled
+independently.
+
+For example, you can throttle by a [message](#message.adoc) header as
+shown in the following example:
+
+Java
+
+    from("seda:a")
+        .throttle(100).correlationExpression(header("region"))
+        .to("seda:b");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle>
+            <constant>100</constant>
+            <correlationExpression>
+                <header>region</header>
+            </correlationExpression>
+        </throttle>
+        <to uri="seda:b"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                constant: 100
+              correlationExpression:
+                header: region
+          - to:
+              uri: seda:b
+
+In the example above, messages are throttled by the header with name
+region. So suppose there are regions for US, EMEA, and ASIA. Then we
+have three different groups, each throttled at 100 messages per
+second.
+
+# Throttling Modes
+
+Apache Camel comes with two distinct throttling modes that let users
+control and manage the flow of requests in their applications.
+
+These modes address different aspects of request handling:
+
+**Total Requests Mode**
+Throttles requests based on the total number of requests made within a
+defined unit of time. It regulates the overall traffic flow to prevent
+overwhelming the system with an excessive number of requests.
+
+**Concurrent Connections Mode**
+Throttles requests by managing concurrent connections using a [leaky
+bucket algorithm.](https://en.wikipedia.org/wiki/Leaky_bucket) This
+algorithm controls the rate at which requests are processed
+simultaneously, preventing system overload.
+
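To make the distinction concrete, here is a stdlib-only sketch (plain Java, not Camel's actual Throttler implementation) of the Total Requests idea as a rolling-window counter:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of "total requests per time period" throttling:
// remember the timestamps of accepted requests and reject once the
// window already holds the maximum.
public class WindowThrottler {

    private final int maxRequests;
    private final long windowMillis;
    private final Deque<Long> accepted = new ArrayDeque<>();

    public WindowThrottler(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    // Returns true if a request arriving at nowMillis is allowed.
    public synchronized boolean tryAcquire(long nowMillis) {
        // drop timestamps that have fallen out of the rolling window
        while (!accepted.isEmpty() && nowMillis - accepted.peekFirst() >= windowMillis) {
            accepted.pollFirst();
        }
        if (accepted.size() >= maxRequests) {
            return false; // comparable to rejectExecution(true)
        }
        accepted.addLast(nowMillis);
        return true;
    }
}
```

With `maxRequests = 3` and `windowMillis = 10000`, this mirrors the `throttle(3).timePeriodMillis(10000)` example from above.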
+## Default Mode
+
+By default, Camel uses the **Total Requests Mode** as the default
+throttling mechanism.
+
+This means that, unless specified otherwise, the framework regulates the
+flow of requests based on the total number of requests per unit of time.
+
+## Choosing Throttling Mode
+
+Users can choose their preferred throttling mode using different
+approaches:
+
+**DSL Methods**
+
+- `totalRequestsMode()`: Sets the total requests mode.
+
+- `concurrentRequestsMode()`: Sets the concurrent connections mode.
+
+**Mode DSL Method**
+
+- `mode(String)`: Users can specify the throttling mode by passing
+ either `TotalRequests` or `ConcurrentRequests` as an argument.
+
+For example, `mode("ConcurrentRequests")` sets the throttling mode based
+on concurrent connections.
+
+These options provide users with fine-grained control over how Camel
+manages the flow of requests, allowing them to choose the mode that best
+aligns with their specific application requirements.
+
+Java
+
+    from("seda:a")
+        .throttle(3).mode("ConcurrentRequests")
+        .to("mock:result");
+
+XML
+
+    <route>
+        <from uri="seda:a"/>
+        <throttle mode="ConcurrentRequests">
+            <constant>3</constant>
+        </throttle>
+        <to uri="mock:result"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: seda:a
+        steps:
+          - throttle:
+              expression:
+                constant: 3
+              mode: ConcurrentRequests
+              timePeriodMillis: 10000
+          - to:
+              uri: mock:result
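For comparison with the examples above, the ConcurrentRequests mode can be pictured as a permit pool. Again a stdlib-only sketch, not Camel's Throttler implementation:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch of "concurrent connections" throttling: a permit
// pool caps how many requests are in flight at the same time.
public class ConcurrentThrottler {

    private final Semaphore permits;

    public ConcurrentThrottler(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Returns false when admitting the request would exceed the limit.
    public boolean tryEnter() {
        return permits.tryAcquire();
    }

    // Completing a request frees a slot for the next one.
    public void exit() {
        permits.release();
    }
}
```

Unlike the window counter, the limit here is only reached while requests are actually in flight; finishing a request immediately frees capacity.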
diff --git a/camel-thymeleaf.md b/camel-thymeleaf.md
index dac9a18b74be5f66e1b1aa0716c3fa8d3b315e64..5dbc743dc5ed10bf0eed509f5414983151a7df5f 100644
--- a/camel-thymeleaf.md
+++ b/camel-thymeleaf.md
@@ -37,7 +37,9 @@ template `fruit-template.html`:
The `fruit` header is now accessible from the `message.out.headers`.
-# Thymeleaf Context
+# Usage
+
+## Thymeleaf Context
Camel will provide exchange information in the Thymeleaf context (just a
`Map`). The `Exchange` is transferred as:
@@ -48,49 +50,49 @@ Camel will provide exchange information in the Thymeleaf context (just a
-
+
-
+
exchange
The Exchange
itself.
-
+
exchange.properties
The Exchange
properties.
-
+
headers
The headers of the In message.
-
+
camelContext
The Camel Context instance.
-
+
request
The In message.
-
+
in
The In message.
-
+
body
The In message body.
-
+
out
The Out message (only for InOut message
exchange pattern).
-
+
response
The Out message (only for InOut message
exchange pattern).
@@ -105,7 +107,7 @@ You can set up a custom Thymeleaf Context yourself by setting property
EngineContext engineContext = new EngineContext(variableMap);
exchange.getIn().setHeader("CamelThymeleafContext", engineContext);
-# Hot reloading
+## Hot reloading
The Thymeleaf template resource is, by default, hot reloadable for both
file and classpath resources (expanded jar). If you set
@@ -113,7 +115,7 @@ file and classpath resources (expanded jar). If you set
hot reloading is not possible. This scenario can be used in production
when the resource never changes.
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
@@ -127,29 +129,31 @@ resource. This allows you to provide a dynamic template at runtime.
-
+
-
-CamelThymeleafResourceUri
-String
+
+CamelThymeleafResourceUri
+String
A URI for the template resource to use
instead of the endpoint configured.
-
-CamelThymeleafTemplate
-String
+
+CamelThymeleafTemplate
+String
The template to use instead of the
endpoint configured.
-# Samples
+# Examples
For a simple use case, you could use something like:
@@ -193,7 +197,7 @@ should use it dynamically via a header, so, for example:
.setHeader("CamelThymeleafTemplate").constant("Hi this is a thymeleaf template that can do templating ${body}")
 .to("thymeleaf:dummy?allowTemplateFromHeader=true");
-# The Email Sample
+## The Email Example
In this sample, we want to use Thymeleaf templating for an order
confirmation email. The email template is laid out in Thymeleaf as:
diff --git a/camel-tika.md b/camel-tika.md
index c721e1158f5fd51719da8e7ba288b916fb1feddf..fedd0c8a418ae1d671b90b9bfaa21aa04cd985de 100644
--- a/camel-tika.md
+++ b/camel-tika.md
@@ -21,14 +21,16 @@ dependency to their `pom.xml`:
-# To Detect a file’s MIME Type
+# Usage
+
+## To Detect a file’s MIME Type
The file should be placed in the Body.
from("direct:start")
.to("tika:detect");
-# To Parse a File
+## To Parse a File
The file should be placed in the Body.
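For example, a minimal parse route mirrors the detect example:

```java
from("direct:start")
    .to("tika:parse");
```

The parsed output is placed in the message body.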
diff --git a/camel-timer.md b/camel-timer.md
index cc33ef98b49b77eb0aa0133e730b3c705703b11c..017f902feee57b8576ce84f23d39d771955b0cb8 100644
--- a/camel-timer.md
+++ b/camel-timer.md
@@ -23,7 +23,9 @@ The *IN* body of the generated exchange is `null`. Therefore, calling
See also the [Quartz](#quartz-component.adoc) component that supports
much more advanced scheduling.
-# Exchange Properties
+# Usage
+
+## Exchange Properties
When the timer is fired, it adds the following information as properties
to the `Exchange`:
@@ -35,42 +37,42 @@ to the `Exchange`:
-
+
-
+
Exchange.TIMER_NAME
String
The value of the name
option.
-
+
Exchange.TIMER_TIME
Date
The value of the time
option.
-
+
Exchange.TIMER_PERIOD
long
The value of the period
option.
-
+
Exchange.TIMER_FIRED_TIME
Date
The time when the consumer
fired.
-
+
Exchange.TIMER_COUNTER
Long
@@ -80,7 +82,7 @@ style="text-align: left;">Exchange.TIMER_COUNTER
-# Sample
+# Example
To set up a route that generates an event every 60 seconds:
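A sketch of such a route in Java (the `period` option is in milliseconds, and the log endpoint is just an illustrative consumer):

```java
from("timer:foo?period=60000")
    .to("log:tick");
```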
@@ -96,7 +98,7 @@ And the route in Spring DSL:
-# Firing as soon as possible
+## Firing as soon as possible
If you want to fire messages in a Camel route as soon as possible, you
can use a negative delay:
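For example (a sketch; the `repeatCount` limit and endpoint names are illustrative):

```java
// delay=-1 fires immediately; repeatCount caps the number of messages
from("timer:foo?delay=-1&repeatCount=10")
    .to("log:asap");
```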
@@ -115,7 +117,7 @@ reached.
If you don’t specify a `repeatCount`, then the timer continues firing
messages until the route is stopped.
-# Firing only once
+## Firing only once
You may want to fire a message in a Camel route only once, such as when
starting the route. To do that, you use the `repeatCount` option as
@@ -145,11 +147,11 @@ shown:
|fixedRate|Events take place at approximately regular intervals, separated by the specified period.|false|boolean|
|includeMetadata|Whether to include metadata in the exchange such as fired time, timer name, timer count etc.|false|boolean|
|period|Generate periodic events every period. Must be zero or positive value. The default value is 1000.|1000|duration|
-|repeatCount|Specifies a maximum limit of number of fires. So if you set it to 1, the timer will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.||integer|
+|repeatCount|Specifies a maximum limit for the number of fires. Therefore, if you set it to 1, the timer will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever.||integer|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
-|daemon|Specifies whether or not the thread associated with the timer endpoint runs as a daemon. The default value is true.|true|boolean|
+|daemon|Specifies whether the thread associated with the timer endpoint runs as a daemon. The default value is true.|true|boolean|
|pattern|Allows you to specify a custom Date pattern to use for setting the time option using URI syntax.||string|
|synchronous|Sets whether synchronous processing should be strictly used|false|boolean|
|time|A java.util.Date the first event should be generated. If using the URI, the pattern expected is: yyyy-MM-dd HH:mm:ss or yyyy-MM-dd'T'HH:mm:ss.||string|
diff --git a/camel-to-eip.md b/camel-to-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb1611ac211025f110eb2d6c4d9eaa18486c5af4
--- /dev/null
+++ b/camel-to-eip.md
@@ -0,0 +1,67 @@
+# To-eip.md
+
+Camel supports the [Message
+Endpoint](http://www.enterpriseintegrationpatterns.com/MessageEndpoint.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) using the
+[Endpoint](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html)
+interface.
+
+How does an application connect to a messaging channel to send and
+receive messages?
+
+
+
+
+
+Connect an application to a messaging channel using a Message Endpoint,
+a client of the messaging system that the application can then use to
+send or receive messages.
+
+In Camel the To EIP is used for sending [messages](#message.adoc) to
+static [endpoints](#message-endpoint.adoc).
+
+The To and [ToD](#toD-eip.adoc) EIPs are the most common patterns to use
+in Camel [routes](#manual:ROOT:routes.adoc).
+
+# Options
+
+# Exchange properties
+
+# Difference between To and ToD
+
+The `to` is used for sending messages to a static
+[endpoint](#message-endpoint.adoc). In other words `to` sends messages
+only to the **same** endpoint.
+
+The `toD` is used for sending messages to a dynamic
+[endpoint](#message-endpoint.adoc). The dynamic endpoint is evaluated
+*on-demand* by an [Expression](#manual:ROOT:expression.adoc). By
+default, the [Simple](#components:languages:simple-language.adoc)
+expression is used to compute the dynamic endpoint URI.
+
+The Java DSL also provides a `toF` EIP, which can be used to avoid
+concatenating route parameters, which makes the code harder to read.
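A sketch of `toF`, which builds the endpoint URI from a `String.format`-style pattern (the folder and file name here are illustrative):

```java
from("direct:start")
    // the %s placeholders are filled in from the trailing arguments
    .toF("file:%s?fileName=%s", "outbox", "result.txt");
```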
+
+# Using To
+
+The following example route demonstrates the use of a
+[File](#ROOT:file-component.adoc) consumer endpoint and a
+[JMS](#ROOT:jms-component.adoc) producer endpoint, by their
+[URIs](#manual::uris.adoc):
+
+Java
+from("file:messages/foo")
+.to("jms:queue:foo");
+
+XML
+
+
+
+
+
+YAML
+\- from:
+uri: file:messages/foo
+steps:
+\- to:
+uri: jms:queue:foo
diff --git a/camel-toD-eip.md b/camel-toD-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..363176241d8f84416e0bd007b05cdee0a4cd09b0
--- /dev/null
+++ b/camel-toD-eip.md
@@ -0,0 +1,230 @@
+# ToD-eip.md
+
+Camel supports the [Message
+Endpoint](http://www.enterpriseintegrationpatterns.com/MessageEndpoint.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) using the
+[Endpoint](https://www.javadoc.io/doc/org.apache.camel/camel-api/current/org/apache/camel/Endpoint.html)
+interface.
+
+How does an application connect to a messaging channel to send and
+receive messages?
+
+
+
+
+
+Connect an application to a messaging channel using a Message Endpoint,
+a client of the messaging system that the application can then use to
+send or receive messages.
+
+In Camel the ToD EIP is used for sending [messages](#message.adoc) to
+dynamic [endpoints](#message-endpoint.adoc).
+
+The [To](#to-eip.adoc) and ToD EIPs are the most common patterns to use
+in Camel [routes](#manual::routes.adoc).
+
+# Options
+
+# Exchange properties
+
+# Difference between To and ToD
+
+The `to` is used for sending messages to a static
+[endpoint](#message-endpoint.adoc). In other words `to` sends messages
+only to the **same** endpoint.
+
+The `toD` is used for sending messages to a dynamic
+[endpoint](#message-endpoint.adoc). The dynamic endpoint is evaluated
+*on-demand* by an [Expression](#manual::expression.adoc). By default,
+the [Simple](#languages:simple-language.adoc) expression is used to
+compute the dynamic endpoint URI.
+
+# Using ToD
+
+For example, to send a message to an endpoint which is dynamically
+determined by a [message header](#message.adoc), you can do as shown
+below:
+
+Java
+from("direct:start")
+.toD("${header.foo}");
+
+XML
+
+
+
+
+
+You can also prefix the URI with a value, because the endpoint
+[URI](#manual::uris.adoc) is evaluated using the
+[Simple](#languages:simple-language.adoc) language:
+
+Java
+from("direct:start")
+.toD("mock:${header.foo}");
+
+XML
+
+
+
+
+
+In the example above, we compute the dynamic endpoint with a prefix
+"mock:" and then the header foo is appended. So, for example, if the
+header foo has value order, then the endpoint is computed as
+"mock:order".
+
+## Using other languages with toD
+
+You can also use other languages such as
+[XPath](#languages:xpath-language.adoc). Doing this requires starting
+with `language:` as shown below. If you do not specify `language:`, then
+the prefix is interpreted as a component name. In some cases, a
+component and a language share the same name, such as xquery.
+
+Java
+from("direct:start")
+.toD("language:xpath:/order/@uri");
+
+XML
+
+
+
+
+
+## Avoid creating endless dynamic endpoints that take up resources
+
+When using dynamic computed endpoints with `toD` then you may compute a
+lot of dynamic endpoints, which results in an overhead of resources in
+use, by each dynamic endpoint uri, and its associated producer.
+
+For example, HTTP-based endpoints where you may have dynamic values in
+URI parameters when calling the HTTP service, such as:
+
+ from("direct:login")
+ .toD("http:myloginserver:8080/login?userid=${header.userName}");
+
+In the example above, the parameter `userid` is dynamically computed,
+which results in one endpoint and producer instance for each different
+userid. To avoid having too many dynamic endpoints, you can configure
+`toD` to reduce its cache size, for example, to a cache size of 10:
+
+Java
+from("direct:login")
+.toD("http:myloginserver:8080/login?userid=${header.userName}", 10);
+
+XML
+
+
+
+
+
+This only reduces the endpoint cache of the `toD`; an endpoint has a
+chance of being reused only when a message is routed with the same
+`userName` header. Therefore, reducing the cache size does not solve the
+*endless dynamic endpoint* problem. Instead, you should use static
+endpoints with `to` and provide the dynamic parts in Camel message
+headers (if possible).
+
+### Using static endpoints to avoid endless dynamic endpoints
+
+As noted above, the dynamically computed parameter `userid` results in
+one endpoint and producer instance for each different userid. To avoid
+having too many dynamic endpoints, you can instead use a single static
+endpoint and provide the dynamic parts as headers:
+
+ from("direct:login")
+ .setHeader(Exchange.HTTP_PATH, constant("/login"))
+ .setHeader(Exchange.HTTP_QUERY, simple("userid=${header.userName}"))
+    .to("http:myloginserver:8080");
+
+However, you can use optimized components for `toD` that can *solve*
+this out of the box, as documented next.
+
+## Using optimized components with toD
+
+A better solution is for the HTTP component itself to be optimized to
+handle the variations of dynamically computed endpoint URIs. This is the
+case for the following components, which have been optimized for `toD`:
+
+- camel-http
+
+- camel-jetty
+
+- camel-netty-http
+
+- camel-undertow
+
+- camel-vertx-http
+
+A number of non-HTTP components have been optimized as well:
+
+- camel-amqp
+
+- camel-file
+
+- camel-ftp
+
+- camel-jms
+
+- camel-kafka
+
+- camel-paho-mqtt5
+
+- camel-paho
+
+- camel-sjms
+
+- camel-sjms2
+
+- camel-spring-rabbitmq
+
+For the optimization to work:
+
+1. The optimization is detected and activated during startup of the
+ Camel routes with `toD`.
+
+2. The dynamic uri in `toD` must provide the component name as either
+ static or resolved via [property
+ placeholders](#manual::using-propertyplaceholder.adoc).
+
+3. The supported components must be on the classpath.
+
+The HTTP-based components will be optimized to use the same
+hostname:port for each endpoint, and the dynamic values for context-path
+and query parameters will be provided as headers:
+
+For example, this route:
+
+ from("direct:login")
+ .toD("http:myloginserver:8080/login?userid=${header.userName}");
+
+It will essentially be optimized to the following (pseudo route):
+
+ from("direct:login")
+ .setHeader(Exchange.HTTP_PATH, expression("/login"))
+ .setHeader(Exchange.HTTP_QUERY, expression("userid=${header.userName}"))
+ .toD("http:myloginserver:8080")
+ .removeHeader(Exchange.HTTP_PATH)
+ .removeHeader(Exchange.HTTP_QUERY);
+
+Where *expression* will be evaluated dynamically. Notice how the uri in
+`toD` is now static (`http:myloginserver:8080`). This optimization
+allows Camel to reuse the same endpoint and its associated producer for
+all dynamic variations. This yields much lower resource overhead, as the
+same HTTP producer is used for all the different variations of
+`userid`.
+
+When the optimized component is in use, you cannot use the headers
+`Exchange.HTTP_PATH` and `Exchange.HTTP_QUERY` to provide dynamic values
+to override the URI in `toD`. If you want to use these headers, then use
+the plain `to` DSL instead. In other words, these headers are used
+internally by `toD` to carry the dynamic details of the endpoint.
+
+In case of problems, you can turn on DEBUG logging for
+`org.apache.camel.processor.SendDynamicProcessor`, which logs during
+startup whether `toD` was optimized, or whether there was a failure
+loading the optimized component, with a stacktrace.
+
+ Detected SendDynamicAware component: http optimising toD: http:myloginserver:8080/login?userid=${header.userName}
diff --git a/camel-tokenize-language.md b/camel-tokenize-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..ec091058997d6167f9dab7874493fd731168fdf9
--- /dev/null
+++ b/camel-tokenize-language.md
@@ -0,0 +1,41 @@
+# Tokenize-language.md
+
+**Since Camel 2.0**
+
+The tokenizer language is a built-in language in `camel-core`, which is
+most often used with the [Split](#eips:split-eip.adoc) EIP to split a
+message using a token-based strategy.
+
+The tokenizer language is intended to tokenize text documents using a
+specified delimiter pattern. It can also be used to tokenize XML
+documents with some limited capability. For a truly XML-aware
+tokenization, the use of the [XML Tokenize](#xtokenize-language.adoc)
+language is recommended as it offers a faster, more efficient
+tokenization specifically for XML documents.
+
+# Tokenize Options
+
+# Example
+
+The following example shows how to take a request from the direct:a
+endpoint, split it into pieces using an
+[Expression](#manual::expression.adoc), and then forward each piece to
+direct:b:
+
+
+
+
+
+
+
+
+
+And in Java DSL:
+
+ from("direct:a")
+ .split(body().tokenize("\n"))
+ .to("direct:b");
+
+# See Also
+
+For more examples see [Split](#eips:split-eip.adoc) EIP.
diff --git a/camel-topicLoadBalancer-eip.md b/camel-topicLoadBalancer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d4cdfdd424528e62e037dd2e6816054c9f58ebd
--- /dev/null
+++ b/camel-topicLoadBalancer-eip.md
@@ -0,0 +1,31 @@
+# TopicLoadBalancer-eip.md
+
+Topic mode for the [Load Balancer](#loadBalance-eip.adoc) EIP. With this
+policy, all destinations are selected.
+
+# Options
+
+# Exchange properties
+
+# Examples
+
+In this example, we send the message to all three endpoints:
+
+Java
+from("direct:start")
+.loadBalance().topic()
+.to("seda:x")
+.to("seda:y")
+.to("seda:z")
+.end();
+
+XML
+
+
+
+
+
+
+
+
+
diff --git a/camel-tracing.md b/camel-tracing.md
new file mode 100644
index 0000000000000000000000000000000000000000..9cb75c6b1a0dee905d5ad877beb0b47d56923e65
--- /dev/null
+++ b/camel-tracing.md
@@ -0,0 +1,12 @@
+# Tracing.md
+
+**Since Camel 3.5**
+
+This module is a common interface and API for distributed tracing.
+
+This module is not intended to be used by end users. Instead, you should
+use one of:
+
+- [`camel-opentelemetry`](#opentelemetry.adoc)
+
+- [`camel-observation`](#observation.adoc)
diff --git a/camel-transactional-client.md b/camel-transactional-client.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa8ded9c0558e74881a4380bb303f4eca44ca057
--- /dev/null
+++ b/camel-transactional-client.md
@@ -0,0 +1,397 @@
+# Transactional-client.md
+
+Camel supports the [Transactional
+Client](http://www.enterpriseintegrationpatterns.com/TransactionalClient.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) using JTA
+transactions.
+
+How can a client control its transactions with the messaging system?
+
+
+
+
+
+Use a Transactional Client—make the client’s session with the messaging
+system transactional so that the client can specify transaction
+boundaries.
+
+Transactions are supported by Spring Transactions and also with a JTA
+Transaction Manager.
+
+Traditionally, a JTA Transaction Manager is included in JEE application
+servers. However, when running microservice applications with Spring
+Boot, or Quarkus, then a third-party JTA transaction manager can be
+embedded and used.
+
+In Camel, transactions are supported by JMS messaging components:
+
+- [JMS](#ROOT:jms-component.adoc)
+
+- [Simple JMS](#ROOT:sjms-component.adoc)
+
+- [Simple JMS 2.x](#ROOT:sjms2-component.adoc)
+
+And all the SQL database components, such as:
+
+- [JDBC](#ROOT:jdbc-component.adoc)
+
+- [JPA](#ROOT:jpa-component.adoc)
+
+- [SQL](#ROOT:sql-component.adoc)
+
+- [MyBatis](#ROOT:mybatis-component.adoc)
+
+# Understanding Transactions
+
+A transaction is a series of events. The start of a transaction is often
+named begin, and the end is commit (or rollback if the transaction isn’t
+successfully completed).
+
+If you were to write in Java a locally managed transaction, then it
+could be something like:
+
+ TransactionManager tm = ...
+ Transaction tx = tm.getTransaction();
+ try {
+ tx.begin();
+ // code here under transaction
+ tx.commit();
+ } catch (Exception e) {
+ tx.rollback();
+ }
+
+You start the transaction using the `begin` method. Then you have a
+series of events to do whatever work needs to be done. At the end, you
+either `commit` or `rollback` the transaction, depending on whether an
+exception is thrown.
+
+You may already be familiar with this principle, and transactions in
+Camel use the same principle at a higher level of abstraction. In Camel
+transactions, you don’t invoke begin and commit methods from Java code;
+you use declarative transactions, which can be configured using Java
+code or in XML files. Camel doesn’t reinvent the wheel and implement a
+transaction manager, which is a complicated piece of technology to
+build. Instead, Camel uses APIs from either `camel-spring` or
+`camel-jta`.
+
+## Local vs Global Transactions
+
+When talking about transactions, you need to distinguish between single-
+and multiple-resource transactions. The former are also known as local
+transactions, and the latter as global transactions.
+
+### Local Transactions
+
+If you only have a single resource (such as one database or one
+messaging system), then transactions are simpler for the transaction
+manager to orchestrate. These are known as local transactions.
+
+When using local transactions with Spring Transactions, you can use
+the dedicated transaction manager for the resource type, such as:
+
+- org.springframework.jdbc.datasource.DataSourceTransactionManager
+
+- org.springframework.jms.connection.JmsTransactionManager
+
+Consult the Spring documentation for more local transaction managers.
+
+### Global Transactions
+
+The situation changes when you need to span multiple resources in the
+same transaction, such as JMS and JDBC resources together.
+
+To support multiple resources, you need to use a JTA (XA) capable
+transaction manager, which means using
+`org.springframework.transaction.jta.JtaTransactionManager` with Spring
+Transactions.
+
+For more information on JTA, see the [Wikipedia page on the
+subject](http://en.wikipedia.org/wiki/Java_Transaction_API), which also
+briefly discusses [XA](http://en.wikipedia.org/wiki/X/Open_XA).
+
+That is not all: you also need a JTA transaction implementation
+such as:
+
+- [Atomikos](https://www.atomikos.com/)
+
+- [Narayana](https://narayana.io/)
+
+- A JEE Application Server with JTA
+
+And all of this must be configured correctly for JTA transactions to
+work. You may also need special configuration from the vendors of the
+resources (i.e., the database or messaging system) for them to work
+properly with JTA/XA transactions. Consult the documentation of those
+systems for more details.
+
+## About Spring Transactions
+
+Camel uses Spring Transaction APIs (`camel-spring`) to manage
+transactions via its `TransactionManager` API. Depending on the kinds of
+resources that are taking part in the transaction, an appropriate
+implementation of the transaction manager must be chosen. Spring offers
+a number of transaction managers out of the box that work for various
+local transactions such as JMS and JDBC. For global transactions,
+however, you must use a third-party JTA transaction manager
+implementation; a JTA transaction manager is provided by Java EE
+application servers. Spring doesn’t offer one out of the box, only the
+necessary API abstraction that Camel uses.
+
+## About JTA Transactions
+
+Camel can also use the JTA Transaction APIs (`camel-jta`) directly to
+manage transactions via the `javax.transaction` API. You must use a
+third-party JTA transaction manager implementation; a JTA transaction
+manager is provided by Java EE application servers.
+
+# Using Transactions in Camel
+
+In Camel, transactions are used by:
+
+1. Setting up transaction manager via either Spring Transactions or JTA
+ Transactions.
+
+2. Marking routes as transacted
+
+3. Using different transaction propagations for rare use-cases
+
+The two transaction examples further below show how to set up a
+transaction manager in Camel.
+
+## Marking a route as transacted
+
+When using transactions (JTA or Spring Transactions) in Camel, you
+enable them on a route by using `transacted` right after `from` in the
+route.
+
+For example, that would be:
+
+Java
+from("jms:cheese")
+.transacted()
+.to("bean:foo");
+
+XML
+
+
+
+
+
+
+When you specify `<transacted/>` in a route, Camel uses transactions
+for that particular route and any other routes that the message may
+undertake.
+
+When a route is specified as `<transacted/>`, then under the hood Camel
+looks up the Spring/JTA transaction manager and uses it. This is
+convention over configuration.
+
+The convention over configuration applies only when you have a single
+Spring/JTA transaction manager configured. In more complex scenarios,
+where you either use multiple transaction managers or transaction
+propagation policies, you have to do additional configuration.
+
+## Using different transaction propagations
+
+In some rare situations, you may need to use multiple transactions with
+the same exchange.
+
+For example, an exchange starts off using `PROPAGATION_REQUIRED`, and
+then you need to use another transaction that’s independent of the
+existing transaction. You can do this by using
+`PROPAGATION_REQUIRES_NEW`, which will start a new transaction.
+
+In Camel, a route can only have exactly one transaction policy, which
+means that if you need to change transaction propagation, then you must
+use a new route.
+
+When the exchange completes, the transaction manager will issue commits
+or rollbacks to these two transactions, which ensures that they both
+complete at the same time. Because two transaction legs are in play,
+they can have different outcomes; for example, transaction 1 can roll
+back, while transaction 2 commits, and vice versa.
+
+In Camel, you need to configure the propagations using
+`SpringTransactionPolicy` as shown in the following XML snippets:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Then we have routes where each of the routes uses their different
+policy:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Notice how the `ref` attribute on `<transacted>` refers to the
+corresponding bean id of the transaction policy.
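The same propagation setup can be sketched in the Java DSL, where `transacted` takes the bean id of the policy (a sketch, assuming `SpringTransactionPolicy` beans registered under the ids `PROPAGATION_REQUIRED` and `PROPAGATION_REQUIRES_NEW`; the endpoint names are illustrative):

```java
from("direct:mixed")
    .transacted("PROPAGATION_REQUIRED")
    // work done here joins the existing transaction
    .to("direct:separate");

from("direct:separate")
    .transacted("PROPAGATION_REQUIRES_NEW")
    // work done here runs in a new, independent transaction
    .to("mock:result");
```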
+
+**Keep it simple:** Although you can use multiple propagation behaviors
+with multiple routes in Camel, do so with care. Try to design your
+solutions with as few propagations as possible, because complexity
+increases dramatically when you introduce new propagation behaviors.
+
+# Transaction example with a database
+
+In this sample, we want to ensure that two endpoints are under
+transaction control. These two endpoints insert data into a database.
+
+The full sample is available as a [unit
+test](https://github.com/apache/camel/tree/main/components/camel-spring-xml/src/test/java/org/apache/camel/spring/interceptor/TransactionalClientDataSourceMinimalConfigurationTest.java).
+
+First, we set up the usual Spring configuration file. Here we have
+defined a DataSource for HSQLDB and, most importantly, the Spring
+`DataSourceTransactionManager` that does the heavy lifting of enforcing
+our transactional policies.
+
+As we use convention over configuration, we do **not** need to
+configure a transaction policy bean, so we do not have any
+`PROPAGATION_REQUIRED` beans. All the beans that need to be configured
+are **standard** Spring beans; there is no Camel-specific configuration
+at all.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Then we are ready to define our Camel routes. We have two routes: one
+for the success condition, and one for a forced rollback condition.
+
+This is, after all, based on a unit test. Notice that we mark each route
+as transacted using the `<transacted/>` XML tag.
+
+
+
+
+
+
+
+
+ Tiger in Action
+
+
+
+ Elephant in Action
+
+
+
+
+
+
+
+
+
+ Tiger in Action
+
+
+
+ Donkey in Action
+
+
+
+
+
+
+That is all that is needed to configure a Camel route as transacted.
+Remember to use `<transacted/>`. The rest is standard Spring XML to set
+up the transaction manager.
+
+# Transaction example with JMS
+
+In this sample, we want to listen for messages on a queue, process the
+messages with our business logic Java code, and send them along.
+Since it is based on a [unit
+test](https://github.com/apache/camel/tree/main/components/camel-jms/src/test/java/org/apache/camel/component/jms/tx/TransactionMinimalConfigurationTest.java),
+the destination is a mock endpoint.
+
+First, we configure the standard Spring XML to declare a JMS connection
+factory, a JMS transaction manager and our ActiveMQ component that we
+use in our routing.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+And then we configure our routes. Notice that all we have to do is mark
+the route as transacted using the `<transacted/>` XML tag.
+
+
+
+
+
+
+
+
+
+
+
+
+
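A Java DSL equivalent of such a route can be sketched as follows (the endpoint names are illustrative; `activemq` refers to the JMS component configured above):

```java
from("activemq:queue:okay")
    // consume the JMS message inside a transaction
    .transacted()
    .to("mock:result");
```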
diff --git a/camel-transform-eip.md b/camel-transform-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..083b70ccec160560d5f3f4a42b43d24eb8c6e058
--- /dev/null
+++ b/camel-transform-eip.md
@@ -0,0 +1,108 @@
+# Transform-eip.md
+
+Camel supports the [Message
+Translator](http://www.enterpriseintegrationpatterns.com/MessageTranslator.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc).
+
+How can systems using different data formats communicate with each other
+using messaging?
+
+
+
+
+
+Use a special filter, a Message Translator, between other filters or
+applications to translate one data format into another.
+
+The [Message Translator](#message-translator.adoc) can be done in
+different ways in Camel:
+
+- Using [Transform](#transform-eip.adoc) or [Set
+ Body](#setBody-eip.adoc) in the DSL
+
+- Calling a [Processor](#manual::processor.adoc) or
+ [bean](#manual::bean-integration.adoc) to perform the transformation
+
+- Using template-based [Components](#ROOT:index.adoc), with the
+ template being the source for how the message is translated
+
+- Messages can also be transformed using [Data
+ Format](#manual::data-format.adoc) to marshal and unmarshal messages
+ in different encodings.
+
+This page documents the first approach, using the Transform EIP.
+
+# Options
+
+# Exchange properties
+
+# Using Transform EIP
+
+You can use a [Transform](#transform-eip.adoc) which uses an
+[Expression](#manual::expression.adoc) to do the transformation:
+
+In the example below, we prepend Hello to the message body using the
+[Simple](#components:languages:simple-language.adoc) language:
+
+Java
+from("direct:cheese")
+.transform(simple("Hello ${body}"))
+.to("log:hello");
+
+XML
+
+
+
+Hello ${body}
+
+
+
+
+YAML
+\- from:
+uri: direct:cheese
+steps:
+\- transform:
+expression:
+simple: Hello ${body}
+\- to:
+uri: log:hello
+
+The [Transform](#transform-eip.adoc) may also reference a given from/to
+data type (`org.apache.camel.spi.DataType`).
+
+Java
+from("direct:cheese")
+.transform(new DataType("myDataType"))
+.to("log:hello");
+
+XML
+
+
+
+
+
+
+YAML
+\- from:
+uri: direct:cheese
+steps:
+\- transform:
+to-type: myDataType
+\- to:
+uri: log:hello
+
+The example above defines the [Transform](#transform-eip.adoc) EIP that
+uses a target data type `myDataType`. The given data type may reference
+a [Transformer](#manual::transformer.adoc) that is able to handle the
+data type transformation.
+
+Users may also specify `fromType` to reference a specific
+transformation from one data type to another.
+
+# What is the difference between Transform and Set Body?
+
+The Transform EIP always sets the result on the OUT message body.
+
+Set Body sets the result accordingly to the [Exchange
+Pattern](#manual::exchange-pattern.adoc) on the `Exchange`.
diff --git a/camel-twilio.md b/camel-twilio.md
index 79eeb444dbd7284934240048859e27c1ad853961..ca4ce395ede4fd7e4a88206f1d7943deeb9b653d 100644
--- a/camel-twilio.md
+++ b/camel-twilio.md
@@ -8,7 +8,7 @@ The Twilio component provides access to Version 2010-04-01 of Twilio
REST APIs accessible using [Twilio Java
SDK](https://github.com/twilio/twilio-java).
-Maven users will need to add the following dependency to their pom.xml
+Maven users will need to add the following dependency to their `pom.xml`
for this component:
@@ -17,7 +17,9 @@ for this component:
${camel-version}
-# Producer Endpoints:
+# Usage
+
+## Producer Endpoints
Producer endpoints can use endpoint prefixes followed by endpoint names
and associated options described next. A shorthand alias can be used for
@@ -38,38 +40,38 @@ Endpoint can be one of:
-
+
-
+
creator
create
Make the request to the Twilio API to
perform the create
-
+
deleter
delete
Make the request to the Twilio API to
perform the delete
-
+
fetcher
fetch
Make the request to the Twilio API to
perform the fetch
-
+
reader
read
Make the request to the Twilio API to
perform the read
-
+
updater
update
Make the request to the Twilio API to
diff --git a/camel-undertow-spring-security.md b/camel-undertow-spring-security.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ecc2dd26ae21018bbc5c328a26a87747e56d801
--- /dev/null
+++ b/camel-undertow-spring-security.md
@@ -0,0 +1,46 @@
+# Undertow-spring-security.md
+
+**Since Camel 3.3**
+
+The Spring Security Provider provides Spring Security (5.x) bearer
+token security over the camel-undertow component. To force
+camel-undertow to use the Spring Security provider:
+
+- Add the Spring Security provider library to the classpath.
+
+- Provide an instance of `SpringSecurityConfiguration` as the
+  `securityConfiguration` parameter on the camel-undertow component, or
+  provide both `securityConfiguration` and `securityProvider` on the
+  camel-undertow component.
+
+- Configure Spring Security.
+
+The configuration has to provide the following security attribute:
+
+| Name | Description | Type |
+|------|-------------|------|
+| securityFilter | Provides the security filter obtained from the configured Spring Security (5.x). The filter can be obtained, for example, from `DelegatingFilterProxyRegistrationBean`. | `Filter` |
+
+Each exchange created by an Undertow endpoint with Spring Security
+contains the header *SpringSecurityProvider\_principal* (the name of the
+header is provided as the constant
+`SpringSecurityProvider.PRINCIPAL_NAME_HEADER`) whose value is the
+currently authorized identity; for rejected requests, the header is not
+present.
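
For example, a route can read the identity from that header. This is a hedged sketch: the endpoint URI and log message are illustrative, only the header name comes from this page.

```java
// Sketch: log the identity placed in the header by the security provider.
from("undertow:http://localhost:8080/secured")
    .log("Authenticated as: ${header.SpringSecurityProvider_principal}");
```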
diff --git a/camel-undertow.md b/camel-undertow.md
index 74b680227bb00c2a53bd1573c1de3818d7123d00..378bef9496d6ba6bbcea6f8bec061df7b8e98142 100644
--- a/camel-undertow.md
+++ b/camel-undertow.md
@@ -8,7 +8,7 @@ The Undertow component provides HTTP and WebSocket based endpoints for
consuming and producing HTTP/WebSocket requests.
That is, the Undertow component behaves as a simple Web server. Undertow
-can also be used as a http client that means you can also use it with
+can also be used as an HTTP client, which means you can also use it with
Camel as a producer.
Since the component also supports WebSocket connections, it can serve as
@@ -37,7 +37,9 @@ for this component:
undertow:ws://hostname[:port][/resourceUri][?options]
undertow:wss://hostname[:port][/resourceUri][?options]
-# Message Headers
+# Usage
+
+## Message Headers
Camel uses the same message headers as the [HTTP](#http-component.adoc)
component. It also uses `Exchange.HTTP_CHUNKED,CamelHttpChunked` header
@@ -49,7 +51,44 @@ For example, given a client request with the URL,
`\http://myserver/myserver?orderid=123`, the exchange will contain a
header named `orderid` with the value `123`.
-# HTTP Producer Example
+## Using localhost as host
+
+When you specify `localhost` in a URL, Camel exposes the endpoint only
+on the local TCP/IP network interface, so it cannot be accessed from
+outside the machine it operates on.
+
+If you need to expose an Undertow endpoint on a specific network
+interface, the numerical IP address of this interface should be used as
+the host. If you need to expose an Undertow endpoint on all network
+interfaces, the `0.0.0.0` address should be used.
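+
The binding choices above can be sketched as endpoint URIs. This is a hedged sketch: the context path and the interface address are illustrative.

```java
// Reachable only from the local machine:
from("undertow:http://localhost:8080/myapp").to("mock:local");

// Bound to one specific network interface (illustrative address):
from("undertow:http://192.168.1.10:8080/myapp").to("mock:iface");

// Bound to all network interfaces:
from("undertow:http://0.0.0.0:8080/myapp").to("mock:all");
```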
+
+To listen across an entire URI prefix, see [How do I let Jetty match
+wildcards?](#manual:faq:how-do-i-let-jetty-match-wildcards.adoc).
+
+If you actually want to expose routes by HTTP and already have a
+Servlet, you should instead refer to the [Servlet
+Transport](#servlet-component.adoc).
+
+## Security provider
+
+To plug in a security provider for endpoint authentication, implement
+SPI interface
+`org.apache.camel.component.undertow.spi.UndertowSecurityProvider`.
+
+Undertow component locates all implementations of
+`UndertowSecurityProvider` using Java SPI (Service Provider Interface).
+If an object is passed to the component as the `securityConfiguration`
+parameter and a provider accepts it, that provider is used to
+authenticate all requests.
+
+The `requireServletContext` property of security providers forces the
+Undertow server to start with a servlet context. No servlet is actually
+handled; this feature is meant only for use with servlet filters, which
+need a servlet context for their functionality.
+
+# Examples
+
+## HTTP Producer Example
The following is a basic example of how to send an HTTP request to an
existing HTTP endpoint.
@@ -64,7 +103,7 @@ XML
-# HTTP Consumer Example
+## HTTP Consumer Example
In this sample we define a route that exposes an HTTP service at
`\http://localhost:8080/myapp/myservice`:
@@ -74,7 +113,7 @@ In this sample we define a route that exposes a HTTP service at
-# WebSocket Example
+## WebSocket Example
In this sample we define a route that exposes a WebSocket service at
`\http://localhost:8080/myapp/mysocket` and returns back a response to
@@ -86,41 +125,6 @@ the same channel:
-# Using localhost as host
-
-When you specify `localhost` in a URL, Camel exposes the endpoint only
-on the local TCP/IP network interface, so it cannot be accessed from
-outside the machine it operates on.
-
-If you need to expose an Undertow endpoint on a specific network
-interface, the numerical IP address of this interface should be used as
-the host. If you need to expose an Undertow endpoint on all network
-interfaces, the `0.0.0.0` address should be used.
-
-To listen across an entire URI prefix, see [How do I let Jetty match
-wildcards?](#manual:faq:how-do-i-let-jetty-match-wildcards.adoc).
-
-If you actually want to expose routes by HTTP and already have a
-Servlet, you should instead refer to the [Servlet
-Transport](#servlet-component.adoc).
-
-# Security provider
-
-To plug in a security provider for endpoint authentication, implement
-SPI interface
-`org.apache.camel.component.undertow.spi.UndertowSecurityProvider`.
-
-Undertow component locates all implementations of
-`UndertowSecurityProvider` using Java SPI (Service Provider Interfaces).
-If there is an object passed to the component as parameter
-`securityConfiguration` and provider accepts it. Provider will be used
-for authentication of all requests.
-
-Property `requireServletContext` of security providers forces the
-Undertow server to start with servlet context. There will be no servlet
-actually handled. This feature is meant only for use with servlet
-filters, which needs servlet context for their functionality.
-
## Component Configurations
diff --git a/camel-univocityCsv-dataformat.md b/camel-univocityCsv-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..57803b93b65beb6b84d7555ccb392bebffc64b81
--- /dev/null
+++ b/camel-univocityCsv-dataformat.md
@@ -0,0 +1,124 @@
+# UnivocityCsv-dataformat.md
+
+**Since Camel 2.15**
+
+This [Data Format](#manual::data-format.adoc) uses
+[uniVocity-parsers](https://www.univocity.com/pages/univocity_parsers_tutorial.html)
+for reading and writing three kinds of tabular data text files:
+
+- CSV (Comma Separated Values), where the values are separated by a
+ symbol (usually a comma)
+
+- fixed-width, where the values have known sizes
+
+- TSV (Tab-Separated Values), where the fields are separated by a
+  tab character
+
+Thus, there are three data formats based on uniVocity-parsers.
+
+If you use Maven, you can add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release.
+
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-univocity-parsers</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
+
+# Options
+
+Most configuration options of the uniVocity-parsers are available in the
+data formats. If you want more information about a particular option,
+please refer to their [documentation
+page](https://www.univocity.com/pages/univocity_parsers_tutorial#settings).
+
+The three data formats share common options and also have dedicated
+ones; this section presents them all.
+
+# Options
+
+# Marshalling usages
+
+The marshalling accepts either:
+
+- A list of maps (`List<Map<String, ?>>`), one for each line
+
+- A single map (`Map<String, ?>`), for a single line
+
+Any other body will throw an exception.
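
As a hedged sketch (the DSL shorthand `univocityCsv()` and the endpoint URIs are assumptions, not verbatim from this page), a list of maps can be marshalled to CSV in a route like this:

```java
// Sketch: marshal a List<Map<String, ?>> body into CSV text.
from("direct:toCsv")
    .marshal().univocityCsv()
    .to("file:target/csv-output");
```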
+
+## Usage example: marshalling a Map into CSV format
+
+
+
+
+
+
+
+
+
+## Usage example: marshalling a Map into fixed-width format
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Usage example: marshalling a Map into TSV format
+
+
+
+
+
+
+
+
+
+# Unmarshalling usages
+
+The unmarshalling uses an `InputStream` in order to read the data.
+
+Each row produces either:
+
+- a list with all the values in it (`asMap` option set to `false`);
+
+- a map with all the values indexed by the headers (`asMap` option
+  set to `true`).
+
+All the rows can either:
+
+- be collected at once into a list (`lazyLoad` option set to `false`);
+
+- be read on the fly using an iterator (`lazyLoad` option set to
+  `true`).
+
+## Usage example: unmarshalling a CSV format into maps with automatic headers
+
+
+
+
+
+
+
+
+
+## Usage example: unmarshalling a fixed-width format into lists
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/camel-univocityFixed-dataformat.md b/camel-univocityFixed-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..d2ab4f70cd2aaa8a13d6df3049b3dca9e338e3ad
--- /dev/null
+++ b/camel-univocityFixed-dataformat.md
@@ -0,0 +1,124 @@
+# UnivocityFixed-dataformat.md
+
+**Since Camel 2.15**
+
+This [Data Format](#manual::data-format.adoc) uses
+[uniVocity-parsers](https://www.univocity.com/pages/univocity_parsers_tutorial.html)
+for reading and writing three kinds of tabular data text files:
+
+- CSV (Comma Separated Values), where the values are separated by a
+ symbol (usually a comma)
+
+- fixed-width, where the values have known sizes
+
+- TSV (Tab-Separated Values), where the fields are separated by a
+  tab character
+
+Thus, there are three data formats based on uniVocity-parsers.
+
+If you use Maven, you can add the following to your `pom.xml`,
+substituting the version number for the latest release.
+
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-univocity-parsers</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
+
+# Options
+
+Most configuration options of the uniVocity-parsers are available in the
+data formats. If you want more information about a particular option,
+please refer to their [documentation
+page](https://www.univocity.com/pages/univocity_parsers_tutorial#settings).
+
+The three data formats share common options and also have dedicated
+ones; this section presents them all.
+
+# Options
+
+# Marshalling usages
+
+The marshalling accepts either:
+
+- A list of maps (`List<Map<String, ?>>`), one for each line
+
+- A single map (`Map<String, ?>`), for a single line
+
+Any other body will throw an exception.
+
+## Usage example: marshalling a Map into CSV format
+
+
+
+
+
+
+
+
+
+## Usage example: marshalling a Map into fixed-width format
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Usage example: marshalling a Map into TSV format
+
+
+
+
+
+
+
+
+
+# Unmarshalling usages
+
+The unmarshalling uses an `InputStream` in order to read the data.
+
+Each row produces either:
+
+- a list with all the values in it (`asMap` option set to `false`);
+
+- a map with all the values indexed by the headers (`asMap` option
+  set to `true`).
+
+All the rows can either:
+
+- be collected at once into a list (`lazyLoad` option set to `false`);
+
+- be read on the fly using an iterator (`lazyLoad` option set to
+  `true`).
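
A hedged unmarshalling sketch follows. The `univocityFixed()` shorthand, the use of a splitter over the lazy iterator, and the endpoint URIs are all assumptions for illustration.

```java
// Sketch: unmarshal fixed-width records into lists (asMap=false),
// streaming them one by one (lazyLoad=true 
// makes the body an iterator over rows).
from("file:target/fixed-input")
    .unmarshal().univocityFixed()
    .split(body())
    .to("mock:rows");
```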
+
+## Usage example: unmarshalling a CSV format into maps with automatic headers
+
+
+
+
+
+
+
+
+
+## Usage example: unmarshalling a fixed-width format into lists
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/camel-univocityTsv-dataformat.md b/camel-univocityTsv-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..280e79c9b7a7fc6885f4c712a363cfa1c18fe3c9
--- /dev/null
+++ b/camel-univocityTsv-dataformat.md
@@ -0,0 +1,124 @@
+# UnivocityTsv-dataformat.md
+
+**Since Camel 2.15**
+
+This [Data Format](#manual::data-format.adoc) uses
+[uniVocity-parsers](https://www.univocity.com/pages/univocity_parsers_tutorial.html)
+for reading and writing three kinds of tabular data text files:
+
+- CSV (Comma Separated Values), where the values are separated by a
+ symbol (usually a comma)
+
+- fixed-width, where the values have known sizes
+
+- TSV (Tab-Separated Values), where the fields are separated by a
+  tab character
+
+Thus, there are three data formats based on uniVocity-parsers.
+
+If you use Maven, you can add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release.
+
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-univocity-parsers</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
+
+# Options
+
+Most configuration options of the uniVocity-parsers are available in the
+data formats. If you want more information about a particular option,
+please refer to their [documentation
+page](https://www.univocity.com/pages/univocity_parsers_tutorial#settings).
+
+The three data formats share common options and also have dedicated
+ones; this section presents them all.
+
+# Options
+
+# Marshalling usages
+
+The marshalling accepts either:
+
+- A list of maps (`List<Map<String, ?>>`), one for each line
+
+- A single map (`Map<String, ?>`), for a single line
+
+Any other body will throw an exception.
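
As a hedged sketch (the DSL shorthand `univocityTsv()` and the endpoint URIs are assumptions), a single map can be marshalled to one TSV line in a route like this:

```java
// Sketch: marshal a single Map<String, ?> body into one TSV line.
from("direct:toTsv")
    .marshal().univocityTsv()
    .to("log:tsv");
```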
+
+## Usage example: marshalling a Map into CSV format
+
+
+
+
+
+
+
+
+
+## Usage example: marshalling a Map into fixed-width format
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Usage example: marshalling a Map into TSV format
+
+
+
+
+
+
+
+
+
+# Unmarshalling usages
+
+The unmarshalling uses an `InputStream` in order to read the data.
+
+Each row produces either:
+
+- a list with all the values in it (`asMap` option set to `false`);
+
+- a map with all the values indexed by the headers (`asMap` option
+  set to `true`).
+
+All the rows can either:
+
+- be collected at once into a list (`lazyLoad` option set to `false`);
+
+- be read on the fly using an iterator (`lazyLoad` option set to
+  `true`).
+
+## Usage example: unmarshalling a CSV format into maps with automatic headers
+
+
+
+
+
+
+
+
+
+## Usage example: unmarshalling a fixed-width format into lists
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/camel-unmarshal-eip.md b/camel-unmarshal-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..f7c2f9c59fd66f0ad26a75d5bf28d29f29f104bd
--- /dev/null
+++ b/camel-unmarshal-eip.md
@@ -0,0 +1,89 @@
+# Unmarshal-eip.md
+
+The [Marshal](#marshal-eip.adoc) and [Unmarshal](#unmarshal-eip.adoc)
+EIPs are used for [Message Transformation](#message-translator.adoc).
+
+
+
+
+
+Camel has support for message transformation using several techniques.
+One such technique is [Data Formats](#dataformats:index.adoc), where
+marshal and unmarshal come from.
+
+So in other words, the [Marshal](#marshal-eip.adoc) and
+[Unmarshal](#unmarshal-eip.adoc) EIPs are used with [Data
+Formats](#dataformats:index.adoc).
+
+- *Marshal*: Transforms the message body (such as Java object) into a
+ binary or textual format, ready to be wired over the network.
+
+- *Unmarshal*: Transforms data in some binary or textual format (such
+ as received over the network) into a Java object; or some other
+ representation according to the data format being used.
+
+# Example
+
+The following example reads XML files from the inbox/xml directory. Each
+file is then transformed into Java Objects using
+[JAXB](#dataformats:jaxb-dataformat.adoc). Then a
+[Bean](#ROOT:bean-component.adoc) is invoked that takes in the Java
+object.
+
+Then the reverse operation happens to transform the Java objects back
+into XML also via JAXB, but using the `marshal` operation. And finally,
+the message is routed to a [JMS](#ROOT:jms-component.adoc) queue.
+
+Java
+
+    from("file:inbox/xml")
+        .unmarshal().jaxb()
+        .to("bean:validateOrder")
+        .marshal().jaxb()
+        .to("jms:queue:order");
+
+XML
+
+    <route>
+      <from uri="file:inbox/xml"/>
+      <unmarshal><jaxb/></unmarshal>
+      <to uri="bean:validateOrder"/>
+      <marshal><jaxb/></marshal>
+      <to uri="jms:queue:order"/>
+    </route>
+YAML
+
+    - from:
+        uri: file:inbox/xml
+        steps:
+          - unmarshal:
+              jaxb: {}
+          - to:
+              uri: bean:validateOrder
+          - marshal:
+              jaxb: {}
+          - to:
+              uri: jms:queue:order
+
+# Allow Null Body
+
+Sometimes, there are situations where `null` can be a normal value for
+the body of a message, but by default `null` is not an accepted value to
+unmarshal. To work around this, you can allow `null` as a body value to
+unmarshal using the `allowNullBody` option, as shown in the next code
+snippets:
+
+Java
+
+    // Beginning of the route
+    .unmarshal().allowNullBody().jaxb()
+    // End of the route
+
+XML
+
+    <!-- Beginning of the route -->
+    <unmarshal allowNullBody="true">
+      <jaxb/>
+    </unmarshal>
+    <!-- End of the route -->
+YAML
+
+    # Beginning of the route
+    - unmarshal:
+        allowNullBody: true
+        jaxb: {}
+    # End of the route
diff --git a/camel-validate-eip.md b/camel-validate-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..e39ed651d3daba2b8c5ecf3393d707144a5f9ac2
--- /dev/null
+++ b/camel-validate-eip.md
@@ -0,0 +1,86 @@
+# Validate-eip.md
+
+The Validate EIP uses an [Expression](#manual::expression.adoc) or a
+[Predicate](#manual::predicate.adoc) to validate the contents of a
+message.
+
+
+
+
+
+This is useful for ensuring that messages are valid before attempting to
+process them.
+
+When a message is **not** valid then a `PredicateValidationException` is
+thrown.
+
+# Options
+
+# Exchange properties
+
+# Using Validate EIP
+
+The route below will read the file contents and validate the message
+body against a regular expression.
+
+Java
+
+    from("file:inbox")
+        .validate(body(String.class).regex("^\\w{10}\\,\\d{2}\\,\\w{24}$"))
+        .to("bean:myServiceBean.processLine");
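
To see what lines the regular expression above accepts, here is a self-contained check (the sample lines are made up for illustration):

```java
import java.util.regex.Pattern;

public class LineCheck {
    // Same pattern as the validate() call above: 10 word characters,
    // a comma, 2 digits, a comma, 24 word characters.
    static final Pattern LINE = Pattern.compile("^\\w{10}\\,\\d{2}\\,\\w{24}$");

    static boolean valid(String line) {
        return LINE.matcher(line).matches();
    }

    public static void main(String[] args) {
        System.out.println(valid("ABCDEFGHIJ,12,ABCDEFGHIJKLMNOPQRSTUVWX")); // true
        System.out.println(valid("too-short,1,x"));                          // false
    }
}
```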
+
+XML
+
+    <route>
+      <from uri="file:inbox"/>
+      <validate>
+        <simple>${body} regex ^\w{10}\,\d{2}\,\w{24}$</simple>
+      </validate>
+      <to uri="bean:myServiceBean?method=processLine"/>
+    </route>
+YAML
+
+    - from:
+        uri: file:inbox
+        steps:
+          - validate:
+              expression:
+                simple: ${body} regex "^\w{10}\,\d{2}\,\w{24}$"
+          - to:
+              uri: bean:myServiceBean
+              parameters:
+                method: processLine
+
+Validate EIP is not limited to the message body. You can also validate
+the message header.
+
+Java
+
+    from("file:inbox")
+        .validate(header("bar").isGreaterThan(100))
+        .to("bean:myServiceBean.processLine");
+
+You can also use `validate` together with the
+[Simple](#components:languages:simple-language.adoc) language.
+
+ from("file:inbox")
+ .validate(simple("${header.bar} > 100"))
+ .to("bean:myServiceBean.processLine");
+
+XML
+
+    <route>
+      <from uri="file:inbox"/>
+      <validate>
+        <simple>${header.bar} &gt; 100</simple>
+      </validate>
+      <to uri="bean:myServiceBean?method=processLine"/>
+    </route>
+
+YAML
+
+    - from:
+        uri: file:inbox
+        steps:
+          - validate:
+              expression:
+                simple: ${header.bar} > 100
+          - to:
+              uri: bean:myServiceBean
+              parameters:
+                method: processLine
diff --git a/camel-variable-language.md b/camel-variable-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ad3f5c66cbbecb3f956c37d9a526e7126be7f72
--- /dev/null
+++ b/camel-variable-language.md
@@ -0,0 +1,30 @@
+# Variable-language.md
+
+**Since Camel 4.4**
+
+The Variable Expression Language allows you to extract values of named
+variables.
+
+# Variable Options
+
+# Example usage
+
+The `recipientList` EIP can utilize a variable:
+
+    <route>
+      <from uri="direct:a"/>
+      <recipientList>
+        <variable>myVar</variable>
+      </recipientList>
+    </route>
+
+In this case, the list of recipients is contained in the variable
+*myVar*.
+
+And the same example in Java DSL:
+
+ from("direct:a").recipientList(variable("myVar"));
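
Something must set the variable before the `recipientList` reads it. A minimal sketch (the endpoint URIs and recipient list are illustrative):

```java
// Sketch: populate the variable, then route to the recipients it names.
from("direct:a")
    .setVariable("myVar", constant("mock:x,mock:y"))
    .recipientList(variable("myVar"));
```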
+
+# Dependencies
+
+The Variable language is part of **camel-core**.
diff --git a/camel-velocity.md b/camel-velocity.md
index 1e5dbf6013004f07ab31fd7818b35e59de879fc3..be66088e9501761e6ff8e77f5f3a50561e0c8659 100644
--- a/camel-velocity.md
+++ b/camel-velocity.md
@@ -37,7 +37,9 @@ For example, to set the header value of `fruit` in the Velocity template
The `fruit` header is now accessible from the `message.out.headers`.
-# Velocity Context
+# Usage
+
+## Velocity Context
Camel will provide exchange information in the Velocity context (just a
`Map`). The `Exchange` is transferred as:
@@ -48,53 +50,53 @@ Camel will provide exchange information in the Velocity context (just a
-
+
-
+
exchange
The Exchange
itself.
-
+
exchange.properties
The Exchange
properties.
-
+
variables
The variables
-
+
headers
The headers of the In message.
-
+
camelContext
The Camel Context instance.
-
+
request
The In message.
-
+
in
The In message.
-
+
body
The In message body.
-
+
out
The Out message (only for InOut message
exchange pattern).
-
+
response
The Out message (only for InOut message
exchange pattern).
@@ -109,7 +111,7 @@ You can set up a custom Velocity Context yourself by setting property
VelocityContext velocityContext = new VelocityContext(variableMap);
exchange.getIn().setHeader("CamelVelocityContext", velocityContext);
-# Hot reloading
+## Hot reloading
The Velocity template resource is, by default, hot reloadable for both
file and classpath resources (expanded jar). If you set
@@ -117,7 +119,7 @@ file and classpath resources (expanded jar). If you set
hot reloading is not possible. This scenario can be used in production
when the resource never changes.
-# Dynamic templates
+## Dynamic templates
Camel provides two headers by which you can define a different resource
location for a template or the template content itself. If any of these
@@ -131,21 +133,23 @@ resource. This allows you to provide a dynamic template at runtime.
-
+
-
-CamelVelocityResourceUri
+
+CamelVelocityResourceUri
String
A URI for the template resource to use
instead of the endpoint configured.
-
-CamelVelocityTemplate
+
+CamelVelocityTemplate
String
The template to use instead of the
endpoint configured.
@@ -153,7 +157,7 @@ endpoint configured.
-# Samples
+# Examples
For example, you could use something like
@@ -197,7 +201,7 @@ should use it dynamically via a header, so for example:
setHeader("CamelVelocityTemplate").constant("Hi this is a velocity template that can do templating ${body}").
to("velocity:dummy?allowTemplateFromHeader=true");
-# The Email Sample
+## The Email Example
In this sample, we want to use Velocity templating for an order
confirmation email. The email template is laid out in Velocity as:
diff --git a/camel-vertx-http.md b/camel-vertx-http.md
index 16f8d4c45e7a27d9f424eb8c56d2c87728802096..a578a3b4b13fe0e38b9aed4c043b83fd5bded9c5 100644
--- a/camel-vertx-http.md
+++ b/camel-vertx-http.md
@@ -32,13 +32,13 @@ headers `Exchange.HTTP_URI` and `Exchange.HTTP_PATH`.
from("direct:start")
.to("vertx-http:https://camel.apache.org");
-# URI Parameters
+## URI Parameters
The `vertx-http` producer supports URI parameters to be sent to the HTTP
server. The URI parameters can either be set directly on the endpoint
URI, or as a header with the key `Exchange.HTTP_QUERY` on the message.
-# Response code
+## Response code
Camel will handle, according to the HTTP response code:
@@ -53,13 +53,13 @@ Camel will handle, according to the HTTP response code:
failure and will throw a `HttpOperationFailedException` with the
information.
-# throwExceptionOnFailure
+## throwExceptionOnFailure
The option, `throwExceptionOnFailure`, can be set to `false` to prevent
the `HttpOperationFailedException` from being thrown for failed response
codes. This allows you to get any response from the remote server.
-# Exceptions
+## Exceptions
`HttpOperationFailedException` exception contains the following
information:
@@ -73,7 +73,7 @@ information:
- Response body as a `java.lang.String`, if server provided a body as
response
-# HTTP method
+## HTTP method
The following algorithm determines the HTTP method to be used:
@@ -84,7 +84,7 @@ The following algorithm determines the HTTP method to be used:
5. `POST` if there is data to send (body is not `null`).
6. `GET` otherwise.
-# HTTP form parameters
+## HTTP form parameters
You can send HTTP form parameters in one of two ways.
@@ -97,12 +97,12 @@ You can send HTTP form parameters in one of two ways.
[MultiMap](https://vertx.io/docs/apidocs/io/vertx/core/MultiMap.html)
which allows you to configure form parameter names and values.
-# Multipart form data
+## Multipart form data
You can upload text or binary files by setting the message body as a
[MultipartForm](https://vertx.io/docs/apidocs/io/vertx/ext/web/multipart/MultipartForm.html).
-# Customizing Vert.x Web Client options
+## Customizing Vert.x Web Client options
When finer control of the Vert.x Web Client configuration is required,
you can bind a custom
@@ -120,7 +120,7 @@ Then reference the options on the `vertx-http` producer.
from("direct:start")
.to("vertx-http:http://localhost:8080?webClientOptions=#clientOptions")
-# SSL
+## SSL
The Vert.x HTTP component supports SSL/TLS configuration through the
[Camel JSSE Configuration
@@ -129,7 +129,7 @@ Utility](#manual::camel-configuration-utilities.adoc).
It is also possible to configure SSL options by providing a custom
`WebClientOptions`.
-# Session Management
+## Session Management
Session management can be enabled via the `sessionManagement` URI
option. When enabled, an in-memory cookie store is used to track
diff --git a/camel-vertx-websocket.md b/camel-vertx-websocket.md
index 216b226566ac06a7ea3bfd40768d61a6dd6f07a9..682e570a9be598700d12316d599f32c8622683f9 100644
--- a/camel-vertx-websocket.md
+++ b/camel-vertx-websocket.md
@@ -25,8 +25,8 @@ for this component:
# Usage
The following example shows how to expose a WebSocket on
-[http://localhost:8080/echo](http://localhost:8080/echo) and returns an *echo* response back to the
-same channel:
+[http://localhost:8080/echo](http://localhost:8080/echo) and returns an **echo** response back to
+the same channel:
from("vertx-websocket:localhost:8080/echo")
.transform().simple("Echo: ${body}")
@@ -38,7 +38,7 @@ client on a remote address with the `consumeAsClient` option:
from("vertx-websocket:my.websocket.com:8080/chat?consumeAsClient=true")
.log("Got WebSocket message ${body}");
-# Path \& query parameters
+## Path \& query parameters
The WebSocket server consumer supports the configuration of
parameterized paths. The path parameter value will be set as a Camel
@@ -56,7 +56,7 @@ WebSocket client to connect to the server endpoint:
from("vertx-websocket:localhost:8080/chat/{user}")
.log("New message from ${header.user} (${header.role}) >>> ${body}")
-# Sending messages to peers connected to the vertx-websocket server consumer
+## Sending messages to peers connected to the vertx-websocket server consumer
This section only applies when producing messages to a WebSocket hosted
by the camel-vertx-websocket consumer. It is not relevant when producing
@@ -90,7 +90,7 @@ identifying the peer will be propagated via the
.setHeader(VertxWebsocketConstants.CONNECTION_KEY).constant("key-1,key-2,key-3")
.to("vertx-websocket:localhost:8080/chat");
-# SSL
+## SSL
By default, the `ws://` protocol is used, but secure connections with
`wss://` are supported by configuring the consumer or producer via the
diff --git a/camel-wal.md b/camel-wal.md
new file mode 100644
index 0000000000000000000000000000000000000000..e83d20106552e67b97b763751872589403c9b9d2
--- /dev/null
+++ b/camel-wal.md
@@ -0,0 +1,34 @@
+# Wal.md
+
+**Since Camel 3.20**
+
+The WAL component provides a resume strategy that uses a write-ahead log
+to keep a transaction log of the in-processing and processed records.
+This strategy works by wrapping another strategy. It increases the
+reliability of the resume API by ensuring that records are saved locally
+before being sent to the remote data storage, thus guaranteeing that
+records can be recovered in case the system crashes.
+
+# Usage
+
+Because this strategy wraps another one, the wrapped strategy has to be
+created first and then passed as an argument when creating this
+strategy.
+
+ SomeOtherResumeStrategy resumeStrategy = new SomeOtherResumeStrategy();
+ final String logFile = System.getProperty("wal.log.file");
+
+ WriteAheadResumeStrategy writeAheadResumeStrategy = new WriteAheadResumeStrategy(new File(logFile), resumeStrategy);
+
+Subsequently, this strategy, rather than the wrapped one, should be
+registered in the registry:
+
+ getCamelContext().getRegistry().bind(ResumeStrategy.DEFAULT_NAME, writeAheadResumeStrategy);
+ ...
+
+ from("file:{{input.dir}}?noop=true&recursive=true&preSort=true")
+ .resumable(ResumeStrategy.DEFAULT_NAME)
+ .process(this::process)
+ .to("file:{{output.dir}}");
diff --git a/camel-wasm-language.md b/camel-wasm-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab600c2381fd193ca002c1a37d31836b090afcd2
--- /dev/null
+++ b/camel-wasm-language.md
@@ -0,0 +1,169 @@
+# Wasm-language.md
+
+**Since Camel 4.5**
+
+Camel supports Wasm functions for use as an
+[Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc).
+
+# Wasm Options
+
+# Writing A Wasm function
+
+In *Wasm*, sharing objects between the host, in this case the *JVM*, and
+the *Wasm* module is deliberately restricted and as of today, it
+requires a number of steps:
+
+1. From the *host*, call a function inside the WebAssembly module that
+   allocates a block of memory and returns its address, then save it
+
+2. From the *host*, write the data that should be exchanged with the
+ *Wasm* module to the saved address
+
+3. From the *host*, invoke the required function passing both the
+ address where the data is written and its size
+
+4. From the *Wasm* module, read the data and process it
+
+5. From the *host*, release the memory when done
+
+## Providing functions for memory management
+
+The module hosting the function **must** provide the functions to
+allocate/deallocate memory that **must** be named `alloc` and `dealloc`
+respectively.
+
+Here’s an example of the mentioned functions implemented in
+[Rust](https://www.rust-lang.org):
+
    use std::mem;

    #[no_mangle]
    pub extern "C" fn alloc(size: u32) -> *mut u8 {
        let mut buf = Vec::with_capacity(size as usize);
        let ptr = buf.as_mut_ptr();

        // tell Rust not to clean this up
        mem::forget(buf);

        ptr
    }

    #[no_mangle]
    pub unsafe extern "C" fn dealloc(ptr: *mut u8, len: i32) {
        // Retakes the pointer which allows its memory to be freed.
        let _ = Vec::from_raw_parts(ptr, 0, len as usize);
    }
+
+## Data shapes
+
+It is not possible to share a Java object with the Wasm module directly,
+and as mentioned before, data exchange leverages Wasm’s memory, which can
+be accessed by both the host and the guest runtimes. At this stage, the
+data structure that the component exchanges with the Wasm function is a
+subset of the Apache Camel Message, containing the headers and the body
+encoded as a base64 string:
+
+    public static class Wrapper {
+        @JsonProperty
+        public Map<String, Object> headers = new HashMap<>();
+
+        @JsonProperty
+        public byte[] body;
+    }
+
+## Data processing
+
+The component expects the processing function to have the following
+signature:
+
+ fn function(ptr: u32, len: u32) -> u64
+
+- it accepts two 32-bit unsigned integer arguments:
+
+  - a pointer to the memory location where the input data has been
+    written (`ptr`)
+
+  - the size of the input data (`len`)
+
+- it returns a 64-bit unsigned integer where:
+
+  - the first 32 bits represent a pointer to the return data
+
+  - the last 31 bits represent the size of the return data
+
+  - the most significant bit of the returned data size is reserved
+    to signal an error; if it is set, the return data may contain an
+    error message/code/etc.
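
The packing scheme above can be exercised host-side. The following is a self-contained sketch (the helper names are made up) of how a host can pack and unpack the 64-bit return value:

```java
public class WasmResult {
    // Pack a pointer and a length the way the layout above describes:
    // pointer in the upper 32 bits, size in the lower word.
    static long pack(long ptr, long len) {
        return (ptr << 32) | len;
    }

    // Upper 32 bits: pointer to the returned data.
    static long ptr(long packed) {
        return packed >>> 32;
    }

    // Lower 31 bits: size of the returned data.
    static long size(long packed) {
        return packed & 0x7FFF_FFFFL;
    }

    // Most significant bit of the size word signals an error.
    static boolean isError(long packed) {
        return (packed & 0x8000_0000L) != 0;
    }

    public static void main(String[] args) {
        long packed = pack(1024, 5);
        System.out.println(ptr(packed));     // 1024
        System.out.println(size(packed));    // 5
        System.out.println(isError(packed)); // false
    }
}
```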
+
+Here’s an example of a complete function:
+
    #[derive(Serialize, Deserialize)]
    struct Message {
        headers: HashMap<String, String>,

        #[serde(with = "Base64Standard")]
        body: Vec<u8>,
    }
+
+ #[cfg_attr(all(target_arch = "wasm32"), export_name = "transform")]
+ #[no_mangle]
+ pub extern fn transform(ptr: u32, len: u32) -> u64 {
+ let bytes = unsafe {
+ slice::from_raw_parts_mut(
+ ptr as *mut u8,
+ len as usize)
+ };
+
+ let msg: Message = serde_json::from_slice(bytes).unwrap();
+ let res = String::from_utf8(msg.body).unwrap().to_uppercase().as_bytes().to_vec();
+
+ let out_len = res.len();
+ let out_ptr = alloc(out_len as u32);
+
+ unsafe {
+ std::ptr::copy_nonoverlapping(
+ res.as_ptr(),
+ out_ptr,
+ out_len as usize)
+ };
+
+ return ((out_ptr as u64) << 32) | out_len as u64;
+ }
+
+# Examples
+
+Supposing we have compiled a Wasm module containing the function above,
+it can then be called in a Camel route by its name and module resource
+location:
+
+ try (CamelContext cc = new DefaultCamelContext()) {
+ FluentProducerTemplate pt = cc.createFluentProducerTemplate();
+
+ cc.addRoutes(new RouteBuilder() {
+ @Override
+ public void configure() throws Exception {
                from("direct:in")
                    .transform()
                    .wasm("transform", "classpath://functions.wasm");
+ }
+ });
+ cc.start();
+
+ Exchange out = pt.to("direct:in")
+ .withHeader("foo", "bar")
+ .withBody("hello")
+ .request(Exchange.class);
+
+ assertThat(out.getMessage().getHeaders())
+ .containsEntry("foo", "bar");
+ assertThat(out.getMessage().getBody(String.class))
+ .isEqualTo("HELLO");
+ }
+
+# Dependencies
+
+If you use Maven, you can add the following to your `pom.xml`,
+substituting the version number for the latest release.
+
+    <dependency>
+      <groupId>org.apache.camel</groupId>
+      <artifactId>camel-wasm</artifactId>
+      <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-wasm.md b/camel-wasm.md
index 7f6868f8fcf38a0cf0044ad4bd63239c98f9728e..d048d983609c8ff134bebf5e6243cd58d77282c2 100644
--- a/camel-wasm.md
+++ b/camel-wasm.md
@@ -25,7 +25,9 @@ for this component:
wasm://functionName?[options]
-# Writing A Wasm processor
+# Usage
+
+## Writing A Wasm processor
In *Wasm*, sharing objects between the host, in this case the *JVM*, and
the *Wasm* module is deliberately restricted and as of today, it
@@ -44,7 +46,7 @@ requires a number of steps:
5. From the *host*, release the memory when done
-## Providing functions for memory management
+### Providing functions for memory management
The module hosting the function **must** provide the functions to
allocate/deallocate memory that **must** be named `alloc` and `dealloc`
@@ -68,7 +70,7 @@ Here’s an example of the mentioned functions implemented in
let _ = Vec::from_raw_parts(ptr, 0, len as usize);
}
-## Data shapes
+### Data shapes
It is not possible to share a Java object with the Wasm module directly,
and as mentioned before, data exchange leverages Wasm’s memory that can
@@ -85,7 +87,7 @@ as a base64 string:
public byte[] body;
}
-## Data processing
+### Data processing
The component expects the processing function to have the following
signature:
diff --git a/camel-weather.md b/camel-weather.md
index b2dd2e28f4cae964cf64bcfb0cb99926d9cceff1..1205ba3bc8177605367bd34286f664ba7419626f 100644
--- a/camel-weather.md
+++ b/camel-weather.md
@@ -27,12 +27,14 @@ for this component:
weather://[?options]
-# Exchange data format
+# Usage
+
+## Exchange data format
Camel will deliver the body as a json formatted `java.lang.String` (see
the `mode` option above).
-# Samples
+# Examples
In this sample we find the 7-day weather forecast for Madrid, Spain:
diff --git a/camel-web3j.md b/camel-web3j.md
index f384670ecfd8e31f17ff23e0b2ce7eda421a0c11..28f36c95150c54d2dae54dec093c67ddac160df8 100644
--- a/camel-web3j.md
+++ b/camel-web3j.md
@@ -32,7 +32,7 @@ for this component:
All URI options can also be set as exchange headers.
-# Samples
+# Examples
Listen for new mined blocks and send the block hash to a jms queue:
diff --git a/camel-weightedLoadBalancer-eip.md b/camel-weightedLoadBalancer-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..33a42080965db71ae19a31edd2c84b7eb2fb8f4e
--- /dev/null
+++ b/camel-weightedLoadBalancer-eip.md
@@ -0,0 +1,37 @@
+# WeightedLoadBalancer-eip.md
+
+Weighted mode for the [Load Balancer](#loadBalance-eip.adoc) EIP. With
+this policy, in case of failures, the exchange will be tried on the next
+endpoint.
+
+# Options
+
+# Exchange properties
+
+# Examples
+
+In this example, we want to send most messages to the first endpoint,
+then some to the second, and only a few to the last.
+
+The distribution ratio is `7 = 4 + 2 + 1`. This means that out of every
+seven messages, 4 go to the first endpoint, 2 to the second, and 1 to
+the last.
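To make the ratio concrete, here is a small plain-Java sketch of a round-robin weighted split (illustrative only; this is not Camel's internal implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WeightedSplitSketch {

    // distribute messages across endpoints in round-robin weighted order
    static Map<Integer, Integer> distribute(int[] weights, int messages) {
        // expand the weights into a ratio list, e.g. 4,2,1 -> [0,0,0,0,1,1,2]
        List<Integer> ratio = new ArrayList<>();
        for (int endpoint = 0; endpoint < weights.length; endpoint++) {
            for (int i = 0; i < weights[endpoint]; i++) {
                ratio.add(endpoint);
            }
        }
        // count how many messages each endpoint receives
        Map<Integer, Integer> counts = new HashMap<>();
        for (int m = 0; m < messages; m++) {
            counts.merge(ratio.get(m % ratio.size()), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // 7 messages with weights 4,2,1: 4 to the first, 2 to the second, 1 to the last
        System.out.println(distribute(new int[]{4, 2, 1}, 7)); // {0=4, 1=2, 2=1}
    }
}
```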
+
+Java
+from("direct:start")
+.loadBalance().weighted(false, "4,2,1")
+.to("seda:x")
+.to("seda:y")
+.to("seda:z")
+.end();
+
+XML
+
+
+
+
+
+
+
+
+
diff --git a/camel-whatsapp.md b/camel-whatsapp.md
index 85abf81aca5b4d3fb819e1b4841c1e7d6aa8cb56..b8b1bc66787c2616c09a104d3e865a6c2f010308 100644
--- a/camel-whatsapp.md
+++ b/camel-whatsapp.md
@@ -37,7 +37,9 @@ for this component:
The WhatsApp component supports only producer endpoints.
-# Producer Example
+# Examples
+
+## Producer Example
The following is a basic example of how to send a message to a WhatsApp
chat through the Business Cloud API.
@@ -62,9 +64,9 @@ Supported API are:
and
[Media](https://developers.facebook.com/docs/whatsapp/cloud-api/reference/media)
-# Webhook Mode
+## Webhook Mode
-The Whatsapp component supports usage in the **webhook mode** using the
+The WhatsApp component supports usage in the **webhook mode** using the
**camel-webhook** component.
To enable webhook mode, users need first to add a REST implementation to
diff --git a/camel-wireTap-eip.md b/camel-wireTap-eip.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ac116f62b329cbcb1226726ff1cbf7987a560bf
--- /dev/null
+++ b/camel-wireTap-eip.md
@@ -0,0 +1,125 @@
+# WireTap-eip.md
+
+[Wire Tap](http://www.enterpriseintegrationpatterns.com/WireTap.html)
+from the [EIP patterns](#enterprise-integration-patterns.adoc) allows
+you to route messages to a separate location while they are being
+forwarded to the ultimate destination.
+
+
+
+
+
+# Options
+
+# Exchange properties
+
+# Wire Tap
+
+Camel’s Wire Tap will copy the original
+[Exchange](#manual::exchange.adoc) and set its [Exchange
+Pattern](#manual::exchange-pattern.adoc) to **`InOnly`**, as we want the
+tapped [Exchange](#manual::exchange.adoc) to be sent in a fire and
+forget style. The tapped [Exchange](#manual::exchange.adoc) is then sent
+in a separate thread, so it can run in parallel with the original.
+Beware that only the `Exchange` is copied; Wire Tap won’t do a deep
+clone (unless you specify a custom processor via **`onPrepare`** that
+does so). So all copies could share objects from the original
+`Exchange`.
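The shallow-copy caveat is easy to demonstrate in plain Java: copying a container without cloning its values leaves both copies pointing at the same mutable objects, which is exactly the situation for the tapped Exchange unless a custom `onPrepare` processor deep-clones the payload (a standalone sketch, not Camel API):

```java
import java.util.HashMap;
import java.util.Map;

public class ShallowCopySketch {
    public static void main(String[] args) {
        Map<String, StringBuilder> original = new HashMap<>();
        original.put("body", new StringBuilder("hello"));

        // a shallow copy: a new map, but the same StringBuilder instances
        Map<String, StringBuilder> tapped = new HashMap<>(original);

        // mutating through the tapped copy is visible in the original
        tapped.get("body").append(" tapped");
        System.out.println(original.get("body")); // hello tapped
    }
}
```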
+
+## Using Wire Tap
+
+In the example below, the exchange is wire tapped to the direct:tap
+route. This route delays the message by one second before continuing.
+This allows you to see that the tapped message is routed independently
+of the original route, so that log:result is logged before log:tap.
+
+Java
+from("direct:start")
+.to("log:foo")
+.wireTap("direct:tap")
+.to("log:result");
+
+ from("direct:tap")
+ .delay(1000).setBody().constant("Tapped")
+ .to("log:tap");
+
+XML
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+YAML
+\- from:
+uri: direct:start
+steps:
+\- wireTap:
+uri: direct:tap
+\- to:
+uri: log:result
+\- from:
+uri: direct:tap
+steps:
+\- to:
+uri: log:tap
+
+## Wire tapping with dynamic URIs
+
+To wire tap to a dynamic URI, the URI can use the
+[Simple](#components:languages:simple-language.adoc) language, which
+allows constructing dynamic URIs.
+
+For example, to wire tap to a JMS queue where the header ID is part of
+the queue name:
+
+Java
+from("direct:start")
+.wireTap("jms:queue:backup-${header.id}")
+.to("bean:doSomething");
+
+XML
+
+
+
+
+
+
+YAML
+\- from:
+uri: direct:start
+steps:
+\- wireTap:
+uri: jms:queue:backup-${header.id}
+\- to:
+uri: bean:doSomething
+
+# WireTap Thread Pools
+
+The WireTap uses a thread pool to process the tapped messages. This
+thread pool will by default use the settings detailed in the [Threading
+Model](#manual::threading-model.adoc).
+
+In particular, when the pool is exhausted (with all threads used),
+further wiretaps will be executed synchronously by the calling thread.
+To remedy this, you can configure an explicit thread pool on the Wire
+Tap having either a different rejection policy, a larger worker queue,
+or more worker threads.
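The exhaustion behavior described above corresponds to a caller-runs rejection policy in plain `java.util.concurrent` terms. A standalone sketch (not Camel's actual pool wiring) showing the submitting thread doing the work itself once the worker and the queue are both full:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch release = new CountDownLatch(1);

        // one worker thread, a one-slot queue, caller-runs on rejection
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.execute(() -> {                 // occupies the single worker
            try { release.await(); } catch (InterruptedException ignored) { }
        });
        pool.execute(() -> { });             // fills the one-slot queue

        String[] ranOn = new String[1];
        pool.execute(() -> ranOn[0] = Thread.currentThread().getName());

        // the third task was rejected and executed synchronously by the caller
        System.out.println(ranOn[0].equals(Thread.currentThread().getName())); // true

        release.countDown();
        pool.shutdown();
    }
}
```

Configuring an explicit thread pool on the Wire Tap with a larger queue or more workers avoids this synchronous fallback.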
+
+# Wire tapping Streaming based messages
+
+If you wire tap a streaming message body, then you should consider
+enabling [Stream caching](#manual::stream-caching.adoc) to ensure the
+message body can be read at each endpoint.
+
+See more details at [Stream caching](#manual::stream-caching.adoc).
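The underlying issue is plain stream semantics: once an `InputStream` is drained, a second read yields nothing, so an endpoint after the tap would see an empty body unless caching buffers the content. A minimal illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class StreamReadOnce {

    // drain the stream into a String
    static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
        }
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        InputStream body = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(body)); // hello
        System.out.println(readAll(body).isEmpty()); // true - the stream is exhausted
    }
}
```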
diff --git a/camel-wordpress.md b/camel-wordpress.md
index 2d4b2ae3ca3996e6a49008f585519d8a179a832c..c92d18463903aa848dd32efee6af2259afb213ed 100644
--- a/camel-wordpress.md
+++ b/camel-wordpress.md
@@ -17,7 +17,9 @@ the following `Consumer` as example:
wordpress:post?criteria.perPage=10&criteria.orderBy=author&criteria.categories=camel,dozer,json
-# Configuring WordPress component
+# Usage
+
+## Configuring WordPress component
The `WordpressConfiguration` class can be used to set initial properties
configuration to the component instead of passing it as query parameter.
@@ -36,7 +38,9 @@ routes.
.to("mock:result");
}
-# Consumer Example
+# Examples
+
+## Consumer Example
The consumer polls domain objects from the WordPress API from time to
time. The following is an example using the `Post` operation:
@@ -45,7 +49,7 @@ WordPress. Following, an example using the `Post` operation:
- `wordpress:post?id=1` search for a specific post
-# Producer Example
+## Producer Example
The producer performs write operations on WordPress, like adding a new
user or updating a post. To be able to write, you must have an authorized user
@@ -61,7 +65,7 @@ credentials (see Authentication).
- `wordpress:post:delete?id=1` deletes a specific post
-# Authentication
+## Authentication
Producers that perform write operations, e.g., creating a new post,
[must have an authenticated
diff --git a/camel-xchange.md b/camel-xchange.md
index d9cf2fa873bbb238d4f0d1bc5582d79454e752f4..5b17f0833a78b4e28a0d08474f398954d5cfe777 100644
--- a/camel-xchange.md
+++ b/camel-xchange.md
@@ -26,11 +26,13 @@ for this component:
xchange://exchange?options
-# Authentication
+# Usage
-This component communicates with supported crypto currency exchanges via
+## Authentication
+
+This component communicates with supported cryptocurrency exchanges via
REST API. Some API requests use simple unauthenticated GET request. For
-most of the interesting stuff however, you’d need an account with the
+most of the interesting stuff, however, you’d need an account with the
exchange and have API access keys enabled.
These API access keys need to be guarded tightly, especially so when
@@ -39,7 +41,7 @@ who can get hold of your API keys can easily transfer funds from your
account to some other address i.e. steal your money.
Your API access keys can be stored in an exchange specific properties
-file in your SSH directory. For Binance for example this would be:
+file in your SSH directory. For Binance, for example, this would be:
`~/.ssh/binance-secret.keys`
##
@@ -49,7 +51,7 @@ file in your SSH directory. For Binance for example this would be:
apiKey = GuRW0*********
secretKey = nKLki************
-# Samples
+# Examples
In this sample we find the current Bitcoin market price in USDT:
diff --git a/camel-xj.md b/camel-xj.md
index 139b5a503b5d71d2eaa3d4076d85890d912f84e5..4a1d746a9b42d7140e60ce379281ba8dec28e062 100644
--- a/camel-xj.md
+++ b/camel-xj.md
@@ -35,15 +35,15 @@ XML2JSON or JSON2XML.
The **templateName** parameter allows using the *identity transform* by
specifying the name `identity`.
-# Using XJ endpoints
+# Usage
## Converting JSON to XML
The following route does an "identity" transform of the message because
-no xslt stylesheet is given. In the context of xml to xml
+no xslt stylesheet is given. In the context of XML to XML
transformations, "Identity" transform means that the output document is
just a copy of the input document. In the case of XJ, it means it
-transforms the json document to an equivalent xml representation.
+transforms the JSON document to an equivalent XML representation.
from("direct:start").
to("xj:identity?transformDirection=JSON2XML");
@@ -193,15 +193,15 @@ will result in
}
}
-You may have noted that the input xml and output json are very similar
+You may have noted that the input XML and output JSON are very similar
to the examples above when converting from json to xml, although nothing
special is done here. We only transformed an arbitrary XML document to
-json. XJ uses the following rules by default:
+JSON. XJ uses the following rules by default:
- The XML root element can be named somehow, it will always end in a
- json root object declaration `{}`
+ JSON root object declaration `{}`
-- The json key name is the name of the xml element
+- The JSON key name is the name of the XML element
- If there is a name clash as in `` above where two ``
elements exist a json array will be generated.
@@ -300,10 +300,10 @@ and get the following output:
}
}
-Note, this transformation resulted in exactly the same json document as
+Note, this transformation resulted in exactly the same JSON document as
we used as input to the *json2xml* conversion. What did the stylesheet
-do? We just gave some hints to XJ on how to write the json document. The
-following XML document is that what is passed to XJ after xsl
+do? We just gave some hints to XJ on how to write the JSON document. The
+following XML document is that what is passed to XJ after XSL
transformation:
@@ -326,10 +326,10 @@ transformation:
In the stylesheet we just provided the minimal required type hints to
get the same result. The supported type hints are exactly the same as XJ
-writes to a XML document when converting from json to xml.
+writes to an XML document when converting from json to xml.
In the end, that means that we can feed back in the result document from
-the json to xml transformation sample above:
+the JSON to XML transformation sample above:
@@ -372,7 +372,7 @@ As seen in the example above:
- xj:type lets you specify exactly the desired output type
-- xj:name lets you overrule the json key name.
+- xj:name lets you overrule the JSON key name.
This is required when you want to generate key names that contain chars
that aren’t allowed in XML element names.
@@ -385,40 +385,40 @@ that aren’t allowed in XML element names.
-
+
-
-object
-Generate a json object
+
+object
+Generate a JSON object
-
-array
-Generate a json array
+
+array
+Generate a JSON array
-
-string
-Generate a json string
+
+string
+Generate a JSON string
-
-int
-Generate a json number without
+
+int
+Generate a JSON number without
fractional part
-
-float
-Generate a json number with fractional
+
+float
+Generate a JSON number with fractional
part
-
-boolean
-Generate a json boolean
+
+boolean
+Generate a JSON boolean
-
-null
+
+null
Generate an empty value using the word
null
diff --git a/camel-xmlSecurity-dataformat.md b/camel-xmlSecurity-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..09c5b7cff947dce477ed7926bf85257d920c845c
--- /dev/null
+++ b/camel-xmlSecurity-dataformat.md
@@ -0,0 +1,180 @@
+# XmlSecurity-dataformat.md
+
+**Since Camel 2.0**
+
+The XMLSecurity Data Format facilitates encryption and decryption of XML
+payloads at the Document, Element, and Element Content levels (including
+simultaneous multi-node encryption/decryption using XPath). To sign
+messages using the XML Signature specification, please see the Camel XML
+Security component.
+
+The encryption capability is based on formats supported using the Apache
+XML Security (Santuario) project. Symmetric encryption/decryption is
+currently supported using Triple-DES and AES (128, 192, and 256)
+encryption formats. Additional formats can be easily added later as
+needed. This capability allows Camel users to encrypt/decrypt payloads
+while being dispatched or received along a route.
+
+**Since Camel 2.9**
+The XMLSecurity Data Format supports asymmetric key encryption. In this
+encryption model, a symmetric key is generated and used to perform XML
+content encryption or decryption. This "content encryption key" is then
+itself encrypted using an asymmetric encryption algorithm that leverages
+the recipient’s public key as the "key encryption key". Use of an
+asymmetric key encryption algorithm ensures that only the holder of the
+recipient’s private key can access the generated symmetric encryption
+key. Thus, only the private key holder can decode the message. The
+XMLSecurity Data Format handles all the logic required to encrypt and
+decrypt the message content and encryption key(s) using asymmetric key
+encryption.
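The hybrid model described above (a symmetric content encryption key wrapped by an asymmetric key-encryption key) can be sketched with plain JCA, independent of Santuario's actual XML wire format (illustrative only):

```java
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class HybridEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // content encryption key: symmetric AES
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey cek = kg.generateKey();

        // key encryption key: the recipient's RSA key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair recipient = kpg.generateKeyPair();

        // wrap the content key with the recipient's public key (OAEP)
        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        wrap.init(Cipher.WRAP_MODE, recipient.getPublic());
        byte[] wrappedCek = wrap.wrap(cek);

        // only the private-key holder can unwrap it
        Cipher unwrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        unwrap.init(Cipher.UNWRAP_MODE, recipient.getPrivate());
        Key recovered = unwrap.unwrap(wrappedCek, "AES", Cipher.SECRET_KEY);

        System.out.println(java.util.Arrays.equals(
                cek.getEncoded(), recovered.getEncoded())); // true
    }
}
```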
+
+The XMLSecurity Data Format also has improved support for namespaces
+when processing the XPath queries that select content for encryption. A
+namespace definition mapping can be included as part of the data format
+configuration. This enables true namespace matching, even if the prefix
+values in the XPath query and the target XML document are not equivalent
+strings.
+
+# XMLSecurity Options
+
+## Key Cipher Algorithm
+
+The default Key Cipher Algorithm is now `XMLCipher.RSA_OAEP` instead of
+`XMLCipher.RSA_v1dot5`. Usage of `XMLCipher.RSA_v1dot5` is discouraged
+due to various attacks. Requests that use RSA v1.5 as the key cipher
+algorithm will be rejected unless it has been explicitly configured as
+the key cipher algorithm.
+
+# Marshal
+
+To encrypt the payload, the `marshal` processor needs to be applied on
+the route followed by the **`xmlSecurity()`** tag.
+
+# Unmarshal
+
+To decrypt the payload, the `unmarshal` processor needs to be applied on
+the route followed by the **`xmlSecurity()`** tag.
+
+# Examples
+
+Given below are several examples of how marshalling could be performed
+at the Document, Element, and Content levels.
+
+## Full Payload encryption/decryption
+
+ KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
+ keyGenerator.init(256);
+ Key key = keyGenerator.generateKey();
+
+    from("direct:start")
+        .marshal().xmlSecurity(key.getEncoded())
+        .unmarshal().xmlSecurity(key.getEncoded())
+        .to("direct:end");
+
+## Partial Payload Content Only encryption/decryption with choice of passPhrase(password)
+
+ String tagXPATH = "//cheesesites/italy/cheese";
+ boolean secureTagContent = true;
+ ...
+ String passPhrase = "Just another 24 Byte key";
+ from("direct:start")
+ .marshal().xmlSecurity(tagXPATH, secureTagContent, passPhrase)
+ .unmarshal().xmlSecurity(tagXPATH, secureTagContent, passPhrase)
+ .to("direct:end");
+
+## Partial Payload Content Only encryption/decryption with passPhrase(password) and Algorithm
+
+ import org.apache.xml.security.encryption.XMLCipher;
+ ....
+ String tagXPATH = "//cheesesites/italy/cheese";
+ boolean secureTagContent = true;
+ String passPhrase = "Just another 24 Byte key";
+    String algorithm = XMLCipher.TRIPLEDES;
+ from("direct:start")
+ .marshal().xmlSecurity(tagXPATH, secureTagContent, passPhrase, algorithm)
+ .unmarshal().xmlSecurity(tagXPATH, secureTagContent, passPhrase, algorithm)
+ .to("direct:end");
+
+## Partial Payload Content with Namespace support
+
+Java DSL
+
+    final Map<String, String> namespaces = new HashMap<>();
+ namespaces.put("cust", "http://cheese.xmlsecurity.camel.apache.org/");
+
+ final KeyStoreParameters tsParameters = new KeyStoreParameters();
+ tsParameters.setPassword("password");
+ tsParameters.setResource("sender.truststore");
+
+    context.addRoutes(new RouteBuilder() {
+        public void configure() {
+            from("direct:start")
+                .marshal().xmlSecurity("//cust:cheesesites/italy", namespaces, true, "recipient",
+                    testCypherAlgorithm, XMLCipher.RSA_v1dot5, tsParameters)
+                .to("mock:encrypted");
+        }
+    });
+
+Spring XML
+
+A namespace prefix defined as part of the `camelContext` definition can
+be re-used in context within the data format `secureTag` attribute of
+the `xmlSecurity` element.
+
+
+
+
+
+
+
+ ...
+
+## Asymmetric Key Encryption
+
+Spring XML Sender
+
+
+
+
+
+
+
+
+
+
+ ...
+
+Spring XML Recipient
+
+
+
+
+
+
+
+
+
+
+ ...
+
+# Dependencies
+
+This data format is provided within the **camel-xmlsecurity** component.
diff --git a/camel-xmlsecurity-sign.md b/camel-xmlsecurity-sign.md
index 7f64219d7ed424cf1be48e556cfd4575d6e3c11e..dc92b8c833d082723b8dbbdeb9f02e118b1503a9 100644
--- a/camel-xmlsecurity-sign.md
+++ b/camel-xmlsecurity-sign.md
@@ -199,7 +199,7 @@ Signatures as Siblings of the Signed Elements".
## Output Node Determination in Enveloping XML Signature Case
-After the validation the node is extracted from the XML signature
+After the validation, the node is extracted from the XML signature
document which is finally returned to the output-message body. In the
enveloping XML signature case, the default implementation
[`DefaultXmlSignature2Message`](https://github.com/apache/camel/blob/main/components/camel-xmlsecurity/src/main/java/org/apache/camel/component/xmlsecurity/api/DefaultXmlSignature2Message.java)
@@ -208,7 +208,7 @@ of
does this for the node search type `Default` in the following way (see
option `xmlSignature2Message`):
-- First an object reference is determined:
+- First, an object reference is determined:
- Only same document references are taken into account (URI must
start with `#`)
@@ -315,7 +315,7 @@ defined in the XML schema (see option `schemaResourceUri`). You specify
a list of XPATH expressions pointing to attributes of type ID (see
option `xpathsToIdAttributes`). These attributes determine the elements
to be signed. The elements are signed by the same key given by the
-`keyAccessor` bean. Elements with higher (i.e. deeper) hierarchy level
+`keyAccessor` bean. Elements with higher (i.e., deeper) hierarchy level
are signed first. In the example, the element `C` is signed before the
element `A`.
@@ -406,7 +406,7 @@ you must overwrite either the method
`DefaultXAdESSignatureProperties` overwrites the method
`getSigningCertificate()` and allows you to specify the signing
certificate via a keystore and alias. The following example shows all
-parameters you can specify. If you do not need certain parameters you
+parameters you can specify. If you do not need certain parameters, you
can just omit them.
**XAdES-BES/EPES Example in Java DSL**
diff --git a/camel-xmlsecurity-verify.md b/camel-xmlsecurity-verify.md
index 45b3e69b5d7bf1e69b8dc8b23006d16475cf8e3c..7f3f80baf21a1f33a8ab01ecdb4eb40b5f50a5cd 100644
--- a/camel-xmlsecurity-verify.md
+++ b/camel-xmlsecurity-verify.md
@@ -43,8 +43,8 @@ URI format:
xmlsecurity-verify:name[?options]
- With the signer endpoint, you can generate a XML signature for the
- body of the in-message which can be either a XML document or a plain
- text. The enveloped, enveloping, or detached (as of 12.14) XML
+ body of the in-message, which can be either an XML document or
+ plain text. The enveloped, enveloping, or detached (as of 12.14) XML
signature(s) will be set to the body of the out-message.
- With the verifier endpoint, you can validate an enveloped or
@@ -190,8 +190,8 @@ In the example, the default signature algorithm
`\http://www.w3.org/2000/09/xmldsig#rsa-sha1` is used. You can set the
signature algorithm of your choice by the option `signatureAlgorithm`
(see below). The signer endpoint creates an *enveloping* XML signature.
-If you want to create an *enveloped* XML signature then you must specify
-the parent element of the Signature element; see option
+If you want to create an *enveloped* XML signature, then you must
+specify the parent element of the Signature element; see option
`parentLocalName` for more details.
For creating *detached* XML signatures, see sub-chapter "Detached XML
@@ -340,7 +340,7 @@ you must overwrite either the method
`DefaultXAdESSignatureProperties` overwrites the method
`getSigningCertificate()` and allows you to specify the signing
certificate via a keystore and alias. The following example shows all
-parameters you can specify. If you do not need certain parameters you
+parameters you can specify. If you do not need certain parameters, you
can just omit them.
**XAdES-BES/EPES Example in Java DSL**
diff --git a/camel-xmpp.md b/camel-xmpp.md
index a5d233dbffb87cfdc374a2f057f7f46c86b7ad5b..4ab3b240fc1d7c6db32536bc95562bfc80aec52a 100644
--- a/camel-xmpp.md
+++ b/camel-xmpp.md
@@ -26,7 +26,9 @@ The component supports both producer and consumer (you can get messages
from XMPP or send messages to XMPP). Consumer mode supports rooms
starting.
-# Headers and setting Subject or Language
+# Usage
+
+## Headers and setting Subject or Language
Camel sets the message IN headers as properties on the XMPP message. You
can configure a `HeaderFilterStategy` if you need custom filtering of
diff --git a/camel-xpath-language.md b/camel-xpath-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..f56fca890b88a2c64f25ce827871cd60ab10c7b5
--- /dev/null
+++ b/camel-xpath-language.md
@@ -0,0 +1,599 @@
+# Xpath-language.md
+
+**Since Camel 1.1**
+
+Camel supports [XPath](http://www.w3.org/TR/xpath) to allow an
+[Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) to be used in the
+[DSL](#manual::dsl.adoc).
+
+For example, you could use XPath to create a predicate in a [Message
+Filter](#eips:filter-eip.adoc) or as an expression for a [Recipient
+List](#eips:recipientList-eip.adoc).
+
+# XPath Language options
+
+# Namespaces
+
+You can use namespaces with XPath expressions using the `Namespaces`
+helper class.
+
+# Variables
+
+Variables in XPath are defined in different namespaces. The default
+namespace is `http://camel.apache.org/schema/spring`.
+
+
+
+Camel will resolve variables according to either:
+
+- namespace given
+
+- no namespace given
+
+## Namespace given
+
+If the namespace is given, then Camel is instructed exactly what to
+return. However, when resolving, Camel will first try to resolve a
+header with the given local part, and return it. If the local part has
+the value **body**, then the body is returned instead.
+
+## No namespace given
+
+If there is no namespace given, then Camel resolves only based on the
+local part. Camel will try to resolve a variable in the following steps:
+
+- from `variables` that have been set using the `variable(name, value)`
+  fluent builder
+
+- from `message.in.header` if there is a header with the given key
+
+- from `exchange.properties` if there is a property with the given key
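The lookup order above can be sketched as a simple chain of maps (illustrative only, not Camel's actual resolver):

```java
import java.util.HashMap;
import java.util.Map;

public class VariableResolutionSketch {

    // resolve a name against variables, then headers, then exchange properties
    static Object resolve(String name, Map<String, Object> variables,
                          Map<String, Object> headers, Map<String, Object> properties) {
        if (variables.containsKey(name)) {
            return variables.get(name);     // 1. explicit variable(name, value)
        }
        if (headers.containsKey(name)) {
            return headers.get(name);       // 2. message.in.header
        }
        return properties.get(name);        // 3. exchange.properties (or null)
    }

    public static void main(String[] args) {
        Map<String, Object> variables = new HashMap<>();
        Map<String, Object> headers = new HashMap<>();
        Map<String, Object> properties = new HashMap<>();
        headers.put("city", "Madrid");
        properties.put("city", "Rome");
        // the header wins over the exchange property
        System.out.println(resolve("city", variables, headers, properties)); // Madrid
    }
}
```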
+
+# Functions
+
+Camel adds the following XPath functions that can be used to access the
+exchange:
+
+| Function | Argument | Type | Description |
+|---|---|---|---|
+| `in:body` | none | Object | Will return the message body. |
+| `in:header` | the header name | Object | Will return the message header. |
+| `out:body` | none | Object | **deprecated** Will return the out message body. |
+| `out:header` | the header name | Object | **deprecated** Will return the out message header. |
+| `function:properties` | key for property | String | To use a Property Placeholder. |
+| `function:simple` | simple expression | Object | To evaluate a Simple language. |
+
+`function:properties` and `function:simple` are not supported when the
+return type is a `NodeSet`, such as when using them with the
+[Split](#eips:split-eip.adoc) EIP.
+
+Here’s an example showing some of these functions in use.
+
+## Functions example
+
+If you prefer to configure your routes in your Spring XML file, then you
+can use XPath expressions as follows
+
+
+
+
+
+
+
+ /foo:person[@name='James']
+
+
+
+
+
+
+Notice how we can reuse the namespace prefixes, **foo** in this case, in
+the XPath expression for easier namespace-based XPath expressions.
+
+# Stream-based message bodies
+
+If the message body is stream-based, the input is submitted to Camel as
+a stream, which means you will only be able to read the content of the
+stream **once**.
+
+You often need to access the data multiple times when you use
+[XPath](#xpath-language.adoc) as a [Message
+Filter](#xpath-language.adoc) or Content-Based Router. In that case, you
+should use Stream Caching or convert the message body to a `String`
+beforehand, which is safe to re-read multiple times.
+
+ from("queue:foo").
+ filter().xpath("//foo").
+ to("queue:bar")
+
+ from("queue:foo").
+ choice().xpath("//foo").to("queue:bar").
+ otherwise().to("queue:others");
+
+# Setting a result type
+
+The XPath expression will return a result type using native XML objects
+such as `org.w3c.dom.NodeList`. However, many times you want a result
+type to be a `String`. To do this, you have to instruct the XPath which
+result type to use.
+
+In Java DSL:
+
+ xpath("/foo:person/@id", String.class)
+
+In XML DSL you use the **resultType** attribute to provide the fully
+qualified classname.
+
+ /foo:person/@id
+
+Classes from `java.lang` can omit the FQN, so you can use
+`resultType="String"`.
+
+Using `@XPath` annotation:
+
+    @XPath(value = "concat('foo-', //order/name)", resultType = String.class) String name
+
+Here we use the XPath function `concat` to prefix the order name with
+`foo-`. In this case, we have to specify that we want a `String` as the
+result type, so the concat function works.
+
+# Using XPath on Headers
+
+Some users may have XML stored in a header. To apply an XPath to a
+header’s value, you can do so by defining the *headerName* attribute.
+
+ /invoice/@orderType = 'premium'
+
+And in Java DSL you specify the headerName as the second parameter as
+shown:
+
+ xpath("/invoice/@orderType = 'premium'", "invoiceDetails")
+
+# Example
+
+Here is a simple example using an XPath expression as a predicate in a
+[Message Filter](#eips:filter-eip.adoc):
+
+ from("direct:start")
+ .filter().xpath("/person[@name='James']")
+ .to("mock:result");
+
+And in XML
+
+
+
+
+ /person[@name='James']
+
+
+
+
+# Using namespaces
+
+If you have a standard set of namespaces you wish to work with and wish
+to share them across many XPath expressions, you can use the
+`org.apache.camel.support.builder.Namespaces` when using Java DSL as
+shown:
+
+ Namespaces ns = new Namespaces("c", "http://acme.com/cheese");
+
+ from("direct:start")
+ .filter(xpath("/c:person[@name='James']", ns))
+ .to("mock:result");
+
+Notice how the namespaces are provided to `xpath` via the `ns` variable
+that is passed in as the second parameter.
+
+Each namespace is a key=value pair, where the prefix is the key. In the
+XPath expression then the namespace is used by its prefix, e.g.:
+
+ /c:person[@name='James']
+
+The namespace builder supports adding multiple namespaces as shown:
+
+ Namespaces ns = new Namespaces("c", "http://acme.com/cheese")
+ .add("w", "http://acme.com/wine")
+ .add("b", "http://acme.com/beer");
+
+When using namespaces in XML DSL, then it is different, as you set up
+the namespaces in the XML root tag (or one of the `camelContext`,
+`routes`, `route` tags).
+
+In the XML example below we use Spring XML where the namespace is
+declared in the root tag `beans`, in the line with
+`xmlns:foo="http://example.com/person"`:
+
+
+
+
+
+
+
+ /foo:person[@name='James']
+
+
+
+
+
+
+
+This namespace uses `foo` as prefix, so the `` expression uses
+`foo:` to use this namespace.
+
+# Using @XPath Annotation for Bean Integration
+
+You can use [Bean Integration](#manual::bean-integration.adoc) to invoke
+a method on a bean and use various languages such as `@XPath` to extract
+a value from the message and bind it to a method parameter.
+
+The default `@XPath` annotation has SOAP and XML namespaces available.
+
+ public class Foo {
+
+ @Consume(uri = "activemq:my.queue")
+ public void doSomething(@XPath("/person/@name") String name, String xml) {
+ // process the inbound message here
+ }
+ }
+
+# Using XPathBuilder without an Exchange
+
+You can now use the `org.apache.camel.language.xpath.XPathBuilder`
+without the need for an `Exchange`. This comes in handy if you want to
+use it as a helper to perform custom XPath evaluations.
+
+It requires that you pass in a `CamelContext`, since many of the moving
+parts inside the `XPathBuilder` require access to the Camel [Type
+Converter](#manual:ROOT:type-converter.adoc), which is why the
+`CamelContext` is needed.
+
+For example, you can do something like this:
+
+    boolean matches = XPathBuilder.xpath("/foo/bar/@xyz").matches(context, " ");
+
+This will match the given predicate.
+
+You can also evaluate as shown in the following three examples:
+
+ String name = XPathBuilder.xpath("foo/bar").evaluate(context, "cheese ", String.class);
+ Integer number = XPathBuilder.xpath("foo/bar").evaluate(context, "123 ", Integer.class);
+ Boolean bool = XPathBuilder.xpath("foo/bar").evaluate(context, "true ", Boolean.class);
+
+Evaluating with a `String` result is a common requirement, so this has
+been made simpler:
+
+ String name = XPathBuilder.xpath("foo/bar").evaluate(context, "cheese ");
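For comparison, a similar standalone evaluation can be done with the JDK's own `javax.xml.xpath` API, with no Camel involved (a sketch with a made-up sample document):

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class JaxpEvaluateSketch {
    public static void main(String[] args) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // evaluate returns the string value of the selected node
        String name = xpath.evaluate("/foo/bar",
                new InputSource(new StringReader("<foo><bar>cheese</bar></foo>")));
        System.out.println(name); // cheese
    }
}
```

`XPathBuilder` adds type conversion on top of this, which is why it needs the `CamelContext`.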
+
+# Using Saxon with XPathBuilder
+
+You need to add **camel-saxon** as dependency to your project.
+
+It’s now easier to use [Saxon](http://saxon.sourceforge.net/) with the
+XPathBuilder, which can be done in several ways, as shown below:
+
+- Using a custom XPathFactory
+
+- Using ObjectModel
+
+- Setting the `saxon` option (the easy one)
+
+## Setting a custom XPathFactory using System Property
+
+Camel supports reading the JVM system property
+`javax.xml.xpath.XPathFactory` that can be used to set a custom
+XPathFactory to use.
+
+This unit test shows how this can be done to use Saxon instead:
+
+Camel will log at `INFO` level if it uses a non-default `XPathFactory`
+such as:
+
+ XPathBuilder INFO Using system property javax.xml.xpath.XPathFactory:http://saxon.sf.net/jaxp/xpath/om with value:
+ net.sf.saxon.xpath.XPathFactoryImpl when creating XPathFactory
+
+To use Apache Xerces, you can configure the system property
+
+ -Djavax.xml.xpath.XPathFactory=org.apache.xpath.jaxp.XPathFactoryImpl
+
+## Enabling Saxon from XML DSL
+
+Similarly to Java DSL, to enable Saxon from XML DSL, you have three
+options:
+
+Referring to a custom factory:
+
+    <xpath factoryRef="saxonFactory" resultType="java.lang.String">current-dateTime()</xpath>
+
+And declare a bean with the factory:
+
+    <bean id="saxonFactory" class="net.sf.saxon.xpath.XPathFactoryImpl"/>
+
+Specifying the object model:
+
+    <xpath objectModel="http://saxon.sf.net/jaxp/xpath/om" resultType="java.lang.String">current-dateTime()</xpath>
+
+And the recommended approach is to set `saxon=true` as shown:
+
+    <xpath saxon="true" resultType="java.lang.String">current-dateTime()</xpath>
+
+# Namespace auditing to aid debugging
+
+Many XPath-related issues that users frequently face are linked to the
+usage of namespaces. You may have some misalignment between the
+namespaces present in your message and those that your XPath expression
+is aware of or referencing. XPath predicates or expressions that are
+unable to locate the XML elements and attributes due to namespaces
+issues may look like *they are not working*, when in reality all there
+is to it is a lack of namespace definition.
+
+Namespaces in XML are completely necessary, and while we would love to
+simplify their usage by implementing some magic or voodoo to wire
+namespaces automatically, the truth is that any action down this path
+would disagree with the standards and would greatly hinder
+interoperability.
+
+Therefore, the utmost we can do is assist you in debugging such issues
+by adding two features to the XPath Expression Language, which are thus
+accessible from both predicates and expressions.
+
+## Logging the Namespace Context of your XPath expression/predicate
+
+Every time a new XPath expression is created in the internal pool, Camel
+will log the namespace context of the expression under the
+`org.apache.camel.language.xpath.XPathBuilder` logger. Since Camel
+represents Namespace Contexts in a hierarchical fashion (parent-child
+relationships), the entire tree is output in a recursive manner with the
+following format:
+
+ [me: {prefix -> namespace}, {prefix -> namespace}], [parent: [me: {prefix -> namespace}, {prefix -> namespace}], [parent: [me: {prefix -> namespace}]]]
+
+Any of these options can be used to activate this logging:
+
+- Enable TRACE logging on the
+ `org.apache.camel.language.xpath.XPathBuilder` logger, or some
+ parent logger such as `org.apache.camel` or the root logger
+
+- Enable the `logNamespaces` option as indicated in the following
+ section, in which case the logging will occur on the INFO level
+
+## Auditing namespaces
+
+Camel is able to discover and dump all namespaces present on every
+incoming message before evaluating an XPath expression, providing all
+the richness of information you need to help you analyze and pinpoint
+possible namespace issues.
+
+To achieve this, it in turn internally uses another specially tailored
+XPath expression to extract all namespace mappings that appear in the
+message, displaying the prefix and the full namespace URI(s) for each
+mapping.
+
+Some points to take into account:
+
+- The implicit XML namespace
+ (`xmlns:xml="http://www.w3.org/XML/1998/namespace"`) is suppressed
+ from the output because it adds no value
+
+- Default namespaces are listed under the `DEFAULT` keyword in the
+ output
+
+- Keep in mind that namespaces can be remapped under different scopes.
+ Think of a top-level `a` prefix which in inner elements can be
+ assigned a different namespace, or the default namespace changing in
+ inner scopes. For each discovered prefix, all associated URIs are
+ listed.
+
+You can enable this option in Java DSL and XML DSL:
+
+Java DSL:
+
+ XPathBuilder.xpath("/foo:person/@id", String.class).logNamespaces()
+
+XML DSL:
+
+    <xpath logNamespaces="true" resultType="String">/foo:person/@id</xpath>
+
+The result of the auditing will appear at the INFO level under the
+`org.apache.camel.language.xpath.XPathBuilder` logger and will look like
+the following:
+
+ 2012-01-16 13:23:45,878 [stSaxonWithFlag] INFO XPathBuilder - Namespaces discovered in message:
+ {xmlns:a=[http://apache.org/camel], DEFAULT=[http://apache.org/default],
+ xmlns:b=[http://apache.org/camelA, http://apache.org/camelB]}
+
+# Loading script from external resource
+
+You can externalize the script and have Apache Camel load it from a
+resource such as `"classpath:"`, `"file:"`, or `"http:"`. This is done
+using the following syntax: `"resource:scheme:location"`, e.g., to refer
+to a file on the classpath you can do:
+
+ .setHeader("myHeader").xpath("resource:classpath:myxpath.txt", String.class)
+
+# Transforming an XML message
+
+For basic XML transformation where you have a fixed structure, you can
+represent with a combination of using Camel simple and XPath language
+as:
+
+Given this XML body:
+
+    <order id="123">
+        <item>Brake</item>
+        <first>scott</first>
+        <last>jackson</last>
+        <address>
+            <co>sweden</co>
+            <zip>12345</zip>
+        </address>
+    </order>
+
+Which you want to transform to a smaller structure:
+
+
+    <user>
+        <id>123</id>
+        <country>sweden</country>
+        <fullname>scott</fullname>
+    </user>
+
+Then you can use simple as template and XPath to grab the content from
+the message payload, as shown in the route snippet below:
+
+ from("direct:start")
+ .transform().simple("""
+        <user>
+            <id>${xpath(/order/@id)}</id>
+            <country>${xpath(/order/address/co/text())}</country>
+            <fullname>${xpath(/order/first/text())}</fullname>
+        </user>
+        """)
+ .to("mock:result");
+
+Notice how we use the `${xpath(exp)}` syntax in the simple template to
+invoke XPath, which is evaluated on the message body to extract the
+content used in the output (shown above).
+
+Since the simple language can output anything, you can also use this to
+output in plain text or JSON, etc.
+
+ from("direct:start")
+ .transform().simple("The order ${xpath(/order/@id)} is being shipped to ${xpath(/order/address/co/text())}")
+ .to("mock:result");
+
+# Dependencies
+
+To use XPath in your camel routes, you need to add the dependency on
+**camel-xpath**, which implements the XPath language.
+
+If you use Maven, you could add the following to your `pom.xml`,
+substituting the version number for the latest \& greatest release.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-xpath</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-xquery-language.md b/camel-xquery-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..decb9f70c81cc667d774d0958ef5d9424158f5b2
--- /dev/null
+++ b/camel-xquery-language.md
@@ -0,0 +1,216 @@
+# Xquery-language.md
+
+**Since Camel 1.0**
+
+Camel supports [XQuery](http://www.w3.org/TR/xquery/) to allow an
+[Expression](#manual::expression.adoc) or
+[Predicate](#manual::predicate.adoc) to be used in the
+[DSL](#manual::dsl.adoc).
+
+For example, you could use XQuery to create a predicate in a [Message
+Filter](#eips:filter-eip.adoc) or as an expression for a [Recipient
+List](#eips:recipientList-eip.adoc).
+
+# XQuery Language options
+
+# Variables
+
+The message body will be set as the `contextItem`, and the following
+variables are available as well:
+
+| Variable | Type | Description |
+|----|----|----|
+| `exchange` | `Exchange` | The current Exchange |
+| `in.body` | `Object` | The message body |
+| `out.body` | `Object` | *Deprecated*: the OUT message body (if any) |
+| `in.headers.*` | `Object` | You can access the value of `exchange.in.headers` with key `foo` by using the variable named `in.headers.foo` |
+| `out.headers.*` | `Object` | *Deprecated*: you can access the value of `exchange.out.headers` with key `foo` by using the variable named `out.headers.foo` |
+| *key name* | `Object` | Any `exchange.properties` and `exchange.in.headers`, and any additional parameters set using `setParameters(Map)`. These parameters are added with their own key name; for instance, an IN header with key `foo` is added as `foo`. |
+
+
+
+# Example
+
+ from("queue:foo")
+ .filter().xquery("//foo")
+ .to("queue:bar")
+
+You can also use functions inside your query, in which case you need an
+explicit type conversion, or you will get an
+`org.w3c.dom.DOMException: HIERARCHY_REQUEST_ERR`. You need to pass in
+the expected output type of the function. For example, the `concat`
+function returns a `String`, which is done as shown:
+
+ from("direct:start")
+ .recipientList().xquery("concat('mock:foo.', /person/@city)", String.class);
+
+And in XML DSL:
+
+    <route>
+        <from uri="direct:start"/>
+        <recipientList>
+            <xquery>concat('mock:foo.', /person/@city)</xquery>
+        </recipientList>
+    </route>
+
+## Using namespaces
+
+If you have a standard set of namespaces you wish to work with and wish
+to share them across many XQuery expressions, you can use the
+`org.apache.camel.support.builder.Namespaces` when using Java DSL as
+shown:
+
+ Namespaces ns = new Namespaces("c", "http://acme.com/cheese");
+
+ from("direct:start")
+ .filter().xquery("/c:person[@name='James']", ns)
+ .to("mock:result");
+
+Notice how the namespaces are provided to `xquery` via the `ns`
+variable that is passed in as the second parameter.
+
+Each namespace is a key=value pair, where the prefix is the key. In the
+XQuery expression then the namespace is used by its prefix, e.g.:
+
+ /c:person[@name='James']
+
+The namespace builder supports adding multiple namespaces as shown:
+
+ Namespaces ns = new Namespaces("c", "http://acme.com/cheese")
+ .add("w", "http://acme.com/wine")
+ .add("b", "http://acme.com/beer");
+
+When using namespaces in XML DSL then it is different, as you set up the
+namespaces in the XML root tag (or one of the `camelContext`, `routes`,
+`route` tags).
+
+In the XML example below we use Spring XML where the namespace is
+declared in the root tag `beans`, in the line with
+`xmlns:foo="http://example.com/person"`:
+
+    <beans xmlns="http://www.springframework.org/schema/beans"
+           xmlns:foo="http://example.com/person">
+
+        <camelContext xmlns="http://camel.apache.org/schema/spring">
+            <route>
+                <from uri="direct:start"/>
+                <filter>
+                    <xquery>/foo:person[@name='James']</xquery>
+                    <to uri="mock:result"/>
+                </filter>
+            </route>
+        </camelContext>
+
+    </beans>
+
+This namespace uses `foo` as prefix, so the `<xquery>` expression uses
+`foo:` to use this namespace.
+
+# Using XQuery as transformation
+
+We can do a message translation using transform or setBody in the route,
+as shown below:
+
+ from("direct:start").
+ transform().xquery("/people/person");
+
+Notice that `xquery` will use `DOMResult` by default, so if we want to
+grab the value of the person node, using `text()`, we need to tell
+XQuery to use `String` as the result type, as shown:
+
+ from("direct:start").
+ transform().xquery("/people/person/text()", String.class);
+
+If you want to use Camel variables like headers, you have to explicitly
+declare them in the XQuery expression.
+
+    <transform>
+        <xquery>
+            declare variable $in.headers.foo external;
+            element item {$in.headers.foo}
+        </xquery>
+    </transform>
+
+# Loading script from external resource
+
+You can externalize the script and have Apache Camel load it from a
+resource such as `"classpath:"`, `"file:"`, or `"http:"`. This is done
+using the following syntax: `"resource:scheme:location"`, e.g., to refer
+to a file on the classpath you can do:
+
+ .setHeader("myHeader").xquery("resource:classpath:myxquery.txt", String.class)
+
+# Learning XQuery
+
+XQuery is a very powerful language for querying, searching, sorting and
+returning XML. For help learning XQuery, try these tutorials
+
+- Mike Kay’s [XQuery
+ Primer](http://www.stylusstudio.com/xquery_primer.html)
+
+- The W3Schools [XQuery
+ Tutorial](http://www.w3schools.com/xml/xquery_intro.asp)
+
+# Dependencies
+
+To use XQuery in your Camel routes, you need to add the dependency on
+**camel-saxon**, which implements the XQuery language.
+
+If you use Maven you could add the following to your `pom.xml`,
+substituting the version number for the latest \& greatest release.
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-saxon</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-xquery.md b/camel-xquery.md
index 66331c4cb4e01fd8079fda338096fc65c8abfed8..e82302bed2b0cedddc05596c24e637056ea1790a 100644
--- a/camel-xquery.md
+++ b/camel-xquery.md
@@ -21,7 +21,9 @@ as a second argument to the **xquery()** method.
from("direct:start")
.recipientList().xquery("concat('mock:foo.', /person/@city)", String.class);
-# Variables
+# Usage
+
+## Variables
The IN message body will be set as the `contextItem`. Besides this,
these Variables are also added as parameters:
@@ -33,45 +35,45 @@ these Variables are also added as parameters:
-
+
-
-exchange
+
+exchange
Exchange
The current Exchange
-
-in.body
+
+in.body
Object
The In message’s body
-
-out.body
+
+out.body
Object
The OUT message’s body (if
any)
-
-in.headers.*
+
+in.headers.*
Object
You can access the value of
exchange.in.headers with key foo by using the variable
which name is in.headers.foo
-
-out.headers.*
+
+out.headers.*
Object
You can access the value of
exchange.out.headers with key foo by using the variable
which name is out.headers.foo variable
-
-key name
+
+*key name*
Object
Any exchange.properties and
exchange.in.headers and any additional parameters set using
@@ -82,7 +84,7 @@ own key name, for instance, if there is an IN header with the key name
-# Using XML configuration
+## Using XML configuration
If you prefer to configure your routes in your Spring XML file, then you
can use XPath expressions as follows
@@ -114,7 +116,7 @@ attribute:
concat('mock:foo.', /person/@city)
-# Using XQuery as an endpoint
+## Using XQuery as an endpoint
Sometimes an XQuery expression can be quite large; it can essentially be
used for Templating. So you may want to use an XQuery Endpoint, so you
@@ -131,7 +133,7 @@ The following example shows how to take a message of an ActiveMQ queue
-# Loading script from external resource
+## Loading script from external resource
You can externalize the script and have Apache Camel load it from a
resource such as `"classpath:"`, `"file:"`, or `"http:"`. This is done
@@ -140,7 +142,7 @@ to a file on the classpath you can do:
.setHeader("myHeader").xquery("resource:classpath:myxquery.txt", String.class)
-# Learning XQuery
+## Learning XQuery
XQuery is a very powerful language for querying, searching, sorting and
returning XML. For help learning XQuery, try these tutorials
@@ -157,7 +159,7 @@ To use XQuery in your Camel routes, you need to add the dependency on
**camel-saxon**, which implements the XQuery language.
If you use Maven, you could add the following to your `pom.xml`,
-substituting the version number for the latest \& greatest release.
+substituting the version number for the latest release.
org.apache.camel
diff --git a/camel-xslt-saxon.md b/camel-xslt-saxon.md
index dbe46948a9f13cef4fc4a037e151ddb46875da52..ed536e1d5c26f1ecfbe18658c3226cf2af609758 100644
--- a/camel-xslt-saxon.md
+++ b/camel-xslt-saxon.md
@@ -30,26 +30,27 @@ You can append query options to the URI in the following format:
-
+
-
+
xslt-saxon:com/acme/mytransform.xsl
+style="text-align: left;">xslt-saxon:com/acme/mytransform.xsl
Refers to the file
-com/acme/mytransform.xsl on the classpath
+com/acme/mytransform.xsl on the classpath
-
-xslt-saxon:file:///foo/bar.xsl
+
+xslt-saxon:file:///foo/bar.xsl
Refers to the file
-/foo/bar.xsl
+/foo/bar.xsl
-
+
xslt-saxon:http://acme.com/cheese/foo.xsl
+style="text-align: left;">xslt-saxon:http://acme.com/cheese/foo.xsl
Refers to the remote http
resource
@@ -58,7 +59,9 @@ resource
Example URIs
-# Using XSLT endpoints
+# Usage
+
+## Using XSLT endpoints
The following format is an example of using an XSLT template to
formulate a response for a message for InOut message exchanges (where
@@ -74,7 +77,7 @@ destination, you could use the following route:
to("xslt-saxon:com/acme/mytransform.xsl").
to("activemq:Another.Queue");
-# Getting Usable Parameters into the XSLT
+## Getting Usable Parameters into the XSLT
By default, all headers are added as parameters which are then available
in the XSLT.
@@ -92,7 +95,7 @@ it to be available:
-# Spring XML versions
+## Spring XML versions
To use the above examples in Spring XML, you would use something like
the following code:
@@ -105,7 +108,7 @@ the following code:
-# Using xsl:include
+## Using `xsl:include`
Camel provides its own implementation of `URIResolver`. This allows
Camel to load included files from the classpath.
@@ -124,12 +127,12 @@ the prefix from the endpoint configuration. If no prefix is specified in
the endpoint configuration, the default is `classpath:`.
You can also refer backwards in the included paths. In the following
-example, the xsl file will be resolved under
+example, the XSL file will be resolved under
`org/apache/camel/component`.
-# Using xsl:include and default prefix
+## Using `xsl:include` and default prefix
Camel will use the prefix from the endpoint configuration as the default
prefix.
@@ -137,7 +140,7 @@ prefix.
You can explicitly specify `file:` or `classpath:` loading. The two
loading types can be mixed in an XSLT script, if necessary.
-# Using Saxon extension functions
+## Using Saxon extension functions
Since Saxon 9.2, writing extension functions has been supplemented by a
new mechanism, referred to as [extension
@@ -169,12 +172,12 @@ With Spring XML:
-# Dynamic stylesheets
+## Dynamic stylesheets
To provide a dynamic stylesheet at runtime, you can either:
- Define a dynamic URI. See [How to use a dynamic URI in
- to()](#manual:faq:how-to-use-a-dynamic-uri-in-to.adoc) for more
+ `to()`](#manual:faq:how-to-use-a-dynamic-uri-in-to.adoc) for more
information.
- Use header with the stylesheet.
@@ -205,7 +208,7 @@ as this will tell Camel to not load `dummy.xsl` on startup but to load
the stylesheet on demand. And because you provide the stylesheet via
headers, then it is fully dynamic.
-# Accessing warnings, errors and fatalErrors from XSLT ErrorListener
+## Accessing warnings, errors and fatalErrors from XSLT ErrorListener
Any warning/error or fatalError is stored on the current Exchange as a
property with the keys `Exchange.XSLT_ERROR`,
@@ -214,7 +217,7 @@ users to get hold of any errors happening during transformation.
For example, in the stylesheet below, we want to determinate whether a
staff has an empty dob field. And to include a custom error message
-using xsl:message.
+using `xsl:message`.
diff --git a/camel-xslt.md b/camel-xslt.md
index 35709f7629cc31e3b23eb5bdb8999961da207651..f376f83118985f87a2de2681d68976d5aacfcfb4 100644
--- a/camel-xslt.md
+++ b/camel-xslt.md
@@ -30,23 +30,23 @@ You can append query options to the URI in the following format:
-
+
-
+
xslt:com/acme/mytransform.xsl
Refers to the file
com/acme/mytransform.xsl on the classpath
-
+
xslt:file:///foo/bar.xsl
Refers to the file
/foo/bar.xsl
-
+
xslt:http://acme.com/cheese/foo.xsl
Refers to the remote http
@@ -125,12 +125,12 @@ the prefix from the endpoint configuration. If no prefix is specified in
the endpoint configuration, the default is `classpath:`.
You can also refer backwards in the included paths. In the following
-example, the xsl file will be resolved under
+example, the XSL file will be resolved under
`org/apache/camel/component`.
-# Using xsl:include and default prefix
+# Using `xsl:include` and default prefix
Camel will use the prefix from the endpoint configuration as the default
prefix.
diff --git a/camel-xtokenize-language.md b/camel-xtokenize-language.md
new file mode 100644
index 0000000000000000000000000000000000000000..25f0890bd2cb8c6994b40b704c7f1c7b34a610af
--- /dev/null
+++ b/camel-xtokenize-language.md
@@ -0,0 +1,21 @@
+# Xtokenize-language.md
+
+**Since Camel 2.14**
+
+The XML Tokenize language is a built-in language in `camel-stax`. It is
+a truly XML-aware tokenizer that can be used with the
+[Split](#eips:split-eip.adoc) EIP, like the conventional
+[Tokenize](#tokenize-language.adoc) language, to efficiently and
+effectively tokenize XML documents.
+
+XML Tokenize is capable of not only recognizing XML namespaces and
+hierarchical structures of the document but also more efficiently
+tokenizing XML documents than the conventional
+[Tokenize](#tokenize-language.adoc) language.
+
+# XML Tokenizer Options
+
+# Example
+
+See [Split EIP](#eips:split-eip.adoc), which has examples using the XML
+Tokenize language.
diff --git a/camel-yaml-dsl.md b/camel-yaml-dsl.md
new file mode 100644
index 0000000000000000000000000000000000000000..c8c845eed16144fe7ff499f0dd06d3b435d3250f
--- /dev/null
+++ b/camel-yaml-dsl.md
@@ -0,0 +1,365 @@
+# Yaml-dsl.md
+
+**Since Camel 3.9**
+
+The YAML DSL provides the capability to define your Camel routes, route
+templates \& REST DSL configuration in YAML.
+
+# Defining a route
+
+A route is a collection of elements defined as follows:
+
+ - from: #
+ uri: "direct:start"
+ steps: #
+ - filter:
+ expression:
+ simple: "${in.header.continue} == true"
+ steps: #
+ - to:
+ uri: "log:filtered"
+ - to:
+ uri: "log:original"
+
+- route entry point, by default `from` and `rest` are supported
+
+- processing steps
+
+Each step is represented by a YAML map that has a single entry, where
+the field name is the EIP name.
+
+As a general rule, each step provides all the parameters the related
+definition declares, but there are some minor differences/enhancements:
+
+- **Output Aware Steps**
+
+ Some steps such as `filter` and `split` have their own pipeline.
+ When an exchange matches the filter expression or for the items
+ generated by the split expression, such a pipeline can be defined by
+ the `steps` field:
+
+ filter:
+ expression:
+ simple: "${in.header.continue} == true"
+ steps:
+ - to:
+ uri: "log:filtered"
+
+- **Expression Aware Steps**
+
+ Some EIPs such as `filter` and `split` support the definition of an
+ expression through the `expression` field:
+
+ **Explicit Expression field**
+
+ filter:
+ expression:
+ simple: "${in.header.continue} == true"
+
+ To make the DSL less verbose, the `expression` field can be omitted:
+
+ **Implicit Expression field**
+
+ filter:
+ simple: "${in.header.continue} == true"
+
+ In general, `expression` can be defined inline like in the examples
+ above. But in case you need to provide more information, you can
+ *unroll* the expression definition and configure any single
+ parameter the expression defines.
+
+ **Full Expression definition**
+
+ filter:
+ tokenize:
+ token: "<"
+ endToken: ">"
+
+- **Data Format Aware Steps**
+
+ The EIP `marshal` and `unmarshal` supports the definition of data
+ formats:
+
+ marshal:
+ json:
+ library: Gson
+
+ In case you want to use the data-format’s default settings, you need
+ to place an empty block as data format parameters, like `json: {}`
+
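For instance, marshalling with the JSON data format's default settings would be written with an empty parameter block:

```yaml
marshal:
  json: {}
```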
+# Defining endpoints
+
+To define an endpoint with the YAML dsl you have two options:
+
+1. Using a classic Camel URI:
+
+ - from:
+ uri: "timer:tick?period=1s"
+ steps:
+ - to:
+ uri: "telegram:bots?authorizationToken=XXX"
+
+2. Using URI and parameters:
+
+ - from:
+ uri: "timer://tick"
+ parameters:
+ period: "1s"
+ steps:
+ - to:
+ uri: "telegram:bots"
+ parameters:
+ authorizationToken: "XXX"
+
+# Defining beans
+
+In addition to the general support for creating beans provided by [Camel
+Main](#others:main.adoc#_specifying_custom_beans), the YAML DSL provides
+a convenient syntax to define and configure them:
+
+ - beans:
+ - name: beanFromMap #
+ type: com.acme.MyBean #
+ properties: #
+ foo: bar
+
+- the name of the bean, which is used to bind the instance to the
+  Camel Registry
+
+- the fully qualified class name of the bean
+
+- the properties of the bean to be set
+
+The properties of the bean can be defined using either a map or
+properties style, as shown in the example below:
+
+ - beans:
+ # map style
+ - name: beanFromMap
+ type: com.acme.MyBean
+ properties:
+ field1: 'f1'
+ field2: 'f2'
+ nested:
+ field1: 'nf1'
+ field2: 'nf2'
+ # properties style
+ - name: beanFromProps
+ type: com.acme.MyBean
+ properties:
+ field1: 'f1_p'
+ field2: 'f2_p'
+ nested.field1: 'nf1_p'
+ nested.field2: 'nf2_p'
+
+The `beans` element can only be used as a root element.
+
+## Creating bean using constructors
+
+When beans must be created with constructor arguments, this is made
+easier from Camel 4.1 onwards.
+
+For example as shown below:
+
+ - beans:
+ - name: myBean
+ type: com.acme.MyBean
+ constructors:
+ 0: true
+ 1: "Hello World"
+
+The `constructors` map is index-based, so the keys must be numbers
+starting from zero.
+
+You can use both constructors and properties.
+
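The class body of `com.acme.MyBean` is not shown on this page; as an assumed sketch, the indexed values above would map onto a two-argument constructor like this:

```java
// Assumed sketch of com.acme.MyBean: the constructor arguments line up with
// the indexed "constructors" keys (0 -> boolean, 1 -> String).
public class MyBean {
    private final boolean important;
    private final String message;

    public MyBean(boolean important, String message) {
        this.important = important;
        this.message = message;
    }

    public boolean isImportant() { return important; }
    public String getMessage() { return message; }
}
```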
+## Creating beans from factory method
+
+A bean can also be created from a factory method (public static) as
+shown below:
+
+ - beans:
+ - name: myBean
+ type: com.acme.MyBean
+ factoryMethod: createMyBean
+ constructors:
+ 0: true
+ 1: "Hello World"
+
+When using `factoryMethod`, the arguments to the method are taken
+from `constructors`. So in the example above, this means that class
+`com.acme.MyBean` should be as follows:
+
+ public class MyBean {
+
+ public static MyBean createMyBean(boolean important, String message) {
+ MyBean answer = ...
+ // create and configure the bean
+ return answer;
+ }
+ }
+
+The factory method must be `public static` and from the same class as
+the created class itself.
+
+## Creating beans from factory bean
+
+A bean can also be created from a factory bean as shown below:
+
+ - beans:
+ - name: myBean
+ type: com.acme.MyBean
+ factoryBean: com.acme.MyHelper
+ factoryMethod: createMyBean
+ constructors:
+ 0: true
+ 1: "Hello World"
+
+`factoryBean` can also refer to an existing bean by bean id instead of
+FQN classname.
+
+When using `factoryBean` and `factoryMethod`, the arguments to the
+method are taken from `constructors`. So in the example above, this
+means that class `com.acme.MyHelper` should be as follows:
+
+ public class MyHelper {
+
+ public static MyBean createMyBean(boolean important, String message) {
+ MyBean answer = ...
+ // create and configure the bean
+ return answer;
+ }
+ }
+
+The factory method must be `public static`.
+
+## Creating beans from builder classes
+
+A bean can also be created from another builder class as shown below:
+
+ - beans:
+ - name: myBean
+ type: com.acme.MyBean
+ builderClass: com.acme.MyBeanBuilder
+ builderMethod: createMyBean
+ properties:
+ id: 123
+ name: 'Acme'
+
+The builder class must be `public` and have a no-arg default
+constructor.
+
+The builder class is then used to create the actual bean by using fluent
+builder style configuration. So the properties will be set on the
+builder class, and the bean is created by invoking the `builderMethod`
+at the end. The invocation of this method is done via Java reflection.
+
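As a sketch (the setter and class names below are assumptions mirroring the YAML above, not a definitive API), the builder could look like this; the properties are set on the builder instance and the configured `builderMethod` then produces the bean:

```java
// Assumed sketch: a public no-arg builder whose setters receive the YAML
// "properties" and whose createMyBean() method produces the actual bean.
public class MyBeanBuilder {
    private int id;
    private String name;

    public MyBeanBuilder() { }          // required no-arg default constructor

    public void setId(int id) { this.id = id; }
    public void setName(String name) { this.name = name; }

    // invoked via reflection as the configured builderMethod
    public MyBean createMyBean() {
        return new MyBean(id, name);
    }

    // minimal stand-in for com.acme.MyBean so the sketch is self-contained
    public static class MyBean {
        final int id;
        final String name;
        MyBean(int id, String name) { this.id = id; this.name = name; }
    }
}
```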
+## Creating beans using script language
+
+For advanced use-cases, Camel allows you to inline a script language,
+such as Groovy, Java, JavaScript, etc., to create the bean. This gives
+you the flexibility to use a bit of programming to create and configure
+the bean.
+
+ - beans:
+ - name: myBean
+ type: com.acme.MyBean
+ scriptLanguage: groovy
+ script: >
+ // some groovy script here to create the bean
+ bean = ...
+ ...
+ return bean
+
+When using `script`, the constructors and factory bean/method are not
+used.
+
+## Using init and destroy methods on beans
+
+Sometimes beans need to do some initialization and cleanup work before a
+bean is ready to be used. For this you can use `initMethod` and
+`destroyMethod` that Camel triggers accordingly.
+
+Those methods must be public void and have no arguments, as shown below:
+
+ public class MyBean {
+
+ public void initMe() {
+ // do init work here
+ }
+
+ public void destroyMe() {
+ // do cleanup work here
+ }
+
+ }
+
+You then have to declare those methods in YAML DSL as follows:
+
+ - beans:
+ - name: myBean
+ type: com.acme.MyBean
+ initMethod: initMe
+ destroyMethod: destroyMe
+ constructors:
+ 0: true
+ 1: "Hello World"
+
+The init and destroy methods are optional, so a bean does not have to
+have both; for example, you may only have a destroy method.
+
+# Configuring options on languages
+
+Some [Languages](#components:languages:index.adoc) have additional
+configurations you may need to use.
+
+For example, the
+[JSONPath](#components:languages:jsonpath-language.adoc) can be
+configured to ignore JSon parsing errors. This is useful when you use
+a [Content Based Router](#components:eips:choice-eip.adoc) and want to
+route the message to different endpoints, but the JSon payload of the
+message can be in different forms, meaning that the JSonPath expressions
+in some cases would fail with an exception and other times not. In this
+situation, you need to set `suppress-exception` to true, as shown below:
+
+ - from:
+ uri: "direct:start"
+ steps:
+ - choice:
+ when:
+ - jsonpath:
+ expression: "person.middlename"
+ suppressExceptions: true
+ steps:
+ - to: "mock:middle"
+ - jsonpath:
+ expression: "person.lastname"
+ suppressExceptions: true
+ steps:
+ - to: "mock:last"
+ otherwise:
+ steps:
+ - to: "mock:other"
+
+In the route above, the following message
+
+ {
+ "person": {
+ "firstname": "John",
+ "lastname": "Doe"
+ }
+ }
+
+would have failed the JSonPath expression `person.middlename` because
+the JSon payload does not have a `middlename` field. To remedy this, we
+have suppressed the exception.
+
+# External examples
+
+You can find a set of examples using `main-yaml` in [Camel
+Examples](https://github.com/apache/camel-examples) which demonstrate
+creating Camel Routes with YAML.
+
+Another way to find examples of YAML DSL is to look in [Camel
+Kamelets](https://github.com/apache/camel-kamelets) where each Kamelet
+is defined using YAML.
diff --git a/camel-zeebe.md b/camel-zeebe.md
index 66ca345bf1eb8fbeb4210b3e8918fe319f4aaa0c..5d714180e92c7d99ba9dfc14aa328c71fdc36ccb 100644
--- a/camel-zeebe.md
+++ b/camel-zeebe.md
@@ -19,7 +19,9 @@ available at [Camunda Zeebe](https://camunda.com/platform/zeebe/).
zeebe://[endpoint]?[options]
-# Producer Endpoints:
+# Usage
+
+## Producer Endpoints
@@ -27,47 +29,47 @@ available at [Camunda Zeebe](https://camunda.com/platform/zeebe/).
-
+
-
-startProcess
+
+startProcess
Creates and starts an instance of the
specified process.
-
-cancelProcess
+
+cancelProcess
Cancels a running process
instance.
-
-publishMessage
+
+publishMessage
Publishes a message.
-
-completeJob
+
+completeJob
Completes a job for a service
task.
-
-failJob
+
+failJob
Fails a job.
-
-updateJobRetries
+
+updateJobRetries
Updates the number of retries for a
job.
-
-throwError
+
+throwError
Throw an error to indicate that a
business error has occurred.
-
-deployResource
+
+deployResource
Deploy a process resource. Currently
only supports process definitions.
@@ -276,7 +278,7 @@ becomes process\_id.
}
});
-# Consumer Endpoints:
+## Consumer Endpoints:
@@ -284,13 +286,13 @@ becomes process\_id.
-
+
-
+
worker
Registers a job worker for a job type
and provides messages for available jobs.
diff --git a/camel-zipDeflater-dataformat.md b/camel-zipDeflater-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..392409ffdf1efee0bf16b7f089a6bfe37b5a77e6
--- /dev/null
+++ b/camel-zipDeflater-dataformat.md
@@ -0,0 +1,52 @@
+# ZipDeflater-dataformat.md
+
+**Since Camel 2.12**
+
+The Zip Deflater Data Format is a message compression and decompression
+format. Messages marshaled using Zip compression can be unmarshalled
+using Zip decompression just prior to being consumed at the endpoint.
+The compression capability is quite useful when you deal with large XML
+and text-based payloads. It facilitates more optimal use of network
+bandwidth while incurring a small cost to compress and decompress
+payloads at the endpoint.
+
+This data format is not for working with Zip files, such as
+uncompressing and building Zip files. Instead, use the
+[zipfile](#dataformats:zipFile-dataformat.adoc) data format.
+
+# Options
+
+# Marshal
+
+In this example, we marshal a regular text/XML payload to a compressed
+payload employing zip compression `Deflater.BEST_COMPRESSION`, and send
+it to an ActiveMQ queue called MY\_QUEUE.
+
+ from("direct:start").marshal().zipDeflater(Deflater.BEST_COMPRESSION).to("activemq:queue:MY_QUEUE");
+
+Alternatively, if you would like to use the default setting, you could
+send it as
+
+ from("direct:start").marshal().zipDeflater().to("activemq:queue:MY_QUEUE");
+
+# Unmarshal
+
+In this example, we unmarshal a zipped payload from an ActiveMQ queue
+called MY\_QUEUE to its original format, and forward it for processing
+to the `UnZippedMessageProcessor`. Note that the compression level
+employed during marshaling should be identical to the one employed
+during unmarshalling to avoid errors.
+
+ from("activemq:queue:MY_QUEUE").unmarshal().zipDeflater().process(new UnZippedMessageProcessor());
+
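The compress/decompress round trip that this data format performs can be sketched with plain `java.util.zip` (this is an illustrative sketch, not the component's actual implementation):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflaterRoundTrip {

    // compress, as marshal().zipDeflater(Deflater.BEST_COMPRESSION) would
    static byte[] compress(byte[] payload) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(payload);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // decompress, as unmarshal().zipDeflater() would
    static byte[] decompress(byte[] compressed) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = "<order id='123'>a large XML payload</order>".getBytes("UTF-8");
        byte[] restored = decompress(compress(payload));
        System.out.println(new String(restored, "UTF-8"));
    }
}
```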
+# Dependencies
+
+If you use Maven you could add the following to your `pom.xml`,
+substituting the version number for the latest and greatest release (see
+the download page for the latest versions).
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-zip-deflater</artifactId>
+        <version>x.x.x</version>
+    </dependency>
diff --git a/camel-zipFile-dataformat.md b/camel-zipFile-dataformat.md
new file mode 100644
index 0000000000000000000000000000000000000000..18753fb78bacce9ab7bd3eb56386e1089f1f84ba
--- /dev/null
+++ b/camel-zipFile-dataformat.md
@@ -0,0 +1,123 @@
+# ZipFile-dataformat.md
+
+**Since Camel 2.11**
+
+The Zip File Data Format is a message compression and decompression
+format. Messages can be marshalled (compressed) to Zip files containing
+a single entry, and Zip files containing a single entry can be
+unmarshalled (decompressed) to the original file contents. This data
+format supports ZIP64 (available since Java 7).
+
+# ZipFile Options
+
+# Marshal
+
+In this example, we marshal a regular text/XML payload to a compressed
+payload using Zip file compression, and send it to an ActiveMQ queue
+called MY\_QUEUE.
+
+ from("direct:start")
+ .marshal().zipFile()
+ .to("activemq:queue:MY_QUEUE");
+
+The name of the Zip entry inside the created Zip file is based on the
+incoming `CamelFileName` message header, which is the standard message
+header used by the file component. Additionally, the outgoing
+`CamelFileName` message header is automatically set to the value of the
+incoming `CamelFileName` message header, with the ".zip" suffix. So, for
+example, if the following route finds a file named "test.txt" in the
+input directory, the output will be a Zip file named "test.txt.zip"
+containing a single Zip entry named "test.txt":
+
+    from("file:input/directory?antInclude=**/*.txt")
+ .marshal().zipFile()
+ .to("file:output/directory");
+
+If there is no incoming `CamelFileName` message header (for example, if
+the file component is not the consumer), then the message ID is used by
+default. Since the message ID is normally a uniquely generated ID, you
+will end up with filenames like
+`ID-MACHINENAME-2443-1211718892437-1-0.zip`. If you want to override
+this behavior, you can set the value of the `CamelFileName` header
+explicitly in your route:
+
+ from("direct:start")
+ .setHeader(Exchange.FILE_NAME, constant("report.txt"))
+ .marshal().zipFile()
+ .to("file:output/directory");
+
+This route would result in a Zip file named "report.txt.zip" in the
+output directory, containing a single Zip entry named "report.txt".
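The single-entry structure described above can be illustrated with plain JDK code (independent of Camel; the class and method names are illustrative): marshalling produces a Zip archive that holds exactly one entry, named after the `CamelFileName` header.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Illustrative helper, not part of Camel: build a Zip archive containing
// a single entry, mirroring what marshal().zipFile() produces.
class SingleEntryZip {

    static byte[] zip(String entryName, byte[] body) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(out)) {
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write(body);
            zos.closeEntry();
        }
        return out.toByteArray();
    }
}
```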
+
+# Unmarshal
+
+In this example, we unmarshal a Zip file payload from an ActiveMQ queue
+called MY\_QUEUE to its original format, and forward it for processing
+to the `UnZippedMessageProcessor`.
+
+ from("activemq:queue:MY_QUEUE")
+ .unmarshal().zipFile()
+ .process(new UnZippedMessageProcessor());
+
+If the Zip file has more than one entry, set the `usingIterator` option
+of `ZipFileDataFormat` to `true`, and you can then use the splitter to
+process each entry:
+
+ ZipFileDataFormat zipFile = new ZipFileDataFormat();
+ zipFile.setUsingIterator(true);
+
+ from("file:src/test/resources/org/apache/camel/dataformat/zipfile/?delay=1000&noop=true")
+ .unmarshal(zipFile)
+ .split(bodyAs(Iterator.class)).streaming()
+ .process(new UnZippedMessageProcessor())
+ .end();
+
+Alternatively, you can use the `ZipSplitter` as an expression for the
+splitter directly:
+
+ from("file:src/test/resources/org/apache/camel/dataformat/zipfile?delay=1000&noop=true")
+ .split(new ZipSplitter()).streaming()
+ .process(new UnZippedMessageProcessor())
+ .end();
+
+You **cannot** use the `ZipSplitter` with the splitter in *parallel* mode.
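What the splitter iterates over in streaming mode can be sketched with plain JDK code (independent of Camel; the class and method names are illustrative): each entry of the archive is visited in turn, and each one becomes a separate message body.

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Illustrative helper, not part of Camel: walk every entry of a Zip
// archive, as the splitter does when usingIterator is enabled.
class ZipEntryIteration {

    static List<String> entryNames(byte[] zipBytes) throws Exception {
        List<String> names = new ArrayList<>();
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                names.add(entry.getName());
            }
        }
        return names;
    }
}
```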
+
+# Aggregate
+
+Please note that this aggregation strategy requires an eager completion
+check to work properly.
+
+In this example, we aggregate all text files found in the input
+directory into a single Zip file that is stored in the output directory.
+
+    from("file:input/directory?antInclude=**/*.txt")
+ .aggregate(constant(true), new ZipAggregationStrategy())
+ .completionFromBatchConsumer().eagerCheckCompletion()
+ .to("file:output/directory");
+
+The outgoing `CamelFileName` message header is created using
+`java.io.File.createTempFile`, with the ".zip" suffix. If you want to
+override this behavior, you can set the value of the
+`CamelFileName` header explicitly in your route:
+
+    from("file:input/directory?antInclude=**/*.txt")
+ .aggregate(constant(true), new ZipAggregationStrategy())
+ .completionFromBatchConsumer().eagerCheckCompletion()
+ .setHeader(Exchange.FILE_NAME, constant("reports.zip"))
+ .to("file:output/directory");
+
+# Dependencies
+
+To use Zip files in your Camel routes, you need to add a dependency on
+**camel-zipfile**, which implements this data format.
+
+If you use Maven, you can add the following to your `pom.xml`,
+substituting the version number for the latest release (see the
+download page for the latest versions).
+
+    <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-zipfile</artifactId>
+        <version>x.x.x</version>
+    </dependency>
+
diff --git a/camel-zookeeper-master.md b/camel-zookeeper-master.md
index 6cfccd747aa025bfef59744aed53342a145e17c9..6dd606403b5c48e00fddab37e7d8d8efd3b04b74 100644
--- a/camel-zookeeper-master.md
+++ b/camel-zookeeper-master.md
@@ -33,7 +33,7 @@ doesn’t support exclusive consumers.
Where endpoint is any Camel endpoint, you want to run in master/slave
mode.
-# Example
+# Examples
You can protect a clustered Camel application to only consume files from
one active node.
@@ -60,7 +60,7 @@ environment variables.
export ZOOKEEPER_URL = "myzookeeper:2181"
-# Master RoutePolicy
+## Master RoutePolicy
You can also use a `RoutePolicy` to control routes in master/slave mode.
diff --git a/camel-zookeeper.md b/camel-zookeeper.md
index 26eb2d148021e9837a109ab0502af07c901633c1..226ff362f5fa5fd95f979bac9d9c9d67bb339c4d 100644
--- a/camel-zookeeper.md
+++ b/camel-zookeeper.md
@@ -33,7 +33,7 @@ for this component:
The path from the URI specifies the node in the ZooKeeper server (a.k.a.
*znode*) that will be the target of the endpoint:
-# Use cases
+# Usage
## Reading from a *znode*
@@ -103,7 +103,7 @@ or equivalently:
-ZooKeeper nodes can have different types; they can be *Ephemeral* or
+ZooKeeper’s nodes can have different types; they can be *Ephemeral* or
*Persistent* and *Sequenced* or *Unsequenced*. For further information
of each type, you can check
[here](http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#Ephemeral+Nodes).