Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes. See raw diff.
- camel-aggregate-eip.md +673 -0
- camel-ai-summary.md +10 -0
- camel-asn1-dataformat.md +83 -0
- camel-atmosphere-websocket.md +1 -4
- camel-atom.md +3 -3
- camel-attachments.md +22 -0
- camel-avro-dataformat.md +69 -0
- camel-avro.md +4 -2
- camel-avroJackson-dataformat.md +54 -0
- camel-aws-bedrock.md +3 -1
- camel-aws-cloudtrail.md +14 -12
- camel-aws-secrets-manager.md +10 -3
- camel-aws-summary.md +13 -0
- camel-aws-xray.md +175 -0
- camel-aws2-athena.md +4 -4
- camel-aws2-ddb.md +1 -1
- camel-aws2-ddbstream.md +2 -2
- camel-aws2-ec2.md +2 -2
- camel-aws2-ecs.md +1 -1
- camel-aws2-eks.md +4 -2
- camel-aws2-eventbridge.md +3 -1
- camel-aws2-iam.md +4 -2
- camel-aws2-kinesis-firehose.md +0 -2
- camel-aws2-kinesis.md +6 -6
- camel-aws2-kms.md +4 -2
- camel-aws2-mq.md +1 -1
- camel-aws2-msk.md +1 -1
- camel-aws2-redshift-data.md +4 -2
- camel-aws2-s3.md +97 -18
- camel-aws2-sns.md +8 -8
- camel-aws2-sqs.md +18 -12
- camel-aws2-step-functions.md +4 -2
- camel-aws2-sts.md +4 -2
- camel-aws2-timestream.md +4 -2
- camel-aws2-translate.md +4 -2
- camel-azure-cosmosdb.md +77 -73
- camel-azure-eventhubs.md +94 -114
- camel-azure-files.md +21 -19
- camel-azure-key-vault.md +100 -3
- camel-azure-schema-registry.md +6 -0
- camel-azure-servicebus.md +10 -10
- camel-azure-storage-blob.md +27 -25
- camel-azure-storage-datalake.md +19 -17
- camel-azure-storage-queue.md +14 -12
- camel-azure-summary.md +11 -0
- camel-barcode-dataformat.md +157 -0
- camel-base64-dataformat.md +76 -0
- camel-batchConfig-eip.md +5 -0
- camel-bean-eip.md +141 -0
- camel-bean-language.md +85 -0
camel-aggregate-eip.md
ADDED
@@ -0,0 +1,673 @@
# Aggregate EIP

The [Aggregator](http://www.enterpriseintegrationpatterns.com/Aggregator.html) from the [EIP patterns](#enterprise-integration-patterns.adoc) allows you to combine a number of messages into a single message.

How do we combine the results of individual, but related, messages so that they can be processed as a whole?

<figure>
<img src="eip/Aggregator.gif" alt="image" />
</figure>

Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages.

The Aggregator is one of the most complex EIPs and has many features and configuration options.

The logic for combining messages is *correlated* into buckets based on a *correlation key*. Messages with the same correlation key are aggregated together, using an `AggregationStrategy`.

# Aggregate options

# Exchange properties

# Worker pools

The aggregate EIP always uses a worker pool to process all the outgoing messages from the aggregator. The worker pool is determined as follows:

- If a custom `ExecutorService` has been configured, then it is used as the worker pool.

- If `parallelProcessing=true`, then a *default* worker pool (10 worker threads by default) is created. The thread pool size and other settings can be configured using *thread pool profiles*.

- Otherwise, a single-threaded worker pool is created.

- To achieve synchronous aggregation, use an instance of `SynchronousExecutorService` for the `executorService` option. The aggregated output will then execute in the same thread that called the aggregator.
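The synchronous case behaves like a caller-runs executor: submitting a task does not hand it off to a worker thread. A minimal plain-Java sketch of that semantics (an illustration, not Camel's actual `SynchronousExecutorService` implementation):

```java
import java.util.concurrent.Executor;

// Sketch of a caller-thread executor: tasks run inline in the submitting
// thread, which is the behavior synchronous aggregation relies on.
public class CallerThreadExecutorDemo {

    static class CallerThreadExecutor implements Executor {
        @Override
        public void execute(Runnable task) {
            task.run(); // no hand-off to a worker thread
        }
    }

    static boolean runsInCallerThread() {
        Executor executor = new CallerThreadExecutor();
        Thread caller = Thread.currentThread();
        boolean[] sameThread = {false};
        executor.execute(() -> sameThread[0] = Thread.currentThread() == caller);
        return sameThread[0];
    }

    public static void main(String[] args) {
        System.out.println(runsInCallerThread()); // true
    }
}
```

Because the aggregated output executes in the caller's thread, any exception it throws also propagates back to the caller, which is the main practical difference from a pooled worker.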
# Aggregating

The `AggregationStrategy` is used for aggregating the old and the new exchange together into a single exchange, which becomes the next old exchange when the following message is aggregated, and so forth.

Possible implementations include performing some kind of combining or delta processing, such as adding line items together into an invoice, or just keeping the newest exchange and removing the old ones, as for state tracking or market data prices, where old values are of little use.

Notice that the aggregation strategy is a mandatory option and must be provided to the aggregator.

In the `aggregate` method, do not create a new exchange instance to return; instead, return either the old or the new exchange from the input parameters, favoring the old exchange whenever possible.

Here are a few example `AggregationStrategy` implementations that should help you create your own custom strategy:

```java
// simply combines Exchange String body values using '+' as a delimiter
class StringAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange;
        }

        String oldBody = oldExchange.getIn().getBody(String.class);
        String newBody = newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(oldBody + "+" + newBody);
        return oldExchange;
    }
}

// simply combines Exchange body values into an ArrayList<Object>
class ArrayListAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Object newBody = newExchange.getIn().getBody();
        ArrayList<Object> list = null;
        if (oldExchange == null) {
            list = new ArrayList<Object>();
            list.add(newBody);
            newExchange.getIn().setBody(list);
            return newExchange;
        } else {
            list = oldExchange.getIn().getBody(ArrayList.class);
            list.add(newBody);
            return oldExchange;
        }
    }
}
```

The `org.apache.camel.builder.AggregationStrategies` builder can be used for creating commonly used aggregation strategies without having to create a custom strategy class.
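Conceptually, the aggregator drives a strategy like the ones above as a left fold over the group: the first call sees `oldExchange == null`, and each returned exchange becomes the `old` input for the next call. A plain-Java sketch of that calling sequence, with message bodies standing in for full Exchanges (the names here are illustrative, not Camel API):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of how the aggregator invokes an AggregationStrategy: a left fold
// where the first call gets old == null and each result feeds the next call.
public class StrategyFoldDemo {

    // mirrors StringAggregationStrategy.aggregate(oldExchange, newExchange)
    static String aggregate(String oldBody, String newBody) {
        if (oldBody == null) {
            return newBody; // first message starts the group
        }
        return oldBody + "+" + newBody;
    }

    static String foldGroup(List<String> bodies) {
        String current = null; // no "old" exchange before the first message
        for (String body : bodies) {
            current = aggregate(current, body);
        }
        return current;
    }

    public static void main(String[] args) {
        System.out.println(foldGroup(Arrays.asList("A", "B", "C"))); // A+B+C
    }
}
```

This is why the `oldExchange == null` check appears in every strategy: it marks the start of a new correlation group.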
## Aggregate by grouping exchanges

In the route below, we group all the exchanges together using `GroupedExchangeAggregationStrategy`:

```java
from("direct:start")
    // aggregates all using the same expression and groups the
    // exchanges, so we get one single exchange containing all
    // the others
    .aggregate(new GroupedExchangeAggregationStrategy()).constant(true)
    // wait for 0.5 seconds to aggregate
    .completionTimeout(500L).to("mock:result");
```

As a result, we have one outgoing `Exchange` routed to the `"mock:result"` endpoint. The exchange is a holder containing all the incoming exchanges.

The output of the aggregator will then contain the exchanges grouped together in a list, as shown below:

```java
List<Exchange> grouped = exchange.getMessage().getBody(List.class);
```
## Aggregating into a List

If you want to aggregate some value `<V>` from the messages into a `List<V>`, then you can use the `org.apache.camel.processor.aggregate.AbstractListAggregationStrategy` abstract class.

The completed exchange sent out of the aggregator will contain the `List<V>` in the message body.

For example, to aggregate a `List<Integer>`, you can extend this class as shown below and implement the `getValue` method:

```java
public class MyListOfNumbersStrategy extends AbstractListAggregationStrategy<Integer> {

    @Override
    public Integer getValue(Exchange exchange) {
        // the message body contains a number, so return that as-is
        return exchange.getIn().getBody(Integer.class);
    }
}
```

The `org.apache.camel.builder.AggregationStrategies` builder can create this kind of strategy as well, without having to create a custom class. The previous example can also be built using the builder as shown:

```java
AggregationStrategy agg = AggregationStrategies.flexible(Integer.class)
    .accumulateInCollection(ArrayList.class)
    .pick(body());
```
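The essence of `AbstractListAggregationStrategy` is that it extracts one value per message via `getValue(...)` and accumulates the values in a list that travels in the aggregated message body. A plain-Java sketch of that mechanic, with bodies standing in for Exchanges (illustrative names, not the Camel class itself):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of list aggregation: one extracted value per incoming message,
// accumulated into the list that the completed exchange will carry.
public class ListAccumulationDemo {

    // stands in for MyListOfNumbersStrategy.getValue(exchange)
    static Integer getValue(String body) {
        return Integer.valueOf(body);
    }

    static List<Integer> aggregateAll(List<String> bodies) {
        List<Integer> accumulated = new ArrayList<>();
        for (String body : bodies) {
            accumulated.add(getValue(body)); // one value per incoming message
        }
        return accumulated;
    }

    public static void main(String[] args) {
        System.out.println(aggregateAll(Arrays.asList("1", "2", "3"))); // [1, 2, 3]
    }
}
```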
## Aggregating on timeout

If your aggregation strategy implements `TimeoutAwareAggregationStrategy`, then Camel will invoke the `timeout` method when the timeout occurs. Notice that the values of the index and total parameters will be -1, and the timeout parameter will be provided only if configured as a fixed value. You must **not** throw any exceptions from the `timeout` method.
## Aggregate with persistent repository

The aggregator provides a pluggable repository, for which you can implement your own `org.apache.camel.spi.AggregationRepository`.

If you need a persistent repository, then Camel provides numerous implementations, such as from the [Caffeine](#ROOT:caffeine-cache-component.adoc), [CassandraQL](#ROOT:cql-component.adoc), [EHCache](#ROOT:ehcache-component.adoc), [Infinispan](#ROOT:infinispan-component.adoc), [JCache](#ROOT:jcache-component.adoc), [LevelDB](#others:leveldb.adoc), [Redis](#others:redis.adoc), or [SQL](#ROOT:sql-component.adoc) components.
# Completion

When aggregating [Exchange](#manual::exchange.adoc)s, at some point you need to indicate that the aggregated exchanges are complete, so they can be sent out of the aggregator. Camel allows you to indicate completion in the following ways:

- *completionTimeout*: An inactivity timeout that triggers if no new exchange has been aggregated for that particular correlation key within the period.

- *completionInterval*: Once every X period, all the currently aggregated exchanges are completed.

- *completionSize*: A number indicating that the group is complete after X aggregated exchanges.

- *completionPredicate*: Runs a [Predicate](#manual::predicate.adoc) when a new exchange is aggregated, to determine whether we are complete or not. The configured aggregationStrategy can implement the Predicate interface and will be used as the completionPredicate if no completionPredicate is configured. The configured aggregationStrategy can also override the `preComplete` method and will then be used as the completionPredicate in pre-complete check mode. See further below for more details.

- *completionFromBatchConsumer*: Special option for the [Batch Consumer](#manual::batch-consumer.adoc), which allows you to complete when all the messages from the batch have been aggregated.

- *forceCompletionOnStop*: Indicates that all currently aggregated exchanges should be completed when the context is stopped.

- *AggregateController*: Allows an external source (an `AggregateController` implementation) to complete groups, or all groups. This can be done using the Java or JMX API.

All the different completions are per correlation key. You can combine them in any way you like; basically, the first one that triggers wins. So you can use a completion size together with a completion timeout. Only completionTimeout and completionInterval cannot be used at the same time.

Completion is mandatory and must be configured on the aggregation.
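The per-correlation-key behavior can be pictured as buckets keyed by the correlation key, each completing independently. A plain-Java sketch that models only the `completionSize` trigger (illustrative, not the real aggregator, which combines several triggers and a pluggable repository):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of per-key completion: messages are bucketed by correlation key, and
// a bucket is emitted as soon as it reaches completionSize.
public class CompletionSizeDemo {

    private final int completionSize;
    private final Map<String, List<String>> groups = new HashMap<>();

    CompletionSizeDemo(int completionSize) {
        this.completionSize = completionSize;
    }

    // returns the completed group, or null while the group is still open
    List<String> onMessage(String correlationKey, String body) {
        List<String> group = groups.computeIfAbsent(correlationKey, k -> new ArrayList<>());
        group.add(body);
        if (group.size() >= completionSize) {
            return groups.remove(correlationKey); // completion is per key
        }
        return null;
    }

    public static void main(String[] args) {
        CompletionSizeDemo agg = new CompletionSizeDemo(2);
        System.out.println(agg.onMessage("a", "a1")); // null: group "a" still open
        System.out.println(agg.onMessage("b", "b1")); // null: group "b" still open
        System.out.println(agg.onMessage("a", "a2")); // [a1, a2]: group "a" completed
    }
}
```

Note how the message for key "b" does not affect the completion of key "a": each correlation key tracks its own progress toward completion.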
## Pre-completion mode

There can be use-cases where you want the incoming [Exchange](#manual::exchange.adoc) to determine whether the correlation group should pre-complete, with the incoming [Exchange](#manual::exchange.adoc) then starting a new group from scratch. Pre-completion mode must be enabled by the `AggregationStrategy`, by overriding the `canPreComplete` method to return `true`.

When pre-completion is enabled, the `preComplete` method is invoked:

```java
/**
 * Determines if the aggregation should complete the current group, and start a new group, or the aggregation
 * should continue using the current group.
 *
 * @param oldExchange the oldest exchange (is <tt>null</tt> on first aggregation as we only have the new exchange)
 * @param newExchange the newest exchange (can be <tt>null</tt> if there was no data possible to acquire)
 * @return <tt>true</tt> to complete current group and start a new group, or <tt>false</tt> to keep using current
 */
boolean preComplete(Exchange oldExchange, Exchange newExchange);
```

If the `preComplete` method returns `true`, then the existing correlation group is completed without aggregating the incoming exchange (`newExchange`). The `newExchange` is then used to start the correlation group from scratch, so the group contains only that new incoming exchange. This is known as pre-completion mode.

The `newExchange` contains the following exchange properties, which can be used to determine whether to pre-complete:

| Property | Type | Description |
|----------|------|-------------|
| `CamelAggregatedSize` | `int` | The total number of messages aggregated. |
| `CamelAggregatedCorrelationKey` | `String` | The correlation identifier as a `String`. |

When the aggregation is in *pre-completion* mode, only the following completions are in use:

- *completionTimeout* or *completionInterval* can also be used as fallback completions

- any other completions are not used (such as by size, from batch consumer, etc.)

- *eagerCheckCompletion* is implied as `true`, but the option has no effect
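A plain-Java sketch of the pre-completion flow: before aggregating, the incoming message decides whether to close the current group; if so, the old group is emitted unchanged and the new message starts a fresh group. Here `preComplete` keys on a hypothetical "START" marker body, purely for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of pre-completion mode: a true preComplete emits the current group
// without the new message, and the new message starts the next group.
public class PreCompleteDemo {

    private List<String> current = new ArrayList<>();
    private final List<List<String>> completed = new ArrayList<>();

    // stands in for AggregationStrategy.preComplete(oldExchange, newExchange)
    static boolean preComplete(List<String> oldGroup, String newBody) {
        return !oldGroup.isEmpty() && newBody.startsWith("START");
    }

    void onMessage(String body) {
        if (preComplete(current, body)) {
            completed.add(current);      // close the group without the new message
            current = new ArrayList<>(); // the new message starts from scratch
        }
        current.add(body);
    }

    List<List<String>> completedGroups() {
        return completed;
    }

    public static void main(String[] args) {
        PreCompleteDemo demo = new PreCompleteDemo();
        for (String body : Arrays.asList("START", "a", "b", "START", "c")) {
            demo.onMessage(body);
        }
        System.out.println(demo.completedGroups()); // [[START, a, b]]
    }
}
```

The second "START" message closes the first group but is not included in it, matching the rule that the group is completed without aggregating the incoming exchange.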
## CompletionAwareAggregationStrategy

If your aggregation strategy implements `CompletionAwareAggregationStrategy`, then Camel will invoke the `onCompletion` method when the aggregated `Exchange` is completed. This allows you to do any last-minute custom logic, such as cleaning up resources, or doing additional work on the exchange now that it is completed. You must **not** throw any exceptions from the `onCompletion` method.
## Completing the current group decided from the AggregationStrategy

The `AggregationStrategy` supports checking for the exchange property (`Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP`) on the returned `Exchange`, which contains a boolean indicating whether the current group should be completed. This allows overruling any existing completion predicates, sizes, timeouts, etc., and completing the group.

For example, the following logic will complete the group when the combined message body reaches five characters in length. This is done by setting the exchange property `Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP` to `true`:

```java
public final class MyCompletionStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange;
        }
        String body = oldExchange.getIn().getBody(String.class) + "+"
            + newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(body);
        if (body.length() >= 5) {
            oldExchange.setProperty(Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP, true);
        }
        return oldExchange;
    }
}
```
## Completing all previous groups decided from the AggregationStrategy

The `AggregationStrategy` also supports checking an exchange property, on the returned exchange, indicating whether all previous groups should be completed.

This allows overruling any existing completion predicates, sizes, timeouts, etc., and completing all the existing previous groups.

The following logic will complete all the previous groups and start a new aggregation group. This is done by setting the property `Exchange.AGGREGATION_COMPLETE_ALL_GROUPS` to `true` on the returned exchange:

```java
public final class MyCompletionStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // we start a new correlation group, so complete all previous groups
            newExchange.setProperty(Exchange.AGGREGATION_COMPLETE_ALL_GROUPS, true);
            return newExchange;
        }

        String body1 = oldExchange.getIn().getBody(String.class);
        String body2 = newExchange.getIn().getBody(String.class);

        oldExchange.getIn().setBody(body1 + body2);
        return oldExchange;
    }
}
```
## Manually force the completion of all aggregated Exchanges immediately

You can manually trigger completion of all current aggregated exchanges by sending an exchange containing the exchange property `Exchange.AGGREGATION_COMPLETE_ALL_GROUPS` set to `true`. The message is considered a signal message only; the message headers/contents will not be processed otherwise.

You can alternatively set the exchange property `Exchange.AGGREGATION_COMPLETE_ALL_GROUPS_INCLUSIVE` to `true` to trigger completion of all groups after processing the current message.
## Using a controller to force the aggregator to complete

The `org.apache.camel.processor.aggregate.AggregateController` allows you to control the aggregator at runtime using the Java or JMX API. It can be used to force the completion of groups of exchanges, or to query the aggregator's current runtime statistics.

The aggregator provides a default implementation if no custom one has been configured, which can be accessed using the `getAggregateController()` method. Though it may be easier to configure a controller in the route using `aggregateController`, as shown below:

```java
private AggregateController controller = new DefaultAggregateController();

from("direct:start")
    .aggregate(header("id"), new MyAggregationStrategy())
        .completionSize(10).id("myAggregator")
        .aggregateController(controller)
        .to("mock:aggregated");
```

There is then an API on `AggregateController` to force completion. For example, to complete a group with key foo:

```java
int groups = controller.forceCompletionOfGroup("foo");
```

The returned value is the number of groups completed. A value of 1 is returned if the foo group existed; otherwise 0 is returned.

There is also a method to complete all groups:

```java
int groups = controller.forceCompletionOfAllGroups();
```

The controller can also be used in the XML DSL, using `aggregateController` to refer to a bean with the controller implementation, which is looked up in the registry.

When using Spring XML, you can create the bean with `<bean>` as shown:

```xml
<bean id="myController" class="org.apache.camel.processor.aggregate.DefaultAggregateController"/>

<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="direct:start"/>
        <aggregate aggregationStrategy="myAppender" completionSize="10"
                   aggregateController="myController">
            <correlationExpression>
                <header>id</header>
            </correlationExpression>
            <to uri="mock:result"/>
        </aggregate>
    </route>
</camelContext>
```

There is also a JMX API on the aggregator, available under the processors node in the Camel JMX tree.
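The return-value convention of the force-completion methods (the number of groups actually completed) can be sketched in plain Java, with a map standing in for the aggregator's repository of in-flight groups. This mimics the semantics only; it is not the Camel `AggregateController` API itself:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of forceCompletionOfGroup / forceCompletionOfAllGroups return values:
// the count of groups that were actually completed by the call.
public class ControllerDemo {

    private final Map<String, List<String>> groups = new HashMap<>();

    void add(String key, String body) {
        groups.computeIfAbsent(key, k -> new ArrayList<>()).add(body);
    }

    int forceCompletionOfGroup(String key) {
        return groups.remove(key) != null ? 1 : 0; // 1 if the group existed, else 0
    }

    int forceCompletionOfAllGroups() {
        int count = groups.size();
        groups.clear();
        return count;
    }

    public static void main(String[] args) {
        ControllerDemo controller = new ControllerDemo();
        controller.add("foo", "f1");
        controller.add("bar", "b1");
        System.out.println(controller.forceCompletionOfGroup("foo")); // 1
        System.out.println(controller.forceCompletionOfGroup("foo")); // 0: already completed
        System.out.println(controller.forceCompletionOfAllGroups()); // 1: only "bar" was left
    }
}
```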
# Aggregating with Beans

To use an `AggregationStrategy`, you previously had to implement the `org.apache.camel.AggregationStrategy` interface, which means your logic would be tied to the Camel API. Instead, you can use a plain bean for the logic and let Camel adapt to your bean. To use a bean, the following convention must be followed:

- there must be a public method to use

- the method must not be void

- the method can be static or non-static

- the method must have two or more parameters

- the parameters are paired, so the first half applies to the `oldExchange` and the remaining half to the `newExchange`. Therefore, there must be an even number of parameters, e.g., 2, 4, 6, etc.

The paired parameters are expected to be ordered as follows:

- the first parameter is the message body

- optionally, the second parameter is a `Map` of the headers

- optionally, the third parameter is a `Map` of the exchange properties

This convention is best explained with some examples.

In the method below, we have only two parameters, so the first parameter is the body of the `oldExchange`, and the second is paired to the body of the `newExchange`:

```java
public String append(String existing, String next) {
    return existing + next;
}
```

In the method below, we have four parameters, so the first parameter is the body of the `oldExchange`, the second is the `Map` of the `oldExchange` headers, the third is paired to the body of the `newExchange`, and the fourth is the `Map` of the `newExchange` headers:

```java
public String append(String existing, Map existingHeaders, String next, Map nextHeaders) {
    return existing + next;
}
```

And finally, with six parameters, the exchange properties are included as well:

```java
public String append(String existing, Map existingHeaders, Map existingProperties,
                     String next, Map nextHeaders, Map nextProperties) {
    return existing + next;
}
```

To use this with the aggregate EIP, we can use a bean with the aggregate logic as follows:

```java
public class MyBodyAppender {

    public String append(String existing, String next) {
        return next + existing;
    }

}
```

And then in the Camel route we create an instance of our bean, and refer to it in the route using the `bean` method from `org.apache.camel.builder.AggregationStrategies`, as shown:

```java
private MyBodyAppender appender = new MyBodyAppender();

public void configure() throws Exception {
    from("direct:start")
        .aggregate(constant(true), AggregationStrategies.bean(appender, "append"))
            .completionSize(3)
            .to("mock:result");
}
```

We can also provide the bean class type directly:

```java
public void configure() throws Exception {
    from("direct:start")
        .aggregate(constant(true), AggregationStrategies.bean(MyBodyAppender.class, "append"))
            .completionSize(3)
            .to("mock:result");
}
```

And if the bean has only one method, we do not need to specify the name of the method:

```java
public void configure() throws Exception {
    from("direct:start")
        .aggregate(constant(true), AggregationStrategies.bean(MyBodyAppender.class))
            .completionSize(3)
            .to("mock:result");
}
```

And the `append` method could be static:

```java
public class MyBodyAppender {

    public static String append(String existing, String next) {
        return next + existing;
    }

}
```
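The pairing rule can be sketched in plain Java: given a bean method with 2N parameters, a bean adapter assembles the argument list from N values taken from the old exchange followed by N values from the new exchange. This mimics the convention via reflection; it is an illustration, not Camel's actual adapter code:

```java
import java.lang.reflect.Method;

// Sketch of the paired-parameter convention: the first half of the arguments
// comes from the old exchange, the second half from the new exchange.
public class PairedParametersDemo {

    // same logic as MyBodyAppender.append above
    public static String append(String existing, String next) {
        return next + existing;
    }

    static Method appendMethod() {
        try {
            return PairedParametersDemo.class.getMethod("append", String.class, String.class);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    static Object invokePaired(Method method, Object[] oldHalf, Object[] newHalf) {
        // assemble the call: old-exchange values first, then new-exchange values
        Object[] args = new Object[oldHalf.length + newHalf.length];
        System.arraycopy(oldHalf, 0, args, 0, oldHalf.length);
        System.arraycopy(newHalf, 0, args, oldHalf.length, newHalf.length);
        try {
            return method.invoke(null, args); // null target: append is static
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // old body "A", new body "B": append("A", "B") returns "BA"
        System.out.println(invokePaired(appendMethod(), new Object[]{"A"}, new Object[]{"B"}));
    }
}
```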
If you are using XML DSL, then we need to declare a `<bean>` with the
|
| 556 |
+
bean:
|
| 557 |
+
|
| 558 |
+
<bean id="myAppender" class="com.foo.MyBodyAppender"/>
|
| 559 |
+
|
| 560 |
+
And in the Camel route we use `aggregationStrategy` to refer to the bean
|
| 561 |
+
by its id, and the `strategyMethodName` can be used to define the method
|
| 562 |
+
name to call:
|
| 563 |
+
|
| 564 |
+
<camelContext xmlns="http://camel.apache.org/schema/spring">
|
| 565 |
+
<route>
|
| 566 |
+
<from uri="direct:start"/>
|
| 567 |
+
<aggregate aggregationStrategy="myAppender" aggregationStrategyMethodName="append" completionSize="3">
|
| 568 |
+
<correlationExpression>
|
| 569 |
+
<constant>true</constant>
|
| 570 |
+
</correlationExpression>
|
| 571 |
+
<to uri="mock:result"/>
|
| 572 |
+
</aggregate>
|
| 573 |
+
</route>
|
| 574 |
+
</camelContext>
|
| 575 |
+
|
| 576 |
+
When using XML DSL, you can also specify the bean class directly in
|
| 577 |
+
`aggregationStrategy` using the `#class:` syntax as shown:
|
| 578 |
+
|
| 579 |
+
<route>
|
| 580 |
+
<from uri="direct:start"/>
|
| 581 |
+
<aggregate aggregationStrategy="#class:com.foo.MyBodyAppender" aggregationStrategyMethodName="append" completionSize="3">
|
| 582 |
+
<correlationExpression>
|
| 583 |
+
<constant>true</constant>
|
| 584 |
+
</correlationExpression>
|
| 585 |
+
<to uri="mock:result"/>
|
| 586 |
+
</aggregate>
|
| 587 |
+
</route>
|
| 588 |
+
|
| 589 |
+
You can use this in XML DSL when you are not using the classic Spring
|
| 590 |
+
XML files ( where you use XML only for Camel routes).
|
| 591 |
+
|
| 592 |
## Aggregating when no data

When using a bean as `AggregationStrategy`, the method is **only**
invoked when there is data to be aggregated, meaning that the message
body is not `null`. In cases where you want to have the method invoked,
even when there is no data (message body is `null`), then set
`strategyMethodAllowNull` to `true`.

When using beans, this can be configured a bit easier using the
`beanAllowNull` method from `AggregationStrategies` as shown:

    public void configure() throws Exception {
        from("direct:start")
            .pollEnrich("seda:foo", 1000, AggregationStrategies.beanAllowNull(appender, "append"))
            .to("mock:result");
    }

Then the `append` method in the bean would need to deal with the
situation that `newExchange` can be `null`:

    public class MyBodyAppender {

        public String append(String existing, String next) {
            if (next == null) {
                return "NewWasNull" + existing;
            } else {
                return existing + next;
            }
        }

    }

In the example above we use the [Content
Enricher](#content-enricher.adoc) EIP using `pollEnrich`. The
`newExchange` will be `null` in the situation where we could not get any
data from the "seda:foo" endpoint, and a timeout was hit after 1 second.

So if we need to do special merge logic, we would need to set
`strategyMethodAllowNull=true`. If we didn’t do this, then on timeout
the append method would normally not be invoked, meaning the [Content
Enricher](#content-enricher.adoc) did not merge/change the message.
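The timeout behaviour can be checked without Camel: this plain-Java sketch (the class name is illustrative) calls the null-tolerant `append` once as if the enrichment succeeded and once as if the 1 second timeout was hit:

```java
public class AllowNullAppend {

    // Same logic as the null-tolerant append above: with the allow-null
    // option enabled, a pollEnrich timeout hands the strategy a null
    // "next" body instead of skipping the method call.
    static String append(String existing, String next) {
        if (next == null) {
            return "NewWasNull" + existing;
        }
        return existing + next;
    }

    public static void main(String[] args) {
        System.out.println(append("A", "B"));  // enrichment succeeded: AB
        System.out.println(append("A", null)); // timeout: NewWasNullA
    }
}
```

Without allow-null, the second call would never happen and the body would stay `"A"` unchanged.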
In XML DSL you would configure the `aggregationStrategyMethodAllowNull`
option and set it to `true` as shown below:

    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="direct:start"/>
            <aggregate aggregationStrategy="myAppender"
                       aggregationStrategyMethodName="append"
                       aggregationStrategyMethodAllowNull="true"
                       completionSize="3">
                <correlationExpression>
                    <constant>true</constant>
                </correlationExpression>
                <to uri="mock:result"/>
            </aggregate>
        </route>
    </camelContext>
## Aggregating with different body types

When, for example, using `strategyMethodAllowNull` as `true`, then the
parameter types of the message bodies do not have to be the same. For
example, suppose we want to aggregate from a `com.foo.User` type to a
`List<String>` that contains the name of the user. We could code a bean
as follows:

    public final class MyUserAppender {

        public List addUsers(List names, User user) {
            if (names == null) {
                names = new ArrayList();
            }
            names.add(user.getName());
            return names;
        }
    }

Notice that the return type is a `List`, which we want to contain the
names of the users. The first parameter is the `List` of names, and the
second parameter is the incoming `com.foo.User` type.
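How such a bean behaves over a sequence of exchanges can be sketched in plain Java; the nested `User` class below is a minimal stand-in for `com.foo.User`, purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class UserAppenderDemo {

    // Minimal stand-in for com.foo.User.
    static class User {
        private final String name;
        User(String name) { this.name = name; }
        String getName() { return name; }
    }

    // Same shape as MyUserAppender.addUsers: the first call receives a null
    // list (no aggregate yet), later calls receive the list built so far.
    static List<String> addUsers(List<String> names, User user) {
        if (names == null) {
            names = new ArrayList<>();
        }
        names.add(user.getName());
        return names;
    }

    public static void main(String[] args) {
        List<String> names = addUsers(null, new User("alice"));
        names = addUsers(names, new User("bob"));
        System.out.println(names); // prints [alice, bob]
    }
}
```

The null check on the first parameter is what makes the type switch (from `User` bodies to a `List<String>` aggregate) possible.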
camel-ai-summary.md ADDED
@@ -0,0 +1,10 @@
# Ai-summary.md

The Camel AI components are a group of components for applying Apache
Camel to various AI-related technologies.

# AI components

See the following for usage of each component:

indexDescriptionList::\[attributes=*group=AI*,descriptionformat=description\]
camel-asn1-dataformat.md ADDED
@@ -0,0 +1,83 @@
# Asn1-dataformat.md

**Since Camel 2.20**

The [ASN.1 Data
Format](https://www.itu.int/en/ITU-T/asn1/Pages/introduction.aspx) is a
Camel data format implementation based on Bouncy Castle’s
bcprov-jdk18on library and the jASN.1 Java compiler for ASN.1, the formal
notation used for describing data transmitted by telecommunications
protocols, regardless of language implementation and physical
representation of the data, whatever the application, whether complex
or very simple. Messages can be unmarshalled to plain Java
POJOs. With the help of Camel’s routing engine
and data transformations, you can then work with the POJOs, apply
customized formatting, and call other Camel components to convert and
send messages to upstream systems.

# ASN.1 Data Format Options

# Unmarshal

There are 3 different ways to unmarshal ASN.1 structured messages
(usually binary files).

In the first example, we unmarshal a BER file payload to an OutputStream
and send it to a mock endpoint.

    from("direct:unmarshal").unmarshal(asn1).to("mock:unmarshal");

In the second example, we unmarshal a BER file payload to byte arrays
using the Split EIP. The reason for applying the Split EIP is that usually
each BER file (or ASN.1 structured file) contains multiple records to
process, and the Split EIP lets us get each record in the file as a byte
array, which is actually an ASN1Primitive instance (by the use of Bouncy
Castle’s ASN.1 support in the bcprov-jdk18on library). Byte arrays may
then be converted to ASN1Primitive with the help of the public static
method ASN1Primitive.fromByteArray. In this example, note that you need
to set `usingIterator=true`.

    from("direct:unmarshal")
        .unmarshal(asn1)
        .split(bodyAs(Iterator.class)).streaming()
        .to("mock:unmarshal");

In the last example, we unmarshal a BER file payload to plain old
Java objects using the Split EIP, for the reason
already mentioned in the previous example. In this example we also need
to set the fully qualified name of the class (or a \<YourObject\>.class
reference) through the data format. The important thing to note here is
that your object should have been generated by the jASN.1 compiler, which
is a nice tool for generating Java object representations of your ASN.1
structure. For reference usage of the jASN.1 compiler, see the
[JASN.1 Project Page](https://www.beanit.com/asn1/), and also see how the
compiler is invoked with the help of Maven’s exec plugin. For example, in
this data format’s unit tests an example ASN.1 structure
(TestSMSBerCdr.asn1) is added in `src/test/resources/asn1_structure`.
The jASN.1 compiler is invoked, and Java object representations are
generated in `${basedir}/target/generated/src/test/java`. The nice thing
about this example is that you will get a POJO instance at the mock
endpoint, or at whatever your endpoint is.

    from("direct:unmarshaldsl")
        .unmarshal()
        .asn1("org.apache.camel.dataformat.asn1.model.testsmscbercdr.SmsCdr")
        .split(bodyAs(Iterator.class)).streaming()
        .to("mock:unmarshaldsl");

# Dependencies

To use the ASN.1 data format in your Camel routes, you need to add a
dependency on **camel-asn1**, which implements this data format.

If you use Maven, you can add the following to your `pom.xml`,
substituting the version number for the latest \& greatest release (see
the download page for the latest versions).

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-asn1</artifactId>
      <version>x.x.x</version>
      <!-- use the same version as your Camel core version -->
    </dependency>
camel-atmosphere-websocket.md CHANGED
@@ -88,7 +88,7 @@ And the equivalent Spring sample:
 |---|---|---|---|
 |servicePath|Name of websocket endpoint||string|
 |chunked|If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response|true|boolean|
-|disableStreamCache|Determines whether or not the raw input stream
+|disableStreamCache|Determines whether or not the raw input stream is cached or not. The Camel consumer (camel-servlet, camel-jetty etc.) will by default cache the input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The producer (camel-http) will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is (the stream can only be read once) as the message body.|false|boolean|
 |sendToAll|Whether to send to all (broadcast) or send to a single receiver.|false|boolean|
 |transferException|If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.|false|boolean|
 |useStreaming|To enable streaming to send data as multiple text fragments.|false|boolean|
@@ -114,6 +114,3 @@
 |traceEnabled|Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off.|false|boolean|
 |bridgeEndpoint|If the option is true, HttpProducer will ignore the Exchange.HTTP\_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back.|false|boolean|
 |lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
-|oauth2ClientId|OAuth2 client id||string|
-|oauth2ClientSecret|OAuth2 client secret||string|
-|oauth2TokenEndpoint|OAuth2 Token endpoint||string|
camel-atom.md CHANGED
@@ -39,21 +39,21 @@ Depending on the `splitEntries` flag Camel will either return one
 <col style="width: 79%" />
 </colgroup>
 <thead>
-<tr>
+<tr class="header">
 <th style="text-align: left;">Option</th>
 <th style="text-align: left;">Value</th>
 <th style="text-align: left;">Behavior</th>
 </tr>
 </thead>
 <tbody>
-<tr>
+<tr class="odd">
 <td style="text-align: left;"><p><code>splitEntries</code></p></td>
 <td style="text-align: left;"><p><code>true</code></p></td>
 <td style="text-align: left;"><p>Only a single entry from the currently
 being processed feed is set:
 <code>exchange.in.body(Entry)</code></p></td>
 </tr>
-<tr>
+<tr class="even">
 <td style="text-align: left;"><p><code>splitEntries</code></p></td>
 <td style="text-align: left;"><p><code>false</code></p></td>
 <td style="text-align: left;"><p>The entire list of entries from the
camel-attachments.md ADDED
@@ -0,0 +1,22 @@
# Attachments.md

**Since Camel 3.0**

The Attachments component provides the `javax.attachments` API support
for Apache Camel. A few Camel components use attachments, such as the
mail and web-service components. The Attachments component is included
automatically when using these components.

Attachments support is at the Camel `Message` level. For example, to get
the `javax.activation.DataHandler` instance of an attachment, you can
do as shown below:

    AttachmentMessage attMsg = exchange.getIn(AttachmentMessage.class);
    Attachment attachment = attMsg.getAttachmentObject("myAttachment");
    DataHandler dh = attachment.getDataHandler();

And if you want to add an attachment to a Camel `Message`, you can do as
shown:

    AttachmentMessage attMsg = exchange.getIn(AttachmentMessage.class);
    attMsg.addAttachment("message1.xml", new DataHandler(new FileDataSource(new File("myMessage1.xml"))));
camel-avro-dataformat.md ADDED
@@ -0,0 +1,69 @@
# Avro-dataformat.md

**Since Camel 2.14**

This component provides a data format for Avro, which allows
serialization and deserialization of messages using Apache Avro’s binary
data format. Since Camel 3.2 the RPC functionality was moved into the
separate `camel-avro-rpc` component.

There is also `camel-jackson-avro`, which is a more powerful Camel
data format for using Avro.

Maven users will need to add the following dependency to their `pom.xml`
for this component:

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-avro</artifactId>
      <version>x.x.x</version>
      <!-- use the same version as your Camel core version -->
    </dependency>

You can easily generate classes from a schema, using Maven, Ant etc.
More details can be found at the [Apache Avro
documentation](http://avro.apache.org/docs/current/).

# Avro Dataformat Options

# Examples

## Avro Data Format usage

Using the Avro data format is as easy as specifying the class that
you want to marshal or unmarshal in your route.

    AvroDataFormat format = new AvroDataFormat(Value.SCHEMA$);

    from("direct:in").marshal(format).to("direct:marshal");
    from("direct:back").unmarshal(format).to("direct:unmarshal");

Where Value is a class generated by the Avro Maven Plugin.

or in XML:

    <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="direct:in"/>
            <marshal>
                <avro instanceClass="org.apache.camel.dataformat.avro.Message" library="ApacheAvro"/>
            </marshal>
            <to uri="log:out"/>
        </route>
    </camelContext>

An alternative can be to specify the data format inside the context and
reference it from your route.

    <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
        <dataFormats>
            <avro id="avro" instanceClass="org.apache.camel.dataformat.avro.Message" library="ApacheAvro"/>
        </dataFormats>
        <route>
            <from uri="direct:in"/>
            <marshal><custom ref="avro"/></marshal>
            <to uri="log:out"/>
        </route>
    </camelContext>

In the same manner, you can unmarshal using the Avro data format.
camel-avro.md CHANGED
@@ -79,7 +79,9 @@ schema above:
 *Note: Existing classes can be used only for RPC (see below), not in
 data format.*
 
-#
+# Usage
+
+## Using Avro RPC in Camel
 
 As mentioned above, Avro also provides RPC support over multiple
 transports such as http and netty. Camel provides consumers and
@@ -164,7 +166,7 @@ is used and `getProcessor` will receive Value class directly in body,
 while `putProcessor` will receive an array of size 2 with `String` key
 and `Value` value filled as array contents.
 
-# Avro via HTTP SPI
+## Avro via HTTP SPI
 
 The Avro RPC component offers the
 `org.apache.camel.component.avro.spi.AvroRpcHttpServerFactory` service
camel-avroJackson-dataformat.md ADDED
@@ -0,0 +1,54 @@
# AvroJackson-dataformat.md

**Since Camel 3.10**

Jackson Avro is a data format which uses the [Jackson
library](https://github.com/FasterXML/jackson/) with the [Avro
extension](https://github.com/FasterXML/jackson-dataformats-binary) to
unmarshal an Avro payload into Java objects or to marshal Java objects
into an Avro payload.

If you are familiar with Jackson, this Avro data format behaves in the
same way as its JSON counterpart, and thus can be used with classes
annotated for JSON serialization/deserialization.

    from("kafka:topic").
      unmarshal().avro(JsonNode.class).
      to("log:info");

# Avro Jackson Options

# Usage

## Configuring the `SchemaResolver`

Since Avro serialization is schema-based, this data format requires that
you provide a SchemaResolver object that is able to look up the schema
for each exchange that is going to be marshalled/unmarshalled.

You can add a single SchemaResolver to the registry, and it will be
looked up automatically. Or you can explicitly specify the reference to
a custom SchemaResolver.

## Using a custom AvroMapper

You can configure `JacksonAvroDataFormat` to use a custom `AvroMapper`
in case you need more control of the mapping configuration.

If you set up a single `AvroMapper` in the registry, then Camel will
automatically look it up and use this `AvroMapper`.

# Dependencies

To use Avro Jackson in your Camel routes, you need to add the dependency
on **camel-jackson-avro**, which implements this data format.

If you use Maven, you could add the following to your pom.xml,
substituting the version number for the latest \& greatest release.

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-jackson-avro</artifactId>
      <version>x.x.x</version>
      <!-- use the same version as your Camel core version -->
    </dependency>
camel-aws-bedrock.md CHANGED
@@ -661,7 +661,9 @@ producer side:
 
 - invokeEmbeddingsModel
 
-#
+# Examples
+
+## Producer Examples
 
 - invokeTextModel: this operation will invoke a model from Bedrock.
 This is an example for both Titan Express and Titan Lite.
camel-aws-cloudtrail.md CHANGED
@@ -13,7 +13,19 @@ You must have a valid Amazon Web Services developer account, and be
 signed up to use Amazon Cloudtrail. More information is available at
 [AWS Cloudtrail](https://aws.amazon.com/cloudtrail/)
 
-#
+# URI Format
+
+    aws-cloudtrail://label[?options]
+
+The stream needs to be created prior to it being used.
+
+You can append query options to the URI in the following format:
+
+`?options=value&option2=value&...`
+
+# Usage
+
+## Static credentials, Default Credential Provider and Profile Credentials Provider
 
 You have the possibility of avoiding the usage of explicit static
 credentials by specifying the useDefaultCredentialsProvider option and
@@ -47,7 +59,7 @@ same time.
 For more information about this you can look at [AWS credentials
 documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
 
-# Cloudtrail Events consumed
+## Cloudtrail Events consumed
 
 The Cloudtrail consumer will use an API method called LookupEvents.
 
@@ -61,16 +73,6 @@ logs stored on S3, in case of creation of a new Trail.
 This is important to notice, and it must be taken into account when
 using this component.
 
-# URI Format
-
-    aws-cloudtrail://label[?options]
-
-The stream needs to be created prior to it being used.
-
-You can append query options to the URI in the following format:
-
-`?options=value&option2=value&...`
-
 ## Component Configurations
 
camel-aws-secrets-manager.md
CHANGED
|
@@ -98,6 +98,11 @@ file such as:
|
|
| 98 |
camel.vault.aws.profileName = test-account
|
| 99 |
 camel.vault.aws.profileName = test-account
 camel.vault.aws.region = region
 
+`camel.vault.aws` configuration only applies to the AWS Secrets Manager
+properties function (e.g., when resolving properties). When using the
+`operation` option to create, get, list secrets, etc., you should provide
+the usual options for connecting to AWS Services.
+
 At this point, you’ll be able to reference a property in the following
 way:
 
@@ -142,7 +147,7 @@ example:
 <camelContext>
     <route>
         <from uri="direct:start"/>
-        <log message="Username is {{aws:database
+        <log message="Username is {{aws:database#username}}"/>
     </route>
 </camelContext>
 
@@ -154,7 +159,7 @@ is not present on AWS Secret Manager:
 <camelContext>
     <route>
         <from uri="direct:start"/>
-        <log message="Username is {{aws:database
+        <log message="Username is {{aws:database#username:admin}}"/>
     </route>
 </camelContext>
 
@@ -190,7 +195,7 @@ secret doesn’t exist or the version doesn’t exist.
 <camelContext>
     <route>
         <from uri="direct:start"/>
-        <log message="Username is {{aws:database
+        <log message="Username is {{aws:database#username:admin@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
     </route>
 </camelContext>
 
@@ -318,6 +323,8 @@ the producer side:
 
 - getSecret
 
+- batchGetSecret
+
 - updateSecret
 
 - replicateSecretToRegions
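The newly listed `batchGetSecret` operation can be invoked like any other producer operation. A minimal route sketch — the endpoint label `test` and the registry bean name `#secretsManagerClient` are illustrative assumptions, not values from this diff:

```xml
<route>
    <from uri="direct:batchGetSecret"/>
    <!-- fetch several secrets in one call; secretsManagerClient must be bound in the registry -->
    <to uri="aws-secrets-manager://test?secretsManagerClient=#secretsManagerClient&amp;operation=batchGetSecret"/>
</route>
```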
camel-aws-summary.md ADDED
@@ -0,0 +1,13 @@
+# Aws-summary.md
+
+The **aws-** components allow you to work with
+[AWS](https://aws.amazon.com/). AWS offers a great palette of different
+services like CloudWatch, DynamoDB streams, storage service, email and
+queue services. The main reason to use AWS is its cloud computing
+platform.
+
+# AWS components
+
+See the following for usage of each component:
+
+indexDescriptionList::\[attributes=*group=AWS*,descriptionformat=description\]
camel-aws-xray.md ADDED
@@ -0,0 +1,175 @@
+# Aws-xray.md
+
+**Since Camel 2.21**
+
+The camel-aws-xray component is used for tracing and timing incoming and
+outgoing Camel messages using [AWS XRay](https://aws.amazon.com/xray/).
+
+Events (subsegments) are captured for incoming and outgoing messages
+being sent to/from Camel.
+
+# Configuration
+
+The configuration properties for the AWS XRay tracer are:
+
+<table>
+<colgroup>
+<col style="width: 10%" />
+<col style="width: 10%" />
+<col style="width: 79%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th style="text-align: left;">Option</th>
+<th style="text-align: left;">Default</th>
+<th style="text-align: left;">Description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td style="text-align: left;"><p>addExcludePatterns</p></td>
+<td style="text-align: left;"><p> </p></td>
+<td style="text-align: left;"><p>Sets exclude pattern(s) that will
+disable tracing for Camel messages that match the pattern. The content
+is a Set&lt;String&gt; where the key is a pattern matching routeId’s.
+The pattern uses the rules from Intercept.</p></td>
+</tr>
+<tr class="even">
+<td style="text-align: left;"><p>setTracingStrategy</p></td>
+<td style="text-align: left;"><p>NoopTracingStrategy</p></td>
+<td style="text-align: left;"><p>Allows a custom Camel
+<code>InterceptStrategy</code> to be provided to track invoked processor
+definitions like <code>BeanDefinition</code> or
+<code>ProcessDefinition</code>.
+<code>TraceAnnotatedTracingStrategy</code> will track any classes
+invoked via <code>.bean(...)</code> or <code>.process(...)</code> that
+contain a <code>@XRayTrace</code> annotation at class level.</p></td>
+</tr>
+</tbody>
+</table>
+
+There is currently only one way an AWS XRay tracer can be configured to
+provide distributed tracing for a Camel application:
+
+## Explicit
+
+Include the `camel-aws-xray` component in your POM, along with any
+specific dependencies associated with the AWS XRay Tracer.
+
+To explicitly configure AWS XRay support, instantiate the `XRayTracer`
+and initialize the camel context. You can optionally specify a `Tracer`,
+or alternatively it can be implicitly discovered using the `Registry` or
+`ServiceLoader`.
+
+    XRayTracer xrayTracer = new XRayTracer();
+    // By default, it uses a NoopTracingStrategy, but you can override it with a specific InterceptStrategy implementation.
+    xrayTracer.setTracingStrategy(...);
+    // And then initialize the context
+    xrayTracer.init(camelContext);
+
+To use XRayTracer in XML, all you need to do is to define the AWS XRay
+tracer bean. Camel will automatically discover and use it.
+
+    <bean id="tracingStrategy" class="..."/>
+    <bean id="aws-xray-tracer" class="org.apache.camel.component.aws.xray.XRayTracer">
+      <property name="tracer" ref="tracingStrategy"/>
+    </bean>
+
+In case of the default `NoopTracingStrategy`, only the creation and
+deletion of exchanges is tracked, but not the invocation of certain beans
+or EIP patterns.
+
+## Tracking of comprehensive route execution
+
+To track the execution of an exchange among multiple routes, on exchange
+creation a unique trace ID is generated and stored in the headers if no
+corresponding value was yet available. This trace ID is copied over to
+new exchanges to keep a consistent view of the processed exchange.
+
+As AWS XRay traces work on a thread-local basis, the current sub/segment
+should be copied over to the new thread and set as explained in [the AWS
+XRay
+documentation](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-multithreading.html).
+The Camel AWS XRay component therefore provides an additional header
+field that the component will use to set the passed AWS XRay `Entity` on
+the new thread and thus keep the tracked data in the route rather than
+exposing a new segment which seems uncorrelated with any of the executed
+routes.
+
+The component will use the following constants found in the headers of
+the exchange:
+
+<table>
+<colgroup>
+<col style="width: 30%" />
+<col style="width: 69%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th style="text-align: left;">Header</th>
+<th style="text-align: left;">Description</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td style="text-align: left;"><p>Camel-AWS-XRay-Trace-ID</p></td>
+<td style="text-align: left;"><p>Contains a reference to the AWS XRay
+<code>TraceID</code> object to provide a comprehensive view of the
+invoked routes</p></td>
+</tr>
+<tr class="even">
+<td style="text-align: left;"><p>Camel-AWS-XRay-Trace-Entity</p></td>
+<td style="text-align: left;"><p>Contains a reference to the actual AWS
+XRay <code>Segment</code> or <code>Subsegment</code> which is copied
+over to the new thread. This header should be set in case a new thread
+is spawned and the performed tasks should be exposed as part of the
+executed route instead of creating a new unrelated segment.</p></td>
+</tr>
+</tbody>
+</table>
+
+Note that the AWS XRay `Entity` (i.e., `Segment` and `Subsegment`) are
+not serializable and therefore should not get passed to other JVM
+processes.
+
+# Example
+
+You can find an example demonstrating the way to configure AWS XRay
+tracing within the tests accompanying this project.
+
+# Dependency
+
+To include AWS XRay support in Camel, the archive containing the
+Camel-related AWS XRay classes needs to be added to the project. In
+addition to that, AWS XRay libraries also need to be available.
+
+To include both AWS XRay and Camel dependencies, use the following Maven
+imports:
+
+    <dependencyManagement>
+      <dependencies>
+        <dependency>
+          <groupId>com.amazonaws</groupId>
+          <artifactId>aws-xray-recorder-sdk-bom</artifactId>
+          <version>2.4.0</version>
+          <type>pom</type>
+          <scope>import</scope>
+        </dependency>
+      </dependencies>
+    </dependencyManagement>
+
+    <dependencies>
+      <dependency>
+        <groupId>org.apache.camel</groupId>
+        <artifactId>camel-aws-xray</artifactId>
+      </dependency>
+
+      <dependency>
+        <groupId>com.amazonaws</groupId>
+        <artifactId>aws-xray-recorder-sdk-core</artifactId>
+      </dependency>
+      <dependency>
+        <groupId>com.amazonaws</groupId>
+        <artifactId>aws-xray-recorder-sdk-aws-sdk</artifactId>
+      </dependency>
+    </dependencies>
camel-aws2-athena.md CHANGED
@@ -361,12 +361,12 @@ Camel.
 |accessKey|Amazon AWS Access Key.||string|
 |encryptionOption|The encryption type to use when storing query results in S3. One of SSE\_S3, SSE\_KMS, or CSE\_KMS.||object|
 |kmsKey|For SSE-KMS and CSE-KMS, this is the KMS key ARN or ID.||string|
-|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
 |secretKey|Amazon AWS Secret Key.||string|
 |sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
 |useDefaultCredentialsProvider|Set whether the Athena client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in|false|boolean|
 |useProfileCredentialsProvider|Set whether the Athena client should expect to load credentials through a profile credentials provider.|false|boolean|
-|useSessionCredentials|Set whether the Athena client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume
+|useSessionCredentials|Set whether the Athena client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Athena.|false|boolean|
 
 ## Endpoint Configurations
 
@@ -400,9 +400,9 @@ Camel.
 |accessKey|Amazon AWS Access Key.||string|
 |encryptionOption|The encryption type to use when storing query results in S3. One of SSE\_S3, SSE\_KMS, or CSE\_KMS.||object|
 |kmsKey|For SSE-KMS and CSE-KMS, this is the KMS key ARN or ID.||string|
-|profileCredentialsName|If using a profile credentials provider this parameter will set the profile name||string|
+|profileCredentialsName|If using a profile credentials provider, this parameter will set the profile name||string|
 |secretKey|Amazon AWS Secret Key.||string|
 |sessionToken|Amazon AWS Session Token used when the user needs to assume an IAM role||string|
 |useDefaultCredentialsProvider|Set whether the Athena client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in|false|boolean|
 |useProfileCredentialsProvider|Set whether the Athena client should expect to load credentials through a profile credentials provider.|false|boolean|
-|useSessionCredentials|Set whether the Athena client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume
+|useSessionCredentials|Set whether the Athena client should expect to use Session Credentials. This is useful in a situation in which the user needs to assume an IAM role for doing operations in Athena.|false|boolean|
camel-aws2-ddb.md CHANGED
@@ -91,7 +91,7 @@ URI:
 
 The `#client` refers to a `DynamoDbClient` in the Registry.
 
-# Supported producer operations
+## Supported producer operations
 
 - BatchGetItems
camel-aws2-ddbstream.md CHANGED
@@ -72,9 +72,9 @@ same time.
 For more information about this you can look at [AWS credentials
 documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
 
-# Coping with Downtime
+## Coping with Downtime
 
-## AWS DynamoDB Streams outage of less than 24 hours
+### AWS DynamoDB Streams outage of less than 24 hours
 
 The consumer will resume from the last seen sequence number (as
 implemented for
camel-aws2-ec2.md CHANGED
@@ -63,7 +63,7 @@ same time.
 For more information about this you can look at [AWS credentials
 documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
 
-# Supported producer operations
+## Supported producer operations
 
 - createAndRunInstances
 
@@ -149,7 +149,7 @@ documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/c
     })
     .to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=terminateInstances");
 
-# Using a POJO as body
+## Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple
 options. We introduce the possibility to use a POJO as a body. In AWS
camel-aws2-ecs.md CHANGED
@@ -85,7 +85,7 @@ side:
 from("direct:listClusters")
     .to("aws2-ecs://test?ecsClient=#amazonEcsClient&operation=listClusters")
 
-# Using a POJO as body
+## Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple
 options. We introduce the possibility to use a POJO as a body. In AWS
camel-aws2-eks.md CHANGED
@@ -76,7 +76,9 @@ side:
 
 - deleteCluster
 
-#
+# Examples
+
+## Producer Examples
 
 - listClusters: this operation will list the available clusters in EKS
 
@@ -85,7 +87,7 @@ side:
 from("direct:listClusters")
     .to("aws2-eks://test?eksClient=#amazonEksClient&operation=listClusters")
 
-# Using a POJO as body
+## Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple
 options. We introduce the possibility to use a POJO as a body. In AWS
camel-aws2-eventbridge.md CHANGED
@@ -28,6 +28,8 @@ You can append query options to the URI in the following format:
 
 `?options=value&option2=value&...`
 
+# Usage
+
 ## Static credentials, Default Credential Provider and Profile Credentials Provider
 
 You have the possibility of avoiding the usage of explicit static
 
@@ -289,7 +291,7 @@ this operation will return a list of rules associated with a target.
 this operation will return a list of entries with related ID sent to
 servicebus.
 
-# Updating the rule
+## Updating the rule
 
 To update a rule, you’ll need to perform the putRule operation again.
 There is no explicit update rule operation in the Java SDK.
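The update note above can be sketched as a route: re-issuing `putRule` for an existing rule name updates that rule in place. The event-bus name `default` and the `direct:updateRule` endpoint are illustrative assumptions:

```xml
<route>
    <from uri="direct:updateRule"/>
    <!-- re-running putRule with the same rule name overwrites the existing rule -->
    <to uri="aws2-eventbridge://default?operation=putRule"/>
</route>
```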
camel-aws2-iam.md CHANGED
@@ -97,7 +97,9 @@ producer side:
 
 - removeUserFromGroup
 
-#
+# Examples
+
+## Producer Examples
 
 - createUser: this operation will create a user in IAM
 
@@ -145,7 +147,7 @@ producer side:
 from("direct:listUsers")
     .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=listGroups")
 
-# Using a POJO as body
+## Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple
 options. We introduce the possibility to use a POJO as a body. In AWS
camel-aws2-kinesis-firehose.md CHANGED
@@ -13,8 +13,6 @@ You must have a valid Amazon Web Services developer account, and be
 signed up to use Amazon Kinesis Firehose. More information is available
 at [AWS Kinesis Firehose](https://aws.amazon.com/kinesis/firehose/)
 
-The AWS2 Kinesis Firehose component is not supported in OSGI
-
 # URI Format
 
     aws2-kinesis-firehose://delivery-stream-name[?options]
camel-aws2-kinesis.md CHANGED
@@ -33,7 +33,9 @@ Required Kinesis component options
 You have to provide the KinesisClient in the Registry with proxies and
 relevant credentials configured.
 
-#
+# Usage
+
+## Batch Consumer
 
 This component implements the Batch Consumer.
 
@@ -47,7 +49,7 @@ therefore, if you leave the *shardId* property in the DSL configuration
 empty, then it’ll consume all available shards otherwise only the
 specified shard corresponding to the shardId will be consumed.
 
-# Batch Producer
+## Batch Producer
 
 This component implements the Batch Producer.
 
@@ -60,8 +62,6 @@ it can be a `List`, `Set` or any other collection type. The message type
 can be one or more of types `byte[]`, `ByteBuffer`, UTF-8 `String`, or
 `InputStream`. Other types are not supported.
 
-# Usage
-
 ## Static credentials, Default Credential Provider and Profile Credentials Provider
 
 You have the possibility of avoiding the usage of explicit static
 
@@ -164,7 +164,7 @@ Camel.
 |iteratorType|Defines where in the Kinesis stream to start getting records|TRIM\_HORIZON|object|
 |maxResultsPerRequest|Maximum number of records that will be fetched in each poll|1|integer|
 |sequenceNumber|The sequence number to start polling from. Required if iteratorType is set to AFTER\_SEQUENCE\_NUMBER or AT\_SEQUENCE\_NUMBER||string|
-|shardClosed|Define what will be the behavior in case of shard closed. Possible value are ignore, silent and fail. In case of ignore a message will be logged and the consumer will
+|shardClosed|Define what will be the behavior in case of shard closed. Possible values are ignore, silent and fail. In case of ignore, a WARN message will be logged once and the consumer will not process new messages until restarted; in case of silent there will be no logging and the consumer will not process new messages until restarted; in case of fail a ReachedClosedStateException will be thrown|ignore|object|
 |shardId|Defines which shardId in the Kinesis stream to get records from||string|
 |shardMonitorInterval|The interval in milliseconds to wait between shard polling|10000|integer|
 |lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
 
@@ -202,7 +202,7 @@ Camel.
 |maxResultsPerRequest|Maximum number of records that will be fetched in each poll|1|integer|
 |sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
 |sequenceNumber|The sequence number to start polling from. Required if iteratorType is set to AFTER\_SEQUENCE\_NUMBER or AT\_SEQUENCE\_NUMBER||string|
-|shardClosed|Define what will be the behavior in case of shard closed. Possible value are ignore, silent and fail. In case of ignore a message will be logged and the consumer will
+|shardClosed|Define what will be the behavior in case of shard closed. Possible values are ignore, silent and fail. In case of ignore, a WARN message will be logged once and the consumer will not process new messages until restarted; in case of silent there will be no logging and the consumer will not process new messages until restarted; in case of fail a ReachedClosedStateException will be thrown|ignore|object|
 |shardId|Defines which shardId in the Kinesis stream to get records from||string|
 |bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
 |exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
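Several of the consumer options above (`shardId`, `iteratorType`, `shardClosed`) come together in a route definition. A minimal sketch — the stream name `mystream`, the shard id, and the log endpoint are chosen for illustration:

```xml
<route>
    <!-- consume one shard from TRIM_HORIZON; ignore (and stop on) shard-closed events -->
    <from uri="aws2-kinesis://mystream?shardId=shardId-000000000000&amp;iteratorType=TRIM_HORIZON&amp;shardClosed=ignore"/>
    <!-- each polled record becomes one exchange -->
    <to uri="log:kinesis-records"/>
</route>
```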
camel-aws2-kms.md
CHANGED
|
@@ -80,7 +80,9 @@ side:
|
|
| 80 |
|
| 81 |
- enableKey
|
| 82 |
|
| 83 |
-
#
|
|
|
|
|
|
|
| 84 |
|
| 85 |
- listKeys: this operation will list the available keys in KMS
|
| 86 |
|
|
@@ -112,7 +114,7 @@ side:
|
|
| 112 |
.setHeader(KMS2Constants.KEY_ID, constant("123")
|
| 113 |
.to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=enableKey")
|
| 114 |
|
| 115 |
-
# Using a POJO as body
|
| 116 |
|
| 117 |
Sometimes building an AWS Request can be complex because of multiple
|
| 118 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
|
|
|
| 80 |
|
| 81 |
- enableKey
|
| 82 |
|
| 83 |
+
# Examples
|
| 84 |
+
|
| 85 |
+
## Producer Examples
|
| 86 |
|
| 87 |
- listKeys: this operation will list the available keys in KMS
|
| 88 |
|
|
|
|
| 114 |
.setHeader(KMS2Constants.KEY_ID, constant("123"))
|
| 115 |
.to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=enableKey")
|
| 116 |
|
| 117 |
+
## Using a POJO as body
|
| 118 |
|
| 119 |
Sometimes building an AWS Request can be complex because of multiple
|
| 120 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
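The KMS snippets above look up the producer's client as a bean from the registry (`kmsClient=#amazonKmsClient`); as a variation, the same endpoint can be configured with static credentials directly in the URI, following the pattern used by the other aws2 components in this changeset (the keys and region below are placeholders, and `RAW()` keeps Camel from interpreting the values):

```
aws2-kms://test?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&operation=listKeys
```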
camel-aws2-mq.md
CHANGED
|
@@ -133,7 +133,7 @@ side:
|
|
| 133 |
.setHeader(MQ2Constants.BROKER_ID, constant("123"))
|
| 134 |
.to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=rebootBroker")
|
| 135 |
|
| 136 |
-
# Using a POJO as body
|
| 137 |
|
| 138 |
Sometimes building an AWS Request can be complex because of multiple
|
| 139 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
|
|
|
| 133 |
.setHeader(MQ2Constants.BROKER_ID, constant("123"))
|
| 134 |
.to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=rebootBroker")
|
| 135 |
|
| 136 |
+
## Using a POJO as body
|
| 137 |
|
| 138 |
Sometimes building an AWS Request can be complex because of multiple
|
| 139 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
camel-aws2-msk.md
CHANGED
|
@@ -116,7 +116,7 @@ side:
|
|
| 116 |
})
|
| 117 |
.to("aws2-msk://test?mskClient=#amazonMskClient&operation=deleteCluster")
|
| 118 |
|
| 119 |
-
# Using a POJO as body
|
| 120 |
|
| 121 |
Sometimes building an AWS Request can be complex because of multiple
|
| 122 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
|
|
|
| 116 |
})
|
| 117 |
.to("aws2-msk://test?mskClient=#amazonMskClient&operation=deleteCluster")
|
| 118 |
|
| 119 |
+
## Using a POJO as body
|
| 120 |
|
| 121 |
Sometimes building an AWS Request can be complex because of multiple
|
| 122 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
camel-aws2-redshift-data.md
CHANGED
|
@@ -92,7 +92,9 @@ the producer side:
|
|
| 92 |
|
| 93 |
- getStatementResult
|
| 94 |
|
| 95 |
-
#
|
|
|
|
|
|
|
| 96 |
|
| 97 |
- listDatabases: this operation will list redshift databases
|
| 98 |
|
|
@@ -101,7 +103,7 @@ the producer side:
|
|
| 101 |
from("direct:listDatabases")
|
| 102 |
.to("aws2-redshift-data://test?awsRedshiftDataClient=#awsRedshiftDataClient&operation=listDatabases")
|
| 103 |
|
| 104 |
-
# Using a POJO as body
|
| 105 |
|
| 106 |
Sometimes building an AWS Request can be complex because of multiple
|
| 107 |
options. We introduce the possibility to use a POJO as body. In AWS
|
|
|
|
| 92 |
|
| 93 |
- getStatementResult
|
| 94 |
|
| 95 |
+
# Examples
|
| 96 |
+
|
| 97 |
+
## Producer Examples
|
| 98 |
|
| 99 |
- listDatabases: this operation will list redshift databases
|
| 100 |
|
|
|
|
| 103 |
from("direct:listDatabases")
|
| 104 |
.to("aws2-redshift-data://test?awsRedshiftDataClient=#awsRedshiftDataClient&operation=listDatabases")
|
| 105 |
|
| 106 |
+
## Using a POJO as body
|
| 107 |
|
| 108 |
Sometimes building an AWS Request can be complex because of multiple
|
| 109 |
options. We introduce the possibility to use a POJO as body. In AWS
|
camel-aws2-s3.md
CHANGED
|
@@ -28,7 +28,9 @@ Required S3 component options
|
|
| 28 |
You have to provide the amazonS3Client in the Registry or your accessKey
|
| 29 |
and secretKey to access the [Amazon’s S3](https://aws.amazon.com/s3).
|
| 30 |
|
| 31 |
-
#
|
|
|
|
|
|
|
| 32 |
|
| 33 |
This component implements the Batch Consumer.
|
| 34 |
|
|
@@ -36,14 +38,6 @@ This allows you, for instance, to know how many messages exist in this
|
|
| 36 |
batch and for instance, let the Aggregator aggregate this number of
|
| 37 |
messages.
|
| 38 |
|
| 39 |
-
# Usage
|
| 40 |
-
|
| 41 |
-
For example, to read file `hello.txt` from bucket `helloBucket`, use the
|
| 42 |
-
following snippet:
|
| 43 |
-
|
| 44 |
-
from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt")
|
| 45 |
-
.to("file:/var/downloaded");
|
| 46 |
-
|
| 47 |
## S3 Producer operations
|
| 48 |
|
| 49 |
Camel-AWS2-S3 component provides the following operation on the producer
|
|
@@ -71,6 +65,14 @@ If you don’t specify an operation, explicitly the producer will do:
|
|
| 71 |
|
| 72 |
- a multipart upload if multiPartUpload option is enabled
|
| 73 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 74 |
## Advanced AmazonS3 configuration
|
| 75 |
|
| 76 |
If your Camel Application is running behind a firewall or if you need to
|
|
@@ -301,7 +303,84 @@ If checksum validations are enabled, the url will no longer be browser
|
|
| 301 |
compatible because it adds a signed header that must be included in the
|
| 302 |
HTTP request.
|
| 303 |
|
| 304 |
-
#
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 305 |
|
| 306 |
With the stream mode enabled, users will be able to upload data to S3
|
| 307 |
without knowing the size of the data ahead of time, by leveraging
|
|
@@ -373,14 +452,14 @@ As an example:
|
|
| 373 |
|
| 374 |
In this case, the upload will be completed after 10 seconds.
|
| 375 |
|
| 376 |
-
# Bucket Auto-creation
|
| 377 |
|
| 378 |
With the option `autoCreateBucket` users are able to avoid the
|
| 379 |
auto-creation of an S3 Bucket in case it doesn’t exist. The default for
|
| 380 |
this option is `false`. If set to false, any operation on a not-existent
|
| 381 |
bucket in AWS won’t be successful and an error will be returned.
|
| 382 |
|
| 383 |
-
# Moving stuff between a bucket and another bucket
|
| 384 |
|
| 385 |
Some users like to consume stuff from a bucket and move the content in a
|
| 386 |
different one without using the copyObject feature of this component. If
|
|
@@ -388,7 +467,7 @@ this is case for you, remember to remove the bucketName header from the
|
|
| 388 |
incoming exchange of the consumer, otherwise the file will always be
|
| 389 |
overwritten on the same original bucket.
|
| 390 |
|
| 391 |
-
# MoveAfterRead consumer option
|
| 392 |
|
| 393 |
In addition to deleteAfterRead, another option has been added,
|
| 394 |
moveAfterRead. With this option enabled, the consumed object will be
|
|
@@ -418,7 +497,7 @@ to true as default).
|
|
| 418 |
So if the file name is test, in the *myothercamelbucket* you should see
|
| 419 |
a file called pre-test-suff.
|
| 420 |
|
| 421 |
-
# Using customer key as encryption
|
| 422 |
|
| 423 |
We introduced also the customer key support (an alternative of using
|
| 424 |
KMS). The following code shows an example.
|
|
@@ -435,7 +514,7 @@ KMS). The following code shows an example.
|
|
| 435 |
.setBody(constant("Test"))
|
| 436 |
.to(awsEndpoint);
|
| 437 |
|
| 438 |
-
# Using a POJO as body
|
| 439 |
|
| 440 |
Sometimes building an AWS Request can be complex because of multiple
|
| 441 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
|
@@ -449,7 +528,7 @@ brokers request, you can do something like:
|
|
| 449 |
In this way, you’ll pass the request directly without the need of
|
| 450 |
passing headers and options specifically related to this operation.
|
| 451 |
|
| 452 |
-
# Create S3 client and add component to registry
|
| 453 |
|
| 454 |
Sometimes you would want to perform some advanced configuration using
|
| 455 |
AWS2S3Configuration, which also allows you to set the S3 client. You can
|
|
@@ -504,6 +583,7 @@ Camel.
|
|
| 504 |
|configuration|The component configuration||object|
|
| 505 |
|delimiter|The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string|
|
| 506 |
|forcePathStyle|Set whether the S3 client should use path-style URL instead of virtual-hosted-style|false|boolean|
|
|
|
|
| 507 |
|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
|
| 508 |
|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
|
| 509 |
|policy|The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method.||string|
|
|
@@ -520,7 +600,6 @@ Camel.
|
|
| 520 |
|destinationBucketSuffix|Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true.||string|
|
| 521 |
|doneFileName|If provided, Camel will only consume files if a done file exists.||string|
|
| 522 |
|fileName|To get the object from the bucket with the given file name||string|
|
| 523 |
-
|ignoreBody|If it is true, the S3 Object body will be ignored completely. If it is set to false, the S3 Object will be put in the body. Setting this to true will override any behavior defined by the includeBody option.|false|boolean|
|
| 524 |
|includeBody|If it is true, the S3Object exchange will be consumed and put into the body and closed. If false, the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. If includeBody is set to true, then because the S3Object stream will be consumed, it will also be closed; whereas if includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion.|true|boolean|
|
| 525 |
|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those|true|boolean|
|
| 526 |
|moveAfterRead|Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation, the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved.|false|boolean|
|
|
@@ -569,6 +648,7 @@ Camel.
|
|
| 569 |
|autoCreateBucket|Setting the autocreation of the S3 bucket bucketName. This will apply also in case of moveAfterRead option enabled, and it will create the destinationBucket if it doesn't exist already.|false|boolean|
|
| 570 |
|delimiter|The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string|
|
| 571 |
|forcePathStyle|Set whether the S3 client should use path-style URL instead of virtual-hosted-style|false|boolean|
|
|
|
|
| 572 |
|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
|
| 573 |
|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
|
| 574 |
|policy|The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method.||string|
|
|
@@ -584,7 +664,6 @@ Camel.
|
|
| 584 |
|destinationBucketSuffix|Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true.||string|
|
| 585 |
|doneFileName|If provided, Camel will only consume files if a done file exists.||string|
|
| 586 |
|fileName|To get the object from the bucket with the given file name||string|
|
| 587 |
-
|ignoreBody|If it is true, the S3 Object body will be ignored completely. If it is set to false, the S3 Object will be put in the body. Setting this to true will override any behavior defined by the includeBody option.|false|boolean|
|
| 588 |
|includeBody|If it is true, the S3Object exchange will be consumed and put into the body and closed. If false, the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. If includeBody is set to true, then because the S3Object stream will be consumed, it will also be closed; whereas if includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion.|true|boolean|
|
| 589 |
|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those|true|boolean|
|
| 590 |
|maxConnections|Set the maxConnections parameter in the S3 client configuration|60|integer|
|
|
|
|
| 28 |
You have to provide the amazonS3Client in the Registry or your accessKey
|
| 29 |
and secretKey to access the [Amazon’s S3](https://aws.amazon.com/s3).
|
| 30 |
|
| 31 |
+
# Usage
|
| 32 |
+
|
| 33 |
+
## Batch Consumer
|
| 34 |
|
| 35 |
This component implements the Batch Consumer.
|
| 36 |
|
|
|
|
| 38 |
batch and for instance, let the Aggregator aggregate this number of
|
| 39 |
messages.
|
| 40 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 41 |
## S3 Producer operations
|
| 42 |
|
| 43 |
Camel-AWS2-S3 component provides the following operations on the producer
|
|
|
|
| 65 |
|
| 66 |
- a multipart upload if multiPartUpload option is enabled
|
| 67 |
|
| 68 |
+
# Examples
|
| 69 |
+
|
| 70 |
+
For example, to read file `hello.txt` from bucket `helloBucket`, use the
|
| 71 |
+
following snippet:
|
| 72 |
+
|
| 73 |
+
from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt")
|
| 74 |
+
.to("file:/var/downloaded");
|
| 75 |
+
|
| 76 |
## Advanced AmazonS3 configuration
|
| 77 |
|
| 78 |
If your Camel Application is running behind a firewall or if you need to
|
|
|
|
| 303 |
compatible because it adds a signed header that must be included in the
|
| 304 |
HTTP request.
|
| 305 |
|
| 306 |
+
## AWS S3 Producer minimum permissions
|
| 307 |
+
|
| 308 |
+
To make the producer work, you’ll need at least PutObject and
|
| 309 |
+
ListBucket permissions. The following policy will be enough:
|
| 310 |
+
|
| 311 |
+
{
|
| 312 |
+
"Version": "2012-10-17",
|
| 313 |
+
"Statement": [
|
| 314 |
+
{
|
| 315 |
+
"Effect": "Allow",
|
| 316 |
+
"Action": "s3:PutObject",
|
| 317 |
+
"Resource": "arn:aws:s3:::*/*"
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"Effect": "Allow",
|
| 321 |
+
"Action": "s3:ListBucket",
|
| 322 |
+
"Resource": "arn:aws:s3:::*"
|
| 323 |
+
}
|
| 324 |
+
]
|
| 325 |
+
}
|
| 326 |
+
|
| 327 |
+
A variation of the minimum permissions is related to the usage of Bucket
|
| 328 |
+
autocreation. In that case, the permissions will need to be increased
|
| 329 |
+
with the CreateBucket permission:
|
| 330 |
+
|
| 331 |
+
{
|
| 332 |
+
"Version": "2012-10-17",
|
| 333 |
+
"Statement": [
|
| 334 |
+
{
|
| 335 |
+
"Effect": "Allow",
|
| 336 |
+
"Action": "s3:PutObject",
|
| 337 |
+
"Resource": "arn:aws:s3:::*/*"
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"Effect": "Allow",
|
| 341 |
+
"Action": "s3:ListBucket",
|
| 342 |
+
"Resource": "arn:aws:s3:::*"
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"Effect": "Allow",
|
| 346 |
+
"Action": "s3:CreateBucket",
|
| 347 |
+
"Resource": "arn:aws:s3:::*"
|
| 348 |
+
}
|
| 349 |
+
]
|
| 350 |
+
}
|
| 351 |
+
|
| 352 |
+
## AWS S3 Consumer minimum permissions
|
| 353 |
+
|
| 354 |
+
To make the consumer work, you’ll need at least GetObject,
|
| 355 |
+
ListBucket and DeleteObject permissions. The following policy will be
|
| 356 |
+
enough:
|
| 357 |
+
|
| 358 |
+
{
|
| 359 |
+
"Version": "2012-10-17",
|
| 360 |
+
"Statement": [
|
| 361 |
+
{
|
| 362 |
+
"Effect": "Allow",
|
| 363 |
+
"Action": "s3:ListBucket",
|
| 364 |
+
"Resource": "arn:aws:s3:::*"
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"Effect": "Allow",
|
| 368 |
+
"Action": "s3:GetObject",
|
| 369 |
+
"Resource": "arn:aws:s3:::*/*"
|
| 370 |
+
},
|
| 371 |
+
{
|
| 372 |
+
"Effect": "Allow",
|
| 373 |
+
"Action": "s3:DeleteObject",
|
| 374 |
+
"Resource": "arn:aws:s3:::*/*"
|
| 375 |
+
}
|
| 376 |
+
]
|
| 377 |
+
}
|
| 378 |
+
|
| 379 |
+
By default, the consumer will use the deleteAfterRead option. This means
|
| 380 |
+
the object will be deleted once consumed, which is why the DeleteObject
|
| 381 |
+
permission is required.
|
| 382 |
+
|
| 383 |
+
## Streaming Upload mode
|
| 384 |
|
| 385 |
With the stream mode enabled, users will be able to upload data to S3
|
| 386 |
without knowing the size of the data ahead of time, by leveraging
|
|
|
|
| 452 |
|
| 453 |
In this case, the upload will be completed after 10 seconds.
|
| 454 |
|
| 455 |
+
## Bucket Auto-creation
|
| 456 |
|
| 457 |
With the option `autoCreateBucket` users are able to avoid the
|
| 458 |
auto-creation of an S3 Bucket in case it doesn’t exist. The default for
|
| 459 |
this option is `false`. If set to false, any operation on a not-existent
|
| 460 |
bucket in AWS won’t be successful and an error will be returned.
|
| 461 |
|
| 462 |
+
## Moving stuff between a bucket and another bucket
|
| 463 |
|
| 464 |
Some users like to consume stuff from a bucket and move the content in a
|
| 465 |
different one without using the copyObject feature of this component. If
|
|
|
|
| 467 |
incoming exchange of the consumer, otherwise the file will always be
|
| 468 |
overwritten on the same original bucket.
|
| 469 |
|
| 470 |
+
## MoveAfterRead consumer option
|
| 471 |
|
| 472 |
In addition to deleteAfterRead, another option has been added,
|
| 473 |
moveAfterRead. With this option enabled, the consumed object will be
|
|
|
|
| 497 |
So if the file name is test, in the *myothercamelbucket* you should see
|
| 498 |
a file called pre-test-suff.
|
| 499 |
|
| 500 |
+
## Using the customer key as encryption
|
| 501 |
|
| 502 |
We introduced also the customer key support (an alternative of using
|
| 503 |
KMS). The following code shows an example.
|
|
|
|
| 514 |
.setBody(constant("Test"))
|
| 515 |
.to(awsEndpoint);
|
| 516 |
|
| 517 |
+
## Using a POJO as body
|
| 518 |
|
| 519 |
Sometimes building an AWS Request can be complex because of multiple
|
| 520 |
options. We introduce the possibility to use a POJO as the body. In AWS
|
|
|
|
| 528 |
In this way, you’ll pass the request directly without the need of
|
| 529 |
passing headers and options specifically related to this operation.
|
| 530 |
|
| 531 |
+
## Create S3 client and add component to registry
|
| 532 |
|
| 533 |
Sometimes you would want to perform some advanced configuration using
|
| 534 |
AWS2S3Configuration, which also allows you to set the S3 client. You can
|
|
|
|
| 583 |
|configuration|The component configuration||object|
|
| 584 |
|delimiter|The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string|
|
| 585 |
|forcePathStyle|Set whether the S3 client should use path-style URL instead of virtual-hosted-style|false|boolean|
|
| 586 |
+
|ignoreBody|If it is true, the S3 Object body will be ignored completely. If it is set to false, the S3 Object will be put in the body. Setting this to true will override any behavior defined by the includeBody option.|false|boolean|
|
| 587 |
|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
|
| 588 |
|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
|
| 589 |
|policy|The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method.||string|
|
|
|
|
| 600 |
|destinationBucketSuffix|Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true.||string|
|
| 601 |
|doneFileName|If provided, Camel will only consume files if a done file exists.||string|
|
| 602 |
|fileName|To get the object from the bucket with the given file name||string|
|
|
|
|
| 603 |
|includeBody|If it is true, the S3Object exchange will be consumed and put into the body and closed. If false, the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. If includeBody is set to true, then because the S3Object stream will be consumed, it will also be closed; whereas if includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion.|true|boolean|
|
| 604 |
|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those|true|boolean|
|
| 605 |
|moveAfterRead|Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation, the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved.|false|boolean|
|
|
|
|
| 648 |
|autoCreateBucket|Setting the autocreation of the S3 bucket bucketName. This will apply also in case of moveAfterRead option enabled, and it will create the destinationBucket if it doesn't exist already.|false|boolean|
|
| 649 |
|delimiter|The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in.||string|
|
| 650 |
|forcePathStyle|Set whether the S3 client should use path-style URL instead of virtual-hosted-style|false|boolean|
|
| 651 |
+
|ignoreBody|If it is true, the S3 Object body will be ignored completely. If it is set to false, the S3 Object will be put in the body. Setting this to true will override any behavior defined by the includeBody option.|false|boolean|
|
| 652 |
|overrideEndpoint|Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option|false|boolean|
|
| 653 |
|pojoRequest|If we want to use a POJO request as body or not|false|boolean|
|
| 654 |
|policy|The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method.||string|
|
|
|
|
| 664 |
|destinationBucketSuffix|Define the destination bucket suffix to use when an object must be moved, and moveAfterRead is set to true.||string|
|
| 665 |
|doneFileName|If provided, Camel will only consume files if a done file exists.||string|
|
| 666 |
|fileName|To get the object from the bucket with the given file name||string|
|
|
|
|
| 667 |
|includeBody|If it is true, the S3Object exchange will be consumed and put into the body and closed. If false, the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. If includeBody is set to true, then because the S3Object stream will be consumed, it will also be closed; whereas if includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion.|true|boolean|
|
| 668 |
|includeFolders|If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those|true|boolean|
|
| 669 |
|maxConnections|Set the maxConnections parameter in the S3 client configuration|60|integer|
|
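Tying together the consumer options discussed in this file, a moveAfterRead endpoint could be configured roughly as follows. This is a sketch: the bucket names, the client bean, and the prefix/suffix values are placeholders, and the `destinationBucketPrefix` option name is assumed to mirror the `destinationBucketSuffix` option shown in the tables above:

```
aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket&destinationBucketPrefix=RAW(pre-)&destinationBucketSuffix=RAW(-suff)
```

With this configuration, a consumed object named `test` would end up in *myothercamelbucket* as `pre-test-suff`, as described in the MoveAfterRead section.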
camel-aws2-sns.md
CHANGED
|
@@ -33,8 +33,6 @@ You have to provide the amazonSNSClient in the Registry or your
|
|
| 33 |
accessKey and secretKey to access the [Amazon’s
|
| 34 |
SNS](https://aws.amazon.com/sns).
|
| 35 |
|
| 36 |
-
# Usage
|
| 37 |
-
|
| 38 |
## Static credentials, Default Credential Provider and Profile Credentials Provider
|
| 39 |
|
| 40 |
You have the possibility of avoiding the usage of explicit static
|
|
@@ -69,6 +67,8 @@ same time.
|
|
| 69 |
For more information about this you can look at [AWS credentials
|
| 70 |
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
|
| 71 |
|
|
|
|
|
|
|
| 72 |
## Advanced AmazonSNS configuration
|
| 73 |
|
| 74 |
If you need more control over the `SnsClient` instance configuration you
|
|
@@ -97,14 +97,14 @@ your SQS Queue
|
|
| 97 |
from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5")
|
| 98 |
.to(...);
|
| 99 |
|
| 100 |
-
# Topic
|
| 101 |
|
| 102 |
With the option `autoCreateTopic` users are able to avoid the
|
| 103 |
-
|
| 104 |
this option is `false`. If set to false, any operation on a non-existent
|
| 105 |
topic in AWS won’t be successful and an error will be returned.
|
| 106 |
|
| 107 |
-
# SNS FIFO
|
| 108 |
|
| 109 |
SNS FIFO is supported. While creating the SQS queue that you will subscribe
|
| 110 |
to the SNS topic, there is an important point to remember: you’ll need to
|
|
@@ -112,10 +112,10 @@ make possible for the SNS Topic to send the message to the SQS Queue.
|
|
| 112 |
|
| 113 |
This is clear with an example.
|
| 114 |
|
| 115 |
-
Suppose you created an SNS FIFO Topic called Order.fifo and an SQS
|
| 116 |
-
called QueueSub.fifo.
|
| 117 |
|
| 118 |
-
In the access Policy of the QueueSub.fifo you should submit something
|
| 119 |
like this
|
| 120 |
|
| 121 |
{
|
|
|
|
| 33 |
accessKey and secretKey to access the [Amazon’s
|
| 34 |
SNS](https://aws.amazon.com/sns).
|
| 35 |
|
|
|
|
|
|
|
| 36 |
## Static credentials, Default Credential Provider and Profile Credentials Provider
|
| 37 |
|
| 38 |
You have the possibility of avoiding the usage of explicit static
|
|
|
|
| 67 |
For more information about this you can look at [AWS credentials
|
| 68 |
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
|
| 69 |
|
| 70 |
+
# Usage
|
| 71 |
+
|
| 72 |
## Advanced AmazonSNS configuration
|
| 73 |
|
| 74 |
If you need more control over the `SnsClient` instance configuration you
|
|
|
|
| 97 |
from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5")
|
| 98 |
.to(...);
|
| 99 |
|
| 100 |
+
## Topic Auto-creation
|
| 101 |
|
| 102 |
With the option `autoCreateTopic` users are able to avoid the
|
| 103 |
+
auto-creation of an SNS Topic in case it doesn’t exist. The default for
|
| 104 |
this option is `false`. If set to false, any operation on a non-existent
|
| 105 |
topic in AWS won’t be successful and an error will be returned.
|
| 106 |
|
| 107 |
+
## SNS FIFO
|
| 108 |
|
| 109 |
SNS FIFO is supported. While creating the SQS queue that you will subscribe
|
| 110 |
to the SNS topic, there is an important point to remember: you’ll need to
|
|
|
|
| 112 |
|
| 113 |
This is clear with an example.
|
| 114 |
|
| 115 |
+
Suppose you created an SNS FIFO Topic called `Order.fifo` and an SQS
|
| 116 |
+
Queue called `QueueSub.fifo`.
|
| 117 |
|
| 118 |
+
In the access Policy of the `QueueSub.fifo` you should submit something
|
| 119 |
like this
|
| 120 |
|
| 121 |
{
|
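For the SNS FIFO subscription described in this file, a typical access policy on the queue grants the SNS service `sqs:SendMessage`, restricted to the topic's ARN so only that topic can deliver messages. A minimal sketch, with a placeholder region and account id for the `Order.fifo` topic and `QueueSub.fifo` queue:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:eu-west-1:123456789012:QueueSub.fifo",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:sns:eu-west-1:123456789012:Order.fifo"
        }
      }
    }
  ]
}
```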
camel-aws2-sqs.md
CHANGED
|
@@ -29,7 +29,9 @@ You have to provide the amazonSQSClient in the Registry or your
|
|
| 29 |
accessKey and secretKey to access the [Amazon’s
|
| 30 |
SQS](https://aws.amazon.com/sqs).
|
| 31 |
|
| 32 |
-
#
|
|
|
|
|
|
|
| 33 |
|
| 34 |
This component implements the Batch Consumer.
|
| 35 |
|
|
@@ -37,8 +39,6 @@ This allows you, for instance, to know how many messages exist in this
|
|
| 37 |
batch and for instance, let the Aggregator aggregate this number of
|
| 38 |
messages.
|
| 39 |
|
| 40 |
-
# Usage
|
| 41 |
-
|
| 42 |
## Static credentials, Default Credential Provider and Profile Credentials Provider
|
| 43 |
|
| 44 |
You have the possibility of avoiding the usage of explicit static
|
|
@@ -73,6 +73,8 @@ same time.
|
|
| 73 |
For more information about this you can look at [AWS credentials
|
| 74 |
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)
|
| 75 |
|
|
|
|
|
|
|
| 76 |
## Advanced AmazonSQS configuration
|
| 77 |
|
| 78 |
If your Camel Application is running behind a firewall or if you need to
|
|
@@ -121,7 +123,7 @@ related option are: `serverSideEncryptionEnabled`, `keyMasterKeyId` and
|
|
| 121 |
explicitly set the option to true and set the related parameters as
|
| 122 |
queue attributes.
|
| 123 |
|
| 124 |
-
# JMS-style Selectors
|
| 125 |
|
| 126 |
SQS does not allow selectors, but you can effectively achieve this by
|
| 127 |
using the Camel Filter EIP and setting an appropriate
|
|
@@ -146,7 +148,7 @@ consumers.
|
|
| 146 |
Note we must set the property `Sqs2Constants.SQS_DELETE_FILTERED` to
|
| 147 |
`true` to instruct Camel to send the DeleteMessage, if being filtered.
|
| 148 |
|
| 149 |
-
# Available Producer Operations
|
| 150 |
|
| 151 |
- single message (default)
|
| 152 |
|
|
@@ -156,13 +158,13 @@ Note we must set the property `Sqs2Constants.SQS_DELETE_FILTERED` to
|
|
| 156 |
|
| 157 |
- listQueues
|
| 158 |
|
| 159 |
-
# Send Message
|
| 160 |
|
| 161 |
from("direct:start")
|
| 162 |
.setBody(constant("Camel rocks!"))
|
| 163 |
.to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1");
|
| 164 |
|
| 165 |
-
# Send Batch Message
|
| 166 |
|
| 167 |
You can set a `SendMessageBatchRequest` or an `Iterable`
|
| 168 |
|
|
@@ -186,7 +188,7 @@ As result, you’ll get an exchange containing a
|
|
| 186 |
messages were successful and what not. The id set on each message of the
|
| 187 |
batch will be a Random UUID.
|
| 188 |
|
| 189 |
-
# Delete single Message
|
| 190 |
|
| 191 |
Use deleteMessage operation to delete a single message. You’ll need to
|
| 192 |
set a receipt handle header for the message you want to delete.
|
|
@@ -199,7 +201,7 @@ set a receipt handle header for the message you want to delete.
|
|
| 199 |
As a result, you’ll get an exchange containing a `DeleteMessageResponse`
|
| 200 |
instance that you can use to check whether the message was deleted or not.
|
| 201 |
|
| 202 |
-
# List Queues
|
| 203 |
|
| 204 |
Use listQueues operation to list queues.
|
| 205 |
|
|
@@ -210,7 +212,7 @@ Use listQueues operation to list queues.
|
|
| 210 |
As a result, you’ll get an exchange containing a `ListQueuesResponse`
|
| 211 |
instance that you can examine to check the actual queues.
|
| 212 |
|
| 213 |
-
# Purge Queue
|
| 214 |
|
| 215 |
Use purgeQueue operation to purge queue.
|
| 216 |
|
|
@@ -221,7 +223,7 @@ Use purgeQueue operation to purge queue.
|
|
| 221 |
As a result, you’ll get an exchange containing a `PurgeQueueResponse`
|
| 222 |
instance.
|
| 223 |
|
| 224 |
-
# Queue Auto-creation
|
| 225 |
|
| 226 |
With the option `autoCreateQueue` users are able to avoid the
|
| 227 |
autocreation of an SQS Queue in case it doesn’t exist. The default for
|
|
@@ -229,7 +231,7 @@ this option is `false`. If set to *false*, any operation on a
|
|
| 229 |
non-existent queue in AWS won’t be successful and an error will be
|
| 230 |
returned.
|
| 231 |
|
| 232 |
-
# Send Batch Message and Message Deduplication Strategy
|
| 233 |
|
| 234 |
In case you’re using a SendBatchMessage Operation, you can set two
|
| 235 |
different kinds of Message Deduplication Strategy: - useExchangeId -
|
|
@@ -275,6 +277,7 @@ Camel.
|
|
| 275 |
|attributeNames|A list of attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|
| 276 |
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|
| 277 |
|concurrentConsumers|Allows you to use multiple threads to poll the sqs queue to increase throughput|1|integer|
|
|
|
|
| 278 |
|defaultVisibilityTimeout|The default visibility timeout (in seconds)||integer|
|
| 279 |
|deleteAfterRead|Delete message from SQS after it has been read|true|boolean|
|
| 280 |
|deleteIfFiltered|Whether to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS\_DELETE\_FILTERED (CamelAwsSqsDeleteFiltered) set to true.|true|boolean|
|
|
@@ -283,6 +286,7 @@ Camel.
|
|
| 283 |
|kmsMasterKeyId|The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.||string|
|
| 284 |
|messageAttributeNames|A list of message attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|
| 285 |
|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the queue|false|boolean|
|
|
|
|
| 286 |
|visibilityTimeout|The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it's different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently.||integer|
|
| 287 |
|waitTimeSeconds|Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response.||integer|
|
| 288 |
|batchSeparator|Set the separator when passing a String to send batch message operation|,|string|
|
|
@@ -331,6 +335,7 @@ Camel.
|
|
| 331 |
|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string|
|
| 332 |
|attributeNames|A list of attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|
| 333 |
|concurrentConsumers|Allows you to use multiple threads to poll the sqs queue to increase throughput|1|integer|
|
|
|
|
| 334 |
|defaultVisibilityTimeout|The default visibility timeout (in seconds)||integer|
|
| 335 |
|deleteAfterRead|Delete message from SQS after it has been read|true|boolean|
|
| 336 |
|deleteIfFiltered|Whether to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS\_DELETE\_FILTERED (CamelAwsSqsDeleteFiltered) set to true.|true|boolean|
|
|
@@ -341,6 +346,7 @@ Camel.
|
|
| 341 |
|messageAttributeNames|A list of message attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|
| 342 |
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|
| 343 |
|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the queue|false|boolean|
|
|
|
|
| 344 |
|visibilityTimeout|The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it's different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently.||integer|
|
| 345 |
|waitTimeSeconds|Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response.||integer|
|
| 346 |
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|
|
|
|
accessKey and secretKey to access the [Amazon’s
SQS](https://aws.amazon.com/sqs).

+# Usage
+
+## Batch Consumer

This component implements the Batch Consumer.

batch and for instance, let the Aggregator aggregate this number of
messages.
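For instance, a consumer endpoint that caps each poll could be configured as below; the queue name, credentials, and option values are illustrative placeholders, not taken from this page:

```java
// Poll at most 10 messages per poll; downstream EIPs such as the
// Aggregator can then treat each poll result as one batch.
from("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&maxMessagesPerPoll=10")
    .to("mock:result");
```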
## Static credentials, Default Credential Provider and Profile Credentials Provider

You have the possibility of avoiding the usage of explicit static

For more information about this you can look at [AWS credentials
documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html)

+# Examples
+
## Advanced AmazonSQS configuration

If your Camel Application is running behind a firewall or if you need to

explicitly set the option to true and set the related parameters as
queue attributes.

+## JMS-style Selectors

SQS does not allow selectors, but you can effectively achieve this by
using the Camel Filter EIP and setting an appropriate

Note we must set the property `Sqs2Constants.SQS_DELETE_FILTERED` to
`true` to instruct Camel to send the DeleteMessage, if being filtered.
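A sketch of such a filter; the `eventType` header, queue name, and credentials are illustrative assumptions:

```java
// Only messages whose 'eventType' header equals 'order' pass the filter;
// the SQS_DELETE_FILTERED exchange property asks Camel to send the
// DeleteMessage for the filtered-out messages as well.
from("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1")
    .setProperty(Sqs2Constants.SQS_DELETE_FILTERED, constant(true))
    .filter(simple("${header.eventType} == 'order'"))
        .to("mock:order");
```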
+## Available Producer Operations

- single message (default)

- listQueues

+## Send Message

    from("direct:start")
      .setBody(constant("Camel rocks!"))
      .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");

+## Send Batch Message

You can set a `SendMessageBatchRequest` or an `Iterable`

messages were successful and which were not. The id set on each message of the
batch will be a Random UUID.
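A batch send with an `Iterable` body might look like this; the queue name and credentials are placeholders:

```java
// Each element of the Iterable becomes one message of the batch.
from("direct:batch")
    .setBody(constant(List.of("message 1", "message 2", "message 3")))
    .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&operation=sendBatchMessage");
```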
+## Delete single Message

Use deleteMessage operation to delete a single message. You’ll need to
set a receipt handle header for the message you want to delete.

As a result, you’ll get an exchange containing a `DeleteMessageResponse`
instance, that you can use to check if the message was deleted or not.
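A minimal sketch, assuming the receipt handle header constant from `Sqs2Constants`; the handle value and queue details are placeholders:

```java
// The receipt handle identifies which received message to delete.
from("direct:deleteMessage")
    .setHeader(Sqs2Constants.RECEIPT_HANDLE, constant("your-receipt-handle"))
    .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&operation=deleteMessage");
```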
+## List Queues

Use listQueues operation to list queues.

As a result, you’ll get an exchange containing a `ListQueuesResponse`
instance, that you can examine to check the actual queues.
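For example (queue name and credentials are placeholders; `ListQueuesResponse` is the AWS SDK v2 response type):

```java
from("direct:listQueues")
    .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&operation=listQueues")
    // the ListQueuesResponse body exposes the queue URLs
    .process(e -> e.getMessage().getBody(ListQueuesResponse.class)
        .queueUrls().forEach(System.out::println));
```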
+## Purge Queue

Use purgeQueue operation to purge queue.

As a result, you’ll get an exchange containing a `PurgeQueueResponse`
instance.
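For example (queue name and credentials are placeholders):

```java
// Purging removes all messages currently in the queue.
from("direct:purgeQueue")
    .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&operation=purgeQueue");
```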
+## Queue Auto-creation

With the option `autoCreateQueue` users are able to avoid the
autocreation of an SQS Queue in case it doesn’t exist. The default for

non-existent queue in AWS won’t be successful and an error will be
returned.
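An endpoint opting into auto-creation could look like this (queue name and credentials are placeholders):

```java
// With autoCreateQueue=true the queue is created on demand; with the
// default (false), operations on a missing queue fail with an error.
from("direct:send")
    .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1&autoCreateQueue=true");
```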
+## Send Batch Message and Message Deduplication Strategy

In case you’re using a SendBatchMessage Operation, you can set two
different kinds of Message Deduplication Strategy: - useExchangeId -

|attributeNames|A list of attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|concurrentConsumers|Allows you to use multiple threads to poll the SQS queue to increase throughput|1|integer|
+|concurrentRequestLimit|The maximum number of concurrent receive requests sent to AWS in a single consumer poll.|50|integer|
|defaultVisibilityTimeout|The default visibility timeout (in seconds)||integer|
|deleteAfterRead|Delete message from SQS after it has been read|true|boolean|
|deleteIfFiltered|Whether to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS\_DELETE\_FILTERED (CamelAwsSqsDeleteFiltered) set to true.|true|boolean|

|kmsMasterKeyId|The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.||string|
|messageAttributeNames|A list of message attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the queue|false|boolean|
+|sortAttributeName|The name of the message attribute used for sorting the messages. When specified, the messages polled by the consumer will be sorted by this attribute. This configuration may be of importance when you configure the maxMessagesPerPoll parameter to exceed 10. In such cases, the messages will be fetched concurrently so the ordering is not guaranteed.||string|
|visibilityTimeout|The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it's different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently.||integer|
|waitTimeSeconds|Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response.||integer|
|batchSeparator|Set the separator when passing a String to send batch message operation|,|string|

|uriEndpointOverride|Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option||string|
|attributeNames|A list of attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|concurrentConsumers|Allows you to use multiple threads to poll the SQS queue to increase throughput|1|integer|
+|concurrentRequestLimit|The maximum number of concurrent receive requests sent to AWS in a single consumer poll.|50|integer|
|defaultVisibilityTimeout|The default visibility timeout (in seconds)||integer|
|deleteAfterRead|Delete message from SQS after it has been read|true|boolean|
|deleteIfFiltered|Whether to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS\_DELETE\_FILTERED (CamelAwsSqsDeleteFiltered) set to true.|true|boolean|

|messageAttributeNames|A list of message attribute names to receive when consuming. Multiple names can be separated by comma.||string|
|sendEmptyMessageWhenIdle|If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.|false|boolean|
|serverSideEncryptionEnabled|Define if Server Side Encryption is enabled or not on the queue|false|boolean|
+|sortAttributeName|The name of the message attribute used for sorting the messages. When specified, the messages polled by the consumer will be sorted by this attribute. This configuration may be of importance when you configure the maxMessagesPerPoll parameter to exceed 10. In such cases, the messages will be fetched concurrently so the ordering is not guaranteed.||string|
|visibilityTimeout|The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it's different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently.||integer|
|waitTimeSeconds|Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response.||integer|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
camel-aws2-step-functions.md
CHANGED
@@ -110,7 +110,9 @@ the producer side:

- getExecutionHistory

-#

- createStateMachine: this operation will create a state machine

@@ -119,7 +121,7 @@ the producer side:

    from("direct:createStateMachine")
      .to("aws2-step-functions://test?awsSfnClient=#awsSfnClient&operation=createMachine")

-# Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS

- getExecutionHistory

+# Examples
+
+## Producer Examples

- createStateMachine: this operation will create a state machine

    from("direct:createStateMachine")
      .to("aws2-step-functions://test?awsSfnClient=#awsSfnClient&operation=createMachine")

+## Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS
camel-aws2-sts.md
CHANGED
@@ -77,7 +77,9 @@ side:

- getFederationToken

-#

- assumeRole: this operation will make an AWS user assume a different
role temporarily

@@ -106,7 +108,7 @@ side:

    .setHeader(STS2Constants.FEDERATED_NAME, constant("federation-account"))
    .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=getSessionToken")

-# Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS

- getFederationToken

+# Examples
+
+## Producer Examples

- assumeRole: this operation will make an AWS user assume a different
role temporarily

    .setHeader(STS2Constants.FEDERATED_NAME, constant("federation-account"))
    .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=getSessionToken")

+## Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS
camel-aws2-timestream.md
CHANGED
@@ -149,7 +149,9 @@ producer side:

- cancelQuery

-#

- Write Operation

@@ -172,7 +174,7 @@ producer side:

    .setHeader(Timestream2Constants.QUERY_STRING, constant("SELECT * FROM testDb.testTable ORDER BY time DESC LIMIT 10"))
    .to("aws2-timestream://query:test?awsTimestreamQueryClient=#awsTimestreamQueryClient&operation=query")

-# Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS

- cancelQuery

+# Examples
+
+## Producer Examples

- Write Operation

    .setHeader(Timestream2Constants.QUERY_STRING, constant("SELECT * FROM testDb.testTable ORDER BY time DESC LIMIT 10"))
    .to("aws2-timestream://query:test?awsTimestreamQueryClient=#awsTimestreamQueryClient&operation=query")

+## Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS
camel-aws2-translate.md
CHANGED
@@ -71,7 +71,9 @@ producer side:

- translateText

-#

    from("direct:start")
      .setHeader(TranslateConstants.SOURCE_LANGUAGE, TranslateLanguageEnum.ITALIAN)

@@ -81,7 +83,7 @@ producer side:

As a result, you’ll get an exchange containing the translated text.

-# Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS

- translateText

+# Examples
+
+## Translate Text example

    from("direct:start")
      .setHeader(TranslateConstants.SOURCE_LANGUAGE, TranslateLanguageEnum.ITALIAN)

As a result, you’ll get an exchange containing the translated text.

+## Using a POJO as body

Sometimes building an AWS Request can be complex because of multiple
options. We introduce the possibility to use a POJO as the body. In AWS
camel-azure-cosmosdb.md
CHANGED
@@ -37,7 +37,9 @@ operation being requested in container level, e.g: readItem, then

You can append query options to the URI in the following format,
`?options=value&option2=value&`…

-#

To use this component, you have two options to provide the required
Azure authentication information:

@@ -50,21 +52,13 @@ Azure authentication information:

[CosmosAsyncClient](https://docs.microsoft.com/en-us/java/api/com.azure.cosmos.cosmosasyncclient?view=azure-java-stable)
instance which can be provided into `cosmosAsyncClient`.

-# Async Consumer and Producer

This component implements the async Consumer and producer.

This allows camel route to consume and produce events asynchronously
without blocking any threads.

-# Usage
-
-For example, to consume records from a specific container in a specific
-database to a file, use the following snippet:
-
-    from("azure-cosmosdb://camelDb/myContainer?accountKey=MyaccountKey&databaseEndpoint=https//myazure.com:443&leaseDatabaseName=myLeaseDB&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true").
-    to("file://directory");
-

## Message headers evaluated by the component producer

<table>
@@ -75,7 +69,7 @@ database to a file, use the following snippet:
|
|
| 75 |
<col style="width: 69%" />
|
| 76 |
</colgroup>
|
| 77 |
<thead>
|
| 78 |
-
<tr>
|
| 79 |
<th style="text-align: left;">Header</th>
|
| 80 |
<th style="text-align: left;">Variable Name</th>
|
| 81 |
<th style="text-align: left;">Type</th>
|
|
@@ -83,7 +77,7 @@ database to a file, use the following snippet:
|
|
| 83 |
</tr>
|
| 84 |
</thead>
|
| 85 |
<tbody>
|
| 86 |
-
<tr>
|
| 87 |
<td
|
| 88 |
style="text-align: left;"><p><code>CamelAzureCosmosDbDatabaseName</code></p></td>
|
| 89 |
<td
|
|
@@ -94,7 +88,7 @@ the name of the Cosmos database that component should connect to. In
|
|
| 94 |
case you are producing data and have createDatabaseIfNotExists=true, the
|
| 95 |
component will automatically auto create a Cosmos database.</p></td>
|
| 96 |
</tr>
|
| 97 |
-
<tr>
|
| 98 |
<td
|
| 99 |
style="text-align: left;"><p><code>CamelAzureCosmosDbContainerName</code></p></td>
|
| 100 |
<td
|
|
@@ -106,7 +100,7 @@ case you are producing data and have createContainerIfNotExists=true,
|
|
| 106 |
the component will automatically auto create a Cosmos
|
| 107 |
container.</p></td>
|
| 108 |
</tr>
|
| 109 |
-
<tr>
|
| 110 |
<td
|
| 111 |
style="text-align: left;"><p><code>CamelAzureCosmosDbOperation</code></p></td>
|
| 112 |
<td
|
|
@@ -116,7 +110,7 @@ style="text-align: left;"><p><code>CosmosDbOperationsDefinition</code></p></td>
|
|
| 116 |
<td style="text-align: left;"><p>Set the producer operation which can be
|
| 117 |
used to execute a specific operation on the producer.</p></td>
|
| 118 |
</tr>
|
| 119 |
-
<tr>
|
| 120 |
<td
|
| 121 |
style="text-align: left;"><p><code>CamelAzureCosmosDbQuery</code></p></td>
|
| 122 |
<td
|
|
@@ -125,7 +119,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.QUERY</code></p></td>
|
|
| 125 |
<td style="text-align: left;"><p>Set the SQL query to execute on a given
|
| 126 |
producer query operations.</p></td>
|
| 127 |
</tr>
|
| 128 |
-
<tr>
|
| 129 |
<td
|
| 130 |
style="text-align: left;"><p><code>CamelAzureCosmosDbQueryRequestOptions</code></p></td>
|
| 131 |
<td
|
|
@@ -136,7 +130,7 @@ style="text-align: left;"><p><code>CosmosQueryRequestOptions</code></p></td>
|
|
| 136 |
can be used with queryItems, queryContainers, queryDatabases,
|
| 137 |
listDatabases, listItems, listContainers operations.</p></td>
|
| 138 |
</tr>
|
| 139 |
-
<tr>
|
| 140 |
<td
|
| 141 |
style="text-align: left;"><p><code>CamelAzureCosmosDbCreateDatabaseIfNotExist</code></p></td>
|
| 142 |
<td
|
|
@@ -146,7 +140,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.CREATE_DATABASE_IF_NOT_EXIS
|
|
| 146 |
Cosmos database automatically in case it doesn’t exist in the Cosmos
|
| 147 |
account.</p></td>
|
| 148 |
</tr>
|
| 149 |
-
<tr>
|
| 150 |
<td
|
| 151 |
style="text-align: left;"><p><code>CamelAzureCosmosDbCreateContainerIfNotExist</code></p></td>
|
| 152 |
<td
|
|
@@ -156,7 +150,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.CREATE_CONTAINER_IF_NOT_EXI
|
|
| 156 |
Cosmos container automatically in case it doesn’t exist in the Cosmos
|
| 157 |
account.</p></td>
|
| 158 |
</tr>
|
| 159 |
-
<tr>
|
| 160 |
<td
|
| 161 |
style="text-align: left;"><p><code>CamelAzureCosmosDbThroughputProperties</code></p></td>
|
| 162 |
<td
|
|
@@ -166,7 +160,7 @@ style="text-align: left;"><p><code>ThroughputProperties</code></p></td>
|
|
| 166 |
<td style="text-align: left;"><p>Sets throughput of the resources in the
|
| 167 |
Azure Cosmos DB service.</p></td>
|
| 168 |
</tr>
|
| 169 |
-
<tr>
|
| 170 |
<td
|
| 171 |
style="text-align: left;"><p><code>CamelAzureCosmosDbDatabaseRequestOptions</code></p></td>
|
| 172 |
<td
|
|
@@ -176,7 +170,7 @@ style="text-align: left;"><p><code>CosmosDatabaseRequestOptions</code></p></td>
|
|
| 176 |
<td style="text-align: left;"><p>Sets additional options to execute on
|
| 177 |
database operations.</p></td>
|
| 178 |
</tr>
|
| 179 |
-
<tr>
|
| 180 |
<td
|
| 181 |
style="text-align: left;"><p><code>CamelAzureCosmosDbContainerPartitionKeyPath</code></p></td>
|
| 182 |
<td
|
|
@@ -185,7 +179,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.CONTAINER_PARTITION_KEY_PAT
|
|
| 185 |
<td style="text-align: left;"><p>Set the container partition key
|
| 186 |
path.</p></td>
|
| 187 |
</tr>
|
| 188 |
-
<tr>
|
| 189 |
<td
|
| 190 |
style="text-align: left;"><p><code>CamelAzureCosmosDbContainerRequestOptions</code></p></td>
|
| 191 |
<td
|
|
@@ -195,7 +189,7 @@ style="text-align: left;"><p><code>CosmosContainerRequestOptions</code></p></td>
|
|
| 195 |
<td style="text-align: left;"><p>Set additional options to execute on
|
| 196 |
container operations.</p></td>
|
| 197 |
</tr>
|
| 198 |
-
<tr>
|
| 199 |
<td
|
| 200 |
style="text-align: left;"><p><code>CamelAzureCosmosDbItemPartitionKey</code></p></td>
|
| 201 |
<td
|
|
@@ -205,7 +199,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.ITEM_PARTITION_KEY</code></
|
|
| 205 |
partition key value in the Azure Cosmos DB database service. A partition
|
| 206 |
key identifies the partition where the item is stored in.</p></td>
|
| 207 |
</tr>
|
| 208 |
-
<tr>
|
| 209 |
<td
|
| 210 |
style="text-align: left;"><p><code>CamelAzureCosmosDbItemRequestOptions</code></p></td>
|
| 211 |
<td
|
|
@@ -215,7 +209,7 @@ style="text-align: left;"><p><code>CosmosItemRequestOptions</code></p></td>
|
|
| 215 |
<td style="text-align: left;"><p>Set additional options to execute on
|
| 216 |
item operations.</p></td>
|
| 217 |
</tr>
|
| 218 |
-
<tr>
|
| 219 |
<td
|
| 220 |
style="text-align: left;"><p><code>CamelAzureCosmosDbItemId</code></p></td>
|
| 221 |
<td
|
|
@@ -237,7 +231,7 @@ operation on item like <em>delete</em>, <em>replace</em>.</p></td>
|
|
| 237 |
<col style="width: 69%" />
|
| 238 |
</colgroup>
|
| 239 |
<thead>
|
| 240 |
-
<tr>
|
| 241 |
<th style="text-align: left;">Header</th>
|
| 242 |
<th style="text-align: left;">Variable Name</th>
|
| 243 |
<th style="text-align: left;">Type</th>
|
|
@@ -245,7 +239,7 @@ operation on item like <em>delete</em>, <em>replace</em>.</p></td>
|
|
| 245 |
</tr>
|
| 246 |
</thead>
|
| 247 |
<tbody>
|
| 248 |
-
<tr>
|
| 249 |
<td
|
| 250 |
style="text-align: left;"><p><code>CamelAzureCosmosDbRecourseId</code></p></td>
|
| 251 |
<td
|
|
@@ -254,7 +248,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.RESOURCE_ID</code></p></td>
|
|
| 254 |
<td style="text-align: left;"><p>The resource ID of the requested
|
| 255 |
resource.</p></td>
|
| 256 |
</tr>
|
| 257 |
-
<tr>
|
| 258 |
<td
|
| 259 |
style="text-align: left;"><p><code>CamelAzureCosmosDbEtag</code></p></td>
|
| 260 |
<td
|
|
@@ -263,7 +257,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.E_TAG</code></p></td>
|
|
| 263 |
<td style="text-align: left;"><p>The Etag ID of the requested
|
| 264 |
resource.</p></td>
|
| 265 |
</tr>
|
| 266 |
-
<tr>
|
| 267 |
<td
|
| 268 |
style="text-align: left;"><p><code>CamelAzureCosmosDbTimestamp</code></p></td>
|
| 269 |
<td
|
|
@@ -272,7 +266,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.TIMESTAMP</code></p></td>
|
|
| 272 |
<td style="text-align: left;"><p>The timestamp of the requested
|
| 273 |
resource.</p></td>
|
| 274 |
</tr>
|
| 275 |
-
<tr>
|
| 276 |
<td
|
| 277 |
style="text-align: left;"><p><code>CamelAzureCosmosDbResponseHeaders</code></p></td>
|
| 278 |
<td
|
|
@@ -281,7 +275,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.RESPONSE_HEADERS</code></p>
|
|
| 281 |
<td style="text-align: left;"><p>The response headers of the requested
|
| 282 |
resource.</p></td>
|
| 283 |
</tr>
|
| 284 |
-
<tr>
|
| 285 |
<td
|
| 286 |
style="text-align: left;"><p><code>CamelAzureCosmosDbStatusCode</code></p></td>
|
| 287 |
<td
|
|
@@ -290,7 +284,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.STATUS_CODE</code></p></td>
|
|
| 290 |
<td style="text-align: left;"><p>The status code of the requested
|
| 291 |
resource.</p></td>
|
| 292 |
</tr>
|
| 293 |
-
<tr>
|
| 294 |
<td
|
| 295 |
style="text-align: left;"><p><code>CamelAzureCosmosDbDefaultTimeToLiveInSeconds</code></p></td>
|
| 296 |
<td
|
|
@@ -299,7 +293,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.DEFAULT_TIME_TO_LIVE_SECOND
|
|
| 299 |
<td style="text-align: left;"><p>The TTL of the requested
|
| 300 |
resource.</p></td>
|
| 301 |
</tr>
|
| 302 |
-
<tr>
|
| 303 |
<td
|
| 304 |
style="text-align: left;"><p><code>CamelAzureCosmosDbManualThroughput</code></p></td>
|
| 305 |
<td
|
|
@@ -308,7 +302,7 @@ style="text-align: left;"><p><code>CosmosDbConstants.MANUAL_THROUGHPUT</code></p
|
|
| 308 |
<td style="text-align: left;"><p>The manual throughput of the requested
|
| 309 |
resource.</p></td>
|
| 310 |
</tr>
|
| 311 |
-
<tr>
|
| 312 |
<td
|
| 313 |
style="text-align: left;"><p><code>CamelAzureCosmosDbAutoscaleMaxThroughput</code></p></td>
|
| 314 |
<td
|
|
@@ -336,24 +330,24 @@ For these operations, `databaseName` is **required** except for
|
|
| 336 |
<col style="width: 89%" />
|
| 337 |
</colgroup>
|
| 338 |
<thead>
|
| 339 |
-
<tr>
|
| 340 |
<th style="text-align: left;">Operation</th>
|
| 341 |
<th style="text-align: left;">Description</th>
|
| 342 |
</tr>
|
| 343 |
</thead>
|
| 344 |
<tbody>
|
| 345 |
-
<tr>
|
| 346 |
<td style="text-align: left;"><p><code>listDatabases</code></p></td>
|
| 347 |
<td style="text-align: left;"><p>Gets a list of all databases as
|
| 348 |
<code>List<CosmosDatabaseProperties></code> set in the exchange
|
| 349 |
message body.</p></td>
|
| 350 |
</tr>
|
| 351 |
-
<tr>
|
| 352 |
<td style="text-align: left;"><p><code>createDatabase</code></p></td>
|
| 353 |
<td style="text-align: left;"><p>Create a database in the specified
|
| 354 |
Azure CosmosDB account.</p></td>
|
| 355 |
</tr>
|
| 356 |
-
<tr>
|
| 357 |
<td style="text-align: left;"><p><code>queryDatabases</code></p></td>
|
| 358 |
<td style="text-align: left;"><p><strong><code>query</code> is
|
| 359 |
required</strong> Execute an SQL query against the service level in
|
|
@@ -376,35 +370,35 @@ here and `containerName` only for `createContainer` and
|
|
| 376 |
<col style="width: 89%" />
|
| 377 |
</colgroup>
|
| 378 |
<thead>
|
| 379 |
-
<tr>
|
| 380 |
<th style="text-align: left;">Operation</th>
|
| 381 |
<th style="text-align: left;">Description</th>
|
| 382 |
</tr>
|
| 383 |
</thead>
|
| 384 |
<tbody>
|
| 385 |
-
<tr>
|
| 386 |
<td style="text-align: left;"><p><code>deleteDatabase</code></p></td>
|
| 387 |
<td style="text-align: left;"><p>Delete a database from the Azure
|
| 388 |
CosmosDB account.</p></td>
|
| 389 |
</tr>
|
| 390 |
-
<tr>
|
| 391 |
<td style="text-align: left;"><p><code>createContainer</code></p></td>
|
| 392 |
<td style="text-align: left;"><p>Create a container in the specified
|
| 393 |
Azure CosmosDB database.</p></td>
|
| 394 |
</tr>
|
| 395 |
-
<tr>
|
| 396 |
<td
|
| 397 |
style="text-align: left;"><p><code>replaceDatabaseThroughput</code></p></td>
|
| 398 |
<td style="text-align: left;"><p>Replace the throughput for the
|
| 399 |
specified Azure CosmosDB database.</p></td>
|
| 400 |
</tr>
|
| 401 |
-
<tr>
|
| 402 |
<td style="text-align: left;"><p><code>listContainers</code></p></td>
|
| 403 |
<td style="text-align: left;"><p>Gets a list of all containers in the
|
| 404 |
specified database as <code>List<CosmosContainerProperties></code>
|
| 405 |
set in the exchange message body.</p></td>
|
| 406 |
</tr>
|
| 407 |
-
<tr>
|
| 408 |
<td style="text-align: left;"><p><code>queryContainers</code></p></td>
|
| 409 |
<td style="text-align: left;"><p><strong><code>query</code> is
|
| 410 |
required</strong> Executes an SQL query against the database level in
|
|
@@ -427,57 +421,57 @@ for all operations here.
|
|
| 427 |
<col style="width: 89%" />
|
| 428 |
</colgroup>
|
| 429 |
<thead>
|
| 430 |
-
<tr>
|
| 431 |
<th style="text-align: left;">Operation</th>
|
| 432 |
<th style="text-align: left;">Description</th>
|
| 433 |
</tr>
|
| 434 |
</thead>
|
| 435 |
<tbody>
|
| 436 |
-
<tr>
|
| 437 |
<td style="text-align: left;"><p><code>deleteContainer</code></p></td>
|
| 438 |
<td style="text-align: left;"><p>Delete a container from the specified
|
| 439 |
Azure CosmosDB database.</p></td>
|
| 440 |
</tr>
|
| 441 |
-
<tr>
|
| 442 |
<td
|
| 443 |
style="text-align: left;"><p><code>replaceContainerThroughput</code></p></td>
|
| 444 |
<td style="text-align: left;"><p>Replace the throughput for the
|
| 445 |
specified Azure CosmosDB container.</p></td>
|
| 446 |
</tr>
|
| 447 |
-
<tr>
|
| 448 |
<td style="text-align: left;"><p><code>createItem</code></p></td>
|
| 449 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 450 |
is required</strong> Creates an item in the specified container; it
|
| 451 |
accepts a POJO or key-value pairs as <code>Map<String, ?></code>.</p></td>
|
| 452 |
</tr>
|
| 453 |
-
<tr>
|
| 454 |
<td style="text-align: left;"><p><code>upsertItem</code></p></td>
|
| 455 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 456 |
is required</strong> Creates an item in the specified container if it
|
| 457 |
doesn’t exist, or overwrites it if it does; it accepts a POJO or
|
| 458 |
key-value pairs as <code>Map<String, ?></code>.</p></td>
|
| 459 |
</tr>
|
| 460 |
-
<tr>
|
| 461 |
<td style="text-align: left;"><p><code>replaceItem</code></p></td>
|
| 462 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 463 |
and <code>itemId</code> are required</strong> Overwrites an item in the
|
| 464 |
specified container; it accepts a POJO or key-value pairs as
|
| 465 |
<code>Map<String, ?></code>.</p></td>
|
| 466 |
</tr>
|
| 467 |
-
<tr>
|
| 468 |
<td style="text-align: left;"><p><code>deleteItem</code></p></td>
|
| 469 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 470 |
and <code>itemId</code> are required</strong> Deletes an item in the
|
| 471 |
specified container.</p></td>
|
| 472 |
</tr>
|
| 473 |
-
<tr>
|
| 474 |
<td style="text-align: left;"><p><code>readItem</code></p></td>
|
| 475 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 476 |
and <code>itemId</code> are required</strong> Gets an item in the
|
| 477 |
specified container as <code>Map<String,?></code> set in the
|
| 478 |
exchange message body.</p></td>
|
| 479 |
</tr>
|
| 480 |
-
<tr>
|
| 481 |
<td style="text-align: left;"><p><code>readItem</code></p></td>
|
| 482 |
<td
|
| 483 |
style="text-align: left;"><p><strong><code>itemPartitionKey</code></strong>
|
|
@@ -486,7 +480,7 @@ Gets a list of items in the specified container per the
|
|
| 486 |
<code>List<Map<String,?>></code> set in the exchange message
|
| 487 |
body.</p></td>
|
| 488 |
</tr>
|
| 489 |
-
<tr>
|
| 490 |
<td style="text-align: left;"><p><code>queryItems</code></p></td>
|
| 491 |
<td style="text-align: left;"><p><strong><code>query</code> is
|
| 492 |
required</strong> Executes an SQL query against the container level in
|
|
@@ -500,7 +494,17 @@ message body.</p></td>
|
|
| 500 |
Refer to the example section in this page to learn how to use these
|
| 501 |
operations into your camel application.
|
| 502 |
|
| 503 |
-
#
|
| 504 |
|
| 505 |
- `listDatabases`:
|
| 506 |
|
|
@@ -692,7 +696,7 @@ this feature:
|
|
| 692 |
The consumer will set `List<Map<String,?>>` in the exchange message body,
|
| 693 |
which reflects the list of items in a single feed.
|
| 694 |
|
| 695 |
-
### Example
|
| 696 |
|
| 697 |
For example, to listen to the events in `myContainer` container in
|
| 698 |
`myDb`:
|
|
@@ -700,7 +704,7 @@ For example, to listen to the events in `myContainer` container in
|
|
| 700 |
from("azure-cosmosdb://myDb/myContainer?leaseDatabaseName=myLeaseDb&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true")
|
| 701 |
.to("mock:result");
|
| 702 |
|
| 703 |
-
#
|
| 704 |
|
| 705 |
When developing on this component, you will need to obtain your Azure
|
| 706 |
accessKey in order to run the integration tests. In addition to the
|
|
@@ -719,15 +723,15 @@ is the access key being generated from Azure CosmosDB portal.
|
|
| 719 |
|
| 720 |
|Name|Description|Default|Type|
|
| 721 |
|---|---|---|---|
|
| 722 |
-
|clientTelemetryEnabled|Sets the flag to enable client telemetry which will periodically collect database operations aggregation statistics, system information like cpu/memory and send it to cosmos monitoring service, which will be helpful during debugging. DEFAULT value is false indicating this is
|
| 723 |
|configuration|The component configurations||object|
|
| 724 |
-
|connectionSharingAcrossClientsEnabled|Enables connections sharing across multiple Cosmos Clients. The default is false. When you have multiple instances of Cosmos Client in the same JVM interacting
|
| 725 |
|consistencyLevel|Sets the consistency levels supported for Azure Cosmos DB client operations in the Azure Cosmos DB service. The requested ConsistencyLevel must match or be weaker than that provisioned for the database account. Consistency levels by order of strength are STRONG, BOUNDED\_STALENESS, SESSION and EVENTUAL. Refer to consistency level documentation for additional details: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels|SESSION|object|
|
| 726 |
|containerPartitionKeyPath|Sets the container partition key path.||string|
|
| 727 |
-
|contentResponseOnWriteEnabled|Sets the boolean to only return the headers and status code in Cosmos DB response in case of Create, Update and Delete operations on CosmosItem. In Consumer, it is enabled by default because of the ChangeFeed in the consumer that needs this flag to be enabled and thus
|
| 728 |
|cosmosAsyncClient|Inject an external CosmosAsyncClient into the component which provides a client-side logical representation of the Azure Cosmos DB service. This asynchronous client is used to configure and execute requests against the service.||object|
|
| 729 |
-
|createContainerIfNotExists|Sets if the component should create Cosmos container automatically in case it doesn't exist in Cosmos database|false|boolean|
|
| 730 |
-
|createDatabaseIfNotExists|Sets if the component should create Cosmos database automatically in case it doesn't exist in Cosmos account|false|boolean|
|
| 731 |
|databaseEndpoint|Sets the Azure Cosmos database endpoint the component will connect to.||string|
|
| 732 |
|multipleWriteRegionsEnabled|Sets the flag to enable writes on any regions for geo-replicated database accounts in the Azure Cosmos DB service. When the value of this property is true, the SDK will direct write operations to available writable regions of geo-replicated database account. Writable regions are ordered by PreferredRegions property. Setting the property value to true has no effect until EnableMultipleWriteRegions in DatabaseAccount is also set to true. DEFAULT value is true indicating that writes are directed to available writable regions of geo-replicated database account.|true|boolean|
|
| 733 |
|preferredRegions|Sets the comma separated preferred regions for geo-replicated database accounts. For example, East US as the preferred region. When EnableEndpointDiscovery is true and PreferredRegions is non-empty, the SDK will prefer to use the regions in the container in the order they are specified to perform operations.||string|
|
|
@@ -736,10 +740,10 @@ is the access key being generated from Azure CosmosDB portal.
|
|
| 736 |
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|
| 737 |
|changeFeedProcessorOptions|Sets the ChangeFeedProcessorOptions to be used. Unless specifically set, the default values that will be used are: maximum items per page or FeedResponse: 100; lease renew interval: 17 seconds; lease acquire interval: 13 seconds; lease expiration interval: 60 seconds; feed poll delay: 5 seconds; maximum scale count: unlimited||object|
|
| 738 |
|createLeaseContainerIfNotExists|Sets if the component should create Cosmos lease container for the consumer automatically in case it doesn't exist in Cosmos database|false|boolean|
|
| 739 |
-
|createLeaseDatabaseIfNotExists|Sets if the component should create Cosmos lease database for the consumer automatically in case it doesn't exist in Cosmos account|false|boolean|
|
| 740 |
|hostName|Sets the hostname. The host: a host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name. If not specified, this will be a generated random hostname.||string|
|
| 741 |
-
|leaseContainerName|Sets the lease container which acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. It will be auto
|
| 742 |
-
|leaseDatabaseName|Sets the lease database where the leaseContainerName will be stored. If it is not specified, this component will store the lease container in the same database that is specified in databaseName. It will be auto
|
| 743 |
|itemId|Sets the itemId in case needed for operation on item like delete, replace||string|
|
| 744 |
|itemPartitionKey|Sets partition key. Represents a partition key value in the Azure Cosmos DB database service. A partition key identifies the partition where the item is stored in.||string|
|
| 745 |
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|
|
@@ -758,14 +762,14 @@ is the access key being generated from Azure CosmosDB portal.
|
|
| 758 |
|---|---|---|---|
|
| 759 |
|databaseName|The name of the Cosmos database that the component should connect to. In case you are producing data and have createDatabaseIfNotExists=true, the component will automatically create a Cosmos database.||string|
|
| 760 |
|containerName|The name of the Cosmos container that the component should connect to. In case you are producing data and have createContainerIfNotExists=true, the component will automatically create a Cosmos container.||string|
|
| 761 |
-
|clientTelemetryEnabled|Sets the flag to enable client telemetry which will periodically collect database operations aggregation statistics, system information like cpu/memory and send it to cosmos monitoring service, which will be helpful during debugging. DEFAULT value is false indicating this is
|
| 762 |
-
|connectionSharingAcrossClientsEnabled|Enables connections sharing across multiple Cosmos Clients. The default is false. When you have multiple instances of Cosmos Client in the same JVM interacting
|
| 763 |
|consistencyLevel|Sets the consistency levels supported for Azure Cosmos DB client operations in the Azure Cosmos DB service. The requested ConsistencyLevel must match or be weaker than that provisioned for the database account. Consistency levels by order of strength are STRONG, BOUNDED\_STALENESS, SESSION and EVENTUAL. Refer to consistency level documentation for additional details: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels|SESSION|object|
|
| 764 |
|containerPartitionKeyPath|Sets the container partition key path.||string|
|
| 765 |
-
|contentResponseOnWriteEnabled|Sets the boolean to only return the headers and status code in Cosmos DB response in case of Create, Update and Delete operations on CosmosItem. In Consumer, it is enabled by default because of the ChangeFeed in the consumer that needs this flag to be enabled and thus
|
| 766 |
|cosmosAsyncClient|Inject an external CosmosAsyncClient into the component which provides a client-side logical representation of the Azure Cosmos DB service. This asynchronous client is used to configure and execute requests against the service.||object|
|
| 767 |
-
|createContainerIfNotExists|Sets if the component should create Cosmos container automatically in case it doesn't exist in Cosmos database|false|boolean|
|
| 768 |
-
|createDatabaseIfNotExists|Sets if the component should create Cosmos database automatically in case it doesn't exist in Cosmos account|false|boolean|
|
| 769 |
|databaseEndpoint|Sets the Azure Cosmos database endpoint the component will connect to.||string|
|
| 770 |
|multipleWriteRegionsEnabled|Sets the flag to enable writes on any regions for geo-replicated database accounts in the Azure Cosmos DB service. When the value of this property is true, the SDK will direct write operations to available writable regions of geo-replicated database account. Writable regions are ordered by PreferredRegions property. Setting the property value to true has no effect until EnableMultipleWriteRegions in DatabaseAccount is also set to true. DEFAULT value is true indicating that writes are directed to available writable regions of geo-replicated database account.|true|boolean|
|
| 771 |
|preferredRegions|Sets the comma separated preferred regions for geo-replicated database accounts. For example, East US as the preferred region. When EnableEndpointDiscovery is true and PreferredRegions is non-empty, the SDK will prefer to use the regions in the container in the order they are specified to perform operations.||string|
|
|
@@ -773,10 +777,10 @@ is the access key being generated from Azure CosmosDB portal.
|
|
| 773 |
|throughputProperties|Sets throughput of the resources in the Azure Cosmos DB service.||object|
|
| 774 |
|changeFeedProcessorOptions|Sets the ChangeFeedProcessorOptions to be used. Unless specifically set, the default values that will be used are: maximum items per page or FeedResponse: 100; lease renew interval: 17 seconds; lease acquire interval: 13 seconds; lease expiration interval: 60 seconds; feed poll delay: 5 seconds; maximum scale count: unlimited||object|
|
| 775 |
|createLeaseContainerIfNotExists|Sets if the component should create Cosmos lease container for the consumer automatically in case it doesn't exist in Cosmos database|false|boolean|
|
| 776 |
-
|createLeaseDatabaseIfNotExists|Sets if the component should create Cosmos lease database for the consumer automatically in case it doesn't exist in Cosmos account|false|boolean|
|
| 777 |
|hostName|Sets the hostname. The host: a host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name. If not specified, this will be a generated random hostname.||string|
|
| 778 |
-
|leaseContainerName|Sets the lease container which acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. It will be auto
|
| 779 |
-
|leaseDatabaseName|Sets the lease database where the leaseContainerName will be stored. If it is not specified, this component will store the lease container in the same database that is specified in databaseName. It will be auto
|
| 780 |
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occur while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.|false|boolean|
|
| 781 |
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|
| 782 |
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|
|
|
|
| 37 |
You can append query options to the URI in the following format,
|
| 38 |
`?option=value&option2=value&`…
|
| 39 |
|
| 40 |
+
# Usage
|
| 41 |
+
|
| 42 |
+
## Authentication Information
|
| 43 |
|
| 44 |
To use this component, you have two options to provide the required
|
| 45 |
Azure authentication information:
|
|
|
|
| 52 |
[CosmosAsyncClient](https://docs.microsoft.com/en-us/java/api/com.azure.cosmos.cosmosasyncclient?view=azure-java-stable)
|
| 53 |
instance which can be provided into `cosmosAsyncClient`.
|
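Both options above can be sketched in the Java DSL. This is a minimal configuration sketch only; the endpoint URL, account key, and bean name are placeholders, not values from this page:

```java
// Option 1: pass the authentication info directly on the endpoint URI
// (databaseEndpoint and accountKey values below are placeholders)
from("azure-cosmosdb://myDb/myContainer"
        + "?databaseEndpoint=https://myaccount.documents.azure.com:443/"
        + "&accountKey=RAW(myAccountKey)")
    .to("mock:result");

// Option 2: build your own CosmosAsyncClient with the Azure SDK builder
CosmosAsyncClient client = new CosmosClientBuilder()
        .endpoint("https://myaccount.documents.azure.com:443/")
        .key("myAccountKey")
        .buildAsyncClient();
// bind it in the Camel registry, e.g. under the name "myClient", then use:
// azure-cosmosdb://myDb/myContainer?cosmosAsyncClient=#myClient
```

The `RAW()` wrapper keeps the key from being URI-decoded by Camel, which matters when the key contains special characters.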
| 54 |
|
| 55 |
+
## Async Consumer and Producer
|
| 56 |
|
| 57 |
This component implements the async Consumer and producer.
|
| 58 |
|
| 59 |
This allows Camel routes to consume and produce events asynchronously
|
| 60 |
without blocking any threads.
|
| 61 |
|
|
| 62 |
## Message headers evaluated by the component producer
|
| 63 |
|
| 64 |
<table>
|
|
|
|
| 69 |
<col style="width: 69%" />
|
| 70 |
</colgroup>
|
| 71 |
<thead>
|
| 72 |
+
<tr class="header">
|
| 73 |
<th style="text-align: left;">Header</th>
|
| 74 |
<th style="text-align: left;">Variable Name</th>
|
| 75 |
<th style="text-align: left;">Type</th>
|
|
|
|
| 77 |
</tr>
|
| 78 |
</thead>
|
| 79 |
<tbody>
|
| 80 |
+
<tr class="odd">
|
| 81 |
<td
|
| 82 |
style="text-align: left;"><p><code>CamelAzureCosmosDbDatabaseName</code></p></td>
|
| 83 |
<td
|
|
|
|
| 88 |
case you are producing data and have createDatabaseIfNotExists=true, the
|
| 89 |
component will automatically create a Cosmos database.</p></td>
|
| 90 |
</tr>
|
| 91 |
+
<tr class="even">
|
| 92 |
<td
|
| 93 |
style="text-align: left;"><p><code>CamelAzureCosmosDbContainerName</code></p></td>
|
| 94 |
<td
|
|
|
|
| 100 |
the component will automatically create a Cosmos
|
| 101 |
container.</p></td>
|
| 102 |
</tr>
|
| 103 |
+
<tr class="odd">
|
| 104 |
<td
|
| 105 |
style="text-align: left;"><p><code>CamelAzureCosmosDbOperation</code></p></td>
|
| 106 |
<td
|
|
|
|
| 110 |
<td style="text-align: left;"><p>Set the producer operation which can be
|
| 111 |
used to execute a specific operation on the producer.</p></td>
|
| 112 |
</tr>
|
| 113 |
+
<tr class="even">
|
| 114 |
<td
|
| 115 |
style="text-align: left;"><p><code>CamelAzureCosmosDbQuery</code></p></td>
|
| 116 |
<td
|
|
|
|
| 119 |
<td style="text-align: left;"><p>Set the SQL query to execute on a given
|
| 120 |
producer query operations.</p></td>
|
| 121 |
</tr>
|
| 122 |
+
<tr class="odd">
|
| 123 |
<td
|
| 124 |
style="text-align: left;"><p><code>CamelAzureCosmosDbQueryRequestOptions</code></p></td>
|
| 125 |
<td
|
|
|
|
| 130 |
can be used with queryItems, queryContainers, queryDatabases,
|
| 131 |
listDatabases, listItems, listContainers operations.</p></td>
|
| 132 |
</tr>
|
| 133 |
+
<tr class="even">
|
| 134 |
<td
|
| 135 |
style="text-align: left;"><p><code>CamelAzureCosmosDbCreateDatabaseIfNotExist</code></p></td>
|
| 136 |
<td
|
|
|
|
| 140 |
Cosmos database automatically in case it doesn’t exist in the Cosmos
|
| 141 |
account.</p></td>
|
| 142 |
</tr>
|
| 143 |
+
<tr class="odd">
|
| 144 |
<td
|
| 145 |
style="text-align: left;"><p><code>CamelAzureCosmosDbCreateContainerIfNotExist</code></p></td>
|
| 146 |
<td
|
|
|
|
| 150 |
Cosmos container automatically in case it doesn’t exist in the Cosmos
|
| 151 |
account.</p></td>
|
| 152 |
</tr>
|
| 153 |
+
<tr class="even">
|
| 154 |
<td
|
| 155 |
style="text-align: left;"><p><code>CamelAzureCosmosDbThroughputProperties</code></p></td>
|
| 156 |
<td
|
|
|
|
| 160 |
<td style="text-align: left;"><p>Sets throughput of the resources in the
|
| 161 |
Azure Cosmos DB service.</p></td>
|
| 162 |
</tr>
|
| 163 |
+
<tr class="odd">
|
| 164 |
<td
|
| 165 |
style="text-align: left;"><p><code>CamelAzureCosmosDbDatabaseRequestOptions</code></p></td>
|
| 166 |
<td
|
|
|
|
| 170 |
<td style="text-align: left;"><p>Sets additional options to execute on
|
| 171 |
database operations.</p></td>
|
| 172 |
</tr>
|
| 173 |
+
<tr class="even">
|
| 174 |
<td
|
| 175 |
style="text-align: left;"><p><code>CamelAzureCosmosDbContainerPartitionKeyPath</code></p></td>
|
| 176 |
<td
|
|
|
|
| 179 |
<td style="text-align: left;"><p>Set the container partition key
|
| 180 |
path.</p></td>
|
| 181 |
</tr>
|
| 182 |
+
<tr class="odd">
|
| 183 |
<td
|
| 184 |
style="text-align: left;"><p><code>CamelAzureCosmosDbContainerRequestOptions</code></p></td>
|
| 185 |
<td
|
|
|
|
| 189 |
<td style="text-align: left;"><p>Set additional options to execute on
|
| 190 |
container operations.</p></td>
|
| 191 |
</tr>
|
| 192 |
+
<tr class="even">
|
| 193 |
<td
|
| 194 |
style="text-align: left;"><p><code>CamelAzureCosmosDbItemPartitionKey</code></p></td>
|
| 195 |
<td
|
|
|
|
| 199 |
partition key value in the Azure Cosmos DB database service. A partition
|
| 200 |
key identifies the partition where the item is stored in.</p></td>
|
| 201 |
</tr>
|
| 202 |
+
<tr class="odd">
|
| 203 |
<td
|
| 204 |
style="text-align: left;"><p><code>CamelAzureCosmosDbItemRequestOptions</code></p></td>
|
| 205 |
<td
|
|
|
|
| 209 |
<td style="text-align: left;"><p>Set additional options to execute on
|
| 210 |
item operations.</p></td>
|
| 211 |
</tr>
|
| 212 |
+
<tr class="even">
|
| 213 |
<td
|
| 214 |
style="text-align: left;"><p><code>CamelAzureCosmosDbItemId</code></p></td>
|
| 215 |
<td
|
|
|
|
| 231 |
<col style="width: 69%" />
|
| 232 |
</colgroup>
|
| 233 |
<thead>
|
| 234 |
+
<tr class="header">
|
| 235 |
<th style="text-align: left;">Header</th>
|
| 236 |
<th style="text-align: left;">Variable Name</th>
|
| 237 |
<th style="text-align: left;">Type</th>
|
|
|
|
| 239 |
</tr>
|
| 240 |
</thead>
|
| 241 |
<tbody>
|
| 242 |
+
<tr class="odd">
|
| 243 |
<td
|
| 244 |
style="text-align: left;"><p><code>CamelAzureCosmosDbRecourseId</code></p></td>
|
| 245 |
<td
|
|
|
|
| 248 |
<td style="text-align: left;"><p>The resource ID of the requested
|
| 249 |
resource.</p></td>
|
| 250 |
</tr>
|
| 251 |
+
<tr class="even">
|
| 252 |
<td
|
| 253 |
style="text-align: left;"><p><code>CamelAzureCosmosDbEtag</code></p></td>
|
| 254 |
<td
|
|
|
|
| 257 |
<td style="text-align: left;"><p>The Etag ID of the requested
|
| 258 |
resource.</p></td>
|
| 259 |
</tr>
|
| 260 |
+
<tr class="odd">
|
| 261 |
<td
|
| 262 |
style="text-align: left;"><p><code>CamelAzureCosmosDbTimestamp</code></p></td>
|
| 263 |
<td
|
|
|
|
| 266 |
<td style="text-align: left;"><p>The timestamp of the requested
|
| 267 |
resource.</p></td>
|
| 268 |
</tr>
|
| 269 |
+
<tr class="even">
|
| 270 |
<td
|
| 271 |
style="text-align: left;"><p><code>CamelAzureCosmosDbResponseHeaders</code></p></td>
|
| 272 |
<td
|
|
|
|
| 275 |
<td style="text-align: left;"><p>The response headers of the requested
|
| 276 |
resource.</p></td>
|
| 277 |
</tr>
|
| 278 |
+
<tr class="odd">
|
| 279 |
<td
|
| 280 |
style="text-align: left;"><p><code>CamelAzureCosmosDbStatusCode</code></p></td>
|
| 281 |
<td
|
|
|
|
| 284 |
<td style="text-align: left;"><p>The status code of the requested
|
| 285 |
resource.</p></td>
|
| 286 |
</tr>
|
| 287 |
+
<tr class="even">
|
| 288 |
<td
|
| 289 |
style="text-align: left;"><p><code>CamelAzureCosmosDbDefaultTimeToLiveInSeconds</code></p></td>
|
| 290 |
<td
|
|
|
|
| 293 |
<td style="text-align: left;"><p>The TTL of the requested
|
| 294 |
resource.</p></td>
|
| 295 |
</tr>
|
| 296 |
+
<tr class="odd">
|
| 297 |
<td
|
| 298 |
style="text-align: left;"><p><code>CamelAzureCosmosDbManualThroughput</code></p></td>
|
| 299 |
<td
|
|
|
|
| 302 |
<td style="text-align: left;"><p>The manual throughput of the requested
|
| 303 |
resource.</p></td>
|
| 304 |
</tr>
|
| 305 |
+
<tr class="even">
|
| 306 |
<td
|
| 307 |
style="text-align: left;"><p><code>CamelAzureCosmosDbAutoscaleMaxThroughput</code></p></td>
|
| 308 |
<td
|
|
|
|
| 330 |
<col style="width: 89%" />
|
| 331 |
</colgroup>
|
| 332 |
<thead>
|
| 333 |
+
<tr class="header">
|
| 334 |
<th style="text-align: left;">Operation</th>
|
| 335 |
<th style="text-align: left;">Description</th>
|
| 336 |
</tr>
|
| 337 |
</thead>
|
| 338 |
<tbody>
|
| 339 |
+
<tr class="odd">
|
| 340 |
<td style="text-align: left;"><p><code>listDatabases</code></p></td>
|
| 341 |
<td style="text-align: left;"><p>Gets a list of all databases as
|
| 342 |
<code>List<CosmosDatabaseProperties></code> set in the exchange
|
| 343 |
message body.</p></td>
|
| 344 |
</tr>
|
| 345 |
+
<tr class="even">
|
| 346 |
<td style="text-align: left;"><p><code>createDatabase</code></p></td>
|
| 347 |
<td style="text-align: left;"><p>Create a database in the specified
|
| 348 |
Azure CosmosDB account.</p></td>
|
| 349 |
</tr>
|
| 350 |
+
<tr class="odd">
|
| 351 |
<td style="text-align: left;"><p><code>queryDatabases</code></p></td>
|
| 352 |
<td style="text-align: left;"><p><strong><code>query</code> is
|
| 353 |
required</strong> Executes an SQL query against the service level in
|
|
|
|
| 370 |
<col style="width: 89%" />
|
| 371 |
</colgroup>
|
| 372 |
<thead>
|
| 373 |
+
<tr class="header">
|
| 374 |
<th style="text-align: left;">Operation</th>
|
| 375 |
<th style="text-align: left;">Description</th>
|
| 376 |
</tr>
|
| 377 |
</thead>
|
| 378 |
<tbody>
|
| 379 |
+
<tr class="odd">
|
| 380 |
<td style="text-align: left;"><p><code>deleteDatabase</code></p></td>
|
| 381 |
<td style="text-align: left;"><p>Delete a database from the Azure
|
| 382 |
CosmosDB account.</p></td>
|
| 383 |
</tr>
|
| 384 |
+
<tr class="even">
|
| 385 |
<td style="text-align: left;"><p><code>createContainer</code></p></td>
|
| 386 |
<td style="text-align: left;"><p>Create a container in the specified
|
| 387 |
Azure CosmosDB database.</p></td>
|
| 388 |
</tr>
|
| 389 |
+
<tr class="odd">
|
| 390 |
<td
|
| 391 |
style="text-align: left;"><p><code>replaceDatabaseThroughput</code></p></td>
|
| 392 |
<td style="text-align: left;"><p>Replace the throughput for the
|
| 393 |
specified Azure CosmosDB database.</p></td>
|
| 394 |
</tr>
|
| 395 |
+
<tr class="even">
|
| 396 |
<td style="text-align: left;"><p><code>listContainers</code></p></td>
|
| 397 |
<td style="text-align: left;"><p>Gets a list of all containers in the
|
| 398 |
specified database as <code>List<CosmosContainerProperties></code>
|
| 399 |
set in the exchange message body.</p></td>
|
| 400 |
</tr>
|
| 401 |
+
<tr class="odd">
|
| 402 |
<td style="text-align: left;"><p><code>queryContainers</code></p></td>
|
| 403 |
<td style="text-align: left;"><p><strong><code>query</code> is
|
| 404 |
required</strong> Executes an SQL query against the database level in
|
|
|
|
| 421 |
<col style="width: 89%" />
|
| 422 |
</colgroup>
|
| 423 |
<thead>
|
| 424 |
+
<tr class="header">
|
| 425 |
<th style="text-align: left;">Operation</th>
|
| 426 |
<th style="text-align: left;">Description</th>
|
| 427 |
</tr>
|
| 428 |
</thead>
|
| 429 |
<tbody>
|
| 430 |
+
<tr class="odd">
|
| 431 |
<td style="text-align: left;"><p><code>deleteContainer</code></p></td>
|
| 432 |
<td style="text-align: left;"><p>Delete a container from the specified
|
| 433 |
Azure CosmosDB database.</p></td>
|
| 434 |
</tr>
|
| 435 |
+
<tr class="even">
|
| 436 |
<td
|
| 437 |
style="text-align: left;"><p><code>replaceContainerThroughput</code></p></td>
|
| 438 |
<td style="text-align: left;"><p>Replace the throughput for the
|
| 439 |
specified Azure CosmosDB container.</p></td>
|
| 440 |
</tr>
|
| 441 |
+
<tr class="odd">
|
| 442 |
<td style="text-align: left;"><p><code>createItem</code></p></td>
|
| 443 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 444 |
is required</strong> Creates an item in the specified container; it
|
| 445 |
accepts a POJO or key-value pairs as <code>Map<String, ?></code>.</p></td>
|
| 446 |
</tr>
|
| 447 |
+
<tr class="even">
|
| 448 |
<td style="text-align: left;"><p><code>upsertItem</code></p></td>
|
| 449 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 450 |
is required</strong> Creates an item in the specified container if it
|
| 451 |
doesn’t exist, or overwrites it if it does; it accepts a POJO or
|
| 452 |
key-value pairs as <code>Map<String, ?></code>.</p></td>
|
| 453 |
</tr>
|
| 454 |
+
<tr class="odd">
|
| 455 |
<td style="text-align: left;"><p><code>replaceItem</code></p></td>
|
| 456 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 457 |
and <code>itemId</code> are required</strong> Overwrites an item in the
|
| 458 |
specified container; it accepts a POJO or key-value pairs as
|
| 459 |
<code>Map<String, ?></code>.</p></td>
|
| 460 |
</tr>
|
| 461 |
+
<tr class="even">
|
| 462 |
<td style="text-align: left;"><p><code>deleteItem</code></p></td>
|
| 463 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 464 |
and <code>itemId</code> are required</strong> Deletes an item in the
|
| 465 |
specified container.</p></td>
|
| 466 |
</tr>
|
| 467 |
+
<tr class="odd">
|
| 468 |
<td style="text-align: left;"><p><code>readItem</code></p></td>
|
| 469 |
<td style="text-align: left;"><p><strong><code>itemPartitionKey</code>
|
| 470 |
and <code>itemId</code> are required</strong> Gets an item in the
|
| 471 |
specified container as <code>Map<String,?></code> set in the
|
| 472 |
exchange message body.</p></td>
|
| 473 |
</tr>
|
| 474 |
+
<tr class="even">
|
| 475 |
<td style="text-align: left;"><p><code>readItem</code></p></td>
|
| 476 |
<td
|
| 477 |
style="text-align: left;"><p><strong><code>itemPartitionKey</code></strong>
|
|
|
|
| 480 |
<code>List<Map<String,?>></code> set in the exchange message
|
| 481 |
body.</p></td>
|
| 482 |
</tr>
|
| 483 |
+
<tr class="odd">
|
| 484 |
<td style="text-align: left;"><p><code>queryItems</code></p></td>
|
| 485 |
<td style="text-align: left;"><p><strong><code>query</code> is
|
| 486 |
required</strong> Executes an SQL query against the container level in
|
|
|
|
| 494 |
Refer to the example section on this page to learn how to use these
|
| 495 |
operations in your Camel application.
|
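As a minimal configuration sketch (the `direct:` endpoint name and the endpoint values are hypothetical), a producer route selects its operation via the `operation` URI option or the `CamelAzureCosmosDbOperation` header; for example, for `createItem`:

```java
// The message body carries the item as a POJO or Map<String, ?>;
// itemPartitionKey is mandatory for createItem, set here via header.
from("direct:createItem")
    .setHeader("CamelAzureCosmosDbItemPartitionKey", constant("pk1"))
    .to("azure-cosmosdb://myDb/myContainer"
        + "?operation=createItem"
        + "&databaseEndpoint=https://myaccount.documents.azure.com:443/"
        + "&accountKey=RAW(myAccountKey)");
```

A header set on the exchange takes precedence over the corresponding URI option, so the same endpoint can serve multiple operations by varying `CamelAzureCosmosDbOperation` per message.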
| 496 |
|
| 497 |
+
# Examples
|
| 498 |
+
|
| 499 |
+
## Consuming records from a specific container
|
| 500 |
+
|
| 501 |
+
For example, to consume records from a specific container in a specific
|
| 502 |
+
database to a file, use the following snippet:
|
| 503 |
+
|
| 504 |
+
from("azure-cosmosdb://camelDb/myContainer?accountKey=MyaccountKey&databaseEndpoint=https://myazure.com:443&leaseDatabaseName=myLeaseDB&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true").
|
| 505 |
+
to("file://directory");
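Writing to a container goes through the producer side of the same endpoint. Below is a minimal sketch of a `createItem` call; it assumes the `CosmosDbConstants` header constants from this component and uses illustrative endpoint values and keys, so adjust them to your account:

```java
from("direct:createItem")
    // the partition key is required for item-level operations (assumption:
    // CosmosDbConstants.ITEM_PARTITION_KEY maps to the itemPartitionKey option)
    .setHeader(CosmosDbConstants.ITEM_PARTITION_KEY, constant("partition1"))
    // the body carries the item as Map<String, ?> (or a POJO), per the operations table above
    .setBody(constant(Map.of("id", "1", "partition", "partition1", "field", "value")))
    .to("azure-cosmosdb://camelDb/myContainer?accountKey=MyaccountKey&databaseEndpoint=https://myazure.com:443&operation=createItem");
```

The `operation` URI option selects which of the operations listed above the producer executes for each exchange.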
## Operations
- `listDatabases`:
The consumer will set `List<Map<String,?>>` in the exchange message body,
which reflects the list of items in a single feed.

### Example

For example, to listen to the events in the `myContainer` container in
`myDb`:

from("azure-cosmosdb://myDb/myContainer?leaseDatabaseName=myLeaseDb&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true")
    .to("mock:result");
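Because the consumer sets a `List<Map<String,?>>` in the message body, the change feed can be handled with a simple processor. A sketch under that assumption (the `id` key and the logging call are illustrative, not part of the component's API):

```java
from("azure-cosmosdb://myDb/myContainer?leaseDatabaseName=myLeaseDb&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true")
    .process(exchange -> {
        // one List per change feed batch; each Map is a changed item
        List<Map<String, ?>> items = exchange.getIn().getBody(List.class);
        items.forEach(item -> log.info("changed item: {}", item.get("id")));
    })
    .to("mock:result");
```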
# Important Development Notes

When developing on this component, you will need to obtain your Azure
accessKey in order to run the integration tests. In addition to the
mocked unit tests, you will need to run the integration tests.

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|clientTelemetryEnabled|Sets the flag to enable client telemetry which will periodically collect database operations aggregation statistics, system information like cpu/memory and send it to cosmos monitoring service, which will be helpful during debugging. DEFAULT value is false indicating this is an opt-in feature, by default no telemetry collection.|false|boolean|
|configuration|The component configurations||object|
|connectionSharingAcrossClientsEnabled|Enables connections sharing across multiple Cosmos Clients. The default is false. When you have multiple instances of Cosmos Client in the same JVM interacting with multiple Cosmos accounts, enabling this allows connection sharing in Direct mode if possible between instances of Cosmos Client. Please note, when setting this option, the connection configuration (e.g., socket timeout config, idle timeout config) of the first instantiated client will be used for all other client instances.|false|boolean|
|consistencyLevel|Sets the consistency levels supported for Azure Cosmos DB client operations in the Azure Cosmos DB service. The requested ConsistencyLevel must match or be weaker than that provisioned for the database account. Consistency levels by order of strength are STRONG, BOUNDED\_STALENESS, SESSION and EVENTUAL. Refer to consistency level documentation for additional details: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels|SESSION|object|
|containerPartitionKeyPath|Sets the container partition key path.||string|
|contentResponseOnWriteEnabled|Sets the boolean to only return the headers and status code in Cosmos DB response in case of Create, Update and Delete operations on CosmosItem. In Consumer, it is enabled by default because of the ChangeFeed in the consumer that needs this flag to be enabled, and thus it shouldn't be overridden. In Producer, it is advised to disable it since it reduces the network overhead|true|boolean|
|cosmosAsyncClient|Inject an external CosmosAsyncClient into the component which provides a client-side logical representation of the Azure Cosmos DB service. This asynchronous client is used to configure and execute requests against the service.||object|
|createContainerIfNotExists|Sets if the component should create the Cosmos container automatically in case it doesn't exist in the Cosmos database|false|boolean|
|createDatabaseIfNotExists|Sets if the component should create the Cosmos database automatically in case it doesn't exist in the Cosmos account|false|boolean|
|databaseEndpoint|Sets the Azure Cosmos database endpoint the component will connect to.||string|
|multipleWriteRegionsEnabled|Sets the flag to enable writes on any regions for geo-replicated database accounts in the Azure Cosmos DB service. When the value of this property is true, the SDK will direct write operations to available writable regions of geo-replicated database account. Writable regions are ordered by PreferredRegions property. Setting the property value to true has no effect until EnableMultipleWriteRegions in DatabaseAccount is also set to true. DEFAULT value is true indicating that writes are directed to available writable regions of geo-replicated database account.|true|boolean|
|preferredRegions|Sets the comma separated preferred regions for geo-replicated database accounts. For example, East US as the preferred region. When EnableEndpointDiscovery is true and PreferredRegions is non-empty, the SDK will prefer to use the regions in the container in the order they are specified to perform operations.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|changeFeedProcessorOptions|Sets the ChangeFeedProcessorOptions to be used. Unless specifically set the default values that will be used are: maximum items per page or FeedResponse: 100 lease renew interval: 17 seconds lease acquire interval: 13 seconds lease expiration interval: 60 seconds feed poll delay: 5 seconds maximum scale count: unlimited||object|
|createLeaseContainerIfNotExists|Sets if the component should create the Cosmos lease container for the consumer automatically in case it doesn't exist in the Cosmos database|false|boolean|
|createLeaseDatabaseIfNotExists|Sets if the component should create the Cosmos lease database for the consumer automatically in case it doesn't exist in the Cosmos account|false|boolean|
|hostName|Sets the hostname. The host: a host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name. If not specified, this will be a generated random hostname.||string|
|leaseContainerName|Sets the lease container which acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. It will be auto-created if createLeaseContainerIfNotExists is set to true.|camel-lease|string|
|leaseDatabaseName|Sets the lease database where the leaseContainerName will be stored. If it is not specified, this component will store the lease container in the same database that is specified in databaseName. It will be auto-created if createLeaseDatabaseIfNotExists is set to true.||string|
|itemId|Sets the itemId, in case it is needed for operations on an item such as delete or replace.||string|
|itemPartitionKey|Sets the partition key. Represents a partition key value in the Azure Cosmos DB database service. A partition key identifies the partition where the item is stored.||string|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|databaseName|The name of the Cosmos database that component should connect to. In case you are producing data and have createDatabaseIfNotExists=true, the component will automatically auto create a Cosmos database.||string|
|containerName|The name of the Cosmos container that component should connect to. In case you are producing data and have createContainerIfNotExists=true, the component will automatically auto create a Cosmos container.||string|
|clientTelemetryEnabled|Sets the flag to enable client telemetry which will periodically collect database operations aggregation statistics, system information like cpu/memory and send it to cosmos monitoring service, which will be helpful during debugging. DEFAULT value is false indicating this is an opt-in feature, by default no telemetry collection.|false|boolean|
|connectionSharingAcrossClientsEnabled|Enables connections sharing across multiple Cosmos Clients. The default is false. When you have multiple instances of Cosmos Client in the same JVM interacting with multiple Cosmos accounts, enabling this allows connection sharing in Direct mode if possible between instances of Cosmos Client. Please note, when setting this option, the connection configuration (e.g., socket timeout config, idle timeout config) of the first instantiated client will be used for all other client instances.|false|boolean|
|consistencyLevel|Sets the consistency levels supported for Azure Cosmos DB client operations in the Azure Cosmos DB service. The requested ConsistencyLevel must match or be weaker than that provisioned for the database account. Consistency levels by order of strength are STRONG, BOUNDED\_STALENESS, SESSION and EVENTUAL. Refer to consistency level documentation for additional details: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels|SESSION|object|
|containerPartitionKeyPath|Sets the container partition key path.||string|
|contentResponseOnWriteEnabled|Sets the boolean to only return the headers and status code in Cosmos DB response in case of Create, Update and Delete operations on CosmosItem. In Consumer, it is enabled by default because of the ChangeFeed in the consumer that needs this flag to be enabled, and thus it shouldn't be overridden. In Producer, it is advised to disable it since it reduces the network overhead|true|boolean|
|cosmosAsyncClient|Inject an external CosmosAsyncClient into the component which provides a client-side logical representation of the Azure Cosmos DB service. This asynchronous client is used to configure and execute requests against the service.||object|
|createContainerIfNotExists|Sets if the component should create the Cosmos container automatically in case it doesn't exist in the Cosmos database|false|boolean|
|createDatabaseIfNotExists|Sets if the component should create the Cosmos database automatically in case it doesn't exist in the Cosmos account|false|boolean|
|databaseEndpoint|Sets the Azure Cosmos database endpoint the component will connect to.||string|
|multipleWriteRegionsEnabled|Sets the flag to enable writes on any regions for geo-replicated database accounts in the Azure Cosmos DB service. When the value of this property is true, the SDK will direct write operations to available writable regions of geo-replicated database account. Writable regions are ordered by PreferredRegions property. Setting the property value to true has no effect until EnableMultipleWriteRegions in DatabaseAccount is also set to true. DEFAULT value is true indicating that writes are directed to available writable regions of geo-replicated database account.|true|boolean|
|preferredRegions|Sets the comma separated preferred regions for geo-replicated database accounts. For example, East US as the preferred region. When EnableEndpointDiscovery is true and PreferredRegions is non-empty, the SDK will prefer to use the regions in the container in the order they are specified to perform operations.||string|
|throughputProperties|Sets throughput of the resources in the Azure Cosmos DB service.||object|
|changeFeedProcessorOptions|Sets the ChangeFeedProcessorOptions to be used. Unless specifically set the default values that will be used are: maximum items per page or FeedResponse: 100 lease renew interval: 17 seconds lease acquire interval: 13 seconds lease expiration interval: 60 seconds feed poll delay: 5 seconds maximum scale count: unlimited||object|
|createLeaseContainerIfNotExists|Sets if the component should create the Cosmos lease container for the consumer automatically in case it doesn't exist in the Cosmos database|false|boolean|
|createLeaseDatabaseIfNotExists|Sets if the component should create the Cosmos lease database for the consumer automatically in case it doesn't exist in the Cosmos account|false|boolean|
|hostName|Sets the hostname. The host: a host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different instance name. If not specified, this will be a generated random hostname.||string|
|leaseContainerName|Sets the lease container which acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account. It will be auto-created if createLeaseContainerIfNotExists is set to true.|camel-lease|string|
|leaseDatabaseName|Sets the lease database where the leaseContainerName will be stored. If it is not specified, this component will store the lease container in the same database that is specified in databaseName. It will be auto-created if createLeaseDatabaseIfNotExists is set to true.||string|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
camel-azure-eventhubs.md
**Both producer and consumer are supported**

The Azure Event Hubs component integrates Azure Event Hubs using the
[AMQP
protocol](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol).
Azure EventHubs is a highly scalable publish-subscribe service that can
ingest millions of events per second and stream them to multiple
consumers.

Azure Event Hubs also supports the Kafka and
HTTPS protocols. Therefore, you can also use the [Camel
Kafka](#components::kafka-component.adoc) component to produce and
consume to Azure Event Hubs. You can learn more
[here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs).

You must have a valid Windows Azure Event Hubs account. More information
is available at [Azure Documentation
Portal](https://docs.microsoft.com/azure/).
Maven users will need to add the following dependency to their `pom.xml`
for this component:
azure-eventhubs://[namespace/eventHubName][?options]

# Authentication Information

To use this component, you have three options to
provide the required Azure authentication information:

**CONNECTION\_STRING**:

- Provide `sharedAccessName` and `sharedAccessKey` for your
  Event Hubs account. The sharedAccessKey can be generated through
  your Event Hubs Azure portal.

- Provide `connectionString`, in which case
  `sharedAccessKey` and `sharedAccessName` do not need to be
  included. Learn more
  [here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string)
  on how to generate the connection string.
|
| 66 |
**TOKEN\_CREDENTIAL**:

- Provide an implementation of
  `com.azure.core.credential.TokenCredential`,
  for example one built through the
  `com.azure.identity.DefaultAzureCredentialBuilder().build();` API.
  See the documentation [here about Azure-AD
  authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).

**AZURE\_IDENTITY**:

- This will use the
  `com.azure.identity.DefaultAzureCredentialBuilder().build();` API.
  This will follow the Default Azure Credential Chain. See the
  documentation [here about Azure-AD
  authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
- Provide a
  [EventHubProducerAsyncClient](https://docs.microsoft.com/en-us/java/api/com.azure.messaging.eventhubs.eventhubproducerasyncclient?view=azure-java-stable)
  instance which can be
  injected into the endpoint. Note that this is **only
  possible for the producer**; for the
  consumer, it is not possible to inject the client.

-
# Checkpoint Store Information
|
| 91 |
|
| 92 |
-
A checkpoint store stores and retrieves partition ownership information
|
| 93 |
and checkpoint details for each partition in a given consumer group of
|
| 94 |
an event hub instance. Users are not meant to implement a
|
| 95 |
-
CheckpointStore. Users are expected to choose existing implementations
|
| 96 |
of this interface, instantiate it, and pass it to the component through
|
| 97 |
-
`checkpointStore` option.
|
| 98 |
-
methods on a checkpoint store, these are used internally by the client.
|
| 99 |
|
| 100 |
-
|
| 101 |
-
|
| 102 |
[`BlobCheckpointStore`](https://docs.microsoft.com/en-us/javascript/api/@azure/eventhubs-checkpointstore-blob/blobcheckpointstore?view=azure-node-latest)
|
| 103 |
-
to store the checkpoint
|
| 104 |
-
chose to use the default `BlobCheckpointStore`, you will need to
|

- `blobAccountName`: It sets the Azure account name to be used for
  authentication with azure blob services.

- `blobAccessKey`: It sets the access key for the associated azure
  account name to be used for authentication with azure blob services.

- `blobContainerName`: It sets the blob container that shall be used
  by the BlobCheckpointStore to store the checkpoint offsets.

# Async Consumer and Producer

This component implements the async Consumer and producer.

# Usage

For example, to consume events from EventHub, use the following snippet:

from("azure-eventhubs:/camel/camelHub?sharedAccessName=SASaccountName&sharedAccessKey=SASaccessKey&blobAccountName=accountName&blobAccessKey=accessKey&blobContainerName=containerName")
    .to("file://queuedirectory");

## Message body type

The component expects and produces the event data as
`byte[]`. This allows the
user to rely on Camel type converters to convert the data to and from the
message body.
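In practice this means other payload types can be handed to the producer and left to Camel's type converters. A sketch, with the URI options elided and the conversion noted in comments:

```java
from("direct:start")
    // the String body is converted to byte[] by Camel type converters before sending
    .setBody(constant("hello event"))
    .to("azure-eventhubs:/camel/camelHub?...");
```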
## Automatic detection of EventHubProducerAsyncClient client in registry
The component is capable of detecting the presence of an
EventHubProducerAsyncClient bean into the registry. If it’s the only
instance of that type, it will be used as the client, and you won’t have
to define it as uri parameter, like the example above. This may be
really useful for smarter configuration of the endpoint.

## Consumer Example

For example, to consume the events that were
produced in JSON:

from("azure-eventhubs:?...")
    .unmarshal().json(JsonLibrary.Jackson)
    .to(result);
## Producer Example

from("direct:start")
    .process(exchange -> {
        exchange.getIn().setHeader(EventHubsConstants.PARTITION_ID, firstPartition);
        exchange.getIn().setBody("test event");
    })
    .to("azure-eventhubs:?...");

Also, the component supports sending a batch of
data (e.g.: list of Strings). For example:

from("direct:start")
    .process(exchange -> {
        final List<String> messages = new LinkedList<>();
        messages.add("Test String Message 1");
        messages.add("Test String Message 2");

        exchange.getIn().setHeader(EventHubsConstants.PARTITION_ID, firstPartition);
        exchange.getIn().setBody(messages);
    })
    .to("azure-eventhubs:?...");
## Azure-AD Authentication example

See the Azure documentation
about what environment variables you need to set for this to work:
from("direct:start")
    ...

# Important Development Notes

When developing on this component, you will need to obtain your Azure
accessKey to run the integration tests. In addition to the mocked unit
tests, you will need to run the integration tests, where the accessKey
is the access key being generated from Azure portal.

## Component Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|amqpRetryOptions|Sets the retry policy for
|amqpTransportType|Sets the transport type by which all the communication with Azure Event Hubs occurs.
|configuration|The component configurations||object|
|blobAccessKey|In case you chose the default BlobCheckpointStore, this sets access key for the associated azure account name to be used for authentication with azure blob services.||string|
|blobAccountName|In case you chose the default BlobCheckpointStore, this sets Azure account name to be used for authentication with azure blob services.||string|
|blobContainerName|In case you chose the default BlobCheckpointStore, this sets the blob container that shall be used by the BlobCheckpointStore to store the checkpoint offsets.||string|
|blobStorageSharedKeyCredential|In case you chose the default BlobCheckpointStore, StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information.||object|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|checkpointBatchSize|Sets the batch size between each checkpoint
|checkpointBatchTimeout|Sets the batch timeout between each checkpoint
|checkpointStore|Sets the CheckpointStore the EventProcessorClient will use for storing partition ownership and checkpoint information. Users can, optionally, provide their own implementation of CheckpointStore which will store ownership and checkpoint information. By default it is set to use com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore which stores all checkpoint offsets into Azure Blob Storage.|BlobCheckpointStore|object|
|consumerGroupName|Sets the name of the consumer group this consumer is associated with. Events are read in the context of this group. The name of the consumer group that is created by default is
|eventPosition|Sets the map containing the event position to use for each partition if a checkpoint for the partition does not exist in CheckpointStore. This map is keyed off of the partition id. If there is no checkpoint in CheckpointStore and there is no entry in this map, the processing of the partition will start from
|prefetchCount|Sets the count used by the receiver to control the number of events the Event Hub consumer will actively receive and queue locally without regard to whether a receive operation is currently active.|500|integer|
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|partitionId|Sets the identifier of the Event Hub partition that the events will be sent to. If the identifier is not specified, the Event Hubs service will be responsible for routing events that are sent to an available partition.||string|
|partitionKey|Sets a hashing key to be provided for the batch of events, which instructs the Event Hubs service to map this key to a specific partition. The selection of a partition is stable for a given partition hashing key. Should any other batches of events be sent using the same exact partition hashing key, the Event Hubs service will route them all to the same partition. This should be specified only when there is a need to group events by partition, but there is flexibility into which partition they are routed. If ensuring that a batch of events is sent only to a specific partition, it is recommended that the
|producerAsyncClient|Sets the EventHubProducerAsyncClient. An asynchronous producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the options specified when creating an
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|connectionString|Instead of supplying namespace, sharedAccessKey, sharedAccessName .
|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object|
|sharedAccessKey|The generated value for the SharedAccessName.||string|
|sharedAccessName|The name you chose for your EventHubs SAS keys.||string|
|tokenCredential|

## Endpoint Configurations

|Name|Description|Default|Type|
|---|---|---|---|
|namespace|EventHubs namespace created in Azure Portal.||string|
|eventHubName|EventHubs name under a specific namespace.||string|
|amqpRetryOptions|Sets the retry policy for
|amqpTransportType|Sets the transport type by which all the communication with Azure Event Hubs occurs.
|blobAccessKey|In case you chose the default BlobCheckpointStore, this sets access key for the associated azure account name to be used for authentication with azure blob services.||string|
|blobAccountName|In case you chose the default BlobCheckpointStore, this sets Azure account name to be used for authentication with azure blob services.||string|
|blobContainerName|In case you chose the default BlobCheckpointStore, this sets the blob container that shall be used by the BlobCheckpointStore to store the checkpoint offsets.||string|
|blobStorageSharedKeyCredential|In case you chose the default BlobCheckpointStore, StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information.||object|
|checkpointBatchSize|Sets the batch size between each checkpoint
|checkpointBatchTimeout|Sets the batch timeout between each checkpoint
|checkpointStore|Sets the CheckpointStore the EventProcessorClient will use for storing partition ownership and checkpoint information. Users can, optionally, provide their own implementation of CheckpointStore which will store ownership and checkpoint information. By default it is set to use com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore which stores all checkpoint offsets into Azure Blob Storage.|BlobCheckpointStore|object|
|consumerGroupName|Sets the name of the consumer group this consumer is associated with. Events are read in the context of this group. The name of the consumer group that is created by default is
|eventPosition|Sets the map containing the event position to use for each partition if a checkpoint for the partition does not exist in CheckpointStore. This map is keyed off of the partition id. If there is no checkpoint in CheckpointStore and there is no entry in this map, the processing of the partition will start from
|prefetchCount|Sets the count used by the receiver to control the number of events the Event Hub consumer will actively receive and queue locally without regard to whether a receive operation is currently active.|500|integer|
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored.||object|
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|partitionId|Sets the identifier of the Event Hub partition that the events will be sent to. If the identifier is not specified, the Event Hubs service will be responsible for routing events that are sent to an available partition.||string|
|partitionKey|Sets a hashing key to be provided for the batch of events, which instructs the Event Hubs service to map this key to a specific partition. The selection of a partition is stable for a given partition hashing key. Should any other batches of events be sent using the same exact partition hashing key, the Event Hubs service will route them all to the same partition. This should be specified only when there is a need to group events by partition, but there is flexibility into which partition they are routed. If ensuring that a batch of events is sent only to a specific partition, it is recommended that the
|producerAsyncClient|Sets the EventHubProducerAsyncClient. An asynchronous producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the options specified when creating an
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|connectionString|Instead of supplying namespace, sharedAccessKey, sharedAccessName .
|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object|
|
| 269 |
|sharedAccessKey|The generated value for the SharedAccessName.||string|
|
| 270 |
|sharedAccessName|The name you chose for your EventHubs SAS keys.||string|
|
| 271 |
-
|tokenCredential|
|
|
|
|
| 4 |
|
| 5 |
**Both producer and consumer are supported**
|
| 6 |
|
| 7 |
+
The Azure Event Hubs component provides the capability to produce and
|
| 8 |
+
consume events with [Azure Event
|
| 9 |
+
Hubs](https://azure.microsoft.com/en-us/services/event-hubs/) using the
|
| 10 |
[AMQP
|
| 11 |
protocol](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol).
|
| 12 |
Azure EventHubs is a highly scalable publish-subscribe service that can
|
| 13 |
ingest millions of events per second and stream them to multiple
|
| 14 |
consumers.
|
| 15 |
|
| 16 |
+
**Prerequisites**
|
|
|
|
|
|
|
|
|
|
|
|
|
| 17 |
|
| 18 |
+
You must have a valid Microsoft Azure Event Hubs account. More
|
| 19 |
+
information is available at the [Azure Documentation
|
|
|
|
|
|
|
| 20 |
Portal](https://docs.microsoft.com/azure/).
|
| 21 |
|
| 22 |
Maven users will need to add the following dependency to their `pom.xml`
|
|
|
|
| 33 |
|
| 34 |
azure-eventhubs://[namespace/eventHubName][?options]
|
| 35 |
|
| 36 |
+
When providing a `connectionString`, the `namespace` and `eventHubName`
|
| 37 |
+
options are not required as they are already included in the
|
| 38 |
`connectionString`.
|
| 39 |
|
| 40 |
+
# Usage
|
| 41 |
+
|
| 42 |
+
## Authentication Information
|
| 43 |
|
| 44 |
+
There are three different Credential Types: `AZURE_IDENTITY`,
|
| 45 |
+
`TOKEN_CREDENTIAL` and `CONNECTION_STRING`.
|
|
|
|
|
|
|
| 46 |
|
| 47 |
**CONNECTION\_STRING**:
|
| 48 |
|
| 49 |
+
You can either:
|
|
|
|
|
|
|
| 50 |
|
| 51 |
+
- Provide the `connectionString` option. Using this option means that
|
| 52 |
+
you don’t need to specify additional options `namespace`,
|
| 53 |
+
`eventHubName`, `sharedAccessKey` and `sharedAccessName`, as this
|
| 54 |
+
data is already included within the `connectionString`. Learn more
|
|
|
|
| 55 |
[here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string)
|
| 56 |
on how to generate the connection string.
|
| 57 |
|
| 58 |
+
Or
|
| 59 |
+
|
| 60 |
+
- Provide `sharedAccessName` and `sharedAccessKey` options for your
|
| 61 |
+
Azure Event Hubs account. The `sharedAccessKey` can be generated
|
| 62 |
+
through the Event Hubs Azure portal. The connection string will then
|
| 63 |
+
be generated automatically for you by the azure-eventhubs component.
|
| 64 |
+
|
| 65 |
**TOKEN\_CREDENTIAL**:
|
| 66 |
|
| 67 |
+
- Bind an implementation of
|
| 68 |
+
`com.azure.core.credential.TokenCredential` to the Camel Registry
|
| 69 |
+
(see example below). See the documentation [here about Azure-AD
|
|
|
|
|
|
|
| 70 |
authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
|
| 71 |
|
| 72 |
+
**AZURE\_IDENTITY**: This will use a
|
| 73 |
+
`com.azure.identity.DefaultAzureCredentialBuilder().build()` instance.
|
| 74 |
This will follow the Default Azure Credential Chain. See the
|
| 75 |
documentation [here about Azure-AD
|
| 76 |
authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication).
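As a hedged sketch (the namespace and Event Hub name below are placeholders, not values from this document), a route relying on the default credential chain could look like:

```java
// Illustrative only: authentication is resolved by the Default Azure
// Credential Chain (environment variables, managed identity, Azure CLI, ...),
// so no key material appears in the endpoint URI.
from("direct:start")
    .to("azure-eventhubs:myNamespace/myEventHub?credentialType=AZURE_IDENTITY");
```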
|
|
|
|
| 79 |
|
| 80 |
- Provide an
|
| 81 |
[EventHubProducerAsyncClient](https://docs.microsoft.com/en-us/java/api/com.azure.messaging.eventhubs.eventhubproducerasyncclient?view=azure-java-stable)
|
| 82 |
+
instance which can be used for the `producerAsyncClient` option.
|
| 83 |
+
However, this is **only supported for the azure-eventhubs producer**;
|
| 84 |
+
for the consumer, it is not possible to inject the client due to
|
| 85 |
+
some design constraints in the `EventProcessorClient`.
|
| 86 |
|
| 87 |
+
## Checkpoint Store Information
|
| 88 |
|
| 89 |
+
A checkpoint store stores and retrieves partition ownership information
|
| 90 |
and checkpoint details for each partition in a given consumer group of
|
| 91 |
an event hub instance. Users are not meant to implement a
|
| 92 |
+
`CheckpointStore`. Users are expected to choose an existing implementation
|
| 93 |
of this interface, instantiate it, and pass it to the component through
|
| 94 |
+
the `checkpointStore` option.
|
|
|
|
| 95 |
|
| 96 |
+
When no `CheckpointStore` implementation is provided, the
|
| 97 |
+
azure-eventhubs component will fall back to use
|
| 98 |
[`BlobCheckpointStore`](https://docs.microsoft.com/en-us/javascript/api/@azure/eventhubs-checkpointstore-blob/blobcheckpointstore?view=azure-node-latest)
|
| 99 |
+
to store the checkpoint information in the Azure Blob Storage account.
|
| 100 |
+
If you choose to use the default `BlobCheckpointStore`, you will need to
|
| 101 |
+
supply the following options:
|
| 102 |
|
| 103 |
+
- `blobAccountName`: The Azure account name to be used for
|
| 104 |
authentication with azure blob services.
|
| 105 |
|
| 106 |
+
- `blobAccessKey`: The access key for the associated azure account
|
| 107 |
+
name to be used for authentication with azure blob services.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 108 |
|
| 109 |
+
- `blobContainerName`: The name of the blob container that shall be
|
| 110 |
+
used by the BlobCheckpointStore to store the checkpoint offsets.
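Alternatively, a pre-built `CheckpointStore` instance can be bound in the registry and referenced from the endpoint. The sketch below is illustrative (the bean name, container name, and connection string are assumptions), using the SDK's `BlobCheckpointStore` directly:

```java
// Build a client for the blob container that will hold the checkpoints,
// wrap it in the SDK's BlobCheckpointStore, and bind it in the registry so
// an endpoint can reference it via checkpointStore=#myCheckpointStore.
BlobContainerAsyncClient containerClient = new BlobContainerClientBuilder()
        .connectionString(blobConnectionString) // placeholder credential
        .containerName("checkpoints")
        .buildAsyncClient();

camelContext.getRegistry().bind("myCheckpointStore",
        new BlobCheckpointStore(containerClient));
```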
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 111 |
|
| 112 |
## Message body type
|
| 113 |
|
| 114 |
+
The azure-eventhubs producer expects the data in the message body to be
|
| 115 |
+
of type `byte[]`. This allows simple messages (e.g. `String`-based
|
| 116 |
+
ones) to be marshalled/unmarshalled with ease. The same is true for the
|
| 117 |
+
azure-eventhubs consumer: it will set the encoded data as `byte[]` in
|
| 118 |
+
the message body.
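When a route is fed with non-binary payloads, one simple pattern (the endpoint URI is a placeholder) is to convert the body explicitly before the producer:

```java
// Convert the incoming payload (e.g. a String) to byte[] so the
// azure-eventhubs producer can send it as event data.
from("direct:sendText")
    .convertBodyTo(byte[].class)
    .to("azure-eventhubs:?connectionString=RAW({{connectionString}})");
```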
|
| 119 |
|
| 120 |
+
## Automatic detection of EventHubProducerAsyncClient client in the Camel registry
|
| 121 |
|
| 122 |
The component is capable of detecting the presence of an
|
| 123 |
EventHubProducerAsyncClient bean in the registry. If it’s the only
|
|
|
|
| 125 |
to define it as a URI parameter, like the example above. This may be
|
| 126 |
really useful for smarter configuration of the endpoint.
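A hedged sketch of this setup (the bean name and connection string are illustrative): with a single client bean bound in the registry, the endpoint URI needs no client parameter.

```java
// One EventHubProducerAsyncClient bean in the registry: the component can
// detect and reuse it, keeping the endpoint URI free of client options.
EventHubProducerAsyncClient client = new EventHubClientBuilder()
        .connectionString(connectionString) // placeholder
        .buildAsyncProducerClient();

camelContext.getRegistry().bind("eventHubClient", client);

from("direct:start")
    .to("azure-eventhubs:myNamespace/myEventHub");
```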
|
| 127 |
|
| 128 |
+
# Examples
|
| 129 |
+
|
| 130 |
## Consumer Example
|
| 131 |
|
| 132 |
+
To consume events:
|
|
|
|
| 133 |
|
| 134 |
+
from("azure-eventhubs:/camel/camelHub?sharedAccessName=SASaccountName&sharedAccessKey=SASaccessKey&blobAccountName=accountName&blobAccessKey=accessKey&blobContainerName=containerName")
|
| 135 |
+
.to("file://queuedirectory");
|
|
|
|
|
|
|
| 136 |
|
| 137 |
## Producer Example
|
| 138 |
|
| 139 |
+
To produce events:
|
| 140 |
|
| 141 |
from("direct:start")
|
| 142 |
+
.process(exchange -> {
|
| 143 |
exchange.getIn().setHeader(EventHubsConstants.PARTITION_ID, firstPartition);
|
| 144 |
exchange.getIn().setBody("test event");
|
| 145 |
+
})
|
| 146 |
+
.to("azure-eventhubs:?connectionString=RAW({{connectionString}})");
|
| 147 |
|
| 148 |
+
The azure-eventhubs producer supports sending events as an
|
| 149 |
+
`Iterable` (E.g. as a `List`). For example:
|
|
|
|
| 150 |
|
| 151 |
from("direct:start")
|
| 152 |
+
.process(exchange -> {
|
| 153 |
final List<String> messages = new LinkedList<>();
|
| 154 |
messages.add("Test String Message 1");
|
| 155 |
messages.add("Test String Message 2");
|
| 156 |
|
| 157 |
exchange.getIn().setHeader(EventHubsConstants.PARTITION_ID, firstPartition);
|
| 158 |
exchange.getIn().setBody(messages);
|
| 159 |
+
})
|
| 160 |
+
.to("azure-eventhubs:?connectionString=RAW({{connectionString}})");
|
| 161 |
|
| 162 |
## Azure-AD Authentication example
|
| 163 |
|
|
|
|
| 171 |
}
|
| 172 |
|
| 173 |
from("direct:start")
|
| 174 |
+
.to("azure-eventhubs:namespace/eventHubName?tokenCredential=#myTokenCredential&credentialType=TOKEN_CREDENTIAL");
|
| 175 |
|
| 176 |
+
# Important Development Notes
|
| 177 |
|
| 178 |
When developing on this component, you will need to obtain your Azure
|
| 179 |
accessKey to run the integration tests. In addition to the mocked unit
|
|
|
|
| 193 |
|
| 194 |
|Name|Description|Default|Type|
|
| 195 |
|---|---|---|---|
|
| 196 |
+
|amqpRetryOptions|Sets the retry policy for EventHubProducerAsyncClient. If not specified, the default retry options are used.||object|
|
| 197 |
+
|amqpTransportType|Sets the transport type by which all the communication with Azure Event Hubs occurs.|AMQP|object|
|
| 198 |
|configuration|The component configurations||object|
|
| 199 |
|blobAccessKey|In case you chose the default BlobCheckpointStore, this sets access key for the associated azure account name to be used for authentication with azure blob services.||string|
|
| 200 |
|blobAccountName|In case you chose the default BlobCheckpointStore, this sets Azure account name to be used for authentication with azure blob services.||string|
|
| 201 |
|blobContainerName|In case you chose the default BlobCheckpointStore, this sets the blob container that shall be used by the BlobCheckpointStore to store the checkpoint offsets.||string|
|
| 202 |
|blobStorageSharedKeyCredential|In case you chose the default BlobCheckpointStore, StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information.||object|
|
| 203 |
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|
| 204 |
+
|checkpointBatchSize|Sets the batch size between each checkpoint update. Works jointly with checkpointBatchTimeout.|500|integer|
|
| 205 |
+
|checkpointBatchTimeout|Sets the batch timeout between each checkpoint update. Works jointly with checkpointBatchSize.|5000|integer|
|
| 206 |
+
|checkpointStore|Sets the CheckpointStore the EventProcessorClient will use for storing partition ownership and checkpoint information. Users can, optionally, provide their own implementation of CheckpointStore which will store ownership and checkpoint information. By default, it's set to use com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore which stores all checkpoint offsets into Azure Blob Storage.|BlobCheckpointStore|object|
|
| 207 |
+
|consumerGroupName|Sets the name of the consumer group this consumer is associated with. Events are read in the context of this group. The name of the consumer group that is created by default is $Default.|$Default|string|
|
| 208 |
+
|eventPosition|Sets the map containing the event position to use for each partition if a checkpoint for the partition does not exist in CheckpointStore. This map is keyed off of the partition id. If there is no checkpoint in CheckpointStore and there is no entry in this map, the processing of the partition will start from EventPosition#latest() position.||object|
|
| 209 |
|prefetchCount|Sets the count used by the receiver to control the number of events the Event Hub consumer will actively receive and queue locally without regard to whether a receive operation is currently active.|500|integer|
|
| 210 |
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|
| 211 |
+
|partitionId|Sets the identifier of the Event Hub partition that the EventData events will be sent to. If the identifier is not specified, the Event Hubs service will be responsible for routing events that are sent to an available partition.||string|
|
| 212 |
+
|partitionKey|Sets a hashing key to be provided for the batch of events, which instructs the Event Hubs service to map this key to a specific partition. The selection of a partition is stable for a given partition hashing key. Should any other batches of events be sent using the same exact partition hashing key, the Event Hubs service will route them all to the same partition. This should be specified only when there is a need to group events by partition, but there is flexibility into which partition they are routed. If ensuring that a batch of events is sent only to a specific partition, it is recommended that the identifier of the position be specified directly when sending the batch.||string|
|
| 213 |
+
|producerAsyncClient|Sets the EventHubProducerAsyncClient. An asynchronous producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the com.azure.messaging.eventhubs.models.CreateBatchOptions options specified when creating a com.azure.messaging.eventhubs.EventDataBatch, the events may be automatically routed to an available partition or specific to a partition. Used by this component to produce the data in the Camel producer.||object|
|
| 214 |
|autowiredEnabled|Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc.|true|boolean|
|
| 215 |
+
|connectionString|Instead of supplying namespace, sharedAccessKey, sharedAccessName, etc. you can supply the connection string for your eventHub. The connection string for EventHubs already includes all the necessary information to connect to your EventHub. To learn how to generate the connection string, take a look at this documentation: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string||string|
|
| 216 |
|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object|
|
| 217 |
|sharedAccessKey|The generated value for the SharedAccessName.||string|
|
| 218 |
|sharedAccessName|The name you chose for your EventHubs SAS keys.||string|
|
| 219 |
+
|tokenCredential|Provide custom authentication credentials using an implementation of TokenCredential.||object|
|
| 220 |
|
| 221 |
## Endpoint Configurations
|
| 222 |
|
|
|
|
| 225 |
|---|---|---|---|
|
| 226 |
|namespace|EventHubs namespace created in Azure Portal.||string|
|
| 227 |
|eventHubName|EventHubs name under a specific namespace.||string|
|
| 228 |
+
|amqpRetryOptions|Sets the retry policy for EventHubProducerAsyncClient. If not specified, the default retry options are used.||object|
|
| 229 |
+
|amqpTransportType|Sets the transport type by which all the communication with Azure Event Hubs occurs.|AMQP|object|
|
| 230 |
|blobAccessKey|In case you chose the default BlobCheckpointStore, this sets access key for the associated azure account name to be used for authentication with azure blob services.||string|
|
| 231 |
|blobAccountName|In case you chose the default BlobCheckpointStore, this sets Azure account name to be used for authentication with azure blob services.||string|
|
| 232 |
|blobContainerName|In case you chose the default BlobCheckpointStore, this sets the blob container that shall be used by the BlobCheckpointStore to store the checkpoint offsets.||string|
|
| 233 |
|blobStorageSharedKeyCredential|In case you chose the default BlobCheckpointStore, StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information.||object|
|
| 234 |
+
|checkpointBatchSize|Sets the batch size between each checkpoint update. Works jointly with checkpointBatchTimeout.|500|integer|
|
| 235 |
+
|checkpointBatchTimeout|Sets the batch timeout between each checkpoint update. Works jointly with checkpointBatchSize.|5000|integer|
|
| 236 |
+
|checkpointStore|Sets the CheckpointStore the EventProcessorClient will use for storing partition ownership and checkpoint information. Users can, optionally, provide their own implementation of CheckpointStore which will store ownership and checkpoint information. By default, it's set to use com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore which stores all checkpoint offsets into Azure Blob Storage.|BlobCheckpointStore|object|
|
| 237 |
+
|consumerGroupName|Sets the name of the consumer group this consumer is associated with. Events are read in the context of this group. The name of the consumer group that is created by default is $Default.|$Default|string|
|
| 238 |
+
|eventPosition|Sets the map containing the event position to use for each partition if a checkpoint for the partition does not exist in CheckpointStore. This map is keyed off of the partition id. If there is no checkpoint in CheckpointStore and there is no entry in this map, the processing of the partition will start from EventPosition#latest() position.||object|
|
| 239 |
|prefetchCount|Sets the count used by the receiver to control the number of events the Event Hub consumer will actively receive and queue locally without regard to whether a receive operation is currently active.|500|integer|
|
| 240 |
|bridgeErrorHandler|Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions (if possible) that occurred while the Camel consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored.|false|boolean|
|
| 241 |
|exceptionHandler|To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored.||object|
|
| 242 |
|exchangePattern|Sets the exchange pattern when the consumer creates an exchange.||object|
|
| 243 |
+
|partitionId|Sets the identifier of the Event Hub partition that the EventData events will be sent to. If the identifier is not specified, the Event Hubs service will be responsible for routing events that are sent to an available partition.||string|
|
| 244 |
+
|partitionKey|Sets a hashing key to be provided for the batch of events, which instructs the Event Hubs service to map this key to a specific partition. The selection of a partition is stable for a given partition hashing key. Should any other batches of events be sent using the same exact partition hashing key, the Event Hubs service will route them all to the same partition. This should be specified only when there is a need to group events by partition, but there is flexibility into which partition they are routed. If ensuring that a batch of events is sent only to a specific partition, it is recommended that the identifier of the position be specified directly when sending the batch.||string|
|
| 245 |
+
|producerAsyncClient|Sets the EventHubProducerAsyncClient. An asynchronous producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the com.azure.messaging.eventhubs.models.CreateBatchOptions options specified when creating a com.azure.messaging.eventhubs.EventDataBatch, the events may be automatically routed to an available partition or specific to a partition. Used by this component to produce the data in the Camel producer.||object|
|
| 246 |
|lazyStartProducer|Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.|false|boolean|
|
| 247 |
+
|connectionString|Instead of supplying namespace, sharedAccessKey, sharedAccessName, etc. you can supply the connection string for your eventHub. The connection string for EventHubs already includes all the necessary information to connect to your EventHub. To learn how to generate the connection string, take a look at this documentation: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-get-connection-string||string|
|
| 248 |
|credentialType|Determines the credential strategy to adopt|CONNECTION\_STRING|object|
|
| 249 |
|sharedAccessKey|The generated value for the SharedAccessName.||string|
|
| 250 |
|sharedAccessName|The name you chose for your EventHubs SAS keys.||string|
|
| 251 |
+
|tokenCredential|Provide custom authentication credentials using an implementation of TokenCredential.||object|
|
camel-azure-files.md
CHANGED
|
@@ -70,7 +70,9 @@ Azure authentication information:
|
|
| 70 |
|
| 71 |
azure-files://camelazurefiles/samples/inbox/spam?sharedKey=FAKE502UyuBD...3Z%2BASt9dCmJg%3D%3D&delete=true
|
| 72 |
|
| 73 |
-
#
|
|
|
|
|
|
|
| 74 |
|
| 75 |
The path separator is `/`. The absolute paths start with the path
|
| 76 |
separator. The absolute paths do not include the share name, and they
|
|
@@ -82,18 +84,18 @@ path separator appears, and the relative paths are relative to the share
|
|
| 82 |
root (rather than to the current working directory or to the endpoint
|
| 83 |
starting directory) so interpret them with a grain of salt.
|
| 84 |
|
| 85 |
-
# Concurrency
|
| 86 |
|
| 87 |
This component does not support concurrency on its endpoints.
|
| 88 |
|
| 89 |
-
# More Information
|
| 90 |
|
| 91 |
This component mimics the FTP component. So, there are more samples and
|
| 92 |
details on the FTP component page.
|
| 93 |
|
| 94 |
This component uses the Azure Java SDK libraries for the actual work.
|
| 95 |
|
| 96 |
-
# Consuming Files
|
| 97 |
|
| 98 |
The remote consumer will by default leave the consumed files untouched
|
| 99 |
on the remote cloud files server. You have to configure it explicitly if
|
|
@@ -108,7 +110,7 @@ to a `.camel` sub directory. The reason Camel does **not** do this by
|
|
| 108 |
default for the remote consumer is that it may lack permissions by
|
| 109 |
default to be able to move or delete files.
|
| 110 |
|
| 111 |
-
## Body Type Options
|
| 112 |
|
| 113 |
For each matching file, the consumer sends to the Camel exchange a
|
| 114 |
message with a selected body type:
|
|
@@ -122,7 +124,7 @@ message with a selected body type:
|
|
| 122 |
The body type configuration should be tuned to fit available resources,
|
| 123 |
performance targets, route processors, caching, resuming, etc.
|
| 124 |
|
| 125 |
-
## Limitations
|
| 126 |
|
| 127 |
The option **readLock** can be used to force Camel **not** to consume
|
| 128 |
files that are currently in the progress of being written. However, this
|
|
@@ -150,23 +152,23 @@ The consumer sets the following exchange properties
|
|
| 150 |
<col style="width: 50%" />
|
| 151 |
</colgroup>
|
| 152 |
<thead>
|
| 153 |
-
<tr>
|
| 154 |
<th style="text-align: left;">Header</th>
|
| 155 |
<th style="text-align: left;">Description</th>
|
| 156 |
</tr>
|
| 157 |
</thead>
|
| 158 |
<tbody>
|
| 159 |
-
<tr>
|
| 160 |
<td style="text-align: left;"><p><code>CamelBatchIndex</code></p></td>
|
| 161 |
<td style="text-align: left;"><p>The current index out of total number
|
| 162 |
of files being consumed in this batch.</p></td>
|
| 163 |
</tr>
|
| 164 |
-
<tr>
|
| 165 |
<td style="text-align: left;"><p><code>CamelBatchSize</code></p></td>
|
| 166 |
<td style="text-align: left;"><p>The total number of files being
|
| 167 |
consumed in this batch.</p></td>
|
| 168 |
</tr>
|
| 169 |
-
<tr>
|
| 170 |
<td
|
| 171 |
style="text-align: left;"><p><code>CamelBatchComplete</code></p></td>
|
| 172 |
<td style="text-align: left;"><p>True if there are no more files in this
|
|
@@ -175,7 +177,7 @@ batch.</p></td>
|
|
| 175 |
</tbody>
|
| 176 |
</table>
|
| 177 |
|
| 178 |
-
# Producing Files
|
| 179 |
|
| 180 |
The Files producer is optimized for two body types:
|
| 181 |
|
|
@@ -187,7 +189,7 @@ In either case, the remote file size is allocated and then rewritten
|
|
| 187 |
with body content. Any inconsistency between declared file length and
|
| 188 |
stream length results in a corrupted remote file.
|
| 189 |
|
| 190 |
-
## Limitations
|
| 191 |
|
| 192 |
The underlying Azure Files service does not allow growing files. The
|
| 193 |
file length must be known at its creation time, consequently:
|
|
@@ -197,7 +199,7 @@ file length must be known at its creation time, consequently:
|
|
| 197 |
|
| 198 |
- No appending mode is supported.
|
| 199 |
|
| 200 |
-
# About Timeouts
|
| 201 |
|
| 202 |
You can use the `connectTimeout` option to set a timeout in millis to
|
| 203 |
connect or disconnect.
|
|
@@ -211,7 +213,7 @@ For now, the file upload has no timeout. During the upload, the
|
|
| 211 |
underlying library could log timeout warnings. They are recoverable and
|
| 212 |
upload could continue.
|
| 213 |
|
| 214 |
-
# Using Local Work Directory
|
| 215 |
|
| 216 |
Camel supports consuming from remote files servers and downloading the
|
| 217 |
files directly into a local work directory. This avoids reading the
|
|
@@ -238,7 +240,7 @@ directly on the work file `java.io.File` handle and perform a
|
|
| 238 |
local work file, it can optimize and use a rename instead of a file
|
| 239 |
copy, as the work file is meant to be deleted anyway.
|
| 240 |
|
| 241 |
-
# Custom Filtering
|
| 242 |
|
| 243 |
Camel supports pluggable filtering strategies. This strategy is to use
|
| 244 |
the built-in `org.apache.camel.component.file.GenericFileFilter` in
|
|
@@ -262,7 +264,7 @@ The accept(file) file argument has properties:
|
|
| 262 |
|
| 263 |
- file length: if not a directory, then a length of the file in bytes
|
| 264 |
|
| 265 |
-
# Filtering using ANT path matcher
|
| 266 |
|
| 267 |
The ANT path matcher is a filter shipped out-of-the-box in the
|
| 268 |
**camel-spring** jar. So you need to depend on **camel-spring** if you
|
|
@@ -283,13 +285,13 @@ The sample below demonstrates how to use it:
|
|
| 283 |
|
| 284 |
from("azure-files://...&antInclude=**/*.txt").to("...");
|
| 285 |
|
| 286 |
-
# Using a Proxy
|
| 287 |
|
| 288 |
Consult the [underlying
|
| 289 |
library](https://learn.microsoft.com/en-us/azure/developer/java/sdk/proxying)
|
| 290 |
documentation.
|
| 291 |
|
| 292 |
-
# Consuming a single file using a fixed name
|
| 293 |
|
| 294 |
Unlike FTP component that features a special combination of options:
|
| 295 |
|
|
@@ -303,7 +305,7 @@ to optimize *the single file using a fixed name* use case, it is
|
|
| 303 |
necessary to fall back to regular filters (i.e. the list permission is
|
| 304 |
needed).
|
| 305 |
|
| 306 |
-
# Debug logging
|
| 307 |
|
| 308 |
This component has log level **TRACE** that can be helpful if you have
|
| 309 |
problems.
|
|
|
|
| 70 |
|
| 71 |
azure-files://camelazurefiles/samples/inbox/spam?sharedKey=FAKE502UyuBD...3Z%2BASt9dCmJg%3D%3D&delete=true
|
| 72 |
|
| 73 |
+
# Usage
|
| 74 |
+
|
| 75 |
+
## Paths
|
| 76 |
|
| 77 |
The path separator is `/`. The absolute paths start with the path
|
| 78 |
separator. The absolute paths do not include the share name, and they
|
|
|
|
| 84 |
root (rather than to the current working directory or to the endpoint
|
| 85 |
starting directory) so interpret them with a grain of salt.
|
| 86 |
|
| 87 |
+
## Concurrency
|
| 88 |
|
| 89 |
This component does not support concurrency on its endpoints.
|
| 90 |
|
| 91 |
+
## More Information
|
| 92 |
|
| 93 |
This component mimics the FTP component. So, there are more samples and
|
| 94 |
details on the FTP component page.
|
| 95 |
|
| 96 |
This component uses the Azure Java SDK libraries for the actual work.
|
| 97 |
|
| 98 |
+
## Consuming Files
|
| 99 |
|
| 100 |
The remote consumer will by default leave the consumed files untouched
|
| 101 |
on the remote cloud files server. You have to configure it explicitly if
|
|
|
|
| 110 |
default for the remote consumer is that it may lack permissions by
|
| 111 |
default to be able to move or delete files.
|
| 112 |
|
| 113 |
+
### Body Type Options
|
| 114 |
|
| 115 |
For each matching file, the consumer sends to the Camel exchange a
|
| 116 |
message with a selected body type:
|
|
|
|
| 124 |
The body type configuration should be tuned to fit available resources,
|
| 125 |
performance targets, route processors, caching, resuming, etc.
|
| 126 |
|
| 127 |
+
### Limitations
|
| 128 |
|
| 129 |
The option **readLock** can be used to force Camel **not** to consume
|
| 130 |
files that are currently in the progress of being written. However, this
|
|
|
|
| 152 |
<col style="width: 50%" />
|
| 153 |
</colgroup>
|
| 154 |
<thead>
|
| 155 |
+
<tr class="header">
|
| 156 |
<th style="text-align: left;">Header</th>
|
| 157 |
<th style="text-align: left;">Description</th>
|
| 158 |
</tr>
|
| 159 |
</thead>
|
| 160 |
<tbody>
|
| 161 |
+
<tr class="odd">
|
| 162 |
<td style="text-align: left;"><p><code>CamelBatchIndex</code></p></td>
|
| 163 |
<td style="text-align: left;"><p>The current index out of total number
|
| 164 |
of files being consumed in this batch.</p></td>
|
| 165 |
</tr>
|
| 166 |
+
<tr class="even">
|
| 167 |
<td style="text-align: left;"><p><code>CamelBatchSize</code></p></td>
|
| 168 |
<td style="text-align: left;"><p>The total number of files being
|
| 169 |
consumed in this batch.</p></td>
|
| 170 |
</tr>
|
| 171 |
+
<tr class="odd">
|
| 172 |
<td
|
| 173 |
style="text-align: left;"><p><code>CamelBatchComplete</code></p></td>
|
| 174 |
<td style="text-align: left;"><p>True if there are no more files in this
|
|
|
|
| 177 |
</tbody>
|
| 178 |
</table>
|
| 179 |
|
| 180 |
+
## Producing Files
|
| 181 |
|
| 182 |
The Files producer is optimized for two body types:
|
| 183 |
|
|
|
|
| 189 |
with body content. Any inconsistency between declared file length and
|
| 190 |
stream length results in a corrupted remote file.
|
| 191 |
|
| 192 |
+
### Limitations
|
| 193 |
|
| 194 |
The underlying Azure Files service does not allow growing files. The
|
| 195 |
file length must be known at its creation time, consequently:
|
|
|
|
| 199 |
|
| 200 |
- No appending mode is supported.
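Given these constraints, uploading `File` bodies is the safest producer pattern, since a `File` carries its own length and avoids the declared-length/stream-length mismatch described above. A sketch (share name and credentials are placeholders):

```java
from("file:outbox")
    .to("azure-files://camelazurefiles/samples/outbox?sharedKey=...");
```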
|
| 201 |
|
| 202 |
+
## About Timeouts
|
| 203 |
|
| 204 |
You can use the `connectTimeout` option to set a timeout in millis to
|
| 205 |
connect or disconnect.
|
|
|
|
| 213 |
underlying library could log timeout warnings. They are recoverable and
|
| 214 |
upload could continue.
|
| 215 |
|
| 216 |
+
## Using Local Work Directory
|
| 217 |
|
| 218 |
Camel supports consuming from remote files servers and downloading the
|
| 219 |
files directly into a local work directory. This avoids reading the
|
|
|
|
| 240 |
local work file, it can optimize and use a rename instead of a file
|
| 241 |
copy, as the work file is meant to be deleted anyway.
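As with the FTP component, this is enabled with the `localWorkDirectory` option. A sketch reusing the sample share (credentials are placeholders):

```java
from("azure-files://camelazurefiles/samples/inbox?sharedKey=...&localWorkDirectory=/tmp")
    .to("file:inbox");
```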
|
| 242 |
|
| 243 |
+
## Custom Filtering
|
| 244 |
|
| 245 |
Camel supports pluggable filtering strategies. This strategy is to use
|
| 246 |
the built-in `org.apache.camel.component.file.GenericFileFilter` in
|
|
|
|
| 264 |
|
| 265 |
- file length: if not a directory, then a length of the file in bytes
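The kind of logic such a filter implements can be sketched in plain Java. This is a hedged, self-contained stand-in (the class name `TxtFileFilter` is hypothetical): a real filter implements `GenericFileFilter` and reads the same file name, directory flag, and length from the `GenericFile` argument of its `accept()` method.

```java
// Hypothetical sketch of custom filter logic: accept directories (so the
// consumer can descend into them) and non-hidden, non-empty *.txt files.
public class TxtFileFilter {

    public static boolean accept(String fileName, boolean directory, long length) {
        if (directory) {
            return true; // always descend into directories
        }
        return !fileName.startsWith(".")
                && fileName.endsWith(".txt")
                && length > 0;
    }

    public static void main(String[] args) {
        System.out.println(accept("report.txt", false, 120)); // true
        System.out.println(accept(".hidden.txt", false, 10)); // false
        System.out.println(accept("inbox", true, 0));         // true
    }
}
```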
|
| 266 |
|
| 267 |
+
## Filtering using ANT path matcher
|
| 268 |
|
| 269 |
The ANT path matcher is a filter shipped out-of-the-box in the
|
| 270 |
**camel-spring** jar. So you need to depend on **camel-spring** if you
|
|
|
|
| 285 |
|
| 286 |
from("azure-files://...&antInclude=**/*.txt").to("...");
|
| 287 |
|
| 288 |
+
## Using a Proxy
|
| 289 |
|
| 290 |
Consult the [underlying
|
| 291 |
library](https://learn.microsoft.com/en-us/azure/developer/java/sdk/proxying)
|
| 292 |
documentation.
|
| 293 |
|
| 294 |
+
## Consuming a single file using a fixed name
|
| 295 |
|
| 296 |
Unlike the FTP component, which features a special combination of options:
|
| 297 |
|
|
|
|
| 305 |
necessary to fall back to regular filters (i.e. the list permission is
|
| 306 |
needed).
|
| 307 |
|
| 308 |
+
## Debug logging
|
| 309 |
|
| 310 |
This component provides **TRACE** level logging that can be helpful if you have
|
| 311 |
problems.
|
camel-azure-key-vault.md
CHANGED
|
@@ -53,6 +53,11 @@ You can also enable the usage of Azure Identity in the
|
|
| 53 |
camel.vault.azure.azureIdentityEnabled = true
|
| 54 |
camel.vault.azure.vaultName = vaultName
|
| 55 |
|
|
|
|
| 56 |
At this point, you’ll be able to reference a property in the following
|
| 57 |
way:
|
| 58 |
|
|
@@ -97,7 +102,7 @@ example:
|
|
| 97 |
<camelContext>
|
| 98 |
<route>
|
| 99 |
<from uri="direct:start"/>
|
| 100 |
-
<log message="Username is {{azure:database
|
| 101 |
</route>
|
| 102 |
</camelContext>
|
| 103 |
|
|
@@ -109,7 +114,7 @@ is not present on Azure Key Vault:
|
|
| 109 |
<camelContext>
|
| 110 |
<route>
|
| 111 |
<from uri="direct:start"/>
|
| 112 |
-
<log message="Username is {{azure:database
|
| 113 |
</route>
|
| 114 |
</camelContext>
|
| 115 |
|
|
@@ -145,7 +150,7 @@ secret doesn’t exist or the version doesn’t exist.
|
|
| 145 |
<camelContext>
|
| 146 |
<route>
|
| 147 |
<from uri="direct:start"/>
|
| 148 |
-
<log message="Username is {{azure:database
|
| 149 |
</route>
|
| 150 |
</camelContext>
|
| 151 |
|
|
@@ -223,6 +228,98 @@ or the properties with an `azure:` prefix.
|
|
| 223 |
The only requirement is adding the camel-azure-key-vault jar to your
|
| 224 |
Camel application.
|
| 225 |
|
| 226 |
## Azure Key Vault Producer operations
|
| 227 |
|
| 228 |
Azure Key Vault component provides the following operation on the
|
|
|
|
| 53 |
camel.vault.azure.azureIdentityEnabled = true
|
| 54 |
camel.vault.azure.vaultName = vaultName
|
| 55 |
|
| 56 |
+
`camel.vault.azure` configuration only applies to the Azure Key Vault
|
| 57 |
+
properties function (e.g., when resolving properties). When using the
|
| 58 |
+
`operation` option to create, get, list secrets etc., you should provide
|
| 59 |
+
the usual options for connecting to Azure Services.
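For example, a producer endpoint performing an operation carries its own credentials on the endpoint URI. A sketch (vault name and credential values are placeholders; parameter names follow the component's usual Azure credential options):

```java
from("direct:createSecret")
    .to("azure-key-vault://myVault?clientId=RAW(...)&clientSecret=RAW(...)&tenantId=RAW(...)&operation=createSecret");
```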
|
| 60 |
+
|
| 61 |
At this point, you’ll be able to reference a property in the following
|
| 62 |
way:
|
| 63 |
|
|
|
|
| 102 |
<camelContext>
|
| 103 |
<route>
|
| 104 |
<from uri="direct:start"/>
|
| 105 |
+
<log message="Username is {{azure:database#username}}"/>
|
| 106 |
</route>
|
| 107 |
</camelContext>
|
| 108 |
|
|
|
|
| 114 |
<camelContext>
|
| 115 |
<route>
|
| 116 |
<from uri="direct:start"/>
|
| 117 |
+
<log message="Username is {{azure:database#username:admin}}"/>
|
| 118 |
</route>
|
| 119 |
</camelContext>
|
| 120 |
|
|
|
|
| 150 |
<camelContext>
|
| 151 |
<route>
|
| 152 |
<from uri="direct:start"/>
|
| 153 |
+
<log message="Username is {{azure:database#username:admin@bf9b4f4b-8e63-43fd-a73c-3e2d3748b451}}"/>
|
| 154 |
</route>
|
| 155 |
</camelContext>
|
| 156 |
|
|
|
|
| 228 |
The only requirement is adding the camel-azure-key-vault jar to your
|
| 229 |
Camel application.
|
| 230 |
|
| 231 |
+
## Automatic Camel context reloading on Secret Refresh - Required Infrastructure’s creation
|
| 232 |
+
|
| 233 |
+
First of all, we need to create an application:
|
| 234 |
+
|
| 235 |
+
`az ad app create --display-name test-app-key-vault`
|
| 236 |
+
|
| 237 |
+
Then we need to obtain credentials
|
| 238 |
+
|
| 239 |
+
`az ad app credential reset --id <appId> --append
|
| 240 |
+
--display-name "Description: Key Vault app client" --end-date
|
| 241 |
+
"2024-12-31"`
|
| 242 |
+
|
| 243 |
+
This will return a result like this
|
| 244 |
+
|
| 245 |
+
`{ "appId": "appId", "password": "pwd", "tenant": "tenantId" }`
|
| 246 |
+
|
| 247 |
+
|
| 248 |
+
You should take note of the password and use it as the clientSecret
|
| 249 |
+
parameter, together with the clientId and tenantId.
|
| 250 |
+
|
| 251 |
+
Now create the key vault
|
| 252 |
+
|
| 253 |
+
`az keyvault create --name <vaultName> --resource-group
|
| 254 |
+
<resourceGroup>`
|
| 255 |
+
|
| 256 |
+
Create a service principal associated with the application Id
|
| 257 |
+
|
| 258 |
+
`az ad sp create --id <appId>`
|
| 259 |
+
|
| 260 |
+
At this point we need to add a role to the application with role
|
| 261 |
+
assignment
|
| 262 |
+
|
| 263 |
+
`az role assignment create --assignee <appId> --role "Key
|
| 264 |
+
Vault Administrator" --scope
|
| 265 |
+
/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.KeyVault/vaults/<vaultName>`
|
| 267 |
+
|
| 268 |
+
The last step is to create a policy defining what can and cannot be done
|
| 269 |
+
with the application. In this case, we just want to read the secret
|
| 270 |
+
value, so the following should be enough.
|
| 271 |
+
|
| 272 |
+
`az keyvault set-policy --name <vaultName> --spn
|
| 273 |
+
<appId> --secret-permissions get`
|
| 274 |
+
|
| 275 |
+
You can create a secret through Azure CLI with the following command:
|
| 276 |
+
|
| 277 |
+
`az keyvault secret set --name <secret_name> --vault-name
|
| 278 |
+
<vaultName> -f <json-secret>`
|
| 279 |
+
|
| 280 |
+
Now we need to set up the Eventhub/EventGrid notification to be
|
| 281 |
+
informed about secret updates.
|
| 282 |
+
|
| 283 |
+
First of all we’ll need a Blob account and Blob container, to track
|
| 284 |
+
Eventhub consuming activities.
|
| 285 |
+
|
| 286 |
+
`az storage account create --name <blobAccountName>
|
| 287 |
+
--resource-group <resourceGroup>`
|
| 288 |
+
|
| 289 |
+
Then create a container
|
| 290 |
+
|
| 291 |
+
`az storage container create --account-name
|
| 292 |
+
<blobAccountName> --name <blobContainerName>`
|
| 293 |
+
|
| 294 |
+
Then recover the access key for this purpose
|
| 295 |
+
|
| 296 |
+
`az storage account keys list -g <resourceGroup> -n
|
| 297 |
+
<blobAccountName>`
|
| 298 |
+
|
| 299 |
+
Take note of the blob Account name, blob Container name and Blob Access
|
| 300 |
+
Key to be used for setting up the vault.
|
| 301 |
+
|
| 302 |
+
Let’s now create the Eventhub side
|
| 303 |
+
|
| 304 |
+
Create the namespace first
|
| 305 |
+
|
| 306 |
+
`az eventhubs namespace create --resource-group
|
| 307 |
+
<resourceGroup> --name <eventhub-namespace> --location
|
| 308 |
+
westus --sku Standard --enable-auto-inflate --maximum-throughput-units
|
| 309 |
+
20`
|
| 310 |
+
|
| 311 |
+
Now create the resource
|
| 312 |
+
|
| 313 |
+
`az eventhubs eventhub create --resource-group
|
| 314 |
+
<resourceGroup> --namespace-name <eventhub-namespace> --name
|
| 315 |
+
<eventhub-name> --cleanup-policy Delete --partition-count 15`
|
| 317 |
+
|
| 318 |
+
In the Azure portal, create a shared access policy for the newly created eventhub
|
| 319 |
+
resource with "MANAGE" permissions and copy the connection string.
|
| 320 |
+
|
| 321 |
+
You now have all the required parameters to set up the vault.
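With the infrastructure in place, the refresh can then be configured with properties along these lines (a sketch: the property names follow the `camel.vault.azure` namespace used earlier, and the values are placeholders to be replaced with those noted above):

```
camel.vault.azure.refreshEnabled=true
camel.vault.azure.refreshPeriod=60000
camel.vault.azure.secrets=<secretName>
camel.vault.azure.eventhubConnectionString=<eventhubConnectionString>
camel.vault.azure.blobAccountName=<blobAccountName>
camel.vault.azure.blobContainerName=<blobContainerName>
camel.vault.azure.blobAccessKey=<blobAccessKey>
```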
|
| 322 |
+
|
| 323 |
## Azure Key Vault Producer operations
|
| 324 |
|
| 325 |
Azure Key Vault component provides the following operation on the
|
camel-azure-schema-registry.md
ADDED
|
@@ -0,0 +1,6 @@
|
|
| 1 |
+
# Azure Schema Registry
|
| 2 |
+
|
| 3 |
+
**Since Camel 4.2**
|
| 4 |
+
|
| 5 |
+
The camel-azure-schema-registry component contains some useful classes
|
| 6 |
+
to deal with authentication against the Azure Schema Registry
|
camel-azure-servicebus.md
CHANGED
|
@@ -25,11 +25,11 @@ Portal](https://docs.microsoft.com/azure/).
|
|
| 25 |
<!-- use the same version as your Camel core version -->
|
| 26 |
</dependency>
|
| 27 |
|
| 28 |
-
#
|
| 29 |
|
| 30 |
-
|
| 31 |
|
| 32 |
-
|
| 33 |
|
| 34 |
## Authentication Information
|
| 35 |
|
|
@@ -84,18 +84,18 @@ In the consumer, the returned message body will be of type \`String.
|
|
| 84 |
<col style="width: 89%" />
|
| 85 |
</colgroup>
|
| 86 |
<thead>
|
| 87 |
-
<tr>
|
| 88 |
<th style="text-align: left;">Operation</th>
|
| 89 |
<th style="text-align: left;">Description</th>
|
| 90 |
</tr>
|
| 91 |
</thead>
|
| 92 |
<tbody>
|
| 93 |
-
<tr>
|
| 94 |
<td style="text-align: left;"><p><code>sendMessages</code></p></td>
|
| 95 |
<td style="text-align: left;"><p>Sends a set of messages to a Service
|
| 96 |
Bus queue or topic using a batched approach.</p></td>
|
| 97 |
</tr>
|
| 98 |
-
<tr>
|
| 99 |
<td style="text-align: left;"><p><code>scheduleMessages</code></p></td>
|
| 100 |
<td style="text-align: left;"><p>Sends a scheduled message to the Azure
|
| 101 |
Service Bus entity this sender is connected to. A scheduled message is
|
|
@@ -113,18 +113,18 @@ time.</p></td>
|
|
| 113 |
<col style="width: 89%" />
|
| 114 |
</colgroup>
|
| 115 |
<thead>
|
| 116 |
-
<tr>
|
| 117 |
<th style="text-align: left;">Operation</th>
|
| 118 |
<th style="text-align: left;">Description</th>
|
| 119 |
</tr>
|
| 120 |
</thead>
|
| 121 |
<tbody>
|
| 122 |
-
<tr>
|
| 123 |
<td style="text-align: left;"><p><code>receiveMessages</code></p></td>
|
| 124 |
<td style="text-align: left;"><p>Receives an <b>infinite</b>
|
| 125 |
stream of messages from the Service Bus entity.</p></td>
|
| 126 |
</tr>
|
| 127 |
-
<tr>
|
| 128 |
<td style="text-align: left;"><p><code>peekMessages</code></p></td>
|
| 129 |
<td style="text-align: left;"><p>Reads the next batch of active messages
|
| 130 |
without changing the state of the receiver or the message
|
|
@@ -133,7 +133,7 @@ source.</p></td>
|
|
| 133 |
</tbody>
|
| 134 |
</table>
|
| 135 |
|
| 136 |
-
#
|
| 137 |
|
| 138 |
- `sendMessages`
|
| 139 |
|
|
|
|
| 25 |
<!-- use the same version as your Camel core version -->
|
| 26 |
</dependency>
|
| 27 |
|
| 28 |
+
# Usage
|
| 29 |
|
| 30 |
+
## Consumer and Producer
|
| 31 |
|
| 32 |
+
This component implements the Consumer and Producer.
|
| 33 |
|
| 34 |
## Authentication Information
|
| 35 |
|
|
|
|
| 84 |
<col style="width: 89%" />
|
| 85 |
</colgroup>
|
| 86 |
<thead>
|
| 87 |
+
<tr class="header">
|
| 88 |
<th style="text-align: left;">Operation</th>
|
| 89 |
<th style="text-align: left;">Description</th>
|
| 90 |
</tr>
|
| 91 |
</thead>
|
| 92 |
<tbody>
|
| 93 |
+
<tr class="odd">
|
| 94 |
<td style="text-align: left;"><p><code>sendMessages</code></p></td>
|
| 95 |
<td style="text-align: left;"><p>Sends a set of messages to a Service
|
| 96 |
Bus queue or topic using a batched approach.</p></td>
|
| 97 |
</tr>
|
| 98 |
+
<tr class="even">
|
| 99 |
<td style="text-align: left;"><p><code>scheduleMessages</code></p></td>
|
| 100 |
<td style="text-align: left;"><p>Sends a scheduled message to the Azure
|
| 101 |
Service Bus entity this sender is connected to. A scheduled message is
|
|
|
|
| 113 |
<col style="width: 89%" />
|
| 114 |
</colgroup>
|
| 115 |
<thead>
|
| 116 |
+
<tr class="header">
|
| 117 |
<th style="text-align: left;">Operation</th>
|
| 118 |
<th style="text-align: left;">Description</th>
|
| 119 |
</tr>
|
| 120 |
</thead>
|
| 121 |
<tbody>
|
| 122 |
+
<tr class="odd">
|
| 123 |
<td style="text-align: left;"><p><code>receiveMessages</code></p></td>
|
| 124 |
<td style="text-align: left;"><p>Receives an <b>infinite</b>
|
| 125 |
stream of messages from the Service Bus entity.</p></td>
|
| 126 |
</tr>
|
| 127 |
+
<tr class="even">
|
| 128 |
<td style="text-align: left;"><p><code>peekMessages</code></p></td>
|
| 129 |
<td style="text-align: left;"><p>Reads the next batch of active messages
|
| 130 |
without changing the state of the receiver or the message
|
|
|
|
| 133 |
</tbody>
|
| 134 |
</table>
|
| 135 |
|
| 136 |
+
# Examples
|
| 137 |
|
| 138 |
- `sendMessages`
|
| 139 |
|
camel-azure-storage-blob.md
CHANGED
|
@@ -129,19 +129,19 @@ For these operations, `accountName` is **required**.
|
|
| 129 |
<col style="width: 89%" />
|
| 130 |
</colgroup>
|
| 131 |
<thead>
|
| 132 |
-
<tr>
|
| 133 |
<th style="text-align: left;">Operation</th>
|
| 134 |
<th style="text-align: left;">Description</th>
|
| 135 |
</tr>
|
| 136 |
</thead>
|
| 137 |
<tbody>
|
| 138 |
-
<tr>
|
| 139 |
<td
|
| 140 |
style="text-align: left;"><p><code>listBlobContainers</code></p></td>
|
| 141 |
<td style="text-align: left;"><p>Returns a list of the blob containers
|
| 142 |
in the storage account.</p></td>
|
| 143 |
</tr>
|
| 144 |
-
<tr>
|
| 145 |
<td style="text-align: left;"><p><code>getChangeFeed</code></p></td>
|
| 146 |
<td style="text-align: left;"><p>Returns transaction logs of all the
|
| 147 |
changes that occur to the blobs and the blob metadata in your storage
|
|
@@ -162,27 +162,27 @@ For these operations, `accountName` and `containerName` are
|
|
| 162 |
<col style="width: 89%" />
|
| 163 |
</colgroup>
|
| 164 |
<thead>
|
| 165 |
-
<tr>
|
| 166 |
<th style="text-align: left;">Operation</th>
|
| 167 |
<th style="text-align: left;">Description</th>
|
| 168 |
</tr>
|
| 169 |
</thead>
|
| 170 |
<tbody>
|
| 171 |
-
<tr>
|
| 172 |
<td
|
| 173 |
style="text-align: left;"><p><code>createBlobContainer</code></p></td>
|
| 174 |
<td style="text-align: left;"><p>Create a new container within a storage
|
| 175 |
account. If a container with the same name already exists, the producer
|
| 176 |
will ignore it.</p></td>
|
| 177 |
</tr>
|
| 178 |
-
<tr>
|
| 179 |
<td
|
| 180 |
style="text-align: left;"><p><code>deleteBlobContainer</code></p></td>
|
| 181 |
<td style="text-align: left;"><p>Delete the specified container in the
|
| 182 |
storage account. If the container doesn’t exist, the operation
|
| 183 |
fails.</p></td>
|
| 184 |
</tr>
|
| 185 |
-
<tr>
|
| 186 |
<td style="text-align: left;"><p><code>listBlobs</code></p></td>
|
| 187 |
<td style="text-align: left;"><p>Returns a list of blobs in this
|
| 188 |
container, with folder structures flattened.</p></td>
|
|
@@ -202,25 +202,25 @@ For these operations, `accountName`, `containerName` and `blobName` are
|
|
| 202 |
<col style="width: 79%" />
|
| 203 |
</colgroup>
|
| 204 |
<thead>
|
| 205 |
-
<tr>
|
| 206 |
<th style="text-align: left;">Operation</th>
|
| 207 |
<th style="text-align: left;">Blob Type</th>
|
| 208 |
<th style="text-align: left;">Description</th>
|
| 209 |
</tr>
|
| 210 |
</thead>
|
| 211 |
<tbody>
|
| 212 |
-
<tr>
|
| 213 |
<td style="text-align: left;"><p><code>getBlob</code></p></td>
|
| 214 |
<td style="text-align: left;"><p>Common</p></td>
|
| 215 |
<td style="text-align: left;"><p>Get the content of the blob. You can
|
| 216 |
restrict the output of this operation to a blob range.</p></td>
|
| 217 |
</tr>
|
| 218 |
-
<tr>
|
| 219 |
<td style="text-align: left;"><p><code>deleteBlob</code></p></td>
|
| 220 |
<td style="text-align: left;"><p>Common</p></td>
|
| 221 |
<td style="text-align: left;"><p>Delete a blob.</p></td>
|
| 222 |
</tr>
|
| 223 |
-
<tr>
|
| 224 |
<td
|
| 225 |
style="text-align: left;"><p><code>downloadBlobToFile</code></p></td>
|
| 226 |
<td style="text-align: left;"><p>Common</p></td>
|
|
@@ -229,7 +229,7 @@ specified by the path. The file will be created and must not exist, if
|
|
| 229 |
the file already exists a <code>FileAlreadyExistsException</code> will
|
| 230 |
be thrown.</p></td>
|
| 231 |
</tr>
|
| 232 |
-
<tr>
|
| 233 |
<td style="text-align: left;"><p><code>downloadLink</code></p></td>
|
| 234 |
<td style="text-align: left;"><p>Common</p></td>
|
| 235 |
<td style="text-align: left;"><p>Generate the download link for the
|
|
@@ -237,7 +237,7 @@ specified blob using shared access signatures (SAS). This by default
|
|
| 237 |
limits access to one hour. However, you can override the
|
| 238 |
default expiration duration through the headers.</p></td>
|
| 239 |
</tr>
|
| 240 |
-
<tr>
|
| 241 |
<td style="text-align: left;"><p><code>uploadBlockBlob</code></p></td>
|
| 242 |
<td style="text-align: left;"><p>BlockBlob</p></td>
|
| 243 |
<td style="text-align: left;"><p>Creates a new block blob, or updates
|
|
@@ -246,7 +246,7 @@ overwrites any existing metadata on the blob. Partial updates are not
|
|
| 246 |
supported with PutBlob; the content of the existing blob is overwritten
|
| 247 |
with the new content.</p></td>
|
| 248 |
</tr>
|
| 249 |
-
<tr>
|
| 250 |
<td
|
| 251 |
style="text-align: left;"><p><code>stageBlockBlobList</code></p></td>
|
| 252 |
<td style="text-align: left;"><p><code>BlockBlob</code></p></td>
|
|
@@ -257,7 +257,7 @@ commitBlobBlockList. However, in case header
|
|
| 257 |
<code>commitBlockListLater</code> is set to false, this will commit the
|
| 258 |
blocks immediately after staging the blocks.</p></td>
|
| 259 |
</tr>
|
| 260 |
-
<tr>
|
| 261 |
<td
|
| 262 |
style="text-align: left;"><p><code>commitBlobBlockList</code></p></td>
|
| 263 |
<td style="text-align: left;"><p><code>BlockBlob</code></p></td>
|
|
@@ -270,20 +270,20 @@ those blocks that have changed, then committing the new and existing
|
|
| 270 |
blocks together. Any blocks not specified in the block list are
|
| 271 |
permanently deleted.</p></td>
|
| 272 |
</tr>
|
| 273 |
-
<tr>
|
| 274 |
<td style="text-align: left;"><p><code>getBlobBlockList</code></p></td>
|
| 275 |
<td style="text-align: left;"><p><code>BlockBlob</code></p></td>
|
| 276 |
<td style="text-align: left;"><p>Returns the list of blocks that have
|
| 277 |
been uploaded as part of a block blob using the specified blocklist
|
| 278 |
filter.</p></td>
|
| 279 |
</tr>
|
| 280 |
-
<tr>
|
| 281 |
<td style="text-align: left;"><p><code>createAppendBlob</code></p></td>
|
| 282 |
<td style="text-align: left;"><p><code>AppendBlob</code></p></td>
|
| 283 |
<td style="text-align: left;"><p>Creates a 0-length append blob. Call
|
| 284 |
<code>commitAppendBlob</code> operation to append data to an append blob.</p></td>
|
| 285 |
</tr>
|
| 286 |
-
<tr>
|
| 287 |
<td style="text-align: left;"><p><code>commitAppendBlob</code></p></td>
|
| 288 |
<td style="text-align: left;"><p><code>AppendBlob</code></p></td>
|
| 289 |
<td style="text-align: left;"><p>Commits a new block of data to the end
|
|
@@ -293,14 +293,14 @@ of the existing append blob. In case of header
|
|
| 293 |
the appendBlob through internal call to <code>createAppendBlob</code>
|
| 294 |
operation first before committing.</p></td>
|
| 295 |
</tr>
|
| 296 |
-
<tr>
|
| 297 |
<td style="text-align: left;"><p><code>createPageBlob</code></p></td>
|
| 298 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 299 |
<td style="text-align: left;"><p>Creates a page blob of the specified
|
| 300 |
length. Call <code>uploadPageBlob</code> operation to upload data to a
|
| 301 |
page blob.</p></td>
|
| 302 |
</tr>
|
| 303 |
-
<tr>
|
| 304 |
<td style="text-align: left;"><p><code>uploadPageBlob</code></p></td>
|
| 305 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 306 |
<td style="text-align: left;"><p>Write one or more pages to the page
|
|
@@ -310,25 +310,25 @@ blob. The size must be a multiple of 512. In case of header
|
|
| 310 |
the appendBlob through internal call to <code>createPageBlob</code>
|
| 311 |
operation first before uploading.</p></td>
|
| 312 |
</tr>
|
| 313 |
-
<tr>
|
| 314 |
<td style="text-align: left;"><p><code>resizePageBlob</code></p></td>
|
| 315 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 316 |
<td style="text-align: left;"><p>Resizes the page blob to the specified
|
| 317 |
size, which must be a multiple of 512.</p></td>
|
| 318 |
</tr>
|
| 319 |
-
<tr>
|
| 320 |
<td style="text-align: left;"><p><code>clearPageBlob</code></p></td>
|
| 321 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 322 |
<td style="text-align: left;"><p>Free the specified pages from the page
|
| 323 |
blob. The size of the range must be a multiple of 512.</p></td>
|
| 324 |
</tr>
|
| 325 |
-
<tr>
|
| 326 |
<td style="text-align: left;"><p><code>getPageBlobRanges</code></p></td>
|
| 327 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 328 |
<td style="text-align: left;"><p>Returns the list of valid page ranges
|
| 329 |
for a page blob or snapshot of a page blob.</p></td>
|
| 330 |
</tr>
|
| 331 |
-
<tr>
|
| 332 |
<td style="text-align: left;"><p><code>copyBlob</code></p></td>
|
| 333 |
<td style="text-align: left;"><p><code>Common</code></p></td>
|
| 334 |
<td style="text-align: left;"><p>Copy a blob from one container to
|
|
@@ -340,6 +340,8 @@ another one, even from different accounts.</p></td>
|
|
| 340 |
Refer to the example section in this page to learn how to use these
|
| 341 |
operations into your camel application.
|
| 342 |
|
|
|
|
|
|
|
| 343 |
## Consumer Examples
|
| 344 |
|
| 345 |
To consume a blob into a file using the file component, this can be done
|
|
@@ -690,7 +692,7 @@ file so that it can be loaded by the camel route, for example:
|
|
| 690 |
from("direct:copyBlob")
|
| 691 |
.to("azure-storage-blob://account/containerblob2?operation=uploadBlockBlob&credentialType=AZURE_SAS")
|
| 692 |
|
| 693 |
-
#
|
| 694 |
|
| 695 |
All integration tests use
|
| 696 |
[Testcontainers](https://www.testcontainers.org/) and run by default.
|
|
|
|
| 129 |
<col style="width: 89%" />
|
| 130 |
</colgroup>
|
| 131 |
<thead>
|
| 132 |
+
<tr class="header">
|
| 133 |
<th style="text-align: left;">Operation</th>
|
| 134 |
<th style="text-align: left;">Description</th>
|
| 135 |
</tr>
|
| 136 |
</thead>
|
| 137 |
<tbody>
|
| 138 |
+
<tr class="odd">
|
| 139 |
<td
|
| 140 |
style="text-align: left;"><p><code>listBlobContainers</code></p></td>
|
| 141 |
<td style="text-align: left;"><p>Returns a list of the blob containers
|
| 142 |
in the storage account.</p></td>
|
| 143 |
</tr>
|
| 144 |
+
<tr class="even">
|
| 145 |
<td style="text-align: left;"><p><code>getChangeFeed</code></p></td>
|
| 146 |
<td style="text-align: left;"><p>Returns transaction logs of all the
|
| 147 |
changes that occur to the blobs and the blob metadata in your storage
|
|
|
|
| 162 |
<col style="width: 89%" />
|
| 163 |
</colgroup>
|
| 164 |
<thead>
|
| 165 |
+
<tr class="header">
|
| 166 |
<th style="text-align: left;">Operation</th>
|
| 167 |
<th style="text-align: left;">Description</th>
|
| 168 |
</tr>
|
| 169 |
</thead>
|
| 170 |
<tbody>
|
| 171 |
+
<tr class="odd">
|
| 172 |
<td
|
| 173 |
style="text-align: left;"><p><code>createBlobContainer</code></p></td>
|
| 174 |
<td style="text-align: left;"><p>Create a new container within a storage
|
| 175 |
account. If a container with the same name already exists, the producer
|
| 176 |
will ignore it.</p></td>
|
| 177 |
</tr>
|
| 178 |
+
<tr class="even">
|
| 179 |
<td
|
| 180 |
style="text-align: left;"><p><code>deleteBlobContainer</code></p></td>
|
| 181 |
<td style="text-align: left;"><p>Delete the specified container in the
|
| 182 |
storage account. If the container doesn’t exist, the operation
|
| 183 |
fails.</p></td>
|
| 184 |
</tr>
|
| 185 |
+
<tr class="odd">
|
| 186 |
<td style="text-align: left;"><p><code>listBlobs</code></p></td>
|
| 187 |
<td style="text-align: left;"><p>Returns a list of blobs in this
|
| 188 |
container, with folder structures flattened.</p></td>
|
|
|
|
| 202 |
<col style="width: 79%" />
|
| 203 |
</colgroup>
|
| 204 |
<thead>
|
| 205 |
+
<tr class="header">
|
| 206 |
<th style="text-align: left;">Operation</th>
|
| 207 |
<th style="text-align: left;">Blob Type</th>
|
| 208 |
<th style="text-align: left;">Description</th>
|
| 209 |
</tr>
|
| 210 |
</thead>
|
| 211 |
<tbody>
|
| 212 |
+
<tr class="odd">
|
| 213 |
<td style="text-align: left;"><p><code>getBlob</code></p></td>
|
| 214 |
<td style="text-align: left;"><p>Common</p></td>
|
| 215 |
<td style="text-align: left;"><p>Get the content of the blob. You can
|
| 216 |
restrict the output of this operation to a blob range.</p></td>
|
| 217 |
</tr>
|
| 218 |
+
<tr class="even">
|
| 219 |
<td style="text-align: left;"><p><code>deleteBlob</code></p></td>
|
| 220 |
<td style="text-align: left;"><p>Common</p></td>
|
| 221 |
<td style="text-align: left;"><p>Delete a blob.</p></td>
|
| 222 |
</tr>
|
| 223 |
+
<tr class="odd">
|
| 224 |
<td
|
| 225 |
style="text-align: left;"><p><code>downloadBlobToFile</code></p></td>
|
| 226 |
<td style="text-align: left;"><p>Common</p></td>
|
|
|
|
| 229 |
the file already exists a <code>FileAlreadyExistsException</code> will
|
| 230 |
be thrown.</p></td>
|
| 231 |
</tr>
|
| 232 |
+
<tr class="even">
|
| 233 |
<td style="text-align: left;"><p><code>downloadLink</code></p></td>
|
| 234 |
<td style="text-align: left;"><p>Common</p></td>
|
| 235 |
<td style="text-align: left;"><p>Generate the download link for the
|
|
|
|
| 237 |
limits access to one hour. However, you can override the
|
| 238 |
default expiration duration through the headers.</p></td>
|
| 239 |
</tr>
|
| 240 |
+
<tr class="odd">
|
| 241 |
<td style="text-align: left;"><p><code>uploadBlockBlob</code></p></td>
|
| 242 |
<td style="text-align: left;"><p>BlockBlob</p></td>
|
| 243 |
<td style="text-align: left;"><p>Creates a new block blob, or updates
|
|
|
|
| 246 |
supported with PutBlob; the content of the existing blob is overwritten
|
| 247 |
with the new content.</p></td>
|
| 248 |
</tr>
|
| 249 |
+
<tr class="even">
|
| 250 |
<td
|
| 251 |
style="text-align: left;"><p><code>stageBlockBlobList</code></p></td>
|
| 252 |
<td style="text-align: left;"><p><code>BlockBlob</code></p></td>
|
|
|
|
| 257 |
<code>commitBlockListLater</code> is set to false, this will commit the
|
| 258 |
blocks immediately after staging the blocks.</p></td>
|
| 259 |
</tr>
|
| 260 |
+
<tr class="odd">
|
| 261 |
<td
|
| 262 |
style="text-align: left;"><p><code>commitBlobBlockList</code></p></td>
|
| 263 |
<td style="text-align: left;"><p><code>BlockBlob</code></p></td>
|
|
|
|
| 270 |
blocks together. Any blocks not specified in the block list are
|
| 271 |
permanently deleted.</p></td>
|
| 272 |
</tr>
|
| 273 |
+
<tr class="even">
|
| 274 |
<td style="text-align: left;"><p><code>getBlobBlockList</code></p></td>
|
| 275 |
<td style="text-align: left;"><p><code>BlockBlob</code></p></td>
|
| 276 |
<td style="text-align: left;"><p>Returns the list of blocks that have
|
| 277 |
been uploaded as part of a block blob using the specified blocklist
|
| 278 |
filter.</p></td>
|
| 279 |
</tr>
|
| 280 |
+
<tr class="odd">
|
| 281 |
<td style="text-align: left;"><p><code>createAppendBlob</code></p></td>
|
| 282 |
<td style="text-align: left;"><p><code>AppendBlob</code></p></td>
|
| 283 |
<td style="text-align: left;"><p>Creates a 0-length append blob. Call
|
| 284 |
<code>commitAppendBlob</code> operation to append data to an append blob.</p></td>
|
| 285 |
</tr>
|
| 286 |
+
<tr class="even">
|
| 287 |
<td style="text-align: left;"><p><code>commitAppendBlob</code></p></td>
|
| 288 |
<td style="text-align: left;"><p><code>AppendBlob</code></p></td>
|
| 289 |
<td style="text-align: left;"><p>Commits a new block of data to the end
|
|
|
|
| 293 |
the appendBlob through internal call to <code>createAppendBlob</code>
|
| 294 |
operation first before committing.</p></td>
|
| 295 |
</tr>
|
| 296 |
+
<tr class="odd">
|
| 297 |
<td style="text-align: left;"><p><code>createPageBlob</code></p></td>
|
| 298 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 299 |
<td style="text-align: left;"><p>Creates a page blob of the specified
|
| 300 |
length. Call <code>uploadPageBlob</code> operation to upload data to a
|
| 301 |
page blob.</p></td>
|
| 302 |
</tr>
|
| 303 |
+
<tr class="even">
|
| 304 |
<td style="text-align: left;"><p><code>uploadPageBlob</code></p></td>
|
| 305 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 306 |
<td style="text-align: left;"><p>Write one or more pages to the page
|
|
|
|
| 310 |
the pageBlob through an internal call to <code>createPageBlob</code>
|
| 311 |
operation first before uploading.</p></td>
|
| 312 |
</tr>
|
| 313 |
+
<tr class="odd">
|
| 314 |
<td style="text-align: left;"><p><code>resizePageBlob</code></p></td>
|
| 315 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 316 |
<td style="text-align: left;"><p>Resizes the page blob to the specified
|
| 317 |
size, which must be a multiple of 512.</p></td>
|
| 318 |
</tr>
|
| 319 |
+
<tr class="even">
|
| 320 |
<td style="text-align: left;"><p><code>clearPageBlob</code></p></td>
|
| 321 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 322 |
<td style="text-align: left;"><p>Free the specified pages from the page
|
| 323 |
blob. The size of the range must be a multiple of 512.</p></td>
|
| 324 |
</tr>
|
| 325 |
+
<tr class="odd">
|
| 326 |
<td style="text-align: left;"><p><code>getPageBlobRanges</code></p></td>
|
| 327 |
<td style="text-align: left;"><p><code>PageBlob</code></p></td>
|
| 328 |
<td style="text-align: left;"><p>Returns the list of valid page ranges
|
| 329 |
for a page blob or snapshot of a page blob.</p></td>
|
| 330 |
</tr>
|
| 331 |
+
<tr class="even">
|
| 332 |
<td style="text-align: left;"><p><code>copyBlob</code></p></td>
|
| 333 |
<td style="text-align: left;"><p><code>Common</code></p></td>
|
| 334 |
<td style="text-align: left;"><p>Copy a blob from one container to
|
|
|
|
| 340 |
Refer to the examples section on this page to learn how to use these
|
| 341 |
operations in your Camel application.
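As the table notes, `resizePageBlob` and `clearPageBlob` require sizes that are multiples of 512 bytes. A small helper for rounding a payload length up to a valid page blob size might look like this (illustrative only; the class and method names are not part of the component):

```java
public class PageBlobAlign {
    // Page blob sizes and cleared ranges must be multiples of 512 bytes
    static final int PAGE_SIZE = 512;

    // Round a payload length up to the next valid page blob size
    static long alignToPage(long length) {
        return ((length + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(alignToPage(1000)); // 1024
        System.out.println(alignToPage(1024)); // 1024
    }
}
```

Passing an already-aligned length returns it unchanged; anything else is rounded up to the next 512-byte boundary.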
|
| 342 |
|
| 343 |
+
# Examples
|
| 344 |
+
|
| 345 |
## Consumer Examples
|
| 346 |
|
| 347 |
To consume a blob into a file using the file component, this can be done
|
|
|
|
| 692 |
from("direct:copyBlob")
|
| 693 |
.to("azure-storage-blob://account/containerblob2?operation=uploadBlockBlob&credentialType=AZURE_SAS")
|
| 694 |
|
| 695 |
+
# Important Development Notes
|
| 696 |
|
| 697 |
All integration tests use
|
| 698 |
[Testcontainers](https://www.testcontainers.org/) and run by default.
|
camel-azure-storage-datalake.md
CHANGED
|
@@ -90,13 +90,13 @@ For these operations, `accountName` option is required
|
|
| 90 |
<col style="width: 89%" />
|
| 91 |
</colgroup>
|
| 92 |
<thead>
|
| 93 |
-
<tr>
|
| 94 |
<th style="text-align: left;">Operation</th>
|
| 95 |
<th style="text-align: left;">Description</th>
|
| 96 |
</tr>
|
| 97 |
</thead>
|
| 98 |
<tbody>
|
| 99 |
-
<tr>
|
| 100 |
<td style="text-align: left;"><p><code>listFileSystem</code></p></td>
|
| 101 |
<td style="text-align: left;"><p>List all the file systems that are
|
| 102 |
present in the given azure account.</p></td>
|
|
@@ -115,23 +115,23 @@ required
|
|
| 115 |
<col style="width: 89%" />
|
| 116 |
</colgroup>
|
| 117 |
<thead>
|
| 118 |
-
<tr>
|
| 119 |
<th style="text-align: left;">Operation</th>
|
| 120 |
<th style="text-align: left;">Description</th>
|
| 121 |
</tr>
|
| 122 |
</thead>
|
| 123 |
<tbody>
|
| 124 |
-
<tr>
|
| 125 |
<td style="text-align: left;"><p><code>createFileSystem</code></p></td>
|
| 126 |
<td style="text-align: left;"><p>Create a new file System with the
|
| 127 |
storage account</p></td>
|
| 128 |
</tr>
|
| 129 |
-
<tr>
|
| 130 |
<td style="text-align: left;"><p><code>deleteFileSystem</code></p></td>
|
| 131 |
<td style="text-align: left;"><p>Delete the specified file system within
|
| 132 |
the storage account</p></td>
|
| 133 |
</tr>
|
| 134 |
-
<tr>
|
| 135 |
<td style="text-align: left;"><p><code>listPaths</code></p></td>
|
| 136 |
<td style="text-align: left;"><p>Returns list of all the files within
|
| 137 |
the given path in the given file system, with folder structure
|
|
@@ -151,18 +151,18 @@ For these operations, `accountName`, `fileSystemName` and
|
|
| 151 |
<col style="width: 89%" />
|
| 152 |
</colgroup>
|
| 153 |
<thead>
|
| 154 |
-
<tr>
|
| 155 |
<th style="text-align: left;">Operation</th>
|
| 156 |
<th style="text-align: left;">Description</th>
|
| 157 |
</tr>
|
| 158 |
</thead>
|
| 159 |
<tbody>
|
| 160 |
-
<tr>
|
| 161 |
<td style="text-align: left;"><p><code>createFile</code></p></td>
|
| 162 |
<td style="text-align: left;"><p>Create a new file in the specified
|
| 163 |
directory within the fileSystem</p></td>
|
| 164 |
</tr>
|
| 165 |
-
<tr>
|
| 166 |
<td style="text-align: left;"><p><code>deleteDirectory</code></p></td>
|
| 167 |
<td style="text-align: left;"><p>Delete the specified directory within
|
| 168 |
the file system</p></td>
|
|
@@ -181,44 +181,44 @@ options are required
|
|
| 181 |
<col style="width: 89%" />
|
| 182 |
</colgroup>
|
| 183 |
<thead>
|
| 184 |
-
<tr>
|
| 185 |
<th style="text-align: left;">Operation</th>
|
| 186 |
<th style="text-align: left;">Description</th>
|
| 187 |
</tr>
|
| 188 |
</thead>
|
| 189 |
<tbody>
|
| 190 |
-
<tr>
|
| 191 |
<td style="text-align: left;"><p><code>getFile</code></p></td>
|
| 192 |
<td style="text-align: left;"><p>Get the contents of a file</p></td>
|
| 193 |
</tr>
|
| 194 |
-
<tr>
|
| 195 |
<td style="text-align: left;"><p><code>downloadToFile</code></p></td>
|
| 196 |
<td style="text-align: left;"><p>Download the entire file from the file
|
| 197 |
system into a path specified by fileDir.</p></td>
|
| 198 |
</tr>
|
| 199 |
-
<tr>
|
| 200 |
<td style="text-align: left;"><p><code>downloadLink</code></p></td>
|
| 201 |
<td style="text-align: left;"><p>Generate a download link for the
|
| 202 |
specified file using Shared Access Signature (SAS). The expiration time
|
| 203 |
to be set for the link can be specified otherwise 1 hour is taken as
|
| 204 |
default.</p></td>
|
| 205 |
</tr>
|
| 206 |
-
<tr>
|
| 207 |
<td style="text-align: left;"><p><code>deleteFile</code></p></td>
|
| 208 |
<td style="text-align: left;"><p>Delete the specified file.</p></td>
|
| 209 |
</tr>
|
| 210 |
-
<tr>
|
| 211 |
<td style="text-align: left;"><p><code>appendToFile</code></p></td>
|
| 212 |
<td style="text-align: left;"><p>Appends the data passed to the
|
| 213 |
specified file in the file System. Flush command is required after
|
| 214 |
append.</p></td>
|
| 215 |
</tr>
|
| 216 |
-
<tr>
|
| 217 |
<td style="text-align: left;"><p><code>flushToFile</code></p></td>
|
| 218 |
<td style="text-align: left;"><p>Flushes the data already appended to
|
| 219 |
the specified file.</p></td>
|
| 220 |
</tr>
|
| 221 |
-
<tr>
|
| 222 |
<td
|
| 223 |
style="text-align: left;"><p><code>openQueryInputStream</code></p></td>
|
| 224 |
<td style="text-align: left;"><p>Opens an <code>InputStream</code> based
|
|
@@ -231,6 +231,8 @@ register the query acceleration feature with your subscription.</p></td>
|
|
| 231 |
Refer to the examples section below for more details on how to use these
|
| 232 |
operations
|
| 233 |
|
|
|
|
|
|
|
| 234 |
## Consumer Examples
|
| 235 |
|
| 236 |
To consume a file from the storage datalake into a file using the file
|
|
|
|
| 90 |
<col style="width: 89%" />
|
| 91 |
</colgroup>
|
| 92 |
<thead>
|
| 93 |
+
<tr class="header">
|
| 94 |
<th style="text-align: left;">Operation</th>
|
| 95 |
<th style="text-align: left;">Description</th>
|
| 96 |
</tr>
|
| 97 |
</thead>
|
| 98 |
<tbody>
|
| 99 |
+
<tr class="odd">
|
| 100 |
<td style="text-align: left;"><p><code>listFileSystem</code></p></td>
|
| 101 |
<td style="text-align: left;"><p>List all the file systems that are
|
| 102 |
present in the given azure account.</p></td>
|
|
|
|
| 115 |
<col style="width: 89%" />
|
| 116 |
</colgroup>
|
| 117 |
<thead>
|
| 118 |
+
<tr class="header">
|
| 119 |
<th style="text-align: left;">Operation</th>
|
| 120 |
<th style="text-align: left;">Description</th>
|
| 121 |
</tr>
|
| 122 |
</thead>
|
| 123 |
<tbody>
|
| 124 |
+
<tr class="odd">
|
| 125 |
<td style="text-align: left;"><p><code>createFileSystem</code></p></td>
|
| 126 |
<td style="text-align: left;"><p>Create a new file System with the
|
| 127 |
storage account</p></td>
|
| 128 |
</tr>
|
| 129 |
+
<tr class="even">
|
| 130 |
<td style="text-align: left;"><p><code>deleteFileSystem</code></p></td>
|
| 131 |
<td style="text-align: left;"><p>Delete the specified file system within
|
| 132 |
the storage account</p></td>
|
| 133 |
</tr>
|
| 134 |
+
<tr class="odd">
|
| 135 |
<td style="text-align: left;"><p><code>listPaths</code></p></td>
|
| 136 |
<td style="text-align: left;"><p>Returns list of all the files within
|
| 137 |
the given path in the given file system, with folder structure
|
|
|
|
| 151 |
<col style="width: 89%" />
|
| 152 |
</colgroup>
|
| 153 |
<thead>
|
| 154 |
+
<tr class="header">
|
| 155 |
<th style="text-align: left;">Operation</th>
|
| 156 |
<th style="text-align: left;">Description</th>
|
| 157 |
</tr>
|
| 158 |
</thead>
|
| 159 |
<tbody>
|
| 160 |
+
<tr class="odd">
|
| 161 |
<td style="text-align: left;"><p><code>createFile</code></p></td>
|
| 162 |
<td style="text-align: left;"><p>Create a new file in the specified
|
| 163 |
directory within the fileSystem</p></td>
|
| 164 |
</tr>
|
| 165 |
+
<tr class="even">
|
| 166 |
<td style="text-align: left;"><p><code>deleteDirectory</code></p></td>
|
| 167 |
<td style="text-align: left;"><p>Delete the specified directory within
|
| 168 |
the file system</p></td>
|
|
|
|
| 181 |
<col style="width: 89%" />
|
| 182 |
</colgroup>
|
| 183 |
<thead>
|
| 184 |
+
<tr class="header">
|
| 185 |
<th style="text-align: left;">Operation</th>
|
| 186 |
<th style="text-align: left;">Description</th>
|
| 187 |
</tr>
|
| 188 |
</thead>
|
| 189 |
<tbody>
|
| 190 |
+
<tr class="odd">
|
| 191 |
<td style="text-align: left;"><p><code>getFile</code></p></td>
|
| 192 |
<td style="text-align: left;"><p>Get the contents of a file</p></td>
|
| 193 |
</tr>
|
| 194 |
+
<tr class="even">
|
| 195 |
<td style="text-align: left;"><p><code>downloadToFile</code></p></td>
|
| 196 |
<td style="text-align: left;"><p>Download the entire file from the file
|
| 197 |
system into a path specified by fileDir.</p></td>
|
| 198 |
</tr>
|
| 199 |
+
<tr class="odd">
|
| 200 |
<td style="text-align: left;"><p><code>downloadLink</code></p></td>
|
| 201 |
<td style="text-align: left;"><p>Generate a download link for the
|
| 202 |
specified file using Shared Access Signature (SAS). The expiration time
|
| 203 |
to be set for the link can be specified otherwise 1 hour is taken as
|
| 204 |
default.</p></td>
|
| 205 |
</tr>
|
| 206 |
+
<tr class="even">
|
| 207 |
<td style="text-align: left;"><p><code>deleteFile</code></p></td>
|
| 208 |
<td style="text-align: left;"><p>Delete the specified file.</p></td>
|
| 209 |
</tr>
|
| 210 |
+
<tr class="odd">
|
| 211 |
<td style="text-align: left;"><p><code>appendToFile</code></p></td>
|
| 212 |
<td style="text-align: left;"><p>Appends the data passed to the
|
| 213 |
specified file in the file System. Flush command is required after
|
| 214 |
append.</p></td>
|
| 215 |
</tr>
|
| 216 |
+
<tr class="even">
|
| 217 |
<td style="text-align: left;"><p><code>flushToFile</code></p></td>
|
| 218 |
<td style="text-align: left;"><p>Flushes the data already appended to
|
| 219 |
the specified file.</p></td>
|
| 220 |
</tr>
|
| 221 |
+
<tr class="odd">
|
| 222 |
<td
|
| 223 |
style="text-align: left;"><p><code>openQueryInputStream</code></p></td>
|
| 224 |
<td style="text-align: left;"><p>Opens an <code>InputStream</code> based
|
|
|
|
| 231 |
Refer to the examples section below for more details on how to use these
|
| 232 |
operations
|
| 233 |
|
| 234 |
+
# Examples
|
| 235 |
+
|
| 236 |
## Consumer Examples
|
| 237 |
|
| 238 |
To consume a file from the storage datalake into a file using the file
|
camel-azure-storage-queue.md
CHANGED
|
@@ -124,13 +124,13 @@ For these operations, `accountName` is **required**.
|
|
| 124 |
<col style="width: 89%" />
|
| 125 |
</colgroup>
|
| 126 |
<thead>
|
| 127 |
-
<tr>
|
| 128 |
<th style="text-align: left;">Operation</th>
|
| 129 |
<th style="text-align: left;">Description</th>
|
| 130 |
</tr>
|
| 131 |
</thead>
|
| 132 |
<tbody>
|
| 133 |
-
<tr>
|
| 134 |
<td style="text-align: left;"><p><code>listQueues</code></p></td>
|
| 135 |
<td style="text-align: left;"><p>Lists the queues in the storage account
|
| 136 |
that pass the filter starting at the specified marker.</p></td>
|
|
@@ -148,26 +148,26 @@ For these operations, `accountName` and `queueName` are **required**.
|
|
| 148 |
<col style="width: 89%" />
|
| 149 |
</colgroup>
|
| 150 |
<thead>
|
| 151 |
-
<tr>
|
| 152 |
<th style="text-align: left;">Operation</th>
|
| 153 |
<th style="text-align: left;">Description</th>
|
| 154 |
</tr>
|
| 155 |
</thead>
|
| 156 |
<tbody>
|
| 157 |
-
<tr>
|
| 158 |
<td style="text-align: left;"><p><code>createQueue</code></p></td>
|
| 159 |
<td style="text-align: left;"><p>Creates a new queue.</p></td>
|
| 160 |
</tr>
|
| 161 |
-
<tr>
|
| 162 |
<td style="text-align: left;"><p><code>deleteQueue</code></p></td>
|
| 163 |
<td style="text-align: left;"><p>Permanently deletes the queue.</p></td>
|
| 164 |
</tr>
|
| 165 |
-
<tr>
|
| 166 |
<td style="text-align: left;"><p><code>clearQueue</code></p></td>
|
| 167 |
<td style="text-align: left;"><p>Deletes all messages in the
|
| 168 |
queue.</p></td>
|
| 169 |
</tr>
|
| 170 |
-
<tr>
|
| 171 |
<td style="text-align: left;"><p><code>sendMessage</code></p></td>
|
| 172 |
<td style="text-align: left;"><p><strong>Default Producer
|
| 173 |
Operation</strong> Sends a message with a given time-to-live and timeout
|
|
@@ -178,24 +178,24 @@ disable this, set the config <code>createQueue</code> or header
|
|
| 178 |
<code>CamelAzureStorageQueueCreateQueue</code> to
|
| 179 |
<code>false</code>.</p></td>
|
| 180 |
</tr>
|
| 181 |
-
<tr>
|
| 182 |
<td style="text-align: left;"><p><code>deleteMessage</code></p></td>
|
| 183 |
<td style="text-align: left;"><p>Deletes the specified message in the
|
| 184 |
queue.</p></td>
|
| 185 |
</tr>
|
| 186 |
-
<tr>
|
| 187 |
<td style="text-align: left;"><p><code>receiveMessages</code></p></td>
|
| 188 |
<td style="text-align: left;"><p>Retrieves up to the maximum number of
|
| 189 |
messages from the queue and hides them from other operations for the
|
| 190 |
timeout period. However, it will not dequeue the message from the queue
|
| 191 |
due to reliability reasons.</p></td>
|
| 192 |
</tr>
|
| 193 |
-
<tr>
|
| 194 |
<td style="text-align: left;"><p><code>peekMessages</code></p></td>
|
| 195 |
<td style="text-align: left;"><p>Peek messages from the front of the
|
| 196 |
queue up to the maximum number of messages.</p></td>
|
| 197 |
</tr>
|
| 198 |
-
<tr>
|
| 199 |
<td style="text-align: left;"><p><code>updateMessage</code></p></td>
|
| 200 |
<td style="text-align: left;"><p>Updates the specific message in the
|
| 201 |
queue with a new message and resets the visibility timeout. The message
|
|
@@ -207,6 +207,8 @@ text is evaluated from the exchange message body.</p></td>
|
|
| 207 |
Refer to the examples section on this page to learn how to use these
|
| 208 |
operations in your Camel application.
|
| 209 |
|
|
|
|
|
|
|
| 210 |
## Consumer Examples
|
| 211 |
|
| 212 |
To consume a queue into a file component with a maximum of five messages in
|
|
@@ -345,7 +347,7 @@ one batch, this can be done like this:
|
|
| 345 |
})
|
| 346 |
.to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=updateMessage");
|
| 347 |
|
| 348 |
-
#
|
| 349 |
|
| 350 |
When developing on this component, you will need to obtain your Azure
|
| 351 |
`accessKey` to run the integration tests. In addition to the mocked unit
|
|
|
|
| 124 |
<col style="width: 89%" />
|
| 125 |
</colgroup>
|
| 126 |
<thead>
|
| 127 |
+
<tr class="header">
|
| 128 |
<th style="text-align: left;">Operation</th>
|
| 129 |
<th style="text-align: left;">Description</th>
|
| 130 |
</tr>
|
| 131 |
</thead>
|
| 132 |
<tbody>
|
| 133 |
+
<tr class="odd">
|
| 134 |
<td style="text-align: left;"><p><code>listQueues</code></p></td>
|
| 135 |
<td style="text-align: left;"><p>Lists the queues in the storage account
|
| 136 |
that pass the filter starting at the specified marker.</p></td>
|
|
|
|
| 148 |
<col style="width: 89%" />
|
| 149 |
</colgroup>
|
| 150 |
<thead>
|
| 151 |
+
<tr class="header">
|
| 152 |
<th style="text-align: left;">Operation</th>
|
| 153 |
<th style="text-align: left;">Description</th>
|
| 154 |
</tr>
|
| 155 |
</thead>
|
| 156 |
<tbody>
|
| 157 |
+
<tr class="odd">
|
| 158 |
<td style="text-align: left;"><p><code>createQueue</code></p></td>
|
| 159 |
<td style="text-align: left;"><p>Creates a new queue.</p></td>
|
| 160 |
</tr>
|
| 161 |
+
<tr class="even">
|
| 162 |
<td style="text-align: left;"><p><code>deleteQueue</code></p></td>
|
| 163 |
<td style="text-align: left;"><p>Permanently deletes the queue.</p></td>
|
| 164 |
</tr>
|
| 165 |
+
<tr class="odd">
|
| 166 |
<td style="text-align: left;"><p><code>clearQueue</code></p></td>
|
| 167 |
<td style="text-align: left;"><p>Deletes all messages in the
|
| 168 |
queue.</p></td>
|
| 169 |
</tr>
|
| 170 |
+
<tr class="even">
|
| 171 |
<td style="text-align: left;"><p><code>sendMessage</code></p></td>
|
| 172 |
<td style="text-align: left;"><p><strong>Default Producer
|
| 173 |
Operation</strong> Sends a message with a given time-to-live and timeout
|
|
|
|
| 178 |
<code>CamelAzureStorageQueueCreateQueue</code> to
|
| 179 |
<code>false</code>.</p></td>
|
| 180 |
</tr>
|
| 181 |
+
<tr class="odd">
|
| 182 |
<td style="text-align: left;"><p><code>deleteMessage</code></p></td>
|
| 183 |
<td style="text-align: left;"><p>Deletes the specified message in the
|
| 184 |
queue.</p></td>
|
| 185 |
</tr>
|
| 186 |
+
<tr class="even">
|
| 187 |
<td style="text-align: left;"><p><code>receiveMessages</code></p></td>
|
| 188 |
<td style="text-align: left;"><p>Retrieves up to the maximum number of
|
| 189 |
messages from the queue and hides them from other operations for the
|
| 190 |
timeout period. However, it will not dequeue the message from the queue
|
| 191 |
due to reliability reasons.</p></td>
|
| 192 |
</tr>
|
| 193 |
+
<tr class="odd">
|
| 194 |
<td style="text-align: left;"><p><code>peekMessages</code></p></td>
|
| 195 |
<td style="text-align: left;"><p>Peek messages from the front of the
|
| 196 |
queue up to the maximum number of messages.</p></td>
|
| 197 |
</tr>
|
| 198 |
+
<tr class="even">
|
| 199 |
<td style="text-align: left;"><p><code>updateMessage</code></p></td>
|
| 200 |
<td style="text-align: left;"><p>Updates the specific message in the
|
| 201 |
queue with a new message and resets the visibility timeout. The message
|
|
|
|
| 207 |
Refer to the examples section on this page to learn how to use these
|
| 208 |
operations in your Camel application.
|
| 209 |
|
| 210 |
+
# Examples
|
| 211 |
+
|
| 212 |
## Consumer Examples
|
| 213 |
|
| 214 |
To consume a queue into a file component with a maximum of five messages in
|
|
|
|
| 347 |
})
|
| 348 |
.to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=updateMessage");
|
| 349 |
|
| 350 |
+
# Important Development Notes
|
| 351 |
|
| 352 |
When developing on this component, you will need to obtain your Azure
|
| 353 |
`accessKey` to run the integration tests. In addition to the mocked unit
|
camel-azure-summary.md
ADDED
|
@@ -0,0 +1,11 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Azure-summary.md
|
| 2 |
+
|
| 3 |
+
The Camel Components for [Microsoft Azure
|
| 4 |
+
Services](https://azure.microsoft.com/) provide connectivity to Azure
|
| 5 |
+
services from Camel.
|
| 6 |
+
|
| 7 |
+
# Azure components
|
| 8 |
+
|
| 9 |
+
See the following for usage of each component:
|
| 10 |
+
|
| 11 |
+
indexDescriptionList::\[attributes=*group=Azure*,descriptionformat=description\]
|
camel-barcode-dataformat.md
ADDED
|
@@ -0,0 +1,157 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Barcode-dataformat.md
|
| 2 |
+
|
| 3 |
+
**Since Camel 2.14**
|
| 4 |
+
|
| 5 |
+
The Barcode data format is based on the [zxing
|
| 6 |
+
library](https://github.com/zxing/zxing). The goal of this component is
|
| 7 |
+
to create a barcode image from a String (marshal) and a String from a
|
| 8 |
+
barcode image (unmarshal). You’re free to use all features that zxing
|
| 9 |
+
offers.
|
| 10 |
+
|
| 11 |
+
# Dependencies
|
| 12 |
+
|
| 13 |
+
To use the barcode data format in your camel routes, you need to add a
|
| 14 |
+
dependency on **camel-barcode** which implements this data format.
|
| 15 |
+
|
| 16 |
+
If you use Maven, you could just add the following to your pom.xml,
|
| 17 |
+
substituting the version number for the latest \& greatest release (see
|
| 18 |
+
the download page for the latest versions).
|
| 19 |
+
|
| 20 |
+
<dependency>
|
| 21 |
+
<groupId>org.apache.camel</groupId>
|
| 22 |
+
<artifactId>camel-barcode</artifactId>
|
| 23 |
+
<version>x.x.x</version>
|
| 24 |
+
</dependency>
|
| 25 |
+
|
| 26 |
+
# Barcode Options
|
| 27 |
+
|
| 28 |
+
# Using the Java DSL
|
| 29 |
+
|
| 30 |
+
First, you have to initialize the barcode data format class. You can use
|
| 31 |
+
the default constructor, or one of the parameterized constructors (see JavaDoc). The
|
| 32 |
+
default values are:
|
| 33 |
+
|
| 34 |
+
<table>
|
| 35 |
+
<colgroup>
|
| 36 |
+
<col style="width: 10%" />
|
| 37 |
+
<col style="width: 89%" />
|
| 38 |
+
</colgroup>
|
| 39 |
+
<thead>
|
| 40 |
+
<tr class="header">
|
| 41 |
+
<th style="text-align: left;">Parameter</th>
|
| 42 |
+
<th style="text-align: left;">Default Value</th>
|
| 43 |
+
</tr>
|
| 44 |
+
</thead>
|
| 45 |
+
<tbody>
|
| 46 |
+
<tr class="odd">
|
| 47 |
+
<td style="text-align: left;"><p>image type (BarcodeImageType)</p></td>
|
| 48 |
+
<td style="text-align: left;"><p>PNG</p></td>
|
| 49 |
+
</tr>
|
| 50 |
+
<tr class="even">
|
| 51 |
+
<td style="text-align: left;"><p>width</p></td>
|
| 52 |
+
<td style="text-align: left;"><p>100 px</p></td>
|
| 53 |
+
</tr>
|
| 54 |
+
<tr class="odd">
|
| 55 |
+
<td style="text-align: left;"><p>height</p></td>
|
| 56 |
+
<td style="text-align: left;"><p>100 px</p></td>
|
| 57 |
+
</tr>
|
| 58 |
+
<tr class="even">
|
| 59 |
+
<td style="text-align: left;"><p>encoding</p></td>
|
| 60 |
+
<td style="text-align: left;"><p>UTF-8</p></td>
|
| 61 |
+
</tr>
|
| 62 |
+
<tr class="odd">
|
| 63 |
+
<td style="text-align: left;"><p>barcode format (BarcodeFormat)</p></td>
|
| 64 |
+
<td style="text-align: left;"><p>QR-Code</p></td>
|
| 65 |
+
</tr>
|
| 66 |
+
</tbody>
|
| 67 |
+
</table>
|
| 68 |
+
|
| 69 |
+
// QR-Code default
|
| 70 |
+
DataFormat code = new BarcodeDataFormat();
|
| 71 |
+
|
| 72 |
+
If you want to use zxing hints, you can use the *addToHintMap* method of
|
| 73 |
+
your BarcodeDataFormat instance:
|
| 74 |
+
|
| 75 |
+
code.addToHintMap(DecodeHintType.TRY_HARDER, Boolean.TRUE);
|
| 76 |
+
|
| 77 |
+
For possible hints, please consult the zxing documentation.
|
| 78 |
+
|
| 79 |
+
## Marshalling
|
| 80 |
+
|
| 81 |
+
from("direct://code")
|
| 82 |
+
.marshal(code)
|
| 83 |
+
.to("file://barcode_out");
|
| 84 |
+
|
| 85 |
+
You can call the route from a test class with:
|
| 86 |
+
|
| 87 |
+
template.sendBody("direct://code", "This is a testmessage!");
|
| 88 |
+
|
| 89 |
+
You should find this image inside the *barcode\_out* folder:
|
| 90 |
+
|
| 91 |
+
<figure>
|
| 92 |
+
<img src="ROOT:qr-code.png" alt="image" />
|
| 93 |
+
</figure>
|
| 94 |
+
|
| 95 |
+
## Unmarshalling
|
| 96 |
+
|
| 97 |
+
The unmarshaller is generic. For unmarshalling, you can use any
|
| 98 |
+
BarcodeDataFormat instance. If you have two instances, one for
|
| 99 |
+
(generating) QR-Code and one for PDF417, it doesn’t matter which one
|
| 100 |
+
will be used.
|
| 101 |
+
|
| 102 |
+
from("file://barcode_in?noop=true")
|
| 103 |
+
.unmarshal(code) // for unmarshalling, the instance doesn't matter
|
| 104 |
+
.to("mock:out");
|
| 105 |
+
|
| 106 |
+
If you paste the QR-Code image above into the *barcode\_in* folder,
|
| 107 |
+
you should find *`This is a testmessage!`* inside the mock. You can find
|
| 108 |
+
the barcode data format as header variable:
|
| 109 |
+
|
| 110 |
+
<table>
|
| 111 |
+
<colgroup>
|
| 112 |
+
<col style="width: 10%" />
|
| 113 |
+
<col style="width: 10%" />
|
| 114 |
+
<col style="width: 79%" />
|
| 115 |
+
</colgroup>
|
| 116 |
+
<thead>
|
| 117 |
+
<tr class="header">
|
| 118 |
+
<th style="text-align: left;">Name</th>
|
| 119 |
+
<th style="text-align: left;">Type</th>
|
| 120 |
+
<th style="text-align: left;">Description</th>
|
| 121 |
+
</tr>
|
| 122 |
+
</thead>
|
| 123 |
+
<tbody>
|
| 124 |
+
<tr class="odd">
|
| 125 |
+
<td style="text-align: left;"><p>BarcodeFormat</p></td>
|
| 126 |
+
<td style="text-align: left;"><p>String</p></td>
|
| 127 |
+
<td style="text-align: left;"><p>Value of
|
| 128 |
+
com.google.zxing.BarcodeFormat.</p></td>
|
| 129 |
+
</tr>
|
| 130 |
+
</tbody>
|
| 131 |
+
</table>
|
| 132 |
+
|
| 133 |
+
If you paste a Code 39 barcode that is rotated by some degrees into
|
| 134 |
+
the *barcode\_in* folder, you can find the ORIENTATION as a header
|
| 135 |
+
variable:
|
| 136 |
+
|
| 137 |
+
<table>
|
| 138 |
+
<colgroup>
|
| 139 |
+
<col style="width: 10%" />
|
| 140 |
+
<col style="width: 10%" />
|
| 141 |
+
<col style="width: 79%" />
|
| 142 |
+
</colgroup>
|
| 143 |
+
<thead>
|
| 144 |
+
<tr class="header">
|
| 145 |
+
<th style="text-align: left;">Name</th>
|
| 146 |
+
<th style="text-align: left;">Type</th>
|
| 147 |
+
<th style="text-align: left;">Description</th>
|
| 148 |
+
</tr>
|
| 149 |
+
</thead>
|
| 150 |
+
<tbody>
|
| 151 |
+
<tr class="odd">
|
| 152 |
+
<td style="text-align: left;"><p>ORIENTATION</p></td>
|
| 153 |
+
<td style="text-align: left;"><p>Integer</p></td>
|
| 154 |
+
<td style="text-align: left;"><p>rotate value in degrees .</p></td>
|
| 155 |
+
</tr>
|
| 156 |
+
</tbody>
|
| 157 |
+
</table>
|
camel-base64-dataformat.md
ADDED
|
@@ -0,0 +1,76 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Base64-dataformat.md
|
| 2 |
+
|
| 3 |
+
**Since Camel 2.11**
|
| 4 |
+
|
| 5 |
+
The Base64 data format is used for base64 encoding and decoding.
|
| 6 |
+
|
| 7 |
+
# Options
|
| 8 |
+
|
| 9 |
+
In Spring DSL, you configure the data format using this tag:
|
| 10 |
+
|
| 11 |
+
<camelContext>
|
| 12 |
+
<dataFormats>
|
| 13 |
+
<!-- for a newline character (\n), use the HTML entity notation coupled with the ASCII code. -->
|
| 14 |
+
<base64 lineSeparator="&#10;" id="base64withNewLine" />
|
| 15 |
+
<base64 lineLength="64" id="base64withLineLength64" />
|
| 16 |
+
</dataFormats>
|
| 17 |
+
...
|
| 18 |
+
</camelContext>
|
| 19 |
+
|
| 20 |
+
Then you can use it later by its reference:
|
| 21 |
+
|
| 22 |
+
<route>
|
| 23 |
+
<from uri="direct:startEncode" />
|
| 24 |
+
<marshal ref="base64withLineLength64" />
|
| 25 |
+
<to uri="mock:result" />
|
| 26 |
+
</route>
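For reference, the chunked output that a `lineLength` such as 64 produces can be illustrated with `java.util.Base64`'s MIME encoder, independently of Camel (a sketch for illustration; camel-base64 itself performs the equivalent encoding internally):

```java
import java.util.Base64;

public class Base64ChunkDemo {
    public static void main(String[] args) {
        byte[] data = new byte[96]; // 96 bytes encode to 128 base64 characters
        // Break the encoded output into 64-character lines, as lineLength="64" would
        Base64.Encoder encoder = Base64.getMimeEncoder(64, "\n".getBytes());
        String[] lines = encoder.encodeToString(data).split("\n");
        System.out.println(lines.length);      // 2
        System.out.println(lines[0].length()); // 64
    }
}
```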
|
| 27 |
+
|
| 28 |
+
Most of the time, you won’t need to declare the data format if you use
|
| 29 |
+
the default options. In that case, you can declare the data format
|
| 30 |
+
inline as shown below.
|
| 31 |
+
|
| 32 |
+
# Marshal
|
| 33 |
+
|
| 34 |
+
In this example, we marshal the file content to a base64 object.
|
| 35 |
+
|
| 36 |
+
from("file://data.bin")
|
| 37 |
+
.marshal().base64()
|
| 38 |
+
.to("jms://myqueue");
|
| 39 |
+
|
| 40 |
+
In Spring DSL:
|
| 41 |
+
|
| 42 |
+
<from uri="file://data.bin">
|
| 43 |
+
<marshal>
|
| 44 |
+
<base64/>
|
| 45 |
+
</marshal>
|
| 46 |
+
<to uri="jms://myqueue"/>
|
| 47 |
+
|
| 48 |
+
# Unmarshal
|
| 49 |
+
|
| 50 |
+
In this example, we unmarshal the payload from the JMS queue to a
|
| 51 |
+
byte\[\] object, before it is processed by the `newOrder` processor.
|
| 52 |
+
|
| 53 |
+
from("jms://queue/order")
|
| 54 |
+
.unmarshal().base64()
|
| 55 |
+
.process("newOrder");
|
| 56 |
+
|
| 57 |
+
In Spring DSL:
|
| 58 |
+
|
| 59 |
+
<from uri="jms://queue/order">
|
| 60 |
+
<unmarshal>
|
| 61 |
+
<base64/>
|
| 62 |
+
</unmarshal>
|
| 63 |
+
<to uri="bean:newOrder"/>
|
| 64 |
+
|
| 65 |
+
# Dependencies
|
| 66 |
+
|
| 67 |
+
To use Base64 in your Camel routes, you need to add a dependency on
|
| 68 |
+
**camel-base64** which implements this data format.
|
| 69 |
+
|
| 70 |
+
If you use Maven, you can add the following to your pom.xml:
|
| 71 |
+
|
| 72 |
+
<dependency>
|
| 73 |
+
<groupId>org.apache.camel</groupId>
|
| 74 |
+
<artifactId>camel-base64</artifactId>
|
| 75 |
+
<version>x.x.x</version> <!-- use the same version as your Camel core version -->
|
| 76 |
+
</dependency>
|
camel-batchConfig-eip.md
ADDED
|
@@ -0,0 +1,5 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# BatchConfig-eip.md
|
| 2 |
+
|
| 3 |
+
Configuration for the [Resequence EIP](#resequence-eip.adoc) in batching mode.
|
| 4 |
+
|
| 5 |
+
# Exchange properties
|
camel-bean-eip.md
ADDED
|
@@ -0,0 +1,141 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Bean-eip.md
|
| 2 |
+
|
| 3 |
+
The Bean EIP is used for invoking a method on a bean, and the returned
|
| 4 |
+
value is the new message body.
|
| 5 |
+
|
| 6 |
+
The Bean EIP is similar to the [Bean](#ROOT:bean-component.adoc)
|
| 7 |
+
component, which is also used for invoking beans, but in the form of a
|
| 8 |
+
Camel component.
|
| 9 |
+
|
| 10 |
+
# URI Format
|
| 11 |
+
|
| 12 |
+
bean:beanID[?options]
|
| 13 |
+
|
| 14 |
+
Where **beanID** can be any string used to look up the bean in the
|
| 15 |
+
[Registry](#manual::registry.adoc).
|
| 16 |
+
|
| 17 |
+
# EIP options
|
| 18 |
+
|
| 19 |
+
# Exchange properties
|
| 20 |
+
|
| 21 |
+
## Bean scope
|
| 22 |
+
|
| 23 |
+
When using `singleton` scope (default) the bean is created or looked up
|
| 24 |
+
only once and reused for the lifetime of the endpoint. The bean should
|
| 25 |
+
be thread-safe in case concurrent threads are calling the bean at the
|
| 26 |
+
same time.
|
| 27 |
+
|
| 28 |
+
When using `request` scope the bean is created or looked up once per
|
| 29 |
+
request (exchange). This can be used if you want to store state on a
|
| 30 |
+
bean while processing a request, and you want to call the same bean
|
| 31 |
+
instance multiple times while processing the request. The bean does not
|
| 32 |
+
have to be thread-safe as the instance is only called from the same
|
| 33 |
+
request.
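For illustration, a hypothetical request-scoped bean that keeps per-exchange state (the class and method names are invented for this sketch, not part of Camel):

```java
// Hypothetical request-scoped bean: since Camel creates a fresh instance
// per request (exchange), this mutable state is never shared across exchanges
public class RequestCounterBean {
    private int calls; // safe without synchronization in request scope

    public int count(String body) {
        return ++calls;
    }
}
```

Calling the bean twice while processing the same exchange would return 1 and then 2; a new exchange starts again from a fresh instance.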
|
| 34 |
+
|
| 35 |
+
When using `prototype` scope, the bean will be looked up or created
|
| 36 |
+
per call. However, in the case of a lookup, this is delegated to the bean
|
| 37 |
+
registry, such as Spring or CDI (if in use), which, depending on its
|
| 38 |
+
configuration, can act as either singleton or prototype scope. In other
|
| 39 |
+
words, when using `prototype`, the behaviour depends on the delegated
|
| 40 |
+
registry (such as Spring, Quarkus or CDI).

# Example

The Bean EIP can be used directly in the routes as shown below:

Java

    // lookup bean from registry and invoke the given method by the name
    from("direct:foo").bean("myBean", "myMethod");

    // lookup bean from registry and invoke best matching method
    from("direct:bar").bean("myBean");

XML

With Spring XML you can declare the bean using `<bean>` as shown:

    <bean id="myBean" class="com.foo.ExampleBean"/>

And in XML DSL you can call this bean:

    <routes>
        <route>
            <from uri="direct:foo"/>
            <bean ref="myBean" method="myMethod"/>
        </route>
        <route>
            <from uri="direct:bar"/>
            <bean ref="myBean"/>
        </route>
    </routes>

YAML

    - from:
        uri: direct:foo
        steps:
          - bean:
              ref: myBean
              method: myMethod
    - from:
        uri: direct:bar
        steps:
          - bean:
              ref: myBean
    - beans:
        - name: myBean
          type: com.foo.ExampleBean

Instead of passing the name of a bean reference (for Camel to look up in
the registry), you can provide the bean instance itself:

Java

    // Send a message to the given bean instance.
    from("direct:foo").bean(new ExampleBean());

    // Explicit selection of the bean method to be invoked.
    from("direct:bar").bean(new ExampleBean(), "myMethod");

    // Camel will create a singleton instance of the bean, and reuse the instance for the following calls (see scope)
    from("direct:cheese").bean(ExampleBean.class);

XML

    <routes>
        <route>
            <from uri="direct:foo"/>
            <bean beanType="com.foo.ExampleBean" method="myMethod"/>
        </route>
        <route>
            <from uri="direct:bar"/>
            <bean beanType="com.foo.ExampleBean"/>
        </route>
        <route>
            <from uri="direct:cheese"/>
            <bean beanType="com.foo.ExampleBean"/>
        </route>
    </routes>

YAML

    - from:
        uri: direct:foo
        steps:
          - bean:
              beanType: com.foo.ExampleBean
              method: myMethod
    - from:
        uri: direct:bar
        steps:
          - bean:
              beanType: com.foo.ExampleBean
    - from:
        uri: direct:cheese
        steps:
          - bean:
              beanType: com.foo.ExampleBean

# Bean binding

How bean methods to be invoked are chosen (if they are not specified
explicitly through the **method** parameter) and how parameter values
are constructed from the [Message](#message.adoc) are both defined by the
[Bean Binding](#manual::bean-binding.adoc) mechanism. This is used
throughout all the various [Bean
Integration](#manual::bean-integration.adoc) mechanisms in Camel.
camel-bean-language.md
ADDED
# Bean Language

**Since Camel 1.3**

The Bean language is used for calling a method on an existing Java bean.

Camel adapts to the method being called via [Bean
Binding](#manual::bean-binding.adoc). The binding process will, for
example, automatically convert the message payload to the type of the
first parameter of the method. The binding process has many more
features, so it is recommended to read the [Bean
Binding](#manual::bean-binding.adoc) documentation for more details.

# Bean Method options

# Examples

In the route below, we call a Java bean method with `method`, where
"myBean" is the id of the bean to use (looked up from the
[Registry](#manual::registry.adoc)), and "isGoldCustomer" is the name of
the method to call.

Java

    from("activemq:topic:OrdersTopic")
        .filter().method("myBean", "isGoldCustomer")
        .to("activemq:BigSpendersQueue");

It is also possible to omit the method name. In that case, Camel chooses
the best suitable method to use. This selection process is complex, so
it is good practice to specify the method name.

XML

    <route>
        <from uri="activemq:topic:OrdersTopic"/>
        <filter>
            <method ref="myBean" method="isGoldCustomer"/>
            <to uri="activemq:BigSpendersQueue"/>
        </filter>
    </route>

The bean could be implemented as follows:

    public class MyBean {
        public boolean isGoldCustomer(Exchange exchange) {
            // ...
        }
    }

Note how this method uses `Exchange` in the method signature. You would
often not do that, and instead use non-Camel types. For example, by
using `String`, Camel will automatically convert the message body to
this type when calling the method:

    public boolean isGoldCustomer(String body) {...}

## Using Annotations for bean integration

You can also use the [Bean Integration](#manual::bean-integration.adoc)
annotations, such as `@Header`, `@Body`, `@Variable` etc.

    public boolean isGoldCustomer(@Header(name = "foo") Integer fooHeader) {...}

So you can bind parameters of the method to the `Exchange`, the
[Message](#eips:message.adoc) or individual headers, properties, the
body or other expressions.
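
The annotation-driven binding can be illustrated without Camel: the sketch below defines a hypothetical `@Header`-like annotation and uses reflection to pull parameter values from a header map. All names here are illustrative stand-ins, not Camel's actual implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
import java.util.Map;

public class AnnotationBindingSketch {

    // Illustrative stand-in for Camel's @Header annotation
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    public @interface Header {
        String value();
    }

    public static class MyBean {
        public boolean isGoldCustomer(@Header("customerLevel") String level) {
            return "gold".equals(level);
        }
    }

    // Resolve each annotated parameter from the header map, then invoke
    public static Object invoke(Object bean, String methodName, Map<String, Object> headers) throws Exception {
        for (Method m : bean.getClass().getMethods()) {
            if (!m.getName().equals(methodName)) continue;
            Parameter[] params = m.getParameters();
            Object[] args = new Object[params.length];
            for (int i = 0; i < params.length; i++) {
                Header h = params[i].getAnnotation(Header.class);
                args[i] = (h != null) ? headers.get(h.value()) : null;
            }
            return m.invoke(bean, args);
        }
        throw new NoSuchMethodException(methodName);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invoke(new MyBean(), "isGoldCustomer",
                Map.of("customerLevel", "gold"))); // true
    }
}
```

Camel resolves such annotations against the live `Exchange` (headers, properties, body) rather than a plain map, but the reflective lookup is the same idea.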

## Non-Registry Beans

The Bean method language also supports invoking beans that are not
registered in the [Registry](#manual::registry.adoc).

Camel can instantiate a bean of a given type and invoke the method, or
invoke the method on an already existing instance.

    from("activemq:topic:OrdersTopic")
        .filter().method(MyBean.class, "isGoldCustomer")
        .to("activemq:BigSpendersQueue");

The first parameter can also be an existing instance of a bean, such as:

    private MyBean my = ...;

    from("activemq:topic:OrdersTopic")
        .filter().method(my, "isGoldCustomer")
        .to("activemq:BigSpendersQueue");